superduperdb.ext.anthropic package#

Submodules#

superduperdb.ext.anthropic.model module#

class superduperdb.ext.anthropic.model.Anthropic(identifier: str, artifacts: dc.InitVar[t.Optional[t.Dict]] = None, *, datatype: EncoderArg = None, output_schema: t.Optional[Schema] = None, flatten: bool = False, preprocess: t.Optional[t.Callable] = None, postprocess: t.Optional[t.Callable] = None, collate_fn: t.Optional[t.Callable] = None, batch_predict: bool = False, takes_context: bool = False, metrics: t.Sequence[t.Union[str, Metric, None]] = (), model_update_kwargs: t.Dict = <factory>, validation_sets: t.Optional[t.Sequence[t.Union[str, Dataset]]] = None, predict_X: t.Optional[str] = None, predict_select: t.Optional[CompoundSelect] = None, predict_max_chunk_size: t.Optional[int] = None, predict_kwargs: t.Optional[t.Dict] = None, model: t.Optional[str] = None, client_kwargs: ~typing.Dict[str, ~typing.Any] = <factory>)[source]#

Bases: APIModel

Anthropic predictor.

client_kwargs: Dict[str, Any]#
class superduperdb.ext.anthropic.model.AnthropicCompletions(identifier: str, artifacts: dc.InitVar[t.Optional[t.Dict]] = None, *, datatype: EncoderArg = None, output_schema: t.Optional[Schema] = None, flatten: bool = False, preprocess: t.Optional[t.Callable] = None, postprocess: t.Optional[t.Callable] = None, collate_fn: t.Optional[t.Callable] = None, batch_predict: bool = False, takes_context: bool = False, metrics: t.Sequence[t.Union[str, Metric, None]] = (), model_update_kwargs: t.Dict = <factory>, validation_sets: t.Optional[t.Sequence[t.Union[str, Dataset]]] = None, predict_X: t.Optional[str] = None, predict_select: t.Optional[CompoundSelect] = None, predict_max_chunk_size: t.Optional[int] = None, predict_kwargs: t.Optional[t.Dict] = None, model: t.Optional[str] = None, client_kwargs: ~typing.Dict[str, ~typing.Any] = <factory>, prompt: str = '')[source]#

Bases: Anthropic

Anthropic completions (chat) predictor.

Parameters:
  • takes_context – Whether the model takes context into account.

  • prompt – The prompt to use to seed the response.

pre_create(db: Datalayer) None[source]#

Called the first time this component is created.

Parameters:

db – the Datalayer that creates the component

prompt: str = ''#
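
The ``prompt`` attribute seeds each completion request, and ``takes_context`` controls whether retrieved context is folded in. As an illustrative sketch only (``seed_prompt`` is a hypothetical helper, not part of the library), the following shows one way a fixed seed prompt and optional context might be prepended to the input text before it is sent to the completions API:

```python
from typing import List, Optional


def seed_prompt(prompt: str, X: str, context: Optional[List[str]] = None) -> str:
    """Prepend a seed prompt (and any retrieved context) to the input text.

    Hypothetical helper for illustration; the library's actual prompt
    construction may differ.
    """
    parts = [prompt] if prompt else []
    if context:
        # Context is only relevant when the model is configured with
        # takes_context=True.
        parts.extend(context)
    parts.append(X)
    return "\n".join(parts)


# Example: a question seeded with a fixed instruction and one context snippet.
print(seed_prompt(
    "Answer concisely using the context below.",
    "What is SuperDuperDB?",
    context=["SuperDuperDB integrates AI models with databases."],
))
```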

Module contents#

class superduperdb.ext.anthropic.AnthropicCompletions(identifier: str, artifacts: dc.InitVar[t.Optional[t.Dict]] = None, *, datatype: EncoderArg = None, output_schema: t.Optional[Schema] = None, flatten: bool = False, preprocess: t.Optional[t.Callable] = None, postprocess: t.Optional[t.Callable] = None, collate_fn: t.Optional[t.Callable] = None, batch_predict: bool = False, takes_context: bool = False, metrics: t.Sequence[t.Union[str, Metric, None]] = (), model_update_kwargs: t.Dict = <factory>, validation_sets: t.Optional[t.Sequence[t.Union[str, Dataset]]] = None, predict_X: t.Optional[str] = None, predict_select: t.Optional[CompoundSelect] = None, predict_max_chunk_size: t.Optional[int] = None, predict_kwargs: t.Optional[t.Dict] = None, model: t.Optional[str] = None, client_kwargs: ~typing.Dict[str, ~typing.Any] = <factory>, prompt: str = '')[source]#

Bases: Anthropic

Anthropic completions (chat) predictor.

Parameters:
  • takes_context – Whether the model takes context into account.

  • prompt – The prompt to use to seed the response.

client_kwargs: t.Dict[str, t.Any]#
identifier: str#
model_update_kwargs: t.Dict#
pre_create(db: Datalayer) None[source]#

Called the first time this component is created.

Parameters:

db – the Datalayer that creates the component

prompt: str = ''#