
model

superduperdb.ext.openai.model


OpenAIChatCompletion

OpenAIChatCompletion(self,
identifier: str,
db: dataclasses.InitVar[typing.Optional[ForwardRef('Datalayer')]] = None,
uuid: str = <factory>,
*,
artifacts: 'dc.InitVar[t.Optional[t.Dict]]' = None,
signature: str = 'singleton',
datatype: 'EncoderArg' = None,
output_schema: 't.Optional[Schema]' = None,
flatten: 'bool' = False,
model_update_kwargs: 't.Dict' = <factory>,
predict_kwargs: 't.Dict' = <factory>,
compute_kwargs: 't.Dict' = <factory>,
validation: 't.Optional[Validation]' = None,
metric_values: 't.Dict' = <factory>,
model: 't.Optional[str]' = None,
max_batch_size: 'int' = 8,
openai_api_key: Optional[str] = None,
openai_api_base: Optional[str] = None,
client_kwargs: Optional[dict] = <factory>,
batch_size: int = 1,
prompt: str = '') -> None
| Parameter | Description |
| --- | --- |
| `identifier` | Identifier of the leaf. |
| `db` | Datalayer instance. |
| `uuid` | UUID of the leaf. |
| `artifacts` | A dictionary of artifact paths and `DataType` objects. |
| `signature` | Model signature. |
| `datatype` | `DataType` instance. |
| `output_schema` | Output schema (mapping of encoders). |
| `flatten` | Flatten the model outputs. |
| `model_update_kwargs` | The kwargs to use for model update. |
| `predict_kwargs` | Additional arguments to use at prediction time. |
| `compute_kwargs` | Kwargs used for compute backend job submission. Example (Ray backend): `compute_kwargs = dict(resources=...)`. |
| `validation` | The validation `Dataset` instances to use. |
| `metric_values` | The metrics to evaluate on. |
| `model` | The model to use, e.g. `'gpt-3.5-turbo'`. |
| `max_batch_size` | Maximum batch size. |
| `openai_api_key` | The OpenAI API key. |
| `openai_api_base` | The server (base URL) to use for requests. |
| `client_kwargs` | The kwargs to be passed to the OpenAI client. |
| `batch_size` | The batch size to use. |
| `prompt` | The prompt to use to seed the response. |

OpenAI chat completion predictor.
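A minimal usage sketch (not taken verbatim from the library's own docs): it assumes the `Model` base class exposes a `predict_one` entry point for `signature='singleton'` models, and the identifier, model name, and prompt below are illustrative placeholders.

```python
from superduperdb.ext.openai.model import OpenAIChatCompletion

# Reads OPENAI_API_KEY from the environment unless openai_api_key is passed explicitly.
chat = OpenAIChatCompletion(
    identifier='my-chat',                                   # arbitrary component identifier
    model='gpt-3.5-turbo',                                  # any OpenAI chat model
    prompt='Answer the following question concisely:\n',    # illustrative seed prompt
)

# `predict_one` is assumed to be the single-input prediction entry point.
answer = chat.predict_one('What is a vector database?')
print(answer)
```

The same component can also be registered with a `Datalayer` so that listeners and vector indexes can call it; see the main superduperdb documentation for the registration API.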

OpenAIEmbedding

OpenAIEmbedding(self,
identifier: str,
db: dataclasses.InitVar[typing.Optional[ForwardRef('Datalayer')]] = None,
uuid: str = <factory>,
*,
artifacts: 'dc.InitVar[t.Optional[t.Dict]]' = None,
signature: str = 'singleton',
datatype: 'EncoderArg' = None,
output_schema: 't.Optional[Schema]' = None,
flatten: 'bool' = False,
model_update_kwargs: 't.Dict' = <factory>,
predict_kwargs: 't.Dict' = <factory>,
compute_kwargs: 't.Dict' = <factory>,
validation: 't.Optional[Validation]' = None,
metric_values: 't.Dict' = <factory>,
model: 't.Optional[str]' = None,
max_batch_size: 'int' = 8,
openai_api_key: Optional[str] = None,
openai_api_base: Optional[str] = None,
client_kwargs: Optional[dict] = <factory>,
shape: Optional[Sequence[int]] = None,
batch_size: int = 100) -> None
| Parameter | Description |
| --- | --- |
| `identifier` | Identifier of the leaf. |
| `db` | Datalayer instance. |
| `uuid` | UUID of the leaf. |
| `artifacts` | A dictionary of artifact paths and `DataType` objects. |
| `signature` | Model signature. |
| `datatype` | `DataType` instance. |
| `output_schema` | Output schema (mapping of encoders). |
| `flatten` | Flatten the model outputs. |
| `model_update_kwargs` | The kwargs to use for model update. |
| `predict_kwargs` | Additional arguments to use at prediction time. |
| `compute_kwargs` | Kwargs used for compute backend job submission. Example (Ray backend): `compute_kwargs = dict(resources=...)`. |
| `validation` | The validation `Dataset` instances to use. |
| `metric_values` | The metrics to evaluate on. |
| `model` | The model to use, e.g. `'text-embedding-ada-002'`. |
| `max_batch_size` | Maximum batch size. |
| `openai_api_key` | The OpenAI API key. |
| `openai_api_base` | The server (base URL) to use for requests. |
| `client_kwargs` | The kwargs to be passed to the OpenAI client. |
| `shape` | The shape of the embedding, as a tuple. |
| `batch_size` | The batch size to use. |

OpenAI embedding predictor.
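A corresponding sketch for embeddings, under the same assumption that `predict_one` is the single-input entry point; the `(1536,)` shape matches the output dimensionality of `'text-embedding-ada-002'`.

```python
from superduperdb.ext.openai.model import OpenAIEmbedding

embed = OpenAIEmbedding(
    identifier='my-embedding',
    model='text-embedding-ada-002',
    shape=(1536,),   # dimensionality of text-embedding-ada-002 vectors
)

# One embedding vector per input text; batch_size (default 100) controls
# chunking when predicting over many inputs.
vector = embed.predict_one('A sentence to embed.')
```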

OpenAIAudioTranscription

OpenAIAudioTranscription(self,
identifier: str,
db: dataclasses.InitVar[typing.Optional[ForwardRef('Datalayer')]] = None,
uuid: str = <factory>,
*,
artifacts: 'dc.InitVar[t.Optional[t.Dict]]' = None,
signature: 'Signature' = '*args,**kwargs',
datatype: 'EncoderArg' = None,
output_schema: 't.Optional[Schema]' = None,
flatten: 'bool' = False,
model_update_kwargs: 't.Dict' = <factory>,
predict_kwargs: 't.Dict' = <factory>,
compute_kwargs: 't.Dict' = <factory>,
validation: 't.Optional[Validation]' = None,
metric_values: 't.Dict' = <factory>,
model: 't.Optional[str]' = None,
max_batch_size: 'int' = 8,
openai_api_key: Optional[str] = None,
openai_api_base: Optional[str] = None,
client_kwargs: Optional[dict] = <factory>,
takes_context: bool = True,
prompt: str = '') -> None
| Parameter | Description |
| --- | --- |
| `identifier` | Identifier of the leaf. |
| `db` | Datalayer instance. |
| `uuid` | UUID of the leaf. |
| `artifacts` | A dictionary of artifact paths and `DataType` objects. |
| `signature` | Model signature. |
| `datatype` | `DataType` instance. |
| `output_schema` | Output schema (mapping of encoders). |
| `flatten` | Flatten the model outputs. |
| `model_update_kwargs` | The kwargs to use for model update. |
| `predict_kwargs` | Additional arguments to use at prediction time. |
| `compute_kwargs` | Kwargs used for compute backend job submission. Example (Ray backend): `compute_kwargs = dict(resources=...)`. |
| `validation` | The validation `Dataset` instances to use. |
| `metric_values` | The metrics to evaluate on. |
| `model` | The model to use, e.g. `'whisper-1'`. |
| `max_batch_size` | Maximum batch size. |
| `openai_api_key` | The OpenAI API key. |
| `openai_api_base` | The server (base URL) to use for requests. |
| `client_kwargs` | The kwargs to be passed to the OpenAI client. |
| `takes_context` | Whether the model takes context into account. |
| `prompt` | The prompt to guide the model's style. |

OpenAI audio transcription predictor.

The prompt should contain the "context" format variable.
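A configuration sketch rather than a verbatim recipe: `'whisper-1'` is OpenAI's transcription model, the prompt and file name are placeholders, and the prediction call shape (a file-like object plus a `context` keyword, suggested by `takes_context=True`) is an assumption.

```python
from superduperdb.ext.openai.model import OpenAIAudioTranscription

transcriber = OpenAIAudioTranscription(
    identifier='my-transcription',
    model='whisper-1',
    # The prompt must contain the "context" format variable (see the note above).
    prompt='The audio may mention the following terms: {context}',
)

with open('meeting.mp3', 'rb') as f:                                # placeholder audio file
    text = transcriber.predict_one(f, context=['superduperdb'])     # call shape assumed
```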

OpenAIAudioTranslation

OpenAIAudioTranslation(self,
identifier: str,
db: dataclasses.InitVar[typing.Optional[ForwardRef('Datalayer')]] = None,
uuid: str = <factory>,
*,
artifacts: 'dc.InitVar[t.Optional[t.Dict]]' = None,
signature: str = 'singleton',
datatype: 'EncoderArg' = None,
output_schema: 't.Optional[Schema]' = None,
flatten: 'bool' = False,
model_update_kwargs: 't.Dict' = <factory>,
predict_kwargs: 't.Dict' = <factory>,
compute_kwargs: 't.Dict' = <factory>,
validation: 't.Optional[Validation]' = None,
metric_values: 't.Dict' = <factory>,
model: 't.Optional[str]' = None,
max_batch_size: 'int' = 8,
openai_api_key: Optional[str] = None,
openai_api_base: Optional[str] = None,
client_kwargs: Optional[dict] = <factory>,
takes_context: bool = True,
prompt: str = '',
batch_size: int = 1) -> None
| Parameter | Description |
| --- | --- |
| `identifier` | Identifier of the leaf. |
| `db` | Datalayer instance. |
| `uuid` | UUID of the leaf. |
| `artifacts` | A dictionary of artifact paths and `DataType` objects. |
| `signature` | Model signature. |
| `datatype` | `DataType` instance. |
| `output_schema` | Output schema (mapping of encoders). |
| `flatten` | Flatten the model outputs. |
| `model_update_kwargs` | The kwargs to use for model update. |
| `predict_kwargs` | Additional arguments to use at prediction time. |
| `compute_kwargs` | Kwargs used for compute backend job submission. Example (Ray backend): `compute_kwargs = dict(resources=...)`. |
| `validation` | The validation `Dataset` instances to use. |
| `metric_values` | The metrics to evaluate on. |
| `model` | The model to use, e.g. `'whisper-1'`. |
| `max_batch_size` | Maximum batch size. |
| `openai_api_key` | The OpenAI API key. |
| `openai_api_base` | The server (base URL) to use for requests. |
| `client_kwargs` | The kwargs to be passed to the OpenAI client. |
| `takes_context` | Whether the model takes context into account. |
| `prompt` | The prompt to guide the model's style. |
| `batch_size` | The batch size to use. |

OpenAI audio translation predictor.

The prompt should contain the "context" format variable.
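The translation class mirrors transcription but returns English text; the same caveats apply (placeholder file name, assumed call shape).

```python
from superduperdb.ext.openai.model import OpenAIAudioTranslation

translator = OpenAIAudioTranslation(
    identifier='my-translation',
    model='whisper-1',
    # Again, the prompt must contain the "context" format variable.
    prompt='Translate the audio into English. Relevant terms: {context}',
)

with open('interview_fr.mp3', 'rb') as f:                              # placeholder audio file
    english = translator.predict_one(f, context=['superduperdb'])      # call shape assumed
```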

OpenAIImageCreation

OpenAIImageCreation(self,
identifier: str,
db: dataclasses.InitVar[typing.Optional[ForwardRef('Datalayer')]] = None,
uuid: str = <factory>,
*,
artifacts: 'dc.InitVar[t.Optional[t.Dict]]' = None,
signature: str = 'singleton',
datatype: 'EncoderArg' = None,
output_schema: 't.Optional[Schema]' = None,
flatten: 'bool' = False,
model_update_kwargs: 't.Dict' = <factory>,
predict_kwargs: 't.Dict' = <factory>,
compute_kwargs: 't.Dict' = <factory>,
validation: 't.Optional[Validation]' = None,
metric_values: 't.Dict' = <factory>,
model: 't.Optional[str]' = None,
max_batch_size: 'int' = 8,
openai_api_key: Optional[str] = None,
openai_api_base: Optional[str] = None,
client_kwargs: Optional[dict] = <factory>,
takes_context: bool = True,
prompt: str = '',
n: int = 1,
response_format: str = 'b64_json') -> None
| Parameter | Description |
| --- | --- |
| `identifier` | Identifier of the leaf. |
| `db` | Datalayer instance. |
| `uuid` | UUID of the leaf. |
| `artifacts` | A dictionary of artifact paths and `DataType` objects. |
| `signature` | Model signature. |
| `datatype` | `DataType` instance. |
| `output_schema` | Output schema (mapping of encoders). |
| `flatten` | Flatten the model outputs. |
| `model_update_kwargs` | The kwargs to use for model update. |
| `predict_kwargs` | Additional arguments to use at prediction time. |
| `compute_kwargs` | Kwargs used for compute backend job submission. Example (Ray backend): `compute_kwargs = dict(resources=...)`. |
| `validation` | The validation `Dataset` instances to use. |
| `metric_values` | The metrics to evaluate on. |
| `model` | The model to use, e.g. `'dall-e-2'`. |
| `max_batch_size` | Maximum batch size. |
| `openai_api_key` | The OpenAI API key. |
| `openai_api_base` | The server (base URL) to use for requests. |
| `client_kwargs` | The kwargs to be passed to the OpenAI client. |
| `takes_context` | Whether the model takes context into account. |
| `prompt` | The prompt to use to seed the response. |
| `n` | The number of images to generate. |
| `response_format` | The response format to use. |

OpenAI image creation predictor.
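A sketch of image creation, assuming the text passed at prediction time is the generation prompt (consistent with `signature='singleton'`); `'dall-e-2'` is an assumed model choice and `predict_one` the assumed entry point.

```python
from superduperdb.ext.openai.model import OpenAIImageCreation

image_creator = OpenAIImageCreation(
    identifier='my-image-creation',
    model='dall-e-2',
    n=1,
    response_format='b64_json',   # images are returned base64-encoded
)

# The returned value is the generated image payload; how it is decoded depends
# on the configured `datatype`.
image = image_creator.predict_one('A lighthouse at dawn, watercolor style')
```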

OpenAIImageEdit

OpenAIImageEdit(self,
identifier: str,
db: dataclasses.InitVar[typing.Optional[ForwardRef('Datalayer')]] = None,
uuid: str = <factory>,
*,
artifacts: 'dc.InitVar[t.Optional[t.Dict]]' = None,
signature: 'Signature' = '*args,**kwargs',
datatype: 'EncoderArg' = None,
output_schema: 't.Optional[Schema]' = None,
flatten: 'bool' = False,
model_update_kwargs: 't.Dict' = <factory>,
predict_kwargs: 't.Dict' = <factory>,
compute_kwargs: 't.Dict' = <factory>,
validation: 't.Optional[Validation]' = None,
metric_values: 't.Dict' = <factory>,
model: 't.Optional[str]' = None,
max_batch_size: 'int' = 8,
openai_api_key: Optional[str] = None,
openai_api_base: Optional[str] = None,
client_kwargs: Optional[dict] = <factory>,
takes_context: bool = True,
prompt: str = '',
response_format: str = 'b64_json',
n: int = 1) -> None
| Parameter | Description |
| --- | --- |
| `identifier` | Identifier of the leaf. |
| `db` | Datalayer instance. |
| `uuid` | UUID of the leaf. |
| `artifacts` | A dictionary of artifact paths and `DataType` objects. |
| `signature` | Model signature. |
| `datatype` | `DataType` instance. |
| `output_schema` | Output schema (mapping of encoders). |
| `flatten` | Flatten the model outputs. |
| `model_update_kwargs` | The kwargs to use for model update. |
| `predict_kwargs` | Additional arguments to use at prediction time. |
| `compute_kwargs` | Kwargs used for compute backend job submission. Example (Ray backend): `compute_kwargs = dict(resources=...)`. |
| `validation` | The validation `Dataset` instances to use. |
| `metric_values` | The metrics to evaluate on. |
| `model` | The model to use, e.g. `'dall-e-2'`. |
| `max_batch_size` | Maximum batch size. |
| `openai_api_key` | The OpenAI API key. |
| `openai_api_base` | The server (base URL) to use for requests. |
| `client_kwargs` | The kwargs to be passed to the OpenAI client. |
| `takes_context` | Whether the model takes context into account. |
| `prompt` | The prompt to use to seed the response. |
| `response_format` | The response format to use. |
| `n` | The number of images to generate. |

OpenAI image edit predictor.
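Finally, an image-edit sketch under the same caveats: because the signature is `'*args,**kwargs'`, the way the input image is passed (a file-like object here) is an assumption, and the file name and prompt are placeholders.

```python
from superduperdb.ext.openai.model import OpenAIImageEdit

image_editor = OpenAIImageEdit(
    identifier='my-image-edit',
    model='dall-e-2',
    prompt='Add a hot-air balloon to the sky',   # placeholder edit instruction
    response_format='b64_json',
)

with open('scene.png', 'rb') as f:               # placeholder input image
    edited = image_editor.predict_one(f)         # call shape assumed
```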