Retrieval augmented generation

The first step in any SuperDuperDB application is to connect to your data-backend with SuperDuperDB:

Configure your production system​

note

If you would like to use the production features of SuperDuperDB, then you should set the relevant connections and configurations in a configuration file. Otherwise you are welcome to use "development" mode to get going with SuperDuperDB quickly.

import os

os.makedirs('.superduperdb', exist_ok=True)
os.environ['SUPERDUPERDB_CONFIG'] = '.superduperdb/config.yaml'
CFG = '''
data_backend: mongodb://127.0.0.1:27017/documents
artifact_store: filesystem://./artifact_store
cluster:
  cdc:
    strategy: null
    uri: ray://127.0.0.1:20000
  compute:
    uri: ray://127.0.0.1:10001
  vector_search:
    backfill_batch_size: 100
    type: in_memory
    uri: http://127.0.0.1:21000
'''
with open(os.environ['SUPERDUPERDB_CONFIG'], 'w') as f:
    f.write(CFG)

Start your cluster​

note

Starting a SuperDuperDB cluster is useful in production and in model development when you want scalable compute, multi-user access to models for collaboration, and monitoring.

If you don't need this, then it is simpler to start in development mode.

!python -m superduperdb local-cluster up

Connect to SuperDuperDB​

note

Note that this is only relevant if you are running SuperDuperDB in development mode. Otherwise, refer to "Configure your production system" above.

from superduperdb import superduper

db = superduper('mongodb://localhost:27017/documents')
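Once connected, it can be useful to sanity-check the connection by listing the components registered so far. A quick check, assuming a running MongoDB at the URI above; on a fresh database this returns an empty list:

db.show('model')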

Get useful sample data​

from superduperdb import dtype

!curl -O https://superduperdb-public-demo.s3.amazonaws.com/text.json
import json

with open('text.json', 'r') as f:
    data = json.load(f)
sample_datapoint = "What is mongodb?"

chunked_model_datatype = dtype('str')
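To see what you just downloaded, peek at the first datapoint. This assumes text.json decodes to a list of strings, as the insert code below expects:

print(len(data))
print(data[0][:100])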

Setup tables or collections​

# Note: this is an optional step for MongoDB.
# Users can also work directly with `DataType` if they want to add
# custom data.
from superduperdb import Schema, DataType
from superduperdb.backends.mongodb import Collection

table_or_collection = Collection('documents')

datatype = None  # plain text needs no custom `DataType`
USE_SCHEMA = False

if USE_SCHEMA and isinstance(datatype, DataType):
    schema = Schema(fields={'x': datatype})
    db.apply(schema)

Insert data​

In order to insert data, we may need a Schema for encoding any special DataType column(s) in the data-backend; for plain text, no schema is required.

from superduperdb import Document

def do_insert(data):
    schema = None

    if schema is None and (datatype is None or isinstance(datatype, str)):
        data = [Document({'x': x}) for x in data]
        db.execute(table_or_collection.insert_many(data))
    elif schema is None and datatype is not None and isinstance(datatype, DataType):
        data = [Document({'x': datatype(x)}) for x in data]
        db.execute(table_or_collection.insert_many(data))
    else:
        data = [Document({'x': x}) for x in data]
        db.execute(table_or_collection.insert_many(data, schema='my_schema'))

# Insert the first three quarters of the data; the rest is kept for querying later
do_insert(data[:-len(data) // 4])

Build simple select queries​


select = table_or_collection.find({})
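You can execute this query straight away to confirm the insert worked; db.execute returns an iterable cursor of Documents (a quick check using only the objects defined above):

results = db.execute(select)
print(next(iter(results)))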

Create Model Output Type​

The chunker's output datatype was already set up above (chunked_model_datatype = dtype('str')), so nothing further is needed here.
note

Note that applying a chunker is not mandatory for search. If your data is already chunked (e.g. short text snippets or audio) or if you are searching through something like images, which can't be chunked, then this won't be necessary.

from superduperdb import objectmodel

CHUNK_SIZE = 200

@objectmodel(flatten=True, model_update_kwargs={'document_embedded': False}, datatype=chunked_model_datatype)
def chunker(text):
    # Split into whitespace-separated words and regroup into
    # CHUNK_SIZE-word chunks
    text = text.split()
    chunks = [' '.join(text[i:i + CHUNK_SIZE]) for i in range(0, len(text), CHUNK_SIZE)]
    return chunks
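Before wiring the chunker into a Listener, you can sanity-check it directly; predict_one is the same single-datapoint API used elsewhere in this notebook, and data[0] is the sample data loaded above:

chunks = chunker.predict_one(data[0])
print(len(chunks), chunks[0][:80])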

Now we apply this chunker to the data by wrapping it in a Listener:

from superduperdb import Listener

upstream_listener = Listener(
    model=chunker,
    select=select,
    key='x',
)

db.apply(upstream_listener)

Select outputs of upstream listener​

note

This is useful if you have performed a first step, such as pre-computing features, or chunking your data. You can use this query to operate on those outputs.

from superduperdb.backends.mongodb import Collection

indexing_key = upstream_listener.outputs
select = Collection(upstream_listener.outputs).find()

Build text embedding model​

!pip install openai
from superduperdb.ext.openai import OpenAIEmbedding

# Requires OPENAI_API_KEY to be set in your environment
model = OpenAIEmbedding(identifier='text-embedding-ada-002')
print(len(model.predict_one("What is SuperDuperDB")))  # embedding dimensionality

Create vector-index​

vector_index_name = 'my-vector-index'
from superduperdb import VectorIndex, Listener

jobs, _ = db.add(
    VectorIndex(
        vector_index_name,
        indexing_listener=Listener(
            key=indexing_key,  # the `Document` key the model should ingest to create embeddings
            select=select,     # a `Select` query telling which data to search over
            model=model,       # a `_Predictor` converting data to embeddings
        )
    )
)

query_table_or_collection = select.table_or_collection
sample_datapoint = data[0]
query = "Tell me about the SuperDuperDb"
from superduperdb import Document

def get_sample_item(key, sample_datapoint, datatype=None):
if not isinstance(datatype, DataType):
item = Document({key: sample_datapoint})
else:
item = Document({key: datatype(sample_datapoint)})

return item

if compatible_key:
item = get_sample_item(compatible_key, sample_datapoint, None)
else:
item = get_sample_item(indexing_key, sample_datapoint, datatype=datatype)

Once we have this search target, we can execute a search as follows:

select = query_table_or_collection.like(item, vector_index=vector_index_name, n=10).find()
results = db.execute(select)
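Each result is a Document from the chunked-outputs collection; the retrieved text lives under the indexing key, which is the same access pattern the postprocess function below relies on:

for r in results:
    print(r[indexing_key][:80])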

Create Vector Search Model​

from superduperdb.base.serializable import Variable
from superduperdb.components.model import QueryModel

# Parametrize the search term as a variable named `query`
item = {indexing_key: Variable('query')}

# Re-build the `like` query around the variable so the model can be
# re-executed with any query string
select = query_table_or_collection.like(item, vector_index=vector_index_name, n=10).find()

vector_search_model = QueryModel(
    identifier="VectorSearch",
    select=select,
    postprocess=lambda docs: [{"text": doc[indexing_key], "_source": doc["_source"]} for doc in docs]
)
vector_search_model.db = db

vector_search_model.predict_one(query=query)

Build LLM​

!pip install openai
from superduperdb.ext.openai import OpenAIChatCompletion

llm = OpenAIChatCompletion(identifier='llm', model='gpt-3.5-turbo')
# test the llm model
llm.predict_one("Hello")

Answer question with LLM​


llm.predict_one(query)
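
The call above answers from the model's own knowledge. To ground the answer in the retrieved chunks, combine the vector-search model with the LLM. This is a minimal sketch using only the components defined above; the prompt template is illustrative, not part of the SuperDuperDB API:

# Retrieve relevant chunks, then condition the LLM on them
results = vector_search_model.predict_one(query=query)
context = '\n'.join(r['text'] for r in results)

prompt = (
    'Use the following context to answer the question.\n\n'
    f'Context:\n{context}\n\n'
    f'Question: {query}'
)
print(llm.predict_one(prompt))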