
Retrieval augmented generation

The first step in any SuperDuperDB application is to connect to your data-backend with SuperDuperDB:

Configure your production system


If you would like to use the production features of SuperDuperDB, then you should set the relevant connections and configurations in a configuration file. Otherwise you are welcome to use "development" mode to get going with SuperDuperDB quickly.

import os

os.makedirs('.superduperdb', exist_ok=True)
os.environ['SUPERDUPERDB_CONFIG'] = '.superduperdb/config.yaml'

# The `<...>` values below are placeholders: fill in the hosts and ports of
# your own deployment. The nesting shown here reflects the cluster settings
# referenced in this snippet (compute, CDC, vector search).
CFG = '''
data_backend: mongodb://<mongodb-host>:27017/documents
artifact_store: filesystem://./artifact_store
cluster:
  compute:
    uri: ray://<ray-host>:10001
  cdc:
    strategy: null
    uri: ray://<cdc-host>:20000
  vector_search:
    backfill_batch_size: 100
    type: in_memory
'''

with open(os.environ['SUPERDUPERDB_CONFIG'], 'w') as f:
    f.write(CFG)

Start your cluster


Starting a SuperDuperDB cluster is useful in production and during model development if you want scalable compute, shared access to models for multiple collaborating users, and monitoring.

If you don't need this, then it is simpler to start in development mode.

!python -m superduperdb local-cluster up        

Connect to SuperDuperDB


Note that this is only relevant if you are running SuperDuperDB in development mode. Otherwise refer to "Configuring your production system".

from superduperdb import superduper

db = superduper('mongodb://localhost:27017/documents')

Get useful sample data

from superduperdb.backends.ibis import dtype

!curl -O
# (download URL omitted in the original; place `text.json` in the working directory)
import json

with open('text.json', 'r') as f:
    data = json.load(f)

sample_datapoint = "What is mongodb?"

chunked_model_datatype = dtype('str')

Setup tables or collections

# Note this is an optional step for MongoDB
# Users can also work directly with `DataType` if they want to add
# custom data
from superduperdb import Schema, DataType
from superduperdb.backends.mongodb import Collection

table_or_collection = Collection('documents')

USE_SCHEMA = False  # set True to attach an explicit `Schema`
datatype = None     # e.g. a `DataType` when inserting custom/encoded data

if USE_SCHEMA and isinstance(datatype, DataType):
    schema = Schema(fields={'x': datatype})

Insert data

To insert data with custom types, we need a Schema that encodes the special DataType column(s) in the data-backend.

from superduperdb import Document, DataType

def do_insert(data, schema=None):
    if schema is None and (datatype is None or isinstance(datatype, str)):
        data = [Document({'x': x['x'], 'y': x['y']}) if isinstance(x, dict) and 'x' in x and 'y' in x else Document({'x': x}) for x in data]
    elif schema is None and datatype is not None and isinstance(datatype, DataType):
        data = [Document({'x': datatype(x['x']), 'y': x['y']}) if isinstance(x, dict) and 'x' in x and 'y' in x else Document({'x': datatype(x)}) for x in data]
    else:
        data = [Document({'x': x['x'], 'y': x['y']}) if isinstance(x, dict) and 'x' in x and 'y' in x else Document({'x': x}) for x in data]
    db.execute(table_or_collection.insert_many(data, schema=schema))

do_insert(data[:-len(data) // 4])
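The slice in `do_insert(data[:-len(data) // 4])` inserts everything except roughly the last quarter of the records. The arithmetic is easy to check in isolation on a stand-in list:

```python
# Stand-in for the dataset: 8 records
data = list(range(8))

# len(data) // 4 == 2, so the last 2 items are held back
train = data[:-len(data) // 4]

print(train)  # → [0, 1, 2, 3, 4, 5]
```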

Build simple select queries

select = table_or_collection.find({})

Create Model Output Type

chunked_model_datatype = None  # plain text chunks need no special datatype

Note that applying a chunker is not mandatory for search. If your data is already chunked (e.g. short text snippets or audio) or if you are searching through something like images, which can't be chunked, then this won't be necessary.

from superduperdb import objectmodel

CHUNK_SIZE = 200  # words per chunk; tune to your data


@objectmodel(flatten=True, model_update_kwargs={'document_embedded': False}, datatype=chunked_model_datatype)
def chunker(text):
    text = text.split()
    chunks = [' '.join(text[i:i + CHUNK_SIZE]) for i in range(0, len(text), CHUNK_SIZE)]
    return chunks
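The chunking itself is plain Python, so the logic is easy to sanity-check outside SuperDuperDB (a toy chunk size of 3 words here, purely for illustration):

```python
CHUNK_SIZE = 3  # illustrative only; the listener above uses a larger value

def chunk_words(text):
    # Split into words, then re-join in fixed-size windows
    words = text.split()
    return [' '.join(words[i:i + CHUNK_SIZE]) for i in range(0, len(words), CHUNK_SIZE)]

print(chunk_words("one two three four five six seven"))
# → ['one two three', 'four five six', 'seven']
```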

Now we apply this chunker to the data by wrapping the chunker in Listener:

from superduperdb import Listener

upstream_listener = Listener(
    model=chunker,
    select=select,
    key='x',  # the field that `chunker` reads, matching the inserted documents
)

db.add(upstream_listener)


Select outputs of upstream listener


This is useful if you have performed a first step, such as pre-computing features, or chunking your data. You can use this query to operate on those outputs.

from superduperdb.backends.mongodb import Collection

indexing_key = upstream_listener.outputs
select = Collection(upstream_listener.outputs).find()

Build text embedding model

!pip install openai
import os

# os.environ['OPENAI_API_KEY'] = '<your-api-key>'  # required by the OpenAI client
from superduperdb.ext.openai import OpenAIEmbedding

model = OpenAIEmbedding(identifier='text-embedding-ada-002')
print(len(model.predict_one("What is SuperDuperDB")))

Create vector-index

vector_index_name = 'my-vector-index'

from superduperdb import VectorIndex, Listener

jobs, _ = db.add(
    VectorIndex(
        identifier=vector_index_name,
        indexing_listener=Listener(
            key=indexing_key,  # the `Document` key `model` should ingest to create embeddings
            select=select,  # a `Select` query telling which data to search over
            model=model,  # a `_Predictor` for converting data to embeddings
        ),
    )
)

query_table_or_collection = select.table_or_collection
sample_datapoint = data[0]
query = "Tell me about SuperDuperDB"
from superduperdb import Document

def get_sample_item(key, sample_datapoint, datatype=None):
    if not isinstance(datatype, DataType):
        item = Document({key: sample_datapoint})
    else:
        item = Document({key: datatype(sample_datapoint)})
    return item


compatible_key = None  # set this if the index was built with a compatible listener
if compatible_key:
    item = get_sample_item(compatible_key, sample_datapoint, None)
else:
    item = get_sample_item(indexing_key, sample_datapoint, datatype=datatype)

Once we have this search target, we can execute a search as follows:

select = query_table_or_collection.like(item, vector_index=vector_index_name, n=10).find()
results = db.execute(select)

Create Vector Search Model

from superduperdb.base.serializable import Variable
from superduperdb.components.model import QueryModel

item = {indexing_key: Variable('query')}

vector_search_model = QueryModel(
    select=query_table_or_collection.like(item, vector_index=vector_index_name, n=5).find(),
    postprocess=lambda docs: [{"text": doc[indexing_key], "_source": doc["_source"]} for doc in docs],
)
vector_search_model.db = db
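The `postprocess` callable only reshapes the retrieved documents, keeping the text and its source reference. With stand-in dicts (field names here mirror the ones above) its effect looks like this:

```python
indexing_key = 'text'  # stand-in for the real indexing key

postprocess = lambda docs: [
    {"text": doc[indexing_key], "_source": doc["_source"]} for doc in docs
]

# Fake retrieved documents; extra fields such as `score` are dropped
docs = [
    {"text": "chunk one", "_source": "id-1", "score": 0.9},
    {"text": "chunk two", "_source": "id-2", "score": 0.8},
]
print(postprocess(docs))
# → [{'text': 'chunk one', '_source': 'id-1'}, {'text': 'chunk two', '_source': 'id-2'}]
```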

Build LLM

!pip install openai
from superduperdb.ext.openai import OpenAIChatCompletion

llm = OpenAIChatCompletion(identifier='llm', model='gpt-3.5-turbo')

# test the llm model
print(llm.predict_one('Tell me about SuperDuperDB'))

Answer question with LLM
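The body of this final step is cut off here, but the pattern is: retrieve context snippets with `vector_search_model`, assemble them into a prompt, and pass that prompt to `llm`. A minimal, hypothetical sketch of the prompt assembly (the function and template below are illustrative, not SuperDuperDB API):

```python
def build_prompt(query, snippets):
    # Join retrieved chunks into a context block for the LLM
    context = '\n'.join(f'- {s}' for s in snippets)
    return (
        "Use the following context snippets to answer the question.\n"
        f"{context}\n"
        f"Question: {query}\n"
        "Answer:"
    )

# Stand-in snippets; in practice these come from `vector_search_model`
prompt = build_prompt(
    "Tell me about SuperDuperDB",
    ["SuperDuperDB integrates AI models with databases.", "It supports MongoDB."],
)
print(prompt)
```

The resulting string would then be passed to the chat model, e.g. `llm.predict_one(prompt)`.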