
JavaScript API

@gel/ai is a wrapper around the AI extension in Gel.

$ npm install @gel/ai
$ yarn add @gel/ai
$ pnpm add @gel/ai
$ bun add @gel/ai

The AI package is built on top of the regular Gel client objects.

Example:

import { createClient } from "gel";
import { createRAGClient } from "@gel/ai";


const client = createClient();

const gpt4Ai = createRAGClient(client, {
  model: "gpt-4-turbo-preview",
});

const astronomyAi = gpt4Ai.withContext({
  query: "Astronomy"
});

console.log(
  await astronomyAi.queryRag("What color is the sky on Mars?")
);

function

createRAGClient()
createRAGClient(client: Client, options: Partial<RAGOptions> = {}): RAGClient

Creates an instance of RAGClient with the specified client and options.

Arguments
  • client – A Gel client instance.

  • options.model (string) – Required. Specifies the AI model to use. This could be a version of GPT or any other model supported by Gel AI.

  • options.prompt – Optional. Defines the input prompt for the AI model. The prompt can be a simple string, an ID referencing a stored prompt, or a custom prompt structure that includes roles and content for more complex interactions. The default is the built-in system prompt.
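
A sketch of passing a custom prompt at construction time. The `{ custom: [...] }` shape with role/content messages is shown as an assumption about the prompt option's structured form; the model name is illustrative, and a running Gel instance with the AI extension is assumed.

```typescript
import { createClient } from "gel";
import { createRAGClient } from "@gel/ai";

const client = createClient();

// Override the built-in system prompt with a custom structured prompt.
// The { custom: [{ role, content }] } shape is an illustrative assumption.
const tutorAi = createRAGClient(client, {
  model: "gpt-4-turbo-preview",
  prompt: {
    custom: [{ role: "system", content: "Answer in one short sentence." }],
  },
});
```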

class

RAGClient

Instances of RAGClient offer methods for client configuration and utilizing RAG.

ivar client

An instance of the Gel client.

method

withConfig()
withConfig(options: Partial<RAGOptions>): RAGClient

Returns a new RAGClient instance with updated configuration options.

Arguments
  • options.model (string) – Required. Specifies the AI model to use. This could be a version of GPT or any other model supported by Gel AI.

  • options.prompt – Optional. Defines the input prompt for the AI model. The prompt can be a simple string, an ID referencing a stored prompt, or a custom prompt structure that includes roles and content for more complex interactions. The default is the built-in system prompt.
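
Since withConfig() returns a new RAGClient rather than mutating the original, several model configurations can coexist on the same underlying client. A minimal sketch, assuming a running Gel instance; the model names are illustrative:

```typescript
import { createClient } from "gel";
import { createRAGClient } from "@gel/ai";

const base = createRAGClient(createClient(), {
  model: "gpt-4-turbo-preview",
});

// Derive a second client with a different model; `base` is unchanged.
const fast = base.withConfig({ model: "gpt-3.5-turbo" });
```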

method

withContext()
withContext(context: Partial<QueryContext>): RAGClient

Returns a new RAGClient instance with an updated query context.

Arguments
  • context.query (string) – Required. Specifies an expression to determine the relevant objects and index to serve as context for text generation. You may set this to any expression that produces a set of objects, even if it is not a standalone query.

  • context.variables (string) – Optional. Values for any parameters used by the context query.

  • context.globals (string) – Optional. Values for any globals used by the context query.

  • context.max_object_count (number) – Optional. A maximum number of objects to return from the context query.
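
A sketch combining the context options above. The `Knowledge` type and its `category` property are hypothetical, and a running Gel instance with an AI index on that type is assumed:

```typescript
import { createClient } from "gel";
import { createRAGClient } from "@gel/ai";

const ai = createRAGClient(createClient(), {
  model: "gpt-4-turbo-preview",
});

// Narrow the retrieval context to a filtered expression and cap how many
// objects are pulled in as context for generation.
const planetsAi = ai.withContext({
  query: "select Knowledge filter .category = 'planets'",
  max_object_count: 5,
});
```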

method

async queryRag()
async queryRag(message: string, context: QueryContext = this.context): Promise<string>

Sends a query with context to the configured AI model and returns the response as a string.

Arguments
  • message (string) – Required. The message to be sent to the text generation provider's API.

  • context.query (string) – Required. Specifies an expression to determine the relevant objects and index to serve as context for text generation. You may set this to any expression that produces a set of objects, even if it is not a standalone query.

  • context.variables (string) – Optional. Values for any parameters used by the context query.

  • context.globals (string) – Optional. Values for any globals used by the context query.

  • context.max_object_count (number) – Optional. A maximum number of objects to return from the context query.
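
Because queryRag() accepts an optional context argument that defaults to the client's stored context, the context can also be overridden per call. A sketch only, assuming a running Gel instance with the AI extension configured:

```typescript
import { createClient } from "gel";
import { createRAGClient } from "@gel/ai";

async function main() {
  const ai = createRAGClient(createClient(), {
    model: "gpt-4-turbo-preview",
  });

  // Pass a context explicitly for this call instead of relying on
  // a context set earlier via withContext().
  const answer = await ai.queryRag("What color is the sky on Mars?", {
    query: "Astronomy",
  });
  console.log(answer);
}

main();
```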

method

async streamRag()
async streamRag(message: string, context: QueryContext = this.context): AsyncIterable<StreamingMessage> & PromiseLike<Response>

Can be used in two ways:

  • as an async iterator - if you want to process streaming data in real-time as it arrives, ideal for handling long-running streams.

  • as a Promise that resolves to a full Response object - you have complete control over how you want to handle the stream; this might be useful when you want to manipulate the raw stream or parse it in a custom way.

Arguments
  • message (string) – Required. The message to be sent to the text generation provider's API.

  • context.query (string) – Required. Specifies an expression to determine the relevant objects and index to serve as context for text generation. You may set this to any expression that produces a set of objects, even if it is not a standalone query.

  • context.variables (string) – Optional. Values for any parameters used by the context query.

  • context.globals (string) – Optional. Values for any globals used by the context query.

  • context.max_object_count (number) – Optional. A maximum number of objects to return from the context query.
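
The async-iterator mode can be wrapped in a small accumulator that concatenates text as chunks arrive. The `{ text }` chunk shape used here is an illustrative assumption standing in for StreamingMessage, and the mock generator stands in for a real streamRag() call:

```typescript
// Concatenate text deltas from any async iterable of streaming chunks.
// With a live client you would pass ai.streamRag("...") directly.
async function collectText(
  chunks: AsyncIterable<{ text?: string }>
): Promise<string> {
  let out = "";
  for await (const chunk of chunks) {
    if (chunk.text) out += chunk.text;
  }
  return out;
}

// Mock stream standing in for streamRag() output.
async function* mockStream() {
  yield { text: "The sky on Mars " };
  yield { text: "is butterscotch-hued." };
}

collectText(mockStream()).then((s) => console.log(s));
```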

method

async generateEmbeddings()
async generateEmbeddings(inputs: string[], model: string): Promise<number[]>

Generates embeddings for the array of strings.

Arguments
  • inputs (string[]) – Required. Array of strings to generate embeddings for.

  • model (string) – Required. Specifies the AI model to use.
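
Embedding vectors returned by generateEmbeddings() are commonly compared with cosine similarity. A self-contained helper (the hardcoded vectors below are stand-ins for real model output, and the `text-embedding-3-small` model name in the comment is illustrative):

```typescript
// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// With real output you would first call, e.g.:
//   const vectors = await ai.generateEmbeddings(
//     ["the sky on Mars", "martian atmosphere"],
//     "text-embedding-3-small",
//   );
// Here, hardcoded stand-in vectors; parallel vectors score close to 1.
console.log(cosineSimilarity([0.1, 0.3], [0.2, 0.6]));
```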