Python API
The gel.ai package is an optional binding of the AI extension in Gel.
$ pip install 'gel[ai]'
Blocking and async API
The AI binding is built on top of the regular Gel client objects, providing both blocking and asynchronous versions of its API.
Blocking client example:
import gel
import gel.ai

client = gel.create_client()

gpt4ai = gel.ai.create_rag_client(
    client,
    model="gpt-4-turbo-preview"
)

astronomy_ai = gpt4ai.with_context(
    query="Astronomy"
)

print(
    astronomy_ai.query_rag("What color is the sky on Mars?")
)

for data in astronomy_ai.stream_rag("What color is the sky on Mars?"):
    print(data)
Async client example:
import gel
import gel.ai
import asyncio

client = gel.create_async_client()

async def main():
    gpt4ai = await gel.ai.create_async_rag_client(
        client,
        model="gpt-4-turbo-preview"
    )

    astronomy_ai = gpt4ai.with_context(
        query="Astronomy"
    )

    query = "What color is the sky on Mars?"
    print(
        await astronomy_ai.query_rag(query)
    )

    # or streamed
    async for data in astronomy_ai.stream_rag(query):
        print(data)

asyncio.run(main())
Factory functions
create_rag_client(client, **kwargs)

Creates an instance of RAGClient with the specified client and options. This function ensures that the client is connected before initializing the AI with the specified options.

- client – A Gel client instance.
- kwargs – Keyword arguments that are passed to the RAGOptions data class to configure AI-specific options. These options are:
  - model: The name of the model to be used. (required)
  - prompt: An optional prompt to guide the model's behavior. None will result in the client using the default prompt. (default: None)
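Because the AI binding is built on top of a regular Gel client, a single connection can back several RAG clients with different options. A minimal sketch (the model names are illustrative):

import gel
import gel.ai

client = gel.create_client()

# One Gel client can back several RAG clients with different options.
# The model names are illustrative; use models configured for your
# instance's AI providers.
fast_ai = gel.ai.create_rag_client(client, model="gpt-3.5-turbo")
careful_ai = gel.ai.create_rag_client(client, model="gpt-4-turbo-preview")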
create_async_rag_client(client, **kwargs)

Creates an instance of AsyncRAGClient with the specified client and options. This function ensures that the client is connected asynchronously before initializing the AI with the specified options.

- client – An asynchronous Gel client instance.
- kwargs – Keyword arguments that are passed to the RAGOptions data class to configure AI-specific options. These options are:
  - model: The name of the model to be used. (required)
  - prompt: An optional prompt to guide the model's behavior. (default: None)
Core classes
The base class for Gel AI clients.
This class handles the initialization and configuration of AI clients and provides methods to modify their configuration and context dynamically.
Both the blocking and async AI client classes inherit from this one, so these methods are available on an AI client of either type.
Attributes:

- options – An instance of RAGOptions, storing the RAG options.
- context – An instance of QueryContext, storing the context for AI queries.
- client_cls – A placeholder for the client class; should be implemented by subclasses.

Constructor arguments:

- client – An instance of a Gel client, which can be either a synchronous or asynchronous client.
- options – AI options to be used with the client.
- kwargs – Keyword arguments used to initialize the query context.
Creates a new instance of the same class with modified configuration options. This method uses the current instance's configuration as a base and applies the changes specified in kwargs.

- kwargs – Keyword arguments that specify the changes to the AI configuration. These changes are passed to the derive method of the current configuration options object. Possible keywords include:
  - model: Specifies the AI model to be used. This must be a string.
  - prompt: An optional prompt to guide the model's behavior. (default: None)
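Building on the blocking example above, a sketch of deriving a reconfigured client. The method name with_config used here is an assumption made by analogy with with_context, and the model name is illustrative:

# Assumes astronomy_ai from the blocking example above.
# with_config is an assumed method name; the model name is illustrative.
cheaper_ai = astronomy_ai.with_config(model="gpt-3.5-turbo")
print(cheaper_ai.query_rag("What color is the sky on Mars?"))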
Creates a new instance of the same class with a modified context. This method preserves the current AI options and client settings, but uses the modified context specified by kwargs.

- kwargs – Keyword arguments that specify the changes to the context. These changes are passed to the derive method of the current context object. Possible keywords include:
  - query: The database query string.
  - variables: A dictionary of variables used in the query.
  - globals: A dictionary of global settings affecting the query.
  - max_object_count: An optional integer to limit the number of objects returned by the query.
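Continuing the blocking example above, a sketch of a more specific context; the schema, query expression, and limit are illustrative:

# Sketch: scope the RAG context with an EdgeQL expression, a query
# parameter, and a result limit. Adjust the query to your schema.
scoped_ai = gpt4ai.with_context(
    query="select Astronomy filter .topic = <str>$topic",
    variables={"topic": "mars"},
    max_object_count=5,
)
print(scoped_ai.query_rag("What color is the sky on Mars?"))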
Sends a request to the AI provider and returns the response as a string. This method uses a blocking HTTP POST request. It raises an HTTP exception if the request fails.

- message – The query string to be sent to the AI model.
- context – An optional QueryContext object to provide additional context for the query. If not provided, uses the default context of this AI client instance.
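The per-call context parameter makes it possible to override the client's default context for a single request. A sketch (QueryContext is assumed to be exposed at the gel.ai package level):

# Sketch: override the default context for one call only.
# gel.ai.QueryContext is an assumed import location.
answer = gpt4ai.query_rag(
    "What color is the sky on Mars?",
    context=gel.ai.QueryContext(query="Astronomy"),
)
print(answer)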
Opens a connection to the AI provider to stream query responses. This method yields data as it is received, using Server-Sent Events (SSE) to handle streaming data. It raises an HTTP exception if the request fails.

- message – The query string to be sent to the AI model.
- context – An optional QueryContext object to provide additional context for the query. If not provided, uses the default context of this AI client instance.
An asynchronous class for creating Gel AI clients. This class provides methods to send queries and receive responses asynchronously, in both single-response and streaming modes.

- client – An instance of httpx.AsyncClient used for making HTTP requests asynchronously.
Sends an asynchronous request to the AI provider and returns the response as a string. This method is asynchronous and should be awaited. It raises an HTTP exception if the request fails.

- message – The query string to be sent to the AI model.
- context – An optional QueryContext object to provide additional context for the query. If not provided, uses the default context of this AI client instance.
Opens an asynchronous connection to the AI provider to stream query responses. This method yields data as it is received, using asynchronous Server-Sent Events (SSE) to handle streaming data. It is an asynchronous generator and should be used in an async for loop. It raises an HTTP exception if the connection fails.

- message – The query string to be sent to the AI model.
- context – An optional QueryContext object to provide additional context for the query. If not provided, uses the default context of this AI client instance.
Configuration classes
An enumeration of roles used when defining a custom text generation prompt.
- SYSTEM – Represents a system-level entity or process.
- USER – Represents a human user participating in the chat.
- ASSISTANT – Represents an AI assistant.
- TOOL – Represents a tool or utility used within the chat context.
A single message in a custom text generation prompt.
- role – The role of the chat participant. Must be an instance of ChatParticipantRole.
- content – The content associated with the role, expressed as a string.
The metadata and content of a text generation prompt.
- name – An optional name identifying the prompt.
- id – An optional unique identifier for the prompt.
- custom – An optional list of Custom objects, each providing role-specific content within the prompt.
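Putting these three classes together, a sketch of a fully custom prompt passed to the factory function through its prompt option (the import locations and the message content are assumptions):

import gel
import gel.ai
from gel.ai import ChatParticipantRole, Custom, Prompt  # assumed exports

# Sketch: a custom prompt consisting of a single system message.
custom_prompt = Prompt(
    custom=[
        Custom(
            role=ChatParticipantRole.SYSTEM,
            content=(
                "You are a concise planetary-science assistant. "
                "Answer using only the provided context."
            ),
        ),
    ],
)

client = gel.create_client()
rag = gel.ai.create_rag_client(
    client,
    model="gpt-4-turbo-preview",
    prompt=custom_prompt,
)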
A data class for RAG options, specifying model and prompt settings.
- model – The name of the AI model.
- prompt – An optional Prompt providing additional guiding information for the model.
Creates a new instance of RAGOptions by merging the existing options with the provided keyword arguments. Returns a new RAGOptions instance with updated attributes.
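A sketch of the merge behavior (RAGOptions is assumed to be exposed at the package level; model names are illustrative):

from gel.ai import RAGOptions  # assumed export

base = RAGOptions(model="gpt-4-turbo-preview")
# derive() keeps existing attributes and overrides only those passed in,
# so `variant` reuses the prompt (here the default) with a new model.
variant = base.derive(model="gpt-3.5-turbo")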
A data class defining the context for a query to an AI model.
- query – The base query string.
- variables – An optional dictionary of variables used in the query.
- globals – An optional dictionary of global settings affecting the query.
- max_object_count – An optional integer specifying the maximum number of objects the query should return.
Creates a new instance of QueryContext by merging the existing context with the provided keyword arguments. Returns a new QueryContext instance with updated attributes.
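A sketch of deriving a narrower context (QueryContext is assumed to be exposed at the package level; values are illustrative):

from gel.ai import QueryContext  # assumed export

base_context = QueryContext(query="Astronomy")
# derive() copies the existing context, overriding only the fields given.
limited_context = base_context.derive(max_object_count=3)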
A data class defining a request to a text generation model.
- model – The name of the AI model to query.
- prompt – An optional Prompt associated with the request.
- context – The QueryContext defining the query context.
- query – The specific query string to be sent to the model.
- stream – A boolean indicating whether the response should be streamed (True) or returned in a single response (False).
Converts the RAGRequest into a dictionary suitable for making an HTTP request using the httpx library.
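A sketch of assembling such a request from the classes above (RAGRequest and QueryContext are assumed to be exposed at the package level; field values are illustrative):

from gel.ai import QueryContext, RAGRequest  # assumed exports

# Sketch: a non-streaming RAG request built from the documented fields.
request = RAGRequest(
    model="gpt-4-turbo-preview",
    prompt=None,
    context=QueryContext(query="Astronomy"),
    query="What color is the sky on Mars?",
    stream=False,
)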