Completions + Voice
For use cases that don't need full realtime voice-to-voice, Gabber has an OpenAI client-compatible API.
This gives you access to Gabber LLMs via the OpenAI SDKs. Optionally, this API can return synthesized voice alongside the text response. The audio file is streamed as the voice is synthesized, in parallel with text generation. This means you can start playing the file immediately, resulting in a low-latency experience for your end users.
Getting started
The following snippet shows how to use the chat/completions API. It can run client-side using token authentication or server-side using API key authentication.
import OpenAI from "openai";

// Client-side: use a short-lived token you generate on your backend
const token = await yourCodeToGenerateAGabberToken();
// Or server-side: use your GABBER_API_KEY
const api_key = process.env.GABBER_API_KEY;

// Gabber LLM ID. A UUID that can be found in your https://app.gabber.dev dashboard
const gabber_llm = "<id from gabber dashboard>";
// Gabber Voice ID. A UUID that can be found in your https://app.gabber.dev dashboard
const gabber_voice_id = "<id from gabber dashboard>";

const client = new OpenAI({
  baseURL: "https://api.gabber.dev/v1",
  apiKey: token, // If using token authentication
  // defaultHeaders: { "x-api-key": api_key }, // If using API key authentication
});

const messages = [{ role: "user", content: "Hello!" }];

const stream = await client.chat.completions.create({
  model: gabber_llm,
  messages,
  stream: true,
  // To generate voice for the response, add the gabber field to the request body.
  // (In TypeScript, prefix this with // @ts-expect-error since it is not part
  // of the OpenAI SDK's typed parameters.)
  gabber: {
    voice: gabber_voice_id,
  },
});

let totalAssistantResponse = "";
for await (const chunk of stream) {
  if (chunk.gabber?.voice) {
    const audio_url = chunk.gabber.voice.audio_url;
    // You can start playing this URL in an <audio /> tag
    continue;
  }
  const content = chunk.choices[0].delta.content;
  if (content) {
    totalAssistantResponse += content;
  }
}
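The consumption loop above can be factored into a small, testable helper that separates voice chunks from text chunks. This is a sketch, not part of the Gabber SDK; the chunk shapes (`gabber.voice.audio_url`, `choices[0].delta.content`) are assumed from the snippet above.

```javascript
// Hypothetical helper: given an iterable of streamed chunks, accumulate the
// assistant's text and collect any audio URLs Gabber returned.
function collectStream(chunks) {
  let text = "";
  const audioUrls = [];
  for (const chunk of chunks) {
    if (chunk.gabber?.voice) {
      // Voice chunks carry an audio_url instead of delta content
      audioUrls.push(chunk.gabber.voice.audio_url);
      continue;
    }
    const content = chunk.choices?.[0]?.delta?.content;
    if (content) {
      text += content;
    }
  }
  return { text, audioUrls };
}
```

In the browser, you could hand each URL in `audioUrls` to `new Audio(url).play()` as it arrives rather than collecting them, which preserves the low-latency benefit of streaming.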