Text generation

Text content is represented as TextPart objects.

types.ts
interface TextPart {
  type: "text";
  text: string;
  citations?: Citation[];
}

Use generate() to call the language model with TextPart objects.


generate-text.ts
import { getModel } from "./get-model.ts";

const model = getModel("openai", "gpt-4o");

const response = await model.generate({
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "Tell me a story.",
        },
      ],
    },
    {
      role: "assistant",
      content: [
        {
          type: "text",
          text: "What kind of story would you like to hear?",
        },
      ],
    },
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "A fairy tale.",
        },
      ],
    },
  ],
});

console.dir(response, { depth: null });
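Because the response content is a list of parts, extracting the generated text means filtering for TextParts and joining their text. A minimal sketch (collectText is a hypothetical helper, not part of the SDK):

```typescript
interface TextPart {
  type: "text";
  text: string;
}

// Other part types (tool calls, images, ...) share a `type` discriminator.
type Part = TextPart | { type: string };

// Join the text of every TextPart in a response's content array.
function collectText(content: Part[]): string {
  return content
    .filter((part): part is TextPart => part.type === "text")
    .map((part) => part.text)
    .join("");
}
```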

Text generation can also be streamed using the stream() method. In streamed responses, text arrives incrementally as TextPartDelta chunks.

types.ts
interface TextPartDelta {
  type: "text";
  text: string;
  citation?: CitationDelta;
}

Individual text chunks can be combined to create the final text output.

stream-text.ts
import { StreamAccumulator } from "@hoangvvo/llm-sdk";
import { getModel } from "./get-model.ts";
const model = getModel("openai", "gpt-4o");
const response = model.stream({
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "Tell me a story.",
        },
      ],
    },
    {
      role: "assistant",
      content: [
        {
          type: "text",
          text: "What kind of story would you like to hear?",
        },
      ],
    },
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "A fairy tale.",
        },
      ],
    },
  ],
});

const accumulator = new StreamAccumulator();

let current = await response.next();
while (!current.done) {
  console.dir(current.value, { depth: null });
  accumulator.addPartial(current.value);
  current = await response.next();
}

const finalResponse = accumulator.computeResponse();
console.dir(finalResponse, { depth: null });
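The accumulation step can be sketched without the SDK. TextAccumulator below is a hypothetical stand-in for StreamAccumulator that handles text-only deltas by folding each chunk's text into a buffer; it also shows consuming the stream with for await...of, which works whenever the stream is a standard async iterable (as the manual next() loop above suggests):

```typescript
interface TextPartDelta {
  type: "text";
  text: string;
}

// Minimal stand-in for StreamAccumulator: concatenates text deltas in order.
class TextAccumulator {
  private buffer = "";

  addPartial(delta: TextPartDelta): void {
    this.buffer += delta.text;
  }

  computeResponse(): string {
    return this.buffer;
  }
}

// Simulate a model stream that yields text deltas one at a time.
async function* fakeStream(): AsyncGenerator<TextPartDelta> {
  for (const text of ["Once ", "upon ", "a time."]) {
    yield { type: "text", text };
  }
}

async function main(): Promise<string> {
  const accumulator = new TextAccumulator();
  for await (const delta of fakeStream()) {
    accumulator.addPartial(delta);
  }
  return accumulator.computeResponse();
}
```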