
Overview

Agents enable the development of agentic AI applications that can generate responses and execute tasks autonomously. An Agent uses the LLM SDK to interact with different language models and lets you define instructions, tools, and other language model parameters.

We provide Agent implementations in multiple programming languages.

Below is an example of how to implement an agent in TypeScript:

agent.ts
import { Agent, tool, type AgentItem } from "@hoangvvo/llm-agent";
import { typeboxTool } from "@hoangvvo/llm-agent/typebox";
import { zodTool } from "@hoangvvo/llm-agent/zod";
import { Type } from "@sinclair/typebox";
import readline from "node:readline/promises";
import { z } from "zod";
import { getModel } from "./get-model.ts";

// Define the context interface that can be accessed in the instructions and tools
interface MyContext {
  userName: string;
}

// Define the model to use for the Agent
const model = getModel("openai", "gpt-4o");

// Define the agent tools
const getTimeTool = tool({
  name: "get_time",
  description: "Get the current time",
  parameters: {
    type: "object",
    properties: {},
    additionalProperties: false,
  },
  execute() {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            current_time: new Date().toISOString(),
          }),
        },
      ],
      is_error: false,
    };
  },
});

// Create an agent tool using @sinclair/typebox with type inference
// npm install @sinclair/typebox
const getWeatherTool = typeboxTool({
  name: "get_weather",
  description: "Get weather for a given city",
  parameters: Type.Object(
    {
      city: Type.String({ description: "The name of the city" }),
    },
    { additionalProperties: false },
  ),
  execute(params) {
    // inferred as { city: string }
    const { city } = params;
    console.log(`Getting weather for ${city}`);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            city,
            forecast: "Sunny",
            temperatureC: 25,
          }),
        },
      ],
      is_error: false,
    };
  },
});

// Create an agent tool using zod with type inference
// npm install zod zod-to-json-schema
const sendMessageTool = zodTool({
  name: "send_message",
  description: "Send a text message",
  parameters: z.object({
    message: z.string().min(1).max(500),
    phoneNumber: z.string(),
  }),
  execute(params) {
    // inferred as { message: string, phoneNumber: string }
    const { message, phoneNumber } = params;
    console.log(`Sending message to ${phoneNumber}: ${message}`);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            success: true,
          }),
        },
      ],
      is_error: false,
    };
  },
});

// Create the Agent
const myAssistant = new Agent<MyContext>({
  name: "Mai",
  model,
  instructions: [
    "You are Mai, a helpful assistant. Answer questions to the best of your ability.",
    // Dynamic instruction
    (context) => `You are talking to ${context.userName}.`,
  ],
  tools: [getTimeTool, getWeatherTool, sendMessageTool],
});

// Implement the CLI to interact with the Agent
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const userName = await rl.question("Your name: ");
const context: MyContext = {
  userName,
};

console.log(`Type 'exit' to quit`);

const items: AgentItem[] = [];
let userInput = "";
while (userInput !== "exit") {
  userInput = (await rl.question("> ")).trim();
  if (!userInput) {
    continue;
  }
  if (userInput.toLowerCase() === "exit") {
    break;
  }
  // Add user message as the input
  items.push({
    type: "message",
    role: "user",
    content: [
      {
        type: "text",
        text: userInput,
      },
    ],
  });
  // Call assistant
  const response = await myAssistant.run({
    context,
    input: items,
  });
  // Append items with the output items
  items.push(...response.output);
  console.dir(response, { depth: null });
}

This agent library (not framework) is designed for transparency and control. Unlike many "agentic" frameworks, it ships with no hidden prompt templates or secret parsing rules, and that is deliberate:

  • Nothing hidden – What you write is what runs. No secret prompts or “special sauce” behind the scenes, so your instructions aren’t quietly overridden.
  • Works in any setting – Many frameworks bake in English-only prompts. Here, the model sees only your words, in whatever language or format you choose.
  • Easy to tweak – Change prompts, parsing, or flow without fighting built-in defaults.
  • Less to debug – Fewer layers mean you can trace exactly where things break.
  • No complex abstraction – Don’t waste time learning new concepts or APIs (e.g., “chains”, “graphs”, syntax with special meanings, etc.). Just plain functions and data structures.

LLMs in the past were not as powerful as they are today, so frameworks had to do a lot of heavy lifting to get decent results. With modern LLMs, much of that complexity is no longer necessary.

Because we keep the core minimal (500 LOC!) and avoid hidden magic, the library doesn't bundle heavy agent patterns like hand-off, memory, or planners. Instead, the examples/ folder provides clean, working references you can copy or adapt, showing that the library can still support complex use cases.
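As an illustration of why such patterns don't need framework support, a hand-off can be expressed as ordinary control flow: one agent's response names a transfer target, and the calling loop simply switches which agent it invokes next. The sketch below is a hypothetical, library-free version of that idea; the `AgentLike` interface and `runWithHandoff` helper are illustrative assumptions, not part of the library's API.

```typescript
// Hypothetical minimal shape of an agent, for illustration only.
interface AgentLike {
  name: string;
  respond(input: string): { text: string; handoffTo?: string };
}

// Plain-function hand-off: no framework machinery, just a lookup and a loop.
function runWithHandoff(
  agents: Record<string, AgentLike>,
  start: string,
  input: string,
): string {
  let current = agents[start];
  for (let hops = 0; hops < 5; hops++) {
    // guard against hand-off loops
    const result = current.respond(input);
    if (!result.handoffTo) return `${current.name}: ${result.text}`;
    current = agents[result.handoffTo];
  }
  throw new Error("Too many hand-offs");
}

// Mock agents standing in for real Agent instances.
const triage: AgentLike = {
  name: "triage",
  respond: (input) =>
    input.includes("refund")
      ? { text: "", handoffTo: "billing" }
      : { text: "How can I help?" },
};
const billing: AgentLike = {
  name: "billing",
  respond: () => ({ text: "Your refund is on its way." }),
};

console.log(runWithHandoff({ triage, billing }, "triage", "I need a refund"));
// prints: billing: Your refund is on its way.
```

With the real library, `respond` would be an `Agent.run` call and the hand-off signal would come from a tool result, but the routing logic stays exactly this kind of plain function.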

This philosophy is inspired by this blog post.