Smart agents

A smart agent is an agent that plans its own work. You give it instructions and an LLM, and it decides which of its capabilities to call, in what order, to satisfy a natural-language task. Regular agents (what the Defining capabilities guide produces) are tool servers — they expose capabilities and wait to be told which to call. Smart agents expose the same capabilities but can also be handed a free-form prompt.

Turning an agent into a smart agent

Add instructions and llm to the agent config:
import { Agent } from '@svantic/sdk';

const agent = new Agent({
  name: 'billing-assistant',
  description: 'Answers billing questions and takes billing actions.',
  instructions: `You are a billing assistant for Acme Corp.
When a customer asks about an invoice, use \`lookup_invoice\` and summarize the result.
When a customer asks for a refund, confirm the amount, then call \`issue_refund\`.
Never make up invoice ids.`,
  llm: {
    provider: 'gemini',
    model: 'gemini-2.0-flash',
  },
  mesh: {
    client_id: process.env.SVANTIC_CLIENT_ID!,
    client_secret: process.env.SVANTIC_CLIENT_SECRET!,
  },
});

agent.define_capability({ /* lookup_invoice */ });
agent.define_capability({ /* issue_refund */ });

await agent.start();
The agent is now callable in two ways:
  • Structured invocation — a caller invokes lookup_invoice directly via RemoteAgent.invoke_capability. Instructions and the LLM are bypassed.
  • Natural language — a caller sends a text message via RemoteAgent.send(...). The LLM plans which capability (or capabilities) to call.
Both continue to work: making the agent smart doesn’t take away its tool-server API.
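As a sketch, the two calling styles might look like this from a client’s side. The endpoint URL and the `invoice_id` parameter are illustrative — the capability bodies above are elided, so the exact parameter names may differ:

```typescript
import { RemoteAgent } from '@svantic/sdk';

// Hypothetical endpoint for the billing-assistant agent started above.
const billing = await RemoteAgent.connect(
  'https://api.svantic.com/agents/billing-assistant',
);

// Structured invocation: call a capability directly.
// Instructions and the LLM are bypassed.
const invoice = await billing.invoke_capability('lookup_invoice', {
  invoice_id: 'inv_123', // assumed parameter name
});

// Natural language: the LLM reads the instructions and plans
// which capability (or capabilities) to call.
const reply = await billing.send('Why was I charged $42 on invoice inv_123?');
```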

Picking a provider

LlmConfig.provider accepts 'gemini', 'openai', or 'anthropic'. API keys come from the config or a matching environment variable:
Provider    Env var              Default model
gemini      GOOGLE_API_KEY       gemini-2.0-flash
openai      OPENAI_API_KEY       gpt-4o
anthropic   ANTHROPIC_API_KEY    claude-sonnet-4-20250514
Override model for long-context / reasoning variants:
llm: { provider: 'openai', model: 'gpt-4o-mini', temperature: 0.2 }
See LlmConfig for every option.
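Put together, an LLM config with an overridden model might look like the sketch below. The `api_key` field name is an assumption — the docs only say keys can come from the config or the env var, so check the LlmConfig reference for the exact option:

```typescript
// A sketch of an LLM config; `api_key` is an assumed field name.
const llm = {
  provider: 'openai' as const,
  model: 'gpt-4o-mini', // override the default (gpt-4o)
  temperature: 0.2, // lower = more deterministic planning
  api_key: process.env.OPENAI_API_KEY, // assumed: falls back to the env var if omitted
};
```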

Writing good instructions

  • Address the agent directly (“You are…”). The system-prompt framing matters.
  • Enumerate capabilities by name when the intent→capability mapping isn’t obvious from descriptions. Mention them in backticks.
  • Be explicit about what not to do. Destructive capabilities deserve explicit guardrails in the instructions (“Never issue a refund larger than $500 without calling check_policy first”).
  • Keep it under ~500 tokens. Long instructions cost money on every turn.
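Applying all four rules to the billing example gives an instructions string along these lines (capability names are the ones defined above; the $500/`check_policy` guardrail is the one quoted in the third bullet):

```typescript
// Instructions that address the agent directly, name capabilities in
// backticks, set explicit guardrails, and stay well under ~500 tokens.
const instructions = `You are a billing assistant for Acme Corp.
Use \`lookup_invoice\` whenever the user asks about charges or invoice contents.
Use \`issue_refund\` only after the user confirms the exact amount.
Never issue a refund larger than $500 without calling \`check_policy\` first.
Never make up invoice ids.`;
```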

Capability descriptions matter more, not less

Smart agents pick capabilities by reading their descriptions. If the planner is picking the wrong tool, sharpen the description before touching the instructions. Good: “Return the line items and total for a given invoice. Use this whenever the user asks about invoice contents, charges, or breakdowns.” Less good: “Looks up an invoice.”
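For concreteness, here are the two description styles as capability definitions. The parameter shape is an assumption (the guide elides the real `lookup_invoice` body):

```typescript
// Sharper: states what the capability returns and when the planner
// should pick it.
const good = {
  name: 'lookup_invoice',
  description:
    'Return the line items and total for a given invoice. ' +
    'Use this whenever the user asks about invoice contents, charges, or breakdowns.',
  parameters: {
    type: 'object',
    properties: { invoice_id: { type: 'string' } }, // assumed parameter name
    required: ['invoice_id'],
  },
};

// Vaguer: gives the planner little to match user intent against.
const vague = { name: 'lookup_invoice', description: 'Looks up an invoice.' };
```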

Mixing smart and structured

Smart agents can delegate to other agents via RemoteAgent inside a capability handler. This is the most common way to compose a larger system:
agent.define_capability({
  name: 'answer_with_docs',
  description: 'Answer a question using the policy docs agent.',
  parameters: {
    type: 'object',
    properties: { question: { type: 'string' } },
    required: ['question'],
  },
  handler: async ({ question }) => {
    // Connect to the downstream docs agent and delegate the search to it.
    const docs = await RemoteAgent.connect('https://api.svantic.com/agents/docs-agent');
    return docs.invoke_capability('search', { query: question });
  },
});

Observing the plan

Every smart-agent turn produces a trace: the LLM call, the capabilities it chose, their child spans, any nested agent calls. See Telemetry for how to read and extend it.

See also