# Telemetry and Tracing
Telemetry is enabled by default. Every agent records spans, LLM usage, and events automatically — no setup required. When connected to the mesh, telemetry exports to the Svantic dashboard. Otherwise, it exports to the console.
## Zero-Config Telemetry

```ts
import { Agent } from '@svantic/sdk';

const agent = new Agent({
  name: 'ticket-agent',
  description: 'Manages support tickets.',
  port: 4000,
  mesh: {
    client_id: process.env.SAVANT_CLIENT_ID!,
    client_secret: process.env.SAVANT_CLIENT_SECRET!,
  },
});

agent.define_capability({ /* ... */ });

await agent.start();
```
That’s it. Capability invocations are traced, LLM token usage is recorded, and everything exports to the Svantic dashboard automatically.
## How it works

| Scenario | What happens |
|---|---|
| `mesh` config provided | Spans export to `{svantic_url}/telemetry` using your credentials. |
| No `mesh` config | Spans export to the console (useful during local development). |
| `telemetry: false` | Telemetry is completely disabled. Nothing is recorded. |
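The table above can be sketched as a selection function. This is illustrative only: the `AgentConfig` shape and the `choose_exporter` helper are assumptions for the sketch, not part of the `@svantic/sdk` API.

```ts
// A minimal model of the exporter-selection rules: telemetry: false wins,
// then mesh credentials route to the dashboard, else console.
type AgentConfig = {
  mesh?: { client_id: string; client_secret: string };
  telemetry?: false;
};

function choose_exporter(config: AgentConfig): 'dashboard' | 'console' | 'disabled' {
  if (config.telemetry === false) return 'disabled'; // nothing is recorded
  if (config.mesh) return 'dashboard';               // exports to {svantic_url}/telemetry
  return 'console';                                  // local-development default
}

console.log(choose_exporter({}));                    // "console"
console.log(choose_exporter({ telemetry: false })); // "disabled"
```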
## Disabling Telemetry

Set `telemetry: false` in the agent config:

```ts
const agent = new Agent({
  name: 'my-agent',
  description: 'No telemetry.',
  port: 4000,
  telemetry: false,
});
```
## Custom Spans

Use `agent.tracer` to add your own instrumentation. Spans nest automatically — starting a span while another is active makes it a child of the active span.

```ts
const span = agent.tracer.start_span('process_ticket');
try {
  span.set_attribute('ticket_id', 4521);

  // Started while 'process_ticket' is active, so this becomes a child span.
  const child = agent.tracer.start_span('fetch_from_db');
  try {
    const ticket = await db.find_ticket(4521);
  } finally {
    child.end(); // end the child even if the DB call throws
  }

  span.end();
} catch (err) {
  span.set_error(err instanceof Error ? err.message : String(err));
  span.end();
  throw err;
}
```
## Span Types

Generic spans for any operation:

```ts
const span = agent.tracer.start_span('operation_name', {
  'custom.attribute': 'value',
});
span.end();
```

Tool spans with the tool name as an automatic attribute:

```ts
const span = agent.tracer.start_tool_span('get_ticket', {
  'ticket.id': 4521,
});
span.end();
```

LLM spans that capture model and token usage:

```ts
const span = agent.tracer.start_llm_span('gpt-4o');
// ... make LLM call ...
span.end({ input_tokens: 150, output_tokens: 42 });
```
## Span Methods

| Method | Description |
|---|---|
| `set_attribute(key, value)` | Add a string, number, or boolean attribute. |
| `set_error(message)` | Mark the span as errored with a message. |
| `end(options?)` | Mark the span as complete. LLM spans accept token counts. |
## Events

For discrete signals that are not part of the span tree (e.g. “document processed”, “ticket escalated”):

```ts
agent.telemetry.event('ticket.escalated', {
  ticket_id: 4521,
  reason: 'SLA breach',
});
```
Events are buffered and flushed alongside spans. If emitted within an active span, they automatically capture the current trace_id and span_id for correlation.
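The buffering-and-correlation behavior can be sketched in a few lines. This is a self-contained illustration of the mechanism, not SDK code — the `EventBuffer` class and `SpanContext` type are invented for the sketch.

```ts
// Sketch: buffered events capture the active span's context at emit time,
// then leave the buffer when flushed alongside spans.
type SpanContext = { trace_id: string; span_id: string };
type BufferedEvent = {
  name: string;
  attrs: Record<string, unknown>;
  trace_id?: string;
  span_id?: string;
};

class EventBuffer {
  private buffer: BufferedEvent[] = [];
  constructor(private current: () => SpanContext | undefined) {}

  event(name: string, attrs: Record<string, unknown>): void {
    const ctx = this.current(); // capture the active span, if any
    this.buffer.push({ name, attrs, trace_id: ctx?.trace_id, span_id: ctx?.span_id });
  }

  flush(): BufferedEvent[] {
    const out = this.buffer;
    this.buffer = []; // flushed events are handed to the exporter
    return out;
  }
}

// Emitted inside an "active span", the event is correlated automatically:
const active: SpanContext = { trace_id: 'abc123', span_id: 'def456' };
const events = new EventBuffer(() => active);
events.event('ticket.escalated', { ticket_id: 4521 });
console.log(events.flush()[0].trace_id); // "abc123"
```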
## W3C Trace Propagation

When your agent calls another agent (via `RemoteAgent`), trace context propagates automatically using the W3C `traceparent` header. This creates a unified trace across agent-to-agent calls.
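For reference, the header itself is plain text with a fixed shape defined by the W3C Trace Context spec (not by `@svantic/sdk`): a version, a 32-hex-char trace ID, a 16-hex-char parent span ID, and trace flags. A hand-rolled sketch, using the spec's own example IDs:

```ts
// Build a W3C traceparent header: "00-{trace-id}-{parent-id}-{flags}".
// The "01" flag marks the trace as sampled.
function build_traceparent(trace_id: string, span_id: string, sampled = true): string {
  return `00-${trace_id}-${span_id}-${sampled ? '01' : '00'}`;
}

const header = build_traceparent('4bf92f3577b34da6a3ce929d0e0e4736', '00f067aa0ba902b7');
console.log(header);
// "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
```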
### Injecting context into outbound requests

For non-A2A HTTP calls (e.g. calling an external API), inject trace context manually:

```ts
import { ContextInjector } from '@svantic/sdk/telemetry';

const headers: Record<string, string> = { 'Content-Type': 'application/json' };

const context = agent.tracer.get_current_context();
if (context) {
  ContextInjector.inject(headers, {
    trace_id: context.trace_id,
    span_id: context.span_id,
  });
}

await fetch('https://external-service.com/api', { headers });
```
### Adopting inbound context

When your agent receives an HTTP request with a `traceparent` header, adopt the remote context so your spans join the caller’s trace:

```ts
import { ContextExtractor } from '@svantic/sdk/telemetry';

app.post('/webhook', (req, res) => {
  const extracted = ContextExtractor.extract(req.headers);
  if (extracted) {
    agent.tracer.adopt_remote_context(extracted.trace_id, extracted.span_id);
  }

  const span = agent.tracer.start_span('handle_webhook');
  // This span is now a child of the remote caller's trace.
  span.end();
});
```
## Advanced: Custom Telemetry Instance

If you need full control over the export destination, auth, or flush behavior, pass your own `SvanticTelemetry` instance:

```ts
import { Agent, SvanticTelemetry } from '@svantic/sdk';

const telemetry = SvanticTelemetry.init({
  service_name: 'ticket-agent',
  export: {
    endpoint: 'http://otel-collector:4318/v1/traces',
    protocol: 'otlp',
    auth: { type: 'api_key', key: process.env.COLLECTOR_KEY! },
  },
  flush_interval_ms: 10000,
});

const agent = new Agent({
  name: 'ticket-agent',
  description: 'Manages support tickets.',
  port: 4000,
  telemetry,
});
```
### Export destinations

| Destination | Config |
|---|---|
| Svantic mesh | `{ endpoint: 'http://gateway/telemetry', auth: { type: 'bearer', token: jwt } }` |
| OTLP collector | `{ endpoint: 'http://collector:4318/v1/traces', protocol: 'otlp' }` |
| Console | `{ type: 'console' }` |
## Dashboard Integration
When telemetry exports to the Svantic mesh (the default for mesh-connected agents), your traces appear in the Svantic dashboard under the session timeline. You can:
- See the full span tree for each capability invocation.
- View LLM token usage per call.
- Trace requests across agent-to-agent boundaries.
- Correlate events with the spans that produced them.
## Shutdown

Telemetry flushes automatically on shutdown:

```ts
process.on('SIGTERM', () => agent.stop());
```

Both `agent.stop()` and `agent.close()` call `telemetry.shutdown()` internally, which stops the flush timer, flushes remaining data, and shuts down the exporter.
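That three-step sequence can be sketched in isolation. This is illustrative only — the `FlushLoop` class is invented for the sketch and none of its names come from `@svantic/sdk`:

```ts
// Sketch of the shutdown sequence: stop the periodic flush timer,
// flush whatever is still buffered, then mark the exporter closed.
class FlushLoop {
  private pending: string[] = [];
  private timer: ReturnType<typeof setInterval>;
  flushed: string[] = [];
  closed = false;

  constructor(interval_ms: number) {
    this.timer = setInterval(() => this.flush(), interval_ms);
  }

  record(item: string): void {
    this.pending.push(item);
  }

  flush(): void {
    this.flushed.push(...this.pending);
    this.pending = [];
  }

  shutdown(): void {
    clearInterval(this.timer); // 1. stop the flush timer
    this.flush();              // 2. flush remaining data
    this.closed = true;        // 3. shut down the exporter
  }
}

const loop = new FlushLoop(10_000);
loop.record('span:process_ticket');
loop.shutdown(); // nothing recorded before shutdown is lost
console.log(loop.flushed.length, loop.closed); // 1 true
```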