Telemetry & tracing
Every agent built with the SDK produces OpenTelemetry spans automatically. When the agent runs on the Svantic mesh, that data lands in the dashboard’s Traces and Usage views. This guide covers what you get for free, what you can add, and how to interpret the output.
What you get automatically
For every capability invocation the SDK opens an execute_tool <capability_name> span with these attributes:
- gen_ai.operation.name = "execute_tool"
- gen_ai.tool.name — the capability’s name
- gen_ai.conversation.id — the session id
- svantic.tenant.id
If your agent is LLM-driven (defined by instructions + llm config), you also get:
- invoke_agent <name> — one span per user turn, carrying aggregated token counts (gen_ai.usage.input_tokens, gen_ai.usage.output_tokens).
- call_llm <model> — one span per LLM call inside that turn, with gen_ai.request.model, gen_ai.usage.*, and gen_ai.response.finish_reasons.
- Nested execute_tool spans for each tool the LLM invokes.
The SDK also honors incoming traceparent / baggage headers, so when the mesh dispatches a task to your agent, your spans join the same trace as everything upstream.
No code changes are required. The mesh runtime installs the OpenTelemetry provider at startup; SDK spans flow through it without any configuration on your side.
Turning it off
There is no per-agent switch — the host process either has a global TracerProvider or it doesn’t. If no provider is registered, the SDK’s spans silently become no-ops with zero runtime cost.
When developing agents locally without a mesh connection, there’s usually nothing to turn off: spans just don’t go anywhere.
Adding custom spans — the easy way
The SDK ships three helpers that handle span lifecycle, status, error recording, and the OTel GenAI attribute names for you. Use them instead of hand-rolling startActiveSpan — the code is shorter and the dashboard gets richer data.
trace_llm(meta, fn) — any LLM provider
Works for OpenAI, Anthropic, Bedrock, Vertex, Ollama, or anything else. You’re responsible for calling the provider; the helper takes care of the span.
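Here is a minimal sketch with the official OpenAI Node SDK. Only the trace_llm(meta, fn) shape comes from this guide; the @svantic/sdk import path and the meta field names (system, model) are illustrative assumptions:

```ts
import OpenAI from "openai";
import { trace_llm } from "@svantic/sdk"; // hypothetical import path

const openai = new OpenAI();

// The meta fields are assumed to map onto gen_ai.system / gen_ai.request.model.
const completion = await trace_llm(
  { system: "openai", model: "gpt-4o" },
  () =>
    openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Summarize this ticket." }],
    }),
);
```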
The span ends with gen_ai.system, gen_ai.request.model, gen_ai.usage.*, and gen_ai.response.finish_reasons populated automatically. The dashboard waterfall renders it as an llm.chat <model> row with the model name and a 1.2k→340 token summary on the bar. Errors are captured as span events with status=ERROR.
The same shape works for Anthropic:
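Again a sketch (same hypothetical import path; the Anthropic call itself is the official @anthropic-ai/sdk):

```ts
import Anthropic from "@anthropic-ai/sdk";
import { trace_llm } from "@svantic/sdk"; // hypothetical import path

const anthropic = new Anthropic();

const message = await trace_llm(
  { system: "anthropic", model: "claude-sonnet-4-20250514" },
  () =>
    anthropic.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      messages: [{ role: "user", content: "Summarize this ticket." }],
    }),
);
```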
trace_tool(meta, fn) — any tool call
Use for database queries, HTTP calls, shell-outs, MCP servers — anything that goes outside your process on behalf of an LLM.
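A sketch wrapping a node-postgres query; the meta shape ({ name }) is an assumption based on how the span is rendered below:

```ts
import { Pool } from "pg";
import { trace_tool } from "@svantic/sdk"; // hypothetical import path

const pool = new Pool(); // connection settings come from PG* env vars
const ticketId = 42;

const rows = await trace_tool({ name: "postgres.query" }, async () => {
  const result = await pool.query(
    "SELECT id, status FROM tickets WHERE id = $1",
    [ticketId],
  );
  return result.rows;
});
```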
The span shows up as tool.execute postgres.query in the waterfall with a distinct color. If the tool throws, the span is marked red.
trace_step(name, fn) — everything else
Use for planning, parsing, validation, or any block of work that would otherwise show up as unaccounted time in the dashboard.
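For example (a sketch: trace_step(name, fn) is the documented shape, everything else is illustrative):

```ts
import { trace_step } from "@svantic/sdk"; // hypothetical import path

// parseIntent stands in for your own in-process logic.
function parseIntent(text: string): { intent: string } {
  return { intent: text.includes("refund") ? "refund" : "general" };
}

const userMessage = "I want a refund for order 1881";
const step = await trace_step("parse_intent", () => parseIntent(userMessage));
```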
Wrap those blocks in trace_step until that number is near zero and you’ll have a fully instrumented flow.
Errors and cancellation
All three helpers record the thrown exception as a span event, set span status to ERROR, and rethrow unchanged. You never lose the original error or its stack.
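In practice a plain try/catch keeps working exactly as before (sketch, same hypothetical import path):

```ts
import { trace_tool } from "@svantic/sdk"; // hypothetical import path

try {
  await trace_tool({ name: "inventory.lookup" }, () => {
    throw new Error("connection refused");
  });
} catch (err) {
  // Same Error instance, same stack; the span was already marked ERROR
  // and the exception recorded as a span event before the rethrow.
  console.error(err);
}
```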
No provider? No problem.
If the host process has no OpenTelemetry TracerProvider installed (e.g. during local unit tests), every helper becomes a no-op with zero runtime cost. Leave them in; nothing needs to be conditional.
Events
OpenTelemetry events are structured signals attached to a span:
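For example, with the standard @opentelemetry/api surface (the event name and attributes are illustrative):

```ts
import { trace } from "@opentelemetry/api";

// Attach a structured event to whichever span is currently active.
const span = trace.getActiveSpan();
span?.addEvent("cache.miss", {
  "cache.key": "ticket:42",
  "cache.backend": "redis",
});
```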
Forwarding trace context to downstream HTTP services
The session context carries propagation_headers — forward them verbatim and W3C-compliant services will join the same trace:
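A sketch using fetch; beyond the documented propagation_headers field, the session object’s shape here is an assumption:

```ts
async function checkInventory(
  session: { propagation_headers: Record<string, string> },
  sku: string,
) {
  return fetch("https://inventory.internal/api/check", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      ...session.propagation_headers, // traceparent + baggage, verbatim
    },
    body: JSON.stringify({ sku }),
  });
}
```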
Using your own OpenTelemetry backend
To export traces to Datadog, Honeycomb, Grafana Tempo, or any OTLP collector, register a TracerProvider at process startup before constructing any Agent:
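A sketch using the 1.x OpenTelemetry JS packages (in 2.x, span processors are passed to the NodeTracerProvider constructor instead); the endpoint URL is a placeholder:

```ts
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const exporter = new OTLPTraceExporter({
  url: "https://otlp.example.com/v1/traces", // your collector endpoint
});

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register(); // installs the global TracerProvider the SDK will pick up
```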
Reading traces in the dashboard
- Traces tab — one row per session. Click to see the waterfall.
- Waterfall — invoke_agent at the root, call_llm and execute_tool as children, your custom spans nested underneath.
- Events — rendered as markers on the span timeline.
- Usage — token counts aggregated from gen_ai.usage.*, rolled up per trace and per model.
