Documentation Index

Fetch the complete documentation index at: https://docs.svantic.com/llms.txt

Use this file to discover all available pages before exploring further.

Deployment Topologies

Svantic supports three deployment topologies. Choose based on your scale, isolation requirements, and knowledge-sharing needs.

1. Standalone

One machine. One Svantic. One or more applications. The simplest deployment: Svantic runs as a local server, and applications on the same machine (or network) register as agents.

[Diagram: Standalone deployment — Svantic and your app on one machine with a local knowledge store.]

When to use:
  • Local development and testing
  • Single-team workloads
  • Prototyping and demos
  • Small-scale production (single server)
Setup:
# Start the Svantic server
cd agents && npm run api

# Point your application at the local server
SAVANT_AGENTS_URL=http://localhost:3000 node my-app.js
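In application code, pointing at a standalone (or sidecar) Svantic usually reduces to reading this one environment variable. A minimal sketch — the helper name and the fallback default are illustrative assumptions, not part of Svantic's API:

```typescript
// Resolve the Svantic server URL from the environment, falling back to the
// local default used in the standalone setup above.
// `resolveAgentsUrl` is a hypothetical helper, not a Svantic export.
function resolveAgentsUrl(
  env: Record<string, string | undefined> = process.env,
): string {
  const url = env.SAVANT_AGENTS_URL ?? "http://localhost:3000";
  // Strip trailing slashes so path concatenation stays predictable.
  return url.replace(/\/+$/, "");
}

console.log(resolveAgentsUrl({ SAVANT_AGENTS_URL: "http://savant:3000/" })); // http://savant:3000
console.log(resolveAgentsUrl({})); // http://localhost:3000
```

The same helper works unchanged across all three topologies, since only the value of `SAVANT_AGENTS_URL` differs between them.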
Characteristics:
  • Zero infrastructure overhead
  • Knowledge persists locally
  • All capabilities on one machine
  • No network latency between components

2. Sidecar

One pod. One Svantic sidecar. One application. Fully autonomous. Svantic runs as a sidecar container alongside your application in a Kubernetes pod, so each pod gets its own AI brain. The sidecar operates independently — it can plan, execute, and learn even when disconnected from any central infrastructure.

[Diagram: Sidecar deployment — Svantic sidecar alongside your service in a Kubernetes pod with local knowledge.]

When to use:
  • Distributed worker fleets, each with its own AI
  • Microservices that need autonomous AI capabilities
  • Edge deployments where connectivity is unreliable
  • Teams that want isolation — one service, one brain, no shared state
Setup (Docker Compose):
services:
  savant:
    image: savant:latest
    ports: ["3000:3000"]
    environment:
      - LLM_API_KEY=${LLM_API_KEY}

  my-service:
    image: my-service:latest
    ports: ["4100:4100"]
    environment:
      - SAVANT_AGENTS_URL=http://savant:3000
    depends_on:
      - savant
Setup (Kubernetes):
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: savant
      image: savant:latest
      ports: [{ containerPort: 3000 }]
      env:
        - name: LLM_API_KEY
          valueFrom:
            secretKeyRef: { name: savant-secrets, key: api-key }

    - name: my-service
      image: my-service:latest
      ports: [{ containerPort: 4100 }]
      env:
        - name: SAVANT_AGENTS_URL
          value: "http://localhost:3000"
Characteristics:
  • Full autonomy — works without any central coordination
  • Knowledge is local to each pod
  • Horizontal scaling: more pods = more capacity
  • Each pod is self-contained and independently recoverable

3. Central + Sidecar (Hub Mesh)

One central brain. Many sidecar arms. Shared knowledge, distributed execution. A central Svantic instance acts as the hub — it holds the shared knowledge store and the agent registry, and coordinates across all sidecars. Each sidecar registers with central, inherits shared knowledge, and operates autonomously for local tasks.

[Diagram: Hub Mesh deployment — central Svantic hub with shared knowledge, connected to Pod A, B, C sidecars via A2A.]

When to use:
  • Enterprise deployments with multiple teams and services
  • Fleets where cross-instance learning is valuable
  • When you want centralized monitoring and agent management
  • When different services need to compose capabilities across pods
Setup:
Central Svantic:
SAVANT_BASE_URL=http://central-savant:3000 \
SAVANT_PORT=3000 \
npm run api
Sidecar Svantic (each pod):
SAVANT_BASE_URL=http://localhost:3000 \
SAVANT_PORT=3000 \
SAVANT_CENTRAL_URL=http://central-savant:3000 \
npm run api
Registration hierarchy:
  1. Your application registers with its local sidecar Svantic
  2. The sidecar registers itself with central Svantic
  3. Central sees all agents across all pods
  4. Central can route tasks to any pod’s capabilities
  5. Knowledge learned by any sidecar can be promoted to the shared store
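The hierarchy above can be modeled with plain data structures. This is an in-memory sketch of the registration flow, not Svantic's actual registry — every class and method name here is hypothetical:

```typescript
// Hypothetical in-memory model of the hub-mesh registration hierarchy.
interface Agent { name: string; pod: string }

class Central {
  private sidecars: Sidecar[] = [];
  register(s: Sidecar) { this.sidecars.push(s); }
  // Step 3: central sees all agents across all pods.
  allAgents(): Agent[] { return this.sidecars.flatMap((s) => s.agents); }
  // Step 4: route a task to whichever pod advertises the capability.
  route(capability: string): string | undefined {
    return this.allAgents().find((a) => a.name === capability)?.pod;
  }
}

class Sidecar {
  agents: Agent[] = [];
  constructor(public pod: string, central: Central) {
    central.register(this); // step 2: the sidecar registers with central
  }
  register(name: string) {
    this.agents.push({ name, pod: this.pod }); // step 1: the app registers locally
  }
}

const central = new Central();
const podA = new Sidecar("pod-a", central);
const podB = new Sidecar("pod-b", central);
podA.register("ocr");
podB.register("translate");
console.log(central.route("translate")); // pod-b
```

Note that routing knowledge lives only in central, while each sidecar keeps its own agent list — which is why sidecars stay functional for local tasks even without the hub.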
Characteristics:
  • Shared knowledge store — learning from Pod A benefits Pod C
  • Centralized agent discovery — central knows about all capabilities
  • Sidecars remain autonomous — if central goes down, sidecars continue operating
  • Gradual rollout — start with sidecars, add central later
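The autonomy guarantee — sidecars keep operating when central goes down — amounts to a fallback at routing time. A sketch of that pattern, with all names illustrative rather than Svantic APIs:

```typescript
// Prefer central routing, but fall back to local execution when the hub
// is unreachable. `withLocalFallback` is a hypothetical helper.
type Router = (task: string) => Promise<string>;

function withLocalFallback(central: Router, local: Router): Router {
  return async (task) => {
    try {
      return await central(task);
    } catch {
      // Central is down — the sidecar continues on its local knowledge.
      return local(task);
    }
  };
}

// Simulate an unreachable hub: every central call rejects.
const route = withLocalFallback(
  async () => { throw new Error("central unreachable"); },
  async (task) => `handled locally: ${task}`,
);
route("summarize").then(console.log); // handled locally: summarize
```

This also matches the gradual-rollout path: start with the local router alone, and wire in the central one once a hub exists.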