## Documentation Index

Fetch the complete documentation index at: https://docs.svantic.com/llms.txt
Use this file to discover all available pages before exploring further.
## Deployment Models

Svantic supports three deployment topologies, each designed for different organizational needs — from a developer’s laptop to a globally distributed enterprise fleet.

### 1. Standalone
One machine. One Svantic. One or more applications. The simplest deployment: Svantic runs as a local server, and applications on the same machine (or network) register as agents.

Best for:

- Local development and testing
- Single-team workloads
- Prototyping and demos
- Small-scale production (single server)

Characteristics:

- Zero infrastructure overhead
- Knowledge persists locally
- All capabilities on one machine
- No network latency between components
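
As an illustration, the standalone model maps naturally onto a single Docker Compose file: one Svantic server, one or more applications on the same host. Everything below — image names, port, volume path, and the `SVANTIC_URL` variable — is assumed for illustration, not taken from the Svantic docs:

```yaml
# Hypothetical docker-compose.yml sketch of a standalone deployment.
# Image names, port, and environment variable are illustrative assumptions.
services:
  svantic:
    image: svantic/svantic:latest     # assumed image name
    ports:
      - "8080:8080"                   # assumed API port
    volumes:
      - ./svantic-data:/data          # knowledge persists locally on this host

  my-app:
    image: my-app:latest
    environment:
      # Assumed variable: where the application finds its local Svantic server
      SVANTIC_URL: http://svantic:8080
    depends_on:
      - svantic
```

Because everything runs on one host, there is no network latency between the application and Svantic beyond the local loopback or bridge network.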
### 2. Sidecar
One pod. One Svantic sidecar. One application. Fully autonomous. Svantic runs as a sidecar container alongside your application in a Kubernetes pod. Each pod gets its own AI brain. The sidecar operates independently — it can plan, execute, and learn even when disconnected from any central infrastructure.

Best for:

- Distributed scraping fleets (e.g., 50 scraper workers, each with its own AI)
- Microservices that need autonomous AI capabilities
- Edge deployments where connectivity is unreliable
- Teams that want isolation — one service, one brain, no shared state

Characteristics:

- Full autonomy — works without any central coordination
- Knowledge is local to each pod
- No cross-pod knowledge sharing (unless Central is added)
- Horizontal scaling: more pods = more capacity
- Each pod is self-contained and independently recoverable
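
The sidecar pattern above can be sketched as a Kubernetes Deployment: each replica pod carries both the application container and a Svantic container. Image names, the port, the `SVANTIC_URL` variable, and the volume layout are assumptions for illustration, not an official manifest:

```yaml
# Hypothetical Deployment: one app container + one Svantic sidecar per pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scraper-worker
spec:
  replicas: 50                # horizontal scaling: more pods = more capacity
  selector:
    matchLabels:
      app: scraper-worker
  template:
    metadata:
      labels:
        app: scraper-worker
    spec:
      containers:
        - name: scraper
          image: my-scraper:latest
          env:
            # Assumed variable: the app registers with its local sidecar
            - name: SVANTIC_URL
              value: http://localhost:8080
        - name: svantic
          image: svantic/svantic:latest   # assumed image name
          ports:
            - containerPort: 8080         # assumed API port
          volumeMounts:
            - name: svantic-data
              mountPath: /data            # knowledge is local to each pod
      volumes:
        - name: svantic-data
          emptyDir: {}    # self-contained; use a PVC if knowledge must survive the pod
```

Note the `emptyDir` volume: it keeps each pod independently recoverable, at the cost of losing locally learned knowledge when the pod is deleted.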
### 3. Central + Sidecar (Hub Mesh)
One central brain. Many sidecar arms. Shared knowledge. Distributed execution. A central Svantic instance acts as the hub — it holds the shared knowledge store, the agent registry, and coordinates across all sidecars. Each sidecar registers with central, inherits shared knowledge, and operates autonomously for local tasks.

Best for:

- Enterprise deployments with multiple teams and services
- Fleets where cross-instance learning is valuable
- When you want centralized monitoring and agent management
- When different services need to compose capabilities across pods
The `SAVANT_CENTRAL_URL` environment variable tells the sidecar to register with central on startup. Knowledge flows upward.
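
Per the text, hub-mesh mode only requires setting SAVANT_CENTRAL_URL on the sidecar at startup. A sketch of the relevant container fragment — the image name, port, and central service address are assumptions for illustration:

```yaml
# Hypothetical sidecar container spec fragment in hub-mesh mode.
- name: svantic
  image: svantic/svantic:latest      # assumed image name
  ports:
    - containerPort: 8080            # assumed API port
  env:
    # Documented variable: tells the sidecar to register with central on startup
    - name: SAVANT_CENTRAL_URL
      value: http://svantic-central.svantic-system.svc:8080   # assumed service address
```

Because this is the only change relative to a plain sidecar, removing the variable degrades the pod back to fully autonomous sidecar operation.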
Registration hierarchy:
- Your application registers with its local sidecar Svantic
- The sidecar registers itself with central Svantic
- Central sees all agents across all pods
- Central can route tasks to any pod’s capabilities
- Knowledge learned by any sidecar can be promoted to the shared store

Key properties:

- Shared knowledge store — learning from Pod A benefits Pod C
- Centralized agent discovery — central knows about all capabilities
- Sidecars remain autonomous — if central goes down, sidecars continue operating
- Gradual rollout — start with sidecars, add central later
- Best for: scraping fleets, microservice architectures, multi-team organizations
## Choosing a Topology
| Factor | Standalone | Sidecar | Central + Sidecar |
|---|---|---|---|
| Complexity | Minimal | Low | Medium |
| Knowledge sharing | Single instance | Per-pod isolated | Shared across fleet |
| Autonomy | Full | Full | Full (degraded if central down) |
| Scaling | Vertical | Horizontal | Horizontal + coordinated |
| Cross-service tasks | Local only | Pod-local only | Fleet-wide |
| Best for | Dev, small prod | Independent workers | Enterprise, fleet ops |
## Migration Path
The topologies are additive. Start simple and grow:

- Start standalone — Develop and test locally
- Move to sidecar — Containerize and deploy alongside your service
- Add central — When you need cross-pod knowledge sharing or fleet management, deploy a central instance and set `SAVANT_CENTRAL_URL` on your sidecars
