# Generate Tools from Natural Language

You want an agent, but you do not have an API spec or existing code to generate from. Describe what the tools should do in plain English; Forge uses an LLM to propose a set of tools, lets you review and edit the proposal, and then generates the Tool Spec and TypeScript capabilities.

This walkthrough builds a Datadog monitoring agent from scratch.

## Prerequisites

* The Svantic CLI installed (`npm install -g @svantic/cli`)
* A Google Gemini API key (set `GOOGLE_API_KEY` in your environment) -- or a custom LLM provider (see "Custom LLM Providers" below)

## Step 1: Describe What You Want

```bash theme={null}
svantic forge tool --prompt "monitor alerts and create incidents in Datadog" --out ./tools/
```

Forge calls the LLM, which proposes a set of tools:

```
Proposed tools for "monitor alerts and create incidents in Datadog":

  1. list_alerts          - List active Datadog alerts, optionally filtered by severity
  2. get_alert_details    - Get full details for a specific alert by ID
  3. acknowledge_alert    - Acknowledge an alert to stop re-notification
  4. create_incident      - Create a new incident from an alert or manual description
  5. update_incident      - Update incident status, severity, or commander
  6. list_incidents       - List open incidents with optional status filter
  7. mute_monitor         - Temporarily mute a Datadog monitor

Accept these tools? [Y/n/edit]
```

Type `Y` to accept, `n` to cancel, or `edit` to modify the list interactively.

## Step 2: Add Context with Documentation

Give the LLM reference documentation so it generates accurate parameter schemas and endpoint paths:

```bash theme={null}
svantic forge tool \
  --prompt "monitor alerts and create incidents in Datadog" \
  --docs https://docs.datadoghq.com/api/latest/ \
  --out ./tools/
```

The `--docs` flag accepts URLs or local file paths. Forge fetches the content and includes it as context for the LLM. This dramatically improves the accuracy of generated parameter names, types, and API paths.

You can pass multiple `--docs` flags:

```bash theme={null}
svantic forge tool \
  --prompt "manage monitors and incidents" \
  --docs https://docs.datadoghq.com/api/latest/monitors/ \
  --docs https://docs.datadoghq.com/api/latest/incidents/ \
  --out ./tools/
```

## Step 3: The Propose, Review, Generate Flow

The generation process has three distinct phases:

1. **Propose** -- The LLM reads your prompt (and optional docs) and proposes a list of tools with names, descriptions, and parameter outlines.
2. **Review** -- You review the proposed list. Edit tool names, remove tools you don't need, or add missing ones.
3. **Generate** -- Forge takes the finalized list, generates the Tool Spec YAML, then materializes TypeScript capabilities.

After generation, you get the same output structure as the other paths:

| File                      | Contents                                 |
| ------------------------- | ---------------------------------------- |
| `datadog.tool-spec.yaml`  | The intermediate Tool Spec               |
| `datadog.capabilities.ts` | Runnable TypeScript tool implementations |

## Step 4: Using Source Code as Context

If you have an existing service that interacts with the target API, point Forge at it for better results:

```bash theme={null}
svantic forge tool \
  --prompt "generate tools for this monitoring service" \
  --docs src/monitoring/ \
  --out ./tools/
```

When `--docs` points to a local directory, Forge reads all `.ts`, `.js`, and `.md` files in it and includes them as LLM context. The LLM uses your existing code patterns, variable names, and error handling conventions to produce tools that match your codebase style.
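
As a rough sketch of this directory scan (the helper below is illustrative, not Forge's actual implementation), a recursive walk that keeps only `.ts`, `.js`, and `.md` files might look like:

```typescript theme={null}
import { readdirSync, statSync } from 'node:fs';
import { join, extname } from 'node:path';

// Extensions included when --docs points at a local directory.
const DOC_EXTENSIONS = new Set(['.ts', '.js', '.md']);

function isDocFile(path: string): boolean {
  return DOC_EXTENSIONS.has(extname(path));
}

// Recursively collect documentation files under a directory.
function collectDocFiles(dir: string): string[] {
  const results: string[] = [];
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) {
      results.push(...collectDocFiles(full));
    } else if (isDocFile(full)) {
      results.push(full);
    }
  }
  return results;
}
```

Anything the walk collects is concatenated into the LLM's context, so keep the directory focused: a folder with one service client and its README gives better signal than your whole repository.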

## Programmatic Usage

### Propose then generate

```typescript theme={null}
import { propose_tools, generate_from_prompt } from '@svantic/sdk/forge';

const proposal = await propose_tools({
  prompt: 'monitor alerts and create incidents in Datadog',
  docs: ['https://docs.datadoghq.com/api/latest/'],
});

console.log(proposal.tools);
// [
//   { name: 'list_alerts', description: 'List active alerts...', params: [...] },
//   { name: 'create_incident', description: 'Create a new incident...', params: [...] },
//   ...
// ]

// Modify the proposal if needed
proposal.tools = proposal.tools.filter(t => t.name !== 'mute_monitor');

const result = await generate_from_prompt({
  proposal,
  output_dir: './tools/',
});

console.log(result.tool_spec);
console.log(result.capabilities);
```

### One-shot generation (skip review)

```typescript theme={null}
import { generate_from_prompt } from '@svantic/sdk/forge';

const result = await generate_from_prompt({
  prompt: 'CRUD operations for a user management API',
  output_dir: './tools/',
  auto_accept: true,
});
```

## Setting Up the LLM

### Google Gemini (default)

Set the `GOOGLE_API_KEY` environment variable:

```bash theme={null}
export GOOGLE_API_KEY=your-gemini-api-key
```

Forge uses `gemini-2.0-flash` by default. Override the model with `--model`:

```bash theme={null}
svantic forge tool --prompt "..." --model gemini-2.5-pro --out ./tools/
```

### Custom LLM Providers

For providers other than Gemini, implement the `LlmCallFn` interface and pass it to the programmatic API:

```typescript theme={null}
import { propose_tools, generate_from_prompt } from '@svantic/sdk/forge';
import type { LlmCallFn } from '@svantic/sdk/forge';

const my_llm: LlmCallFn = async (messages, options) => {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages,
      temperature: options?.temperature ?? 0.2,
      max_tokens: options?.max_tokens,
    }),
  });
  if (!response.ok) {
    throw new Error(`OpenAI request failed: ${response.status} ${response.statusText}`);
  }
  const data = await response.json() as any;
  return data.choices[0].message.content;
};

const proposal = await propose_tools({
  prompt: 'manage incidents in PagerDuty',
  llm: my_llm,
});

const result = await generate_from_prompt({
  proposal,
  output_dir: './tools/',
  llm: my_llm,
});
```

The `LlmCallFn` signature:

```typescript theme={null}
type LlmCallFn = (
  messages: Array<{ role: 'system' | 'user' | 'assistant'; content: string }>,
  options?: { temperature?: number; max_tokens?: number },
) => Promise<string>;
```
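
For unit tests or offline runs, a stub that satisfies this signature can return canned replies without any network call. The sketch below reproduces the type inline so it stands alone; in real code you would import `LlmCallFn` from `@svantic/sdk/forge` as shown above:

```typescript theme={null}
// Inline copy of the documented signature, so this sketch is self-contained.
type LlmCallFn = (
  messages: Array<{ role: 'system' | 'user' | 'assistant'; content: string }>,
  options?: { temperature?: number; max_tokens?: number },
) => Promise<string>;

// A stub provider that echoes the last message back, useful for
// exercising a Forge pipeline in tests without spending tokens.
const stub_llm: LlmCallFn = async (messages) => {
  const last = messages[messages.length - 1];
  return `stub reply to: ${last.content}`;
};
```

Pass `stub_llm` anywhere an `llm` option is accepted to keep tests deterministic and offline.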

## Tips for Better Results

* **Be specific.** "Create, read, update, and delete incidents in PagerDuty with severity and status fields" produces better tools than "PagerDuty stuff".
* **Provide docs.** The `--docs` flag is the single biggest lever for accuracy. Even a single API reference page helps.
* **Iterate.** Use `edit` at the review step to rename tools, adjust descriptions, or add missing parameters before generation.
* **Combine with other paths.** Generate a rough set from natural language, then refine by editing the Tool Spec YAML directly and regenerating with `svantic forge tool --spec`.
