diff --git a/_vale/config/vocabularies/Docker/accept.txt b/_vale/config/vocabularies/Docker/accept.txt index 0b4cbb806fda..88eedf3531e4 100644 --- a/_vale/config/vocabularies/Docker/accept.txt +++ b/_vale/config/vocabularies/Docker/accept.txt @@ -294,4 +294,4 @@ Zsh [Ww]alkthrough [Tt]oolsets? [Rr]erank(ing|ed)? - +[Ee]vals? diff --git a/content/manuals/ai/cagent/best-practices.md b/content/manuals/ai/cagent/best-practices.md index 24651673fb52..71ea47653445 100644 --- a/content/manuals/ai/cagent/best-practices.md +++ b/content/manuals/ai/cagent/best-practices.md @@ -2,7 +2,7 @@ title: Best practices description: Patterns and techniques for building effective cagent agents keywords: [cagent, best practices, patterns, agent design, optimization] -weight: 20 +weight: 40 --- Patterns you learn from building and running cagent agents. These aren't diff --git a/content/manuals/ai/cagent/evals.md b/content/manuals/ai/cagent/evals.md new file mode 100644 index 000000000000..dabc28cbae6d --- /dev/null +++ b/content/manuals/ai/cagent/evals.md @@ -0,0 +1,163 @@ +--- +title: Evals +description: Test your agents with saved conversations +keywords: [cagent, evaluations, testing, evals] +weight: 80 +--- + +Evaluations (evals) help you track how your agent's behavior changes over time. +When you save a conversation as an eval, you can replay it later to see if the +agent responds differently. Evals measure consistency, not correctness - they +tell you if behavior changed, not whether it's right or wrong. + +## What are evals + +An eval is a saved conversation you can replay. When you run evals, cagent +replays the user messages and compares the new responses against the original +saved conversation. High scores mean the agent behaved similarly; low scores +mean behavior changed. + +What you do with that information depends on why you saved the conversation. +You might save successful conversations to catch regressions, or save failure +cases to document known issues and track whether they improve. + +## Common workflows + +How you use evals depends on what you're trying to accomplish: + +Regression testing: Save conversations where your agent performs well. When you +make changes later (upgrade models, update prompts, refactor code), run the +evals. High scores mean behavior stayed consistent, which is usually what you +want. Low scores mean something changed - examine the new behavior to see if +it's still correct. + +Tracking improvements: Save conversations where your agent struggles or fails. +As you make improvements, run these evals to see how behavior evolves. Low +scores indicate the agent now behaves differently, which might mean you fixed +the issue. You'll need to manually verify the new behavior is actually better. + +Documenting edge cases: Save interesting or unusual conversations regardless of +quality. Use them to understand how your agent handles edge cases and whether +that behavior changes over time. + +Evals measure whether behavior changed. You determine if that change is good or +bad. + +## Creating an eval + +Save a conversation from an interactive session: + +```console +$ cagent run ./agent.yaml +``` + +Have a conversation with your agent, then save it as an eval: + +```console +> /eval test-case-name +Eval saved to evals/test-case-name.json +``` + +The conversation is saved to the `evals/` directory in your current working +directory. You can organize eval files in subdirectories if needed. 
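For example, you might group saved conversations by purpose. The layout and file names below are only an illustration, not a required structure:

```console
$ tree evals
evals
├── regressions
│   ├── checkout-flow.json
│   └── test-case-name.json
└── known-issues
    └── flaky-tool-call.json
```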
+ +## Running evals + +Run all evals in the default directory: + +```console +$ cagent eval ./agent.yaml +``` + +Use a custom eval directory: + +```console +$ cagent eval ./agent.yaml ./my-evals +``` + +Run evals against an agent from a registry: + +```console +$ cagent eval agentcatalog/myagent +``` + +Example output: + +```console +$ cagent eval ./agent.yaml +--- 0 +First message: tell me something interesting about kil +Eval file: c7e556c5-dae5-4898-a38c-73cc8e0e6abe +Tool trajectory score: 1.000000 +Rouge-1 score: 0.447368 +Cost: 0.00 +Output tokens: 177 +``` + +## Understanding results + +For each eval, cagent shows: + +- **First message** - The initial user message from the saved conversation +- **Eval file** - The UUID of the eval file being run +- **Tool trajectory score** - How similarly the agent used tools (0-1 scale, + higher is better) +- **[ROUGE-1](https://en.wikipedia.org/wiki/ROUGE_(metric)) score** - Text + similarity between responses (0-1 scale, higher is better) +- **Cost** - The cost for this eval run +- **Output tokens** - Number of tokens generated + +Higher scores mean the agent behaved more similarly to the original recorded +conversation. A score of 1.0 means identical behavior. + +### What the scores mean + +**Tool trajectory score** measures whether the agent called the same tools in +the same order as the original conversation. Lower scores might indicate the +agent found a different approach to solve the problem, which isn't necessarily +wrong but worth investigating. + +**Rouge-1 score** measures how similar the response text is to the original. +This is a heuristic measure - different wording might still be correct, so use +this as a signal rather than absolute truth. + +### Interpreting your results + +Scores close to 1.0 mean your changes maintained consistent behavior - the +agent is using the same approach and producing similar responses. This is +generally good; your changes didn't break existing functionality. + +Lower scores mean behavior changed compared to the saved conversation. This +could be a regression where the agent now performs worse, or it could be an +improvement where the agent found a better approach. + +When scores drop, examine the actual behavior to determine if it's better or +worse. The eval files are stored as JSON in your evals directory - open the +file to see the original conversation. Then test your modified agent with the +same input to compare responses. If the new response is better, save a new +conversation to replace the eval. If it's worse, you found a regression. + +The scores guide you to what changed. Your judgment determines if the change is +good or bad. + +## When to use evals + +Evals help you track behavior changes over time. They're useful for catching +regressions when you upgrade models or dependencies, documenting known failure +cases you want to fix, and understanding how edge cases evolve as you iterate. + +Evals aren't appropriate for determining which agent configuration works best - +they measure similarity to a saved conversation, not correctness. Use manual +testing to evaluate different configurations and decide which works better. + +Save conversations worth tracking. Build a collection of important workflows, +interesting edge cases, and known issues. Run your evals when making changes to +see what shifted. 
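Putting it together, a minimal regression-testing loop uses only the commands shown above. The eval name here is a placeholder:

```console
$ cagent run ./agent.yaml
> /eval checkout-flow
Eval saved to evals/checkout-flow.json

$ cagent eval ./agent.yaml
```

Record the conversation while the agent behaves well, then rerun the evals after you change models, prompts, or tools. Scores near 1.0 mean the change kept behavior consistent; lower scores tell you which conversations to inspect.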
+ +## What's next + +- Check the [CLI reference](reference/cli.md#eval) for all `cagent eval` + options +- Learn [best practices](best-practices.md) for building effective agents +- Review [example configurations](https://github.com/docker/cagent/tree/main/examples) + for different agent types diff --git a/content/manuals/ai/cagent/integrations/_index.md b/content/manuals/ai/cagent/integrations/_index.md index d3f7f7f2700a..d6a23870b668 100644 --- a/content/manuals/ai/cagent/integrations/_index.md +++ b/content/manuals/ai/cagent/integrations/_index.md @@ -1,6 +1,53 @@ --- -build: - render: never title: Integrations -weight: 50 +description: Connect cagent agents to editors, MCP clients, and other agents +keywords: [cagent, integration, acp, mcp, a2a, editor, protocol] +weight: 60 --- + +cagent agents can integrate with different environments depending on how you +want to use them. Each integration type serves a specific purpose. + +## Integration types + +### ACP - Editor integration + +Run cagent agents directly in your editor (Neovim, Zed). The agent sees your +editor's file context and can read and modify files through the editor's +interface. Use ACP when you want an AI coding assistant embedded in your +editor. + +See [ACP integration](./acp.md) for setup instructions. + +### MCP - Tool integration + +Expose cagent agents as tools in MCP clients like Claude Desktop or Claude +Code. Your agents appear in the client's tool list, and the client can call +them when needed. Use MCP when you want Claude Desktop (or another MCP client) +to have access to your specialized agents. + +See [MCP integration](./mcp.md) for setup instructions. + +### A2A - Agent-to-agent communication + +Run cagent agents as HTTP servers that other agents or systems can call using +the Agent-to-Agent protocol. Your agent becomes a service that other systems +can discover and invoke over the network. Use A2A when you want to build +multi-agent systems or expose your agent as an HTTP service. + +See [A2A integration](./a2a.md) for setup instructions. + +## Choosing the right integration + +| Feature | ACP | MCP | A2A | +| ------------- | ------------------ | ------------------ | -------------------- | +| Use case | Editor integration | Agents as tools | Agent-to-agent calls | +| Transport | stdio | stdio/SSE | HTTP | +| Discovery | Editor plugin | Server manifest | Agent card | +| Best for | Code editing | Tool integration | Multi-agent systems | +| Communication | Editor calls agent | Client calls tools | Between agents | + +Choose ACP if you want your agent embedded in your editor while you code. +Choose MCP if you want Claude Desktop (or another MCP client) to be able to +call your specialized agents as tools. Choose A2A if you're building +multi-agent systems where agents need to call each other over HTTP. diff --git a/content/manuals/ai/cagent/integrations/a2a.md b/content/manuals/ai/cagent/integrations/a2a.md new file mode 100644 index 000000000000..1e51b9e80519 --- /dev/null +++ b/content/manuals/ai/cagent/integrations/a2a.md @@ -0,0 +1,205 @@ +--- +title: A2A mode +linkTitle: A2A +description: Expose cagent agents via the Agent-to-Agent protocol +keywords: [cagent, a2a, agent-to-agent, multi-agent, protocol] +weight: 40 +--- + +A2A mode runs your cagent agent as an HTTP server that other systems can call +using the Agent-to-Agent protocol. This lets you expose your agent as a service +that other agents or applications can discover and invoke over the network. 
+ +Use A2A when you want to make your agent callable by other systems over HTTP. +For editor integration, see [ACP integration](./acp.md). For using agents as +tools in MCP clients, see [MCP integration](./mcp.md). + +## Prerequisites + +Before starting an A2A server, you need: + +- cagent installed - See the [installation guide](../_index.md#installation) +- Agent configuration - A YAML file defining your agent. See the + [tutorial](../tutorial.md) +- API keys configured - If using cloud model providers (see [Model + providers](../model-providers.md)) + +## Starting an A2A server + +Basic usage: + +```console +$ cagent a2a ./agent.yaml +``` + +Your agent is now accessible via HTTP. Other A2A systems can discover your +agent's capabilities and call it. + +Custom port: + +```console +$ cagent a2a ./agent.yaml --port 8080 +``` + +Specific agent in a team: + +```console +$ cagent a2a ./agent.yaml --agent engineer +``` + +From OCI registry: + +```console +$ cagent a2a agentcatalog/pirate --port 9000 +``` + +## HTTP endpoints + +When you start an A2A server, it exposes two HTTP endpoints: + +### Agent card: `/.well-known/agent-card` + +The agent card describes your agent's capabilities: + +```console +$ curl http://localhost:8080/.well-known/agent-card +``` + +```json +{ + "name": "agent", + "description": "A helpful coding assistant", + "skills": [ + { + "id": "agent_root", + "name": "root", + "description": "A helpful coding assistant", + "tags": ["llm", "cagent"] + } + ], + "preferredTransport": "jsonrpc", + "url": "http://localhost:8080/invoke", + "capabilities": { + "streaming": true + }, + "version": "0.1.0" +} +``` + +### Invoke endpoint: `/invoke` + +Call your agent by sending a JSON-RPC request: + +```console +$ curl -X POST http://localhost:8080/invoke \ + -H "Content-Type: application/json" \ + -d '{ + "jsonrpc": "2.0", + "id": "req-1", + "method": "message/send", + "params": { + "message": { + "role": "user", + "parts": [ + { + "kind": "text", + "text": "What is 2+2?" + } + ] + } + } + }' +``` + +The response includes the agent's reply: + +```json +{ + "jsonrpc": "2.0", + "id": "req-1", + "result": { + "artifacts": [ + { + "parts": [ + { + "kind": "text", + "text": "2+2 equals 4." + } + ] + } + ] + } +} +``` + +## Example: Multi-agent workflow + +Here's a concrete scenario where A2A is useful. You have two agents: + +1. A general-purpose agent that interacts with users +2. A specialized code review agent with access to your codebase + +Run the specialist as an A2A server: + +```console +$ cagent a2a ./code-reviewer.yaml --port 8080 +Listening on 127.0.0.1:8080 +``` + +Configure your main agent to call it: + +```yaml +agents: + root: + model: anthropic/claude-sonnet-4-5 + instruction: You are a helpful assistant + toolsets: + - type: a2a + url: http://localhost:8080 + name: code-reviewer +``` + +Now when users ask the main agent about code quality, it can delegate to the +specialist. The main agent sees `code-reviewer` as a tool it can call, and the +specialist has access to the codebase tools it needs. + +## Calling other A2A agents + +Your cagent agents can call remote A2A agents as tools. Configure the A2A +toolset with the remote agent's URL: + +```yaml +agents: + root: + toolsets: + - type: a2a + url: http://localhost:8080 + name: specialist +``` + +The `url` specifies where the remote agent is running, and `name` is an +optional identifier for the tool. Your agent can now delegate tasks to the +remote specialist agent. 
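Before wiring the remote agent into your configuration, you can confirm it's reachable by fetching its agent card, as shown earlier:

```console
$ curl http://localhost:8080/.well-known/agent-card
```

If the card comes back, the A2A server is running and your agent can call it.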
+ +If the remote agent requires authentication or custom headers: + +```yaml +agents: + root: + toolsets: + - type: a2a + url: http://localhost:8080 + name: specialist + remote: + headers: + Authorization: Bearer token123 + X-Custom-Header: value +``` + +## What's next + +- Review the [CLI reference](../reference/cli.md#a2a) for all `cagent a2a` + options +- Learn about [MCP mode](./mcp.md) to expose agents as tools in MCP clients +- Learn about [ACP mode](./acp.md) for editor integration +- Share your agents with [OCI registries](../sharing-agents.md) diff --git a/content/manuals/ai/cagent/integrations/mcp.md b/content/manuals/ai/cagent/integrations/mcp.md index 1745fb3c82af..1b51bb09349e 100644 --- a/content/manuals/ai/cagent/integrations/mcp.md +++ b/content/manuals/ai/cagent/integrations/mcp.md @@ -37,6 +37,30 @@ config with `root`, `designer`, and `engineer` agents gives Claude three tools to choose from. Claude might call the engineer directly or use the root coordinator—depends on your agent descriptions and what you ask for. +## MCP Gateway + +Docker provides an [MCP Gateway](/ai/mcp-catalog-and-toolkit/mcp-gateway/) that +gives cagent agents access to a catalog of pre-configured MCP servers. Instead +of configuring individual MCP servers, agents can use the gateway to access +tools like web search, database queries, and more. + +Configure MCP toolset with gateway reference: + +```yaml +agents: + root: + toolsets: + - type: mcp + ref: docker:duckduckgo # Uses Docker MCP Gateway +``` + +The `docker:` prefix tells cagent to use the MCP Gateway for this server. See +the [MCP Gateway documentation](/ai/mcp-catalog-and-toolkit/mcp-gateway/) for +available servers and configuration options. + +You can also use the [MCP Toolkit](/ai/mcp-catalog-and-toolkit/) to explore and +manage MCP servers interactively. + ## Prerequisites Before configuring MCP integration, you need: @@ -264,25 +288,12 @@ Your team keeps agents in an OCI registry. Everyone adds agent, they get the new version on their next restart. No YAML files to pass around. -## ACP vs MCP integration - -Both protocols let you integrate cagent agents with other tools, but they're -designed for different use cases: - -| Feature | ACP Integration | MCP Integration | -| ----------- | ---------------------------- | ------------------------------ | -| Use case | Embedded agents in editors | Agents as tools in MCP clients | -| Filesystem | Delegated to client (editor) | Direct cagent access | -| Working dir | Client workspace | Configurable per agent | -| Best for | Code editing workflows | Using agents as callable tools | - -Use ACP when you want agents embedded in your editor. Use MCP when you want to -expose agents as tools to MCP clients like Claude Desktop or Claude Code. - -For ACP integration setup, see [ACP integration](./acp.md). 
- ## What's next +- Use the [MCP Gateway](/ai/mcp-catalog-and-toolkit/mcp-gateway/) to give your + agents access to pre-configured MCP servers +- Explore MCP servers interactively with the [MCP + Toolkit](/ai/mcp-catalog-and-toolkit/) - Review the [configuration reference](../reference/config.md) for advanced agent setup - Explore the [toolsets reference](../reference/toolsets.md) to learn what tools diff --git a/content/manuals/ai/cagent/local-models.md b/content/manuals/ai/cagent/local-models.md new file mode 100644 index 000000000000..f63d623d0643 --- /dev/null +++ b/content/manuals/ai/cagent/local-models.md @@ -0,0 +1,215 @@ +--- +title: Local models with Docker Model Runner +linkTitle: Local models +description: Run AI models locally using Docker Model Runner - no API keys required +keywords: [cagent, docker model runner, dmr, local models, embeddings, offline] +weight: 20 +--- + +Docker Model Runner lets you run AI models locally on your machine. No API +keys, no recurring costs, and your data stays private. + +## Why use local models + +Docker Model Runner lets you run models locally without API keys or recurring +costs. Your data stays on your machine, and you can work offline once models +are downloaded. This is an alternative to [cloud model +providers](model-providers.md). + +## Prerequisites + +You need Docker Model Runner installed and running: + +- Docker Desktop (macOS/Windows) - Enable Docker Model Runner in + **Settings > AI > Enable Docker Model Runner**. See [Get started with + DMR](/manuals/ai/model-runner/get-started.md#enable-docker-model-runner) for + detailed instructions. +- Docker Engine (Linux) - Install with `sudo apt-get install +docker-model-plugin` or `sudo dnf install docker-model-plugin`. See [Get + started with DMR](/manuals/ai/model-runner/get-started.md#docker-engine). + +Verify Docker Model Runner is available: + +```console +$ docker model version +``` + +If the command returns version information, you're ready to use local models. + +## Using models with DMR + +Docker Model Runner can run any compatible model. Models can come from: + +- Docker Hub repositories (`docker.io/namespace/model-name`) +- Your own OCI artifacts packaged and pushed to any registry +- HuggingFace models directly (`hf.co/org/model-name`) +- The Docker Model catalog in Docker Desktop + +To see models available through the Docker catalog, run: + +```console +$ docker model list --available +``` + +To use a model, reference it in your configuration. DMR automatically pulls +models on first use if they're not already local. + +## Configuration + +Configure your agent to use Docker Model Runner with the `dmr` provider: + +```yaml +agents: + root: + model: dmr/ai/qwen3 + instruction: You are a helpful assistant + toolsets: + - type: filesystem +``` + +When you first run your agent, cagent prompts you to pull the model if it's +not already available locally: + +```console +$ cagent run agent.yaml +Model not found locally. Do you want to pull it now? ([y]es/[n]o) +``` + +## How it works + +When you configure an agent to use DMR, cagent automatically connects to your +local Docker Model Runner and routes inference requests to it. If a model isn't +available locally, cagent prompts you to pull it on first use. No API keys or +authentication are required. 
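If you'd rather download a model ahead of time instead of waiting for the first-run prompt, you can pull it with the Docker Model Runner CLI. The model name here matches the configuration example above:

```console
$ docker model pull ai/qwen3
```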
+ +## Advanced configuration + +For more control over model behavior, define a model configuration: + +```yaml +models: + local-qwen: + provider: dmr + model: ai/qwen3:14B + temperature: 0.7 + max_tokens: 8192 + +agents: + root: + model: local-qwen + instruction: You are a helpful coding assistant +``` + +### Faster inference with speculative decoding + +Speed up model responses using speculative decoding with a smaller draft model: + +```yaml +models: + fast-qwen: + provider: dmr + model: ai/qwen3:14B + provider_opts: + speculative_draft_model: ai/qwen3:0.6B-Q4_K_M + speculative_num_tokens: 16 + speculative_acceptance_rate: 0.8 +``` + +The draft model generates token candidates, and the main model validates them. +This can significantly improve throughput for longer responses. + +### Runtime flags + +Pass engine-specific flags to optimize performance: + +```yaml +models: + optimized-qwen: + provider: dmr + model: ai/qwen3 + provider_opts: + runtime_flags: ["--ngl=33", "--threads=8"] +``` + +Common flags: + +- `--ngl` - Number of GPU layers +- `--threads` - CPU thread count +- `--repeat-penalty` - Repetition penalty + +## Using DMR for RAG + +Docker Model Runner supports both embeddings and reranking for RAG workflows. + +### Embedding with DMR + +Use local embeddings for indexing your knowledge base: + +```yaml +rag: + codebase: + docs: [./src] + strategies: + - type: chunked-embeddings + embedding_model: dmr/ai/embeddinggemma + database: ./code.db +``` + +### Reranking with DMR + +DMR provides native reranking for improved RAG results: + +```yaml +models: + reranker: + provider: dmr + model: hf.co/ggml-org/qwen3-reranker-0.6b-q8_0-gguf + +rag: + docs: + docs: [./documentation] + strategies: + - type: chunked-embeddings + embedding_model: dmr/ai/embeddinggemma + limit: 20 + results: + reranking: + model: reranker + threshold: 0.5 + limit: 5 +``` + +Native DMR reranking is the fastest option for reranking RAG results. + +## Troubleshooting + +If cagent can't find Docker Model Runner: + +1. Verify Docker Model Runner status: + + ```console + $ docker model status + ``` + +2. Check available models: + + ```console + $ docker model list + ``` + +3. Check model logs for errors: + + ```console + $ docker model logs + ``` + +4. Ensure Docker Desktop has Model Runner enabled in settings (macOS/Windows) + +## What's next + +- Follow the [tutorial](tutorial.md) to build your first agent with local + models +- Learn about [RAG](rag.md) to give your agents access to codebases and + documentation +- See the [configuration reference](reference/config.md#docker-model-runner-dmr) + for all DMR options diff --git a/content/manuals/ai/cagent/model-providers.md b/content/manuals/ai/cagent/model-providers.md new file mode 100644 index 000000000000..de0a82893837 --- /dev/null +++ b/content/manuals/ai/cagent/model-providers.md @@ -0,0 +1,157 @@ +--- +title: Model providers +description: Get API keys and configure cloud model providers for cagent +keywords: [cagent, model providers, api keys, anthropic, openai, google, gemini] +weight: 10 +--- + +To run cagent, you need a model provider. You can either use a cloud provider +with an API key or run models locally with [Docker Model +Runner](local-models.md). + +This guide covers cloud providers. For the local alternative, see [Local +models with Docker Model Runner](local-models.md). 
+ +## Supported providers + +cagent supports these cloud model providers: + +- Anthropic - Claude models +- OpenAI - GPT models +- Google - Gemini models + +## Anthropic + +Anthropic provides the Claude family of models, including Claude Sonnet and +Claude Opus. + +To get an API key: + +1. Go to [console.anthropic.com](https://console.anthropic.com/). +2. Sign up or sign in to your account. +3. Navigate to the API Keys section. +4. Create a new API key. +5. Copy the key. + +Set your API key as an environment variable: + +```console +$ export ANTHROPIC_API_KEY=your_key_here +``` + +Use Anthropic models in your agent configuration: + +```yaml +agents: + root: + model: anthropic/claude-sonnet-4-5 + instruction: You are a helpful coding assistant +``` + +Available models include: + +- `anthropic/claude-sonnet-4-5` +- `anthropic/claude-opus-4-5` +- `anthropic/claude-haiku-4-5` + +## OpenAI + +OpenAI provides the GPT family of models, including GPT-5 and GPT-5 mini. + +To get an API key: + +1. Go to [platform.openai.com/api-keys](https://platform.openai.com/api-keys). +2. Sign up or sign in to your account. +3. Navigate to the API Keys section. +4. Create a new API key. +5. Copy the key. + +Set your API key as an environment variable: + +```console +$ export OPENAI_API_KEY=your_key_here +``` + +Use OpenAI models in your agent configuration: + +```yaml +agents: + root: + model: openai/gpt-5 + instruction: You are a helpful coding assistant +``` + +Available models include: + +- `openai/gpt-5` +- `openai/gpt-5-mini` + +## Google Gemini + +Google provides the Gemini family of models. + +To get an API key: + +1. Go to [aistudio.google.com/apikey](https://aistudio.google.com/apikey). +2. Sign in with your Google account. +3. Create an API key. +4. Copy the key. + +Set your API key as an environment variable: + +```console +$ export GOOGLE_API_KEY=your_key_here +``` + +Use Gemini models in your agent configuration: + +```yaml +agents: + root: + model: google/gemini-2.5-flash + instruction: You are a helpful coding assistant +``` + +Available models include: + +- `google/gemini-2.5-flash` +- `google/gemini-2.5-pro` + +## OpenAI-compatible providers + +You can use the `openai` provider type to connect to any model or provider that +implements the OpenAI API specification. This includes services like Azure +OpenAI, local inference servers, and other compatible endpoints. + +Configure an OpenAI-compatible provider by specifying the base URL: + +```yaml +agents: + root: + model: openai/your-model-name + instruction: You are a helpful coding assistant + provider: + base_url: https://your-provider.example.com/v1 +``` + +By default, cagent uses the `OPENAI_API_KEY` environment variable for +authentication. 
If your provider uses a different variable, specify it with +`token_key`: + +```yaml +agents: + root: + model: openai/your-model-name + instruction: You are a helpful coding assistant + provider: + base_url: https://your-provider.example.com/v1 + token_key: YOUR_PROVIDER_API_KEY +``` + +## What's next + +- Follow the [tutorial](tutorial.md) to build your first agent +- Learn about [local models with Docker Model Runner](local-models.md) as an + alternative to cloud providers +- Review the [configuration reference](reference/config.md) for advanced model + settings diff --git a/content/manuals/ai/cagent/rag.md b/content/manuals/ai/cagent/rag.md index 33f00653f3c0..a9088bb69ba6 100644 --- a/content/manuals/ai/cagent/rag.md +++ b/content/manuals/ai/cagent/rag.md @@ -2,7 +2,7 @@ title: RAG description: How RAG gives your cagent agents access to codebases and documentation keywords: [cagent, rag, retrieval, embeddings, semantic search] -weight: 60 +weight: 70 --- When you configure a RAG source in cagent, your agent automatically gains a diff --git a/content/manuals/ai/cagent/reference/_index.md b/content/manuals/ai/cagent/reference/_index.md index 1e3fdb26253f..8403097afcef 100644 --- a/content/manuals/ai/cagent/reference/_index.md +++ b/content/manuals/ai/cagent/reference/_index.md @@ -2,5 +2,5 @@ build: render: never title: Reference -weight: 40 +weight: 60 --- diff --git a/content/manuals/ai/cagent/sharing-agents.md b/content/manuals/ai/cagent/sharing-agents.md index 0d6a5efa38ce..5072229f873a 100644 --- a/content/manuals/ai/cagent/sharing-agents.md +++ b/content/manuals/ai/cagent/sharing-agents.md @@ -2,7 +2,7 @@ title: Sharing agents description: Distribute agent configurations through OCI registries keywords: [cagent, oci, registry, docker hub, sharing, distribution] -weight: 30 +weight: 50 --- Push your agent to a registry and share it by name. Your teammates diff --git a/content/manuals/ai/cagent/tutorial.md b/content/manuals/ai/cagent/tutorial.md index 46befbb6ce8c..e2a88e184c8c 100644 --- a/content/manuals/ai/cagent/tutorial.md +++ b/content/manuals/ai/cagent/tutorial.md @@ -2,7 +2,7 @@ title: Building a coding agent description: Create a coding agent that can read, write, and validate code changes in your projects keywords: [cagent, tutorial, coding agent, ai assistant] -weight: 10 +weight: 30 --- This tutorial teaches you how to build a coding agent that can help with