diff --git a/docs/context/index.md b/docs/context/index.md
index c98923825..f43dc8161 100644
--- a/docs/context/index.md
+++ b/docs/context/index.md
@@ -453,10 +453,10 @@ State is crucial for memory and data flow. When you modify state using `Callback
* **How it Works:** Writing to `callback_context.state['my_key'] = my_value` or `tool_context.state['my_key'] = my_value` adds this change to the `EventActions.state_delta` associated with the current step's event. The `SessionService` then applies these deltas when persisting the event.
-#### Passing Data Between Tools
+* **Passing Data Between Tools**
=== "Python"
-
+
```python
# Pseudocode: Tool 1 - Fetches user ID
from google.adk.tools import ToolContext
@@ -478,9 +478,9 @@ State is crucial for memory and data flow. When you modify state using `Callback
# ... logic to fetch orders using user_id ...
return {"orders": ["order123", "order456"]}
```
-
+
=== "Java"
-
+
```java
// Pseudocode: Tool 1 - Fetches user ID
import com.google.adk.tools.ToolContext;
diff --git a/docs/observability/logging.md b/docs/observability/logging.md
index 6785ed560..4ca4a7570 100644
--- a/docs/observability/logging.md
+++ b/docs/observability/logging.md
@@ -37,6 +37,8 @@ When running agents using the ADK's built-in web or API servers, you can easily
This provides a convenient way to set the logging level without modifying your agent's source code.
+> **Note:** The command-line setting always takes precedence over programmatic configuration (such as `logging.basicConfig`) for ADK's loggers.
+
**Example using `adk web`:**
To start the web server with `DEBUG` level logging, run:
@@ -46,59 +48,48 @@ adk web --log_level DEBUG path/to/your/agents_dir
```
The available log levels for the `--log_level` option are:
+
- `DEBUG`
- `INFO` (default)
- `WARNING`
- `ERROR`
- `CRITICAL`
-> For the `DEBUG` level, you can also use `-v` or `--verbose` as a a shortcut for `--log_level DEBUG`. For example:
->
+> You can also use `-v` or `--verbose` as a shortcut for `--log_level DEBUG`. For example:
+>
> ```bash
> adk web -v path/to/your/agents_dir
> ```
-This command-line setting overrides any programmatic configuration (like `logging.basicConfig`) you might have in your code for the ADK's loggers.
-
### Log Levels
-ADK uses standard log levels to categorize the importance of a message:
-
-- `DEBUG`: The most verbose level. Used for fine-grained diagnostic information, such as the full prompt sent to the LLM, detailed state changes, and internal logic flow. **Crucial for debugging.**
-- `INFO`: General information about the agent's lifecycle. This includes events like agent startup, session creation, and tool execution.
-- `WARNING`: Indicates a potential issue or the use of a deprecated feature. The agent can continue to function, but the issue may require attention.
-- `ERROR`: A serious error occurred that prevented the agent from performing an operation.
+ADK uses standard log levels to categorize messages. The configured level determines what information gets logged.
-> **Note:** It is recommended to use `INFO` or `WARNING` in production environments and only enable `DEBUG` when actively troubleshooting an issue, as `DEBUG` logs can be very verbose and may contain sensitive information.
+| Level | Description | Type of Information Logged |
+| :--- | :--- | :--- |
+| **`DEBUG`** | **Crucial for debugging.** The most verbose level for fine-grained diagnostic information. | - **Full LLM Prompts:** The complete request sent to the language model, including system instructions, history, and tools.<br>- Detailed API responses from services.<br>- Internal state transitions and variable values. |
+| **`INFO`** | General information about the agent's lifecycle. | - Agent initialization and startup.<br>- Session creation and deletion events.<br>- Execution of a tool, including its name and arguments. |
+| **`WARNING`** | Indicates a potential issue or deprecated feature use. The agent continues to function, but attention may be required. | - Use of deprecated methods or parameters.<br>- Non-critical errors that the system recovered from. |
+| **`ERROR`** | A serious error that prevented an operation from completing. | - Failed API calls to external services (e.g., LLM, Session Service).<br>- Unhandled exceptions during agent execution.<br>- Configuration errors. |
-## What is Logged
-
-Depending on the configured log level, you can expect to see the following information:
-
-| Level | Type of Information Logged |
-| :-------- | :--------------------------------------------------------------------------------------------------------------------- |
-| **DEBUG** | - **Full LLM Prompts:** The complete request sent to the language model, including system instructions, history, and tools. |
-| | - Detailed API responses from services. |
-| | - Internal state transitions and variable values. |
-| **INFO** | - Agent initialization and startup. |
-| | - Session creation and deletion events. |
-| | - Execution of a tool, including the tool name and arguments. |
-| **WARNING**| - Use of deprecated methods or parameters. |
-| | - Non-critical errors that the system can recover from. |
-| **ERROR** | - Failed API calls to external services (e.g., LLM, Session Service). |
-| | - Unhandled exceptions during agent execution. |
-| | - Configuration errors. |
+> **Note:** It is recommended to use `INFO` or `WARNING` in production environments. Only enable `DEBUG` when actively troubleshooting an issue, as `DEBUG` logs can be very verbose and may contain sensitive information.
## Reading and Understanding the Logs
-The `format` string in the `basicConfig` example determines the structure of each log message. Let's break down a sample log entry:
+The `format` string in the `basicConfig` example determines the structure of each log message.
-`2025-07-08 11:22:33,456 - DEBUG - google_adk.google.adk.models.google_llm - LLM Request: contents { ... }`
+Here’s a sample log entry:
+
+```text
+2025-07-08 11:22:33,456 - DEBUG - google_adk.google.adk.models.google_llm - LLM Request: contents { ... }
+```
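+A minimal sketch of a `basicConfig` call that yields entries in this shape. The exact format string is an assumption inferred from the segments of the sample entry; adjust it to your needs.
+
+```python
+import logging
+
+# Sketch only: a format string matching the sample entry's segments,
+# "timestamp - LEVEL - logger.name - message".
+logging.basicConfig(
+    level=logging.DEBUG,
+    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
+)
+
+logger = logging.getLogger("google_adk.google.adk.models.google_llm")
+logger.debug("LLM Request: contents { ... }")
+```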
-- `2025-07-08 11:22:33,456`: `%(asctime)s` - The timestamp of when the log was recorded.
-- `DEBUG`: `%(levelname)s` - The severity level of the message.
-- `google_adk.google.adk.models.google_llm`: `%(name)s` - The name of the logger. This hierarchical name tells you exactly which module in the ADK framework produced the log. In this case, it's the Google LLM model wrapper.
-- `Request to LLM: contents { ... }`: `%(message)s` - The actual log message.
+| Log Segment | Format Specifier | Meaning |
+| ------------------------------- | ---------------- | ---------------------------------------------- |
+| `2025-07-08 11:22:33,456` | `%(asctime)s` | Timestamp |
+| `DEBUG` | `%(levelname)s` | Severity level |
+| `google_adk.google.adk.models.google_llm` | `%(name)s` | Logger name (the module that produced the log) |
+| `LLM Request: contents { ... }` | `%(message)s` | The actual log message |
By reading the logger name, you can immediately pinpoint the source of the log and understand its context within the agent's architecture.
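+Because logger names are hierarchical, Python's standard `logging` module lets you dial verbosity per module. A hedged sketch (the logger names follow the sample entry above and may differ in your ADK version):
+
+```python
+import logging
+
+# Sketch: quiet the framework overall, but get full detail from one module.
+logging.getLogger("google_adk").setLevel(logging.WARNING)
+logging.getLogger("google_adk.google.adk.models").setLevel(logging.DEBUG)
+
+# Child loggers inherit the level of the nearest configured ancestor.
+print(logging.getLogger("google_adk.google.adk.models.google_llm").getEffectiveLevel())  # 10 (DEBUG)
+```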
diff --git a/docs/plugins/index.md b/docs/plugins/index.md
index 30f8cbab0..9bb23080b 100644
--- a/docs/plugins/index.md
+++ b/docs/plugins/index.md
@@ -67,8 +67,7 @@ in the repository.
Start by extending the `BasePlugin` class and add one or more `callback`
methods, as shown in the following code example:
-```
-# count_plugin.py
+```py title="count_plugin.py"
from google.adk.agents.base_agent import BaseAgent
from google.adk.agents.callback_context import CallbackContext
from google.adk.models.llm_request import LlmRequest
@@ -111,8 +110,7 @@ multiple Plugins with this parameter. The following code example shows how to
register the `CountInvocationPlugin` plugin defined in the previous section with
a simple ADK agent.
-
-```
+```py
from google.adk.runners import InMemoryRunner
from google.adk import Agent
from google.adk.tools.tool_context import ToolContext
@@ -169,8 +167,8 @@ if __name__ == "__main__":
Run the plugin as you typically would. The following shows how to run the
command line:
-```
-> python3 -m path.to.main
+```sh
+python3 -m path.to.main
```
Plugins are not supported by the
@@ -181,7 +179,7 @@ interface.
The output of this previously described agent should look similar to the
following:
-```
+```log
[Plugin] Agent run count: 1
[Plugin] LLM request count: 1
** Got event from hello_world
@@ -326,7 +324,7 @@ giving you a chance to inspect or modify the initial input.\
The following code example shows the basic syntax of this callback:
-```
+```py
async def on_user_message_callback(
self,
*,
@@ -351,7 +349,7 @@ logic begins.
The following code example shows the basic syntax of this callback:
-```
+```py
async def before_run_callback(
self, *, invocation_context: InvocationContext
) -> Optional[types.Content]:
@@ -411,7 +409,7 @@ normally.****
The following code example shows the basic syntax of this callback:
-```
+```py
async def on_model_error_callback(
self,
*,
@@ -458,7 +456,7 @@ works as follows:
The following code example shows the basic syntax of this callback:
-```
+```py
async def on_tool_error_callback(
self,
*,
@@ -485,7 +483,7 @@ before it's streamed to the client.
The following code example shows the basic syntax of this callback:
-```
+```py
async def on_event_callback(
self, *, invocation_context: InvocationContext, event: Event
) -> Optional[Event]:
@@ -507,8 +505,8 @@ cleanup and final reporting.
The following code example shows the basic syntax of this callback:
-```
+```py
async def after_run_callback(
self, *, invocation_context: InvocationContext
) -> Optional[None]:
-```
\ No newline at end of file
+```
diff --git a/docs/safety/index.md b/docs/safety/index.md
index 40cf80c8f..561a3f4aa 100644
--- a/docs/safety/index.md
+++ b/docs/safety/index.md
@@ -4,7 +4,7 @@
As AI agents grow in capability, ensuring they operate safely, securely, and align with your brand values is paramount. Uncontrolled agents can pose risks, including executing misaligned or harmful actions, such as data exfiltration, and generating inappropriate content that can impact your brand’s reputation. **Sources of risk include vague instructions, model hallucination, jailbreaks and prompt injections from adversarial users, and indirect prompt injections via tool use.**
-[Google Cloud's Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/overview) provides a multi-layered approach to mitigate these risks, enabling you to build powerful *and* trustworthy agents. It offers several mechanisms to establish strict boundaries, ensuring agents only perform actions you've explicitly allowed:
+[Google Cloud Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/overview) provides a multi-layered approach to mitigate these risks, enabling you to build powerful *and* trustworthy agents. It offers several mechanisms to establish strict boundaries, ensuring agents only perform actions you've explicitly allowed:
1. **Identity and Authorization**: Control who the agent **acts as** by defining agent and user auth.
2. **Guardrails to screen inputs and outputs:** Control your model and tool calls precisely.
diff --git a/docs/sessions/express-mode.md b/docs/sessions/express-mode.md
index 493ba65af..dfe20b3a4 100644
--- a/docs/sessions/express-mode.md
+++ b/docs/sessions/express-mode.md
@@ -1,7 +1,7 @@
# Vertex AI Express Mode: Using Vertex AI Sessions and Memory for Free
If you are interested in using either the `VertexAiSessionService` or `VertexAiMemoryBankService` but you don't have a Google Cloud Project, you can sign up for Vertex AI Express Mode and get access
-for free and try out these services! You can sign up with an eligible ***gmail*** account [here](https://console.cloud.google.com/expressmode). For more details about Vertex AI Express mode, see the [overview page](https://cloud.google.com/vertex-ai/generative-ai/docs/start/express-mode/overview).
+for free and try out these services! You can sign up with an eligible ***Gmail*** account [here](https://console.cloud.google.com/expressmode). For more details about Vertex AI Express mode, see the [overview page](https://cloud.google.com/vertex-ai/generative-ai/docs/start/express-mode/overview).
Once you sign up, get an [API key](https://cloud.google.com/vertex-ai/generative-ai/docs/start/express-mode/overview#api-keys) and you can get started using your local ADK agent with Vertex AI Session and Memory services!
!!! info Vertex AI Express mode limitations
@@ -11,64 +11,66 @@ Once you sign up, get an [API key](https://cloud.google.com/vertex-ai/generative
## Create an Agent Engine
`Session` objects are children of an `AgentEngine`. When using Vertex AI Express Mode, we can create an empty `AgentEngine` parent to manage all of our `Session` and `Memory` objects.
-First, ensure that your enviornment variables are set correctly. For example, in Python:
+First, ensure that your environment variables are set correctly. For example, in Python:
- ```env title="weather_agent/.env"
- GOOGLE_GENAI_USE_VERTEXAI=TRUE
- GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_EXPRESS_MODE_API_KEY_HERE
- ```
+```env title="weather_agent/.env"
+GOOGLE_GENAI_USE_VERTEXAI=TRUE
+GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_EXPRESS_MODE_API_KEY_HERE
+```
Next, we can create our Agent Engine instance. You can use the Gen AI SDK.
-=== "GenAI SDK"
+=== "Gen AI SDK"
+
1. Import Gen AI SDK.
- ```
+ ```py
from google import genai
```
- 2. Set Vertex AI to be True, then use a POST request to create the Agent Engine
+    2. Enable Vertex AI in the client, then send a `POST` request to create the Agent Engine.
- ```
- # Create Agent Engine with GenAI SDK
- client = genai.Client(vertexai = True)._api_client
+ ```py
+ # Create Agent Engine with Gen AI SDK
+ client = genai.Client(vertexai=True)._api_client
response = client.request(
- http_method='POST',
- path=f'reasoningEngines',
- request_dict={"displayName": "YOUR_AGENT_ENGINE_DISPLAY_NAME", "description": "YOUR_AGENT_ENGINE_DESCRIPTION"},
- )
+ http_method='POST',
+            path='reasoningEngines',
+ request_dict={"displayName": "YOUR_AGENT_ENGINE_DISPLAY_NAME", "description": "YOUR_AGENT_ENGINE_DESCRIPTION"},
+ )
response
```
3. Replace `YOUR_AGENT_ENGINE_DISPLAY_NAME` and `YOUR_AGENT_ENGINE_DESCRIPTION` with your use case.
4. Get the Agent Engine name and ID from the response
- ```
- APP_NAME="/".join(response['name'].split("/")[:6])
- APP_ID=APP_NAME.split('/')[-1]
+ ```py
+ APP_NAME = "/".join(response['name'].split("/")[:6])
+ APP_ID = APP_NAME.split('/')[-1]
```
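To make the slicing above concrete, here is a worked sketch using a hypothetical operation resource name (the exact name format is an assumption for illustration; real responses contain your project and location):

```python
# Hypothetical resource name, for illustration only.
name = "projects/123/locations/us-central1/reasoningEngines/456/operations/789"

# Keep the first six path segments: the Agent Engine resource name.
APP_NAME = "/".join(name.split("/")[:6])
# The last segment of that name is the Agent Engine ID.
APP_ID = APP_NAME.split("/")[-1]

print(APP_NAME)  # projects/123/locations/us-central1/reasoningEngines/456
print(APP_ID)    # 456
```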
## Managing Sessions with a `VertexAiSessionService`
-[VertexAiSessionService](session.md###sessionservice-implementations) is compatible with Vertex AI Express mode API Keys. We can
+[`VertexAiSessionService`](session.md#sessionservice-implementations) is compatible with Vertex AI Express mode API Keys. We can
instead initialize the session object without any project or location.
- ```py
- # Requires: pip install google-adk[vertexai]
- # Plus environment variable setup:
- # GOOGLE_GENAI_USE_VERTEXAI=TRUE
- # GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_EXPRESS_MODE_API_KEY_HERE
- from google.adk.sessions import VertexAiSessionService
-
- # The app_name used with this service should be the Reasoning Engine ID or name
- APP_ID = "your-reasoning-engine-id"
-
- # Project and location are not required when initializing with Vertex Express Mode
- session_service = VertexAiSessionService(agent_engine_id=APP_ID)
- # Use REASONING_ENGINE_APP_ID when calling service methods, e.g.:
- # session = await session_service.create_session(app_name=REASONING_ENGINE_APP_ID, user_id= ...)
- ```
+```py
+# Requires: pip install google-adk[vertexai]
+# Plus environment variable setup:
+# GOOGLE_GENAI_USE_VERTEXAI=TRUE
+# GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_EXPRESS_MODE_API_KEY_HERE
+from google.adk.sessions import VertexAiSessionService
+
+# The app_name used with this service should be the Reasoning Engine ID or name
+APP_ID = "your-reasoning-engine-id"
+
+# Project and location are not required when initializing with Vertex Express Mode
+session_service = VertexAiSessionService(agent_engine_id=APP_ID)
+# Use REASONING_ENGINE_APP_ID when calling service methods, e.g.:
+# session = await session_service.create_session(app_name=REASONING_ENGINE_APP_ID, user_id= ...)
+```
+
!!! info Session Service Quotas
For Free Express Mode Projects, `VertexAiSessionService` has the following quota:
@@ -78,24 +80,25 @@ instead initialize the session object without any project or location.
## Managing Memories with a `VertexAiMemoryBankService`
-[VertexAiMemoryBankService](memory.md###memoryservice-implementations) is compatible with Vertex AI Express mode API Keys. We can
+[`VertexAiMemoryBankService`](memory.md#memoryservice-implementations) is compatible with Vertex AI Express mode API Keys. We can
instead initialize the memory object without any project or location.
- ```py
- # Requires: pip install google-adk[vertexai]
- # Plus environment variable setup:
- # GOOGLE_GENAI_USE_VERTEXAI=TRUE
- # GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_EXPRESS_MODE_API_KEY_HERE
- from google.adk.sessions import VertexAiMemoryBankService
-
- # The app_name used with this service should be the Reasoning Engine ID or name
- APP_ID = "your-reasoning-engine-id"
-
- # Project and location are not required when initializing with Vertex Express Mode
- memory_service = VertexAiMemoryBankService(agent_engine_id=APP_ID)
- # Generate a memory from that session so the Agent can remember relevant details about the user
- # memory = await memory_service.add_session_to_memory(session)
- ```
+```py
+# Requires: pip install google-adk[vertexai]
+# Plus environment variable setup:
+# GOOGLE_GENAI_USE_VERTEXAI=TRUE
+# GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_EXPRESS_MODE_API_KEY_HERE
+from google.adk.sessions import VertexAiMemoryBankService
+
+# The app_name used with this service should be the Reasoning Engine ID or name
+APP_ID = "your-reasoning-engine-id"
+
+# Project and location are not required when initializing with Vertex Express Mode
+memory_service = VertexAiMemoryBankService(agent_engine_id=APP_ID)
+# Generate a memory from that session so the Agent can remember relevant details about the user
+# memory = await memory_service.add_session_to_memory(session)
+```
+
!!! info Memory Service Quotas
For Free Express Mode Projects, `VertexAiMemoryBankService` has the following quota:
@@ -104,6 +107,6 @@ instead initialize the memory object without any project or location.
## Code Sample: Weather Agent with Session and Memory using Vertex AI Express Mode
-In this sample, we create a weather agent that utilizes both `VertexAiSessionService` and `VertexAiMemoryBankService` for context maangement, allowing our agent to recall user prefereneces and conversations!
+In this sample, we create a weather agent that utilizes both `VertexAiSessionService` and `VertexAiMemoryBankService` for context management, allowing our agent to recall user preferences and conversations!
**[Weather Agent with Session and Memory using Vertex AI Express Mode](https://github.com/google/adk-docs/blob/main/examples/python/notebooks/express-mode-weather-agent.ipynb)**
diff --git a/docs/sessions/session.md b/docs/sessions/session.md
index 1590cae72..f911c78d1 100644
--- a/docs/sessions/session.md
+++ b/docs/sessions/session.md
@@ -143,7 +143,7 @@ the storage backend that best suits your needs:
2. **`VertexAiSessionService`**
- * **How it works:** Uses Google Cloud's Vertex AI infrastructure via API
+ * **How it works:** Uses Google Cloud Vertex AI infrastructure via API
calls for session management.
* **Persistence:** Yes. Data is managed reliably and scalably via
[Vertex AI Agent Engine](https://google.github.io/adk-docs/deploy/agent-engine/).
diff --git a/docs/tools/built-in-tools.md b/docs/tools/built-in-tools.md
index 6c8c96834..1fce5d799 100644
--- a/docs/tools/built-in-tools.md
+++ b/docs/tools/built-in-tools.md
@@ -123,7 +123,7 @@ Please refer to the [RAG ADK agent sample](https://github.com/google/adk-samples
### Vertex AI Search
-The `vertex_ai_search_tool` uses Google Cloud's Vertex AI Search, enabling the
+The `vertex_ai_search_tool` uses Google Cloud Vertex AI Search, enabling the
agent to search across your private, configured data stores (e.g., internal
documents, company policies, knowledge bases). This built-in tool requires you
to provide the specific data store ID during configuration. For further details of the tool, see [Understanding Vertex AI Search grounding](../grounding/vertex_ai_search_grounding.md).
diff --git a/llms-full.txt b/llms-full.txt
index 45f012c86..9da767c25 100644
--- a/llms-full.txt
+++ b/llms-full.txt
@@ -75229,7 +75229,7 @@ title: "Safety and Security - Agent Development Kit"
As AI agents grow in capability, ensuring they operate safely, securely, and align with your brand values is paramount. Uncontrolled agents can pose risks, including executing misaligned or harmful actions, such as data exfiltration, and generating inappropriate content that can impact your brand’s reputation. **Sources of risk include vague instructions, model hallucination, jailbreaks and prompt injections from adversarial users, and indirect prompt injections via tool use.**
-[Google Cloud's Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/overview) provides a multi-layered approach to mitigate these risks, enabling you to build powerful _and_ trustworthy agents. It offers several mechanisms to establish strict boundaries, ensuring agents only perform actions you've explicitly allowed:
+[Google Cloud Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/overview) provides a multi-layered approach to mitigate these risks, enabling you to build powerful _and_ trustworthy agents. It offers several mechanisms to establish strict boundaries, ensuring agents only perform actions you've explicitly allowed:
1. **Identity and Authorization**: Control who the agent **acts as** by defining agent and user auth.
2. **Guardrails to screen inputs and outputs:** Control your model and tool calls precisely.
@@ -75688,7 +75688,7 @@ memory_service = InMemoryMemoryService()
2. **`VertexAiRagMemoryService`**
- - **How it works:** Leverages Google Cloud's Vertex AI RAG (Retrieval-Augmented Generation) service. It ingests session data into a specified RAG Corpus and uses powerful semantic search capabilities for retrieval.
+ - **How it works:** Leverages Google Cloud Vertex AI RAG (Retrieval-Augmented Generation) service. It ingests session data into a specified RAG Corpus and uses powerful semantic search capabilities for retrieval.
- **Persistence:** Yes. The knowledge is stored persistently within the configured Vertex AI RAG Corpus.
- **Requires:** A Google Cloud project, appropriate permissions, necessary SDKs ( `pip install google-adk[vertexai]`), and a pre-configured Vertex AI RAG Corpus resource name/ID.
- **Best for:** Production applications needing scalable, persistent, and semantically relevant knowledge retrieval, especially when deployed on Google Cloud.
@@ -75984,7 +75984,7 @@ the storage backend that best suits your needs:
2. **`VertexAiSessionService`**
- - **How it works:** Uses Google Cloud's Vertex AI infrastructure via API
+ - **How it works:** Uses Google Cloud Vertex AI infrastructure via API
calls for session management.
- **Persistence:** Yes. Data is managed reliably and scalably via
[Vertex AI Agent Engine](https://google.github.io/adk-docs/deploy/agent-engine/).
@@ -79973,7 +79973,7 @@ public class CodeExecutionAgentApp {
### Vertex AI Search [¶](https://google.github.io/adk-docs/tools/built-in-tools/\#vertex-ai-search "Permanent link")
-The `vertex_ai_search_tool` uses Google Cloud's Vertex AI Search, enabling the
+The `vertex_ai_search_tool` uses Google Cloud Vertex AI Search, enabling the
agent to search across your private, configured data stores (e.g., internal
documents, company policies, knowledge bases). This built-in tool requires you
to provide the specific data store ID during configuration.
diff --git a/llms.txt b/llms.txt
index d4d4f7add..67d7e3c8a 100644
--- a/llms.txt
+++ b/llms.txt
@@ -202,7 +202,7 @@ ADK supports various types of tools:
* **Built-in Tools**: Ready-to-use tools provided by the framework for common tasks. These include:
* **Google Search**: Allows the agent to perform web searches using Google Search, compatible with Gemini 2 models.
* **Code Execution**: Enables the agent to execute code using the built\_in\_code\_execution tool, typically with Gemini 2 models, for calculations or data manipulation.
- * **Vertex AI Search**: Uses Google Cloud's Vertex AI Search for agents to search across private, configured data stores.
+ * **Vertex AI Search**: Uses Google Cloud Vertex AI Search for agents to search across private, configured data stores.
* **Limitations**: Currently, each root agent or single agent only supports one built-in tool, and no other tools of any type can be used in the same agent. Built-in tools are also not supported within sub-agents.
* **Third-Party Tools**: Integrates tools from other AI Agent frameworks like CrewAI and LangChain, enabling faster development and reuse of existing tools.
* **LangChain Tools**: Uses the LangchainTool wrapper to integrate tools from the LangChain ecosystem (e.g., Tavily search tool).
@@ -323,7 +323,7 @@ Evaluation can be run via:
#### **Safety & Security for AI Agents**
-Ensuring AI agents operate safely and securely is paramount due to risks like misaligned actions, data exfiltration, and inappropriate content generation. Sources of risk include vague instructions, model hallucination, jailbreaks, and prompt injections. Google Cloud's Vertex AI provides a multi-layered approach to mitigate these risks.
+Ensuring AI agents operate safely and securely is paramount due to risks like misaligned actions, data exfiltration, and inappropriate content generation. Sources of risk include vague instructions, model hallucination, jailbreaks, and prompt injections. Google Cloud Vertex AI provides a multi-layered approach to mitigate these risks.
* **Safety and Security Risks**: Risks categorized as misalignment/goal corruption, harmful content generation (including brand safety), and unsafe actions (e.g., executing damaging commands, leaking sensitive data).
* **Best Practices**:
@@ -356,4 +356,3 @@ Ways to contribute include:
* **Writing Code**: Submit PRs to google/adk-python, google/adk-java, or google/adk-docs. All contributions undergo a review process via GitHub Pull Requests.
By contributing, you agree that your contributions will be licensed under the project's Apache 2.0 License.
-