12 changes: 7 additions & 5 deletions .github/workflows/link-checker.yaml
@@ -19,12 +19,14 @@ on:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
-  mkdocs-link-check:
+  markdown-link-check:
    runs-on: ubuntu-latest
    steps:
-      - uses: actions/checkout@master
-      - uses: byrnereese/github-action-mkdocs-link-check@v1
+      - uses: actions/checkout@v4
+      - uses: tcort/github-action-markdown-link-check@v1
+        with:
+          use-quiet-mode: 'yes'
+          use-verbose-mode: 'yes'
+          config-file: '.link-checker-config.json'
16 changes: 16 additions & 0 deletions .link-checker-config.json
@@ -0,0 +1,16 @@
{
  "ignorePatterns": [
    {
      "pattern": "^https?://(localhost|127\\.0\\.0\\.1)"
    },
    {
      "pattern": "^((?!http).)*$"
    }
  ],
  "aliveStatusCodes": [
    200,
    206,
    302,
    403
  ]
}
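For reference, the first ignore pattern skips localhost/loopback URLs and the second skips any link that does not contain `http` at all (in practice, relative links within the docs). The following is only an illustrative sketch of those two regexes using Python's `re` module — the checker itself is the Node-based action configured above, not this code:

```py
import re

# The two ignorePatterns from .link-checker-config.json.
IGNORE_PATTERNS = [
    re.compile(r"^https?://(localhost|127\.0\.0\.1)"),  # local/loopback URLs
    re.compile(r"^((?!http).)*$"),                      # links with no "http" at all (relative links)
]

def is_checked(link: str) -> bool:
    """Return True if the link checker would actually try to fetch this link."""
    return not any(p.search(link) for p in IGNORE_PATTERNS)

# Illustrative examples:
assert not is_checked("http://localhost:8000/docs")      # ignored: localhost
assert not is_checked("../tools/google-cloud-tools.md")  # ignored: relative link
assert is_checked("https://cloud.google.com/")           # checked
```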
2 changes: 1 addition & 1 deletion docs/get-started/streaming/quickstart-streaming.md
@@ -45,7 +45,7 @@ adk-streaming/  # Project folder

### agent.py

-Copy-paste the following code block to the [`agent.py`](http://agent.py).
+Copy-paste the following code block into the `agent.py` file.

For `model`, please double check the model ID as described earlier in the [Models section](#supported-models).

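The code block itself is collapsed in this view. As a rough orientation only, a minimal `agent.py` for the streaming quickstart follows the sketch below; the agent name, description, and instruction are placeholders, and the model ID in particular must be replaced with a Live-capable model ID from the Models section mentioned above:

```py
# Hypothetical sketch of agent.py for the streaming quickstart -- not the
# exact block from the docs. Replace the model ID with one from the Models section.
from google.adk.agents import Agent

root_agent = Agent(
    name="streaming_agent",             # placeholder name
    model="gemini-2.0-flash-live-001",  # placeholder; verify against the Models section
    description="Agent that answers questions over a streaming connection.",
    instruction="You are a helpful assistant. Answer the user's questions concisely.",
)
```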
16 changes: 8 additions & 8 deletions docs/tools/google-cloud-tools.md
@@ -102,8 +102,8 @@ you only need to follow a subset of these steps.
from the API, use
`` `projects/my-project-id/locations/us-west1/apis/my-api-id` ``

-4. Create your agent file [Agent.py](http://Agent.py) and add the created tools
-   to your agent definition:
+4. Create your agent file Agent.py and add the created tools to your agent
+   definition:

```py
from google.adk.agents.llm_agent import LlmAgent
@@ -188,20 +188,20 @@ Connect your agent to enterprise applications using


![Google Cloud Tools](../assets/application-integration-overview.png)

2. Go to [Connection Tool](https://console.cloud.google.com/integrations/templates/connection-tool/locations/us-central1)
template from the template library and click on "USE TEMPLATE" button.


![Google Cloud Tools](../assets/use-connection-tool-template.png)

3. Fill the Integration Name as **ExecuteConnection** (It is mandatory to use this integration name only) and
select the region same as the connection region. Click on "CREATE".

4. Publish the integration by using the "PUBLISH" button on the Application Integration Editor.


![Google Cloud Tools](../assets/publish-integration.png)

**Steps:**

@@ -223,7 +223,7 @@ Connect your agent to enterprise applications using
```

**Note:**

* You can provide service account to be used instead of using default credentials by generating [Service Account Key](https://cloud.google.com/iam/docs/keys-create-delete#creating) and providing right Application Integration and Integration Connector IAM roles to the service account.
* To find the list of supported entities and actions for a connection, use the connectors apis: [listActions](https://cloud.google.com/integration-connectors/docs/reference/rest/v1/projects.locations.connections.connectionSchemaMetadata/listActions) or [listEntityTypes](https://cloud.google.com/integration-connectors/docs/reference/rest/v1/projects.locations.connections.connectionSchemaMetadata/listEntityTypes)
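To make the second bullet concrete, a hedged sketch of calling `listActions` over REST with Application Default Credentials is shown below. The host, URL shape, and HTTP verb are assumptions inferred from the linked reference (`projects.locations.connections.connectionSchemaMetadata`); confirm them against that reference before relying on this:

```py
# Hedged sketch: listing supported actions for an Integration Connectors connection.
# The URL shape and HTTP verb are assumptions -- check the listActions reference.
import google.auth
from google.auth.transport.requests import AuthorizedSession

PROJECT = "my-project-id"     # placeholder
LOCATION = "us-central1"      # placeholder
CONNECTION = "my-connection"  # placeholder

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

url = (
    "https://connectors.googleapis.com/v1/"
    f"projects/{PROJECT}/locations/{LOCATION}/connections/{CONNECTION}/"
    "connectionSchemaMetadata:listActions"
)
response = session.get(url)
response.raise_for_status()
print(response.json())
```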

@@ -255,7 +255,7 @@ Connect your agent to enterprise applications using
}

oauth_scheme = dict_to_auth_scheme(oauth2_data_google_cloud)

auth_credential = AuthCredential(
auth_type=AuthCredentialTypes.OAUTH2,
oauth2=OAuth2Auth(
@@ -324,7 +324,7 @@ workflow as a tool for your agent or create a new one.
project="test-project", # TODO: replace with GCP project of the connection
location="us-central1", #TODO: replace with location of the connection
integration="test-integration", #TODO: replace with integration name
triggers=["api_trigger/test_trigger"],#TODO: replace with trigger id(s). Empty list would mean all api triggers in the integration to be considered.
service_account_credentials='{...}', #optional. Stringified json for service account key
tool_name_prefix="tool_prefix1",
tool_instructions="..."
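For orientation, a hedged sketch of how a toolset configured like the (truncated) snippet above is typically attached to an agent is shown below. The import path for `ApplicationIntegrationToolset`, the model ID, and the `tools=` wiring follow common ADK patterns but are assumptions here — verify them against the ADK API reference:

```py
# Hedged sketch: wiring an Application Integration toolset into an ADK agent.
# Import paths and the model ID are assumptions; the toolset parameters mirror
# the snippet above.
from google.adk.agents.llm_agent import LlmAgent
from google.adk.tools.application_integration_tool.application_integration_toolset import (
    ApplicationIntegrationToolset,
)

integration_tool = ApplicationIntegrationToolset(
    project="test-project",                 # GCP project of the integration
    location="us-central1",                 # region of the integration
    integration="test-integration",         # integration name
    triggers=["api_trigger/test_trigger"],  # API trigger id(s)
    tool_name_prefix="tool_prefix1",
    tool_instructions="...",
)

root_agent = LlmAgent(
    model="gemini-2.0-flash",               # placeholder model ID
    name="integration_agent",
    instruction="Use the integration tools to fulfil the user's requests.",
    tools=[integration_tool],
)
```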
@@ -1,7 +1,7 @@
### Create Database Objects

In order to run this Cloud Run Function, you should have created the AlloyDB side of things prior, for the database objects.
Let’s create an AlloyDB cluster, instance and table where the ecommerce dataset will be loaded.

#### Create a cluster and instance
1. Navigate the AlloyDB page in the Cloud Console. An easy way to find most pages in Cloud Console is to search for them using the search bar of the console.
@@ -15,13 +15,13 @@ PostgreSQL 15 / latest recommended
Region: “us-central1”
Networking: “default”

4. When you select the default network, you'll see a screen like the one below.
Select SET UP CONNECTION.

5. From there, select "Use an automatically allocated IP range" and Continue. After reviewing the information, select CREATE CONNECTION.

6. Once your network is set up, you can continue to create your cluster. Click CREATE CLUSTER to complete setting up of the cluster as shown below:

7. Make sure to change the instance id to vector-instance
If you cannot change it, remember to change the instance id in all the upcoming references.

@@ -49,8 +49,8 @@ CREATE EXTENSION IF NOT EXISTS vector;
select extname, extversion from pg_extension;


4. Create a table
You can create a table using the DDL statement below in the AlloyDB Studio:

CREATE TABLE patents_data ( id VARCHAR(25), type VARCHAR(25), number VARCHAR(20), country VARCHAR(2), date VARCHAR(20), abstract VARCHAR(300000), title VARCHAR(100000), kind VARCHAR(5), num_claims BIGINT, filename VARCHAR(100), withdrawn BIGINT, abstract_embeddings vector(768)) ;

@@ -62,7 +62,7 @@ Run the below statement to grant execute on the “embedding” function:
GRANT EXECUTE ON FUNCTION embedding TO postgres;
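To show why execute permission on `embedding` matters, a hedged sketch of how the function is typically used against the `patents_data` table follows — once to populate `abstract_embeddings` and once for a cosine-distance search. The connection string, embedding model ID, and query text are placeholders; the equivalent SQL can be run directly in AlloyDB Studio:

```py
# Hedged sketch: generating embeddings and running a vector search on patents_data.
# Connection details and the embedding model ID are placeholders.
import psycopg  # assumes psycopg 3 and network access to the AlloyDB instance

DSN = "host=<ALLOYDB_IP> dbname=postgres user=postgres password=<PASSWORD>"  # placeholder

POPULATE_SQL = """
    UPDATE patents_data
    SET abstract_embeddings = embedding('text-embedding-005', abstract);
"""  # model ID is a placeholder; use the model enabled for google_ml_integration

SEARCH_SQL = """
    SELECT id, title
    FROM patents_data
    ORDER BY abstract_embeddings <=> embedding('text-embedding-005', %s)::vector
    LIMIT 5;
"""  # '<=>' is pgvector's cosine-distance operator

with psycopg.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute(POPULATE_SQL)
    cur.execute(SEARCH_SQL, ("A patent about electric vehicle batteries",))
    for row in cur.fetchall():
        print(row)
```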

6. Grant Vertex AI User ROLE to the AlloyDB service account

From Google Cloud IAM console, grant the AlloyDB service account (that looks like this: service-<<PROJECT_NUMBER>>@gcp-sa-alloydb.iam.gserviceaccount.com) access to the role “Vertex AI User”. PROJECT_NUMBER will have your project number.

PROJECT_ID=$(gcloud config get-value project)
@@ -72,10 +72,10 @@ gcloud projects add-iam-policy-binding $PROJECT_ID \
--role="roles/aiplatform.user"

7. Load patent data into the database
-The [Google Patents Public Datasets]([url](https://console.cloud.google.com/launcher/browse?q=google%20patents%20public%20datasets&filter=solution-type:dataset&_ga=2.179551075.-653757248.1714456172)) from BigQuery will be used as our dataset. Here is the link: https://console.cloud.google.com/launcher/browse?q=google%20patents%20public%20datasets&filter=solution-type:dataset&_ga=2.179551075.-653757248.1714456172
+The [Google Patents Public Datasets](https://console.cloud.google.com/launcher/browse?q=google%20patents%20public%20datasets&filter=solution-type:dataset&_ga=2.179551075.-653757248.1714456172) from BigQuery will be used as our dataset. Here is the link: https://console.cloud.google.com/launcher/browse?q=google%20patents%20public%20datasets&filter=solution-type:dataset&_ga=2.179551075.-653757248.1714456172

-We will use the AlloyDB Studio to run our queries. The [alloydb-pgvector]([url](https://github.com/AbiramiSukumaran/alloydb-pgvector)) repository includes the
-[insert_scripts.sql]([url](https://github.com/AbiramiSukumaran/alloydb-pgvector/blob/main/insert_scripts.sql)) script we will run to load the patent data:
+We will use the AlloyDB Studio to run our queries. The [alloydb-pgvector](https://github.com/AbiramiSukumaran/alloydb-pgvector) repository includes the
+[insert_scripts.sql](https://github.com/AbiramiSukumaran/alloydb-pgvector/blob/main/insert_scripts.sql) script we will run to load the patent data:
https://github.com/AbiramiSukumaran/alloydb-pgvector/blob/main/insert_scripts.sql

a. In the Google Cloud console, open the AlloyDB page.
@@ -122,11 +122,11 @@ Maximum instances* 3
instance type* fi-micro

4. Click CREATE and this connector should be listed in the egress settings now.

6. Select the newly created connector.

8. Opt for all traffic to be routed through this VPC connector.

10. Click NEXT and then DEPLOY.

Once the updated Cloud Function is deployed, you should see the endpoint generated. Copy that and replace in the following command:
@@ -139,6 +139,3 @@ curl -X POST <<YOUR_ENDPOINT>> \
| jq .

That's it! It is that simple to perform an advanced Contextual Similarity Vector Search using the Embeddings model on AlloyDB data.
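The same call can be made from Python instead of curl. This is a hedged sketch only — the endpoint is whatever URL your deployed function prints, and the request body (a single `search` field) is a hypothetical placeholder, since the exact payload is collapsed in this diff view:

```py
# Hedged sketch: calling the deployed Cloud Run Function from Python.
# The endpoint URL and the "search" payload field are placeholders/assumptions.
import json
import requests

ENDPOINT = "https://<YOUR_ENDPOINT>"  # replace with the URL printed after deployment

payload = {"search": "Sentiment analysis of customer reviews"}  # hypothetical body
response = requests.post(ENDPOINT, json=payload, timeout=60)
response.raise_for_status()

# Pretty-print the JSON response (equivalent to piping the curl output to `jq .`).
print(json.dumps(response.json(), indent=2))
```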