
Commit b30cb45

Merge branch 'main' into DOC-14111
2 parents f7baa0c + 92f4938

15 files changed: +249 −78

src/current/v23.2/cloud-storage-authentication.md

Lines changed: 15 additions & 11 deletions
@@ -94,6 +94,10 @@ To limit the control access to your Amazon S3 buckets, you can create IAM roles
 
 {% include_cached new-in.html version="v23.2" %} You can use the `external_id` option with `ASSUME_ROLE` to specify an [external ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html) for third-party access to your Amazon S3 bucket. The external ID is a unique ID that the third party provides you along with their ARN. For guidance on `external_id` usage in CockroachDB, refer to the [following example](#set-up-amazon-s3-assume-role).
 
+{{site.data.alerts.callout_info}}
+You must [URL encode](https://www.w3schools.com/tags/ref_urlencode.ASP) the entire value passed to `ASSUME_ROLE`.
+{{site.data.alerts.end}}
+
 {{site.data.alerts.callout_success}}
 Role assumption applies the principle of least privilege rather than directly providing privilege to a user. Creating IAM roles to manage access to AWS resources is Amazon's recommended approach compared to giving access directly to IAM users.
 {{site.data.alerts.end}}
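
The encoded ARNs in the `+` lines below can be produced with any standard percent-encoding routine. As a minimal sketch, assuming Python's standard `urllib.parse` (the account ID and role name are hypothetical placeholders):

~~~python
# Percent-encode an ASSUME_ROLE value before splicing it into a storage URI.
# The ARN below is a hypothetical placeholder, not a real role.
from urllib.parse import quote

role_arn = "arn:aws:iam::123456789012:role/backup-role"

# safe="" forces encoding of every reserved character, including ":" and "/".
encoded = quote(role_arn, safe="")
print(encoded)  # arn%3Aaws%3Aiam%3A%3A123456789012%3Arole%2Fbackup-role
~~~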
@@ -160,14 +164,14 @@ For example, to configure a user to assume an IAM role that allows a bulk operat
 
 {% include_cached copy-clipboard.html %}
 ~~~sql
-BACKUP DATABASE movr INTO 's3://{bucket name}?AWS_ACCESS_KEY_ID={user key}&AWS_SECRET_ACCESS_KEY={user secret key}&ASSUME_ROLE=arn:aws:iam::{account ID}:role/{role name}' AS OF SYSTEM TIME '-10s';
+BACKUP DATABASE movr INTO 's3://{bucket name}?AWS_ACCESS_KEY_ID={user key}&AWS_SECRET_ACCESS_KEY={user secret key}&ASSUME_ROLE=arn%3Aaws%3Aiam%3A%3A{account ID}%3Arole%2F{role name}' AS OF SYSTEM TIME '-10s';
 ~~~
 
 If your user also has an external ID, you can pass that with `ASSUME_ROLE`:
 
 {% include_cached copy-clipboard.html %}
 ~~~sql
-BACKUP DATABASE movr INTO 's3://{bucket name}?AWS_ACCESS_KEY_ID={user key}&AWS_SECRET_ACCESS_KEY={user secret key}&ASSUME_ROLE=arn:aws:iam::{account ID}:role/{role name};external_id={Unique ID}' AS OF SYSTEM TIME '-10s';
+BACKUP DATABASE movr INTO 's3://{bucket name}?AWS_ACCESS_KEY_ID={user key}&AWS_SECRET_ACCESS_KEY={user secret key}&ASSUME_ROLE=arn%3Aaws%3Aiam%3A%3A{account ID}%3Arole%2F{role name}%3Bexternal_id%3D{Unique ID}' AS OF SYSTEM TIME '-10s';
 ~~~
 
 CockroachDB also supports authentication for assuming roles when taking encrypted backups. To use with an encrypted backup, pass the `ASSUME_ROLE` parameter to the KMS URI as well as the bucket's:
@@ -213,14 +217,14 @@ When passing a chained role into `BACKUP`, it will follow this pattern:
 
 {% include_cached copy-clipboard.html %}
 ~~~sql
-BACKUP DATABASE movr INTO "s3://{bucket name}?AWS_ACCESS_KEY_ID={user's key}&AWS_SECRET_ACCESS_KEY={user's secret key}&ASSUME_ROLE={role A ARN},{role B ARN},{role C ARN}" AS OF SYSTEM TIME '-10s';
+BACKUP DATABASE movr INTO "s3://{bucket name}?AWS_ACCESS_KEY_ID={user's key}&AWS_SECRET_ACCESS_KEY={user's secret key}&ASSUME_ROLE={role A ARN}%2C{role B ARN}%2C{role C ARN}" AS OF SYSTEM TIME '-10s';
 ~~~
 
 You can also specify a different [external ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html) for each chained role. For example:
 
 {% include_cached copy-clipboard.html %}
 ~~~sql
-BACKUP DATABASE movr INTO "s3://{bucket name}?AWS_ACCESS_KEY_ID={user's key}&AWS_SECRET_ACCESS_KEY={user's secret key}&ASSUME_ROLE={role A ARN};external_id={ID A},{role B ARN};external_id={ID B},{role C ARN};external_id={ID C}" AS OF SYSTEM TIME '-10s';
+BACKUP DATABASE movr INTO "s3://{bucket name}?AWS_ACCESS_KEY_ID={user's key}&AWS_SECRET_ACCESS_KEY={user's secret key}&ASSUME_ROLE={role A ARN}%3Bexternal_id%3D{ID A}%2C{role B ARN}%3Bexternal_id%3D{ID B}%2C{role C ARN}%3Bexternal_id%3D{ID C}" AS OF SYSTEM TIME '-10s';
 ~~~
 
 Each chained role is listed separated by a `,` character. You can copy the ARN of the role from its summary page in the [AWS Management console](https://aws.amazon.com/console/).
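
To assemble and encode a chained `ASSUME_ROLE` value like the one in the hunk above, one option is a small helper along these lines; a sketch assuming Python's `urllib.parse`, with hypothetical ARNs and external IDs:

~~~python
# Build a chained ASSUME_ROLE value with per-role external IDs, then
# percent-encode the whole string. All ARNs and IDs are hypothetical.
from urllib.parse import quote

chain = [
    ("arn:aws:iam::111111111111:role/role-a", "id-a"),
    ("arn:aws:iam::222222222222:role/role-b", "id-b"),
    ("arn:aws:iam::333333333333:role/role-c", "id-c"),
]

# Each role is "{arn};external_id={id}"; roles are joined with ",".
raw = ",".join(f"{arn};external_id={eid}" for arn, eid in chain)

# "," -> %2C, ";" -> %3B, "=" -> %3D, ":" -> %3A, "/" -> %2F
encoded = quote(raw, safe="")
~~~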
@@ -340,7 +344,7 @@ To run an operation, use [`implicit` authentication](#google-cloud-storage-impli
 For a backup to Amazon S3:
 
 ~~~sql
-BACKUP DATABASE {database} INTO 's3://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE=arn:aws:iam::{AWS account ID}:role/crl-dr-store-user-{cluster ID suffix},arn:aws:iam::{account ID}:role/{operation role name}' AS OF SYSTEM TIME '-10s';
+BACKUP DATABASE {database} INTO 's3://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE=arn%3Aaws%3Aiam%3A%3A{AWS account ID}%3Arole%2Fcrl-dr-store-user-{cluster ID suffix}%2Carn%3Aaws%3Aiam%3A%3A{account ID}%3Arole%2F{operation role name}' AS OF SYSTEM TIME '-10s';
 ~~~
 
 In this SQL statement, the identity role assumes the operation role that has permission to write a backup to the S3 bucket.
@@ -352,7 +356,7 @@ To run an operation, you can use [`implicit` authentication](#google-cloud-stora
 For a backup to Amazon S3:
 
 ~~~sql
-BACKUP DATABASE {database} INTO 's3://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE=arn:aws:iam::{account ID}:role/{operation role name}' AS OF SYSTEM TIME '-10s';
+BACKUP DATABASE {database} INTO 's3://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE=arn%3Aaws%3Aiam%3A%3A{account ID}%3Arole%2F{operation role name}' AS OF SYSTEM TIME '-10s';
 ~~~
 
 In this SQL statement, `AUTH=implicit` uses the identity role to authenticate to the S3 bucket. The identity role then assumes the operation role that has permission to write a backup to the S3 bucket.
@@ -462,14 +466,14 @@ For this example, both service accounts have already been created. If you need t
 
 {% include_cached copy-clipboard.html %}
 ~~~sql
-BACKUP DATABASE <database> INTO 'gs://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE={service account name}@{project name}.iam.gserviceaccount.com';
+BACKUP DATABASE <database> INTO 'gs://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE={service account name}%40{project name}.iam.gserviceaccount.com';
 ~~~
 
 CockroachDB also supports authentication for assuming roles when taking encrypted backups. To use with an encrypted backup, pass the `ASSUME_ROLE` parameter to the KMS URI as well as the bucket's:
 
 {% include_cached copy-clipboard.html %}
 ~~~sql
-BACKUP DATABASE <database> INTO 'gs://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE={service account name}@{project name}.iam.gserviceaccount.com' WITH kms = 'gs:///projects/{project name}/locations/us-east1/keyRings/{key ring name}/cryptoKeys/{key name}?AUTH=IMPLICIT&ASSUME_ROLE={service account name}@{project name}.iam.gserviceaccount.com';
+BACKUP DATABASE <database> INTO 'gs://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE={service account name}%40{project name}.iam.gserviceaccount.com' WITH kms = 'gs:///projects/{project name}/locations/us-east1/keyRings/{key ring name}/cryptoKeys/{key name}?AUTH=IMPLICIT&ASSUME_ROLE={service account name}%40{project name}.iam.gserviceaccount.com';
 ~~~
 
 For more information on Google Cloud Storage KMS URI formats, see [Take and Restore Encrypted Backups]({% link {{ page.version.version }}/take-and-restore-encrypted-backups.md %}).
@@ -498,7 +502,7 @@ When passing a chained role into `BACKUP`, it will follow this pattern with each
 
 {% include_cached copy-clipboard.html %}
 ~~~sql
-BACKUP DATABASE <database> INTO 'gs://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE={intermediate service account name}@{project name}.iam.gserviceaccount.com,{final service account name}@{project name}.iam.gserviceaccount.com' AS OF SYSTEM TIME '-10s';
+BACKUP DATABASE <database> INTO 'gs://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE={intermediate service account name}%40{project name}.iam.gserviceaccount.com%2C{final service account name}%40{project name}.iam.gserviceaccount.com' AS OF SYSTEM TIME '-10s';
 ~~~
 
 ## Google Cloud Storage workload identity
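
For Google Cloud Storage, the same encoding rule applies to service account names: `@` becomes `%40` and the chain separator `,` becomes `%2C`. A minimal sketch, again assuming Python's `urllib.parse`, with hypothetical account and project names:

~~~python
# Percent-encode a chained GCS service-account ASSUME_ROLE value.
# Account and project names are hypothetical placeholders.
from urllib.parse import quote

accounts = [
    "intermediate-sa@my-project.iam.gserviceaccount.com",
    "final-sa@my-project.iam.gserviceaccount.com",
]

encoded = quote(",".join(accounts), safe="")
# intermediate-sa%40my-project.iam.gserviceaccount.com%2Cfinal-sa%40my-project.iam.gserviceaccount.com
~~~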
@@ -625,7 +629,7 @@ For a backup to Google Cloud Storage:
 
 {% include_cached copy-clipboard.html %}
 ~~~sql
-BACKUP DATABASE defaultdb INTO "gs://{bucket name}?AUTH=implicit&ASSUME_ROLE=crl-dr-store-user-{cluster ID suffix}@{project ID}.iam.gserviceaccount.com,{operation service account name}@{project name}.iam.gserviceaccount.com" AS OF SYSTEM TIME '-10s';
+BACKUP DATABASE defaultdb INTO "gs://{bucket name}?AUTH=implicit&ASSUME_ROLE=crl-dr-store-user-{cluster ID suffix}%40{project ID}.iam.gserviceaccount.com%2C{operation service account name}%40{project name}.iam.gserviceaccount.com" AS OF SYSTEM TIME '-10s';
 ~~~
 
 In this SQL statement, the identity service account assumes the operation service account that has permission to write a backup to the GCS bucket.
@@ -638,7 +642,7 @@ For a backup to your Google Cloud Storage bucket:
 
 {% include_cached copy-clipboard.html %}
 ~~~sql
-BACKUP DATABASE {database} INTO 'gs://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE={operation service account}@{project name}.iam.gserviceaccount.com' AS OF SYSTEM TIME '-10s';
+BACKUP DATABASE {database} INTO 'gs://{bucket name}/{path}?AUTH=implicit&ASSUME_ROLE={operation service account}%40{project name}.iam.gserviceaccount.com' AS OF SYSTEM TIME '-10s';
 ~~~
 
 In this SQL statement, `AUTH=implicit` uses the workload identity service account to authenticate to the bucket. The workload identity role then assumes the operation service account that has permission to write a backup to the bucket.

src/current/v23.2/use-cloud-storage.md

Lines changed: 2 additions & 2 deletions
@@ -34,9 +34,9 @@ The following table provides a list of the parameters supported by each storage
 
 Location | Scheme | Host | Parameters
 ------------------------------------------------------------+-------------+--------------------------------------------------+----------------------------------------------------------------------------
-Amazon S3 | `s3` | Bucket name | [`AUTH`]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#amazon-s3-specified): `implicit` or `specified` (default: `specified`). When using `specified` pass user's `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.<br><br>[`ASSUME_ROLE`]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#set-up-amazon-s3-assume-role) (optional): Pass the [ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) of the role to assume. Use in combination with `AUTH=implicit` or `specified`.<br><br>`AWS_ENDPOINT` (optional): Specify a custom endpoint for Amazon S3 or S3-compatible services. Use to define a particular region or a Virtual Private Cloud (VPC) endpoint.<br><br>[`AWS_SESSION_TOKEN`]({% link {{ page.version.version }}/cloud-storage-authentication.md %}) (optional): Use as part of temporary security credentials when accessing AWS S3. For more information, refer to Amazon's guide on [temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html).<br><br>[`S3_STORAGE_CLASS`](#amazon-s3-storage-classes) (optional): Specify the Amazon S3 storage class for created objects. Note that Glacier Flexible Retrieval and Glacier Deep Archive are not compatible with incremental backups. **Default**: `STANDARD`.
+Amazon S3 | `s3` | Bucket name | [`AUTH`]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#amazon-s3-specified): `implicit` or `specified` (default: `specified`). When using `specified` pass user's `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.<br><br>[`ASSUME_ROLE`]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#set-up-amazon-s3-assume-role) (optional): Pass the [ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) of the role to assume. You must [URL encode](https://www.w3schools.com/tags/ref_urlencode.ASP) this entire value. Use in combination with `AUTH=implicit` or `specified`.<br><br>`AWS_ENDPOINT` (optional): Specify a custom endpoint for Amazon S3 or S3-compatible services. Use to define a particular region or a Virtual Private Cloud (VPC) endpoint.<br><br>[`AWS_SESSION_TOKEN`]({% link {{ page.version.version }}/cloud-storage-authentication.md %}) (optional): Use as part of temporary security credentials when accessing AWS S3. For more information, refer to Amazon's guide on [temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html).<br><br>[`S3_STORAGE_CLASS`](#amazon-s3-storage-classes) (optional): Specify the Amazon S3 storage class for created objects. Note that Glacier Flexible Retrieval and Glacier Deep Archive are not compatible with incremental backups. **Default**: `STANDARD`.
 Azure Blob Storage | `azure-blob` / `azure` | Storage container | `AZURE_ACCOUNT_NAME`: The name of your Azure account.<br><br>`AZURE_ACCOUNT_KEY`: Your Azure account key. You must [url encode](https://wikipedia.org/wiki/Percent-encoding) your Azure account key before authenticating to Azure Storage. For more information, see [Authentication - Azure Storage]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#azure-blob-storage-specified-authentication).<br><br>`AZURE_ENVIRONMENT`: (optional) {% include {{ page.version.version }}/misc/azure-env-param.md %}<br><br>`AZURE_CLIENT_ID`: Application (client) ID for your [App Registration](https://learn.microsoft.com/azure/active-directory/develop/quickstart-register-app#register-an-application).<br><br>`AZURE_CLIENT_SECRET`: Client credentials secret generated for your App Registration.<br><br>`AZURE_TENANT_ID`: Directory (tenant) ID for your App Registration.<br><br>{% include {{ page.version.version }}/backups/azure-storage-tier-support.md %}<br><br>**Note:** {% include {{ page.version.version }}/misc/azure-blob.md %}
-Google Cloud Storage | `gs` | Bucket name | `AUTH`: `implicit`, or `specified` (default: `specified`); `CREDENTIALS`<br><br>[`ASSUME_ROLE`]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#set-up-google-cloud-storage-assume-role) (optional): Pass the [service account name](https://cloud.google.com/iam/docs/understanding-service-accounts) of the service account to assume. <br><br>For more information, see [Authentication - Google Cloud Storage]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#google-cloud-storage-specified).
+Google Cloud Storage | `gs` | Bucket name | `AUTH`: `implicit`, or `specified` (default: `specified`); `CREDENTIALS`<br><br>[`ASSUME_ROLE`]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#set-up-google-cloud-storage-assume-role) (optional): Pass the [service account name](https://cloud.google.com/iam/docs/understanding-service-accounts) of the service account to assume. You must [URL encode](https://www.w3schools.com/tags/ref_urlencode.ASP) this entire value. <br><br>For more information, see [Authentication - Google Cloud Storage]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#google-cloud-storage-specified).
 HTTP | `file-http(s)` / `http(s)` | Remote host | N/A<br><br>**Note:** Using `http(s)` without the `file-` prefix is deprecated as a [changefeed sink]({% link {{ page.version.version }}/changefeed-sinks.md %}) scheme. There is continued support for `http(s)`, but it will be removed in a future release. We recommend implementing the `file-http(s)` scheme for changefeed messages.<br><br>For more information, refer to [Authentication - HTTP]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#http-authentication).
 NFS/Local&nbsp;[<sup>1</sup>](#considerations) | `nodelocal` | `nodeID` [<sup>2</sup>](#considerations) (see [Example file URLs](#example-file-urls)) | N/A
 S3-compatible services | `s3` | Bucket name | {{site.data.alerts.callout_danger}} While Cockroach Labs actively tests Amazon S3, Google Cloud Storage, and Azure Storage, we **do not** test S3-compatible services (e.g., [MinIO](https://min.io/), [Red Hat Ceph](https://docs.ceph.com/en/pacific/radosgw/s3/)).{{site.data.alerts.end}}<br><br>`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION`&nbsp;[<sup>3</sup>](#considerations) (optional), `AWS_ENDPOINT`<br><br>For more information, see [Authentication - S3-compatible services]({% link {{ page.version.version }}/cloud-storage-authentication.md %}#s3-compatible-services-authentication).
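
Rather than hand-encoding each parameter described in the table above, the whole query string can be assembled so that values come out encoded. A sketch assuming Python's `urllib.parse`; the bucket name and credentials are hypothetical placeholders:

~~~python
# Assemble an s3:// URI whose query-parameter values are percent-encoded.
# All values below are hypothetical placeholders.
from urllib.parse import quote, urlencode

params = {
    "AUTH": "specified",
    "AWS_ACCESS_KEY_ID": "AKIAEXAMPLE",          # hypothetical key
    "AWS_SECRET_ACCESS_KEY": "example-secret",   # hypothetical secret
    "ASSUME_ROLE": "arn:aws:iam::123456789012:role/backup-role",
}

# quote_via=quote (with the default safe="") percent-encodes ":" and "/"
# in each value, matching the ASSUME_ROLE encoding requirement.
uri = "s3://my-bucket?" + urlencode(params, quote_via=quote)
~~~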
