---
subcategory: "Databricks SQL"
---
This resource configures the security policy, databricks_instance_profile, and data access properties for all databricks_sql_endpoint of a workspace. Please note that changing parameters of this resource will restart all running databricks_sql_endpoint resources. To use this resource you need to be an administrator.
resource "databricks_sql_global_config" "this" {
security_policy = "DATA_ACCESS_CONTROL"
instance_profile_arn = "arn:...."
data_access_config = {
"spark.sql.session.timeZone" : "UTC"
}
}
For Azure you should use the data_access_config to provide the service principal configuration. You can use the Databricks SQL Admin Console UI to help you generate the right configuration values.
resource "databricks_sql_global_config" "this" {
security_policy = "DATA_ACCESS_CONTROL"
data_access_config = {
"spark.hadoop.fs.azure.account.auth.type" : "OAuth",
"spark.hadoop.fs.azure.account.oauth.provider.type" : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
"spark.hadoop.fs.azure.account.oauth2.client.id" : "${var.application_id}",
"spark.hadoop.fs.azure.account.oauth2.client.secret" : "{{secrets/${local.secret_scope}/${local.secret_key}}}",
"spark.hadoop.fs.azure.account.oauth2.client.endpoint" : "https://login.microsoftonline.com/${var.tenant_id}/oauth2/token"
}
sql_config_params = {
"ANSI_MODE" : "true"
}
}
The following arguments are supported (see documentation for more details):
- security_policy (Optional, String) - The policy for controlling access to datasets. Default value: DATA_ACCESS_CONTROL; consult documentation for the list of possible values.
- data_access_config (Optional, Map) - Data access configuration for databricks_sql_endpoint, such as configuration for an external Hive metastore, Hadoop Filesystem configuration, etc. Please note that the list of supported configuration properties is limited, so refer to the documentation for a full list. Apply will fail if you specify a configuration property that is not permitted.
- instance_profile_arn (Optional, String) - databricks_instance_profile used to access storage from databricks_sql_endpoint. Please note that this parameter is only for AWS, and will generate an error if used on other clouds.
- google_service_account (Optional, String) - Google service account used to access GCP services, such as Cloud Storage, from databricks_sql_endpoint. Please note that this parameter is only for GCP, and will generate an error if used on other clouds (see the sketch after this list).
- sql_config_params (Optional, Map) - SQL Configuration Parameters let you override the default behavior for all sessions with all endpoints.
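Since AWS and Azure variants are shown above, a minimal sketch of the GCP variant may help; the service account email below is hypothetical and should be replaced with one from your own GCP project:

```hcl
resource "databricks_sql_global_config" "this" {
  security_policy = "DATA_ACCESS_CONTROL"
  # Hypothetical value - use a service account from your own GCP project.
  google_service_account = "sql-endpoints@example-project.iam.gserviceaccount.com"
  data_access_config = {
    "spark.sql.session.timeZone" : "UTC"
  }
}
```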
You can import a databricks_sql_global_config resource with a command like the following (you need to use global as the ID):
```bash
terraform import databricks_sql_global_config.this global
```
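If you prefer configuration-driven import (available in Terraform v1.5 and later), an equivalent import block would look like this sketch:

```hcl
import {
  # Import the workspace's global SQL config under the fixed ID "global".
  to = databricks_sql_global_config.this
  id = "global"
}
```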
The following resources are often used in the same context:
- End to end workspace management guide.
- databricks_instance_profile to manage AWS EC2 instance profiles with which users can launch databricks_cluster and access data, like databricks_mount.
- databricks_sql_dashboard to manage Databricks SQL Dashboards.
- databricks_sql_endpoint to manage Databricks SQL Endpoints.
- databricks_sql_permissions to manage data object access control lists in Databricks workspaces for things like tables, views, databases, and more.