---
subcategory: "Databricks SQL"
---
-> Note If you have a fully automated setup with workspaces created by databricks_mws_workspaces or azurerm_databricks_workspace, please make sure to add a depends_on attribute in order to prevent "authentication is not configured for provider" errors.
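For example, a minimal sketch of such a dependency, assuming a workspace created by a databricks_mws_workspaces resource named this and a hypothetical warehouse id:

```hcl
data "databricks_sql_warehouse" "example" {
  id = "1234567890abcdef" # hypothetical warehouse id

  # Force this lookup to wait until the workspace has been created,
  # so the provider has something to authenticate against.
  depends_on = [databricks_mws_workspaces.this]
}
```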
Retrieves information about a databricks_sql_warehouse using its id. The id can be retrieved programmatically using the databricks_sql_warehouses data source.
Retrieve the attributes of each SQL warehouse in a workspace:
data "databricks_sql_warehouses" "all" {
}
data "databricks_sql_warehouse" "all" {
for_each = data.databricks_sql.warehouses.ids
id = each.value
}
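As a follow-up sketch (assuming the two data sources above), the per-warehouse attributes can then be aggregated, for example into a map of JDBC connection strings keyed by warehouse id:

```hcl
output "warehouse_jdbc_urls" {
  # Map of warehouse id => JDBC connection string, one entry per warehouse found above
  value = { for id, warehouse in data.databricks_sql_warehouse.all : id => warehouse.jdbc_url }
}
```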
- id - (Required) The id of the SQL warehouse.
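A minimal lookup by a literal id might look like the sketch below; the id value and the output name are hypothetical placeholders:

```hcl
data "databricks_sql_warehouse" "this" {
  id = "0123456789abcdef" # hypothetical SQL warehouse id
}

output "warehouse_name" {
  # Name of the warehouse resolved by the lookup above
  value = data.databricks_sql_warehouse.this.name
}
```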
This data source exports the following attributes:
- name - Name of the SQL warehouse. Must be unique.
- cluster_size - The size of the clusters allocated to the warehouse: "2X-Small", "X-Small", "Small", "Medium", "Large", "X-Large", "2X-Large", "3X-Large", "4X-Large".
- min_num_clusters - Minimum number of clusters available when a SQL warehouse is running.
- max_num_clusters - Maximum number of clusters available when a SQL warehouse is running.
- auto_stop_mins - Time in minutes until an idle SQL warehouse terminates all clusters and stops.
- tags - Databricks tags all warehouse resources with these tags.
- spot_instance_policy - The spot policy to use for allocating instances to clusters: COST_OPTIMIZED or RELIABILITY_OPTIMIZED.
- enable_photon - Whether to enable Photon.
- enable_serverless_compute - Whether this SQL warehouse is a Serverless warehouse. To use a Serverless SQL warehouse, you must enable Serverless SQL warehouses for the workspace.
- channel block, consisting of the following fields:
  - name - Name of the Databricks SQL release channel. Possible values are: CHANNEL_NAME_PREVIEW and CHANNEL_NAME_CURRENT. Default is CHANNEL_NAME_CURRENT.
- jdbc_url - JDBC connection string.
- odbc_params - ODBC connection params: odbc_params.hostname, odbc_params.path, odbc_params.protocol, and odbc_params.port.
- data_source_id - ID of the data source for this warehouse. This is used to bind a Databricks SQL query to a warehouse.
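For instance, a hedged sketch of binding a query to the warehouse looked up above via its data_source_id, using the databricks_sql_query resource (the query name and SQL text are hypothetical):

```hcl
resource "databricks_sql_query" "example" {
  data_source_id = data.databricks_sql_warehouse.this.data_source_id # bind the query to this warehouse
  name           = "Example query"                                   # hypothetical query name
  query          = "SELECT 1"                                        # hypothetical SQL text
}
```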
The following resources are often used in the same context:
- End to end workspace management guide.
- databricks_instance_profile to manage AWS EC2 instance profiles that users can use to launch databricks_cluster and access data, like databricks_mount.
- databricks_sql_dashboard to manage Databricks SQL Dashboards.
- databricks_sql_global_config to configure the security policy, databricks_instance_profile, and data access properties for all databricks_sql_warehouse of a workspace.
- databricks_sql_permissions to manage data object access control lists in Databricks workspaces for things like tables, views, databases, and more.