diff --git a/website/docs/docs/build/metricflow-time-spine.md b/website/docs/docs/build/metricflow-time-spine.md
index 5499c61a8e4..cc3c0cfd3a0 100644
--- a/website/docs/docs/build/metricflow-time-spine.md
+++ b/website/docs/docs/build/metricflow-time-spine.md
@@ -7,7 +7,6 @@ tags: [Metrics, Semantic Layer]
 ---
 
-
 It's common in analytics engineering to have a date dimension or "time spine" table as a base table for different types of time-based joins and aggregations. The structure of this table is typically a base column of daily or hourly dates, with additional columns for other time grains, like fiscal quarters, defined based on the base column. You can join other tables to the time spine on the base column to calculate metrics like revenue at a point in time, or to aggregate to a specific time grain.
@@ -23,7 +22,7 @@ To see the generated SQL for the metric and dimension types that use time spine
 
 ## Configuring time spine in YAML
 
- Time spine models are normal dbt models with extra configurations that tell dbt and MetricFlow how to use specific columns by defining their properties. Add the [`models` key](/reference/model-properties) for the time spine in your `models/` directory. If your project already includes a calendar table or date dimension, you can configure that table as a time spine. Otherwise, review the [example time-spine tables](#example-time-spine-tables) to create one. 
+ Time spine models are normal dbt models with extra configurations that tell dbt and MetricFlow how to use specific columns by defining their properties. Add the configuration under the [`models` key](/reference/model-properties) in a YAML file in your `models/` directory. If your project already includes a calendar table or date dimension, you can configure that table as a time spine. Otherwise, review the [example time-spine tables](#example-time-spine-tables) to create one. If the relevant model file (`util/_models.yml`) doesn't exist, create it and add the configuration mentioned in the [next section](#creating-a-time-spine-table). 
 
 Some things to note when configuring time spine models:
 
@@ -34,9 +33,9 @@ To see the generated SQL for the metric and dimension types that use time spine
 - If you're looking to specify the grain of a time dimension so that MetricFlow can transform the underlying column to the required granularity, refer to the [Time granularity documentation](/docs/build/dimensions?dimension=time_gran)
 
 :::tip
-If you previously used a model called `metricflow_time_spine`, you no longer need to create this specific model. You can now configure MetricFlow to use any date dimension or time spine table already in your project by updating the `model` setting in the Semantic Layer.
-
-If you don’t have a date dimension table, you can still create one by using the code snippet in the [next section](#creating-a-time-spine-table) to build your time spine model.
+- If you previously used a `metricflow_time_spine.sql` model, you can delete it after configuring the `time_spine` property in YAML. The Semantic Layer automatically recognizes the new configuration; no additional `.yml` files are needed beyond the one that holds the `time_spine` property.
+- You can also configure MetricFlow to use any date dimension or time spine table already in your project by updating the `model` setting in the Semantic Layer.
+- If you don’t have a date dimension table, you can still create one by using the code snippet in the [next section](#creating-a-time-spine-table) to build your time spine model; a minimal sketch also follows below.
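+
+  The sketch below shows how small such a model can be. It uses the cross-database `dbt.date_spine` macro; the date range is illustrative, so adjust it to cover your data:
+
+  ```sql
+  {{ config(materialized='table') }}
+
+  with days as (
+      -- Generates one row per day between the two boundary dates
+      {{ dbt.date_spine(
+          'day',
+          "cast('2020-01-01' as date)",
+          "cast('2040-01-01' as date)"
+      ) }}
+  )
+
+  select cast(date_day as date) as date_day
+  from days
+  ```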
 :::
 
 ### Creating a time spine table
@@ -112,9 +111,37 @@ models:
 
 For an example project, refer to our [Jaffle shop](https://github.com/dbt-labs/jaffle-sl-template/blob/main/models/marts/_models.yml) example.
 
+### Migrating from SQL to YAML
+If your project already includes a time spine (`metricflow_time_spine.sql`), you can migrate its configuration to YAML to resolve any deprecation warnings you may see.
+
+1. Add the following configuration to a new or existing YAML file using the [`models` key](/reference/model-properties) for the time spine in your `models/` directory. Name the YAML file whatever you want (for example, `util/_models.yml`):
+
+   ```yaml
+   models:
+     - name: all_days
+       description: A time spine with one row per day, ranging from 2020-01-01 to 2039-12-31.
+       time_spine:
+         standard_granularity_column: date_day # Column for the standard grain of your table
+       columns:
+         - name: date_day
+           granularity: day # Set the granularity of the column
+   ```
+
+2. After adding the YAML configuration, delete the existing `metricflow_time_spine.sql` file from your project so the deprecated SQL model and the new YAML configuration don't conflict.
+
+3. Test the configuration to ensure compatibility with your production jobs.
+
+Note that if you're migrating from a `metricflow_time_spine.sql` file:
+
+- The `time_spine` property in YAML, shown in the previous example, replaces the SQL file's functionality.
+- Once configured, MetricFlow recognizes the YAML settings and the SQL model file can be safely removed.
+
 ### Considerations when choosing which granularities to create{#granularity-considerations}
 
-- MetricFlow will use the time spine with the largest compatible granularity for a given query to ensure the most efficient query possible. For example, if you have a time spine at a monthly grain, and query a dimension at a monthly grain, MetricFlow will use the monthly time spine. If you only have a daily time spine, MetricFlow will use the daily time spine and date_trunc to month.
+- MetricFlow will use the time spine with the largest compatible granularity for a given query to ensure the most efficient query possible. For example, if you have a time spine at a monthly grain and query a dimension at a monthly grain, MetricFlow will use the monthly time spine. If you only have a daily time spine, MetricFlow will use the daily time spine and `date_trunc` to month.
 - You can add a time spine for each granularity you intend to use if query efficiency is more important to you than configuration time or storage constraints. For most engines, the query performance difference should be minimal, and transforming your time spine to a coarser grain at query time shouldn't add significant overhead to your queries.
 - We recommend having a time spine at the finest grain used in any of your dimensions to avoid unexpected errors. For example, if you have dimensions at an hourly grain, you should have a time spine at an hourly grain.
diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-okta.md b/website/docs/docs/cloud/manage-access/set-up-sso-okta.md
index 83c9f6492c6..9bc1b3d2683 100644
--- a/website/docs/docs/cloud/manage-access/set-up-sso-okta.md
+++ b/website/docs/docs/cloud/manage-access/set-up-sso-okta.md
@@ -75,6 +75,9 @@ so pick a slug that uniquely identifies your company.
 * **Single sign on URL**: `https://YOUR_AUTH0_URI/login/callback?connection=`
 * **Audience URI (SP Entity ID)**: `urn:auth0::{login slug}`
 * **Relay State**: ``
+* **Name ID format**: `Unspecified`
+* **Application username**: `Custom` / `user.getInternalProperty("id")`
+* **Update Application username on**: `Create and update`
 
-
-
 Use the **Attribute Statements** and **Group Attribute Statements** forms to map your organization's Okta User and Group Attributes to the format that dbt Cloud expects.
diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md b/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md
index ca93d81badf..96b87dee7a6 100644
--- a/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md
+++ b/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md
@@ -59,7 +59,9 @@ Additionally, you may configure the IdP attributes passed from your identity pro
 | email | Unspecified | user.email | The user's email address |
 | first_name | Unspecified | user.first_name | The user's first name |
 | last_name | Unspecified | user.last_name | The user's last name |
-| NameID (if applicable) | Unspecified | user.email | The user's email address |
+| NameID | Unspecified | ID | The user's unchanging ID |
+
+`NameID` values can use the persistent format (`urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`) rather than unspecified if your IdP supports it. Using an email address for `NameID` will work, but dbt Cloud creates an entirely new user if that email address changes. As a best practice, configure a value that won't change even if the user's email address does.
 
 dbt Cloud's [role-based access control](/docs/cloud/manage-access/about-user-access#role-based-access-control) relies on group mappings from the IdP to assign dbt Cloud users to dbt Cloud groups. To
@@ -144,6 +146,9 @@ Login slugs must be unique across all dbt Cloud accounts, so pick a slug that un
 * **Single sign on URL**: `https://YOUR_AUTH0_URI/login/callback?connection=`
 * **Audience URI (SP Entity ID)**: `urn:auth0::`
 * **Relay State**: ``
+ * **Name ID format**: `Unspecified`
+ * **Application username**: `Custom` / `user.getInternalProperty("id")`
+ * **Update Application username on**: `Create and update`
@@ -245,7 +250,7 @@ Login slugs must be unique across all dbt Cloud accounts, so pick a slug that un
 * **Audience URI (SP Entity ID)**: `urn:auth0::`
 - **Start URL**: ``
 5. Select the **Signed response** checkbox.
-6. The default **Name ID** is the primary email. Multi-value input is not supported.
+6. The default **Name ID** is the primary email. Multi-value input is not supported. If your user profile has a unique, stable value that persists across email address changes, it's best to use that; otherwise, email will work.
 7. Use the **Attribute mapping** page to map your organization's Google Directory Attributes to the format that dbt Cloud expects.
 8. Click **Add another mapping** to map additional attributes.
@@ -329,9 +334,11 @@ Follow these steps to set up single sign-on (SSO) with dbt Cloud:
 From the Set up Single Sign-On with SAML page:
 
 1. Click **Edit** in the User Attributes & Claims section.
-2. Leave the claim under "Required claim" as is.
-3. Delete all claims under "Additional claims."
-4. Click **Add new claim** and add these three new claims:
+2. Click **Unique User Identifier (Name ID)** under **Required claim**.
+3. Set **Name identifier format** to **Unspecified**.
+4. Set **Source attribute** to **user.objectid**.
+5. Delete all claims under **Additional claims**.
+6. Click **Add new claim** and add the following new claims:
 
 | Name | Source attribute |
 | ----- | ----- |
 | **email** | user.mail |
 | **first_name** | user.givenname |
 | **last_name** | user.surname |
 
-5. Click **Add a group claim** from User Attributes and Claims.
-6. If you'll assign users directly to the enterprise application, select **Security Groups**. If not, select **Groups assigned to the application**.
-7. Set **Source attribute** to **Group ID**.
-8. Under **Advanced options**, check **Customize the name of the group claim** and specify **Name** to **groups**.
+7. Click **Add a group claim** from **User Attributes and Claims**.
+8. If you assign users directly to the enterprise application, select **Security Groups**. If not, select **Groups assigned to the application**.
+9. Set **Source attribute** to **Group ID**.
+10. Under **Advanced options**, check **Customize the name of the group claim** and set **Name** to **groups**.
 
 **Note:** Keep in mind that the Group ID in Entra ID maps to that group's GUID. It should be specified in lowercase for the mappings to work as expected. The **Source attribute** field can alternatively be set to a different value of your preference.
@@ -386,7 +393,7 @@ We recommend using the following values:
 
 | name | name format | value |
 | ---- | ----------- | ----- |
-| NameID | Unspecified | Email |
+| NameID | Unspecified | OneLogin ID |
 | email | Unspecified | Email |
 | first_name | Unspecified | First Name |
 | last_name | Unspecified | Last Name |
diff --git a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
index 2a4a9d96528..6009fc4c73a 100644
--- a/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
+++ b/website/docs/docs/dbt-versions/core-upgrade/06-upgrading-to-v1.9.md
@@ -103,7 +103,8 @@ You can read more about each of these behavior changes in the following links:
 
 ### Snowflake
 
-- Iceberg Table Format support will be available on three out-of-the-box materializations: table, incremental, dynamic tables.
+- Iceberg Table Format — Support will be available on three out-of-the-box materializations: table, incremental, and dynamic tables.
+- Breaking change — When upgrading from dbt 1.8 to 1.9, `{{ target.account }}` replaces underscores with dashes. For example, if `target.account` is set to `sample_company`, the compiled code now generates `sample-company`. [Refer to the `dbt-snowflake` issue](https://github.com/dbt-labs/dbt-snowflake/issues/1286) for more information.
 
 ### Bigquery
diff --git a/website/docs/docs/deploy/webhooks.md b/website/docs/docs/deploy/webhooks.md
index 4ff9c350344..5aa8abbe41f 100644
--- a/website/docs/docs/deploy/webhooks.md
+++ b/website/docs/docs/deploy/webhooks.md
@@ -36,17 +36,23 @@ You can also check out the free [dbt Fundamentals course](https://learn.getdbt.c
 
 ## Create a webhook subscription {#create-a-webhook-subscription}
 
-Navigate to **Account settings** in dbt Cloud (by clicking your account name from the left side panel), and click **Create New Webhook** in the **Webhooks** section. You can find the appropriate dbt Cloud access URL for your region and plan with [Regions & IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses).
-
-To configure your new webhook:
-
-- **Name** — Enter a name for your outbound webhook.
-- **Description** — Enter a description of the webhook.
-- **Events** — Choose the event you want to trigger this webhook. You can subscribe to more than one event.
-- **Jobs** — Specify the job(s) you want the webhook to trigger on. Or, you can leave this field empty for the webhook to trigger on all jobs in your account. By default, dbt Cloud configures your webhook at the account level.
-- **Endpoint** — Enter your application's endpoint URL, where dbt Cloud can send the event(s) to.
+1. Navigate to **Account settings** in dbt Cloud (by clicking your account name from the left side panel).
+2. Go to the **Webhooks** section and click **Create webhook**.
+3. To configure your new webhook:
+   - **Webhook name** — Enter a name for your outbound webhook.
+   - **Description** — Enter a description of the webhook.
+   - **Events** — Choose the event you want to trigger this webhook. You can subscribe to more than one event.
+   - **Jobs** — Specify the job(s) you want the webhook to trigger on. Or, you can leave this field empty for the webhook to trigger on all jobs in your account. By default, dbt Cloud configures your webhook at the account level.
+   - **Endpoint** — Enter your application's endpoint URL, where dbt Cloud can send the event(s) to.
+4. When done, click **Save**.
+
+   dbt Cloud provides a secret token that you can use to [check the authenticity of a webhook](#validate-a-webhook). It’s strongly recommended that you perform this check on your server to protect yourself from fake (spoofed) requests.
+
+:::info
+dbt Cloud automatically deactivates a webhook after 5 consecutive failed attempts to send events to your endpoint. To reactivate the webhook, locate it in the webhooks list and click the reactivate button; it will then resume receiving events.
+:::
 
-When done, click **Save**. dbt Cloud provides a secret token that you can use to [check for the authenticity of a webhook](#validate-a-webhook). It’s strongly recommended that you perform this check on your server to protect yourself from fake (spoofed) requests.
+To find the appropriate dbt Cloud access URL for your region and plan, refer to [Regions & IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses).
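+
+As a quick illustration of the authenticity check mentioned in step 4, the sketch below recomputes an HMAC-SHA256 hex digest of the raw request body and compares it to the signature dbt Cloud sends. The environment variable name and the assumption that the signature arrives in the `Authorization` header are illustrative; see [Validate a webhook](#validate-a-webhook) for the authoritative details.
+
+```python
+import hashlib
+import hmac
+import os
+
+def is_valid_webhook(request_body: bytes, auth_header: str) -> bool:
+    # Recompute the HMAC-SHA256 hex digest of the raw request body,
+    # keyed with the secret token dbt Cloud issued for this webhook.
+    app_secret = os.environ["DBT_CLOUD_AUTH_TOKEN"].encode("utf-8")
+    signature = hmac.new(app_secret, request_body, hashlib.sha256).hexdigest()
+    # Compare in constant time to avoid leaking timing information.
+    return hmac.compare_digest(signature, auth_header)
+```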
 
 ### Differences between completed and errored webhook events {#completed-errored-event-difference}
 
 The `job.run.errored` event is a subset of the `job.run.completed` events. If you subscribe to both, you will receive two notifications when your job encounters an error. However, dbt Cloud triggers the two events at different times:
diff --git a/website/docs/guides/snowflake-qs.md b/website/docs/guides/snowflake-qs.md
index 40bdeed1ef2..18e77ce050c 100644
--- a/website/docs/guides/snowflake-qs.md
+++ b/website/docs/guides/snowflake-qs.md
@@ -46,7 +46,7 @@ You can also watch the [YouTube video on dbt and Snowflake](https://www.youtube.
 
 ## Create a new Snowflake worksheet
 
 1. Log in to your trial Snowflake account.
-2. In the Snowflake UI, click **+ Worksheet** in the upper right corner to create a new worksheet.
+2. In the Snowflake UI, click **+ Create** in the upper-left corner (underneath the Snowflake logo), which opens a dropdown. Select the first option, **SQL Worksheet**.
 
 ## Load data
 
 The data used here is stored as CSV files in a public S3 bucket. The following steps will guide you through preparing your Snowflake account for that data and loading it.
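+
+As a preview of what those steps amount to, the sketch below creates a database and schema, defines a table, and runs `COPY INTO` from the public bucket. The database, table, and file names follow the Jaffle Shop tutorial data and are illustrative here; the steps that follow provide the authoritative statements.
+
+```sql
+-- Create a home for the raw tutorial data
+create database raw;
+create schema raw.jaffle_shop;
+
+create table raw.jaffle_shop.customers (
+    id integer,
+    first_name varchar,
+    last_name varchar
+);
+
+-- Load the CSV directly from the public S3 bucket
+copy into raw.jaffle_shop.customers (id, first_name, last_name)
+from 's3://dbt-tutorial-public/jaffle_shop_customers.csv'
+file_format = (
+    type = 'CSV'
+    field_delimiter = ','
+    skip_header = 1
+);
+```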
diff --git a/website/docs/reference/artifacts/run-results-json.md b/website/docs/reference/artifacts/run-results-json.md index 13ad528d185..118b5615ea8 100644 --- a/website/docs/reference/artifacts/run-results-json.md +++ b/website/docs/reference/artifacts/run-results-json.md @@ -3,14 +3,17 @@ title: "Run results JSON file" sidebar_label: "Run results" --- -**Current schema**: [`v5`](https://schemas.getdbt.com/dbt/run-results/v5/index.html) +**Current schema**: [`v6`](https://schemas.getdbt.com/dbt/run-results/v6/index.html) **Produced by:** [`build`](/reference/commands/build) + [`clone`](/reference/commands/clone) [`compile`](/reference/commands/compile) [`docs generate`](/reference/commands/cmd-docs) + [`retry`](/reference/commands/retry) [`run`](/reference/commands/run) [`seed`](/reference/commands/seed) + [`show`](/reference/commands/show) [`snapshot`](/reference/commands/snapshot) [`test`](/reference/commands/test) [`run-operation`](/reference/commands/run-operation) diff --git a/website/static/img/docs/dbt-cloud/dbt-cloud-enterprise/okta/okta-3-saml-settings-top.png b/website/static/img/docs/dbt-cloud/dbt-cloud-enterprise/okta/okta-3-saml-settings-top.png index b8b11f6ea00..7494972d4f6 100644 Binary files a/website/static/img/docs/dbt-cloud/dbt-cloud-enterprise/okta/okta-3-saml-settings-top.png and b/website/static/img/docs/dbt-cloud/dbt-cloud-enterprise/okta/okta-3-saml-settings-top.png differ