This branch was auto-updated!
github-actions[bot] authored Jan 11, 2025
2 parents 6b7ad46 + 95dfba3 commit ba33dfe
Showing 8 changed files with 76 additions and 32 deletions.
39 changes: 33 additions & 6 deletions website/docs/docs/build/metricflow-time-spine.md
@@ -7,7 +7,6 @@ tags: [Metrics, Semantic Layer]
---
<VersionBlock firstVersion="1.9">

<!-- this whole section is for 1.9 and higher + Release Tracks -->

It's common in analytics engineering to have a date dimension or "time spine" table as a base table for different types of time-based joins and aggregations. The structure of this table is typically a base column of daily or hourly dates, with additional columns for other time grains, like fiscal quarters, defined based on the base column. You can join other tables to the time spine on the base column to calculate metrics like revenue at a point in time, or to aggregate to a specific time grain.
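As an illustration of this pattern, a daily time spine can be built as an ordinary dbt model. The sketch below is not the documented example; it assumes the `dbt_utils` package is installed and uses a hypothetical model name (`all_days`) and date range:

```sql
-- models/all_days.sql (hypothetical model name and date range)
-- One row per day; coarser grains (month, quarter, year) can be derived from date_day.
{{ config(materialized='table') }}

with base_dates as (

    {{ dbt_utils.date_spine(
        datepart="day",
        start_date="cast('2020-01-01' as date)",
        end_date="cast('2040-01-01' as date)"
    ) }}

)

select
    cast(date_day as date) as date_day
from base_dates
```

The YAML configuration described in the following section then marks `date_day` as the standard granularity column so MetricFlow knows how to join to it.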

@@ -23,7 +22,7 @@ To see the generated SQL for the metric and dimension types that use time spine

## Configuring time spine in YAML

Time spine models are normal dbt models with extra configurations that tell dbt and MetricFlow how to use specific columns by defining their properties. Add the [`models` key](/reference/model-properties) for the time spine in your `models/` directory. If your project already includes a calendar table or date dimension, you can configure that table as a time spine. Otherwise, review the [example time-spine tables](#example-time-spine-tables) to create one.
Time spine models are normal dbt models with extra configurations that tell dbt and MetricFlow how to use specific columns by defining their properties. Add the [`models` key](/reference/model-properties) for the time spine in your `models/` directory. If your project already includes a calendar table or date dimension, you can configure that table as a time spine. Otherwise, review the [example time-spine tables](#example-time-spine-tables) to create one. If the relevant model file (`util/_models.yml`) doesn't exist, create it and add the configuration mentioned in the [next section](#creating-a-time-spine-table).

Some things to note when configuring time spine models:

@@ -34,9 +33,9 @@ To see the generated SQL for the metric and dimension types that use time spine
- If you're looking to specify the grain of a time dimension so that MetricFlow can transform the underlying column to the required granularity, refer to the [Time granularity documentation](/docs/build/dimensions?dimension=time_gran)

:::tip
If you previously used a model called `metricflow_time_spine`, you no longer need to create this specific model. You can now configure MetricFlow to use any date dimension or time spine table already in your project by updating the `model` setting in the Semantic Layer.

If you don’t have a date dimension table, you can still create one by using the code snippet in the [next section](#creating-a-time-spine-table) to build your time spine model.
- If you previously used a `metricflow_time_spine.sql` model, you can delete it after configuring the `time_spine` property in YAML. The Semantic Layer automatically recognizes the new configuration. No additional `.yml` files are needed.
- You can also configure MetricFlow to use any date dimension or time spine table already in your project by updating the `model` setting in the Semantic Layer.
- If you don’t have a date dimension table, you can still create one by using the code snippet in the [next section](#creating-a-time-spine-table) to build your time spine model.
:::

### Creating a time spine table
@@ -112,9 +111,37 @@ models:

For an example project, refer to our [Jaffle shop](https://github.com/dbt-labs/jaffle-sl-template/blob/main/models/marts/_models.yml) example.

### Migrating from SQL to YAML
If your project already includes a time spine (`metricflow_time_spine.sql`), you can migrate its configuration to YAML to address any deprecation warnings you may get.

1. Add the following configuration to a new or existing YAML file using the [`models` key](/reference/model-properties) for the time spine in your `models/` directory. Name the YAML file whatever you want (for example, `util/_models.yml`):

<File name="models/_models.yml">

```yaml
models:
  - name: all_days
    description: A time spine with one row per day, ranging from 2020-01-01 to 2039-12-31.
    time_spine:
      standard_granularity_column: date_day # Column for the standard grain of your table
    columns:
      - name: date_day
        granularity: day # Set the granularity of the column
```
</File>

2. After adding the YAML configuration, delete the existing `metricflow_time_spine.sql` file from your project to avoid any issues.

3. Test the configuration to ensure compatibility with your production jobs.

Note that if you're migrating from a `metricflow_time_spine.sql` file:

- Replace its functionality by adding the `time_spine` property to YAML as shown in the previous example.
- Once configured, MetricFlow will recognize the YAML settings, and then the SQL model file can be safely removed.

### Considerations when choosing which granularities to create{#granularity-considerations}

- MetricFlow will use the time spine with the largest compatible granularity for a given query to ensure the most efficient query possible. For example, if you have a time spine at a monthly grain, and query a dimension at a monthly grain, MetricFlow will use the monthly time spine. If you only have a daily time spine, MetricFlow will use the daily time spine and date_trunc to month.
- MetricFlow will use the time spine with the largest compatible granularity for a given query to ensure the most efficient query possible. For example, if you have a time spine at a monthly grain, and query a dimension at a monthly grain, MetricFlow will use the monthly time spine. If you only have a daily time spine, MetricFlow will use the daily time spine and `date_trunc` to month.
- You can add a time spine for each granularity you intend to use if query efficiency is more important to you than configuration time or storage constraints (see the sketch after this list). For most engines, the query performance difference should be minimal and transforming your time spine to a coarser grain at query time shouldn't add significant overhead to your queries.
- We recommend having a time spine at the finest grain used in any of your dimensions to avoid unexpected errors. For example, if you have dimensions at an hourly grain, you should have a time spine at an hourly grain.
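
For example, if you regularly query at a monthly grain, you could materialize a coarser spine derived from the daily one. This is a minimal sketch, assuming a daily spine model named `all_days` with a `date_day` column; the exact `date_trunc` syntax may vary by warehouse:

```sql
-- models/all_months.sql (hypothetical model name)
-- A monthly time spine derived from the daily spine, so monthly queries
-- don't have to truncate the daily table at query time.
select distinct
    date_trunc('month', date_day) as date_month
from {{ ref('all_days') }}
```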

6 changes: 3 additions & 3 deletions website/docs/docs/cloud/manage-access/set-up-sso-okta.md
@@ -75,16 +75,16 @@ so pick a slug that uniquely identifies your company.
* **Single sign on URL**: `https://YOUR_AUTH0_URI/login/callback?connection=<login slug>`
* **Audience URI (SP Entity ID)**: `urn:auth0:<YOUR_AUTH0_ENTITYID>:{login slug}`
* **Relay State**: `<login slug>`
* **Name ID format**: `Unspecified`
* **Application username**: `Custom` / `user.getInternalProperty("id")`
* **Update Application username on**: `Create and update`

<Lightbox
collapsed={false}
src="/img/docs/dbt-cloud/dbt-cloud-enterprise/okta/okta-3-saml-settings-top.png"
title="Configure the app's SAML Settings"
/>

<!-- TODO : Will users need to change the Name ID format and Application
username on this screen? -->

Use the **Attribute Statements** and **Group Attribute Statements** forms to
map your organization's Okta User and Group Attributes to the format that
dbt Cloud expects.
27 changes: 17 additions & 10 deletions website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md
@@ -59,7 +59,9 @@ Additionally, you may configure the IdP attributes passed from your identity provider
| email | Unspecified | user.email | The user's email address |
| first_name | Unspecified | user.first_name | The user's first name |
| last_name | Unspecified | user.last_name | The user's last name |
| NameID (if applicable) | Unspecified | user.email | The user's email address |
| NameID | Unspecified | ID | The user's unchanging ID |

`NameID` values can be persistent (`urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`) rather than unspecified if your IdP supports these values. Using an email address for `NameID` will work, but dbt Cloud creates an entirely new user if that email address changes. Configuring a value that will not change, even if the user's email address does, is a best practice.

dbt Cloud's [role-based access control](/docs/cloud/manage-access/about-user-access#role-based-access-control) relies
on group mappings from the IdP to assign dbt Cloud users to dbt Cloud groups. To
@@ -144,6 +146,9 @@ Login slugs must be unique across all dbt Cloud accounts, so pick a slug that uniquely identifies your company.
* **Single sign on URL**: `https://YOUR_AUTH0_URI/login/callback?connection=<login slug>`
* **Audience URI (SP Entity ID)**: `urn:auth0:<YOUR_AUTH0_ENTITYID>:<login slug>`
* **Relay State**: `<login slug>`
* **Name ID format**: `Unspecified`
* **Application username**: `Custom` / `user.getInternalProperty("id")`
* **Update Application username on**: `Create and update`

<Lightbox collapsed={false} src="/img/docs/dbt-cloud/dbt-cloud-enterprise/okta/okta-3-saml-settings-top.png" title="Configure the app's SAML Settings"/>

@@ -245,7 +250,7 @@ Login slugs must be unique across all dbt Cloud accounts, so pick a slug that uniquely identifies your company.
* **Audience URI (SP Entity ID)**: `urn:auth0:<YOUR_AUTH0_ENTITYID>:<login slug>`
- **Start URL**: `<login slug>`
5. Select the **Signed response** checkbox.
6. The default **Name ID** is the primary email. Multi-value input is not supported.
6. The default **Name ID** is the primary email. Multi-value input is not supported. If your user profile has a unique, stable value that will persist across email address changes, it's best to use that; otherwise, email will work.
7. Use the **Attribute mapping** page to map your organization's Google Directory Attributes to the format that
dbt Cloud expects.
8. Click **Add another mapping** to map additional attributes.
@@ -329,20 +334,22 @@ Follow these steps to set up single sign-on (SSO) with dbt Cloud:
From the Set up Single Sign-On with SAML page:

1. Click **Edit** in the User Attributes & Claims section.
2. Leave the claim under "Required claim" as is.
3. Delete all claims under "Additional claims."
4. Click **Add new claim** and add these three new claims:
2. Click **Unique User Identifier (Name ID)** under **Required claim.**
3. Set **Name identifier format** to **Unspecified**.
4. Set **Source attribute** to **user.objectid**.
5. Delete all claims under **Additional claims.**
6. Click **Add new claim** and add the following new claims:

| Name | Source attribute |
| ----- | ----- |
| **email** | user.mail |
| **first_name** | user.givenname |
| **last_name** | user.surname |

5. Click **Add a group claim** from User Attributes and Claims.
6. If you'll assign users directly to the enterprise application, select **Security Groups**. If not, select **Groups assigned to the application**.
7. Set **Source attribute** to **Group ID**.
8. Under **Advanced options**, check **Customize the name of the group claim** and specify **Name** to **groups**.
7. Click **Add a group claim** from **User Attributes and Claims.**
8. If you assign users directly to the enterprise application, select **Security Groups**. If not, select **Groups assigned to the application**.
9. Set **Source attribute** to **Group ID**.
10. Under **Advanced options**, check **Customize the name of the group claim** and specify **Name** to **groups**.

**Note:** Keep in mind that the Group ID in Entra ID maps to that group's GUID. It should be specified in lowercase for the mappings to work as expected. Alternatively, you can set the Source Attribute field to a different value of your preference.

@@ -386,7 +393,7 @@ We recommend using the following values:

| name | name format | value |
| ---- | ----------- | ----- |
| NameID | Unspecified | Email |
| NameID | Unspecified | OneLogin ID |
| email | Unspecified | Email |
| first_name | Unspecified | First Name |
| last_name | Unspecified | Last Name |
@@ -103,7 +103,8 @@ You can read more about each of these behavior changes in the following links:

### Snowflake

- Iceberg Table Format support will be available on three out-of-the-box materializations: table, incremental, dynamic tables.
- Iceberg Table Format &mdash; Support will be available on three out-of-the-box materializations: table, incremental, dynamic tables.
- Breaking change &mdash; When upgrading from dbt 1.8 to 1.9, `{{ target.account }}` replaces underscores with dashes. For example, if `target.account` is set to `sample_company`, then the compiled code now generates `sample-company`. [Refer to the `dbt-snowflake` issue](https://github.com/dbt-labs/dbt-snowflake/issues/1286) for more information.
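
To make the Iceberg support concrete, a table materialization can opt into the Iceberg table format through its model config. The sketch below is illustrative only; it assumes an external volume named `my_external_volume` already exists in your Snowflake account and that a `stg_orders` model exists in your project &mdash; refer to the dbt-snowflake documentation for the full set of required configurations:

```sql
-- models/orders_iceberg.sql (hypothetical model and upstream ref)
-- Materializes the model as a Snowflake-managed Iceberg table.
{{ config(
    materialized = 'table',
    table_format = 'iceberg',
    external_volume = 'my_external_volume'
) }}

select * from {{ ref('stg_orders') }}
```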
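To make the `target.account` change concrete, the hypothetical model below interpolates the account value; the comments show how the compiled output differs between versions, assuming `account: sample_company` in the connection profile:

```sql
-- models/account_check.sql (hypothetical)
-- With account set to 'sample_company':
--   dbt 1.8 compiles the literal below to 'sample_company'
--   dbt 1.9 compiles it to 'sample-company' (underscores replaced with dashes)
select '{{ target.account }}' as account_identifier
```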

### BigQuery

26 changes: 16 additions & 10 deletions website/docs/docs/deploy/webhooks.md
@@ -36,17 +36,23 @@ You can also check out the free [dbt Fundamentals course](https://learn.getdbt.c

## Create a webhook subscription {#create-a-webhook-subscription}

Navigate to **Account settings** in dbt Cloud (by clicking your account name from the left side panel), and click **Create New Webhook** in the **Webhooks** section. You can find the appropriate dbt Cloud access URL for your region and plan with [Regions & IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses).

To configure your new webhook:

- **Name** &mdash; Enter a name for your outbound webhook.
- **Description** &mdash; Enter a description of the webhook.
- **Events** &mdash; Choose the event you want to trigger this webhook. You can subscribe to more than one event.
- **Jobs** &mdash; Specify the job(s) you want the webhook to trigger on. Or, you can leave this field empty for the webhook to trigger on all jobs in your account. By default, dbt Cloud configures your webhook at the account level.
- **Endpoint** &mdash; Enter your application's endpoint URL, where dbt Cloud can send the event(s) to.
1. Navigate to **Account settings** in dbt Cloud (by clicking your account name from the left side panel)
2. Go to the **Webhooks** section and click **Create webhook**.
3. To configure your new webhook:
- **Webhook name** &mdash; Enter a name for your outbound webhook.
- **Description** &mdash; Enter a description of the webhook.
- **Events** &mdash; Choose the event you want to trigger this webhook. You can subscribe to more than one event.
- **Jobs** &mdash; Specify the job(s) you want the webhook to trigger on. Or, you can leave this field empty for the webhook to trigger on all jobs in your account. By default, dbt Cloud configures your webhook at the account level.
- **Endpoint** &mdash; Enter your application's endpoint URL, where dbt Cloud can send the event(s) to.
4. When done, click **Save**.

dbt Cloud provides a secret token that you can use to [check for the authenticity of a webhook](#validate-a-webhook). It’s strongly recommended that you perform this check on your server to protect yourself from fake (spoofed) requests.

:::info
Note that dbt Cloud automatically deactivates a webhook after 5 consecutive failed attempts to send events to your endpoint. To re-activate the webhook, locate it in the webhooks list and click the reactivate button to enable it and continue receiving events.
:::

When done, click **Save**. dbt Cloud provides a secret token that you can use to [check for the authenticity of a webhook](#validate-a-webhook). It’s strongly recommended that you perform this check on your server to protect yourself from fake (spoofed) requests.
To find the appropriate dbt Cloud access URL for your region and plan, refer to [Regions & IP addresses](/docs/cloud/about-cloud/access-regions-ip-addresses).

### Differences between completed and errored webhook events {#completed-errored-event-difference}
The `job.run.errored` event is a subset of the `job.run.completed` events. If you subscribe to both, you will receive two notifications when your job encounters an error. However, dbt Cloud triggers the two events at different times:
2 changes: 1 addition & 1 deletion website/docs/guides/snowflake-qs.md
@@ -46,7 +46,7 @@ You can also watch the [YouTube video on dbt and Snowflake](https://www.youtube.

## Create a new Snowflake worksheet
1. Log in to your trial Snowflake account.
2. In the Snowflake UI, click **+ Worksheet** in the upper right corner to create a new worksheet.
2. In the Snowflake UI, click **+ Create** in the left-hand corner, underneath the Snowflake logo, which opens a dropdown. Select the first option, **SQL Worksheet**.

## Load data
The data used here is stored as CSV files in a public S3 bucket, and the following steps will guide you through preparing your Snowflake account for that data and uploading it.
5 changes: 4 additions & 1 deletion website/docs/reference/artifacts/run-results-json.md
@@ -3,14 +3,17 @@ title: "Run results JSON file"
sidebar_label: "Run results"
---

**Current schema**: [`v5`](https://schemas.getdbt.com/dbt/run-results/v5/index.html)
**Current schema**: [`v6`](https://schemas.getdbt.com/dbt/run-results/v6/index.html)

**Produced by:**
[`build`](/reference/commands/build)
[`clone`](/reference/commands/clone)
[`compile`](/reference/commands/compile)
[`docs generate`](/reference/commands/cmd-docs)
[`retry`](/reference/commands/retry)
[`run`](/reference/commands/run)
[`seed`](/reference/commands/seed)
[`show`](/reference/commands/show)
[`snapshot`](/reference/commands/snapshot)
[`test`](/reference/commands/test)
[`run-operation`](/reference/commands/run-operation)
