If you receive a schema-related error message referencing a previous PR, this usually indicates that your CI job is deferring to itself (self) rather than to a production job. If the prior PR has already been merged, its schema may have been dropped by the time the CI job for the current PR kicks off.
diff --git a/website/docs/docs/deploy/retry-jobs.md b/website/docs/docs/deploy/retry-jobs.md
index f439351aec5..4e3ad0d429f 100644
--- a/website/docs/docs/deploy/retry-jobs.md
+++ b/website/docs/docs/deploy/retry-jobs.md
@@ -10,6 +10,7 @@ If your dbt job run completed with a status of **Error**, you can rerun it from
- You have a [dbt Cloud account](https://www.getdbt.com/signup).
- You must be using [dbt version](/docs/dbt-versions/upgrade-dbt-version-in-cloud) 1.6 or newer.
+- dbt can successfully parse the project and generate a [manifest](/reference/artifacts/manifest-json).
- The most recent run of the job hasn't completed successfully. The latest status of the run is **Error**.
- The job command that failed in the run must be one that supports the [retry command](/reference/commands/retry).
diff --git a/website/docs/docs/get-started-dbt.md b/website/docs/docs/get-started-dbt.md
index 428253ec139..1920a9b3da2 100644
--- a/website/docs/docs/get-started-dbt.md
+++ b/website/docs/docs/get-started-dbt.md
@@ -6,7 +6,7 @@ pagination_next: null
pagination_prev: null
---
-Begin your dbt journey by trying one of our quickstarts, which provides a step-by-step guide to help you set up dbt Cloud or dbt Core with a [variety of data platforms](/docs/cloud/connect-data-platform/about-connections).
+Begin your dbt journey by trying one of our quickstarts, which provides a step-by-step guide to help you set up [dbt Cloud](#dbt-cloud) or [dbt Core](#dbt-core) with a [variety of data platforms](/docs/cloud/connect-data-platform/about-connections).
## dbt Cloud
@@ -76,13 +76,23 @@ Learn more about [dbt Cloud features](/docs/cloud/about-cloud/dbt-cloud-feature
[dbt Core](/docs/core/about-core-setup) is a command-line [open-source tool](https://github.com/dbt-labs/dbt-core) that enables data practitioners to transform data using analytics engineering best practices. It suits individuals and small technical teams who prefer manual setup and customization, supports community adapters, and open-source standards.
-Refer to the following quickstarts to get started with dbt Core:
+
+
+
-- [dbt Core from a manual install](/guides/manual-install) to learn how to install dbt Core and set up a project.
-- [dbt Core using GitHub Codespace](/guides/codespace?step=1) to learn how to create a codespace and execute the `dbt build` command.
+
+
## Related docs
-
+
Expand your dbt knowledge and expertise with these additional resources:
- [Join the bi-weekly demos](https://www.getdbt.com/resources/webinars/dbt-cloud-demos-with-experts) to see dbt Cloud in action and ask questions.
diff --git a/website/docs/faqs/Troubleshooting/error-importing-repo.md b/website/docs/faqs/Troubleshooting/error-importing-repo.md
new file mode 100644
index 00000000000..85c9ffb0745
--- /dev/null
+++ b/website/docs/faqs/Troubleshooting/error-importing-repo.md
@@ -0,0 +1,14 @@
+---
+title: Errors importing a repository during dbt Cloud project setup
+description: "Errors importing a repository during dbt Cloud project setup"
+sidebar_label: 'Errors importing a repository during dbt Cloud project setup'
+id: error-importing-repo
+---
+
+If you don't see your repository listed, double-check that:
+- Your repository is in a GitLab group you have access to. dbt Cloud will not read repositories associated with a user account.
+
+If you do see your repository listed, but are unable to import the repository successfully, double-check that:
+- You are a maintainer of that repository. Only users with maintainer permissions can set up repository connections.
+
+If you imported a repository using dbt Cloud's native GitLab integration, you can confirm this by checking whether the clone strategy uses a `deploy_token`. If it relies on an SSH key instead, the repository was set up with the generic git clone option rather than the native integration, and it must be reconnected through the native integration to get its benefits.
diff --git a/website/docs/faqs/Troubleshooting/gitlab-webhook.md b/website/docs/faqs/Troubleshooting/gitlab-webhook.md
new file mode 100644
index 00000000000..450796db83e
--- /dev/null
+++ b/website/docs/faqs/Troubleshooting/gitlab-webhook.md
@@ -0,0 +1,19 @@
+---
+title: Unable to trigger a CI job with GitLab
+description: "Unable to trigger a CI job"
+sidebar_label: 'Unable to trigger a CI job'
+id: gitlab-webhook
+---
+
+When you connect dbt Cloud to a GitLab repository, GitLab automatically registers a webhook in the background, viewable under the repository settings. This webhook is also used to trigger [CI jobs](/docs/deploy/ci-jobs) when you open a merge request or push new commits to it.
+
+If you're unable to trigger a CI job, this usually indicates that the webhook registration is missing or incorrect.
+
+To resolve this issue, view the webhook registrations in GitLab by navigating to the repository's **Settings** --> **Webhooks**.
+
+Some things to check:
+
+- The webhook registration is enabled in GitLab.
+- The webhook registration is configured with the correct URL and secret.
+
+If you're still experiencing this issue, reach out to the Support team at support@getdbt.com and we'll be happy to help!
diff --git a/website/docs/guides/mesh-qs.md b/website/docs/guides/mesh-qs.md
index 9a7aa8b0ce0..d81951c9669 100644
--- a/website/docs/guides/mesh-qs.md
+++ b/website/docs/guides/mesh-qs.md
@@ -94,7 +94,7 @@ To set a production environment:
6. Click **Test Connection** to confirm the deployment connection.
6. Click **Save** to create a production environment.
-
+
## Set up a foundational project
diff --git a/website/docs/reference/database-permissions/snowflake-permissions.md b/website/docs/reference/database-permissions/snowflake-permissions.md
index 3f474242834..1ab35e46d26 100644
--- a/website/docs/reference/database-permissions/snowflake-permissions.md
+++ b/website/docs/reference/database-permissions/snowflake-permissions.md
@@ -83,6 +83,7 @@ grant role reporter to user looker_user; -- or mode_user, periscope_user
```
5. Let loader load data
+
Give the role unilateral permission to operate on the raw database
```
use role sysadmin;
@@ -90,6 +91,7 @@ grant all on database raw to role loader;
```
6. Let transformer transform data
+
The transformer role needs to be able to read raw data.
If you do this before you have any data loaded, you can run:
@@ -110,6 +112,7 @@ transformer also needs to be able to create in the analytics database:
grant all on database analytics to role transformer;
```
7. Let reporter read the transformed data
+
A previous version of this article recommended this be implemented through hooks in dbt, but this way lets you get away with a one-off statement.
```
grant usage on database analytics to role reporter;
@@ -120,10 +123,11 @@ grant select on future views in database analytics to role reporter;
Again, if you already have data in your analytics database, make sure you run:
```
grant usage on all schemas in database analytics to role reporter;
-grant select on all tables in database analytics to role transformer;
-grant select on all views in database analytics to role transformer;
+grant select on all tables in database analytics to role reporter;
+grant select on all views in database analytics to role reporter;
```
8. Maintain
+
When new users are added, make sure you add them to the right role! Everything else should be inherited automatically thanks to those `future` grants.
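+
+For example, granting an existing role to a newly added user is a single statement (the username here is hypothetical):
+```
+grant role transformer to user new_data_engineer; -- or loader, reporter, depending on their responsibilities
+```
+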
For more discussion and legacy information, refer to [this Discourse article](https://discourse.getdbt.com/t/setting-up-snowflake-the-exact-grant-statements-we-run/439).
diff --git a/website/docs/reference/model-configs.md b/website/docs/reference/model-configs.md
index 9508cf68ceb..6c37b69758c 100644
--- a/website/docs/reference/model-configs.md
+++ b/website/docs/reference/model-configs.md
@@ -36,9 +36,11 @@ models:
[+](/reference/resource-configs/plus-prefix)[materialized](/reference/resource-configs/materialized):
[+](/reference/resource-configs/plus-prefix)[sql_header](/reference/resource-configs/sql_header):
[+](/reference/resource-configs/plus-prefix)[on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail #only for materialized views on supported adapters
+ [+](/reference/resource-configs/plus-prefix)[unique_key](/reference/resource-configs/unique_key):
```
+
@@ -57,6 +59,7 @@ models:
[materialized](/reference/resource-configs/materialized):
[sql_header](/reference/resource-configs/sql_header):
[on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail #only for materialized views on supported adapters
+ [unique_key](/reference/resource-configs/unique_key):
```
@@ -69,12 +72,13 @@ models:
-```jinja
+```sql
{{ config(
[materialized](/reference/resource-configs/materialized)="",
[sql_header](/reference/resource-configs/sql_header)=""
[on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail #only for materialized views for supported adapters
+ [unique_key](/reference/resource-configs/unique_key)='column_name_or_expression'
) }}
```
@@ -212,7 +216,7 @@ models:
-```jinja
+```sql
{{ config(
[enabled](/reference/resource-configs/enabled)=true | false,
@@ -233,7 +237,7 @@ models:
-```jinja
+```sql
{{ config(
[enabled](/reference/resource-configs/enabled)=true | false,
@@ -246,8 +250,9 @@ models:
[persist_docs](/reference/resource-configs/persist_docs)={},
[meta](/reference/resource-configs/meta)={},
[grants](/reference/resource-configs/grants)={},
- [contract](/reference/resource-configs/contract)={}
- [event_time](/reference/resource-configs/event-time): my_time_field
+ [contract](/reference/resource-configs/contract)={},
+ [event_time](/reference/resource-configs/event-time)='my_time_field',
+
) }}
```
diff --git a/website/docs/reference/node-selection/defer.md b/website/docs/reference/node-selection/defer.md
index 863494de12e..eddb1ece9d4 100644
--- a/website/docs/reference/node-selection/defer.md
+++ b/website/docs/reference/node-selection/defer.md
@@ -29,11 +29,12 @@ dbt test --models [...] --defer --state path/to/artifacts
-When the `--defer` flag is provided, dbt will resolve `ref` calls differently depending on two criteria:
-1. Is the referenced node included in the model selection criteria of the current run?
-2. Does the referenced node exist as a database object in the current environment?
+By default, dbt uses the [`target`](/reference/dbt-jinja-functions/target) namespace to resolve `ref` calls.
-If the answer to both is **no**—a node is not included _and_ it does not exist as a database object in the current environment—references to it will use the other namespace instead, provided by the state manifest.
+When `--defer` is enabled, dbt resolves ref calls using the state manifest instead, but only if:
+
+1. The node isn’t among the selected nodes, _and_
+2. It doesn’t exist in the database (or `--favor-state` is used).
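+
+For example, suppose `model_b` selects from `model_a`, and only `model_b` is selected in a run that uses `--defer --state prod-artifacts`. This is a sketch with illustrative model and schema names:
+
+```sql
+-- models/model_b.sql
+select *
+from {{ ref('model_a') }}
+-- model_a isn't selected, so if it also doesn't exist in the development schema,
+-- this ref resolves to the relation recorded in the state manifest
+-- (for example, analytics.prod.model_a instead of analytics.dev_jane.model_a).
+```
+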
Ephemeral models are never deferred, since they serve as "passthroughs" for other `ref` calls.
@@ -46,7 +47,7 @@ Deferral requires both `--defer` and `--state` to be set, either by passing flag
#### Favor state
-You can optionally skip the second criterion by passing the `--favor-state` flag. If passed, dbt will favor using the node defined in your `--state` namespace, even if the node exists in the current target.
+When `--favor-state` is passed, dbt prioritizes node definitions from the `--state` directory. However, this doesn't apply if the node is also part of the selected nodes.
### Example
diff --git a/website/docs/reference/project-configs/seed-paths.md b/website/docs/reference/project-configs/seed-paths.md
index d99c1b5a907..53e2902cae0 100644
--- a/website/docs/reference/project-configs/seed-paths.md
+++ b/website/docs/reference/project-configs/seed-paths.md
@@ -38,7 +38,7 @@ absolute="/Users/username/project/seed"
```
## Examples
-### Use a subdirectory named `custom_seeds` instead of `seeds`
+### Use a directory named `custom_seeds` instead of `seeds`
diff --git a/website/docs/reference/resource-configs/alias.md b/website/docs/reference/resource-configs/alias.md
index c14804ef2a7..5beaa238806 100644
--- a/website/docs/reference/resource-configs/alias.md
+++ b/website/docs/reference/resource-configs/alias.md
@@ -8,9 +8,11 @@ datatype: string
-Specify a custom alias for a model in your `dbt_project.yml` file or config block.
+Specify a custom alias for a model in your `dbt_project.yml` file, `models/properties.yml` file, or config block in a SQL file.
-For example, if you have a model that calculates `sales_total` and want to give it a more user-friendly alias, you can alias it like this:
+For example, if you have a model that calculates `sales_total` and want to give it a more user-friendly alias, you can alias it as shown in the following examples.
+
+In the `dbt_project.yml` file, the following example sets a default `alias` for the `sales_total` model at the project level:
@@ -22,16 +24,40 @@ models:
```
+The following specifies an `alias` as part of the `models/properties.yml` file metadata, useful for centralized configuration:
+
+
+
+```yml
+version: 2
+
+models:
+ - name: sales_total
+ config:
+ alias: sales_dashboard
+```
+
+
+The following assigns the `alias` directly in the `models/sales_total.sql` file:
+
+
+
+```sql
+{{ config(
+ alias="sales_dashboard"
+) }}
+```
+
+
This would return `analytics.finance.sales_dashboard` in the database, instead of the default `analytics.finance.sales_total`.
+Configure a seed's alias in your `dbt_project.yml` file or a `properties.yml` file. The following examples demonstrate how to `alias` a seed named `product_categories` to `categories_data`.
-Configure a seed's alias in your `dbt_project.yml` file or config block.
-
-For example, if you have a seed that represents `product_categories` and want to alias it as `categories_data`, you would alias like this:
+In the `dbt_project.yml` file at the project level:
@@ -41,6 +67,21 @@ seeds:
product_categories:
+alias: categories_data
```
+
+
+In the `seeds/properties.yml` file:
+
+
+
+```yml
+version: 2
+
+seeds:
+ - name: product_categories
+ config:
+ alias: categories_data
+```
+
This would return the name `analytics.finance.categories_data` in the database.
@@ -55,9 +96,6 @@ seeds:
+alias: country_mappings
```
-
-
-
@@ -65,7 +103,9 @@ seeds:
Configure a snapshots's alias in your `dbt_project.yml` file or config block.
-For example, if you have a snapshot that is named `your_snapshot` and want to alias it as `the_best_snapshot`, you would alias like this:
+The following examples demonstrate how to `alias` a snapshot named `your_snapshot` to `the_best_snapshot`.
+
+In the `dbt_project.yml` file at the project level:
@@ -75,20 +115,57 @@ snapshots:
your_snapshot:
+alias: the_best_snapshot
```
+
-This would build your snapshot to `analytics.finance.the_best_snapshot` in the database.
+In the `snapshots/properties.yml` file:
+
+
+```yml
+version: 2
+
+snapshots:
+ - name: your_snapshot
+ config:
+ alias: the_best_snapshot
+```
+
+In the `snapshots/your_snapshot.sql` file:
+
+
+
+```sql
+{{ config(
+ alias="the_best_snapshot"
+) }}
+```
+
+
+This would build your snapshot to `analytics.finance.the_best_snapshot` in the database.
+
-Configure a test's alias in your `schema.yml` file or config block.
+Configure a data test's alias in your `dbt_project.yml` file, in a `properties.yml` file, or in the config block of the test's SQL file.
-For example, to add a unique test to the `order_id` column and give it an alias `unique_order_id_test` to identify this specific test, you would alias like this:
+The following examples demonstrate how to `alias` a `unique` data test on the `order_id` column as `unique_order_id_test` so you can identify the specific test.
-
+In the `dbt_project.yml` file at the project level:
+
+
+
+```yml
+tests:
+ your_project:
+ +alias: unique_order_id_test
+```
+
+
+In the `models/properties.yml` file:
+
+
```yml
models:
@@ -99,10 +176,22 @@ models:
- unique:
alias: unique_order_id_test
```
+
+
+In the `tests/unique_order_id_test.sql` file:
+
+
+
+```sql
+{{ config(
+ alias="unique_order_id_test",
+ severity="error",
+) }}
+```
+
When using [`store_failures_as`](/reference/resource-configs/store_failures_as), this would return the name `analytics.finance.orders_order_id_unique_order_id_test` in the database.
-
+
diff --git a/website/docs/reference/resource-configs/bigquery-configs.md b/website/docs/reference/resource-configs/bigquery-configs.md
index ab5f562f57c..c912bca0688 100644
--- a/website/docs/reference/resource-configs/bigquery-configs.md
+++ b/website/docs/reference/resource-configs/bigquery-configs.md
@@ -909,3 +909,10 @@ By default, this is set to `True` to support the default `intermediate_format` o
### The `intermediate_format` parameter
The `intermediate_format` parameter specifies which file format to use when writing records to a table. The default is `parquet`.
+
+
+## Unit test limitations
+
+You must specify all fields in a BigQuery `STRUCT` for [unit tests](/docs/build/unit-tests). You cannot use only a subset of fields in a `STRUCT`.
+
+
diff --git a/website/docs/reference/resource-configs/no-configs.md b/website/docs/reference/resource-configs/no-configs.md
index 5eec26917c8..f72b286c837 100644
--- a/website/docs/reference/resource-configs/no-configs.md
+++ b/website/docs/reference/resource-configs/no-configs.md
@@ -1,11 +1,12 @@
---
-title: "No specifc configurations for this Adapter"
+title: "No specific configurations for this adapter"
id: "no-configs"
---
If you were guided to this page from a data platform setup article, it most likely means:
- Setting up the profile is the only action the end-user needs to take on the data platform, or
-- The subsequent actions the end-user needs to take are not currently documented
+- The subsequent actions the end-user needs to take are not currently documented, or
+- Relevant information is provided on the documentation pages of the data platform vendor.
If you'd like to contribute to data platform-specific configuration information, refer to [Documenting a new adapter](/guides/adapter-creation)
diff --git a/website/docs/reference/resource-configs/unique_key.md b/website/docs/reference/resource-configs/unique_key.md
index 77c99937295..071102bae6d 100644
--- a/website/docs/reference/resource-configs/unique_key.md
+++ b/website/docs/reference/resource-configs/unique_key.md
@@ -1,12 +1,65 @@
---
-resource_types: [snapshots]
+resource_types: [snapshots, models]
description: "Learn more about unique_key configurations in dbt."
datatype: column_name_or_expression
---
+
+
+
+
+Configure the `unique_key` in the `config` block of your [incremental model's](/docs/build/incremental-models) SQL file, in your `models/properties.yml` file, or in your `dbt_project.yml` file.
+
+
+
+```sql
+{{
+ config(
+ materialized='incremental',
+ unique_key='id'
+ )
+}}
+
+```
+
+
+
+
+
+```yaml
+models:
+ - name: my_incremental_model
+ description: "An incremental model example with a unique key."
+ config:
+ materialized: incremental
+ unique_key: id
+
+```
+
+
+
+
+
+```yaml
+name: jaffle_shop
+
+models:
+ jaffle_shop:
+ staging:
+ +unique_key: id
+```
+
+
+
+
+
+
+
+For [snapshots](/docs/build/snapshots), configure the `unique_key` in your `snapshots/filename.yml` file or in your `dbt_project.yml` file.
+
```yaml
@@ -23,6 +76,8 @@ snapshots:
+Configure the `unique_key` in the `config` block of your snapshot SQL file or in your `dbt_project.yml` file.
+
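+For example, a legacy SQL-defined snapshot might set the `unique_key` in its config block like this (a minimal sketch with illustrative source, schema, and column names):
+
+```sql
+{% snapshot orders_snapshot %}
+
+{{
+    config(
+      target_schema='snapshots',
+      unique_key='id',
+      strategy='timestamp',
+      updated_at='updated_at'
+    )
+}}
+
+select * from {{ source('jaffle_shop', 'orders') }}
+
+{% endsnapshot %}
+```
+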
import SnapshotYaml from '/snippets/_snapshot-yaml-spec.md';
@@ -49,10 +104,13 @@ snapshots:
+
+
+
## Description
-A column name or expression that is unique for the inputs of a snapshot. dbt uses this to match records between a result set and an existing snapshot, so that changes can be captured correctly.
+A column name or expression that is unique for the inputs of a snapshot or incremental model. dbt uses this to match records between a result set and an existing snapshot or incremental model, so that changes can be captured correctly.
-In dbt Cloud "Latest" and dbt v1.9+, [snapshots](/docs/build/snapshots) are defined and configured in YAML files within your `snapshots/` directory. You can specify one or multiple `unique_key` values within your snapshot YAML file's `config` key.
+In dbt Cloud "Latest" release track and from dbt v1.9, [snapshots](/docs/build/snapshots) are defined and configured in YAML files within your `snapshots/` directory. You can specify one or multiple `unique_key` values within your snapshot YAML file's `config` key.
:::caution
@@ -67,6 +125,32 @@ This is a **required parameter**. No default is provided.
## Examples
### Use an `id` column as a unique key
+
+
+
+
+In this example, the `id` column is the unique key for an incremental model.
+
+
+
+```sql
+{{
+ config(
+ materialized='incremental',
+ unique_key='id'
+ )
+}}
+
+select * from ..
+```
+
+
+
+
+
+
+In this example, the `id` column is used as a unique key for a snapshot.
+
@@ -114,10 +198,38 @@ snapshots:
+
+
+
### Use multiple unique keys
+
+
+
+Configure the unique keys for an incremental model as a string representing a single column or as a list of single-quoted column names that can be used together, for example, `['col1', 'col2', …]`.
+
+These columns must not contain null values; otherwise, the incremental model will fail to match rows and generate duplicate rows. Refer to [Defining a unique key](/docs/build/incremental-models#defining-a-unique-key-optional) for more information.
+
+
+
+```sql
+{{ config(
+ materialized='incremental',
+ unique_key=['order_id', 'location_id']
+) }}
+
+with...
+
+```
+
+
+
+
+
+
+
You can configure snapshots to use multiple unique keys for `primary_key` columns.
@@ -137,12 +249,35 @@ snapshots:
```
+
+
### Use a combination of two columns as a unique key
+
+
+
+
+
+```sql
+{{ config(
+ materialized='incremental',
+ unique_key=['order_id', 'location_id']
+) }}
+
+with...
+
+```
+
+
+
+
+
+
+
This configuration accepts a valid column expression. As such, you can concatenate two columns together as a unique key if required. It's a good idea to use a separator (for example, `'-'`) to ensure uniqueness.
@@ -170,7 +305,6 @@ from {{ source('erp', 'transactions') }}
Though, it's probably a better idea to construct this column in your query and use that as the `unique_key`:
-
```sql
@@ -211,4 +345,6 @@ from {{ source('erp', 'transactions') }}
```
+
+
diff --git a/website/sidebars.js b/website/sidebars.js
index 08494e4c713..9a93980b12c 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -222,6 +222,7 @@ const sidebarSettings = {
"docs/core/connect-data-platform/athena-setup",
"docs/core/connect-data-platform/glue-setup",
"docs/core/connect-data-platform/clickhouse-setup",
+ "docs/core/connect-data-platform/cratedb-setup",
"docs/core/connect-data-platform/databend-setup",
"docs/core/connect-data-platform/decodable-setup",
"docs/core/connect-data-platform/doris-setup",
@@ -941,6 +942,7 @@ const sidebarSettings = {
"reference/resource-configs/pre-hook-post-hook",
"reference/resource-configs/schema",
"reference/resource-configs/tags",
+ "reference/resource-configs/unique_key",
"reference/resource-configs/meta",
"reference/advanced-config-usage",
"reference/resource-configs/plus-prefix",
@@ -985,7 +987,6 @@ const sidebarSettings = {
"reference/resource-configs/strategy",
"reference/resource-configs/target_database",
"reference/resource-configs/target_schema",
- "reference/resource-configs/unique_key",
"reference/resource-configs/updated_at",
],
},
diff --git a/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation-prompt.jpg b/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation-prompt.jpg
new file mode 100644
index 00000000000..da42bbd83dd
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation-prompt.jpg differ
diff --git a/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation.gif b/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation.gif
new file mode 100644
index 00000000000..74e6409e34d
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation.gif differ
diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/job-override.gif b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/job-override.gif
index 3ce6cee6259..1fb2cbd3e97 100644
Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/job-override.gif and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/job-override.gif differ
diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.gif b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.gif
index 4185e3c98d8..d3e64f2c4af 100644
Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.gif and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.gif differ
diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.png
index 64b0ac8170f..b221a0b73ba 100644
Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.png and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.png differ
diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.gif b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.gif
deleted file mode 100644
index 14b700547ca..00000000000
Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.gif and /dev/null differ
diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.png
new file mode 100644
index 00000000000..54588f53d5d
Binary files /dev/null and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.png differ
diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings-1.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings-1.png
index 581c4ca6cbc..5fd53ffde78 100644
Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings-1.png and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings-1.png differ
diff --git a/website/vercel.json b/website/vercel.json
index fa90697a517..b68dc053db9 100644
--- a/website/vercel.json
+++ b/website/vercel.json
@@ -3651,7 +3651,7 @@
},
{
"key": "Content-Security-Policy",
- "value": "img-src 'self' data: https:;"
+ "value": "img-src 'self' data: https:; frame-ancestors 'self' https://*.mutinyhq.com https://*.getdbt.com"
},
{
"key": "Strict-Transport-Security",