diff --git a/website/blog/2023-10-31-to-defer-or-to-clone.md b/website/blog/2023-10-31-to-defer-or-to-clone.md index a39fc3ac0b7..00aa8c7f7e5 100755 --- a/website/blog/2023-10-31-to-defer-or-to-clone.md +++ b/website/blog/2023-10-31-to-defer-or-to-clone.md @@ -87,7 +87,7 @@ Using the cheat sheet above, let’s explore a few common scenarios and explore 1. Make a copy of our production dataset available in our downstream BI tool 2. To safely iterate on this copy without breaking production datasets - Therefore, we should use **clone** in this scenario + Therefore, we should use **clone** in this scenario. 2. **[Slim CI](https://discourse.getdbt.com/t/how-we-sped-up-our-ci-runs-by-10x-using-slim-ci/2603)** @@ -96,7 +96,11 @@ Using the cheat sheet above, let’s explore a few common scenarios and explore 2. Only run and test models in the CI staging environment that have changed from the production environment 3. Reference models from different environments – prod for unchanged models, and staging for modified models - Therefore, we should use **defer** in this scenario + Therefore, we should use **defer** in this scenario. + +:::tip Use `dbt clone` in CI jobs to test incremental models +Learn how to [use `dbt clone` in CI jobs](/best-practices/clone-incremental-models) to efficiently test modified incremental models, simulating post-merge behavior while avoiding full-refresh costs. +::: 3. **[Blue/Green Deployments](https://discourse.getdbt.com/t/performing-a-blue-green-deploy-of-your-dbt-project-on-snowflake/1349)** diff --git a/website/blog/2023-12-11-semantic-layer-on-semantic-layer.md b/website/blog/2023-12-11-semantic-layer-on-semantic-layer.md index ea77072a6dd..44499c51ec5 100644 --- a/website/blog/2023-12-11-semantic-layer-on-semantic-layer.md +++ b/website/blog/2023-12-11-semantic-layer-on-semantic-layer.md @@ -1,5 +1,5 @@ --- -title: "How we built consistent product launch metrics with the dbt Semantic Layer." +title: "How we built consistent product launch metrics with the dbt Semantic Layer" description: "We built an end-to-end data pipeline for measuring the launch of the dbt Semantic Layer using the dbt Semantic Layer." slug: product-analytics-pipeline-with-dbt-semantic-layer diff --git a/website/docs/best-practices/clone-incremental-models.md b/website/docs/best-practices/clone-incremental-models.md index 4096af489ab..11075b92161 100644 --- a/website/docs/best-practices/clone-incremental-models.md +++ b/website/docs/best-practices/clone-incremental-models.md @@ -35,11 +35,17 @@ This can be suboptimal because: - Typically incremental models are your largest datasets, so they take a long time to build in their entirety which can slow down development time and incur high warehouse costs. - There are situations where a `full-refresh` of the incremental model passes successfully in your CI job but an _incremental_ build of that same table in prod would fail when the PR is merged into main (think schema drift where [on_schema_change](/docs/build/incremental-models#what-if-the-columns-of-my-incremental-model-change) config is set to `fail`) -You can alleviate these problems by zero copy cloning the relevant, pre-exisitng incremental models into your PR-specific schema as the first step of the CI job using the `dbt clone` command. This way, the incremental models already exist in the PR-specific schema when you first execute the command `dbt build --select state:modified+` so the `is_incremental` flag will be `true`. 
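+To make that failure mode concrete, here is a minimal sketch (the model name `my_incremental_model` is hypothetical, and the exact commands depend on how your CI job is configured):
+
+```shell
+# In the PR-specific CI schema the incremental model doesn't exist yet, so dbt
+# builds it from scratch and is_incremental() evaluates to false. A schema change
+# in the model therefore builds cleanly in CI.
+dbt build --select state:modified+
+
+# In production the table already exists, so the same model runs incrementally.
+# With on_schema_change set to 'fail', the very same schema change now errors out.
+dbt build --select my_incremental_model
+```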
+You can alleviate these problems by zero copy cloning the relevant, pre-existing incremental models into your PR-specific schema as the first step of the CI job using the `dbt clone` command. This way, the incremental models already exist in the PR-specific schema when you first execute the command `dbt build --select state:modified+` so the `is_incremental` flag will be `true`. You'll have two commands for your dbt Cloud CI check to execute: -1. Clone all of the pre-existing incremental models that have been modified or are downstream of another model that has been modified: `dbt clone --select state:modified+,config.materialized:incremental,state:old` -2. Build all of the models that have been modified and their downstream dependencies: `dbt build --select state:modified+` +1. Clone all of the pre-existing incremental models that have been modified or are downstream of another model that has been modified: + ```shell + dbt clone --select state:modified+,config.materialized:incremental,state:old + ``` +2. Build all of the models that have been modified and their downstream dependencies: + ```shell + dbt build --select state:modified+ + ``` Because of your first clone step, the incremental models selected in your `dbt build` on the second step will run in incremental mode. diff --git a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/legacy-sl.md b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/legacy-sl.md index 0eecfea623e..f6c18b2a053 100644 --- a/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/legacy-sl.md +++ b/website/docs/docs/dbt-versions/release-notes/74-Dec-2023/legacy-sl.md @@ -23,11 +23,11 @@ The [re-released dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl), power ### Breaking changes and recommendations -- For users on dbt version 1.6 and lower with dbt Metrics and Snowflake proxy: +- For users on dbt version 1.5 and lower with dbt Metrics and Snowflake proxy: - **Impact**: Post-deprecation, queries using the proxy _will not_ run. - **Action required:** _Immediate_ migration is necessary. Refer to the [dbt Semantic Layer migration guide](/guides/sl-migration?step=1) -- For users on dbt version 1.6 and lower using dbt Metrics without Snowflake proxy: +- For users on dbt version 1.5 and lower using dbt Metrics without Snowflake proxy: - **Impact**: No immediate disruption, but the package will not receive updates or support after deprecation - **Recommendation**: Plan migration to the re-released Semantic Layer for compatibility with dbt version 1.6 and higher. diff --git a/website/docs/reference/commands/deps.md b/website/docs/reference/commands/deps.md index 60ccd091ad7..1a3562e3172 100644 --- a/website/docs/reference/commands/deps.md +++ b/website/docs/reference/commands/deps.md @@ -62,7 +62,11 @@ Update your versions in packages.yml, then run dbt deps dbt generates the `package-lock.yml` file in the _project_root_ where `packages.yml` is recorded, which contains all the resolved packages, the first time you run `dbt deps`. Each subsequent run records the packages installed in this file. If the subsequent `dbt deps` runs contain no updated packages in `dependencies.yml` or `packages.yml`, dbt-core installs from `package-lock.yml`. -When you update the package spec and run `dbt deps` again, the package-lock and package files update accordingly. You can run `dbt deps --lock` to update the `package-lock.yml` with the most recent dependencies from `packages`. 
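+As a rough sketch of the behavior described above (these are plain `dbt deps` invocations; nothing project-specific is assumed):
+
+```shell
+# First run: resolves the packages declared in packages.yml, writes the result to
+# package-lock.yml in the project root, and installs the packages.
+dbt deps
+
+# Subsequent runs with no changes to packages.yml or dependencies.yml install the
+# versions recorded in package-lock.yml rather than re-resolving them.
+dbt deps
+```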
+When you update the package spec and run `dbt deps` again, the `package-lock.yml` and `packages.yml` files update accordingly.
+
+There are two flags related to `package-lock.yml`:
+- `dbt deps --lock` — creates or updates the `package-lock.yml` file but does not install the packages.
+- `dbt deps --upgrade` — creates or updates the `package-lock.yml` file with the most recent dependencies from `packages.yml`. It also installs the packages unless the `--lock` flag is passed as well.

The `--add-package` flag allows you to add a package to the `packages.yml` with configurable `--version` and `--source` information. The `--dry-run` flag, when set to `False`(default), recompiles the `package-lock.yml` file after a new package is added to the `packages.yml` file. Set the flag to `True` for the changes to not persist.
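+For example, here is a minimal sketch of how the `--lock` and `--upgrade` flags described above might be used (nothing project-specific is assumed):
+
+```shell
+# Refresh package-lock.yml after editing packages.yml, without installing anything
+dbt deps --lock
+
+# Re-resolve to the most recent versions that packages.yml allows, update
+# package-lock.yml, and install the resolved packages
+dbt deps --upgrade
+```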