diff --git a/website/docs/guides/migration/versions/00-upgrading-to-v1.7.md b/website/docs/guides/migration/versions/00-upgrading-to-v1.7.md
index ba079e6a0fb..a85dd0e69e0 100644
--- a/website/docs/guides/migration/versions/00-upgrading-to-v1.7.md
+++ b/website/docs/guides/migration/versions/00-upgrading-to-v1.7.md
@@ -14,11 +14,22 @@ description: New features and changes in dbt Core v1.7
dbt Labs is committed to providing backward compatibility for all versions 1.x, with the exception of any changes explicitly mentioned below. If you encounter an error upon upgrading, please let us know by [opening an issue](https://github.com/dbt-labs/dbt-core/issues/new).
+### Behavior changes
+
+dbt Core v1.7 expands the number of sources for which you can configure freshness. Previously, freshness was limited to sources with a `loaded_at_field`; now, freshness can be generated from warehouse metadata tables when available.
+
+As part of this change, the `loaded_at_field` is no longer required to generate source freshness. If a source has a `freshness:` block, dbt will attempt to calculate freshness for that source:
+- If a `loaded_at_field` is provided, dbt will calculate freshness via a select query (previous behavior).
+- If a `loaded_at_field` is _not_ provided, dbt will calculate freshness via warehouse metadata tables when possible (new behavior).
+
+This is a relatively small behavior change, but worth calling out in case you notice that dbt is calculating freshness for _more_ sources than before. To exclude a source from freshness calculations, you have two options (see the example below):
+- Don't add a `freshness:` block.
+- Explicitly set `freshness: null`.
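+
+As a sketch, the cases above can look like this in a sources YAML file (source, table, and column names are illustrative):
+
+```yml
+sources:
+  - name: raw_jaffle_shop
+    freshness: # calculated from warehouse metadata tables (new behavior)
+      warn_after: {count: 24, period: hour}
+    tables:
+      - name: orders
+        loaded_at_field: _etl_loaded_at # calculated via a select query (previous behavior)
+      - name: archived_orders
+        freshness: null # excluded from freshness calculations
+```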
+
## New and changed features and functionality
- [`dbt docs generate`](/reference/commands/cmd-docs) now supports `--select` to generate documentation for a subset of your project (see the example below). Currently available for Snowflake and Postgres only, but other adapters are coming soon.
-- [Source freshness](/docs/deploy/source-freshness) can now be generated from warehouse metadata tables, currently snowflake only, but other adapters that have metadata tables are coming soon. If you configure source freshness without a `loaded_at_field`, dbt will try to determine freshness from warehouse metadata tables.
-- The nodes dictionary in the `catalog.json` can now be "partial" if `dbt docs generate` is run with a selector.
+- [Source freshness](/docs/deploy/source-freshness) can now be generated from warehouse metadata tables. This is currently Snowflake only, but support for other adapters with metadata tables is coming soon.
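+
+For example, to generate documentation for a single model (a sketch; the model name is illustrative):
+
+```bash
+dbt docs generate --select orders
+```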
### MetricFlow enhancements
@@ -32,8 +43,9 @@ dbt Labs is committed to providing backward compatibility for all versions 1.x,
- The [manifest](/reference/artifacts/manifest-json) schema version has been updated to v11.
- The [run_results](/reference/artifacts/run-results-json) schema version has been updated to v5.
-- Added [node attributes](/reference/artifacts/run-results-json) related to compilation (`compiled`, `compiled_code`, `relation_name`).
-
+- Added [node attributes](/reference/artifacts/run-results-json) related to compilation (`compiled`, `compiled_code`, `relation_name`) to the `run_results.json`.
+- The nodes dictionary in the [catalog.json](/reference/artifacts/catalog-json) can now be "partial" if `dbt docs generate` is run with a selector.
### Model governance
@@ -49,4 +61,4 @@ dbt Core v1.5 introduced model governance which we're continuing to refine. v1.
With these quick hits, you can now:
- Configure a `delimiter` for a seed file (sketched below).
- Use packages with the same git repo and unique subdirectory (sketched below).
-- Moved the `date_spine` macro from dbt-utils to dbt-core.
+- Access the `date_spine` macro directly from dbt-core (moved over from dbt-utils).
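+
+Minimal sketches of the first two (project, seed, and repo names are illustrative):
+
+```yml
+# dbt_project.yml: set a custom delimiter for a seed file
+seeds:
+  my_project:
+    pipe_delimited_seed:
+      +delimiter: "|"
+```
+
+```yml
+# packages.yml: two packages from the same git repo, distinguished by subdirectory
+packages:
+  - git: "https://github.com/example-org/monorepo-packages"
+    subdirectory: "package-a"
+  - git: "https://github.com/example-org/monorepo-packages"
+    subdirectory: "package-b"
+```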
diff --git a/website/docs/guides/migration/versions/01-upgrading-to-v1.6.md b/website/docs/guides/migration/versions/01-upgrading-to-v1.6.md
index a3ebc947aaf..50b0ca8bc58 100644
--- a/website/docs/guides/migration/versions/01-upgrading-to-v1.6.md
+++ b/website/docs/guides/migration/versions/01-upgrading-to-v1.6.md
@@ -90,4 +90,5 @@ More consistency and flexibility around packages. Resources defined in a package
- [`dbt debug --connection`](/reference/commands/debug) to test just the data platform connection specified in a profile
- [`dbt docs generate --empty-catalog`](/reference/commands/cmd-docs) to skip catalog population while generating docs
- [`--defer-state`](/reference/node-selection/defer) enables more-granular control
+- [`dbt ls`](/reference/commands/list) adds the `semantic_model` selection method, enabling `dbt ls -s "semantic_model:*"`, and the ability to run `dbt ls --resource-type semantic_model` (a sketch follows this list).
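+
+A quick sketch of these commands on the command line (flags as documented above):
+
+```bash
+dbt debug --connection                # test only the data platform connection in your profile
+dbt docs generate --empty-catalog     # generate docs without populating the catalog
+dbt ls --resource-type semantic_model # list only semantic models
+dbt ls -s "semantic_model:*"          # select semantic models via the selection method
+```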
diff --git a/website/docs/reference/commands/list.md b/website/docs/reference/commands/list.md
index 6084b3dec70..93a0b87dd93 100644
--- a/website/docs/reference/commands/list.md
+++ b/website/docs/reference/commands/list.md
@@ -10,7 +10,7 @@ The `dbt ls` command lists resources in your dbt project. It accepts selector ar
### Usage
```
dbt ls
- [--resource-type {model,source,seed,snapshot,metric,test,exposure,analysis,default,all}]
+ [--resource-type {model,semantic_model,source,seed,snapshot,metric,test,exposure,analysis,default,all}]
[--select SELECTION_ARG [SELECTION_ARG ...]]
[--models SELECTOR [SELECTOR ...]]
[--exclude SELECTOR [SELECTOR ...]]
@@ -93,6 +93,16 @@ $ dbt ls --select snowplow.* --output json --output-keys name resource_type desc
+
+
+**Listing semantic models**
+
+List all resources upstream of your `orders` semantic model:
+```
+dbt ls -s +semantic_model:orders
+```
+
+
**Listing file paths**
```
diff --git a/website/docs/reference/node-selection/methods.md b/website/docs/reference/node-selection/methods.md
index a2da3196739..f476ed87ed1 100644
--- a/website/docs/reference/node-selection/methods.md
+++ b/website/docs/reference/node-selection/methods.md
@@ -352,3 +352,18 @@ dbt list --select version:none # models that are *not* versioned
```
+
+### The "semantic_model" method
+
+Supported in v1.6 or newer.
+
+
+
+The `semantic_model` method selects [semantic models](/docs/build/semantic-models).
+
+```bash
+dbt list --select "semantic_model:*" # list all semantic models
+dbt list --select +semantic_model:orders # list your semantic model named "orders" and all upstream resources
+```
+
+
\ No newline at end of file
diff --git a/website/docs/reference/resource-configs/contract.md b/website/docs/reference/resource-configs/contract.md
index f9a5376bc05..59cc511890b 100644
--- a/website/docs/reference/resource-configs/contract.md
+++ b/website/docs/reference/resource-configs/contract.md
@@ -23,8 +23,31 @@ When the `contract` configuration is enforced, dbt will ensure that your model's
This is to ensure that the people querying your model downstream—both inside and outside dbt—have a predictable and consistent set of columns to use in their analyses. Even a subtle change in data type, such as from `boolean` (`true`/`false`) to `integer` (`0`/`1`), could cause queries to fail in surprising ways.
+
+
In dbt v1.6 and earlier, the `data_type` defined in your YAML file must match a data type your data platform recognizes. dbt does not do any type aliasing itself. If your data platform recognizes both `int` and `integer` as corresponding to the same type, then either will match.
+
+
+
+
+In dbt Core v1.7 and higher, dbt uses built-in type aliasing for the `data_type` defined in your YAML. For example, you can specify `string` in your contract, and on Postgres/Redshift, dbt will convert it to `text`. If dbt doesn't recognize the `data_type` name among its known aliases, it passes it through as-is. Aliasing is enabled by default, but you can opt out by setting `alias_types` to `false`.
+
+Example for disabling:
+
+```yml
+models:
+  - name: my_model
+    config:
+      contract:
+        enforced: true
+        alias_types: false # true by default
+```
+
+
+
When dbt compares data types, it will not compare granular details such as size, precision, or scale. We don't think you should sweat the difference between `varchar(256)` and `varchar(257)`, because it doesn't really affect the experience of downstream queriers. You can accomplish a more-precise assertion by [writing or using a custom test](/guides/best-practices/writing-custom-generic-tests).
Note that you need to specify a varchar size or numeric scale; otherwise, dbt relies on default values. For example, if a `numeric` type defaults to a precision of 38 and a scale of 0, then the numeric column stores 0 digits to the right of the decimal (it only stores whole numbers), which might cause it to fail contract enforcement. To avoid this implicit coercion, specify your `data_type` with a nonzero scale, like `numeric(38, 6)`. dbt Core 1.7 and higher provides a warning if you don't specify precision and scale when providing a numeric data type.
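+
+For example, a contract that sets precision and scale explicitly might look like this (a sketch; the model and column names are illustrative):
+
+```yml
+models:
+  - name: my_model
+    config:
+      contract:
+        enforced: true
+    columns:
+      - name: customer_id
+        data_type: int
+      - name: order_total
+        # nonzero scale keeps digits to the right of the decimal
+        data_type: numeric(38, 6)
+```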