diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 8aaf0375007..1983b0201d9 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -4,14 +4,14 @@ * @dbt-labs/product-docs # Adapter & Package Development Docs -/website/docs/docs/supported-data-platforms.md @dbt-labs/product-docs @dataders -/website/docs/reference/warehouse-setups @dbt-labs/product-docs @dataders +/website/docs/docs/supported-data-platforms.md @dbt-labs/product-docs @amychen1776 +/website/docs/reference/warehouse-setups @dbt-labs/product-docs @amychen1776 # `resource-configs` contains more than just warehouse setups -/website/docs/reference/resource-configs/*-configs.md @dbt-labs/product-docs @dataders -/website/docs/guides/advanced/adapter-development @dbt-labs/product-docs @dataders @dbeatty10 +/website/docs/reference/resource-configs/*-configs.md @dbt-labs/product-docs @amychen1776 +/website/docs/guides/advanced/adapter-development @dbt-labs/product-docs @amychen1776 -/website/docs/guides/building-packages @dbt-labs/product-docs @amychen1776 @dataders @dbeatty10 -/website/docs/guides/creating-new-materializations @dbt-labs/product-docs @dataders @dbeatty10 +/website/docs/guides/building-packages @dbt-labs/product-docs @amychen1776 +/website/docs/guides/creating-new-materializations @dbt-labs/product-docs # Require approval from the Multicell team when making # changes to the public facing migration documentation. diff --git a/website/docs/best-practices/how-we-style/5-how-we-style-our-yaml.md b/website/docs/best-practices/how-we-style/5-how-we-style-our-yaml.md index 8f817356334..e3b539e8b12 100644 --- a/website/docs/best-practices/how-we-style/5-how-we-style-our-yaml.md +++ b/website/docs/best-practices/how-we-style/5-how-we-style-our-yaml.md @@ -7,6 +7,7 @@ id: 5-how-we-style-our-yaml - 2️⃣ Indents should be two spaces - ➡️ List items should be indented +- 🔠 List items with a single entry can be a string. For example, `'select': 'other_user'`, but it's best practice to provide the argument as an explicit list. For example, `'select': ['other_user']` - 🆕 Use a new line to separate list items that are dictionaries where appropriate - 📏 Lines of YAML should be no longer than 80 characters. - 🛠️ Use the [dbt JSON schema](https://github.com/dbt-labs/dbt-jsonschema) with any compatible IDE and a YAML formatter (we recommend [Prettier](https://prettier.io/)) to validate your YAML files and format them automatically. diff --git a/website/docs/community/resources/oss-expectations.md b/website/docs/community/resources/oss-expectations.md index e6e5d959c96..7b518424e92 100644 --- a/website/docs/community/resources/oss-expectations.md +++ b/website/docs/community/resources/oss-expectations.md @@ -2,112 +2,122 @@ title: "Expectations for OSS contributors" --- -Whether it's a dbt package, a plugin, `dbt-core`, or this very documentation site, contributing to the open source code that supports the dbt ecosystem is a great way to level yourself up as a developer, and to give back to the community. The goal of this page is to help you understand what to expect when contributing to dbt open source software (OSS). While we can only speak for our own experience as open source maintainers, many of these guidelines apply when contributing to other open source projects, too. +Whether it's `dbt-core`, adapters, packages, or this very documentation site, contributing to the open source code that supports the dbt ecosystem is a great way to share your knowledge, level yourself up as a developer, and to give back to the community. 
The goal of this page is to help you understand what to expect when contributing to dbt open source software (OSS). -Have you seen things in other OSS projects that you quite like, and think we could learn from? [Open a discussion on the dbt Community Forum](https://discourse.getdbt.com), or start a conversation in the dbt Community Slack (for example: `#community-strategy`, `#dbt-core-development`, `#package-ecosystem`, `#adapter-ecosystem`). We always appreciate hearing from you! +Have you seen things in other OSS projects that you quite like, and think we could learn from? [Open a discussion on the dbt Community Forum](https://discourse.getdbt.com), or start a conversation in the [dbt Community Slack](https://www.getdbt.com/community/join-the-community) (for example: `#community-strategy`, `#dbt-core-development`, `#package-ecosystem`, `#adapter-ecosystem`). We always appreciate hearing from you! ## Principles ### Open source is participatory -Why take time out of your day to write code you don’t _have_ to? We all build dbt together. By using dbt, you’re invested in the future of the tool, and an agent in pushing forward the practice of analytics engineering. You’ve already benefited from using code contributed by community members, and documentation written by community members. Contributing to dbt OSS is your way to pay it forward, as an active participant in the thing we’re all creating together. +We all build dbt together -- whether you write code or contribute your ideas. By using dbt, you're invested in the future of the tool, and have an active role in pushing forward the standard of analytics engineering. You already benefit from using code and documentation contributed by community members. Contributing to the dbt community is your way to be an active participant in the thing we're all creating together. -There’s a very practical reason, too: OSS prioritizes our collective knowledge and experience over any one person’s. We don’t have experience using every database, operating system, security environment, ... We rely on the community of OSS users to hone our product capabilities and documentation to the wide variety of contexts in which it operates. In this way, dbt gets to be the handiwork of thousands, rather than a few dozen. +There's a very practical reason, too: OSS prioritizes our collective knowledge and experience over any one person's. We don't have experience using every database, operating system, security environment, ... We rely on the community of OSS users to hone our product capabilities and documentation to the wide variety of contexts in which it operates. In this way, dbt gets to be the handiwork of thousands, rather than a few dozen. -### We take seriously our role as maintainers +### We take seriously our role as maintainers of a standard -In that capacity, we cannot and will not fix every bug ourselves, or code up every feature worth doing. Instead, we’ll do our best to respond to new issues with context (including links to related issues), feedback, alternatives/workarounds, and (whenever possible) pointers to code that would aid a community contributor. If a change is so tricky or involved that the initiative rests solely with us, we’ll do our best to explain the complexity, and when / why we could foresee prioritizing it. Our role also includes maintenance of the backlog of issues, such as closing duplicates, proposals we don’t intend to support, or stale issues (no activity for 180 days). +As a standard, dbt must be reliable and consistent. 
Our first priority is ensuring the continued high quality of existing dbt capabilities before we introduce net-new capabilities. -### Initiative is everything +We also believe dbt as a framework should be extensible enough to ["make the easy things easy, and the hard things possible"](https://en.wikipedia.org/wiki/Perl#Philosophy). To that end, we _don't_ believe it's appropriate for dbt to have an out-of-the-box solution for every niche problem. Users have the flexibility to achieve many custom behaviors by defining their own macros, materializations, hooks, and more. We view it as our responsibility as maintainers to decide when something should be "possible" — via macros, packages, etc. — and when something should be "easy" — built into the dbt Core standard. -Given that we, as maintainers, will not be able to resolve every bug or flesh out every feature request, we empower you, as a community member, to initiate a change. +So when will we say "yes" to new capabilities for dbt Core? The signals we look for include: +- Upvotes on issues in our GitHub repos +- Open source dbt packages trying to close a gap +- Technical advancements in the ecosystem -- If you open the bug report, it’s more likely to be identified. -- If you open the feature request, it’s more likely to be discussed. -- If you comment on the issue, engaging with ideas and relating it to your own experience, it’s more likely to be prioritized. -- If you open a PR to fix an identified bug, it’s more likely to be fixed. -- If you contribute the code for a well-understood feature, that feature is more likely to be in the next version. -- If you review an existing PR, to confirm it solves a concrete problem for you, it’s more likely to be merged. +In the meantime — we'll do our best to respond to new issues with: +- Clarity about whether the proposed feature falls into the intended scope of dbt Core +- Context (including links to related issues) +- Alternatives and workarounds +- When possible, pointers to code that would aid a community contributor -Sometimes, this can feel like shouting into the void, especially if you aren’t met with an immediate response. We promise that there are dozens (if not hundreds) of folks who will read your comment, maintainers included. It all adds up to a real difference. +### Initiative is everything -# Practicalities +Given that we, as maintainers, will not be able to resolve every bug or flesh out every feature request, we empower you, as a community member, to initiate a change. -As dbt OSS is growing in popularity, and dbt Labs has been growing in size, we’re working to involve new people in the responsibilities of OSS maintenance. We really appreciate your patience as our newest maintainers are learning and developing habits. +- If you open the bug report, it's more likely to be identified. +- If you open the feature request, it's more likely to be discussed. +- If you comment on the issue, engaging with ideas and relating it to your own experience, it's more likely to be prioritized. +- If you open a PR to fix an identified bug, it's more likely to be fixed. +- If you comment on an existing PR, to confirm it solves the concrete problem for your team in practice, it's more likely to be merged. -## Discussions +Sometimes, this can feel like shouting into the void, especially if you aren't met with an immediate response. We promise that there are dozens (if not hundreds) of folks who will read your comment, including us as maintainers. It all adds up to a real difference. 
-Discussions are a relatively new GitHub feature, and we really like them! +## Practicalities -A discussion is best suited to propose a Big Idea, such as brand-new capability in dbt Core, or a new section of the product docs. Anyone can open a discussion, add a comment to an existing one, or reply in a thread. +### Discussions -What can you expect from a new Discussion? Hopefully, comments from other members of the community, who like your idea or have their own ideas for how it could be improved. The most helpful comments are ones that describe the kinds of experiences users and readers should have. Unlike an **issue**, there is no specific code change that would “resolve” a Discussion. +A discussion is best suited to propose a Big Idea, such as a brand-new capability in dbt Core or an adapter. Anyone can open a discussion, comment on an existing one, or reply in a thread. -If, over the course of a discussion, we do manage to reach consensus on a way forward, we’ll open a new issue that references the discussion for context. That issue will connect desired outcomes to specific implementation details, as well as perceived limitations and open questions. It will serve as a formal proposal and request for comment. +When you open a new discussion, you might be looking for validation from other members of the community — folks who identify with your problem statement, who like your proposed idea, and who may have their own ideas for how it could be improved. The most helpful comments propose nuances or desirable user experiences to be considered in design and refinement. Unlike an **issue**, there is no specific code change that would “resolve” a discussion. -## Issues +If, over the course of a discussion, we reach a consensus on specific elements of a proposed design, we can open new implementation issues that reference the discussion for context. Those issues will connect desired user outcomes to specific implementation details, acceptance testing, and remaining questions that need answering. -An issue could be a bug you’ve identified while using the product or reading the documentation. It could also be a specific idea you’ve had for how it could be better. +### Issues -### Best practices for issues +An issue could be a bug you've identified while using the product or reading the documentation. It could also be a specific idea you've had for a narrow extension of existing functionality. + +#### Best practices for issues - Issues are **not** for support / troubleshooting / debugging help. Please see [dbt support](/docs/dbt-support) for more details and suggestions on how to get help. - Always search existing issues first, to see if someone else had the same idea / found the same bug you did. -- Many repositories offer templates for creating issues, such as when reporting a bug or requesting a new feature. If available, please select the relevant template and fill it out to the best of your ability. This will help other people understand your issue and respond. +- Many dbt repositories offer templates for creating issues, such as reporting a bug or requesting a new feature. If available, please select the relevant template and fill it out to the best of your ability. This information helps us (and others) understand your issue. -### You’ve found an existing issue that interests you. What should you do? +##### You've found an existing issue that interests you. What should you do? -Comment on it! Explain that you’ve run into the same bug, or had a similar idea for a new feature. 
If the issue includes a detailed proposal for a change, say which parts of the proposal you find most compelling, and which parts give you pause. +Comment on it! Explain that you've run into the same bug, or had a similar idea for a new feature. If the issue includes a detailed proposal for a change, say which parts of the proposal you find most compelling, and which parts give you pause. -### You’ve opened a new issue. What can you expect to happen? +##### You've opened a new issue. What can you expect to happen? -In our most critical repositories (such as `dbt-core`), **our goal is to respond to new issues within 2 standard work days.** While this initial response might be quite lengthy (context, feedback, and pointers that we can offer as maintainers), more often it will be a short acknowledgement that the maintainers are aware of it and don't believe it's in urgent need of resolution. Depending on the nature of your issue, it might be well suited to an external contribution, from you or another community member. +In our most critical repositories (such as `dbt-core`), our goal is to respond to new issues as soon as possible. This initial response will often be a short acknowledgement that the maintainers are aware of the issue, signalling our perception of its urgency. Depending on the nature of your issue, it might be well suited to an external contribution, from you or another community member. -**What does “triage” mean?** In some repositories, we use a `triage` label to keep track of issues that need an initial response from a maintainer. +**What if you're opening an issue in a different repository?** We have engineering teams dedicated to active maintenance of [`dbt-core`](https://github.com/dbt-labs/dbt-core) and its component libraries ([`dbt-common`](https://github.com/dbt-labs/dbt-common) + [`dbt-adapters`](https://github.com/dbt-labs/dbt-adapters)), as well as several platform-specific adapters ([`dbt-snowflake`](https://github.com/dbt-labs/dbt-snowflake), [`dbt-bigquery`](https://github.com/dbt-labs/dbt-bigquery), [`dbt-redshift`](https://github.com/dbt-labs/dbt-redshift), [`dbt-postgres`](https://github.com/dbt-labs/dbt-postgres)). We've open-sourced a number of other software projects over the years, and the majority of them do not have the same activity or maintenance guarantees. Check to see if other recent issues have responses, or when the last commit was added to the `main` branch. -**What if I’m opening an issue in a different repository?** **What if I’m opening an issue in a different repository?** We have engineering teams dedicated to active maintainence of [`dbt-core`](https://github.com/dbt-labs/dbt-core) and its component libraries ([`dbt-common`](https://github.com/dbt-labs/dbt-common) + [`dbt-adapters`](https://github.com/dbt-labs/dbt-adapters)), as well as several platform-specific adapters ([`dbt-snowflake`](https://github.com/dbt-labs/dbt-snowflake), [`dbt-bigquery`](https://github.com/dbt-labs/dbt-bigquery), [`dbt-redshift`](https://github.com/dbt-labs/dbt-redshift), [`dbt-postgres`](https://github.com/dbt-labs/dbt-postgres)). We’ve open sourced a number of other software projects over the years, and the majority of them do not have the same activity or maintenance guarantees. Check to see if other recent issues have responses, or when the last commit was added to the `main` branch. 
+**You're not sure about the status of your issue.** If your issue is in an actively maintained repo and has a `triage` label attached, we're aware it's something that needs a response. If the issue has been triaged, but not prioritized, this could mean: +- The intended scope or user experience of a proposed feature requires further refinement from a maintainer +- We believe the required code change is too tricky for an external contributor -**If my issue is lingering...** Sorry for the delay! If your issue is in an actively maintained repo and has a `triage` label attached, we’re aware it's something that needs a response. +We'll do our best to explain the open questions or complexity, and when / why we could foresee prioritizing it. -**Automation that can help us:** In many repositories, we use a bot that marks issues as stale if they haven’t had any activity for 180 days. This helps us keep our backlog organized and up-to-date. We encourage you to comment on older open issues that you’re interested in, to keep them from being marked stale. You’re also always welcome to comment on closed issues to say that you’re still interested in the proposal. +**Automation that can help us:** In many repositories, we use a bot that marks issues as stale if they haven't had any activity for 180 days. This helps us keep our backlog organized and up-to-date. We encourage you to comment on older open issues that you're interested in, to keep them from being marked stale. You're also always welcome to comment on closed issues to say that you're still interested in the proposal. -### Issue labels +#### Issue labels In all likelihood, the maintainer who responds will also add a number of labels. Not all of these labels are used in every repository. -In some cases, the right resolution to an open issue might be tangential to the codebase. The right path forward might be in another codebase (we'll transfer it), a documentation update, or a change that can be made in user-space code. In other cases, the issue might describe functionality that the maintainers are unwilling or unable to incorporate into the main codebase. In these cases, a maintainer will close the issue (perhaps using a `wontfix` label) and explain why. +In some cases, the right resolution to an open issue might be tangential to the codebase. The right path forward might be in another codebase (we'll transfer it), a documentation update, or a change that you can make yourself in user-space code. In other cases, the issue might describe functionality that the maintainers are unwilling or unable to incorporate into the main codebase. In these cases, a maintainer will close the issue (perhaps using a `wontfix` label) and explain why. + +Some of the most common labels are explained below: | tag | description | | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `triage` | This is a new issue which has not yet been reviewed by a maintainer. This label is removed when a maintainer reviews and responds to the issue. 
| -| `bug` | This issue represents a defect or regression from the behavior that's documented, or that you reasonably expect | -| `enhancement` | This issue represents net-new functionality, including an extension of an existing capability | -| `good_first_issue` | This issue does not require deep knowledge of the codebase to implement. This issue is appropriate for a first-time contributor. | +| `bug` | This issue represents a defect or regression from the behavior that's documented | +| `enhancement` | This issue represents a narrow extension of an existing capability | +| `good_first_issue` | This issue does not require deep knowledge of the codebase to implement, and it is appropriate for a first-time contributor. | | `help_wanted` | This issue is trickier than a "good first issue." The required changes are scattered across the codebase, or more difficult to test. The maintainers are happy to help an experienced community contributor; they aren't planning to prioritize this issue themselves. | | `duplicate` | This issue is functionally identical to another open issue. The maintainers will close this issue and encourage community members to focus conversation on the other one. | | `stale` | This is an old issue which has not recently been updated. In repositories with a lot of activity, stale issues will periodically be closed. | | `wontfix` | This issue does not require a code change in the repository, or the maintainers are unwilling to merge a change which implements the proposed behavior. | -## Pull requests - -PRs are your surest way to make the change you want to see in dbt / packages / docs, especially when the change is straightforward. +### Pull requests -**Every PR should be associated with an issue.** Why? Before you spend a lot of time working on a contribution, we want to make sure that your proposal will be accepted. You should open an issue first, describing your desired outcome and outlining your planned change. If you've found an older issue that's already open, comment on it with an outline for your planned implementation. Exception to this rule: If you're just opening a PR for a cosmetic fix, such as a typo in documentation, an issue isn't needed. +**Every PR should be associated with an issue.** Why? Before you spend a lot of time working on a contribution, we want to make sure that your proposal will be accepted. You should open an issue first, describing your desired outcome and outlining your planned change. If you've found an older issue that's already open, comment on it with an outline for your planned implementation _before_ putting in the work to open a pull request. -**PRs must include robust testing.** Comprehensive testing within pull requests is crucial for the stability of our project. By prioritizing robust testing, we ensure the reliability of our codebase, minimize unforeseen issues, and safeguard against potential regressions. We cannot merge changes that risk the backward incompatibility of existing documented behaviors. We understand that creating thorough tests often requires significant effort, and your dedication to this process greatly contributes to the project's overall reliability. Thank you for your commitment to maintaining the integrity of our codebase and the experience of everyone using dbt! +**PRs must include robust testing.** Comprehensive testing within pull requests is crucial for the stability of dbt. 
By prioritizing robust testing, we ensure the reliability of our codebase, minimize unforeseen issues, and safeguard against potential regressions. **We cannot merge changes that risk the backward incompatibility of existing documented behaviors.** We understand that creating thorough tests often requires significant effort, and your dedication to this process greatly contributes to the project's overall reliability. Thank you for your commitment to maintaining the integrity of our codebase and the experience of everyone using dbt! -**PRs go through two review steps.** First, we aim to respond with feedback on whether we think the implementation is appropriate from a product & usability standpoint. At this point, we will close PRs that we believe fall outside the scope of dbt Core, or which might lead to an inconsistent user experience. This is an important part of our role as maintainers; we're always open to hearing disagreement. If a PR passes this first review, we will queue it up for code review, at which point we aim to test it ourselves and provide thorough feedback within the next month. +**PRs go through two review steps.** First, we aim to respond with feedback on whether we think the implementation is appropriate from a product & usability standpoint. At this point, we will close PRs that we believe fall outside the scope of dbt Core, or which might lead to an inconsistent user experience. This is an important part of our role as maintainers; we're always open to hearing disagreement. If a PR passes this first review, we will queue it up for code review, at which point we aim to test it ourselves and provide thorough feedback. -**We receive more PRs than we can thoroughly review, test, and merge.** Our teams have finite capacity, and our top priority is maintaining a well-scoped, high-quality framework for the tens of thousands of people who use it every week. To that end, we must prioritize overall stability and planned improvements over a long tail of niche potential features. For best results, say what in particular you’d like feedback on, and explain what would it mean to you, your team, and other community members to have the proposed change merged. Smaller PRs tackling well-scoped issues tend to be easier and faster for review. Two recent examples of community-contributed PRs: +**We receive more PRs than we can thoroughly review, test, and merge.** Our teams have finite capacity, and our top priority is maintaining a well-scoped, high-quality framework for the tens of thousands of people who use it every week. To that end, we must prioritize overall stability and planned improvements over a long tail of niche potential features. For best results, say what in particular you'd like feedback on, and explain what would it mean to you, your team, and other community members to have the proposed change merged. Smaller PRs tackling well-scoped issues tend to be easier and faster for review. Two examples of community-contributed PRs: - [(dbt-core#9347) Fix configuration of turning test warnings into failures](https://github.com/dbt-labs/dbt-core/pull/9347) - [(dbt-core#9863) Better error message when trying to select a disabled model](https://github.com/dbt-labs/dbt-core/pull/9863) -**Automation that can help us:** Many repositories have a template for pull request descriptions, which will include a checklist that must be completed before the PR can be merged. You don’t have to do all of these things to get an initial PR, but they definitely help. 
Those many include things like: +**Automation that can help us:** Many repositories have a template for pull request descriptions, which will include a checklist that must be completed before the PR can be merged. You don't have to do all of these things to get an initial PR, but skipping them will delay our review process. Those include: -- **Tests!** When you open a PR, some tests and code checks will run. (For security reasons, some may need to be approved by a maintainer.) We will not merge any PRs with failing tests. If you’re not sure why a test is failing, please say so, and we’ll do our best to get to the bottom of it together. +- **Tests, tests, tests.** When you open a PR, some tests and code checks will run. (For security reasons, some may need to be approved by a maintainer.) We will not merge any PRs with failing tests. If you're not sure why a test is failing, please say so, and we'll do our best to get to the bottom of it together. - **Contributor License Agreement** (CLA): This ensures that we can merge your code, without worrying about unexpected implications for the copyright or license of open source dbt software. For more details, read: ["Contributor License Agreements"](../resources/contributor-license-agreements.md) - **Changelog:** In projects that include a number of changes in each release, we need a reliable way to signal what's been included. The mechanism for this will vary by repository, so keep an eye out for notes about how to update the changelog. -### Inclusion in release versions +#### Inclusion in release versions -Both bug fixes and backwards-compatible new features will be included in the [next minor release](/docs/dbt-versions/core#how-dbt-core-uses-semantic-versioning). Fixes for regressions and net-new bugs that were present in the minor version's original release will be backported to versions with [active support](/docs/dbt-versions/core). Other bug fixes may be backported when we have high confidence that they're narrowly scoped and won't cause unintended side effects. +Both bug fixes and backwards-compatible new features will be included in the [next minor release of dbt Core](/docs/dbt-versions/core#how-dbt-core-uses-semantic-versioning). Fixes for regressions and net-new bugs that were present in the minor version's original release will be backported to versions with [active support](/docs/dbt-versions/core). Other bug fixes may be backported when we have high confidence that they're narrowly scoped and won't cause unintended side effects. diff --git a/website/docs/docs/build/environment-variables.md b/website/docs/docs/build/environment-variables.md index 99129cea8c9..95242069ed9 100644 --- a/website/docs/docs/build/environment-variables.md +++ b/website/docs/docs/build/environment-variables.md @@ -83,7 +83,7 @@ If you change the value of an environment variable mid-session while using the I To refresh the IDE mid-development, click on either the green 'ready' signal or the red 'compilation error' message at the bottom right corner of the IDE. A new modal will pop up, and you should select the Refresh IDE button. This will load your environment variables values into your development environment. - + There are some known issues with partial parsing of a project and changing environment variables mid-session in the IDE. If you find that your dbt project is not compiling to the values you've set, try deleting the `target/partial_parse.msgpack` file in your dbt project which will force dbt to re-compile your whole project. 
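If you hit the same stale-value behavior while running dbt Core locally, the fix is the same idea; a minimal sketch, assuming the default `target/` directory:

```shell
# Drop the cached parse state, then force a full re-parse of the project.
rm target/partial_parse.msgpack
dbt parse
```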
diff --git a/website/docs/docs/build/incremental-microbatch.md b/website/docs/docs/build/incremental-microbatch.md index 901f59a167c..4aff8b5839c 100644 --- a/website/docs/docs/build/incremental-microbatch.md +++ b/website/docs/docs/build/incremental-microbatch.md @@ -187,7 +187,7 @@ Several configurations are relevant to microbatch models, and some are required: | [`begin`](/reference/resource-configs/begin) | The "beginning of time" for the microbatch model. This is the starting point for any initial or full-refresh builds. For example, a daily-grain microbatch model run on `2024-10-01` with `begin = '2023-10-01` will process 366 batches (it's a leap year!) plus the batch for "today." | N/A | Date | Required | | [`batch_size`](/reference/resource-configs/batch-size) | The granularity of your batches. Supported values are `hour`, `day`, `month`, and `year` | N/A | String | Required | | [`lookback`](/reference/resource-configs/lookback) | Process X batches prior to the latest bookmark to capture late-arriving records. | `1` | Integer | Optional | -| [`concurrent_batches`](/reference/resource-properties/concurrent_batches) | An override for whether batches run concurrently (at the same time) or sequentially (one after the other). | `None` | Boolean | Optional | +| [`concurrent_batches`](/reference/resource-properties/concurrent_batches) | Overrides dbt's automatic detection of whether batches can run concurrently (at the same time). Read more about [configuring concurrent batches](/docs/build/incremental-microbatch#configure-concurrent_batches). Setting it to:
* `true` runs batches concurrently (in parallel).
* `false` runs batches sequentially (one after the other). | `None` | Boolean | Optional | diff --git a/website/docs/docs/build/unit-tests.md b/website/docs/docs/build/unit-tests.md index a81fc088de7..fc4cf02b34f 100644 --- a/website/docs/docs/build/unit-tests.md +++ b/website/docs/docs/build/unit-tests.md @@ -24,6 +24,7 @@ Starting in dbt Core v1.8, we have introduced an additional type of test to dbt - We currently only support adding unit tests to models in your _current_ project. - We currently _don't_ support unit testing models that use the [`materialized view`](/docs/build/materializations#materialized-view) materialization. - We currently _don't_ support unit testing models that use recursive SQL. +- We currently _don't_ support unit testing models that use introspective queries. - If your model has multiple versions, by default the unit test will run on *all* versions of your model. Read [unit testing versioned models](/reference/resource-properties/unit-testing-versions) for more information. - Unit tests must be defined in a YML file in your [`models/` directory](/reference/project-configs/model-paths). - Table names must be aliased in order to unit test `join` logic. diff --git a/website/docs/docs/cloud/about-cloud/about-dbt-cloud.md b/website/docs/docs/cloud/about-cloud/about-dbt-cloud.md index 08bbcb94c3b..1a7e59dd5c2 100644 --- a/website/docs/docs/cloud/about-cloud/about-dbt-cloud.md +++ b/website/docs/docs/cloud/about-cloud/about-dbt-cloud.md @@ -24,7 +24,7 @@ dbt Cloud's [flexible plans](https://www.getdbt.com/pricing/) and features make diff --git a/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md b/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md index c9d2cbbad30..de44de67b33 100644 --- a/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md +++ b/website/docs/docs/cloud/dbt-cloud-ide/develop-in-the-cloud.md @@ -13,7 +13,7 @@ The dbt Cloud integrated development environment (IDE) is a single web-based int The dbt Cloud IDE offers several [keyboard shortcuts](/docs/cloud/dbt-cloud-ide/keyboard-shortcuts) and [editing features](/docs/cloud/dbt-cloud-ide/ide-user-interface#editing-features) for faster and efficient development and governance: - Syntax highlighting for SQL — Makes it easy to distinguish different parts of your code, reducing syntax errors and enhancing readability. -- AI copilot — Use [dbt Copilot](/docs/cloud/dbt-copilot), a powerful AI engine that can generate documentation, tests, and semantic models for your dbt SQL models. +- AI copilot — Use [dbt Copilot](/docs/cloud/dbt-copilot), a powerful AI engine that can [generate code](/docs/cloud/use-dbt-copilot#generate-and-edit-code) using natural language, and [generate documentation](/docs/build/documentation), [tests](/docs/build/data-tests), and [semantic models](/docs/build/semantic-models) for you with the click of a button. - Auto-completion — Suggests table names, arguments, and column names as you type, saving time and reducing typos. - Code [formatting and linting](/docs/cloud/dbt-cloud-ide/lint-format) — Helps standardize and fix your SQL code effortlessly. - Navigation tools — Easily move around your code, jump to specific lines, find and replace text, and navigate between project files. 
diff --git a/website/docs/docs/cloud/dbt-copilot.md b/website/docs/docs/cloud/dbt-copilot.md index 403df86a089..bd2573e0ff8 100644 --- a/website/docs/docs/cloud/dbt-copilot.md +++ b/website/docs/docs/cloud/dbt-copilot.md @@ -8,10 +8,12 @@ pagination_prev: null # About dbt Copilot -dbt Copilot is a powerful artificial intelligence (AI) engine that's fully integrated into your dbt Cloud experience and designed to accelerate your analytics workflows. dbt Copilot embeds AI-driven assistance across every stage of the analytics development life cycle (ADLC), empowering data practitioners to deliver data products faster, improve data quality, and enhance data accessibility. With automatic code generation, you can let the AI engine generate the [documentation](/docs/build/documentation), [tests](/docs/build/data-tests), and [semantic models](/docs/build/semantic-models) for you. +dbt Copilot is a powerful artificial intelligence (AI) engine that's fully integrated into your dbt Cloud experience and designed to accelerate your analytics workflows. dbt Copilot embeds AI-driven assistance across every stage of the analytics development life cycle (ADLC), empowering data practitioners to deliver data products faster, improve data quality, and enhance data accessibility. + +With automatic code generation, let dbt Copilot [generate code](/docs/cloud/use-dbt-copilot#generate-and-edit-code) using natural language, and [generate documentation](/docs/build/documentation), [tests](/docs/build/data-tests), and [semantic models](/docs/build/semantic-models) for you with the click of a button. :::tip Beta feature -dbt Copilot is designed to _help_ developers generate documentation, tests, and semantic models in dbt Cloud. It's available in beta, in the dbt Cloud IDE only. +dbt Copilot is designed to _help_ developers generate documentation, tests, and semantic models, as well as [code](/docs/cloud/use-dbt-copilot#generate-and-edit-code) using natural language, in dbt Cloud. It's available in beta, in the dbt Cloud IDE only. To use dbt Copilot, you must have an active [dbt Cloud Enterprise account](https://www.getdbt.com/pricing) and either agree to use dbt Labs' OpenAI key or provide your own Open AI API key. [Register here](https://docs.google.com/forms/d/e/1FAIpQLScPjRGyrtgfmdY919Pf3kgqI5E95xxPXz-8JoVruw-L9jVtxg/viewform) or reach out to the Account Team if you're interested in joining the private beta. ::: diff --git a/website/docs/docs/cloud/git/connect-gitlab.md b/website/docs/docs/cloud/git/connect-gitlab.md index 40d84f7d164..d16cdb15b8e 100644 --- a/website/docs/docs/cloud/git/connect-gitlab.md +++ b/website/docs/docs/cloud/git/connect-gitlab.md @@ -10,6 +10,7 @@ Connecting your GitLab account to dbt Cloud provides convenience and another lay - Clone repos using HTTPS rather than SSH. - Carry GitLab user permissions through to dbt Cloud or dbt Cloud CLI's git actions. - Trigger [Continuous integration](/docs/deploy/continuous-integration) builds when merge requests are opened in GitLab. + - GitLab automatically registers a webhook in your GitLab repository to enable seamless integration with dbt Cloud. The steps to integrate GitLab in dbt Cloud depend on your plan. If you are on: - the Developer or Team plan, read these [instructions](#for-dbt-cloud-developer-and-team-tiers). @@ -114,20 +115,10 @@ If your GitLab account is not connected, you’ll see "No connected account". Se Once you approve authorization, you will be redirected to dbt Cloud, and you should see your connected account. 
You're now ready to start developing in the dbt Cloud IDE or dbt Cloud CLI. - ## Troubleshooting -### Errors when importing a repository on dbt Cloud project set up -If you do not see your repository listed, double-check that: -- Your repository is in a Gitlab group you have access to. dbt Cloud will not read repos associated with a user. - -If you do see your repository listed, but are unable to import the repository successfully, double-check that: -- You are a maintainer of that repository. Only users with maintainer permissions can set up repository connections. - -If you imported a repository using the dbt Cloud native integration with GitLab, you should be able to see the clone strategy is using a `deploy_token`. If it's relying on an SSH key, this means the repository was not set up using the native GitLab integration, but rather using the generic git clone option. The repository must be reconnected in order to get the benefits described above. - -## FAQs - + + diff --git a/website/docs/docs/cloud/use-dbt-copilot.md b/website/docs/docs/cloud/use-dbt-copilot.md index 30def967f96..48e5ffa6fa7 100644 --- a/website/docs/docs/cloud/use-dbt-copilot.md +++ b/website/docs/docs/cloud/use-dbt-copilot.md @@ -1,22 +1,73 @@ --- title: "Use dbt Copilot" sidebar_label: "Use dbt Copilot" -description: "Use the dbt Copilot AI engine to generate documentation, tests, and semantic models from scratch, giving you the flexibility to modify or fix generated code." +description: "Use dbt Copilot to generate documentation, tests, semantic models, and SQL code from scratch, giving you the flexibility to modify or fix generated code." --- # Use dbt Copilot -Use dbt Copilot to generate documentation, tests, and semantic models from scratch, giving you the flexibility to modify or fix generated code. To access and use this AI engine: +Use dbt Copilot to generate documentation, tests, semantic models, and code from scratch, giving you the flexibility to modify or fix generated code. -1. Navigate to the dbt Cloud IDE and select a SQL model file under the **File Explorer**. +This page explains how to use dbt Copilot to: -2. In the **Console** section (under the **File Editor**), click **dbt Copilot** to view the available AI options. +- [Generate resources](#generate-resources) — Save time by using dbt Copilot’s generation button to generate documentation, tests, and semantic model files during your development. +- [Generate and edit code](#generate-and-edit-code) — Use natural language prompts to generate SQL code from scratch or to edit an existing SQL file by using keyboard shortcuts or highlighting code. + +## Generate resources +Generate documentation, tests, and semantic model resources with the click of a button using dbt Copilot, saving you time. To access and use this AI feature: + +1. Navigate to the dbt Cloud IDE and select a SQL model file under the **File Explorer**. +2. In the **Console** section (under the **File Editor**), click **dbt Copilot** to view the available AI options. 3. Select the available options to generate the YAML config: **Generate Documentation**, **Generate Tests**, or **Generate Semantic Model**. - To generate multiple YAML configs for the same model, click each option separately. dbt Copilot intelligently saves the YAML config in the same file. - 4. Verify the AI-generated code. You can update or fix the code as needed. - 5. Click **Save As**. You should see the file changes under the **Version control** section. 
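To set expectations for step 4, here is a sketch of the kind of YAML dbt Copilot can produce for a staging model. The model name, columns, and tests are hypothetical; the actual output depends on your model:

```yaml
version: 2

models:
  - name: stg_customers # hypothetical model name
    description: "Staging model for raw customer records."
    columns:
      - name: customer_id
        description: "Primary key for a customer."
        data_tests:
          - unique
          - not_null
```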
+ +## Generate and edit code + +dbt Copilot also allows you to generate SQL code directly within the SQL file in the dbt Cloud IDE, using natural language prompts. This means you can rewrite or add specific portions of the SQL file without needing to edit the entire file. + +This intelligent AI tool streamlines SQL development by reducing errors, scaling effortlessly with complexity, and saving valuable time. dbt Copilot's [prompt window](#use-the-prompt-window), accessible by keyboard shortcut, handles repetitive or complex SQL generation effortlessly so you can focus on high-level tasks. + +Use Copilot's prompt window for use cases like: + +- Writing advanced transformations +- Performing bulk edits efficiently +- Crafting complex patterns like regex + +### Use the prompt window + +Access dbt Copilot's AI prompt window using the keyboard shortcut Cmd+B (Mac) or Ctrl+B (Windows) to: + +#### 1. Generate SQL from scratch +- Use the keyboard shortcuts Cmd+B (Mac) or Ctrl+B (Windows) to generate SQL from scratch. +- Enter your instructions to generate SQL code tailored to your needs using natural language. +- Ask dbt Copilot to fix the code or add a specific portion of the SQL file. + + + +#### 2. Edit existing SQL code +- Highlight a section of SQL code and press Cmd+B (Mac) or Ctrl+B (Windows) to open the prompt window for editing. +- Use this to refine or modify specific code snippets based on your needs. +- Ask dbt Copilot to fix the code or add a specific portion of the SQL file. + +#### 3. Review changes with the diff view +- When a suggestion is generated, Copilot displays a visual "diff" view so you can quickly assess the impact of the proposed changes and compare them with your existing code: - **Green**: New code that will be added if you accept the suggestion. - **Red**: Existing code that will be removed or replaced by the suggested changes. + +#### 4. Accept or reject suggestions +- **Accept**: If the generated SQL meets your requirements, click the **Accept** button to apply the changes to your `.sql` file directly in the IDE. +- **Reject**: If the suggestion doesn't align with your request or prompt, click **Reject** to discard the generated SQL without making changes and start again. + +#### 5. Regenerate code +- To regenerate, press the **Escape** button on your keyboard (or click the Reject button in the popup). This removes the generated code and puts your cursor back into the prompt text area. +- Update your prompt and press **Enter** to try another generation. Press **Escape** again to close the popover entirely. + +Once you've accepted a suggestion, you can continue to use the prompt window to generate additional SQL code and commit your changes to the branch. 
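As a hypothetical example of this workflow: highlighting a bare `select * from {{ ref('stg_orders') }}` and prompting "aggregate orders to one row per customer" might produce a suggestion along these lines (illustrative only; the generated SQL will vary):

```sql
-- Illustrative output; dbt Copilot's actual suggestion depends on your model.
select
    customer_id,
    count(order_id) as order_count,
    sum(amount) as lifetime_value
from {{ ref('stg_orders') }}
group by customer_id
```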
+ + diff --git a/website/docs/docs/community-adapters.md b/website/docs/docs/community-adapters.md index 3af4e15b32b..895e47a8fa3 100644 --- a/website/docs/docs/community-adapters.md +++ b/website/docs/docs/community-adapters.md @@ -7,7 +7,8 @@ Community adapters are adapter plugins contributed and maintained by members of | Data platforms (click to view setup guide) ||| | ------------------------------------------ | -------------------------------- | ------------------------------------- | -| [Clickhouse](/docs/core/connect-data-platform/clickhouse-setup) | [Databend Cloud](/docs/core/connect-data-platform/databend-setup) | [Doris & SelectDB](/docs/core/connect-data-platform/doris-setup) | +| [Clickhouse](/docs/core/connect-data-platform/clickhouse-setup) | [CrateDB](/docs/core/connect-data-platform/cratedb-setup) +| [Databend Cloud](/docs/core/connect-data-platform/databend-setup) | [Doris & SelectDB](/docs/core/connect-data-platform/doris-setup) | | [DuckDB](/docs/core/connect-data-platform/duckdb-setup) | [Exasol Analytics](/docs/core/connect-data-platform/exasol-setup) | [Extrica](/docs/core/connect-data-platform/extrica-setup) | | [Hive](/docs/core/connect-data-platform/hive-setup) | [IBM DB2](/docs/core/connect-data-platform/ibmdb2-setup) | [Impala](/docs/core/connect-data-platform/impala-setup) | | [Infer](/docs/core/connect-data-platform/infer-setup) | [iomete](/docs/core/connect-data-platform/iomete-setup) | [MindsDB](/docs/core/connect-data-platform/mindsdb-setup) | diff --git a/website/docs/docs/core/connect-data-platform/cratedb-setup.md b/website/docs/docs/core/connect-data-platform/cratedb-setup.md new file mode 100644 index 00000000000..fa1b9833e59 --- /dev/null +++ b/website/docs/docs/core/connect-data-platform/cratedb-setup.md @@ -0,0 +1,62 @@ +--- +title: "CrateDB setup" +description: "Read this guide to learn about the CrateDB data platform setup in dbt." +id: "cratedb-setup" +meta: + maintained_by: Crate.io, Inc. + authors: 'CrateDB maintainers' + github_repo: 'crate/dbt-cratedb2' + pypi_package: 'dbt-cratedb2' + min_core_version: 'v1.0.0' + cloud_support: Not Supported + min_supported_version: 'n/a' + slack_channel_name: 'Community Forum' + slack_channel_link: 'https://community.cratedb.com/' + platform_name: 'CrateDB' + config_page: '/reference/resource-configs/no-configs' +--- + +import SetUpPages from '/snippets/_setup-pages-intro.md'; + + + + +[CrateDB] is compatible with PostgreSQL, so its dbt adapter strongly depends on +dbt-postgres, documented at [PostgreSQL profile setup]. + +CrateDB targets are configured in exactly the same way (see also [PostgreSQL +configuration]), with just a few CrateDB-specific things to consider. +Relevant details are outlined at [using dbt with CrateDB], +which also includes up-to-date information. + + +## Profile configuration + +CrateDB targets should be set up using a configuration like this minimal sample +of settings in your [`profiles.yml`] file. + + + +```yaml +cratedb_analytics: + target: dev + outputs: + dev: + type: cratedb + host: [clustername].aks1.westeurope.azure.cratedb.net + port: 5432 + user: [username] + pass: [password] + dbname: crate # Do not change this value. CrateDB's only catalog is `crate`. + schema: doc # Define the schema name. CrateDB's default schema is `doc`. 
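+      # Optional: `threads` is a standard dbt profile setting; treating it as
+      # supported here is an assumption based on dbt-cratedb2 building on dbt-postgres.
+      threads: 4 # Number of models dbt builds concurrently.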
+``` + + + + + +[CrateDB]: https://cratedb.com/database +[PostgreSQL configuration]: https://docs.getdbt.com/reference/resource-configs/postgres-configs +[PostgreSQL profile setup]: https://docs.getdbt.com/docs/core/connect-data-platform/postgres-setup +[`profiles.yml`]: https://docs.getdbt.com/docs/core/connect-data-platform/profiles.yml +[using dbt with CrateDB]: https://cratedb.com/docs/guide/integrate/dbt/ diff --git a/website/docs/docs/core/connect-data-platform/dremio-setup.md b/website/docs/docs/core/connect-data-platform/dremio-setup.md index 21d0ee2956b..69f2b14fc4f 100644 --- a/website/docs/docs/core/connect-data-platform/dremio-setup.md +++ b/website/docs/docs/core/connect-data-platform/dremio-setup.md @@ -60,10 +60,6 @@ Next, configure the profile for your project. When you initialize a project, you create one of these three profiles. You must configure it before trying to connect to Dremio Cloud or Dremio Software. -## Profiles - -When you initialize a project, you create one of these three profiles. You must configure it before trying to connect to Dremio Cloud or Dremio Software. - * Profile for Dremio Cloud * Profile for Dremio Software with Username/Password Authentication * Profile for Dremio Software with Authentication Through a Personal Access Token @@ -149,9 +145,7 @@ For descriptions of the configurations in these profiles, see [Configurations](# -## Configurations - -### Configurations Common to Profiles for Dremio Cloud and Dremio Software +## Configurations Common to Profiles for Dremio Cloud and Dremio Software | Configuration | Required? | Default Value | Description | diff --git a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md index e9e45a69153..026fb1a2a11 100644 --- a/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md +++ b/website/docs/docs/dbt-versions/core-upgrade/07-upgrading-to-v1.8.md @@ -1,5 +1,5 @@ --- -title: "Upgrading to v1.8 (latest)" +title: "Upgrading to v1.8" id: upgrading-to-v1.8 description: New features and changes in dbt Core v1.8 displayed_sidebar: "docs" diff --git a/website/docs/docs/dbt-versions/release-notes.md b/website/docs/docs/dbt-versions/release-notes.md index a8cec46fe90..e9245399e65 100644 --- a/website/docs/docs/dbt-versions/release-notes.md +++ b/website/docs/docs/dbt-versions/release-notes.md @@ -20,6 +20,7 @@ Release notes are grouped by month for both multi-tenant and virtual private clo ## December 2024 +- **Fix**: [The dbt Semantic Layer](/docs/use-dbt-semantic-layer/dbt-sl) now respects the BigQuery [`execution_project` attribute](/docs/core/connect-data-platform/bigquery-setup#execution-project), including for exports. - **New**: [Model notifications](/docs/deploy/model-notifications) are now generally available in dbt Cloud. These notifications alert model owners through email about any issues encountered by models and tests as soon as they occur while running a job. - **New**: You can now use your [Azure OpenAI key](/docs/cloud/account-integrations?ai-integration=azure#ai-integrations) (available in beta) to use dbt Cloud features like [dbt Copilot](/docs/cloud/dbt-copilot) and [Ask dbt](/docs/cloud-integrations/snowflake-native-app) . Additionally, you can use your own [OpenAI API key](/docs/cloud/account-integrations?ai-integration=openai#ai-integrations) or use [dbt Labs-managed OpenAI](/docs/cloud/account-integrations?ai-integration=dbtlabs#ai-integrations) key. 
Refer to [AI integrations](/docs/cloud/account-integrations#ai-integrations) for more information. - **New**: The [`hard_deletes`](/reference/resource-configs/hard-deletes) config gives you more control on how to handle deleted rows from the source. Supported options are `ignore` (default), `invalidate` (replaces the legacy `invalidate_hard_deletes=true`), and `new_record`. Note that `new_record` will create a new metadata column in the snapshot table. diff --git a/website/docs/docs/deploy/ci-jobs.md b/website/docs/docs/deploy/ci-jobs.md index 1128dfd7abc..0f9b6ba377a 100644 --- a/website/docs/docs/deploy/ci-jobs.md +++ b/website/docs/docs/deploy/ci-jobs.md @@ -188,6 +188,8 @@ To validate _all_ semantic nodes in your project, add the following command to d ## Troubleshooting + + If your temporary schemas aren't dropping after a PR merges or closes, this typically indicates one of these issues: - You have overridden the generate_schema_name macro and it isn't using dbt_cloud_pr_ as the prefix. @@ -201,6 +203,7 @@ A macro is creating a schema but there are no dbt models writing to that schema. + If you receive a schema-related error message referencing a previous PR, this is usually an indicator that you are not using a production job for your deferral and are instead using self. If the prior PR has already been merged, the prior PR's schema may have been dropped by the time the CI job for the current PR is kicked off. diff --git a/website/docs/docs/deploy/retry-jobs.md b/website/docs/docs/deploy/retry-jobs.md index f439351aec5..4e3ad0d429f 100644 --- a/website/docs/docs/deploy/retry-jobs.md +++ b/website/docs/docs/deploy/retry-jobs.md @@ -10,6 +10,7 @@ If your dbt job run completed with a status of **Error**, you can rerun it from - You have a [dbt Cloud account](https://www.getdbt.com/signup). - You must be using [dbt version](/docs/dbt-versions/upgrade-dbt-version-in-cloud) 1.6 or newer. +- dbt can successfully parse the project and generate a [manifest](/reference/artifacts/manifest-json) - The most recent run of the job hasn't completed successfully. The latest status of the run is **Error**. - The job command that failed in the run must be one that supports the [retry command](/reference/commands/retry). diff --git a/website/docs/docs/get-started-dbt.md b/website/docs/docs/get-started-dbt.md index 428253ec139..1920a9b3da2 100644 --- a/website/docs/docs/get-started-dbt.md +++ b/website/docs/docs/get-started-dbt.md @@ -6,7 +6,7 @@ pagination_next: null pagination_prev: null --- -Begin your dbt journey by trying one of our quickstarts, which provides a step-by-step guide to help you set up dbt Cloud or dbt Core with a [variety of data platforms](/docs/cloud/connect-data-platform/about-connections). +Begin your dbt journey by trying one of our quickstarts, which provides a step-by-step guide to help you set up [dbt Cloud](#dbt-cloud) or [dbt Core](#dbt-core) with a [variety of data platforms](/docs/cloud/connect-data-platform/about-connections). ## dbt Cloud @@ -76,13 +76,23 @@ Learn more about [dbt Cloud features](/docs/cloud/about-cloud/dbt-cloud-feature [dbt Core](/docs/core/about-core-setup) is a command-line [open-source tool](https://github.com/dbt-labs/dbt-core) that enables data practitioners to transform data using analytics engineering best practices. It suits individuals and small technical teams who prefer manual setup and customization, supports community adapters, and open-source standards. -Refer to the following quickstarts to get started with dbt Core: +
+ + -- [dbt Core from a manual install](/guides/manual-install) to learn how to install dbt Core and set up a project. -- [dbt Core using GitHub Codespace](/guides/codespace?step=1) to learn how to create a codespace and execute the `dbt build` command. + +
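For orientation, the manual-install path boils down to a few commands. A sketch, assuming Python 3 and a Postgres warehouse (swap in the adapter for your platform):

```shell
python -m venv dbt-env && source dbt-env/bin/activate # isolate the install
python -m pip install dbt-core dbt-postgres           # dbt Core plus one adapter
dbt init my_project                                   # scaffold a new dbt project
```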
## Related docs - + Expand your dbt knowledge and expertise with these additional resources: - [Join the bi-weekly demos](https://www.getdbt.com/resources/webinars/dbt-cloud-demos-with-experts) to see dbt Cloud in action and ask questions. diff --git a/website/docs/faqs/Troubleshooting/error-importing-repo.md b/website/docs/faqs/Troubleshooting/error-importing-repo.md new file mode 100644 index 00000000000..85c9ffb0745 --- /dev/null +++ b/website/docs/faqs/Troubleshooting/error-importing-repo.md @@ -0,0 +1,14 @@ +--- +title: Errors importing a repository on dbt Cloud project set up +description: "Errors importing a repository on dbt Cloud project set up" +sidebar_label: 'Errors importing a repository on dbt Cloud project set up' +id: error-importing-repo +--- + +If you don't see your repository listed, double-check that: +- Your repository is in a GitLab group you have access to. dbt Cloud will not read repos associated with a user. + +If you do see your repository listed, but are unable to import the repository successfully, double-check that: +- You are a maintainer of that repository. Only users with maintainer permissions can set up repository connections. + +If you imported a repository using the dbt Cloud native integration with GitLab, you should be able to see if the clone strategy is using a `deploy_token`. If it's relying on an SSH key, this means the repository was not set up using the native GitLab integration, but rather using the generic git clone option. The repository must be reconnected in order to get the benefits described above. diff --git a/website/docs/faqs/Troubleshooting/gitlab-webhook.md b/website/docs/faqs/Troubleshooting/gitlab-webhook.md new file mode 100644 index 00000000000..450796db83e --- /dev/null +++ b/website/docs/faqs/Troubleshooting/gitlab-webhook.md @@ -0,0 +1,19 @@ +--- +title: Unable to trigger a CI job with GitLab +description: "Unable to trigger a CI job" +sidebar_label: 'Unable to trigger a CI job' +id: gitlab-webhook +--- + +When you connect dbt Cloud to a GitLab repository, GitLab automatically registers a webhook in the background, viewable under the repository settings. This webhook is also used to trigger [CI jobs](/docs/deploy/ci-jobs) when you push to the repository. + +If you're unable to trigger a CI job, this usually indicates that the webhook registration is missing or incorrect. + +To resolve this issue, view the webhook registrations in GitLab by navigating to your repository's **Settings** --> **Webhooks**. + +Some things to check: + +- The webhook registration is enabled in GitLab. +- The webhook registration is configured with the correct URL and secret. + +If you're still experiencing this issue, reach out to the Support team at support@getdbt.com and we'll be happy to help! diff --git a/website/docs/guides/mesh-qs.md b/website/docs/guides/mesh-qs.md index 9a7aa8b0ce0..d81951c9669 100644 --- a/website/docs/guides/mesh-qs.md +++ b/website/docs/guides/mesh-qs.md @@ -94,7 +94,7 @@ To set a production environment: 6. Click **Test Connection** to confirm the deployment connection. 6. Click **Save** to create a production environment. 
- + ## Set up a foundational project diff --git a/website/docs/reference/database-permissions/snowflake-permissions.md b/website/docs/reference/database-permissions/snowflake-permissions.md index 3f474242834..1ab35e46d26 100644 --- a/website/docs/reference/database-permissions/snowflake-permissions.md +++ b/website/docs/reference/database-permissions/snowflake-permissions.md @@ -83,6 +83,7 @@ grant role reporter to user looker_user; -- or mode_user, periscope_user ``` 5. Let loader load data + Give the role unilateral permission to operate on the raw database ``` use role sysadmin; @@ -90,6 +91,7 @@ grant all on database raw to role loader; ``` 6. Let transformer transform data + The transformer role needs to be able to read raw data. If you do this before you have any data loaded, you can run: @@ -110,6 +112,7 @@ transformer also needs to be able to create in the analytics database: grant all on database analytics to role transformer; ``` 7. Let reporter read the transformed data + A previous version of this article recommended this be implemented through hooks in dbt, but this way lets you get away with a one-off statement. ``` grant usage on database analytics to role reporter; @@ -120,10 +123,11 @@ grant select on future views in database analytics to role reporter; Again, if you already have data in your analytics database, make sure you run: ``` grant usage on all schemas in database analytics to role reporter; -grant select on all tables in database analytics to role transformer; -grant select on all views in database analytics to role transformer; +grant select on all tables in database analytics to role reporter; +grant select on all views in database analytics to role reporter; ``` 8. Maintain + When new users are added, make sure you add them to the right role! Everything else should be inherited automatically thanks to those `future` grants. For more discussion and legacy information, refer to [this Discourse article](https://discourse.getdbt.com/t/setting-up-snowflake-the-exact-grant-statements-we-run/439). 
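To confirm the statements above landed as intended, Snowflake can report grants back to you. A quick check, using the role and database names from the example:

```sql
-- Inspect what the reporter role can currently access
show grants to role reporter;

-- Confirm the future grants on the analytics database were registered
show future grants in database analytics;
```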
diff --git a/website/docs/reference/model-configs.md b/website/docs/reference/model-configs.md index 9508cf68ceb..6c37b69758c 100644 --- a/website/docs/reference/model-configs.md +++ b/website/docs/reference/model-configs.md @@ -36,9 +36,11 @@ models: [+](/reference/resource-configs/plus-prefix)[materialized](/reference/resource-configs/materialized): [+](/reference/resource-configs/plus-prefix)[sql_header](/reference/resource-configs/sql_header): [+](/reference/resource-configs/plus-prefix)[on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail #only for materialized views on supported adapters + [+](/reference/resource-configs/plus-prefix)[unique_key](/reference/resource-configs/unique_key): ``` + @@ -57,6 +59,7 @@ models: [materialized](/reference/resource-configs/materialized): [sql_header](/reference/resource-configs/sql_header): [on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail #only for materialized views on supported adapters + [unique_key](/reference/resource-configs/unique_key): ``` @@ -69,12 +72,13 @@ models: -```jinja +```sql {{ config( [materialized](/reference/resource-configs/materialized)="", [sql_header](/reference/resource-configs/sql_header)="" [on_configuration_change](/reference/resource-configs/on_configuration_change): apply | continue | fail #only for materialized views for supported adapters + [unique_key](/reference/resource-configs/unique_key)='column_name_or_expression' ) }} ``` @@ -212,7 +216,7 @@ models: -```jinja +```sql {{ config( [enabled](/reference/resource-configs/enabled)=true | false, @@ -233,7 +237,7 @@ models: -```jinja +```sql {{ config( [enabled](/reference/resource-configs/enabled)=true | false, @@ -246,8 +250,9 @@ models: [persist_docs](/reference/resource-configs/persist_docs)={}, [meta](/reference/resource-configs/meta)={}, [grants](/reference/resource-configs/grants)={}, - [contract](/reference/resource-configs/contract)={} - [event_time](/reference/resource-configs/event-time): my_time_field + [contract](/reference/resource-configs/contract)={}, + [event_time](/reference/resource-configs/event-time)='my_time_field', + ) }} ```
diff --git a/website/docs/reference/node-selection/defer.md b/website/docs/reference/node-selection/defer.md index 863494de12e..eddb1ece9d4 100644 --- a/website/docs/reference/node-selection/defer.md +++ b/website/docs/reference/node-selection/defer.md @@ -29,11 +29,12 @@ dbt test --models [...] --defer --state path/to/artifacts -When the `--defer` flag is provided, dbt will resolve `ref` calls differently depending on two criteria: -1. Is the referenced node included in the model selection criteria of the current run? -2. Does the referenced node exist as a database object in the current environment? +By default, dbt uses the [`target`](/reference/dbt-jinja-functions/target) namespace to resolve `ref` calls. -If the answer to both is **no**—a node is not included _and_ it does not exist as a database object in the current environment—references to it will use the other namespace instead, provided by the state manifest. +When `--defer` is enabled, dbt resolves `ref` calls using the state manifest instead, but only if: + +1. The node isn’t among the selected nodes, _and_ +2. It doesn’t exist in the database (or `--favor-state` is used). Ephemeral models are never deferred, since they serve as "passthroughs" for other `ref` calls.
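To make those resolution rules concrete, here is a sketch; the model, schema, and path names are illustrative, not from the docs. Suppose `customers` refs `stg_orders`, and you run only `customers` with deferral enabled:

```sql
-- models/customers.sql
-- Run with: dbt run -s customers --defer --state path/to/prod-artifacts
select *
from {{ ref('stg_orders') }}
-- stg_orders isn't selected and, if it also doesn't exist in your development
-- schema, the ref resolves from the state manifest to its production location
-- (for example, analytics.prod.stg_orders) rather than the target schema.
```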
@@ -46,7 +47,7 @@ Deferral requires both `--defer` and `--state` to be set, either by passing flag #### Favor state -You can optionally skip the second criterion by passing the `--favor-state` flag. If passed, dbt will favor using the node defined in your `--state` namespace, even if the node exists in the current target. +When `--favor-state` is passed, dbt prioritizes node definitions from the `--state` directory. However, this doesn’t apply if the node is also part of the selected nodes. ### Example
diff --git a/website/docs/reference/project-configs/seed-paths.md b/website/docs/reference/project-configs/seed-paths.md index d99c1b5a907..53e2902cae0 100644 --- a/website/docs/reference/project-configs/seed-paths.md +++ b/website/docs/reference/project-configs/seed-paths.md @@ -38,7 +38,7 @@ absolute="/Users/username/project/seed" ``` ## Examples -### Use a subdirectory named `custom_seeds` instead of `seeds` +### Use a directory named `custom_seeds` instead of `seeds`
diff --git a/website/docs/reference/resource-configs/alias.md b/website/docs/reference/resource-configs/alias.md index c14804ef2a7..5beaa238806 100644 --- a/website/docs/reference/resource-configs/alias.md +++ b/website/docs/reference/resource-configs/alias.md @@ -8,9 +8,11 @@ datatype: string -Specify a custom alias for a model in your `dbt_project.yml` file or config block. +Specify a custom alias for a model in your `dbt_project.yml` file, `models/properties.yml` file, or config block in a SQL file. -For example, if you have a model that calculates `sales_total` and want to give it a more user-friendly alias, you can alias it like this: +For example, if you have a model that calculates `sales_total` and want to give it a more user-friendly alias, you can alias it as shown in the following examples. + +In the `dbt_project.yml` file, the following example sets a default `alias` for the `sales_total` model at the project level: @@ -22,16 +24,40 @@ models: ``` +The following specifies an `alias` as part of the `models/properties.yml` file metadata, useful for centralized configuration: + + + +```yml +version: 2 + +models: + - name: sales_total + config: + alias: sales_dashboard +``` + + +The following assigns the `alias` directly in the `models/sales_total.sql` file: + + + +```sql +{{ config( + alias="sales_dashboard" +) }} +``` + + This would return `analytics.finance.sales_dashboard` in the database, instead of the default `analytics.finance.sales_total`. +Configure a seed's alias in your `dbt_project.yml` file or a `properties.yml` file. The following examples demonstrate how to `alias` a seed named `product_categories` to `categories_data`. -Configure a seed's alias in your `dbt_project.yml` file or config block. - -For example, if you have a seed that represents `product_categories` and want to alias it as `categories_data`, you would alias like this: +In the `dbt_project.yml` file at the project level: @@ -41,6 +67,21 @@ seeds: product_categories: +alias: categories_data ``` + + +In the `seeds/properties.yml` file: + + + +```yml +version: 2 + +seeds: + - name: product_categories + config: + alias: categories_data +``` + This would return the name `analytics.finance.categories_data` in the database. @@ -55,9 +96,6 @@ seeds: +alias: country_mappings ``` - - - @@ -65,7 +103,9 @@ seeds: Configure a snapshots's alias in your `dbt_project.yml` file or config block.
-For example, if you have a snapshot that is named `your_snapshot` and want to alias it as `the_best_snapshot`, you would alias like this: +The following examples demonstrate how to `alias` a snapshot named `your_snapshot` to `the_best_snapshot`. + +In the `dbt_project.yml` file at the project level: @@ -75,20 +115,57 @@ snapshots: your_snapshot: +alias: the_best_snapshot ``` + -This would build your snapshot to `analytics.finance.the_best_snapshot` in the database. +In the `snapshots/properties.yml` file: + + +```yml +version: 2 + +snapshots: + - name: your_snapshot + config: + alias: the_best_snapshot +``` +In the `snapshots/your_snapshot.sql` file: + + + +```sql +{{ config( + alias="the_best_snapshot" +) }} +``` + + +This would build your snapshot to `analytics.finance.the_best_snapshot` in the database. + -Configure a test's alias in your `schema.yml` file or config block. +Configure a data test's alias in your `dbt_project.yml` file, `properties.yml` file, or config block in the model file. -For example, to add a unique test to the `order_id` column and give it an alias `unique_order_id_test` to identify this specific test, you would alias like this: +The following examples demonstrate how to `alias` a `unique` data test on the `order_id` column as `unique_order_id_test` so you can identify that specific data test. - +In the `dbt_project.yml` file at the project level: + + + +```yml +tests: + your_project: + +alias: unique_order_id_test +``` + + +In the `models/properties.yml` file: + + ```yml models: @@ -99,10 +176,22 @@ models: - unique: alias: unique_order_id_test ``` + + +In the `tests/unique_order_id_test.sql` file: + + + +```sql +{{ config( + alias="unique_order_id_test", + severity="error" +) }} +``` + When using [`store_failures_as`](/reference/resource-configs/store_failures_as), this would return the name `analytics.finance.orders_order_id_unique_order_id_test` in the database. - +
diff --git a/website/docs/reference/resource-configs/bigquery-configs.md b/website/docs/reference/resource-configs/bigquery-configs.md index ab5f562f57c..c912bca0688 100644 --- a/website/docs/reference/resource-configs/bigquery-configs.md +++ b/website/docs/reference/resource-configs/bigquery-configs.md @@ -909,3 +909,10 @@ By default, this is set to `True` to support the default `intermediate_format` o ### The `intermediate_format` parameter The `intermediate_format` parameter specifies which file format to use when writing records to a table. The default is `parquet`. + + +## Unit test limitations + +You must specify all fields in a BigQuery `STRUCT` for [unit tests](/docs/build/unit-tests). You cannot use only a subset of fields in a `STRUCT`. + +
diff --git a/website/docs/reference/resource-configs/no-configs.md b/website/docs/reference/resource-configs/no-configs.md index 5eec26917c8..f72b286c837 100644 --- a/website/docs/reference/resource-configs/no-configs.md +++ b/website/docs/reference/resource-configs/no-configs.md @@ -1,11 +1,12 @@ --- -title: "No specifc configurations for this Adapter" +title: "No specific configurations for this adapter" id: "no-configs" --- If you were guided to this page from a data platform setup article, it most likely means: - Setting up the profile is the only action the end-user needs to take on the data platform, or -- The subsequent actions the end-user needs to take are not currently documented +- The subsequent actions the end-user needs to take are not currently documented, or +- Relevant information is provided on the documentation pages of the data platform vendor.
If you'd like to contribute to data platform-specific configuration information, refer to [Documenting a new adapter](/guides/adapter-creation)
diff --git a/website/docs/reference/resource-configs/unique_key.md b/website/docs/reference/resource-configs/unique_key.md index 77c99937295..071102bae6d 100644 --- a/website/docs/reference/resource-configs/unique_key.md +++ b/website/docs/reference/resource-configs/unique_key.md @@ -1,12 +1,65 @@ --- -resource_types: [snapshots] +resource_types: [snapshots, models] description: "Learn more about unique_key configurations in dbt." datatype: column_name_or_expression --- + + + + +Configure the `unique_key` in the `config` block of your [incremental model's](/docs/build/incremental-models) SQL file, in your `models/properties.yml` file, or in your `dbt_project.yml` file. + + + +```sql +{{ + config( + materialized='incremental', + unique_key='id' + ) +}} + +``` + + + + + +```yaml +models: + - name: my_incremental_model + description: "An incremental model example with a unique key." + config: + materialized: incremental + unique_key: id + +``` + + + + + +```yaml +name: jaffle_shop + +models: + jaffle_shop: + staging: + +unique_key: id +``` + + + + + + + +For [snapshots](/docs/build/snapshots), configure the `unique_key` in your `snapshots/filename.yml` file or in your `dbt_project.yml` file. + ```yaml @@ -23,6 +76,8 @@ snapshots: +Configure the `unique_key` in the `config` block of your snapshot SQL file or in your `dbt_project.yml` file. + import SnapshotYaml from '/snippets/_snapshot-yaml-spec.md'; @@ -49,10 +104,13 @@ snapshots: + + + ## Description -A column name or expression that is unique for the inputs of a snapshot. dbt uses this to match records between a result set and an existing snapshot, so that changes can be captured correctly. +A column name or expression that is unique for the inputs of a snapshot or incremental model. dbt uses this to match records between a result set and an existing snapshot or incremental model, so that changes can be captured correctly. -In dbt Cloud "Latest" and dbt v1.9+, [snapshots](/docs/build/snapshots) are defined and configured in YAML files within your `snapshots/` directory. You can specify one or multiple `unique_key` values within your snapshot YAML file's `config` key. +In dbt Cloud "Latest" release track and from dbt v1.9, [snapshots](/docs/build/snapshots) are defined and configured in YAML files within your `snapshots/` directory. You can specify one or multiple `unique_key` values within your snapshot YAML file's `config` key. :::caution @@ -67,6 +125,32 @@ This is a **required parameter**. No default is provided. ## Examples ### Use an `id` column as a unique key + + + + +In this example, the `id` column is the unique key for an incremental model. + + + +```sql +{{ + config( + materialized='incremental', + unique_key='id' + ) +}} + +select * from .. +``` + + + + + + +In this example, the `id` column is used as a unique key for a snapshot. + @@ -114,10 +198,38 @@ snapshots: + + + ### Use multiple unique keys + + + +Configure multiple unique keys for an incremental model as a string representing a single column or a list of single-quoted column names that can be used together, for example, `['col1', 'col2', …]`. + +Columns must not contain null values; otherwise, the incremental model will fail to match rows and generate duplicate rows. Refer to [Defining a unique key](/docs/build/incremental-models#defining-a-unique-key-optional) for more information.
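Given that null caveat, one option is to make the key columns null-safe inside the model itself. A minimal sketch, where the sentinel values, column names, and `stg_orders` source are illustrative:

```sql
{{ config(
    materialized='incremental',
    unique_key=['order_id', 'location_id']
) }}

select
    -- Substitute sentinels so null keys still match consistently between runs
    coalesce(order_id, -1) as order_id,
    coalesce(location_id, -1) as location_id,
    order_total
from {{ ref('stg_orders') }}
```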
+ + + +```sql +{{ config( + materialized='incremental', + unique_key=['order_id', 'location_id'] +) }} + +with... + +``` + + + + + + + You can configure snapshots to use multiple unique keys for `primary_key` columns. @@ -137,12 +249,35 @@ snapshots: ``` + + ### Use a combination of two columns as a unique key + + + + + +```sql +{{ config( + materialized='incremental', + unique_key=['order_id', 'location_id'] +) }} + +with... + +``` + + + + + + + This configuration accepts a valid column expression. As such, you can concatenate two columns together as a unique key if required. It's a good idea to use a separator (for example, `'-'`) to ensure uniqueness. @@ -170,7 +305,6 @@ from {{ source('erp', 'transactions') }} Though, it's probably a better idea to construct this column in your query and use that as the `unique_key`: - ```sql @@ -211,4 +345,6 @@ from {{ source('erp', 'transactions') }} ``` + + diff --git a/website/sidebars.js b/website/sidebars.js index 08494e4c713..9a93980b12c 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -222,6 +222,7 @@ const sidebarSettings = { "docs/core/connect-data-platform/athena-setup", "docs/core/connect-data-platform/glue-setup", "docs/core/connect-data-platform/clickhouse-setup", + "docs/core/connect-data-platform/cratedb-setup", "docs/core/connect-data-platform/databend-setup", "docs/core/connect-data-platform/decodable-setup", "docs/core/connect-data-platform/doris-setup", @@ -941,6 +942,7 @@ const sidebarSettings = { "reference/resource-configs/pre-hook-post-hook", "reference/resource-configs/schema", "reference/resource-configs/tags", + "reference/resource-configs/unique_key", "reference/resource-configs/meta", "reference/advanced-config-usage", "reference/resource-configs/plus-prefix", @@ -985,7 +987,6 @@ const sidebarSettings = { "reference/resource-configs/strategy", "reference/resource-configs/target_database", "reference/resource-configs/target_schema", - "reference/resource-configs/unique_key", "reference/resource-configs/updated_at", ], }, diff --git a/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation-prompt.jpg b/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation-prompt.jpg new file mode 100644 index 00000000000..da42bbd83dd Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation-prompt.jpg differ diff --git a/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation.gif b/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation.gif new file mode 100644 index 00000000000..74e6409e34d Binary files /dev/null and b/website/static/img/docs/dbt-cloud/cloud-ide/copilot-sql-generation.gif differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/job-override.gif b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/job-override.gif index 3ce6cee6259..1fb2cbd3e97 100644 Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/job-override.gif and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/job-override.gif differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.gif b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.gif index 4185e3c98d8..d3e64f2c4af 100644 Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.gif and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment 
Variables/personal-override.gif differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.png index 64b0ac8170f..b221a0b73ba 100644 Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.png and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/personal-override.png differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.gif b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.gif deleted file mode 100644 index 14b700547ca..00000000000 Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.gif and /dev/null differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.png new file mode 100644 index 00000000000..54588f53d5d Binary files /dev/null and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/Environment Variables/refresh-ide.png differ diff --git a/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings-1.png b/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings-1.png index 581c4ca6cbc..5fd53ffde78 100644 Binary files a/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings-1.png and b/website/static/img/docs/dbt-cloud/using-dbt-cloud/prod-settings-1.png differ diff --git a/website/vercel.json b/website/vercel.json index fa90697a517..b68dc053db9 100644 --- a/website/vercel.json +++ b/website/vercel.json @@ -3651,7 +3651,7 @@ }, { "key": "Content-Security-Policy", - "value": "img-src 'self' data: https:;" + "value": "img-src 'self' data: https:; frame-ancestors 'self' https://*.mutinyhq.com https://*.getdbt.com" }, { "key": "Strict-Transport-Security",