Merge branch 'service-principal' of https://github.com/dbt-labs/docs.getdbt.com into service-principal
matthewshaver committed Jan 16, 2025
2 parents 72a156b + 2d56016 commit ae9e93d
Showing 12 changed files with 173 additions and 29 deletions.
5 changes: 5 additions & 0 deletions website/docs/docs/build/custom-aliases.md
@@ -157,3 +157,8 @@ If these models should indeed have the same database identifier, you can work ar

By default, dbt will create versioned models with the alias `<model_name>_v<v>`, where `<v>` is that version's unique identifier. You can customize this behavior just like for non-versioned models by configuring a custom `alias` or re-implementing the `generate_alias_name` macro.
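For example, here's a minimal sketch (model name and version numbers are hypothetical) that pins the latest version back to the unversioned alias:

```yaml
# models/schema.yml -- hypothetical example
models:
  - name: dim_customers
    latest_version: 2
    versions:
      - v: 1                     # built as dim_customers_v1 by default
      - v: 2
        config:
          alias: dim_customers   # overrides the default dim_customers_v2
```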

## Related docs

- [Customize dbt models database, schema, and alias](/guides/customize-schema-alias?step=1) &mdash; a step-by-step guide covering all three configurations
- [Custom schemas](/docs/build/custom-schemas) &mdash; how to customize the schemas dbt builds models into
- [Custom databases](/docs/build/custom-databases) &mdash; how to customize the databases dbt builds models into
6 changes: 6 additions & 0 deletions website/docs/docs/build/custom-databases.md
@@ -98,3 +98,9 @@ See docs on macro `dispatch`: ["Managing different global overrides across packa
### BigQuery

When dbt opens a BigQuery connection, it will do so using the `project_id` defined in your active `profiles.yml` target. This `project_id` will be billed for the queries that are executed in the dbt run, even if some models are configured to be built in other projects.
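As an illustrative sketch (profile and project names are hypothetical), the billed project always comes from the active target's `project`:

```yaml
# profiles.yml -- hypothetical names
jaffle_shop:
  target: prod
  outputs:
    prod:
      type: bigquery
      method: oauth
      project: billing-project   # all query costs from the run are billed here
      dataset: analytics
      threads: 4
```

With this profile, a model configured with `database: other-project` still bills its queries to `billing-project`.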

## Related docs

- [Customize dbt models database, schema, and alias](/guides/customize-schema-alias?step=1) &mdash; a step-by-step guide covering all three configurations
- [Custom schemas](/docs/build/custom-schemas) &mdash; how to customize the schemas dbt builds models into
- [Custom aliases](/docs/build/custom-aliases) &mdash; how to customize model alias names
6 changes: 6 additions & 0 deletions website/docs/docs/build/custom-schemas.md
@@ -207,3 +207,9 @@ In the `generate_schema_name` macro examples shown in the [built-in alternative
If your schema names are being generated incorrectly, double-check your target name in the relevant environment.

For more information, consult the [managing environments in dbt Core](/docs/core/dbt-core-environments) guide.
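For instance, a sketch of a two-target profile (all names hypothetical) whose `target.name` values a custom `generate_schema_name` macro might branch on:

```yaml
# profiles.yml -- hypothetical targets
jaffle_shop:
  target: dev            # switch to prod in your deployment environment
  outputs:
    dev:
      type: postgres
      host: localhost
      user: jane
      password: "{{ env_var('DBT_PASSWORD') }}"
      port: 5432
      dbname: analytics
      schema: dbt_jane   # custom schema names concatenate onto this in dev
      threads: 4
    prod:
      type: postgres
      host: db.internal
      user: dbt_prod
      password: "{{ env_var('DBT_PASSWORD') }}"
      port: 5432
      dbname: analytics
      schema: analytics  # production models land here
      threads: 8
```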

## Related docs

- [Customize dbt models database, schema, and alias](/guides/customize-schema-alias?step=1) &mdash; a step-by-step guide covering all three configurations
- [Custom databases](/docs/build/custom-databases) &mdash; how to customize the databases dbt builds models into
- [Custom aliases](/docs/build/custom-aliases) &mdash; how to customize model alias names
4 changes: 3 additions & 1 deletion website/docs/docs/build/data-tests.md
@@ -73,7 +73,9 @@ having total_amount < 0

The name of this test is the name of the file: `assert_total_payment_amount_is_positive`.

Note:
- Omit semicolons (`;`) at the end of the SQL statement in your singular test files, as they can cause your test to fail.
- Singular tests placed in the `tests` directory are automatically executed when you run `dbt test`. Don't reference singular tests in `model_name.yml`; they aren't treated as generic tests or macros, and referencing them results in an error.

To add a description to a singular test in your project, add a `.yml` file to your `tests` directory, for example, `tests/schema.yml` with the following content:

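The collapsed example presumably resembles the following sketch (the description text is illustrative):

```yaml
# tests/schema.yml -- illustrative sketch
data_tests:
  - name: assert_total_payment_amount_is_positive
    description: "Refunds have a negative amount, so the total payment amount should always be >= 0."
```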
16 changes: 11 additions & 5 deletions website/docs/docs/build/unit-tests.md
@@ -10,9 +10,6 @@ keywords:

<VersionCallout version="1.8" />

Historically, dbt's test coverage was confined to [“data” tests](/docs/build/data-tests), assessing the quality of input data or resulting datasets' structure. However, these tests could only be executed _after_ building a model.

Starting in dbt Core v1.8, we have introduced an additional type of test to dbt - unit tests. In software programming, unit tests validate small portions of your functional code, and they work much the same way here. Unit tests allow you to validate your SQL modeling logic on a small set of static inputs _before_ you materialize your full model in production. Unit tests enable test-driven development, benefiting developer efficiency and code reliability.
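For a flavor of the syntax before the walkthrough below, here's a minimal sketch (model, column, and fixture values are hypothetical):

```yaml
# models/unit_tests.yml -- hypothetical sketch
unit_tests:
  - name: test_is_valid_email_address
    model: dim_customers
    given:
      - input: ref('stg_customers')
        rows:
          - {customer_id: 1, email: jane@example.com}
          - {customer_id: 2, email: not-an-email}
    expect:
      rows:
        - {customer_id: 1, is_valid_email_address: true}
        - {customer_id: 2, is_valid_email_address: false}
```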
@@ -219,10 +216,19 @@ dbt test --select test_is_valid_email_address

Your model is now ready for production! Adding this unit test helped catch an issue with the SQL logic _before_ you materialized `dim_customers` in your warehouse and will better ensure the reliability of this model in the future.


## Unit testing incremental models

When configuring your unit test, you can override the output of macros, vars, or environment variables. This enables you to unit test your incremental models in "full refresh" and "incremental" modes.

:::note
Incremental models must already exist in the database before you run unit tests or a `dbt build`. Use the [`--empty` flag](/reference/commands/build#the---empty-flag) to build an empty version of the models and save warehouse spend. Optionally, select only your incremental models using the [`--select` flag](/reference/node-selection/syntax#shorthand).

```shell
dbt run --select "config.materialized:incremental" --empty
```

After running the command, perform a regular `dbt build` for that model, and then run your unit test.
:::

When testing an incremental model, the expected output is the __result of the materialization__ (what will be merged/inserted), not the resulting model itself (what the final table will look like after the merge/insert).
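For instance, a sketch of such a test (model and fixture names are hypothetical) that forces incremental mode and supplies the pre-existing table via `this`:

```yaml
# models/unit_tests.yml -- hypothetical sketch
unit_tests:
  - name: test_incremental_append_only
    model: fct_events
    overrides:
      macros:
        is_incremental: true      # force "incremental" mode for this test
    given:
      - input: ref('stg_events')
        rows:
          - {event_id: 2, occurred_at: '2025-01-02'}
      - input: this               # the state of fct_events before the run
        rows:
          - {event_id: 1, occurred_at: '2025-01-01'}
    expect:
      rows:
        # only the newly materialized row, not the final table
        - {event_id: 2, occurred_at: '2025-01-02'}
```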

@@ -36,7 +36,7 @@ import Tools from '/snippets/_sl-excel-gsheets.md';

<Tools
type="Microsoft Excel"
bullet_1="There's a timeout of 1 minute for queries."
bullet_1="Results that take longer than one minute to load into Excel will fail. This limit only applies to the loading process, not the time it takes for the data platform to run the query."
bullet_2="If you're using this extension, make sure you're signed into Microsoft with the same Excel profile you used to set up the Add-In. Log in with one profile at a time as using multiple profiles at once might cause issues."
queryBuilder="/img/docs/dbt-cloud/semantic-layer/query-builder.png"
/>
42 changes: 42 additions & 0 deletions website/docs/docs/dbt-versions/compatible-track-changelog.md
@@ -18,6 +18,48 @@ Starting in January 2025, each monthly "Extended" release will match the previou

For more information, see [release tracks](/docs/dbt-versions/cloud-release-tracks).

## January 2025

Release date: January 14, 2025

This release includes functionality from the following versions of dbt Core OSS:
```
dbt-core==1.9.1
# shared interfaces
dbt-adapters==1.13.1
dbt-common==1.14.0
dbt-semantic-interfaces==0.7.4
# adapters
dbt-athena==1.9.0
dbt-bigquery==1.9.1
dbt-databricks==1.9.1
dbt-fabric==1.9.0
dbt-postgres==1.9.0
dbt-redshift==1.9.0
dbt-snowflake==1.9.0
dbt-spark==1.9.0
dbt-synapse==1.8.2
dbt-teradata==1.9.0
dbt-trino==1.9.0
```

Changelogs:
- [dbt-core 1.9.1](https://github.com/dbt-labs/dbt-core/blob/1.9.latest/CHANGELOG.md#dbt-core-191---december-16-2024)
- [dbt-adapters 1.13.1](https://github.com/dbt-labs/dbt-adapters/blob/main/CHANGELOG.md#dbt-adapters-1131---january-10-2025)
- [dbt-common 1.14.0](https://github.com/dbt-labs/dbt-common/blob/main/CHANGELOG.md)
- [dbt-bigquery 1.9.1](https://github.com/dbt-labs/dbt-bigquery/blob/1.9.latest/CHANGELOG.md#dbt-bigquery-191---january-10-2025)
- [dbt-databricks 1.9.1](https://github.com/databricks/dbt-databricks/blob/main/CHANGELOG.md#dbt-databricks-191-december-16-2024)
- [dbt-fabric 1.9.0](https://github.com/microsoft/dbt-fabric/releases/tag/v1.9.0)
- [dbt-postgres 1.9.0](https://github.com/dbt-labs/dbt-postgres/blob/main/CHANGELOG.md#dbt-postgres-190---december-09-2024)
- [dbt-redshift 1.9.0](https://github.com/dbt-labs/dbt-redshift/blob/1.9.latest/CHANGELOG.md#dbt-redshift-190---december-09-2024)
- [dbt-snowflake 1.9.0](https://github.com/dbt-labs/dbt-snowflake/blob/1.9.latest/CHANGELOG.md#dbt-snowflake-190---december-09-2024)
- [dbt-spark 1.9.0](https://github.com/dbt-labs/dbt-spark/blob/1.9.latest/CHANGELOG.md#dbt-spark-190---december-10-2024)
- [dbt-synapse 1.8.2](https://github.com/microsoft/dbt-synapse/blob/v1.8.latest/CHANGELOG.md)
- [dbt-teradata 1.9.0](https://github.com/Teradata/dbt-teradata/releases/tag/v1.9.0)
- [dbt-trino 1.9.0](https://github.com/starburstdata/dbt-trino/blob/master/CHANGELOG.md#dbt-trino-190---december-20-2024)

## December 2024

Release date: December 12, 2024
6 changes: 3 additions & 3 deletions website/docs/docs/deploy/deploy-jobs.md
@@ -32,9 +32,9 @@ You can create a deploy job and configure it to run on [scheduled days and times
- (Optional) **Description** &mdash; Provide a description of what the job does (for example, what the job consumes and what the job produces).
- **Environment** &mdash; By default, it’s set to the deployment environment you created the deploy job from.
3. Options in the **Execution settings** section:
- [**Commands**](/docs/deploy/job-commands#built-in-commands) &mdash; By default, it includes the `dbt build` command. Click **Add command** to add more [commands](/docs/deploy/job-commands) that you want invoked when the job runs. During a job run, [built-in commands](/docs/deploy/job-commands#built-in-commands) are "chained" together, and if one run step fails, the entire job fails with an "Error" status.
- [**Generate docs on run**](/docs/deploy/job-commands#checkbox-commands) &mdash; Enable this option to [generate project docs](/docs/collaborate/build-and-view-your-docs) when this deploy job runs. If this step fails, the job can still succeed as long as subsequent steps pass.
- [**Run source freshness**](/docs/deploy/job-commands#checkbox-commands) &mdash; Enable this option to invoke the `dbt source freshness` command before running the deploy job. If this step fails, the job can still succeed as long as subsequent steps pass. Refer to [Source freshness](/docs/deploy/source-freshness) for more details.
4. Options in the **Triggers** section:
- **Run on schedule** &mdash; Run the deploy job on a set schedule.
- **Timing** &mdash; Specify whether to [schedule](#schedule-days) the deploy job using **Intervals** (run the job every specified number of hours), **Specific hours** (run the job at set times of day), or a **Cron schedule** (run the job on a schedule defined with [cron syntax](#cron-schedule)).
1 change: 1 addition & 0 deletions website/docs/guides/customize-schema-alias.md
@@ -9,6 +9,7 @@ icon: 'guides'
hide_table_of_contents: true
level: 'Advanced'
recently_updated: true
keywords: ["generate", "schema name", "guide", "dbt", "schema customization", "custom schema"]
---

<div style={{maxWidth: '900px'}}>
27 changes: 17 additions & 10 deletions website/docs/reference/resource-configs/event-time.md
@@ -21,7 +21,6 @@ models:
```
</File>
<File name='models/properties.yml'>
```yml
@@ -139,20 +138,28 @@ sources:
## Definition

You can configure `event_time` for a [model](/docs/build/models), [seed](/docs/build/seeds), or [source](/docs/build/sources) in your `dbt_project.yml` file, property YAML file, or config block.

`event_time` is required for the [incremental microbatch](/docs/build/incremental-microbatch) strategy and highly recommended for [Advanced CI's compare changes](/docs/deploy/advanced-ci#optimizing-comparisons) in CI/CD workflows, where it ensures the same time-slice of data is correctly compared between your CI and production environments.
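As a minimal sketch (project and column names are hypothetical):

```yaml
# dbt_project.yml -- hypothetical names
models:
  my_project:
    user_sessions:
      +event_time: session_began_at   # when the event occurred, not when it was loaded
```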

### Best practices

Set the `event_time` to the name of the field that represents the actual timestamp of the event (like `account_created_at`). The timestamp should capture "at what time did the row occur" rather than an ingestion date. Marking a column as the `event_time` when it isn't diverges from the semantic meaning of the column, which may confuse users when other tools make use of the metadata.

However, if an ingestion date (like `loaded_at`, `ingested_at`, or `last_updated_at`) is the only timestamp you have, you can set `event_time` to one of these fields. Here are some considerations to keep in mind if you do this:

- Using `last_updated_at` or `loaded_at` &mdash; May result in duplicate entries in the resulting table in the data warehouse over multiple runs. Setting an appropriate [lookback](/reference/resource-configs/lookback) value can reduce duplicates but it can't fully eliminate them since some updates outside the lookback window won't be processed.
- Using `ingested_at` &mdash; Since this column is created by your ingestion/EL tool instead of coming from the original source, it will change if/when you need to resync your connector for some reason. This means that data will be reprocessed and loaded into your warehouse for a second time against a second date. As long as this never happens (or you run a full refresh when it does), microbatches will be processed correctly when using `ingested_at`.

Here are some examples of recommended and not recommended `event_time` columns:

| <div style={{width:'200px'}}>Status</div> | Column name | Description |
|--------------------|---------------------|----------------------|
| ✅ Recommended | `account_created_at` | Represents the specific time when an account was created, making it a fixed event in time. |
| ✅ Recommended | `session_began_at` | Captures the exact timestamp when a user session started, which won’t change and directly ties to the event. |
| ❌ Not recommended | `_fivetran_synced` | This represents the time the event was ingested, not when it happened. |
| ❌ Not recommended | `last_updated_at` | Changes over time and isn't tied to the event itself. If used, note the considerations mentioned earlier in [best practices](#best-practices). |

## Examples

9 changes: 9 additions & 0 deletions website/docs/reference/resource-configs/snowflake-configs.md
@@ -337,6 +337,15 @@ For dbt limitations, these dbt features are not supported:
- [Model contracts](/docs/collaborate/govern/model-contracts)
- [Copy grants configuration](/reference/resource-configs/snowflake-configs#copying-grants)

### Troubleshooting dynamic tables

If your dynamic table model fails to rerun after its initial execution with the following error message:

```text
SnowflakeDynamicTableConfig.__init__() missing 6 required positional arguments: 'name', 'schema_name', 'database_name', 'query', 'target_lag', and 'snowflake_warehouse'
```

Ensure that the `QUOTED_IDENTIFIERS_IGNORE_CASE` parameter on your account is set to `FALSE` (for example, an account administrator can run `ALTER ACCOUNT SET QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE;`).

## Temporary tables

Incremental table merges for Snowflake prefer to utilize a `view` rather than a `temporary table`. The reasoning is to avoid the database write step that a temporary table would initiate and save compile time.
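If the view-based approach causes problems on your warehouse, here's a sketch of opting back into temporary tables, assuming the `tmp_relation_type` config supported by dbt-snowflake (the project name is hypothetical):

```yaml
# dbt_project.yml -- illustrative
models:
  my_project:
    +materialized: incremental
    +tmp_relation_type: table   # default is "view" for incremental merges
```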
78 changes: 69 additions & 9 deletions website/docs/reference/resource-configs/tags.md
@@ -93,6 +93,21 @@ resource_type:
```
</File>
To apply tags to a model in a YAML file in your `models/` directory, add the `config` property as in the following example:

<File name='models/model.yml'>

```yaml
models:
  - name: my_model
    description: A model description
    config:
      tags: ['example_tag']
```

</File>

</TabItem>

<TabItem value="config">
@@ -126,10 +141,24 @@ You can use the [`+` operator](/reference/node-selection/graph-operators#the-plu
- `dbt run --select +model_name+` &mdash; Run a model, its upstream dependencies, and its downstream dependencies.
- `dbt run --select tag:my_tag+ --exclude tag:exclude_tag` &mdash; Run models tagged with `my_tag` and their downstream dependencies, and exclude models tagged with `exclude_tag`, regardless of their dependencies.


:::tip Usage notes about tags

When using tags, consider the following:

- Tags are additive across project hierarchy.
- Some resource types (like sources, exposures) require tags at the top level.

Refer to [usage notes](#usage-notes) for more information.
:::
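For instance, a minimal sketch of tags accumulating down the project hierarchy (project and folder names are hypothetical):

```yaml
# dbt_project.yml -- hypothetical layout
models:
  my_project:
    +tags: "team_data"    # every model in the project gets this tag
    marts:
      +tags:
        - "marts"         # marts models carry both "team_data" and "marts"
```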

## Examples

The following examples show how to apply tags to resources in your project. You can configure tags in the `dbt_project.yml`, `schema.yml`, or SQL files.

### Use tags to run parts of your project

Apply tags in your `dbt_project.yml` as a single value or a string. In the following example, models in the `jaffle_shop` project are tagged with `contains_pii`.

<File name='dbt_project.yml'>

@@ -153,16 +182,52 @@
- "published"
```
</File>


### Apply tags to models

This section demonstrates applying tags to models in the `dbt_project.yml`, `schema.yml`, and SQL files.

To apply tags to a model in your `dbt_project.yml` file, you would add the following:

<File name='dbt_project.yml'>

```yaml
models:
  jaffle_shop:
    +tags: finance # Models in the jaffle_shop project are tagged with 'finance'.
```

</File>

To apply tags to a model in your `models/` directory YAML file, you would add the following using the `config` property:

<File name='models/stg_customers.yml'>

```yaml
models:
  - name: stg_customers
    description: Customer data with basic cleaning and transformation applied, one row per customer.
    config:
      tags: ['santi'] # The stg_customers model is tagged with 'santi'.
    columns:
      - name: customer_id
        description: The unique key for each customer.
        data_tests:
          - not_null
          - unique
```

</File>

To apply tags to a model in your SQL file, you would add the following:

<File name='models/staging/stg_payments.sql'>

```sql
-- The stg_payments model is tagged with 'finance'.
{{ config(
    tags=["finance"]
) }}
select ...
@@ -211,14 +276,10 @@ seeds:

<VersionBlock lastVersion="1.8">

<VersionCallout version="1.9" />

</VersionBlock>


The following example shows how to apply a tag to a saved query in the `dbt_project.yml` file. The saved query is then tagged with `order_metrics`.

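The collapsed configuration presumably resembles this sketch (project and saved query names are hypothetical):

```yaml
# dbt_project.yml -- hypothetical sketch
saved-queries:
  my_project:
    order_metrics_query:
      +tags: order_metrics
```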
@@ -263,7 +324,6 @@ Run resources with multiple tags using the following commands:
# Run all resources tagged "order_metrics" and "hourly"
dbt build --select tag:order_metrics tag:hourly
```

## Usage notes

