diff --git a/website/blog/2022-04-19-dbt-cloud-postman-collection.md b/website/blog/2022-04-19-dbt-cloud-postman-collection.md
index 7ea81e89181..16ecb04670d 100644
--- a/website/blog/2022-04-19-dbt-cloud-postman-collection.md
+++ b/website/blog/2022-04-19-dbt-cloud-postman-collection.md
@@ -19,7 +19,7 @@ is_featured: true
The dbt Cloud API has well-documented endpoints for creating, triggering and managing dbt Cloud jobs. But there are other endpoints that aren’t well documented yet, and they’re extremely useful for end-users. These endpoints exposed by the API enable organizations not only to orchestrate jobs, but to manage their dbt Cloud accounts programmatically. This creates some really interesting capabilities for organizations to scale their dbt Cloud implementations.
-The main goal of this article is to spread awareness of these endpoints as the docs are being built & show you how to use them.
+The main goal of this article is to spread awareness of these endpoints as the docs are being built & show you how to use them.
@@ -45,7 +45,7 @@ Beyond the day-to-day process of managing their dbt Cloud accounts, many organiz
*Below this you’ll find a series of example requests - use these to guide you or [check out the Postman Collection](https://dbtlabs.postman.co/workspace/Team-Workspace~520c7ac4-3895-4779-8bc3-9a11b5287c1c/request/12491709-23cd2368-aa58-4c9a-8f2d-e8d56abb6b1d) to try it out yourself.*
-## Appendix
+## Appendix
### Examples of how to use the Postman Collection
@@ -55,7 +55,7 @@ Let’s run through some examples on how to make good use of this Postman Collec
One common question we hear from customers is “How can we migrate resources from one dbt Cloud project to another?” Often, they’ll create a development project, in which users have access to the UI and can manually make changes, and then migrate selected resources from the development project to a production project once things are ready.
-There are several reasons one might want to do this, including:
+There are several reasons one might want to do this, including:
- Probably the most common is separating dev/test/prod environments across dbt Cloud projects to enable teams to build manually in a development project, and then automatically migrate those environments & jobs to a production project.
- Building “starter projects” they can deploy as templates for new teams onboarding to dbt from a learning standpoint.
@@ -90,10 +90,10 @@ https://cloud.getdbt.com/api/v3/accounts/28885/projects/86704/environments/75286
#### Push the environment to the production project
-We take the response from the GET request above, and then to the following:
+We take the response from the GET request above, and then do the following:
1. Adjust some of the variables for the new environment:
- - Change the the value of the “project_id” field from 86704 to 86711
+ - Change the value of the “project_id” field from 86704 to 86711
- Change the value of the “name” field from “dev-staging” to “production–api-generated”
- Set the “custom_branch” field to “main”
@@ -116,7 +116,7 @@ We take the response from the GET request above, and then to the following:
}
```
-3. Note the environment ID returned in the response, as we’ll use to create a dbt Cloud job in the next step
+3. Note the environment ID returned in the response, as we’ll use it to create a dbt Cloud job in the next step
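
If you prefer to script this flow rather than click through Postman, here is a minimal Python sketch of the three steps above. It assumes a dbt Cloud service token with access to both projects, and that the create endpoint accepts the same payload shape the GET returns; the exact fields in your response may differ, so treat this as a starting point rather than a definitive implementation.

```python
import requests

API_BASE = "https://cloud.getdbt.com/api/v3"
ACCOUNT_ID = 28885
DEV_PROJECT_ID = 86704   # source (development) project
PROD_PROJECT_ID = 86711  # target (production) project
ENV_ID = 75286           # environment to migrate

# Assumption: an API token with read/write access on both projects
headers = {"Authorization": "Token <YOUR_API_TOKEN>", "Content-Type": "application/json"}

# Pull the environment definition from the dev project
resp = requests.get(
    f"{API_BASE}/accounts/{ACCOUNT_ID}/projects/{DEV_PROJECT_ID}/environments/{ENV_ID}/",
    headers=headers,
)
env = resp.json()["data"]

# Adjust the variables for the new environment
env["project_id"] = PROD_PROJECT_ID
env["name"] = "production–api-generated"
env["custom_branch"] = "main"
env.pop("id", None)  # assumption: dropping the old ID lets dbt Cloud assign a new one

# Push the environment to the production project
create = requests.post(
    f"{API_BASE}/accounts/{ACCOUNT_ID}/projects/{PROD_PROJECT_ID}/environments/",
    headers=headers,
    json=env,
)
new_env_id = create.json()["data"]["id"]
print(f"Created environment {new_env_id}")  # used to create the job in the next step
```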
#### Pull the job definition from the dev project
diff --git a/website/blog/2022-05-17-stakeholder-friendly-model-names.md b/website/blog/2022-05-17-stakeholder-friendly-model-names.md
index 39107035465..7170770106a 100644
--- a/website/blog/2022-05-17-stakeholder-friendly-model-names.md
+++ b/website/blog/2022-05-17-stakeholder-friendly-model-names.md
@@ -29,7 +29,7 @@ In this article, we’ll take a deeper look at why model naming conventions are
>“[Data folks], what we [create in the database]… echoes in eternity.” -Max(imus, Gladiator)
-Analytics Engineers are often centrally located in the company, sandwiched between data analysts and data engineers. This means everything AEs create might be read and need to be understood by both an analytics or customer-facing team and by teams who spend most of their time in code and the database. Depending on the audience, the scope of access differs, which means the user experience and context changes. Let’s elaborate on what that experience might look like by breaking end-users into two buckets:
+Analytics Engineers are often centrally located in the company, sandwiched between data analysts and data engineers. This means everything AEs create might be read, and will need to be understood, by both an analytics or customer-facing team and by teams who spend most of their time in code and the database. Depending on the audience, the scope of access differs, which means the user experience and context changes. Let’s elaborate on what that experience might look like by breaking end-users into two buckets:
- Analysts / BI users
- Analytics engineers / Data engineers
@@ -49,13 +49,13 @@ Here we have drag and drop functionality and a skin over top of the underlying `
**How model names can make this painful:**
The end users might not even know what tables the data refers to, as potentially everything is joined by the system and they don’t need to write their own queries. If model names are chosen poorly, there is a good chance that the BI layer on top of the database tables has been renamed to something more useful for the analysts. This adds an extra step of mental complexity in tracing the lineage from data model to BI.
-#### Read only access to the dbt Cloud IDE docs
+#### Read only access to the dbt Cloud IDE docs
If Analysts want more context via documentation, they may traverse back to the dbt layer and check out the data models in either the context of the Project or Database. In the Project view, they will see the data models in the folder hierarchy present in your project’s repository. In the Database view you will see the output of the data models as present in your database, ie. `database / schema / object`.
![A screenshot depicting the dbt Cloud IDE menu's Database view which shows you the output of your data models. Next to this view, is the Project view.](/img/blog/2022-05-17-stakeholder-friendly-model-names/project-view.png)
**How model names can make this painful:**
-For the Project view, generally abstracted department or organizational structures as folder names presupposes the reader/engineer knows what is contained within the folder beforehand or what that department actually does, or promotes haphazard clicking to open folders to see what is within. Organizing the final outputs by business unit or analytics function is great for end users but doesn't accurately represent all the sources and references that had to come together to build this output, as they often live in another folder.
+For the Project view, using abstracted department or organizational structures as folder names presupposes that the reader/engineer already knows what each folder contains or what that department actually does, or it promotes haphazard clicking through folders to see what is within. Organizing the final outputs by business unit or analytics function is great for end users, but it doesn't accurately represent all the sources and references that had to come together to build this output, as they often live in another folder.
For the Database view, pray your team has been declaring a logical schema bucketing, or a logical model naming convention, otherwise you will have a long, alphabetized list of database objects to scroll through, where staging, intermediate, and final output models are all intermixed. Clicking into a data model and viewing the documentation is helpful, but you would need to check out the DAG to see where the model lives in the overall flow.
@@ -63,7 +63,7 @@ For the Database view, pray your team has been declaring a logical schema bucket
If they have access to Worksheets, SQL runner, or another way to write ad hoc sql queries, then they will have access to the data models as present in your database, ie. `database / schema / object`, but with less documentation attached, and more proclivity towards querying tables to check out their contents, which costs time and money.
-![A screenshot of the the SQL Runner menu within Looker showcasing the dropdown list of all data models present in the database.](/img/blog/2022-05-17-stakeholder-friendly-model-names/data-warehouse-dropdown.png)
+![A screenshot of the SQL Runner menu within Looker showcasing the dropdown list of all data models present in the database.](/img/blog/2022-05-17-stakeholder-friendly-model-names/data-warehouse-dropdown.png)
**How model names can make this painful:**
Without proper naming conventions, you will encounter `analytics.order`, `analytics.orders`, `analytics.orders_new` and not know which one is which, so you will open up a scratch statement tab and attempt to figure out which is correct:
@@ -73,9 +73,9 @@ Without proper naming conventions, you will encounter `analytics.order`, `analyt
-- select * from analytics.orders limit 10
select * from analytics.orders_new limit 10
```
-Hopefully you get it right via sampling queries, or eventually find out there is a true source of truth defined in a totally separate area: `core.dim_orders`.
+Hopefully you get it right via sampling queries, or eventually find out there is a true source of truth defined in a totally separate area: `core.dim_orders`.
-The problem here is the only information you can use to determine what data is within an object or the purpose of the object is within the schema and model name.
+The problem here is the only information you can use to determine what data is within an object or the purpose of the object is within the schema and model name.
### The engineer’s user experience
@@ -98,7 +98,7 @@ There is not much worse than spending all week developing on a task, submitting
This is largely the same as the Analyst experience above, except they created the data models or are aware of their etymologies. They are likely more comfortable writing ad hoc queries, but also have the ability to make changes, which adds a layer of thought processing when working.
**How model names can make this painful:**
-It takes time to become a subject matter expert in the database. You will need to know which schema a subject lives in, what tables are the source of truth and/or output models, versus experiments, outdated objects, or building blocks used along the way. Working within this context, engineers know the history and company lore behind why a table was named that way or how its purpose may differ slightly from its name, but they also have the ability to make changes.
+It takes time to become a subject matter expert in the database. You will need to know which schema a subject lives in, and which tables are sources of truth or output models versus experiments, outdated objects, or building blocks used along the way. Working within this context, engineers know the history and company lore behind why a table was named that way or how its purpose may differ slightly from its name, but they also have the ability to make changes.
Change management is hard; how many places would you need to update, rename, re-document, and retest to fix a poor naming choice from long ago? It is a daunting position, which can create internal strife when constrained for time over whether we should continually revamp and refactor for maintainability or focus on building new models in the same pattern as before.
diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-okta.md b/website/docs/docs/cloud/manage-access/set-up-sso-okta.md
index fda32f118ef..83c9f6492c6 100644
--- a/website/docs/docs/cloud/manage-access/set-up-sso-okta.md
+++ b/website/docs/docs/cloud/manage-access/set-up-sso-okta.md
@@ -152,7 +152,7 @@ the integration between Okta and dbt Cloud.
## Configuration in dbt Cloud
-To complete setup, follow the steps below in dbt Cloud.
+To complete setup, follow the steps below in dbt Cloud.
### Supplying credentials
@@ -182,7 +182,7 @@ configured in the steps above.
21. Click **Save** to complete setup for the Okta integration. From
here, you can navigate to the URL generated for your account's _slug_ to
- test logging in with Okta. Additionally, users added the the Okta app
+   test logging in with Okta. Additionally, users added to the Okta app
will be able to log in to dbt Cloud from Okta directly.
diff --git a/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md b/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md
index 34c1a91fbee..ca93d81badf 100644
--- a/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md
+++ b/website/docs/docs/cloud/manage-access/set-up-sso-saml-2.0.md
@@ -312,7 +312,7 @@ Follow these steps to set up single sign-on (SSO) with dbt Cloud:
12. Click **Edit** in the Basic SAML Configuration section.
-
+
13. Use the following table to complete the required fields and connect to dbt:
diff --git a/website/docs/docs/collaborate/explore-projects.md b/website/docs/docs/collaborate/explore-projects.md
index 627b255cd78..e21bd507e51 100644
--- a/website/docs/docs/collaborate/explore-projects.md
+++ b/website/docs/docs/collaborate/explore-projects.md
@@ -7,7 +7,7 @@ pagination_next: "docs/collaborate/data-health-signals"
pagination_prev: null
---
-With dbt Explorer, you can view your project's [resources](/docs/build/projects) (such as models, tests, and metrics), their lineage, and [model consumption](/docs/collaborate/auto-exposures) to gain a better understanding of its latest production state. Navigate and manage your projects within dbt Cloud to help you and other data developers, analysts, and consumers discover and leverage your dbt resources.
+With dbt Explorer, you can view your project's [resources](/docs/build/projects) (such as models, tests, and metrics), their lineage, and [model consumption](/docs/collaborate/auto-exposures) to gain a better understanding of its latest production state. Navigate and manage your projects within dbt Cloud to help you and other data developers, analysts, and consumers discover and leverage your dbt resources.
import ExplorerCourse from '/snippets/_explorer-course-link.md';
@@ -41,23 +41,23 @@ dbt Explorer uses the metadata provided by the [Discovery API](/docs/dbt-cloud-a
- dbt Explorer automatically retrieves the metadata updates after each job run in the production or staging deployment environment so it always has the latest results for your project. This includes deploy and merge jobs.
- Note that CI jobs do not update dbt Explorer. This is because they don't reflect the production state and don't provide the necessary metadata updates.
-- To view a resource and its metadata, you must define the resource in your project and run a job in the production or staging environment.
-- The resulting metadata depends on the [commands](/docs/deploy/job-commands) executed by the jobs.
+- To view a resource and its metadata, you must define the resource in your project and run a job in the production or staging environment.
+- The resulting metadata depends on the [commands](/docs/deploy/job-commands) executed by the jobs.
| To view in Explorer | You must successfully run |
|---------------------|---------------------------|
| Model lineage, details, or results | [dbt run](/reference/commands/run) or [dbt build](/reference/commands/build) on a given model within a job in the environment |
| Columns and statistics for models, sources, and snapshots| [dbt docs generate](/reference/commands/cmd-docs) within [a job](/docs/collaborate/build-and-view-your-docs) in the environment |
-| Test results | [dbt test](/reference/commands/test) or [dbt build](/reference/commands/build) within a job in the environment |
+| Test results | [dbt test](/reference/commands/test) or [dbt build](/reference/commands/build) within a job in the environment |
| Source freshness results | [dbt source freshness](/reference/commands/source#dbt-source-freshness) within a job in the environment |
| Snapshot details | [dbt snapshot](/reference/commands/snapshot) or [dbt build](/reference/commands/build) within a job in the environment |
| Seed details | [dbt seed](/reference/commands/seed) or [dbt build](/reference/commands/build) within a job in the environment |
-Richer and more timely metadata will become available as dbt Cloud evolves.
+Richer and more timely metadata will become available as dbt Cloud evolves.
## Explore your project's lineage graph {#project-lineage}
-dbt Explorer provides a visualization of your project’s DAG that you can interact with. To access the project's full lineage graph, select **Overview** in the left sidebar and click the **Explore Lineage** button on the main (center) section of the page.
+dbt Explorer provides a visualization of your project’s DAG that you can interact with. To access the project's full lineage graph, select **Overview** in the left sidebar and click the **Explore Lineage** button on the main (center) section of the page.
If you don't see the project lineage graph immediately, click **Render Lineage**. It can take some time for the graph to render depending on the size of your project and your computer’s available memory. The graph of very large projects might not render, so you can select a subset of nodes by using selectors instead.
@@ -78,7 +78,7 @@ To explore the lineage graphs of tests and macros, view [their resource details
- Refocus on the node and its upstream nodes only
- View the node's [resource details](#view-resource-details) page
- Select a resource to highlight its relationship with other resources in your project. A panel opens on the graph’s right-hand side that displays a high-level summary of the resource’s details. The side panel includes a **General** tab for information like description, materialized type, and other details. In the side panel's upper right corner:
- - Click the View Resource icon to [view the resource details](#view-resource-details).
+ - Click the View Resource icon to [view the resource details](#view-resource-details).
- Click the [Open in IDE](#open-in-ide) icon to examine the resource using the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud).
- Click the Copy Link to Page icon to copy the page's link to your clipboard.
- Use [selectors](/reference/node-selection/methods) (in the search bar) to select specific resources or a subset of the DAG. This can help narrow the focus on the resources that interest you. All selectors are available for use, except those requiring a state comparison (result, source status, and state). You can also use the `--exclude` and the `--select` flag (which is optional). Examples:
@@ -91,7 +91,7 @@ To explore the lineage graphs of tests and macros, view [their resource details
- `+snowplow_sessions +fct_orders` — Use space-delineated arguments for a union operation. Returns resources that are upstream nodes of either `snowplow_sessions` or `fct_orders`.
- [View resource details](#view-resource-details) by selecting a node (double-clicking) in the graph.
-- Click **Lenses** (lower right corner of the graph) to use Explorer's [lenses](#lenses) feature.
+- Click **Lenses** (lower right corner of the graph) to use Explorer's [lenses](#lenses) feature.
@@ -120,7 +120,7 @@ A resource in your project is characterized by resource type, materialization ty
- **Marts** — A model with the prefix `fct_` or `dim_` or a model that lives in the `/marts/` subdirectory.
- **Intermediate** — A model with the prefix `int_`. Or, a model that lives in the `/int/` or `/intermediate/` subdirectory.
- **Staging** — A model with the prefix `stg_`. Or, a model that lives in the `/staging/` subdirectory.
-- **Test status**: The status from the latest execution of the tests that ran again this resource. In the case that a model has multiple tests with different results, the lens reflects the 'worst case' status.
+- **Test status**: The status from the latest execution of the tests that ran against this resource. If a model has multiple tests with different results, the lens reflects the 'worst case' status.
- **Consumption query history**: The number of queries against this resource over a given time period.
@@ -159,7 +159,7 @@ The **Filters** side panel becomes available after you perform a keyword search.
- [Model materialization](/docs/build/materializations) (like view, table)
- [Tags](/reference/resource-configs/tags) (supports multi-select)
-Under the the **Models** option, you can filter on model properties (access or materialization type). Also available are **Advanced** options, where you can limit the search results to column name, model code, and more.
+Under the **Models** option, you can filter on model properties (access or materialization type). Also available are **Advanced** options, where you can limit the search results to column name, model code, and more.
@@ -170,20 +170,20 @@ Example of results from searching on the keyword `customers` and applying the fi
## Browse with the sidebar
-From the sidebar, you can browse your project's resources, its file tree, and the database.
+From the sidebar, you can browse your project's resources, its file tree, and the database.
- **Resources** tab — All resources in the project organized by type. Select any resource type in the list and all those resources in the project will display as a table in the main section of the page. For a description on the different resource types (like models, metrics, and so on), refer to [About dbt projects](/docs/build/projects).
- [Data health signals](/docs/collaborate/data-health-signals) are visible to the right of the resource name under the **Health** column.
- **File Tree** tab — All resources in the project organized by the file in which they are defined. This mirrors the file tree in your dbt project repository.
-- **Database** tab — All resources in the project organized by the database and schema in which they are built. This mirrors your data platform's structure that represents the [applied state](/docs/dbt-cloud-apis/project-state) of your project.
+- **Database** tab — All resources in the project organized by the database and schema in which they are built. This mirrors your data platform's structure that represents the [applied state](/docs/dbt-cloud-apis/project-state) of your project.
## Open in IDE
-If you have been assigned a [developer license](/docs/cloud/manage-access/about-user-access#license-based-access-control), you can open the resource in the [IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) directly from Explorer. For example, the IDE opens all the corresponding files for the model. This includes the model's SQL or Python definition and any YAML files that include an entry for that model. The feature is available from the [full lineage graph](#example-of-full-lineage-graph) and the [resource's details view](#example-of-model-details).
+If you have been assigned a [developer license](/docs/cloud/manage-access/about-user-access#license-based-access-control), you can open the resource in the [IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud) directly from Explorer. For a model, for example, the IDE opens all the corresponding files: the model's SQL or Python definition and any YAML files that include an entry for that model. The feature is available from the [full lineage graph](#example-of-full-lineage-graph) and the [resource's details view](#example-of-model-details).
-Here's an example of the Open in IDE icon in the upper right corner of the resource details page. The icon is inactive (grayed out) if you haven't been assigned a developer license.
+Here's an example of the Open in IDE icon in the upper right corner of the resource details page. The icon is inactive (grayed out) if you haven't been assigned a developer license.
@@ -192,9 +192,9 @@ Here's an example of the Open in IDE icon in the upper right corner of the resou
If models in the project are versioned, you can see which [version of the model](/docs/collaborate/govern/model-versions) is being applied — `prerelease`, `latest`, and `old` — in the title of the model’s details page and in the model list from the sidebar.
## View resource details {#view-resource-details}
-You can view the definition and latest run results of any resource in your project. To find a resource and view its details, you can interact with the lineage graph, use search, or browse the catalog.
+You can view the definition and latest run results of any resource in your project. To find a resource and view its details, you can interact with the lineage graph, use search, or browse the catalog.
-The details (metadata) available to you depends on the resource’s type, its definition, and the [commands](/docs/deploy/job-commands) that run within jobs in the production environment.
+The details (metadata) available to you depend on the resource’s type, its definition, and the [commands](/docs/deploy/job-commands) that run within jobs in the production environment.
In the upper right corner of the resource details page, you can:
- Click the [Open in IDE](#open-in-ide) icon to examine the resource using the [dbt Cloud IDE](/docs/cloud/dbt-cloud-ide/develop-in-the-cloud).
@@ -203,22 +203,22 @@ In the upper right corner of the resource details page, you can:
- **Data health signals** — [Data health signals](/docs/collaborate/data-health-signals) offer a quick, at-a-glance view of data health. These icons indicate whether a model is Healthy, Caution, Degraded, or Unknown. Hover over an icon to view detailed information about the model's health.
-- **Status bar** (below the page title) — Information on the last time the model ran, whether the run was successful, how the data is materialized, number of rows, and the size of the model.
+- **Status bar** (below the page title) — Information on the last time the model ran, whether the run was successful, how the data is materialized, number of rows, and the size of the model.
- **General** tab includes:
- **Lineage** graph — The model’s lineage graph that you can interact with. The graph includes one upstream node and one downstream node from the model. Click the Expand icon in the graph's upper right corner to view the model in full lineage graph mode.
- **Description** section — A [description of the model](/docs/build/documentation#adding-descriptions-to-your-project).
- **Recent** section — Information on the last time the model ran, how long it ran for, whether the run was successful, the job ID, and the run ID.
- - **Tests** section — [Tests](/docs/build/data-tests) for the model, including a status indicator for the latest test status. A :white_check_mark: denotes a passing test.
+ - **Tests** section — [Tests](/docs/build/data-tests) for the model, including a status indicator for the latest test status. A :white_check_mark: denotes a passing test.
- **Details** section — Key properties like the model’s relation name (for example, how it’s represented and how you can query it in the data platform: `database.schema.identifier`); model governance attributes like access, group, and if contracted; and more.
- **Relationships** section — The nodes the model **Depends On**, is **Referenced by**, and (if applicable) is **Used by** for projects that have declared the models' project as a dependency.
- **Code** tab — The source code and compiled code for the model.
-- **Columns** tab — The available columns in the model. This tab also shows tests results (if any) that you can select to view the test's details page. A :white_check_mark: denotes a passing test. To filter the columns in the resource, you can use the search bar that's located at the top of the columns view.
+- **Columns** tab — The available columns in the model. This tab also shows test results (if any) that you can select to view the test's details page. A :white_check_mark: denotes a passing test. To filter the columns in the resource, you can use the search bar located at the top of the columns view.
-- **Status bar** (below the page title) — Information on the last time the exposure was updated.
+- **Status bar** (below the page title) — Information on the last time the exposure was updated.
- **Data health signals** — [Data health signals](/docs/collaborate/data-health-signals) offer a quick, at-a-glance view of data health. These icons indicate whether a resource is Healthy, Caution, or Degraded. Hover over an icon to view detailed information about the exposure's health.
- **General** tab includes:
- **Data health** — The status on data freshness and data quality.
@@ -226,7 +226,7 @@ In the upper right corner of the resource details page, you can:
- **Lineage** graph — The exposure’s lineage graph. Click the **Expand** icon in the graph's upper right corner to view the exposure in full lineage graph mode. Integrates natively with Tableau and auto-generates downstream lineage.
- **Description** section — A description of the exposure.
- **Details** section — Details like exposure type, maturity, owner information, and more.
- - **Relationships** section — The nodes the exposure **Depends On**.
+ - **Relationships** section — The nodes the exposure **Depends On**.
@@ -252,7 +252,7 @@ Example of the Tests view:
-- **Status bar** (below the page title) — Information on the last time the source was updated and the number of tables the source uses.
+- **Status bar** (below the page title) — Information on the last time the source was updated and the number of tables the source uses.
- **Data health signals** — [Data health signals](/docs/collaborate/data-health-signals) offer a quick, at-a-glance view of data health. These icons indicate whether a resource is Healthy, Caution, or Degraded. Hover over an icon to view detailed information about the source's health.
- **General** tab includes:
- **Lineage** graph — The source’s lineage graph that you can interact with. The graph includes one upstream node and one downstream node from the source. Click the Expand icon in the graph's upper right corner to view the source in full lineage graph mode.
@@ -277,13 +277,13 @@ Example of the details view for the model `customers`:
## Related content
-- [Enterprise permissions](/docs/cloud/manage-access/enterprise-permissions)
+- [Enterprise permissions](/docs/cloud/manage-access/enterprise-permissions)
- [About model governance](/docs/collaborate/govern/about-model-governance)
- Blog on [What is data mesh?](https://www.getdbt.com/blog/what-is-data-mesh-the-definition-and-importance-of-data-mesh)
diff --git a/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md b/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md
index 32a33d95301..721cf5e2d65 100644
--- a/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md
+++ b/website/docs/docs/dbt-versions/release-notes/99-dbt-cloud-changelog-2019-2020.md
@@ -27,7 +27,7 @@ and adds two new permission sets for Enterprise accounts.
## dbt Cloud v1.1.15 (December 10, 2020)
-Lots of great stuff to confer about this go-round: things really coalesced this week! Lots of excitement around adding Spark to the connection family, as well as knocking out some longstanding bugs.
+Lots of great stuff to confer about this go-round: things really coalesced this week! Lots of excitement around adding Spark to the connection family, as well as knocking out some longstanding bugs.
#### Enhancements
@@ -45,7 +45,7 @@ Lots of great stuff to confer about this go-round: things really coalesced this
## dbt Cloud v1.1.14 (November 25, 2020)
-This release adds a few new pieces of connective tissue, notably OAuth for BigQuery and SparkAdapter work. There are also some quality of life improvements and investments for the future, focused on our beloved IDE users, and some improved piping for observability into log management and API usage.
+This release adds a few new pieces of connective tissue, notably OAuth for BigQuery and SparkAdapter work. There are also some quality of life improvements and investments for the future, focused on our beloved IDE users, and some improved piping for observability into log management and API usage.
#### Enhancements
@@ -712,7 +712,7 @@ These fields need to be specified for your instance of dbt Cloud to function pro
- Fix console warning presented when updating React state from unmounted component
- Fix issue where closed tabs would continue to be shown, though the content was removed correctly
- Fix issue that prevented opening an adjacent tab when a tab was closed
-- Fix issue creating BigQuery connections causing the the account connections list to not load correctly.
+- Fix issue creating BigQuery connections causing the account connections list to not load correctly.
- Fix for locked accounts that have downgraded to the developer plan at trial end
- Fix for not properly showing server error messages on the user invite page
diff --git a/website/docs/reference/dbt-jinja-functions/run_started_at.md b/website/docs/reference/dbt-jinja-functions/run_started_at.md
index 9dfc83ec56a..b8a25d8b80b 100644
--- a/website/docs/reference/dbt-jinja-functions/run_started_at.md
+++ b/website/docs/reference/dbt-jinja-functions/run_started_at.md
@@ -7,7 +7,7 @@ description: "Use `run_started_at` to output the timestamp the run started."
`run_started_at` outputs the timestamp that this run started, e.g. `2017-04-21 01:23:45.678`.
-The `run_started_at` variable is a Python `datetime` object. As of 0.9.1, the timezone of this variable
+The `run_started_at` variable is a Python `datetime` object. As of 0.9.1, the timezone of this variable
defaults to UTC.
@@ -15,20 +15,20 @@ The `run_started_at` variable is a Python `datetime` object. As of 0.9.1, the ti
```sql
select
'{{ run_started_at.strftime("%Y-%m-%d") }}' as date_day
-
+
from ...
```
-To modify the timezone of this variable, use the the `pytz` module:
+To modify the timezone of this variable, use the `pytz` module:
```sql
select
'{{ run_started_at.astimezone(modules.pytz.timezone("America/New_York")) }}' as run_started_est
-
+
from ...
```
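
For context, those Jinja expressions are just calling ordinary Python `datetime` methods on the `run_started_at` object. Here is a standalone Python sketch of the same two conversions; the hardcoded timestamp stands in for the value dbt supplies at runtime:

```python
from datetime import datetime
import pytz

# Illustrative stand-in for dbt's run_started_at (timezone defaults to UTC)
run_started_at = datetime(2017, 4, 21, 1, 23, 45, 678000, tzinfo=pytz.UTC)

# Equivalent of run_started_at.strftime("%Y-%m-%d") in the first example
date_day = run_started_at.strftime("%Y-%m-%d")  # '2017-04-21'

# Equivalent of run_started_at.astimezone(modules.pytz.timezone("America/New_York"))
run_started_est = run_started_at.astimezone(pytz.timezone("America/New_York"))
print(date_day, run_started_est)  # 2017-04-21 2017-04-20 21:23:45.678000-04:00
```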
diff --git a/website/docs/reference/resource-configs/bigquery-configs.md b/website/docs/reference/resource-configs/bigquery-configs.md
index c912bca0688..f3bdc08ea34 100644
--- a/website/docs/reference/resource-configs/bigquery-configs.md
+++ b/website/docs/reference/resource-configs/bigquery-configs.md
@@ -14,7 +14,7 @@ To-do:
- `schema` is interchangeable with the BigQuery concept `dataset`
- `database` is interchangeable with the BigQuery concept of `project`
-For our reference documentation, you can declare `project` in place of `database.`
+For our reference documentation, you can declare `project` in place of `database`.
This will allow you to read and write from multiple BigQuery projects. Same for `dataset`.
## Using table partitioning and clustering
@@ -335,7 +335,7 @@ models:
dbt supports the specification of BigQuery labels for the tables and views that it creates. These labels can be specified using the `labels` model config.
The `labels` config can be provided in a model config, or in the `dbt_project.yml` file, as shown below.
-
+
BigQuery key-value pair entries for labels larger than 63 characters are truncated.
**Configuring labels in a model file**
@@ -393,9 +393,9 @@ select * from {{ ref('another_model') }}
-You can create a new label with no value or remove a value from an existing label key.
+You can create a new label with no value or remove a value from an existing label key.
-A label with a key that has an empty value can also be [referred](https://cloud.google.com/bigquery/docs/adding-labels#adding_a_label_without_a_value) to as a tag in BigQuery. However, this should not be confused with a [tag resource](https://cloud.google.com/bigquery/docs/tags), which conditionally applies IAM policies to BigQuery tables and datasets. Find out more in [labels and tags](https://cloud.google.com/resource-manager/docs/tags/tags-overview).
+A label with a key that has an empty value can also be [referred](https://cloud.google.com/bigquery/docs/adding-labels#adding_a_label_without_a_value) to as a tag in BigQuery. However, this should not be confused with a [tag resource](https://cloud.google.com/bigquery/docs/tags), which conditionally applies IAM policies to BigQuery tables and datasets. Find out more in [labels and tags](https://cloud.google.com/resource-manager/docs/tags/tags-overview).
Currently, it's not possible to apply IAM tags in BigQuery, however, you can weigh in by upvoting [GitHub issue 1134](https://github.com/dbt-labs/dbt-bigquery/issues/1134).
@@ -551,7 +551,7 @@ _today_ and _yesterday_ every day that it is run. It is the fastest and cheapest
way to incrementally update a table using dbt. If we wanted this to run more dynamically—
let’s say, always for the past 3 days—we could leverage dbt’s baked-in [datetime macros](https://github.com/dbt-labs/dbt-core/blob/dev/octavius-catto/core/dbt/include/global_project/macros/etc/datetime.sql) and write a few of our own.
-Think of this as "full control" mode. You must ensure that expressions or literal values in the the `partitions` config have proper quoting when templated, and that they match the `partition_by.data_type` (`timestamp`, `datetime`, `date`, or `int64`). Otherwise, the filter in the incremental `merge` statement will raise an error.
+Think of this as "full control" mode. You must ensure that expressions or literal values in the `partitions` config have proper quoting when templated, and that they match the `partition_by.data_type` (`timestamp`, `datetime`, `date`, or `int64`). Otherwise, the filter in the incremental `merge` statement will raise an error.
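
As a plain-Python illustration of that quoting rule (not dbt itself), here is how a past-3-days `partitions` list could be rendered as quoted literals for a `partition_by.data_type` of `date`; in practice you would build this in a Jinja macro, but the quoting requirement is the same:

```python
from datetime import date, timedelta

# Build partition literals for the past 3 days, quoted to match a `date` data type.
today = date.today()
partitions = [f"'{(today - timedelta(days=n)).isoformat()}'" for n in range(3)]
print(partitions)  # e.g. ["'2024-06-10'", "'2024-06-09'", "'2024-06-08'"]
# Unquoted values here would make the filter in the incremental merge statement fail.
```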
#### Dynamic partitions