chore: remove duplicated 'the' (#6740)
## What are you changing in this pull request and why?
<!--
Describe your changes and why you're making them. If related to an open
issue or a pull request on dbt Core or another repository, then link to
them here!

To learn more about the writing conventions used in the dbt Labs docs,
see the [Content style
guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md).
-->

`the the` -> `the`

## Checklist
- [x] I have reviewed the [Content style
guide](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/content-style-guide.md)
so my content adheres to these guidelines.
- [ ] The topic I'm writing about is for specific dbt version(s) and I
have versioned it according to the [version a whole
page](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#adding-a-new-version)
and/or [version a block of
content](https://github.com/dbt-labs/docs.getdbt.com/blob/current/contributing/single-sourcing-content.md#versioning-blocks-of-content)
guidelines.
- [ ] I have added checklist item(s) to this list for anything
that needs to happen before this PR is merged, such as "needs technical
review" or "change base branch."
- [ ] The content in this PR requires a dbt release note, so I added one
to the [release notes
page](https://docs.getdbt.com/docs/dbt-versions/dbt-cloud-release-notes).
<!--
PRE-RELEASE VERSION OF dbt (if so, uncomment):
- [ ] Add a note to the prerelease version [Migration
Guide](https://github.com/dbt-labs/docs.getdbt.com/tree/current/website/docs/docs/dbt-versions/core-upgrade)
-->
<!-- 
ADDING OR REMOVING PAGES (if so, uncomment):
- [ ] Add/remove page in `website/sidebars.js`
- [ ] Provide a unique filename for new pages
- [ ] Add an entry for deleted pages in `website/vercel.json`
- [ ] Run link testing locally with `npm run build` to update the links
that point to deleted pages
-->

Co-authored-by: Leona B. Campbell <[email protected]>
nakamasato and runleonarun authored Jan 8, 2025
1 parent 1c26bb2 commit 13c3968
Showing 8 changed files with 52 additions and 52 deletions.
12 changes: 6 additions & 6 deletions website/blog/2022-04-19-dbt-cloud-postman-collection.md
@@ -19,7 +19,7 @@ is_featured: true
The dbt Cloud API has well-documented endpoints for creating, triggering and managing dbt Cloud jobs. But there are other endpoints that aren’t well documented yet, and they’re extremely useful for end-users. These endpoints exposed by the API enable organizations not only to orchestrate jobs, but to manage their dbt Cloud accounts programmatically. This creates some really interesting capabilities for organizations to scale their dbt Cloud implementations.

The main goal of this article is to spread awareness of these endpoints as the docs are being built & show you how to use them.
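
To make that concrete, here is a minimal Python sketch of the most familiar documented endpoint — triggering a job run. It assumes the v2 trigger-run endpoint (`/api/v2/accounts/{account_id}/jobs/{job_id}/run/`), a valid API token, and the usual `data` response envelope; the job ID below is a placeholder.

```python
import requests

API_TOKEN = "<your dbt Cloud API token>"  # placeholder
ACCOUNT_ID = 28885                        # account ID used in the examples below
JOB_ID = 12345                            # hypothetical job ID

# Trigger a run for an existing dbt Cloud job. The v2 endpoint expects a
# JSON body with a "cause" explaining why the run was kicked off.
resp = requests.post(
    f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/run/",
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"cause": "Triggered via API"},
)
resp.raise_for_status()
print(resp.json()["data"]["id"])  # ID of the queued run
```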

<!--truncate-->

@@ -45,7 +45,7 @@ Beyond the day-to-day process of managing their dbt Cloud accounts, many organiz

*Below this you’ll find a series of example requests - use these to guide you or [check out the Postman Collection](https://dbtlabs.postman.co/workspace/Team-Workspace~520c7ac4-3895-4779-8bc3-9a11b5287c1c/request/12491709-23cd2368-aa58-4c9a-8f2d-e8d56abb6b1d) to try it out yourself.*

## Appendix

### Examples of how to use the Postman Collection

@@ -55,7 +55,7 @@ Let’s run through some examples on how to make good use of this Postman Collec

One common question we hear from customers is “How can we migrate resources from one dbt Cloud project to another?” Often, they’ll create a development project, in which users have access to the UI and can manually make changes, and then migrate selected resources from the development project to a production project once things are ready.

There are several reasons one might want to do this, including:

- Probably the most common is separating dev/test/prod environments across dbt Cloud projects to enable teams to build manually in a development project, and then automatically migrate those environments & jobs to a production project.
- Building “starter projects” they can deploy as templates for new teams onboarding to dbt from a learning standpoint.
@@ -90,10 +90,10 @@ https://cloud.getdbt.com/api/v3/accounts/28885/projects/86704/environments/75286

#### Push the environment to the production project

We take the response from the GET request above, and then do the following:

1. Adjust some of the variables for the new environment:
- - Change the the value of the “project_id” field from 86704 to 86711
+ - Change the value of the “project_id” field from 86704 to 86711
- Change the value of the “name” field from “dev-staging” to “production–api-generated”
- Set the “custom_branch” field to “main”

@@ -116,7 +116,7 @@ We take the response from the GET request above, and then do the following:
}
```

3. Note the environment ID returned in the response, as we’ll use it to create a dbt Cloud job in the next step
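
Stitching the pull-and-push together, here is a minimal Python sketch of this copy-and-modify workflow. It assumes the v3 environments endpoints shown above, and that a POST of the modified payload to the target project's `/environments/` path creates the new environment; the token is a placeholder, and the IDs match the examples in this post.

```python
import requests

API_TOKEN = "<your dbt Cloud API token>"  # placeholder
HEADERS = {"Authorization": f"Token {API_TOKEN}"}
BASE = "https://cloud.getdbt.com/api/v3/accounts/28885"

# 1. Pull the environment definition from the dev project (86704).
env = requests.get(
    f"{BASE}/projects/86704/environments/75286/", headers=HEADERS
).json()["data"]

# 2. Adjust the variables for the new environment.
env["project_id"] = 86711            # point it at the production project
env["name"] = "production–api-generated"
env["custom_branch"] = "main"
env.pop("id", None)                  # assumption: drop the old ID so dbt Cloud assigns a fresh one

# 3. Push the environment to the production project; the ID in the
#    response is what we'll reference when creating the job.
new_env = requests.post(
    f"{BASE}/projects/86711/environments/", headers=HEADERS, json=env
).json()["data"]
print(new_env["id"])
```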

#### Pull the job definition from the dev project

14 changes: 7 additions & 7 deletions website/blog/2022-05-17-stakeholder-friendly-model-names.md
@@ -29,7 +29,7 @@ In this article, we’ll take a deeper look at why model naming conventions are

>“[Data folks], what we [create in the database]… echoes in eternity.” -Max(imus, Gladiator)
Analytics Engineers are often centrally located in the company, sandwiched between data analysts and data engineers. This means everything AEs create might be read and need to be understood by both an analytics or customer-facing team and by teams who spend most of their time in code and the database. Depending on the audience, the scope of access differs, which means the user experience and context changes. Let’s elaborate on what that experience might look like by breaking end-users into two buckets:

- Analysts / BI users
- Analytics engineers / Data engineers
@@ -49,21 +49,21 @@ Here we have drag and drop functionality and a skin over top of the underlying `
**How model names can make this painful:**
The end users might not even know what tables the data refers to, as potentially everything is joined by the system and they don’t need to write their own queries. If model names are chosen poorly, there is a good chance that the BI layer on top of the database tables has been renamed to something more useful for the analysts. This adds an extra step of mental complexity in tracing the <Term id="data-lineage">lineage</Term> from data model to BI.

#### Read-only access to the dbt Cloud IDE docs
If Analysts want more context via documentation, they may traverse back to the dbt layer and check out the data models in either the context of the Project or Database. In the Project view, they will see the data models in the folder hierarchy present in your project’s repository. In the Database view you will see the output of the data models as present in your database, ie. `database / schema / object`.

![A screenshot depicting the dbt Cloud IDE menu's Database view which shows you the output of your data models. Next to this view, is the Project view.](/img/blog/2022-05-17-stakeholder-friendly-model-names/project-view.png)

**How model names can make this painful:**
For the Project view, using abstracted department or organizational structures as folder names presupposes that the reader/engineer already knows what each folder contains or what that department actually does, or it promotes haphazard clicking through folders to see what is within. Organizing the final outputs by business unit or analytics function is great for end users but doesn't accurately represent all the sources and references that had to come together to build this output, as they often live in another folder.

For the Database view, pray your team has been declaring a logical schema bucketing, or a logical model naming convention, otherwise you will have a long, alphabetized list of database objects to scroll through, where staging, intermediate, and final output models are all intermixed. Clicking into a data model and viewing the documentation is helpful, but you would need to check out the DAG to see where the model lives in the overall flow.

#### The full dropdown list in their data warehouse.

If they have access to Worksheets, SQL runner, or another way to write ad hoc SQL queries, then they will have access to the data models as present in your database, ie. `database / schema / object`, but with less documentation attached, and more proclivity towards querying tables to check out their contents, which costs time and money.

- ![A screenshot of the the SQL Runner menu within Looker showcasing the dropdown list of all data models present in the database.](/img/blog/2022-05-17-stakeholder-friendly-model-names/data-warehouse-dropdown.png)
+ ![A screenshot of the SQL Runner menu within Looker showcasing the dropdown list of all data models present in the database.](/img/blog/2022-05-17-stakeholder-friendly-model-names/data-warehouse-dropdown.png)

**How model names can make this painful:**
Without proper naming conventions, you will encounter `analytics.order`, `analytics.orders`, `analytics.orders_new` and not know which one is which, so you will open up a scratch statement tab and attempt to figure out which is correct:
@@ -73,9 +73,9 @@ Without proper naming conventions, you will encounter `analytics.order`, `analyt
```sql
-- select * from analytics.orders limit 10
select * from analytics.orders_new limit 10
```
Hopefully you get it right via sampling queries, or eventually find out there is a true source of truth defined in a totally separate area: `core.dim_orders`.

The problem here is that the only information you can use to determine what data is within an object, or the purpose of the object, is the schema and model name.

### The engineer’s user experience

@@ -98,7 +98,7 @@ There is not much worse than spending all week developing on a task, submitting
This is largely the same as the Analyst experience above, except they created the data models or are aware of their etymologies. They are likely more comfortable writing ad hoc queries, but also have the ability to make changes, which adds a layer of thought processing when working.

**How model names can make this painful:**
It takes time to become a subject matter expert in the database. You will need to know which schema a subject lives in, what tables are the source of truth and/or output models, versus experiments, outdated objects, or building blocks used along the way. Working within this context, engineers know the history and company lore behind why a table was named that way or how its purpose may differ slightly from its name, but they also have the ability to make changes.

Change management is hard; how many places would you need to update, rename, re-document, and retest to fix a poor naming choice from long ago? It is a daunting position, which can create internal strife when constrained for time over whether we should continually revamp and refactor for maintainability or focus on building new models in the same pattern as before.

4 changes: 2 additions & 2 deletions website/docs/docs/cloud/manage-access/set-up-sso-okta.md
@@ -152,7 +152,7 @@ the integration between Okta and dbt Cloud.

## Configuration in dbt Cloud

To complete setup, follow the steps below in dbt Cloud.

### Supplying credentials

@@ -182,7 +182,7 @@ configured in the steps above.

21. Click **Save** to complete setup for the Okta integration. From
here, you can navigate to the URL generated for your account's _slug_ to
- test logging in with Okta. Additionally, users added the the Okta app
+ test logging in with Okta. Additionally, users added the Okta app
will be able to log in to dbt Cloud from Okta directly.

<Snippet path="login_url_note" />
@@ -312,7 +312,7 @@ Follow these steps to set up single sign-on (SSO) with dbt Cloud:

12. Click **Edit** in the Basic SAML Configuration section.

- <Lightbox src="/img/docs/dbt-cloud/access-control/basic-saml.jpg" width="75%" title="In the 'Set up Single Sign-On with SAML' page, click 'Edit' in the the 'Basic SAML Configuration' card" />
+ <Lightbox src="/img/docs/dbt-cloud/access-control/basic-saml.jpg" width="75%" title="In the 'Set up Single Sign-On with SAML' page, click 'Edit' in the 'Basic SAML Configuration' card" />

13. Use the following table to complete the required fields and connect to dbt:

