Testing and Documentation in DBT: Ensuring Data Quality and Accessibility

September 1, 2023 · 18 minute read

Shashank Mishra

Data Engineering

TLDR

Testing and documentation in DBT (Data Build Tool) are crucial pillars for maintaining data integrity and understanding. They work hand-in-hand to ensure data transformations are accurate and comprehensible for teams, promoting both data quality and easy accessibility.

Outline

  • The importance of testing in DBT

  • Writing tests in DBT

  • Using DBT to generate documentation

  • Maintaining and updating DBT documentation

  • A practical example of testing and documenting a DBT model

  • Conclusion

The importance of testing in DBT

In the world of data transformation and analytics, DBT (Data Build Tool) has emerged as a key player. A crucial aspect of DBT, which sets it apart from many traditional ETL tools, is its robust testing capabilities. Here's a deep dive into the importance of testing in DBT:

  • Catching Data Anomalies Early: 

    • With DBT's testing capabilities, data anomalies or issues can be identified and rectified early in the data transformation process.

    • This early detection can prevent costly mistakes downstream, such as presenting misleading insights to stakeholders.

  • Built-in Test Functions: 

    • DBT comes with a set of predefined tests like unique, not_null, and accepted_values.

    • These built-in tests are tailored for common data integrity checks, making it easy to validate data without writing extensive custom logic.

  • Custom Test Creation:

    • While built-in tests cover many scenarios, DBT's flexibility in allowing users to define custom tests ensures that specific business logic and edge cases can also be validated.

    • This ensures that data models are not just technically sound but also business-relevant.

  • Ensuring Consistency:

    • Regular testing in DBT ensures data consistency across models, especially important when multiple analysts or data engineers are involved.

    • This reduces the chance of conflicting transformations or duplicate work.

  • Source Data Validation:

    • Before diving deep into transformations, DBT tests can validate source data, ensuring that any external data ingested into the system meets the required standards.

    • This is particularly useful when integrating with third-party systems or external data providers where data quality might vary.

In essence, testing in DBT isn't just a feature—it's a fundamental part of the data transformation philosophy. It emphasizes the principle that data quality is paramount, and regular, rigorous testing is the key to achieving and maintaining that quality.

Writing tests in DBT

DBT (Data Build Tool) incorporates a foundational principle: "Test your data, not your code." This shifts the focus from merely coding transformations to ensuring the outcomes of these transformations are accurate and trustworthy. Here's how you can leverage DBT to write tests:

  • Utilizing Built-in Tests (see the example below):

    • unique: Validates that a column has no duplicate values.

    • not_null: Checks that a column contains no null values.

    • accepted_values: Ensures a column's values fall within a specific set of acceptable values.

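For example, a minimal schema.yml sketch that applies all three built-in tests (the orders model and its columns are hypothetical, used only for illustration):

version: 2

models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'returned']
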
  • Writing Custom Tests:

    • While the built-in tests cater to common scenarios, DBT's Jinja context allows for crafting custom SQL-based tests for specific use-cases.

    • A custom test is typically a SQL query that should return no rows if the test passes.

-- Example: Ensure the 'revenue' column is always positive
select *
from {{ ref('your_model') }}
where revenue < 0

  • Test Configurations (a sample is sketched below):

    Tests in DBT can be configured in the dbt_project.yml file or in the .yml properties file where the test is defined.

    • warn_if: Set a threshold at which the test warns without failing the run.

    • error_if: Define a threshold that will fail the run if exceeded.

    • severity: Determine the severity (e.g., warn or error) of a test failure.

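A minimal sketch of configuring a test in a properties .yml file (the model name and thresholds here are illustrative):

version: 2

models:
  - name: orders
    columns:
      - name: status
        tests:
          - not_null:
              config:
                severity: warn
                warn_if: ">10"
                error_if: ">100"
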
  • Source Tests:

    • DBT also allows testing directly on source data with source configurations.

    • This ensures that the raw data ingested into the transformation pipeline adheres to expected standards from the outset.

sources:
  - name: my_source
    tables:
      - name: my_table
        tests:
          - unique:
              column_name: id

  • Running and Reporting (see the commands below):

    • Use the dbt test command to execute tests.

    • DBT's CLI provides comprehensive feedback, listing which tests passed and which failed. This transparency aids in proactive error rectification.

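A couple of typical invocations (the model name is illustrative):

# Run every test in the project
dbt test

# Run only the tests defined for a single model
dbt test --select my_model
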
In summary, DBT's testing capabilities, both built-in and customizable, provide an extensive toolkit for data engineers and analysts to assert the quality and consistency of their data models. Proper testing in DBT ensures that analytics and insights derived are both accurate and reliable.

Using DBT to generate documentation

DBT (Data Build Tool) isn't just about transforming data. One of its standout features is its ability to auto-generate comprehensive documentation for your data models. This ensures that stakeholders, data engineers, and analysts have a clear understanding of the data's structure, lineage, and transformation logic. Here's how DBT achieves this:

  • Self-Documenting Code:

    • DBT uses the principle of "self-documenting" code. This means that the code you write for your models, tests, and other configurations inherently serves as its own documentation.

    • It reduces the gap between code creation and documentation, ensuring they evolve together.

  • description Field:

    • Within your DBT models and configurations, there's an option to use the description field.

    • This field allows you to provide detailed explanations for your models, columns, or tests, enhancing the auto-generated documentation.

models:
  - name: my_model
    description: "This model joins user data with transaction data to calculate LTV."

  • Visualization of Data Lineage:

    • One of DBT's powerful features is its ability to visually depict data lineage. This showcases the flow of data from source to transformation.

    • Stakeholders can easily trace back through a model's lineage, understanding its dependencies and sources.

  • Web-based Interface (see the commands below):

    • Running dbt docs serve will launch a local web server, presenting an interactive, web-based version of your project's documentation.

    • This interface is user-friendly, searchable, and provides a clear overview of models, tests, and sources.

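The typical two-step workflow looks like this:

# Compile the project and build the documentation site into the target/ directory
dbt docs generate

# Serve the generated documentation locally (the port flag is optional; the default is 8080)
dbt docs serve --port 8001
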
  • Integration with DBT Artifacts (see the example below):

    • DBT generates JSON artifacts post-run, which contain rich metadata about the run.

    • These artifacts can be utilized to enhance documentation with run statistics, test results, and more.

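For example, the statuses recorded in run_results.json can be inspected directly after a run (jq is used here only for readability; the field names follow dbt's artifact schema):

# Summarize the status of each model and test from the latest run
jq '.results[] | {unique_id, status}' target/run_results.json
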
  • Keeping Documentation Fresh:

    • As you evolve your DBT models and logic, the documentation can be regenerated with the dbt docs generate command.

    • This ensures that your documentation remains updated with the most recent changes and provides an accurate representation of your data landscape.

  • Sharing and Collaboration:

    • The web-based documentation can be easily shared among teams. This promotes collaboration and ensures that everyone is on the same page regarding data definitions and structures.

In conclusion, DBT's documentation capabilities ensure that your data transformations aren't just accurate and efficient, but also transparent and understandable. By bridging the gap between data engineering and analytics, DBT promotes a culture where data's story is as important as the insights derived from it.

Maintaining and updating DBT documentation

DBT (Data Build Tool) revolutionizes the way data teams think about transformations by offering a "source of truth" through its auto-generated documentation. Maintaining and updating this documentation is critical for ensuring a seamless workflow and fostering trust with stakeholders. Here's a guide on how to do this efficiently:

  • Auto-generation is Key:

    • At the heart of DBT's approach is the auto-generation of documentation. Running dbt docs generate automatically captures changes in your models, tests, and other configurations.

    • This ensures that as you evolve your models, the documentation is always kept in sync with minimal manual intervention.

  • Leverage the description Field:

    • Always update the description field when making changes to your models, columns, or tests.

    • This not only provides context about the data transformations but also informs users about recent updates or changes.

  • Use Version Control:

    • Incorporating DBT within a version control system (like Git) offers a historical view of changes.

    • This facilitates tracking alterations in both the transformation logic and the accompanying documentation.

  • Schedule Regular Reviews:

    • Designate times (e.g., end of sprint, monthly) to review documentation for completeness and accuracy.

    • This is particularly crucial if multiple team members are contributing to the DBT project.

  • Highlight Deprecations (see the sketch below):

    • If certain models or transformations are deprecated, ensure they're clearly marked in the documentation.

    • This provides clarity and prevents reliance on outdated data processes.

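One lightweight convention, sketched here, is to flag deprecated models in their description and meta properties (the model names are hypothetical):

models:
  - name: legacy_daily_sessions
    description: "DEPRECATED: superseded by fct_daily_sessions. Do not build new work on this model."
    meta:
      deprecated: true
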
  • Integrate with CI/CD (a sample workflow is sketched below):

    • Use Continuous Integration and Continuous Deployment (CI/CD) pipelines to automate the generation and deployment of documentation updates.

    • Tools like Jenkins or GitHub Actions can automatically run dbt docs generate and publish the resulting site whenever changes are pushed.

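A minimal GitHub Actions sketch, assuming a checked-in profiles.yml and a Postgres adapter (both are assumptions; adapt the adapter, secrets handling, and publishing step to your setup):

# .github/workflows/dbt-docs.yml (hypothetical file name)
name: dbt-docs
on:
  push:
    branches: [main]
jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-core dbt-postgres   # adapter choice is an assumption
      - run: dbt deps
      - run: dbt docs generate --profiles-dir .  # assumes profiles.yml lives in the repo root
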
While DBT greatly simplifies the documentation process with its auto-generation feature, maintaining its freshness and relevance requires a systematic approach. By integrating best practices and leveraging automation, teams can ensure that their DBT documentation remains a reliable and up-to-date resource for all stakeholders.

A practical example of testing and documenting a DBT model

DBT (Data Build Tool) offers powerful capabilities not just for transforming data but also for testing and documenting it. Let's walk through a practical example of both testing and documenting a DBT model.

Scenario:

Imagine we have a DBT model that aggregates user data, generating a table of daily active users (DAU) from raw event logs.

Model Creation:

  • First, we create our DBT model, dau.sql:

-- Aggregate raw event logs into one row per day of distinct active users.
-- In a real project the raw table would typically be referenced via {{ source(...) }}.
SELECT
    DATE(event_timestamp) AS date,
    COUNT(DISTINCT user_id) AS daily_active_users
FROM
    raw_event_logs
GROUP BY
    1

Testing the Model:

  • Before deploying our model to production, we want to ensure its accuracy. DBT offers built-in tests, and we can also define custom tests.

  • Unique Test: Ensure that each date has a single entry.

version: 2

models:
  - name: dau
    tests:
      - unique:
          column_name: date

  • Not Null Test: Ensure that there are no NULL values in the date column.

      # Added alongside the unique test under the same tests: block
      - not_null:
          column_name: date

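We can also add a custom (singular) test for business logic the built-ins don't cover. A minimal sketch, assuming a file such as tests/assert_dau_non_negative.sql (the file name is hypothetical); dbt treats any rows the query returns as failures:

-- Fail if any day reports a negative count of active users
select *
from {{ ref('dau') }}
where daily_active_users < 0
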
Documenting the Model:

  • DBT's description field provides a seamless way to add documentation directly in the code.

  • Model Description: At the top of dau.sql, add a comment to describe the model's purpose.

-- This model aggregates raw event logs to compute daily active users.

  • Column Descriptions: In the schema.yml file associated with the model, provide descriptions for each column.

version: 2

models:
  - name: dau
    description: "This model aggregates raw event logs to compute daily active users."
    columns:
      - name: date
        description: "The specific day for the DAU count."
      - name: daily_active_users
        description: "The count of unique active users for that day."

Generating and Viewing Documentation:

  • After adding the necessary tests and documentation:

    • Run dbt docs generate to update the project's documentation.

    • Use dbt docs serve to start a local server and view the interactive documentation.

Conclusion

DBT's integrated testing and documentation underscore a new era of data reliability and transparency. By intertwining rigorous quality checks with clear, accessible documentation, DBT empowers teams to foster trust, drive understanding, and elevate the overall standard of data practices in today's digital landscape. In the next article of this series, we’ll see how we can integrate DBT with Mage.
