This package models TikTok Ads data from Fivetran’s connector. It uses data in the format described by this ERD.
This package enriches your Fivetran data by doing the following:
- Adds descriptions to tables and columns that are synced using Fivetran
- Adds column-level testing where applicable. For example, all primary keys are tested for uniqueness and non-null values (a sketch of such a test follows this list).
- Models staging tables, which will be used in our transform package
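
As a rough sketch of what such column-level tests look like, the snippet below shows a generic dbt `schema.yml` entry that checks a primary key for uniqueness and non-null values. The model name (`stg_tiktok_ads__advertiser`) is an assumption for illustration and may not match the package’s actual property files.

```yaml
# A minimal sketch of the kind of column-level tests this package applies.
# The model name is assumed for illustration; see the package's own .yml
# files for the actual definitions.
version: 2

models:
  - name: stg_tiktok_ads__advertiser
    columns:
      - name: advertiser_id
        tests:
          - unique    # primary keys are tested for uniqueness
          - not_null  # and for non-null values
```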
Models
This package contains staging models, designed to work simultaneously with our TikTok Ads transform package and our multi-platform Ad Reporting package. The staging models:
- Name columns consistently across all packages:
  - Boolean fields are prefixed with `is_` or `has_`
  - Timestamps are appended with `_at`
  - ID primary keys are prefixed with the name of the table. For example, the advertiser table’s ID column is renamed `advertiser_id`.
Installation Instructions
Check dbt Hub for the latest installation instructions, or read the dbt docs for more information on installing packages.
Include the following in your `packages.yml`:

```yaml
packages:
  - package: fivetran/tiktok_ads_source
    version: [">=0.1.0", "<0.2.0"]
```
Configuration
By default, this package will look for your TikTok Ads data in the `tiktok_ads` schema of your target database. If this is not where your TikTok Ads data is, please add the following configuration to your `dbt_project.yml` file:

```yaml
...
config-version: 2

vars:
    tiktok_ads_database: your_database_name
    tiktok_ads_schema: your_schema_name
```
Changing the Build Schema
By default, this package will build the TikTok Ads staging models within a schema titled (`<target_schema>` + `_stg_tiktok_ads`) in your target database. If this is not where you would like your TikTok Ads staging data to be written, add the following configuration to your `dbt_project.yml` file:

```yaml
...
models:
    tiktok_ads_source:
        +schema: my_new_schema_name # leave blank for just the target_schema
```
Database Support
This package has been tested on BigQuery, Snowflake, Redshift, Postgres, and Databricks.
Databricks Dispatch Configuration
dbt `v0.20.0` introduced a new project-level dispatch configuration that enables an “override” setting for all dispatched macros. If you are using a Databricks destination with this package, you will need to add the following (or a variation of the following) dispatch configuration within your `dbt_project.yml`. This is required in order for the package to accurately search for macros within the `dbt-labs/spark_utils` and then the `dbt-labs/dbt_utils` packages, respectively.

```yaml
dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']
```
Contributions
Additional contributions to this package are very welcome! Please create issues or open PRs against `main`. Check out this Discourse post on the best workflow for contributing to a package.
Resources:
- Provide feedback on our existing dbt packages or what you’d like to see next
- Have questions or feedback, or need help? Book a time during our office hours using Calendly or email us at solutions@fivetran.com.
- Find all of Fivetran’s pre-built dbt packages in our dbt hub
- Learn how to orchestrate your models with Fivetran Transformations for dbt Core™
- Learn more about Fivetran overall in our docs
- Check out Fivetran’s blog
- Learn more about dbt in the dbt docs
- Check out Discourse for commonly asked questions and answers
- Join the chat on Slack for live discussions and support
- Find dbt events near you
- Check out the dbt blog for the latest news on dbt’s development and best practices