This package enriches your Fivetran data by doing the following:
- Adds descriptions to tables and columns that are synced using Fivetran
- Adds column-level testing where applicable. For example, all primary keys are tested for uniqueness and non-null values (see the sketch after this list).
- Models staging tables, which will be used in our transform package
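For illustration, here is roughly what dbt compiles those primary-key tests into - a sketch assuming a hypothetical `stg_klaviyo__person` model with primary key `person_id`; any returned row counts as a test failure:

```sql
-- not_null: fail if any primary key value is missing
select person_id
from your_schema.stg_klaviyo__person
where person_id is null;

-- unique: fail if any primary key value appears more than once
select person_id
from your_schema.stg_klaviyo__person
group by person_id
having count(*) > 1;
```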
This package contains staging models, designed to work in tandem with our Klaviyo transformation package. The staging models:
- Remove any rows that are soft-deleted
- Name columns consistently across all packages (see the sketch after this list):
  - Boolean fields are prefixed with `is_` or `has_`
  - Timestamps are appended with `_timestamp`
  - ID primary keys are prefixed with the name of the table. For example, a user table’s ID column is renamed `user_id`
  - Foreign keys include the table that they refer to. For example, a project table’s owner ID column is renamed `owner_user_id`
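As a minimal sketch of those conventions applied in a staging model's select (the table and columns here are hypothetical, not actual Klaviyo fields):

```sql
select
    id as project_id,              -- primary key prefixed with the table name
    owner_id as owner_user_id,     -- foreign key names the table it refers to
    archived as is_archived,       -- boolean prefixed with is_
    created as created_timestamp   -- timestamp appended with _timestamp
from your_schema.project;
```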
Include the following in your `packages.yml` file:

```yml
packages:
  - package: fivetran/klaviyo_source
    version: [">=0.4.1", "<0.5.0"]
```
By default, this package looks for your Klaviyo data in the `klaviyo` schema of your target database. If this is not where your Klaviyo data is, add the following configuration to your `dbt_project.yml` file:

```yml
...
config-version: 2

vars:
    klaviyo_database: your_database_name
    klaviyo_schema: your_schema_name
```
Unioning Multiple Klaviyo Connectors
If you have multiple Klaviyo connectors in Fivetran and would like to use this package on all of them simultaneously, we have provided functionality to do so. The package will union all of the data together and pass the unioned table into the transformations. You will be able to see which source each record came from in the `source_relation` column of each model (see the query sketch after the configuration below). To use this functionality, you will need to set either the `klaviyo_union_schemas` or `klaviyo_union_databases` variable (note that you cannot use both) in your `dbt_project.yml` file:

```yml
...
config-version: 2

vars:
    klaviyo_source:
        klaviyo_union_schemas: ['klaviyo_usa','klaviyo_canada'] # use this if the data is in different schemas/datasets of the same database/project
        klaviyo_union_databases: ['klaviyo_usa','klaviyo_canada'] # use this if the data is in different databases/projects but uses the same schema name
```
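Once the models have been built, you can verify the union with a quick query such as the one below (a sketch assuming a `stg_klaviyo__person` staging model in `your_schema`):

```sql
-- count the rows contributed by each Klaviyo connector
select source_relation, count(*) as row_count
from your_schema.stg_klaviyo__person
group by source_relation;
```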
Additionally, this package includes all source columns defined in the macros folder. We highly recommend including custom fields in this package, as the models now only bring in the standard fields for the EVENT and PERSON tables. You can add more columns using our passthrough column variables. These variables allow the passthrough fields to be aliased (`alias`) and cast (`transform_sql`) if desired, although neither is required. Datatype casting is configured via a SQL snippet within the `transform_sql` key. You may add the desired SQL snippet while omitting the `as field_name` part of the casting statement - this will be handled by the `alias` attribute - and your custom passthrough fields will be cast accordingly (see the rendered-SQL sketch after the example below).
Use the following format for declaring the respective passthrough variables:
```yml
...
vars:
    klaviyo__event_pass_through_columns:
        - name: "property_field_id"
          alias: "new_name_for_this_field_id"
          transform_sql: "cast(new_name_for_this_field as int64)"
        - name: "this_other_field"
          transform_sql: "cast(this_other_field as string)"
    klaviyo__person_pass_through_columns:
        - name: "custom_crazy_field_name"
          alias: "normal_field_name"
```
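With the event passthrough variables declared above, the compiled staging SQL would include lines roughly like the following (a sketch of the rendering, not the package's literal output; `source_event_table` is a placeholder):

```sql
select
    -- the alias attribute supplies the "as field_name" part of each cast
    cast(new_name_for_this_field as int64) as new_name_for_this_field_id,
    cast(this_other_field as string) as this_other_field
from source_event_table;
```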
Changing the Build Schema
By default, this package will build the Klaviyo staging models within a schema titled (<target_schema> + `_stg_klaviyo`) in your target database. If this is not where you would like your Klaviyo staging data to be written, add the following configuration to your `dbt_project.yml` file:

```yml
...
models:
    klaviyo_source:
        +schema: my_new_schema_name # leave blank for just the target_schema
```
Note that if your profile does not have permissions to create schemas in your warehouse, you can set the `+schema` configuration to blank. The package will then write all tables to your pre-existing target schema.
Don’t see a model or specific metric you would have liked to be included? Notice any bugs when installing and running the package? If so, we highly encourage and welcome contributions to this package! Please create issues or open PRs against `master`. Check out this post on the best workflow for contributing to a package.
This package has been tested on BigQuery, Snowflake, Redshift, Postgres, and Databricks.
Databricks Dispatch Configuration
dbt v0.20.0 introduced a new project-level dispatch configuration that enables an “override” setting for all dispatched macros. If you are using a Databricks destination with this package, you will need to add the below (or a variation of the below) dispatch configuration within your `dbt_project.yml`. This is required in order for the package to accurately search for macros within the `dbt-labs/spark_utils` and then the `dbt-labs/dbt_utils` packages respectively.

```yml
dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']
```
- Provide feedback on our existing dbt packages or what you’d like to see next
- Have questions, feedback, or need help? Book a time during our office hours using Calendly or email us at email@example.com
- Find all of Fivetran’s pre-built dbt packages in our dbt hub
- Learn how to orchestrate your models with Fivetran Transformations for dbt Core™
- Learn more about Fivetran overall in our docs
- Check out Fivetran’s blog
- Learn more about dbt in the dbt docs
- Check out Discourse for commonly asked questions and answers
- Join the chat on Slack for live discussions and support
- Find dbt events near you
- Check out the dbt blog for the latest news on dbt’s development and best practices