This package contains staging models designed to work simultaneously with our Twitter Organic transform package and our Social Media Reporting package. The staging models name columns consistently across all packages:
- Boolean fields are prefixed with `is_` or `has_`
- Timestamps are appended with `_timestamp`
- ID primary keys are prefixed with the name of the table. For example, the `account` table's ID column is renamed `account_id`
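The package applies these renames in its SQL staging models; as a rough illustration of the conventions only (the function and example columns below are hypothetical, not the package's actual schema), the logic amounts to:

```python
def stage_column_name(table: str, raw_name: str, kind: str) -> str:
    """Rename a raw source column per the conventions listed above.

    kind is one of 'id', 'boolean', 'timestamp', or 'other'.
    """
    if kind == "id" and raw_name == "id":
        return f"{table}_id"               # account.id -> account_id
    if kind == "boolean" and not raw_name.startswith(("is_", "has_")):
        return f"is_{raw_name}"            # verified -> is_verified
    if kind == "timestamp" and not raw_name.endswith("_timestamp"):
        return f"{raw_name}_timestamp"     # created -> created_timestamp
    return raw_name
```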
Include the following Twitter Organic source package version in your `packages.yml` file:

```yaml
packages:
  - package: fivetran/twitter_organic_source
    version: [">=0.1.0", "<0.2.0"]
```
The Fivetran team maintaining this package only maintains the latest version. We recommend that you keep your `packages.yml` updated with the latest version from the dbt hub. Refer to the CHANGELOG and release notes for more information on changes across versions.
By default, this package will look for your Twitter Organic data in the `twitter_organic` schema of your target database. If this is not where your Twitter Organic data is, add the following configuration to your `dbt_project.yml` file:

```yml
# dbt_project.yml

...

config-version: 2

vars:
    twitter_organic_schema: your_schema_name
    twitter_organic_database: your_database_name
```
Unioning Multiple Twitter Organic Connectors
If you have multiple Twitter Organic connectors in Fivetran and would like to use this package on all of them simultaneously, we have provided functionality to do so. The package will union all of the data together and pass the unioned table(s) into the final models. You will be able to see which source each record came from in the `source_relation` column of each model. To use this functionality, you will need to set either the `twitter_organic_union_schemas` or the `twitter_organic_union_databases` variable (note that you cannot use both) in your `dbt_project.yml` file:
```yml
# dbt_project.yml

...

config-version: 2

vars:
    # You may set EITHER the schemas variable below
    twitter_organic_union_schemas: ['twitter_organic_one','twitter_organic_two']
    # OR the databases variable below
    twitter_organic_union_databases: ['twitter_organic_one','twitter_organic_two']
```
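The package implements this unioning in SQL/Jinja; as a rough conceptual sketch of the resulting behavior (the function and row values below are hypothetical), each source's rows are stacked and tagged with the schema or database they came from:

```python
def union_connectors(tables_by_source: dict) -> list:
    """Union rows from multiple connector sources, tagging each row with
    the source it came from in a source_relation column."""
    unioned = []
    for source, rows in tables_by_source.items():
        for row in rows:
            unioned.append({**row, "source_relation": source})
    return unioned


# Illustrative input: one row from each of two connector schemas.
rows = union_connectors({
    "twitter_organic_one": [{"tweet_id": 1}],
    "twitter_organic_two": [{"tweet_id": 2}],
})
```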
Changing the Build Schema
By default, this package will build the Twitter Organic staging models within a schema titled (`<target_schema>` + `_stg_twitter_organic`) in your target database. If this is not where you would like your Twitter Organic staging data to be written, add the following configuration to your `dbt_project.yml` file:
```yml
# dbt_project.yml

...

models:
    twitter_organic_source:
        +schema: my_new_schema_name # leave blank for just the target_schema
```
Don’t see a model or specific metric you would like to be included? Notice any bugs when installing and running the package? If so, we highly encourage and welcome contributions to this package!
Please create issues or open PRs against `main`. See the Discourse post for information on how to contribute to a package.
This package has been tested on BigQuery, Snowflake, Redshift, Postgres, and Databricks.
Databricks Dispatch Configuration
dbt v0.20.0 introduced a new project-level dispatch configuration that enables an "override" setting for all dispatched macros. If you are using a Databricks destination with this package, you will need to add the below (or a variation of the below) dispatch configuration within your `dbt_project.yml`. This is required in order for the package to accurately search for macros within the `dbt-labs/spark_utils` and then the `dbt-labs/dbt_utils` packages, respectively.
```yml
dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']
```
- Provide feedback on our existing dbt packages or what you’d like to see next
- Have questions or feedback, or need help? Book a time during our office hours here or email us at firstname.lastname@example.org.
- Find all of Fivetran’s pre-built dbt packages in our dbt hub
- Learn how to orchestrate your models with Fivetran Transformations for dbt Core™
- Learn more about Fivetran overall in our docs
- Check out Fivetran’s blog
- Learn more about dbt in the dbt docs
- Check out Discourse for commonly asked questions and answers
- Join the chat on Slack for live discussions and support
- Find dbt events near you
- Check out the dbt blog for the latest news on dbt’s development and best practices