Connector Improvement: Add ability to change destination column names
Sometimes a connector has a limitation where a source column name cannot be mapped one-to-one to the destination (e.g., BigQuery column names cannot start with "_PARTITION").
-
Josh Lubawy I'm curious to understand when and how you would use this ability. We don't allow renaming because it goes against our "one simple default choice" principle of reducing configurability in favor of simplicity and automation.
-
This arose from a support ticket. Our source database has a column named `_partitionKey` and our destination is BigQuery. Unfortunately, BigQuery does not allow column names to start with `_PARTITION`, so this column was not copied over: https://cloud.google.com/bigquery/docs/schemas#column_names
Unfortunately we can't rename the column at the source, so the only workaround at the moment is to duplicate the field under another name. The ideal solution would be an override feature for edge cases like this.
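To illustrate what such an override could look like, here is a minimal sketch of a pre-load rename map. Everything in it is hypothetical: the function name, the `overrides` parameter, and the (partial) list of reserved prefixes are illustrative assumptions, not Fivetran functionality; see the linked BigQuery docs for the authoritative prefix list.

```python
# Hypothetical pre-load helper: rename source columns that BigQuery would reject.
# "_partitionKey" comes from the ticket above; the helper and the override map
# are illustrative only, not part of Fivetran's connector behavior.

# Partial list of prefixes BigQuery disallows; see the docs link above.
BIGQUERY_RESERVED_PREFIXES = ("_TABLE_", "_FILE_", "_PARTITION")

def rename_for_bigquery(columns, overrides=None):
    """Return a {source_name: destination_name} map, applying explicit
    overrides first and falling back to a generic prefix fix."""
    overrides = overrides or {}
    mapping = {}
    for col in columns:
        if col in overrides:
            mapping[col] = overrides[col]
        elif col.upper().startswith(BIGQUERY_RESERVED_PREFIXES):
            # Generic fallback: strip the leading underscores and prefix "x",
            # e.g. "_partitionKey" -> "xpartitionKey".
            mapping[col] = "x" + col.lstrip("_")
        else:
            mapping[col] = col
    return mapping

print(rename_for_bigquery(["id", "_partitionKey"],
                          overrides={"_partitionKey": "partition_key"}))
# {'id': 'id', '_partitionKey': 'partition_key'}
```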
-
I am also having an issue with this. My Google Sheets data source has long column names, and Fivetran didn't copy these columns to my Postgres DW; it skipped them. I would rather change the column names inside Fivetran than ask the department owner of the table to change them. As an alternative, instead of skipping these columns, Fivetran could automatically truncate the names to the maximum column-name length Postgres allows.
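For reference, here is a minimal sketch of the truncation alternative suggested above. The helper name and collision-handling scheme are assumptions for illustration; the 63-byte limit is Postgres's default identifier length (NAMEDATALEN - 1), and the byte count equals the character count only for ASCII names.

```python
# Hypothetical sketch: truncate over-long column names to Postgres's default
# identifier limit instead of skipping the columns. Illustrative only; this is
# not something Fivetran does today.

PG_MAX_IDENT_LEN = 63  # default NAMEDATALEN - 1 (bytes; equals chars for ASCII)

def truncate_for_postgres(columns):
    """Return a {source_name: destination_name} map with names cut to the
    Postgres limit; colliding truncations get a numeric suffix."""
    mapping, seen = {}, set()
    for col in columns:
        name = col[:PG_MAX_IDENT_LEN]
        i = 1
        while name in seen:  # two long names may truncate to the same prefix
            suffix = f"_{i}"
            name = col[:PG_MAX_IDENT_LEN - len(suffix)] + suffix
            i += 1
        seen.add(name)
        mapping[col] = name
    return mapping
```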
-
What's the status here?
-
I would also like this feature to permanently change destination column names (in Snowflake, in my case). The use case is switching from another data ingestion tool to Fivetran and wanting to keep the 'old' column names that were in use before Fivetran. Thanks!