Connector Improvement: Volume Cap for Tables
We are new to Fivetran and have been using it for three months. Last week we discovered that we had used up all of our pre-paid credits for the year during that period. Oddly, we noticed a big spike right after our initial two-week 'free period'. This was a big concern. We started monitoring usage, and the spike didn't recur for a while, so we chalked it up to an anomaly and moved on to other work.

As it turns out, we had a 'rogue' process in our Data Warehouse (which we inherited from others who had moved on long ago) that was re-creating a snapshot of our CRM in the warehouse at seemingly random intervals. We didn't know about this process, and we have since removed those tables from Fivetran replication to Snowflake. However, we are now left high and dry. We have not purposely abused our usage, but I am now in a situation where nine spike days have consumed the remaining nine months of my credits. This hurts badly. Now I feel like a sucker for going with Fivetran, after we spent the money on a PoC, etc. I would much rather be forgiven for these nine days of random spikes and instead pay (use) the credits for the 'free' 14 days for each new connector.

However, I am now stuck, since I had no mechanism to put Volume Caps on my tables for ingestion. I am not sure we can afford to remain a Fivetran customer, but I would very much like a kill switch for any volumes over a given amount, or a Volume Cap for any table that grows more than a given percentage above trend. If I were the CIA, I wouldn't care about this, but I am just a humble Enterprise Data Manager trying to move my own data without throwing Red Budget Flags all over the place, and moving data shouldn't cause so much pain and suffering. Thank you.
-
Official comment
Hi Kirk, Shira from the Product Team here!
Thank you for sharing your request. I have added it to the feature improvements backlog.
I am curious to learn whether alerts on connector-specific MAR spikes would have helped in your case.
-
Hi Shira,
I am afraid that alerts simply don't cut the mustard in these runaway-train scenarios. Again, I would say that this is a deal breaker for using Fivetran to migrate Data Warehouses. Data Warehouses are messy things that usually have a sordid history, or rather a more sordid history in terms of data integrity than transactional databases have. All sorts of strange things are done in Data Warehouses, like the addition of columns and the recreation of tables. It is completely unrealistic to assume that a Data Warehouse should conform to a third normal form (3NF) design pattern, because this has never been the case in Data-Warehouse-Land, where the concepts of dimensions and cubes have traditionally ruled the day.

So, no alert in the world is good enough if the train is allowed to run away, crashing into the population and causing devastation in its wake. There needs to be a Pre-Cook phase (or Pre-Fivetran phase, where 'to Fivetran' is a verb) that has a few logical steps:

1. Estimate the amount of change for a given table that is 'ticked' for replication.
2. Check that estimate against a percentage growth tolerance relative to the migration volume trend for that table.
3. Stop that table migration dead in its tracks, before it can cause any monetary or psychological harm to the Fivetran customer, if the estimate is larger than trend * acceptable growth %.
4. Notify said customer that Fivetran has once again prevented the loss and suffering incurred by these disasters. Yayyy for Fivetran and its happy customers.
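The four steps above could be sketched roughly like this (a minimal illustration in Python; the function names, threshold, and messages are all hypothetical, and this is not part of any real Fivetran API):

```python
def should_block_sync(estimated_rows: int, trend_rows: int,
                      max_growth_pct: float) -> bool:
    """Steps 2-3: block the sync if the estimated change volume exceeds
    the table's trend by more than the allowed growth percentage."""
    cap = trend_rows * (1 + max_growth_pct / 100.0)
    return estimated_rows > cap


def pre_sync_check(table: str, estimated_rows: int, trend_rows: int,
                   max_growth_pct: float = 25.0) -> str:
    """Steps 1-4 in miniature: given a change estimate (step 1), apply
    the volume cap (steps 2-3) and return a notification (step 4)."""
    if should_block_sync(estimated_rows, trend_rows, max_growth_pct):
        return (f"BLOCKED: {table} estimated at {estimated_rows} rows, "
                f"more than {max_growth_pct}% above its trend of "
                f"{trend_rows} rows.")
    return f"OK: {table} is within its volume cap."
```

With a 25% tolerance, a table trending at 40,000 rows that suddenly estimates 5,000,000 rows (the 'rogue snapshot' case) would be stopped before any credits are consumed, while normal growth passes through untouched.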
-
Kirk, thank you for that insight!