
Destination Improvement: Automatically Delete Rows Marked as _fivetran_deleted in Destination

Answered


Comments

18 comments

  • Official comment

    Hi everyone,

    Thanks for your posts in our feature request portal!

    We have heard from our customers a desire to have hard deletes issued on destinations when deletes are detected in the source. This seems to be common in database use cases, where customers want a like-for-like table in their destination that exactly matches their source.

    While I don't have a timeline to share for when we will implement hard deletes for destinations (which we are calling Live mode), this is something that is top of mind and we are discussing this internally. Please stay tuned for more updates as we research a possible solution here.

    In the meantime, every comment and upvote does help build the case for this feature.

    +1

    As we're adding more tables, it is becoming inefficient to manually add a transformation to hard delete records for each table. Does Fivetran have a planned solution to this? 
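
    For reference, the per-table fix most of us end up scheduling looks something like this (a minimal sketch; the schema and table name are illustrative placeholders):

        -- Post-sync cleanup scheduled per table: physically remove rows
        -- Fivetran has soft-deleted. "analytics.orders" is just an example.
        DELETE FROM analytics.orders
        WHERE _fivetran_deleted = TRUE;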

    Fivetran uses Salesforce logs to determine whether a record is deleted. When the records/logs are hard deleted in Salesforce, there is no log left to check against, so Fivetran erroneously assumes is_deleted = false. Revenue for an opportunity is a sum of its products, and since old product entries are often wrongly flagged as is_deleted = false (due to the hard delete), the sum across products can be 4x what it should be. Tableau shows the correct sum because it only pulls what's currently in Salesforce and ignores old entries. Please use the same approach.
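
    For anyone hitting the same inflated sums, the interim workaround is to filter both flags before aggregating (a sketch; the Salesforce table and column names here are assumptions and may differ in your schema):

        -- Exclude both Fivetran soft-deletes and Salesforce's own IsDeleted
        -- flag so stale line items don't inflate opportunity revenue.
        SELECT opportunity_id,
               SUM(total_price) AS opportunity_revenue
        FROM salesforce.opportunity_line_item
        WHERE NOT _fivetran_deleted
          AND NOT is_deleted
        GROUP BY opportunity_id;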

    +1

    Would be nice to see this for Coupa.

    We also need this; in almost all cases we want hard deletions to propagate through to the destination.

    This flag is both easy to miss as a filter and a source of performance issues, since it forces an extra predicate across every table and join.
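
    To make that concrete, every join ends up carrying one extra filter per table, and forgetting any one of them silently reintroduces deleted rows (table and column names are illustrative):

        -- Each table in the join needs its own soft-delete filter.
        SELECT o.order_id, c.customer_name
        FROM orders AS o
        JOIN customers AS c
          ON c.customer_id = o.customer_id
        WHERE NOT o._fivetran_deleted
          AND NOT c._fivetran_deleted;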

    +1

    Being forced into this is the worst part of an otherwise wonderful product. In the POC we stated our desired outcome as a simple "We want a 1:1 database sync." No one told us we would need an additional transformation on every table to make that happen. I definitely could have read the docs more thoroughly ahead of time :) but I strongly, strongly, strongly upvote this request.

    I have been bitten hard by this twice now in Fivetran implementations (several years apart). I understand the logic behind why Fivetran made this decision, but not being able to easily clean this up is a big pain. Not only do I look like an idiot because our data (which was well tested at release) is now wrong because of duplicate rows, but I also have to go through and add a transformation layer to clean these out, with no help from Fivetran to do so.

    Adding an emphatic upvote to this to give us the option to clean out rows that should have gone away from the dataset by default. 

    Any updates on this feature? Is this work in the plan?

    +1. It would be awesome to have like-for-like parity between our source (Snowflake) and destination (Postgres). That way, we don't need special logic around the deleted rows.
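
    Until that exists, the closest we get on the Postgres side is hiding the flag behind views so downstream queries never see soft-deleted rows (a sketch; the schema and table names are illustrative):

        -- One view per synced table; queries read from "clean" instead of
        -- "raw" and never see soft-deleted rows.
        CREATE OR REPLACE VIEW clean.orders AS
        SELECT *
        FROM raw.orders
        WHERE NOT _fivetran_deleted;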

    This is especially critical for Powered By Fivetran use cases, as it is not always clear to our end users that some tables have this column and that they need to filter on it to exclude data that is no longer present in the source.

    Any update on this functionality? This is a painful transformation for data that we are pulling in near real time.

    +1

    +1

    This is a really bad feature by design.

    If you decide to keep it, Fivetran should at least give a warning whenever rows with _fivetran_deleted = true have been inserted into the target database (something like the check sketched below).

    I just found this by chance.
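
    In the meantime, this is the kind of check we run by hand (a sketch; the table name is illustrative):

        -- Stand-in for the missing warning: count soft-deleted rows that
        -- are still physically present in the destination table.
        SELECT COUNT(*) AS soft_deleted_rows
        FROM analytics.orders
        WHERE _fivetran_deleted;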

    +1

    It would be great to have this "hard delete" behavior for all sources.

    +1