We now use Azure Blob Storage as the staging location for our temporary files if:
- Your Databricks cluster is hosted on Azure
- Your Databricks cluster is a SQL Endpoint cluster
- Your Databricks cluster is a general purpose cluster with DBR version 10.2 or greater
Our Databricks destination now supports clusters with Databricks Runtime versions 9.0 through 10.x.
We now support Databricks on Google Cloud.
Our Databricks destination now supports clusters with Databricks Runtime 8.0 and above.
Our Databricks destination is now generally available.
We have reduced the sync duration of append-only tables that have primary key columns of the BIGINT data type. During internal performance testing, we observed shorter sync times for destinations with large tables.
Our Databricks destination now supports the creation of external tables. You can now opt to create Delta tables as external tables from the connector setup form.
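The external-table option corresponds to a standard Delta Lake DDL pattern: the table is registered in the metastore, but its data files live at an explicit storage path, so dropping the table leaves the data files in place. A minimal sketch of that DDL follows; the helper function, table name, and storage path are hypothetical illustrations, not the connector's actual implementation.

```python
def external_delta_table_ddl(table: str, location: str) -> str:
    """Build a CREATE TABLE statement for an external Delta table.

    The explicit LOCATION clause is what makes the table external:
    the metastore tracks only metadata, and DROP TABLE does not
    delete the underlying data files at that path.
    """
    return f"CREATE TABLE IF NOT EXISTS {table} USING DELTA LOCATION '{location}'"

# Hypothetical table name and path, for illustration only.
print(external_delta_table_ddl("analytics.orders", "s3://my-bucket/delta/orders"))
```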
We now support syncing the BINARY data type from your source.
We now use our own Amazon S3 bucket as intermediate storage for staging temporary data during a sync. When setting up Databricks as your destination, you no longer have to create an S3 bucket.
We will end support for clusters with Databricks Runtime 7.0 and below on August 15, 2020. To prevent your integrations from failing or causing data loss, upgrade your Databricks Runtime to 7.1 before August 15, 2020.