Latest Posts

Follow our stories and unique insights.
data quality
With data constantly changing in source and destination databases, how can you be confident in the quality of your analysis? Does your data integration tooling provide any data quality guarantees?
data infrastructure
Data connectors are a critical component of any company’s data infrastructure stack. They replicate the company’s data from different sources into a shared data warehouse or data lake, which then enables the data team to aggregate, combine, and explore all these datasets together. When a data connector is set up, it starts reading changes as they happen at the source, but it also triggers a “historical sync” for each table to fetch all the existing data, so that the data at the destination matches the data at the source exactly. Historical syncs are needed…
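To make those two phases concrete, here is a minimal sketch in Python (illustrative only, not Datacoral’s implementation) of what a connector does at setup: a one-time historical sync of existing rows, followed by replaying change events so the destination keeps matching the source. All names and data structures below are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, Iterable, Optional


@dataclass
class ChangeEvent:
    op: str                     # "insert", "update", or "delete"
    key: str                    # primary key of the affected row
    row: Optional[dict] = None  # new row values; None for deletes


def historical_sync(source_rows: Iterable[dict], destination: Dict[str, dict]) -> None:
    """One-time copy of every existing source row into the destination."""
    for row in source_rows:
        destination[row["id"]] = row


def apply_changes(events: Iterable[ChangeEvent], destination: Dict[str, dict]) -> None:
    """Replay change events in order so the destination keeps matching the source."""
    for event in events:
        if event.op == "delete":
            destination.pop(event.key, None)
        else:  # insert or update
            destination[event.key] = event.row


destination: Dict[str, dict] = {}
historical_sync([{"id": "1", "name": "Ada"}, {"id": "2", "name": "Grace"}], destination)
apply_changes(
    [ChangeEvent(op="update", key="1", row={"id": "1", "name": "Ada L."}),
     ChangeEvent(op="delete", key="2")],
    destination,
)
print(destination)  # {'1': {'id': '1', 'name': 'Ada L.'}}
```

In a real connector, the destination would be a warehouse table rather than an in-memory dictionary, and the change events would come from the source database’s change stream, but the two-phase shape is the same.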
data integration and change data capture
This article was originally posted on Towards Data Science. Data integrations have been around for decades, but there has been a recent explosion of new, compelling data integration companies offering cloud-native, easy-to-configure connectors and quick access to high-value analytics. These offerings come from companies like Fivetran, Stitch Data, Matillion, and Airbyte. It can be difficult for any data team to know whether to stick with their existing data stack or to adopt these new tools. Though it’s great to see data integrations becoming more mainstream and new approaches emerging, we think there’s a growing gap being created by new…
Datacoral, an end-to-end data integration platform built on Amazon Web Services (AWS) serverless technologies, today announced its integration with the Amazon Redshift console. The integration lets Amazon Redshift customers quickly and seamlessly select participating AWS Partners who can meet their data pipeline specifications. As a result, customers interested in Datacoral’s CDC integrations and 80+ pre-built connectors will be able to use Datacoral’s services directly within the console to get their data flowing into Amazon Redshift. With Datacoral’s out-of-the-box data quality, data freshness guarantees, and ability to run within the…
data lakehouses
Data warehouses, data lakes, and now data lakehouses. What’s next? The technology for managing your company’s data is changing so rapidly that it can be difficult to know what to do about new vendors bringing innovative data services to the market. Can they plug into your data stack? Will they compromise your data security strategy? If you have a data lake and a data warehouse, should you consolidate into a lakehouse? Is there something even newer on the market you’re missing? That’s a lot to think about. We’re here to help. We asked Datacoral’s CEO Raghu Murthy some key…
change data capture
Highlights from the “Real World Change Data Capture” Podcast (Part 2). In our previous article, What Is Change Data Capture (CDC) Part 1, we highlighted several Change Data Capture-related questions and answers from episode 177 of the Data Engineering Podcast. Tobias Macey hosted Datacoral’s founder and CEO, Raghu Murthy, and spent 50 minutes discussing Datacoral’s real-world usage of Change Data Capture data pipelines. In this article, we go a little deeper with CDC by covering: alternatives to CDC and use cases for more batch-oriented approaches, what to consider before you implement CDC and the barriers to entry, and shortcomings and…

Get started with Datacoral

We install in your AWS VPC, so you control your data.
