Data Infrastructure


3 Layer Framework
How do you think about your data stack? If your job requires analyzing data, or supporting teams who analyze data, you will likely find yourself at some point working on data pipelines and learning about their complexity as you go. While data pipelines are light years ahead of where they were 20 years ago, the day-to-day challenges data teams face today are largely the same. How have we come so far, yet stayed where we are? We have seen a lot of thoughtful work being done to promote the modern data stack. Andreessen Horowitz’s seminal thought piece on…
Synchronous vs Asynchronous Invocations
At Datacoral, we are heavy users of AWS Lambda and other AWS serverless services because of the power, flexibility, and security we can provide our users by building our architecture on top of them. The benefits of these serverless services include:

- No resource provisioning required; services scale up and down seamlessly based on demand
- Fault tolerance and retry mechanisms are handled by the services themselves
- Services end up being cheaper for most real-world workloads (since machines don’t have to be provisioned for peak levels of traffic)
- Lightweight and fast deployments
- Very tight integration with other cloud services for…
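To illustrate the distinction the post title draws, here is a minimal sketch (not taken from the post itself) of invoking an AWS Lambda function synchronously versus asynchronously with boto3; the function name my-function and the payloads are placeholders.

```python
import json
import boto3

# Placeholder function name; substitute a Lambda function that exists in your account.
FUNCTION_NAME = "my-function"

lambda_client = boto3.client("lambda")

# Synchronous invocation: the call blocks until the function returns,
# and the response payload is available to the caller.
sync_response = lambda_client.invoke(
    FunctionName=FUNCTION_NAME,
    InvocationType="RequestResponse",
    Payload=json.dumps({"source": "sync-example"}),
)
print(json.loads(sync_response["Payload"].read()))

# Asynchronous invocation: Lambda queues the event and returns immediately;
# retries on failure are handled by the service itself.
async_response = lambda_client.invoke(
    FunctionName=FUNCTION_NAME,
    InvocationType="Event",
    Payload=json.dumps({"source": "async-example"}),
)
print(async_response["StatusCode"])  # 202 means the event was accepted for async processing
```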
Jyve Leveraged Datacoral’s Batch Compute to Better Understand its Marketplace
Jyve serves the grocery and CPG industry by connecting companies with a pool of skilled labor professionals, called Jyvers, who are trained to complete jobs like restocking shelves and auditing goods for expiry. As a gig-economy platform, Jyve understands that success lies in making both its partners and Jyvers happy, and that data is an invaluable tool for gaining insight into its marketplace dynamics. Focusing on the needs of its partners and Jyvers is a mission unique to Jyve, so when Jyve’s tech team was beginning to grow its data team, it made sense…
Datacoral’s Event-Driven Orchestration Framework Now Available to Customers
Datacoral provides an end-to-end data engineering platform to extract, load, transform, and serve business data. In short, we help data scientists create and manage data pipelines with only a few clicks. As part of our installation process, we deploy a set of micro-services within our customers’ VPCs to help them with their data engineering workloads. These services might, for example, fetch data from a database like Postgres and load it into a warehouse like Redshift, Snowflake, or Athena. Internally, our services communicate via events, which we call “data events”. In this blog post, we will first explore what these…
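The post itself describes these “data events” in detail; as a purely illustrative sketch (the topic name and payload fields below are assumptions, not Datacoral’s actual schema), an event-driven hand-off between micro-services could look like publishing a small JSON event to an SNS topic that downstream services subscribe to.

```python
import json
import boto3

# Hypothetical example: the topic ARN and payload fields are assumptions for
# illustration, not Datacoral's actual "data event" schema.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:data-events"

# A minimal event announcing that a load step finished, so that downstream
# transformation services subscribed to the topic can react to it.
data_event = {
    "event_type": "load.completed",
    "source": "postgres-connector",
    "destination": "redshift",
    "table": "public.orders",
    "partition": "2020-01-01",
}

sns = boto3.client("sns")
sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(data_event))
```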


