The Cinchapi Data Platform
The Cinchapi Data Platform (CDP) was created to make it easier for developers and analysts to manage and work with real-time or legacy data and to access actionable analytics.
The CDP eliminates much of the traditional, time-consuming preparation work typically needed to develop data-driven applications or to surface interesting insights.
Instead, we offer developers an ad hoc approach that leverages machine learning and natural language processing. The CDP’s Ask, See, Act workflow has developers working with relevant data in minutes instead of weeks.
The Cinchapi Data Platform was purpose-built to make it easy to work with disparate data, to find otherwise hidden relationships among those sources, and to allow data scientists, analysts, and developers to “rewind time”. Much like a DVR for data, once those relationships are uncovered, it is possible to go back and see how they were established and how they evolved over time.
The foundation of the Cinchapi Data Platform stack is the Concourse Database (ConcourseDB). The ConcourseDB project was founded by Cinchapi, and we continue to maintain it.
ConcourseDB offers a unique array of features, including automatic indexing, version control, and distributed, high-performance ACID transactions. It combines the benefits of graph databases and schema-less, document-oriented databases in one system, and was built from the ground up to offer flexibility and scalability with minimal tuning. Simplicity and ease of use are first-class design goals.
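The version-control idea behind the “DVR for data” can be sketched with a toy versioned store: every write is kept as a timestamped version, so a read can be replayed as of any past moment. This is an illustrative sketch only, not ConcourseDB’s actual API; the class and method names are hypothetical.

```python
import time
from collections import defaultdict

class VersionedStore:
    """Toy record store (not ConcourseDB's API): every write is kept as a
    timestamped version, so reads can be 'rewound' to any past moment."""

    def __init__(self):
        # record id -> key -> list of (timestamp, value) versions, in write order
        self._history = defaultdict(lambda: defaultdict(list))

    def set(self, record, key, value, at=None):
        # Append a new version instead of overwriting the old one.
        ts = time.time() if at is None else at
        self._history[record][key].append((ts, value))

    def get(self, record, key, at=None):
        # Return the value as of time `at`; latest version if `at` is None.
        versions = self._history[record][key]
        if at is None:
            return versions[-1][1] if versions else None
        past = [value for ts, value in versions if ts <= at]
        return past[-1] if past else None

# "Rewind time": inspect a value from before a later overwrite.
store = VersionedStore()
store.set(1, "status", "pending", at=100.0)
store.set(1, "status", "shipped", at=200.0)
print(store.get(1, "status"))            # latest value: shipped
print(store.get(1, "status", at=150.0))  # value as of t=150: pending
```

A real system layers indexing, transactions, and durability on top, but the core record is the same: an append-only history rather than a single mutable value.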
Impromptu is a natural-language, real-time analytics engine that enables anyone to visualize trends and get intelligence from any data source on demand. Just ask questions with conversational queries and get results. Need to drill down? No problem – ask follow-up questions to refine your results.
Cinchapi’s Sponge is where all connected data sources flow to be integrated into the Concourse Database. It uses machine learning to make sense of disparate, decentralized data sources, greatly reducing data cleanup and preparation.