Cinchapi Releases Beta Version of Data Platform Featuring Machine Learning and a Natural Language Interface to Explore Any Data Source in Real-Time

The Cinchapi Data Platform allows data scientists and analysts to dispense with data prep, making data exploration and discovery conversational and actionable.

ATLANTA, GA, March 6, 2017 – Delivering on its promise to take enterprise data from cluster to clarity, Atlanta data startup Cinchapi today announced the beta launch of its flagship product, the Cinchapi Data Platform (CDP).

The Cinchapi Data Platform is a real-time data discovery and analytics engine that automatically learns as humans interact with data and automates their workflows on-the-fly. Cinchapi’s data integration pipeline connects to disparate databases, APIs and IoT devices and streams information to the foundational Concourse Database in real time. Data analysts can then use the Impromptu application to perform ad hoc data exploration using a conversational interface.

The CDP’s analytics engine automatically derives additional context from data and presents the most interesting trends through beautiful visualizations that update in real-time. These visualizations can also be “rewound” to show how data looked in the past and evolved over time – even if the data has been deleted. The CDP’s automated machine intelligence empowers data analysts to immediately explore data using natural language and drill down by asking follow-up questions.

Compared to conventional data management, data teams can expect to shave 50% or more off the time spent on their analytics tasks. Ostensibly a data management platform, the CDP is ideal for anyone looking to explore decentralized or disparate data in search of previously hidden relationships. No matter the nature of the data source – be it any combination of unstructured IoT data, industry standard frameworks, proprietary data, or legacy sources – interesting relationships, patterns, and anomalies are exposed in just a few minutes.

Just as powerfully, the Cinchapi Data Platform’s underlying database, Concourse, writes and stores definitive data across time. Like a DVR for data, users can “rewind time” to specific points in the past. They can also press play to watch as vivid visualizations illustrate how these newly discovered insights were created and how they evolved over time.

“From day one, the Cinchapi vision has been to deliver ‘computing without complexity,’” explains Cinchapi CEO and founder Jeff Nelson. “I’ve worked with data my entire career and have been frustrated by how much of my time has been spent integrating and cleaning up disparate or decentralized data before being able to explore trends or to begin coding. We knew that by leveraging machine learning, the Cinchapi Data Platform would eliminate the drudgery of data prep and instantly expose the most interesting and relevant data to use or to investigate more fully.”

The End of Data Prep and Cleanup

Ask anyone who works with data, and they will tell you that the greatest impediments are that there is too much of it and that, often, the data is messy. In other words, before an analyst can get insights from data, she has to sift through all of it to see what she has. She has to determine what data is relevant to the task at hand, and then see how that might relate to other data points. This data prep and cleanup process can add weeks or months to a project.

As Big Data grows ever larger with data generated by the Internet of Things, it’s a problem which will only increase in scale and complexity. BusinessInsider.com predicts that by 2020, 24 billion IoT devices will be connected to the internet. That works out to about three IoT devices for every person on the planet. Each of these devices will be generating “messy data”, as there is no standard for what IoT data should look like.

To solve this growing problem, the Cinchapi Data Platform uses machine intelligence to comprehend data, regardless of the source or the schema. It then looks for relationships, patterns, or anomalies between otherwise decentralized, disparate data stores. The CDP was also purpose-built neither to impose nor to rely upon any specific data schema.

This makes the CDP the ideal platform for working with data sources which lack a coherent structure, like IoT data or undocumented legacy or proprietary data. Of course, the Cinchapi Data Platform also works with industry-standard SQL and NoSQL databases, including Oracle.

A Simple, Three-Step Workflow

The Cinchapi Data Platform workflow consists of three simple steps: Ask, See, and Act.

Step One, ASK: Once connected to the desired data sources, the first step is to simply ask a question using common English phrases. There is no need to master cryptic data queries in an effort to “solve for x”. Instead, users can ask a question using everyday, conversational phrases. Should the user need a more specific answer, all she needs to do is ask a follow-up question. With use, the CDP’s machine learning allows the platform to better understand the context of the questions asked, further enhancing the user experience.

Step Two, SEE: After questions are asked, next come the results. Built into the CDP is a powerful analytics engine which provides hidden insights and customized visualizations. This allows users to see relationships and connections which were previously obscured. Even better, with these new relationships now exposed, users can “rewind time” to see how those relationships have evolved and impacted operations in the past.

Step Three, ACT: With the results available, users can then act on the information presented. A data analyst can automate actions with just a few button clicks. A logistics company might find enhanced efficiencies in route planning which could be shared with the fleet in real-time. A CSO at a bank might use the CDP’s automation capabilities to trigger alerts to a security team when potentially fraudulent activities are detected. Frankly, the possible use cases are endless.

CDP features include:

  • Concourse Database – An enterprise edition of Cinchapi’s open source database warehouse for transactions, search, and analytics across time. This is where streamed data is stored.
  • Sponge – A real-time change data capture and integration service for disparate data sources.
  • Impromptu – A real-time ad hoc analytics engine that uses machine intelligence for workflow automation.

About Cinchapi, Inc.

Atlanta-based Cinchapi is transforming how data scientists, analysts, and developers explore and work with data. The Cinchapi Data Platform (CDP) and its Ask, See, and Act workflow were purpose-built to simplify data preparation, exploration, and development. Its natural language interface, combined with machine learning and an analytics engine, makes working with data conversational, efficient, and intuitive. Imposing no schema requirements, the CDP streams, comprehends, and stores definitive data generated in real-time by IoT devices as well as by conventional, legacy, and proprietary databases. Learn more about the Cinchapi Data Platform and its #AskSeeAct workflow at https://Cinchapi.com/

###

Rewind Time with the Cinchapi Data Platform

Love it or hate it, the singer Cher had a hit single with her 1989 song “If I Could Turn Back Time”. While the song may now be stuck in your head, the truth is that developers who work with data now have the ability to rewind time, at least from a data perspective.

The Cinchapi Data Platform (CDP) allows developers to stream and store decentralized or disparate data from any connected data source. The foundation of the CDP is the open source Concourse Database, created and maintained by Cinchapi.  Since Concourse is a strongly consistent database, it stores definitive data values from connected data sources.

With versioning included, even if the original source data has been overwritten, lost, or changed, developers and analysts always have the ability to go back and see what the values were at any specific moment in time.
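
To make that concrete, here is a minimal sketch of version-aware reads, assuming a locally running Concourse server and the open source Python driver. The verbs (set, get, audit) follow Concourse’s documented vocabulary, but treat the exact signatures as assumptions and check the driver documentation before relying on them.

```python
# Minimal sketch of versioned reads against Concourse; assumes a local
# server and the Python driver, and exact signatures may differ.
from concourse import Concourse

client = Concourse.connect()  # assumed default: localhost:1717

record = 1
client.set(key='temperature', value=38, record=record)  # original value
client.set(key='temperature', value=45, record=record)  # overwrite it

# A present-time read reflects the overwrite...
print(client.get(key='temperature', record=record))  # 45

# ...but the full change history survives, so the past can be "rewound".
for timestamp, change in client.audit(key='temperature', record=record).items():
    print(timestamp, change)
```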

The Benefit of Traveling Back in Time

Data is fast, and data is often messy. By that we mean that data points change and evolve from moment to moment. What was true a minute ago may no longer be true now. Worse, typically data is siloed, so it becomes increasingly difficult to see relationships between decentralized data sources.

In other words, organizations have an enormous amount of data which is constantly morphing in real time, and the sources of the data are not connected to each other. That makes finding relationships between data sets a tedious and time-consuming task. Depending upon the data, we could be talking weeks or even months of data prep and cleanup just to see what is relevant and how the data sets relate to each other.

By leveraging the power of machine learning, the CDP can make short work of understanding what your data means, and it can uncover interesting relationships between otherwise siloed data.

That’s pretty cool, but it gets even better.  With these previously hidden relationships now exposed, the data developer, analyst, or scientist can now explore aspects of the relationship at any point in time.

Think of this as a DVR for data. Sports fans will often rewind a play to see it again – they want to see how the play developed, who did what right, and who did what wrong to lead to a score or a loss of possession.

Similarly, the Cinchapi Data Platform allows users to rewind data, “press play” and then watch as that data evolves to its current state. Just like a DVR, users can slow things down, fast forward, or pause at specific points in time.

This could prove valuable for a vast array of use cases. Banks and credit card issuers might use this to detect credit card fraud, and to prevent future fraud. A retailer might use it to better understand why demand for specific products rises and falls. A logistics company might use this to determine more efficient transportation routes and methods.

The Visualization Engine

Out of the box, the CDP lets a developer see relationships between her connected data sources. It doesn’t matter what the schema or the source of that data may be, because the platform doesn’t impose any schema on her. She can work with financial data, IoT-generated data, data from operations and logistics, or virtually any source to which she has access via a direct connection or an API.

Good stuff to be sure, but looking at a glorified spreadsheet with values changing over time can be a little off-putting. This is why a powerful visualization engine is included as a core component of the CDP.

Visualizations help people to see the relationships in data. But as we mentioned earlier, typically the data in one data source is independent of other sources. Vendor data might be in one silo, customer data in another, with operations and logistics in still another silo.

Factor in social media data, news events, and a host of other data, and the list of potential data silos can be mind-boggling as the size and scope of a business grows. Yet as the amount of data grows, it becomes increasingly critical to see the very relationships which could be impacting productivity, sales, operations, and much more.

It’s not just the positive things that can impact a business. We’ve all heard stories of retailers and other businesses which found out well after the fact that they had been hacked, or that fraud had occurred.

This doesn’t just hurt the bottom line; it can also have a profoundly negative effect on the reputation of a business. When retailers like Target or restaurant chains like Wendy’s had customer information stolen, how much potential business did they also lose because customers were fearful of their own information being exposed?

It’s impossible to put a specific dollar value on bad publicity, but we will suggest that there is a significant cost factor when customers shy away from a company because they fear becoming the next victim.

Data is big, and it’s only getting bigger. It’s also increasingly messy, in that not all data is relevant to a specific problem or opportunity. Having the ability to uncover relationships that were hidden is compelling enough. But being able to rewind the data and see how these relationships looked in their nascent stages can benefit anyone with an interest in data forensics.

Cher probably wasn’t thinking about data when she wondered what would change if she could turn back time. But with the Cinchapi Data Platform, anyone working with data can turn back the calendar to see when and how data relationships were established, and how they then changed and morphed over time.

How Can Your Business Leverage IoT Data?

In a January 2017 TechTarget article, Executive Editor Lauren Horwitz wrote that companies are struggling to work with and manage the data generated by IoT (Internet of Things) devices. Ms. Horwitz writes:

“While verticals like manufacturing are more business process-driven and have been able to integrate IoT devices and data into their operations, other industries are still struggling with the volume and velocity of the data and how to bring meaning to it.”

The Challenges With IoT Data

Truthfully, Ms. Horwitz is not wrong. The amount of data being produced by the Internet of Things is mind-boggling. Business Insider’s BI Intelligence research team released a report this past August revealing that in 2015, there were roughly 10 billion devices connected to the internet. Granted, that number appears to include traditional smart devices like tablets and phones.

But chew on this: in that same report, BI Intelligence predicts that by 2020 there will be a total of 34 billion devices connected, with 24 billion of those being what we would call IoT devices – the remaining 10 billion being our trusty mobile devices and computers.

Think about that for a moment – at the time of this writing, the current global population is estimated to be a little under 7.5 billion people. That means by 2020, there will be about three IoT devices for every man, woman, and child on the planet. And every single one of these devices will be pumping out data in some form.

There Is No Standard For IoT Data

One of the inherent problems facing anyone wishing to work with data generated by these devices is that, at present, there isn’t a definitive standard for IoT data. It’s all ad hoc. It’s like the Tower of Babel myth, but with data instead of languages. The data, at least in its native form, is messy.

In her article, Horwitz quotes Brent Leary, a principal at CRM Essentials. He says:

“There is a lot of data coming at these companies, from multiple places. They have to figure out, ‘How do we get it all, aggregate it, analyze it — and what are we looking for?’ And you’re trying to do that in as near real time as possible. The technology may be there, but the culture may not be; the processes may not be in place. And that is just as critical to the success of IoT as the technology itself.”

Leary hits the nail on the head. The real value in IoT isn’t just the data; it’s being able to DO something with the data – ideally in real-time. After all, think of a logistics company with a fleet of refrigerated trucks which are IoT-capable. It wouldn’t do much good to learn that the temperature in the trucks exceeded safe norms a week after the fact. By then, the data is useless, and the loads in question would be losses.
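
To make the real-time point concrete, here is a small, generic sketch of the kind of threshold check that only has value while the data is fresh. This is not CDP code; the readings, threshold, and alert mechanism are invented for illustration.

```python
# Generic sketch of a real-time threshold alert on streaming sensor data.
# The readings and threshold are invented; this is not CDP code.
from datetime import datetime, timezone

SAFE_MAX_TEMP_F = 40.0  # hypothetical ceiling for a refrigerated load

def check_reading(truck_id, temp_f):
    """Alert the moment a reading exceeds the safe range."""
    if temp_f > SAFE_MAX_TEMP_F:
        stamp = datetime.now(timezone.utc).isoformat()
        print(f"[{stamp}] ALERT: {truck_id} at {temp_f}F exceeds {SAFE_MAX_TEMP_F}F")

# Simulated stream of readings; a real pipeline would consume these from
# the trucks as they arrive, not after the fact.
for truck_id, temp_f in [("truck-7", 36.5), ("truck-7", 41.2), ("truck-9", 38.0)]:
    check_reading(truck_id, temp_f)
```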

That’s hardly an isolated scenario. A manufacturer would be interested in data which could indicate that a component on an assembly line is nearing failure. An aviation outfit would be wise to monitor critical items on their fleet of aircraft. The potential uses for IoT span these industries as well as healthcare, military, utilities and more. But again, the problem isn’t the hardware – it’s managing the data generated by the Internet of Things.

The data management problem isn’t limited to any specific use case or industry. The problem really is being able to acquire the data, make sense of the data, and then act on what these devices are telling us in real-time. But the 800-pound gorilla in the room remains: “How can we make sense of IoT data?”

The Cinchapi Data Platform

From the moment that Cinchapi founder Jeff Nelson first came up with the concept of Cinchapi, he was keenly aware that working with disparate, or decentralized, data was a growing problem.

Leaving IoT aside for just a moment: as a developer himself, Jeff was constantly spending time doing the tedious data prep and cleanup required to understand which aspects of the data in question were relevant, and to learn what relationships might be hidden when working with multiple data sources.

Jeff knew that there had to be a better way, so he began developing a platform which could do a number of critical things. He wanted a data platform which could work with any source, regardless of schema or structure. He also wanted to find a method to use technology to do the heavy lifting of data prep and cleanup. Next was the desire to make querying data more intuitive.

The result was what would become the core pieces of the Cinchapi Data Platform (CDP). With it, developers can connect, stream, and store any available data source. It doesn’t matter a whit if the data is structured or not. It can work with traditional relational databases, of course, but it isn’t limited to such.

By using machine learning, once the data sources are connected – either directly or with the CDP’s API “Sponge” component – the platform begins to understand what each source is presenting. It also uncovers and establishes relationships between these sources. In other words, it’s doing the data prep.

With the data and relationships beginning to take shape, the next piece of the desired functionality was to make data conversational. To that end, the Cinchapi Data Platform features a natural language processing (NLP) interface. Instead of creating a series of cryptic queries in an effort to effectively “solve for X”, Jeff knew it would be much easier and far more intuitive if the developer or user could just ask questions with common phrases.

Jeff also knew that he needed a strongly consistent database for all of this, ideally one capable of providing ad hoc analytics in real-time, but which could also allow the ability to “rewind time” once relationships had been identified. Unable to find a solution to suit his needs, he began work on the open source Concourse Database.

Concourse is strongly consistent, which allows developers to work with definitive data. By that, we mean data that has to be accurate at all times – be it in real-time, or in the past. Jeff likens the ability to rewind time to a “DVR for Data”. By that, he means that much like someone watching a hockey or basketball game in real time, users have the ability to pause and rewind any play to see more clearly how a goal was scored or a basket was made.

To carry that metaphor to data, imagine that you have just uncovered a relationship between multiple data sources – one wholly new to you, but absolutely interesting. With your “Data DVR”, you could go back in time and see what was happening in the context of this newly discovered relationship.

If you want to kick the tires of Concourse, have at it. It is freely available at ConcourseDB.com. Heck, we won’t even ask you to fill out a form. We’re big advocates of Open Source: we want folks to use the database, and we invite those interested to become contributors to the project.

That said, while Concourse is a fantastic operational database with ad hoc analytics, do be aware that only the full CDP adds all of that extra goodness: the machine learning, the natural language interface, the visualization engine, and assorted other goodies which you won’t get with Concourse solo.

The Internet of Things and the Cinchapi Data Platform

Now let’s circle back to IoT and the data produced by it. As we mentioned earlier, there is no standard for IoT data. Any manufacturer of a device may deliver data in virtually any fashion they deem desirable. There isn’t a set way of producing the data. Sure, some devices may be easier to work with, and there might even be documentation explaining how the manufacturer suggests leveraging the data.

But with 20 billion devices coming online within the next three years, can you imagine trying to master the data produced by all of them? Yeah. That’s why aspirin and antacids always seem to be found in the break room.

All kidding aside, there is a better way. Just as how the Cinchapi Data Platform can make short work of traditional data sources, it is ideally suited to work with IoT data. Remember, the CDP doesn’t impose any schema requirements on the developer. As long as data can be connected to it, the CDP streams and stores the data while machine learning makes sense of it all. That absolutely includes IoT data.

If your organization sees IoT as a must-have but cannot figure out how to work with the data generated from IoT (as well as all of your other data sources – even those proprietary databases that have been in production since the dawn of time), we’d love to show you what the Cinchapi Data Platform can do.

Click here to watch a 60-second overview video; then, if you want a full-on demonstration, fill out the form and we can set something up.

Building a Recommendation System for Data Visualizations

This past year, I’ve been working as a software engineer at Cinchapi, a technology startup based in Atlanta. The company’s flagship product is the Cinchapi Data Platform (CDP). The CDP is a platform for gleaning insights from data, through real-time analytics, natural language querying, and machine learning.

One of the more compelling aspects of the platform is that it provides data visualizations out of the box. The visualization engine is where I have focused my energies, developing a recommendation system for visualizations.

The Motivation

With so much data being generated by smart devices and the Internet of Things (IoT), it’s increasingly difficult to see and understand relationships and correlations across these disparate data sources – especially in real-time. At the same time, collecting insufficient amounts of data may cause you to miss important problems entirely.

This is where the power of data visualization comes into play. On the surface, it’s a simple transformation that converts raw, unintelligible data into actionable, intuitive insights. Simple, of course, is relative to the eye of the beholder.

[Figure: a data visualization – maybe not the best example]

After all, there are an abundance of plots and graphs and charts and figures out there, each of which is suited to a particular kind of dataset. Do you have some categorical data indexed by frequency? A bar chart might be the best method to visualize it. Bivariate numerical data without a strict functional relationship, however, might best be seen as a scatter plot.

That pretty much outlines the problem – how can you get a visualization engine to determine what type of visualization is appropriate for a given set of data?  That’s what I needed to determine, and I thought the process of getting there would make for an interesting article.

Understanding the Problem

The point of all of this is to help users better understand what their data means, and to do so with visualizations. I knew that I needed a recommendation system – something that would offer up the visualizations which would best show what the data really means. Recommendation systems are a highly researched and published topic, and have seen widespread implementation. Consumers see examples of recommendation systems in products from companies like Google, Netflix, Amazon, Spotify, and Apple.

These companies implement their systems to solve the generalized problem of recommending something (whatever it may be) to the user. If this sounds ambiguous, it’s because it is. The specifics of a recommendation system often rely on the problem being solved, and differ from one use case to the next. Netflix, as an example, recommends movies which might appeal to the user. Amazon may do that as well, but it also recommends other products related to the movie. A baseball might be displayed when looking at the movie “Field of Dreams”, as an example.

Some recommendations are dynamic while others are static. One is not necessarily better than the other, but it is useful to understand what sets them apart.

Dynamic Recommendation

Google search uses a dynamic recommendation system, as do Netflix, Amazon, and Spotify. These systems collect data generated by a user as they search for items or make a purchase. Essentially, these companies are building profiles of each user. The profiles factor in the prior transactions and behavior of the user and become more refined over time and usage. These profiles can then be compared to similar profiles of other users, which allows for recommendations which are increasingly relevant.

For example, recently I was researching Apache Spark on Google. As I began to type the letters ‘ho’, Google’s search auto-completion feature provided relevant phrases which begin with those letters:

[Figure: Google search recommendations based on the user’s profile and history]

As you likely know, Hortonworks is a company focused on the development of other Apache platforms, such as Hadoop. Google understands the topic I’m likely interested in via my search history, and from that it offers up relevant search options related to my prior search on Apache Spark.

Following that search, I later decided to look up a recipe for Eggs Benedict. When I next typed the same ‘ho’ letters, Google’s auto-completion, now informed by that earlier search for Eggs Benedict, offered new suggestions to complete my query:

[Figure: contextual recommendations]

Google’s system is dynamic in the sense that the user’s profile is evolving as they continue to use the product. Therefore, the recommendation evolves to suit the newest relevant information.

Static Recommendations

On the other hand, the system employed by Apple’s Predictive Text can be described as largely static. Apple’s system can process user behavior and history; however, it does not use these (to a large extent) to influence its recommendations.

For example, observe the following stream of messages and the Predictive Text output:

[Figure: trying to get Siri’s attention]

Unlike the example from Google search earlier, it seems as if Apple’s iOS Predictive Text does not completely base recommendations on user history. I say “completely”, because Predictive Text actually suggested ‘Siri’ after I had typed ‘Hi Siri’ twice, but then it reverted to a generic array of predictions after I sent the third request.

It is extremely important to note here that Predictive Text is in no way worse than Google’s search suggestions. The two are simply solving completely different problems.

Google Search

What Google Search offers is a way to improve the search experience for users by opening them to new, yet related, options. After looking up that recipe for Eggs Benedict, I was presented with recipes for home fries, poached eggs, hollandaise sauce, and more. This kind of system, building on the user’s cues and profile, makes perfect sense.

Predictive Text

The goal of Predictive Text is to provide rapid, relevant, and coherent sentence construction. Many individuals use abbreviations, slang, improper grammar, and unknown words when texting. To train a system to propagate language like that would lead to a broken system.

The user can be unreliable – they might enter “soz” instead of the proper “sorry”. We wouldn’t want a predictive text system to mimic these bad habits. Instead, the predictive text algorithm should offer properly spelled options, and it should employ proper grammar when it predictively completes phrases.

The User’s Behavior Can Be Misleading

For the sake of this blog, imagine a user who has been creating pie charts with her data. Time and time again, she visualizes her data with pie charts. Does that mean that our visualization engine should always present her with pie charts? Absolutely not. What our user needs is an engine which will examine her data and then suggest the best method to visualize it, regardless of past behavior.

Just because someone has used pie charts for earlier sets of data, it does not follow that they should always use pie charts for any and all data sets.

In other words, the past behavior of the user and her apparent love of pie charts should not be the determining factor as to what type of visualization should be used. Instead, we’ll use static recommendations based upon the data in question, and then employ the best visualization to present that data.

The Item-User Feature Matrix

It’s a mouthful, but it’s an important concept. Let’s back up a bit.

As mentioned earlier, a common way to produce recommendations is to compare the tastes of one user to other users. Let’s say User Allison is most similar to User Zubin. The system will determine the items that Zubin liked the most which Allison has yet to see, and then recommend those. The issue with this approach for our use case is that there is no community of users from which profiles can be compared.

Alternatively, recommendations can be made on the basis of comparisons between the items themselves. Let’s say Allison loves a specific item; in this case, she loves peaches. Along with other fruits, peaches are given their own profile, through which they are quantifiably characterized across several ‘features’. These features could include taste, sweetness, skin type, nutrition facts, and the like.

As far as fruits are concerned, nectarines are similar to peaches, the most significant difference being the skin type: peaches have fuzz, while nectarines have a smooth skin, devoid of any fuzz. Since Allison likes peaches, she would probably like nectarines as well. Therefore the system would recommend nectarines to Allison.

Recommendations of this type work for more than fruit. Think about movies, as an example. While most people enjoy a good movie, “good” is relative to the viewer. Someone who loves “Star Wars” will likely enjoy “Star Trek”. But they may not like the film “A Star is Born”. So, on what would the system base its movie suggestions? The word “star” helps, but it isn’t enough.

Enter the Matrix

[Figure: example of an item feature matrix]

The figure above is called an item feature matrix, in which each item offered is characterized along several different features. This is closer to what we want, but it’s still not perfect. We can’t base our recommendations solely on what the user likes, since the user may not be right. We must incorporate another dimension.

[Figure: example of a user feature matrix]

The above matrix is called a user feature matrix, as it depicts the preferences of each user along the same features as the items.

Combining the two concepts, we have two matrices, one for characterizing the user and one for characterizing the items. When combined, these are considered the item-user feature matrix.

At Cinchapi, we don’t characterize the user’s preferences, but we do leverage their data within the ConcourseDB database. Further, we don’t characterize items by the number of characters, action scenes, length, and rating, but by a series of data characteristics relating to data types, variable types, uniqueness, and more.

This provides a framework to quantifiably determine the similarity between the user’s data and possible visualizations. This is the aspect of the Cinchapi Data Platform which we call the DataCharacterizer. As the name implies, it serves to define the user’s data across some set of characteristics. But how do we characterize the items, which in the CDP’s case are the actual visualizations? We do so by employing a heuristic.

Heuristics

Considering the case of Predictive Text, there is some core ‘rulebook’ from which recommendations originate. For a language predictor in general, this may take the form of an expression graph or a Markov model. When the vertices are words, a connection represents a logical next word in a sentence, and each edge is weighted by a certain probability or likelihood.

[Figure: example of an expression graph]
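
As a rough illustration of the Markov-model idea (a toy sketch, not Apple’s actual algorithm), a bigram predictor counts which words follow which, then greedily follows the heaviest edge. The corpus here is invented for the example.

```python
# Toy bigram Markov model for next-word prediction. The corpus and the
# greedy "heaviest edge" rule are invented for illustration.
from collections import Counter, defaultdict

corpus = "the dog chased the cat and the cat chased the mouse".split()

# Count how often each word follows each other word (the weighted edges).
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word):
    """Follow the highest-weight edge out of `word`, if any."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Repeatedly taking the top suggestion walks the graph and quickly falls
# into a loop ("cat and the cat and the..."), just like the word salad
# produced by repeatedly tapping a Predictive Text suggestion.
word = "the"
for _ in range(6):
    word = predict_next(word)
    print(word, end=" ")
```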

This could explain why repeatedly tapping one of the three Predictive Text suggestions on an iOS device produces something like this as a result of a cycle in the graph:

[Figure: a nonsense cycle from Predictive Text suggestions]

That word salad isn’t really going to do much for us, even if it is possible to read it. Moving to our need – a visualization engine – we’re not looking to complete a sentence. There is no visualization ‘rulebook’ on which a model can be trained, at least not of a size or magnitude that would produce meaningful results.

This is where the heuristic process comes into action. Loosely defined, a heuristic is an approximation. More formally, it is an algorithm designed to find an approximate solution when an exact solution cannot be found.

This formed the basis of my recommendation system and resolved the problem of having incomplete or unreliable data from which to learn. I developed a table where the rows represented the same features as in the matrices above, and the columns represented different visualizations. Each visualization was then characterized based on the types of data that it would best represent.

Presently we call this aspect of the Cinchapi Data Platform a HeuristicTable. For each potential visualization, the HeuristicTable holds pre-defined, static characterizations across the same set of characteristics as the user’s data.
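
To make the idea concrete, here is a minimal sketch of such a table. The feature names and weights below are invented for illustration; they are not the CDP’s actual characteristics.

```python
# Hypothetical heuristic table: each visualization is characterized as a
# vector over the same features used to characterize the user's data.
# Feature order: [fraction of numbers, fraction of strings, uniqueness,
# continuity]. All weights are invented for illustration.
HEURISTIC_TABLE = {
    "bar_chart":    [0.5, 0.5, 0.3, 0.1],  # categorical labels plus counts
    "scatter_plot": [1.0, 0.0, 0.9, 0.9],  # two continuous numeric variables
    "pie_chart":    [0.5, 0.5, 0.2, 0.0],  # few categories, parts of a whole
    "line_chart":   [1.0, 0.0, 0.7, 1.0],  # ordered, continuous series
}
```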

Putting the Pieces Together

Much of the system is composed of these components. I’m only providing a 30,000-foot view of the DataCharacterizer. In short, it measures a series of characteristics of the user’s data, namely the percentage of Strings, Numbers, and Booleans. It also factors in whether or not there are linkages between entries, whether or not the data is bijective, the uniqueness of values, and the number of unique values (dichotomous, nominal, or continuous).
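
As a back-of-the-envelope sketch of that idea (a simplified stand-in, not the actual DataCharacterizer), a characterizer might reduce a column of values to a vector over the same invented features as the table above:

```python
# Toy characterizer: reduce a column of values to a feature vector over
# [fraction of numbers, fraction of strings, uniqueness, continuity].
# The features and formulas are simplified stand-ins for the real thing.
def characterize(values):
    n = len(values)
    numbers = sum(isinstance(v, (int, float)) and not isinstance(v, bool)
                  for v in values)
    strings = sum(isinstance(v, str) for v in values)
    uniqueness = len(set(values)) / n
    # Crude continuity proxy: all-numeric data with many distinct values.
    continuity = uniqueness if numbers == n else 0.0
    return [numbers / n, strings / n, uniqueness, continuity]

print(characterize([3.1, 4.7, 2.2, 9.0]))    # numeric: [1.0, 0.0, 1.0, 1.0]
print(characterize(["red", "blue", "red"]))  # strings: [0.0, 1.0, ~0.67, 0.0]
```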

Treating a particular characterization as a vector, a cosine similarity function is executed on the user’s data and each column of the HeuristicTable. This measures the similarity between the two vectors on a scale from zero to one.

From this point, it’s a matter of sorting the results in descending order of similarity and the recommendation set is ready.
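
In the same toy terms as above, the ranking step is only a few lines. The table entries and data vector below are the invented examples from the earlier sketches, not the CDP’s real values.

```python
# Rank visualizations by cosine similarity between the data's feature
# vector and each visualization's heuristic vector. All numbers are the
# invented examples from the sketches above.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

table = {
    "scatter_plot": [1.0, 0.0, 0.9, 0.9],
    "pie_chart":    [0.5, 0.5, 0.2, 0.0],
}
data_vector = [1.0, 0.0, 1.0, 1.0]  # a numeric, continuous column

# Sort in descending order of similarity; the result is the recommendation set.
ranked = sorted(table, reverse=True,
                key=lambda name: cosine_similarity(data_vector, table[name]))
print(ranked)  # ['scatter_plot', 'pie_chart']
```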

Below is an overview of the system’s design:

[Figure: the Cinchapi Data Platform visualization recommendation system design]

Closing Thoughts

Recommendation systems come in all shapes and sizes. Although the problems seem similar from a 30,000-foot view, each use case requires a unique solution to provide the best experience for users.

This was how I built a recommendation system for visualizations from unreliable data, and I hope it inspires some new ideas.

To see an example of how Cinchapi’s data visualizations actually work, there is a 60-second video which shows how visualizations can uncover relationships.

Near Time Data Isn’t Real Time Data

There has been considerable buzz about the Internet of Things. IoT is certainly a hot space, with Gartner saying that by 2020 as many as 21 billion “things” will be in use.

Obviously, 21 billion is a large number. With the 2020 global population predicted to be 7,716,749,042 people, that works out to nearly three devices for every person on the planet. So, yes, this is huge.

That said, it seems far too much focus has been placed on the devices, when the real value of IoT is in the data generated by these things. “Big Data” doesn’t really do justice to the massive amount of data which will be generated by 21 billion devices.

Of course, predictions are just that – predictions. There is no guarantee that these will be the actual numbers in 2020, but even if Gartner is off the mark by 50%, the fact remains that there will be unprecedented amounts of data generated by IoT. The problem won’t be the number of devices; the problem will be to make use of this data in real-time.

Real-Time or Near Time?

IoT-enabled devices in the consumer space may get a lot of love and a lot of ink – think IoT thermostats, refrigerators, and other appliances – but IoT has applications outside the home which could prove to be much more interesting. In 2013, Cisco suggested that “the list is endless”, but would include “…tires, roads, cars, supermarket shelves, and yes, even cattle.”

With that in mind, the use cases for IoT are equally endless. A municipality might be interested in IoT-enabled traffic signals combined with data from IoT-enabled roads. A logistics and supply chain company could leverage that municipality’s data and combine it with data generated from its own IoT-enabled fleet and equipment to monitor vehicle locations, inventory, and warehouse space availability. The supply chain provider could then offer data to its retail customers, where it is processed and analyzed along with many other data sources to better predict supply and demand.

So IoT is everywhere, and it’s all producing data. The problem is that there is no set data standard for IoT-enabled devices. Manufacturers can deliver data as they see fit. This makes it challenging to work with IoT data at all, and even more so to uncover interesting aspects of the IoT data which could relate to other data sources.

For example, in the logistics and supply chain space, with a fleet of connected trucks carrying loads of consumer goods, IoT-enabled RFID readers can work in conjunction with GPS geofencing data to cross-reference where, when, and what items might be removed from a truck at any given point in time or location.

Deviations from the approved locations or times for any item might warrant an immediate alert. Cross-referencing data from GPS tracking with the data from IoT-enabled devices is just the beginning. Don’t forget that there may be mitigating reasons for the deviations: data from real-time traffic sources and weather forecasters could provide an explanation as to why a truck veered from the approved route and delivery plan.
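
As a generic illustration of that cross-referencing (not CDP code – the coordinates, radius, and events are invented), a geofence check on an RFID unload event might look like this:

```python
# Generic geofence check for an RFID unload event: alert if cargo is
# removed outside an approved delivery zone. Coordinates, radius, and
# events are invented for illustration; this is not CDP code.
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula."""
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2 + math.cos(math.radians(lat1)) *
         math.cos(math.radians(lat2)) * math.sin(dlon / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

APPROVED_ZONE = (33.7490, -84.3880)  # hypothetical delivery site (Atlanta)
RADIUS_KM = 0.5

def check_unload(item_id, truck_lat, truck_lon):
    """Cross-reference an RFID removal event against the truck's GPS fix."""
    if distance_km(truck_lat, truck_lon, *APPROVED_ZONE) > RADIUS_KM:
        print(f"ALERT: {item_id} removed outside the approved zone")

check_unload("pallet-42", 33.7493, -84.3875)  # inside the geofence: silent
check_unload("pallet-43", 33.9000, -84.5000)  # far outside: alert
```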

Similarly, think of a power company monitoring a power grid. Should a power surge occur which could bring down multiple transformers, getting that information in real-time could allow the system to shut down the impacted area before an entire region goes dark. Does that sound like a stretch? In 2003, the United States and Canada suffered a massive blackout. In just 30 minutes, a 3,500-megawatt power surge shut down over 500 generating units at 265 power plants from New York City to Toronto, and as far west as Michigan.

While this happened in the pre-IoT days, it does highlight how an IoT-enabled power grid combined with a real-time data platform would go a long way toward minimizing the impact of an event like the one in 2003. It could also provide a layer of security against actors with a sinister agenda, like a foreign adversary or a terrorist organization.

Cinchapi is the Data Platform for Real-Time Data

This is why we are building the Cinchapi Data Platform (CDP), with a focus on working with real-time data emanating from disparate sources. The CDP can stream data from any connected source, including IoT-generated data, in real-time. It features a machine learning component which makes sense of data without the need for tedious data prep. Literally, as soon as the connected data begins streaming, developers can begin making ad hoc queries and uncovering interesting data. It doesn’t impose any schema on the developer, so working with multiple data sources and formats is a breeze.

Applications can be created to work with real-time data, allowing users to act in real-time, when it matters the most. As desired, automated responses to real-time incidents can be developed to do things like hitting the brakes on a bus before a tire blows out, or shutting down a section of a power grid before a system-wide failure occurs.

There are countless possible use cases where the Cinchapi Data Platform is ideally suited to work with real-time data, as well as with definitive data – data that has to be absolutely accurate at a specific time. That time may be real-time, or it could be a specific time in the past.

Does this sound interesting? If so, be sure to take a moment to view a 60 second video overview, and if you would like a deeper dive, register for a live demonstration of the CDP. Should you have any relevant thoughts about working with real-time data, please use the comment section below.

 

Can We Get Real-Time Analytics From IoT Generated Sources?

A new study from 451 Research indicates that the majority of IT professionals are clamoring for a solution which will offer real-time analytics from machine- and IoT-generated data, but 53% of those surveyed lack that functionality.

As reported by ZDNet:

Among the 200 survey respondents, there was a clear desire to analyze data as rapidly as possible. When asked specifically at which levels of speed they wanted to expand their use of machine data analytics, most respondents said ‘machine real-time’ speed (69 percent), compared with ‘human real-time’ (51 percent), and minutes, hours or days (29 percent).

About one-third of respondents (34 percent) said their existing machine data analytics offering doesn’t feature machine real-time analytics, while 53 percent said their current technology wasn’t even capable of human real-time analytics.

This is precisely what we are building with the Cinchapi Data Platform.

From the beginning, our goal has been to create a data platform which can stream disparate data in real-time, no matter the source or schema. So long as we can connect directly, or via an API, we can work with virtually any data source, and that absolutely includes real-time data generated from IoT devices.

So how do we do that? After all, data prep and cleanup is a massive time-suck. Data developers will tell you that one of the biggest challenges they face is making sense of disparate data – what does it mean?

We mitigate that problem by leveraging machine learning to make short work of the data cleanup. Literally, once a data source is connected, developers can begin making ad hoc queries of the data.

That by itself will save a developer a massive amount of time and effort, but we don’t stop there. We don’t insist on developing cryptic formulas in an effort to “solve for x”. Nah, we’re better than that.

One of our core beliefs is that we should strive to provide “computing without complexity”. To that end, the Cinchapi Data Platform features a Natural Language Processing (NLP) interface. That means instead of creating a host of complicated queries to explore the data, a developer can simply ask questions of the data.

The goal is to make data conversational. If a developer wants to drill down, all she has to do is ask follow-up questions. Pretty sweet.
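
As a purely illustrative sketch of that conversational flow (the CdpClient class and its ask() method are hypothetical stand-ins, not the CDP’s published API), the interaction might look like this:

```python
# Illustrative sketch of a conversational query session. CdpClient and
# ask() are hypothetical stand-ins, not the CDP's published API.
class CdpClient:
    """Hypothetical stand-in for a connection to the platform."""

    def ask(self, question):
        # A real client would hand the question to the NLP interface and
        # run it against the connected sources; this stub just echoes.
        print(f"Asking: {question}")
        return []

cdp = CdpClient()
cdp.ask("Which sensors reported readings above normal today?")
cdp.ask("How many of those are in the Atlanta warehouse?")  # follow-up keeps context
```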

But what about all of those real-time analytics? They’re included out of the box, and even better, a visualization engine takes those analytics and presents them visually. That’s right: real-time analytics and visualizations from multiple data sources – all with one simple-to-use data solution.

Want to see it all in action? Click here to view a 60-second overview, and if you like what you see, sign up for a much more in-depth live demonstration.