With the proliferation of sensors on everything from refrigerators to the clothes we wear, data is everywhere. The data explosion, however, is not limited to consumer goods. The industrial sector is also experiencing an “industrial data revolution”.
A major contributor to this proliferation is that sensors keep getting cheaper and smaller while growing more sophisticated. These characteristics make sensors an exponential technology, one that promises repeated doublings in price-performance over the short term.
Rivers of data
For example, in 2007 the average cost of an accelerometer sensor was $3. By 2014, the average was 54¢ [2]. In the near future, sensors will be everywhere. When imagining a train hauling cargo across the country, most of us visualize a simple machine: an engine, some boxcars, and lots of wheels. But GE’s latest locomotive has 250 sensors measuring 150,000 data points per minute [3]. These sensors are producing “Rivers of Data”.
Another example: a running Boeing jet engine produces 10 terabytes of operational information every 30 minutes. A four-engine jumbo jet can create 640 terabytes of data on a single Atlantic crossing. Multiply that by the more than 25,000 flights each day, and you begin to understand the impact that sensor- and machine-produced data can have on a BI environment [4].
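As a sanity check, the data rates quoted above can be reproduced with back-of-the-envelope arithmetic. The figures come from the article itself; the 8-hour Atlantic crossing time is an assumption for illustration.

```python
# Back-of-the-envelope check of the sensor data rates quoted in the article.

# GE locomotive: 250 sensors producing 150,000 data points per minute.
sensors = 250
points_per_minute = 150_000
per_sensor_hz = points_per_minute / sensors / 60  # points per sensor per second
print(f"~{per_sensor_hz:.0f} samples/sec per locomotive sensor")

# Boeing engine: 10 TB of operational data every 30 minutes.
tb_per_engine_hour = 10 * 2   # 20 TB per engine per hour
engines = 4
flight_hours = 8              # assumed Atlantic crossing duration
flight_tb = tb_per_engine_hour * engines * flight_hours
print(f"{flight_tb} TB per crossing")  # matches the 640 TB figure above
```

Per sensor, the locomotive numbers work out to roughly 10 samples per second, and 4 engines at 20 TB/hour over an assumed 8-hour flight yields exactly the 640 TB cited.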
Making sense of the rivers of data generated by such intelligent devices is one of the key, yet often overlooked, components of the industrial internet.
Often, the collected data becomes what is called dark data: operational data that is never used. According to IDC, most companies document only 22% of the data they collect, and they can analyze, on average, only 5% of it. The rest is wasted, and yet organizations still pay for bandwidth, server space, inefficient retrieval, and the overhead of maintaining it [5]. There should be a better way for plant operators, subject matter experts, CFOs, CIOs, and others to gain value from these rivers of data. In many cases, however, the business has neither the expertise nor the time to derive value from the data.
This is where machine learning, running on modern computing platforms, can turn the data into actionable insight and value.
SparkPredict to detect asset failures
According to ARC Research, ineffective maintenance consumes $60 billion annually. SparkPredict, with its patent-pending Cognitive Fingerprinting algorithm, lets users take massive amounts of data and, in an automated fashion, quickly build models to find meaning in it, significantly improving maintenance strategies and costs.
For example, SparkCognition has been working with one of the largest suppliers of industrial and environmental machinery (pumps, valves, and mechanical seals) to take real-time data off of their horizontal pumps and prevent future breakdowns. Training on three years of operational data, SparkCognition’s algorithms learned, in just a few short weeks, to predict future failures with more than five days of warning. This was a 20-fold operational improvement over existing models, which the client’s subject matter experts had been developing for decades. The improvement was possible because of algorithmic advances in feature derivation, feature selection, and model building and ensembling, all of which come together in what we call Cognitive Fingerprinting.
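Cognitive Fingerprinting itself is proprietary, but the general pattern the paragraph describes, deriving features, selecting the most informative ones, and ensembling several models, can be sketched with standard tools. The sketch below uses synthetic data and scikit-learn and reflects nothing of SparkCognition's actual implementation.

```python
# Illustrative sketch only: a generic feature-selection + ensemble pipeline
# for failure prediction, using synthetic data. This is NOT Cognitive
# Fingerprinting, just the common pattern the text describes.
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))             # stand-in for derived sensor features
y = (X[:, 3] + X[:, 17] > 1).astype(int)   # synthetic "failure soon" label

model = Pipeline([
    # keep the features most associated with impending failure
    ("select", SelectKBest(f_classif, k=10)),
    # combine several learners, as the "ensembling" step describes
    ("ensemble", VotingClassifier([
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ], voting="soft")),
])
model.fit(X[:400], y[:400])
print("holdout accuracy:", model.score(X[400:], y[400:]))
```

In practice the value lies in automating these steps across thousands of derived features, which is what the paragraph credits the algorithm with.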
The beauty of SparkCognition’s Cognitive Fingerprinting algorithm is its applicability to a wide variety of problems. A second case study involves work with Invenergy, LLC, which develops, builds, owns, and operates power-generation and energy-storage projects and monitors hundreds of wind turbines. The system looks for mechanical breakdowns, centered on gearboxes. Turbine maintenance is a major problem: it was estimated that by 2011, nearly $40 billion worth of wind equipment in the U.S. would be out of warranty, thrusting the financial risk of cost-effective operation and maintenance onto the owners [6].
Invenergy uses the SparkPredict platform to predict gearbox failures across its fleet. Using Cognitive Fingerprinting, SparkPredict extended catastrophic-failure warnings from a few days to a median lead time of 37 days, with prediction confidence greater than 90% and no false positives. The algorithm takes 26 features provided by the customer, transforms them into over 1,000 features, models the data to create failure-prediction signatures, and delivers results in real time. Asking a human to analyze tens of thousands of variables across six years of data would have been an impossible task.
In addition to these two case studies, the SparkCognition team is using Cognitive Fingerprinting to solve customers’ issues in the fields of Finance, Oil & Gas, and Manufacturing.
For further reading:
[2] https://www.accenture.com/us-en/_acnmedia/Accenture/next-gen/reassembling-industry/pdf/Accenture-Driving-Unconventional-Growth-through-IIoT.pdf
[3] http://www.cnet.com/news/at-ge-making-the-most-advanced-locomotives-in-history/
[4] http://www.information-management.com/issues/21_5/big-data-is-scaling-bi-and-analytics-10021093-1.html