
Fix It Before It Breaks: Incremental Learning for Predictive Maintenance

Technical talk | English

Theatre 16: Track 4

Wednesday - 11.00 to 11.40 - Technical


Big data drives big decisions in smart factories. Connected, automated machines produce streams of real-time data, which artificial intelligence algorithms process into actionable knowledge. Producing that knowledge often requires significant time: batch processing that takes hours or runs overnight. This lag between data and decision creates supply chain and maintenance inefficiencies: supplies must be stockpiled to prevent shortages, and machines must be serviced on a time-based rather than as-needed schedule. Presenting near real-time knowledge to decision makers enables better decisions and provides the competitive advantage of increased manufacturing efficiency.

The complexity of a supply chain or manufacturing process makes it difficult to manually develop the accurate models required for knowledge creation. Machine learning algorithms build these models automatically. Many currently deployed machine learning solutions use the Lambda Architecture, a hybrid of near real-time and batch processing. Such systems process streams of data in near real-time using a periodically updated model, but if the real-time data signals significant new trends, the model may not recognize or respond to them until the next update.
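The split described above can be sketched in a few lines. This is a minimal, illustrative Python sketch of the Lambda Architecture pattern, not the talk's implementation: a batch layer periodically rebuilds a model from all accumulated data, while a speed layer scores each incoming reading against the most recent (possibly stale) batch model. The class names, the Gaussian model, and the 3-sigma threshold are all assumptions chosen for brevity.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BatchLayer:
    """Accumulates all history; retrains slowly and periodically."""
    history: List[float] = field(default_factory=list)

    def retrain(self) -> dict:
        # Rebuild the model from scratch over the full data set
        # (the hours-long overnight job in the text).
        n = len(self.history)
        mean = sum(self.history) / n
        var = sum((x - mean) ** 2 for x in self.history) / n
        return {"mean": mean, "std": var ** 0.5}

class SpeedLayer:
    """Scores the live stream with the latest batch-trained model."""
    def __init__(self, model: dict):
        self.model = model  # frozen until the next batch update

    def score(self, x: float) -> bool:
        # Flag readings far from the batch model's estimate.
        return abs(x - self.model["mean"]) > 3 * self.model["std"]
```

Between retraining runs, the speed layer keeps serving the old model, which is exactly the gap incremental learning closes.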

A new class of machine learning algorithms increases responsiveness and accuracy by dynamically updating models in near real-time. These incremental, or online, learning algorithms process the incoming data into knowledge and then feed that knowledge back into the model. Updating the model directly from the data stream has several advantages: fewer copies of the data, which improves security; elimination of the time and cost of model redistribution; the ability to handle data sets that exceed system memory or storage capacity; and the opportunity to apply machine learning in isolated environments where batch processing resources are not available.
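The core idea, updating a model one sample at a time without retaining the raw data, can be illustrated with a classic online algorithm. This sketch uses Welford's method for streaming mean and variance; it is an illustration of the incremental-learning principle above, not the algorithm presented in the talk.

```python
class StreamingStats:
    """Welford's online algorithm: the model (mean/variance) is updated
    in place from each sample, and no raw data is ever stored."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        # One cheap, constant-memory update per streamed sample.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        # Population variance of everything seen so far.
        return self.m2 / self.n if self.n > 0 else 0.0
```

Because each update is O(1) in time and memory, the same pattern handles streams far larger than system memory, one of the advantages listed above.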

We present a predictive maintenance solution for smart manufacturing at scale. Predictive maintenance aims to prevent surprises by analyzing a machine’s load, environment and condition to schedule just-in-time service. Predictive maintenance strategies vary from estimating the remaining useful lifetime of a consumable part to detecting unexpected behavior. Our solution employs the latter strategy, more formally known as anomaly detection. Our modular architecture supports the deployment of multiple algorithms for incremental learning, but here we focus on one: kernel support vector machines (KSVM).

Recent research shows how to repurpose KSVMs for incremental learning. Ordinarily, the resources required to train a KSVM model scale super-linearly with data set size, making the algorithm seemingly unsuitable for big data. However, the SVM problem can be reformulated so that training scales linearly, enabling efficient processing of large streams of data. We use a one-class KSVM to perform anomaly detection, showing that the model can be incrementally updated with streaming data and used to detect abrupt changes in the data distribution. We also show how this work extends to regression, binary classification, and multi-class classification via error-correcting output codes (ECOC).

We demonstrate algorithm development and interactive debugging in MATLAB, then deploy the predictive maintenance solution to a MATLAB Production Server hosted in a cloud computing environment designed for horizontal scaling. The solution processes multiple streams of data in near real-time to produce a maintenance dashboard that plant managers might use to detect anomalous behavior before it causes catastrophic failures. Our presentation follows the entire workflow, from design through development and testing to production, with emphasis on the tools available to data scientists, system architects, IT teams, and plant managers.