18 November. 17.30 - 18.10 | Attic

As companies mature through their machine learning journey, a pattern of "many models" often emerges. Many real-world problems are too complex to be solved by a single machine learning model: predicting sales for each individual store, building a predictive maintenance model for thousands of oil wells, or tailoring an experience to individual users. For problems like these, building a model per instance often produces better results than training a single model to make predictions for all instances. However, the infrastructure, procedures and level of automation required to operate this pattern pose a challenge at every level. Combining cloud services, machine learning traceability systems and DevOps practices allows a business to build a scalable platform that strengthens the technical foundations of its data and data science capabilities: standardization, rapid model development and deployment at scale. This unlocks existing business value and creates new opportunities to accelerate business growth.

In this talk we present the many models pattern and how it can be applied in an energy business to quickly prototype and launch a site-level energy demand forecasting use case, leveraging the end-to-end capabilities of the Azure stack. This many-models solution predicts energy demand right down to the meter level and is used to plan energy trading responses to grid demand. The use case incorporates thousands of individual models, each trained on its own data, producing predictions every few minutes (millions of calls per day), with parallelized, selective model retraining triggered by drift. It covers parallel at-scale training on on-demand compute, end-to-end model and code management, automated MLOps deployments, real-time availability of models via web services, Kubernetes model hosting and performance monitoring.
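The core of the pattern described above, training one model per instance in parallel, can be sketched in a few lines. This is an illustrative toy, not the talk's actual implementation: the group ids, the `train_group` helper and the synthetic per-meter data are all hypothetical, and a real solution would pull each group's history from a data store and fan the work out over on-demand cloud compute rather than local threads.

```python
# Hedged sketch: one model per meter group, trained in parallel.
# All names and the synthetic data below are illustrative assumptions.
import numpy as np
from joblib import Parallel, delayed
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def make_group_data(group_id, n=96):
    """Synthetic per-meter history: hour-of-day feature -> demand."""
    hours = rng.uniform(0, 24, size=(n, 1))
    demand = 5 + group_id + 0.3 * hours[:, 0] + rng.normal(0, 0.1, n)
    return hours, demand

def train_group(group_id):
    """Train one small model on one group's own data."""
    X, y = make_group_data(group_id)
    return group_id, LinearRegression().fit(X, y)

# Eight groups here; the real solution scales this to thousands.
models = dict(
    Parallel(n_jobs=4, prefer="threads")(
        delayed(train_group)(g) for g in range(8)
    )
)
print(len(models))  # 8 trained models
```

Because each group is independent, the same `train_group` body can be re-run selectively, e.g. only for the groups whose drift monitor has fired, which is what makes the drift-triggered retraining described above cheap relative to a full retrain.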
The solution leverages an "Infrastructure as Code" and "Continuous Integration/Continuous Delivery" approach that keeps the team's data science and engineering resources transferable across business initiatives, with all ML artefacts fully code-controlled, managed and documented.

The target audience for this session:

– Data product owners who want to learn about the many models pattern, the business problems it can solve and the implications of building such a solution.
– Data scientists, data engineers and machine learning engineers who have worked with machine learning models and would like to learn how to manage the lifecycle of many models.

By attending the session, the audience will:

– Understand the kinds of business problems where a many models pattern may apply.
– Get an overview of the best practices needed to bring this solution to life.
– Understand how to design a machine learning solution that trains thousands of models in parallel.
– Know how to package thousands of models into a handful of groups and deploy them into containers.
– Learn how to make thousands of models accessible in real time via a single API.
– Learn how to build the MLOps pipelines required to operationalize this solution.
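One of the takeaways above, exposing thousands of models behind a single API, comes down to routing each request to the right model by an identifier. The sketch below illustrates the idea with an in-memory registry and a plain function; the `MeterModel` class, `registry` and `forecast` names are hypothetical, and a production version would load models from a model store into a handful of container-hosted groups and sit behind a web framework.

```python
# Hedged sketch: one entry point routing to per-meter models.
# The registry, model class and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MeterModel:
    meter_id: str
    slope: float
    intercept: float

    def predict(self, hour: float) -> float:
        """Toy linear demand forecast for one meter."""
        return self.intercept + self.slope * hour

# In-memory stand-in for a model store; a thousand tiny models.
registry = {
    f"meter-{i}": MeterModel(f"meter-{i}", slope=0.3, intercept=5.0 + i)
    for i in range(1000)
}

def forecast(meter_id: str, hour: float) -> float:
    """Single API surface: look up the caller's model, then predict."""
    model = registry.get(meter_id)
    if model is None:
        raise KeyError(f"no model registered for {meter_id}")
    return model.predict(hour)

print(round(forecast("meter-7", 12.0), 1))  # 15.6
```

The design choice worth noting is that callers only ever see one endpoint signature; which of the thousands of models answers is an internal routing concern, so models can be regrouped, retrained or redeployed without changing the API contract.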