Artificial intelligence (AI) adoption keeps growing. According to a McKinsey survey, 56% of companies are now using AI in at least one function, up from 50% in 2020. A PwC survey found that the pandemic accelerated AI uptake and that 86% of respondents say AI is becoming a mainstream technology at their company.

In the last few years, significant advances in open-source AI, such as the groundbreaking TensorFlow framework, have opened AI up to a broad audience and made the technology more accessible. Relatively frictionless use of the new technology has led to greatly accelerated adoption and an explosion of new applications. Tesla Autopilot, Amazon Alexa and other familiar use cases have both captured our imaginations and stirred controversy, but AI is finding applications in almost every aspect of our world.

The parts that make up the AI puzzle

Historically, machine learning (ML) – the pathway to AI – was reserved for academics and specialists with the mathematical skills to develop complex algorithms and models. Today, the data scientists working on these projects need both that knowledge and the right tools to effectively productize their ML models for consumption at scale – often a hugely complicated task involving sophisticated infrastructure and multiple steps in the ML workflow.

Another key piece is model lifecycle management (MLM), which manages the complex AI pipeline and helps ensure reliable results. The proprietary enterprise MLM systems of the past were expensive, yet often lagged far behind the latest technological advances in AI.


Effectively filling that operational capability gap is critical to the long-term success of AI programs, because training models that give good predictions is just a small part of the overall challenge. Building ML systems that bring value to an organization takes far more. Rather than the ship-and-forget pattern typical of traditional software, an effective strategy requires regular iteration cycles with continuous monitoring, care and improvement.

Enter MLops (machine learning operations), which enables data scientists, engineering and IT operations teams to work together collaboratively to deploy ML models into production, manage them at scale and continuously monitor their performance.

The key challenges for AI in production

MLops typically aims to address six key challenges around taking AI applications into production. These are: repeatability, availability, maintainability, quality, scalability and consistency. 

Further, MLops can help simplify AI consumption so that applications can make use of machine learning models for inference (i.e., to make predictions based on data) in a scalable, maintainable manner. This capability is, after all, the primary value that AI initiatives are supposed to deliver. To dive deeper:

Repeatability ensures that the ML model – and every step that produces it – can be run again and deliver the same results.

Availability means the ML model is deployed so that it is sufficiently available to provide inference services to consuming applications, at an appropriate level of service.

Maintainability refers to the processes that enable the ML model to remain maintainable on a long-term basis; for example, when retraining the model becomes necessary.

Quality: the ML model is continuously monitored to ensure it keeps delivering predictions of acceptable quality.

Scalability means both the scalability of inference services and of the people and processes that are required to retrain the ML model when required.

Consistency: A consistent approach to ML is essential to success on the five measures above.

We can think of MLops as a natural extension of agile devops applied to AI and ML. Typically, MLops covers the major aspects of the machine learning lifecycle: data preprocessing (ingesting, analyzing and preparing data, and making sure the data is suitably aligned for the model to be trained on), model development, model training and validation and, finally, deployment.

The following six proven MLops techniques can measurably improve the efficacy of AI initiatives, in terms of time to market, outcomes and long-term sustainability.

1. ML pipelines

ML pipelines typically consist of multiple steps, often orchestrated in a directed acyclic graph (DAG) that coordinates the flow of training data as well as the generation and delivery of trained ML models.

The steps within an ML pipeline can be complex. A step for fetching data, for instance, may itself require multiple subtasks to gather datasets, perform checks and execute transformations. Data may need to be extracted from a variety of source systems – perhaps data marts in a corporate data warehouse, web scraping, geospatial stores and APIs. The extracted data may then need to undergo quality and integrity checks using sampling techniques, and might need to be adapted in various ways: dropping data points that are not required, aggregations such as summarizing or windowing of other data points, and so on.

Transforming the data into a format that can be used to train the ML model – a process called feature engineering – may benefit from additional alignment steps.
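
As an illustration, here is a minimal feature-preparation sketch in pandas; the dataset, column names and window size are all hypothetical:

```python
import pandas as pd

# Minimal sketch of data preparation and feature engineering.
# The file, columns and window size are hypothetical.
raw = pd.read_csv("transactions.csv", parse_dates=["timestamp"])

# Drop data points that are not required.
raw = raw.dropna(subset=["amount"]).drop(columns=["internal_note"])

# Aggregate: total spend per customer over a rolling 7-day window.
features = (
    raw.set_index("timestamp")
       .sort_index()
       .groupby("customer_id")["amount"]
       .rolling("7D")
       .sum()
       .rename("spend_7d")
       .reset_index()
)
```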

Training and testing models often require a grid search to find optimal hyperparameters, where multiple experiments are conducted in parallel until the best set of hyperparameters is identified.
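
For example, a grid search with scikit-learn's GridSearchCV might look like the following sketch, where synthetic data stands in for the prepared training features and the search space is illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic data stands in for the prepared training features.
X_train, y_train = make_classification(n_samples=500, n_features=10, random_state=42)

# Hypothetical search space; in practice this is usually much larger.
param_grid = {"n_estimators": [100, 300], "max_depth": [5, 10, None]}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,        # 5-fold cross-validation for each candidate
    n_jobs=-1,   # evaluate candidates in parallel
)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```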

Storing models requires an effective approach to versioning and a way to capture associated metadata and metrics about the model.

MLops platforms like Kubeflow, an open-source machine learning toolkit that runs on Kubernetes, translate the complex steps that compose a data science workflow into jobs that run inside Docker containers on Kubernetes, providing a cloud-native, yet platform-agnostic, interface for the component steps of ML pipelines. 
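
As a rough illustration, a minimal pipeline defined with the Kubeflow Pipelines (kfp) SDK might look like the sketch below – the component bodies and storage paths are placeholders, not a working training workflow:

```python
from kfp import compiler, dsl

@dsl.component
def fetch_data() -> str:
    # Placeholder: extract, validate and stage the training dataset.
    return "gs://example-bucket/dataset.csv"  # hypothetical location

@dsl.component
def train_model(dataset_uri: str) -> str:
    # Placeholder: train, validate and persist the model.
    return "gs://example-bucket/model"  # hypothetical location

@dsl.pipeline(name="training-pipeline")
def training_pipeline():
    data_task = fetch_data()
    train_model(dataset_uri=data_task.output)

# Compile to a spec that can be submitted to a Kubeflow Pipelines cluster,
# where each component runs as a containerized step in the DAG.
compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```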

2. Inference services

Once the appropriate trained and validated model has been selected, the model needs to be deployed to a production environment where live data is available in order to produce predictions.

And there’s good news here – the model-as-a-service architecture has made this aspect of ML significantly easier. This approach separates the application from the model through an API, further simplifying processes such as model versioning, redeployment and reuse.

A number of open-source technologies are available that can wrap an ML model and expose inference APIs; for example, KServe and Seldon Core, which are open-source platforms for deploying ML models on Kubernetes.
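
To illustrate the model-as-a-service pattern, here is a sketch of an application calling a KServe-style V2 inference endpoint over HTTP; the URL, model name and input data are hypothetical:

```python
import requests

# Hypothetical endpoint for a model exposed behind a KServe-style V2 inference API.
url = "http://models.example.com/v2/models/churn-model/infer"

payload = {
    "inputs": [{
        "name": "input-0",
        "shape": [1, 4],
        "datatype": "FP32",
        "data": [[0.3, 1.2, 5.0, 0.7]],
    }]
}

response = requests.post(url, json=payload, timeout=10)
response.raise_for_status()
print(response.json()["outputs"])
```

Because the application only depends on this API contract, the model behind it can be versioned, redeployed or swapped out without touching application code.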

3. Continuous deployment

It’s crucial to be able to retrain and redeploy ML models in an automated fashion when significant model drift is detected.

Within the cloud-native world, Knative offers a powerful open-source platform for building serverless applications and can be used to trigger MLops pipelines running on Kubeflow or another open-source job scheduler, such as Apache Airflow.
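
As one possible sketch, an Apache Airflow DAG that checks for drift and then retrains might look like this; the task bodies are placeholders:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def check_drift():
    # Placeholder: query the drift detector and decide whether to retrain.
    pass

def retrain_and_redeploy():
    # Placeholder: re-run the training pipeline and roll out the new model.
    pass

with DAG(
    dag_id="model_retraining",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",  # could instead be triggered by a Knative event
    catchup=False,
) as dag:
    drift = PythonOperator(task_id="check_drift", python_callable=check_drift)
    retrain = PythonOperator(
        task_id="retrain_and_redeploy", python_callable=retrain_and_redeploy
    )
    drift >> retrain
```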

4. Blue-green deployments

With solutions like Seldon Core, it can be useful to create an ML deployment with two predictors – e.g., allocating 90% of the traffic to the existing (“champion”) predictor and 10% to the new (“challenger”) predictor. The MLops team can then (ideally automatically) observe the quality of the predictions. Once proven, the deployment can be updated to move all traffic over to the new predictor. If, on the other hand, the new predictor is seen to perform worse than the existing predictor, 100% of the traffic can be moved back to the old predictor instead.
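
As a rough sketch, a champion/challenger traffic split in a Seldon Core SeldonDeployment might look like the following – expressed here as a Python dict for readability, with hypothetical names and model URIs:

```python
import yaml  # PyYAML, used only to print the manifest

# Sketch of a SeldonDeployment with a 90/10 champion/challenger traffic split.
# Names, server type and model URIs are hypothetical.
deployment = {
    "apiVersion": "machinelearning.seldon.io/v1",
    "kind": "SeldonDeployment",
    "metadata": {"name": "churn-model"},
    "spec": {
        "predictors": [
            {
                "name": "champion",
                "traffic": 90,
                "graph": {
                    "name": "classifier",
                    "implementation": "SKLEARN_SERVER",
                    "modelUri": "gs://example-models/churn/v1",
                },
            },
            {
                "name": "challenger",
                "traffic": 10,
                "graph": {
                    "name": "classifier",
                    "implementation": "SKLEARN_SERVER",
                    "modelUri": "gs://example-models/churn/v2",
                },
            },
        ]
    },
}

print(yaml.safe_dump(deployment))  # apply with: kubectl apply -f -
```

Promoting the challenger is then just a matter of updating the traffic weights and reapplying the manifest.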

5. Automatic drift detection

When production data changes over time, model performance can drift from the baseline because the new data diverges substantially from the data used to train and validate the model. This can significantly harm prediction quality.

Drift detectors like Seldon Alibi Detect can be used to automatically assess model performance over time and trigger a model retrain process and automatic redeployment.
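
For illustration, a minimal drift check with Alibi Detect's Kolmogorov-Smirnov detector might look like this sketch, with synthetic data standing in for the real reference and production batches:

```python
import numpy as np
from alibi_detect.cd import KSDrift

# Reference window: the data the model was trained and validated on (synthetic here).
x_ref = np.random.normal(0, 1, size=(1000, 4)).astype("float32")

# Kolmogorov-Smirnov drift detector over the input features.
detector = KSDrift(x_ref, p_val=0.05)

# Incoming production batch; the shifted mean simulates drift.
x_live = np.random.normal(0.5, 1, size=(200, 4)).astype("float32")

preds = detector.predict(x_live)
if preds["data"]["is_drift"]:
    print("Drift detected: trigger retraining and redeployment")
```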

6. Feature stores

These are databases optimized for ML. Feature stores allow data scientists and data engineers to reuse and collaborate on datasets that have been prepared for machine learning – so-called “features.” Preparing features can be a lot of work, and by sharing access to prepared feature datasets within data science teams, time to market can be greatly accelerated, whilst improving overall machine learning model quality and consistency. Feast is one such open-source feature store, describing itself as “the fastest path to operationalizing analytic data for model training and online inference.”
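
As a sketch of how Feast is typically used – with hypothetical feature and entity names, and assuming an already-initialized Feast repository – the same shared feature definitions can serve both offline training and online inference:

```python
import pandas as pd
from feast import FeatureStore

# Assumes an initialized Feast repository; feature and entity names are hypothetical.
store = FeatureStore(repo_path=".")

# Entity keys and event timestamps for point-in-time-correct training data.
entity_df = pd.DataFrame({
    "driver_id": [1001, 1002],
    "event_timestamp": pd.to_datetime(["2022-06-01", "2022-06-01"]),
})

# Offline retrieval: build a training dataframe from shared feature definitions.
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_stats:avg_daily_trips", "driver_stats:acc_rate"],
).to_df()

# Online retrieval: fetch the same features at low latency for inference.
online_features = store.get_online_features(
    features=["driver_stats:avg_daily_trips", "driver_stats:acc_rate"],
    entity_rows=[{"driver_id": 1001}],
).to_dict()
```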

By embracing the MLops paradigm for their data lab and approaching AI with the six sustainability measures in mind – repeatability, availability, maintainability, quality, scalability and consistency – organizations and departments can measurably improve data team productivity and the long-term success of AI projects, and effectively retain their competitive edge.

Rob Gibbon is product manager for data platform and MLops at Canonical – the publishers of Ubuntu.
