We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 - 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Register today!


When it comes to deploying machine learning (ML) models in the enterprise, optimization and acceleration – that is, deploying better and faster – are the keys to cost and time savings. 

But according to Seattle-based OctoML — founded in 2019 to help enterprises optimize machine learning model deployment — there are plenty of bottlenecks, including the dependencies between ML training frameworks, model types and the hardware required at each stage of the model lifecycle. 

Today, OctoML announced a new model deployment platform that it claims is a “huge milestone for the AI developer community” — it enables app developers and IT operations to “transform trained ML models into agile, portable, reliable software functions that easily integrate with their existing application stack and devops workflows.”  

OctoML is built on Apache TVM, an open-source machine learning compiler framework for central processing units (CPUs), graphics processing units (GPUs) and machine learning accelerators. OctoML was founded by Apache TVM's creators, including CEO Luis Ceze.


Machine learning alignment with devops

According to Ceze, the solution to the problem of ML model deployment bottlenecks is that ML needs to align with standard software devops practices rather than the popular MLops. Organizations need a way to abstract out the complexity, strip out dependencies and automatically generate and sustain trained models that can be delivered as production-ready software functions.

Models-as-functions can run at high performance anywhere from cloud to edge, remaining stable and consistent even as hardware infrastructure changes, he explained. This devops-inclusive approach eliminates redundancy by unifying two parallel deployment streams — one for AI and the other for traditional software. It also maximizes the return on investments already made in model creation and model operations. 
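The "model as a portable software function" idea can be sketched in plain Python. This is purely illustrative, not OctoML's or Apache TVM's actual API: the hypothetical `compile_model` stands in for a compilation step that binds a model to a hardware target, while the application only ever sees an ordinary callable whose behavior is identical across targets.

```python
# Illustrative sketch (hypothetical names, not a real OctoML/TVM API):
# a trained model packaged as a plain function that hides hardware details.
from typing import Callable, List

def compile_model(weights: List[float], target: str) -> Callable[[List[float]], float]:
    """Pretend 'compiler' that binds a model to a hardware target.

    In a real system, this step would generate target-specific machine
    code; here it simply closes over the trained weights.
    """
    def predict(features: List[float]) -> float:
        # Same numerical result regardless of target: the caller's
        # application code never changes when the hardware does.
        return sum(w * x for w, x in zip(weights, features))
    predict.target = target  # metadata only; behavior is identical
    return predict

# The same model "deployed" to two different targets behaves identically:
cpu_fn = compile_model([0.5, -1.0], target="x86_64-cpu")
gpu_fn = compile_model([0.5, -1.0], target="cuda-gpu")
assert cpu_fn([2.0, 1.0]) == gpu_fn([2.0, 1.0]) == 0.0
```

Because the function signature is all the application depends on, swapping the underlying target is a redeployment detail rather than an application change — which is the decoupling Ceze describes.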

Evolving relationship between data science and devops

Developers and IT operations haven’t had much opportunity to participate in AI/ML deployment, despite being responsible for building and maintaining virtually every other element of the enterprise app stack, said Ceze.

“The secret to bringing devops into the fold is the ability to treat models as agile, portable, reliable software functions,” he said. “Today, the rigid dependence between models, libraries, framework and hardware creates a complex and highly specialized deployment path that is closed off to all but a few highly trained ML engineers.” 

The proliferation of MLops platforms has built a parallel development stream that is specific to AI deployment, he added: “Devops is a tried-and-true discipline for agile development and software deployment and there’s abundant talent there, so allowing devops to deploy models as intelligent software functions unites these disconnected development paths and unlocks a lot of value.”

Continued shifts in the MLops space

Ceze explained that, based on what he sees from customers and the broader industry, he anticipates a continued shift away from large, monolithic MLops platforms toward "best-of-breed" solutions that serve the needs of ML engineers, IT ops and app developers at each phase of the development cycle. 

“AI is mainstream now, and more companies are looking to build intelligence into apps and services, but the needs are really diverse,” he said. “Users will want to control their AI/ML deployments like they control the rest of their applications – their models, their infrastructure, their application stack – while guaranteeing SLAs for performance, cost and user experience, and at the same time integrating effectively with their own workflows.” 

As companies deploy more AI services, having granular control over deployment choices and the ability to accelerate across hardware targets will help them manage costs and address inherent governance questions by keeping deployed data and models under their control. 

“This mirrors broader enterprise technology adoption patterns that indicate a shift toward tools and tech that can be worked into users’ flow and provide more control over their own APIs and workflows,” he said. 

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
