The more the enterprise transitions from a merely digital organization to a fully intelligent one, the more data executives will realize that traditional monitoring and management of complex systems and processes are no longer enough.
What’s needed is a new, more expansive form of oversight, one that has lately come to be known as “data observability.”
The what and the why of observability
The distinction between observability and monitoring is subtle but significant. As VentureBeat writer John Paul Titlow explained in a recent piece, monitoring allows technicians to view past and current data environments according to predefined metrics or logs. Observability, on the other hand, provides insight into why systems are changing over time, and may detect conditions that have not previously been considered. In short, monitoring tells you what is happening, while observability tells you why it’s happening.
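To make the distinction concrete, here is a minimal sketch in Python. The telemetry fields and thresholds are hypothetical, not any vendor’s API; the point is that monitoring answers a question fixed in advance, while observability explores relationships nobody thought to put on a dashboard.

```python
# Illustrative sketch only: the telemetry fields and thresholds are hypothetical,
# not any vendor's API. Monitoring answers a question fixed in advance; the
# "observability" query explores a correlation nobody wired into a dashboard.

from statistics import correlation  # requires Python 3.10+

def monitor(cpu_samples, threshold=90.0):
    """Monitoring: report WHEN a predefined metric breaches its limit."""
    return [i for i, cpu in enumerate(cpu_samples) if cpu > threshold]

def observe(telemetry):
    """Observability: ask WHY, e.g. does CPU track a backed-up work queue?"""
    cpu = [t["cpu"] for t in telemetry]
    queue = [t["queue_depth"] for t in telemetry]
    return correlation(cpu, queue)  # a strong correlation is a lead to chase

telemetry = [
    {"cpu": 42.0, "queue_depth": 3},
    {"cpu": 88.0, "queue_depth": 41},
    {"cpu": 95.0, "queue_depth": 57},
]
print(monitor([t["cpu"] for t in telemetry]))  # what happened: sample 2 breached
print(observe(telemetry))                      # why: CPU rises with queue depth
```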
To fully embrace observability, the enterprise must engage it on three fronts. First, AI must fully permeate IT operations, since this is the only way to rapidly and reliably detect patterns and identify the root causes of impaired performance. Second, data must be standardized across the ecosystem to avoid the mismatches, duplication and other factors that can skew results (a minimal illustration follows below). And finally, observability must shift into the cloud, because that is where much of the enterprise data environment is headed as well.
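As a small illustration of that second point, the following sketch maps telemetry from two hypothetical sources onto one shared schema. Every field name is invented, but the shape of the problem (mismatched keys, types and identifiers) is the real one.

```python
# Every field name here is invented; the point is the shape of the problem:
# two sources emit the "same" metric with mismatched keys, types and casing,
# which must be reconciled before any AI can correlate them reliably.

from datetime import datetime, timezone

def normalize(event: dict, source: str) -> dict:
    """Map source-specific telemetry onto one shared schema."""
    if source == "legacy_apm":
        ts, value, host = event["time"], event["val"], event["server"]
    elif source == "cloud_agent":
        ts, value, host = event["timestamp"], event["metric_value"], event["hostname"]
    else:
        raise ValueError(f"unknown telemetry source: {source}")
    return {
        "timestamp": datetime.fromtimestamp(ts, tz=timezone.utc).isoformat(),
        "value": float(value),               # strings and floats become one type
        "host": host.lower(),                # canonical casing avoids duplicates
        "source": source,
    }

print(normalize({"time": 1658246400, "val": "87.5", "server": "WEB-01"}, "legacy_apm"))
```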
Observability is grounded in control theory, according to Richard Whitehead, chief evangelist at observability platform developer Moogsoft. The idea is that, with enough quality data at their disposal, AI-empowered technicians can observe how one system reacts to another or, at the very least, infer the state of a system from its inputs and outputs.
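For readers who want the textbook version: in linear control theory, a system is called observable when its internal state can be reconstructed from its inputs and outputs alone, a property checked by the rank of the observability matrix. The sketch below is that classical test, not anything specific to Moogsoft’s platform.

```python
# The classical textbook test, not Moogsoft's implementation: a linear system
#   x[k+1] = A @ x[k] + B @ u[k],   y[k] = C @ x[k]
# is "observable" when its internal state can be reconstructed from inputs
# and outputs alone, which is exactly the inference Whitehead describes.

import numpy as np

def observability_matrix(A: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Stack C, CA, CA^2, ..., CA^(n-1); full rank means the state is inferable."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # two internal states, the second hidden
C = np.array([[1.0, 0.0]])    # we can directly measure only the first state

O = observability_matrix(A, C)
print(np.linalg.matrix_rank(O) == A.shape[0])  # True: the hidden state leaks
                                               # into the measured one over time
```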
The problem is that observability means different things in different contexts, say, to DevOps versus IT. While IT has worked fairly well by linking application performance monitoring (APM) with infrastructure performance monitoring (IPM), emerging DevOps models, with their rapid rates of change, are chafing under the slow pace of data ingestion. By unleashing AI on granular data feeds, however, both IT and DevOps will be able to quickly discern the hidden patterns that characterize fast-evolving data environments.
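At its simplest, that pattern detection can be as plain as the following rolling z-score sketch, which flags points that break a metric stream’s recent rhythm. Production AIOps systems use far richer models; the data and parameters here are invented for illustration.

```python
# A deliberately simple stand-in for "AI on granular data feeds": a rolling
# z-score flags points that break the recent pattern. Real AIOps platforms
# use far richer models; the stream and window size here are invented.

from collections import deque
from statistics import mean, stdev

def pattern_breaks(stream, window=20, threshold=3.0):
    recent = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield i, x  # this point does not fit the learned pattern
        recent.append(x)

steady = [100.0 + (i % 5) for i in range(40)]   # a humdrum periodic signal
print(list(pattern_breaks(steady + [250.0])))   # -> [(40, 250.0)]: the spike
```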
This makes observability one of the central functions in emerging AIOps and MLOps platforms, which promise to push the management of data systems and applications into hyperdrive. New Relic recently updated its New Relic One observability application to incorporate MLOps tools that enable self-retraining as soon as alerts are received. This should be particularly handy for ML and AI, since trained models tend to deteriorate over time. Data observability helps account for the changing real-world conditions that affect critical metrics like skew and data staleness, as well as overall model precision and performance, whether those changes play out in seconds or over days, weeks or years.
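New Relic has not published the internals at this level of detail, so the following is only a generic sketch of an alert-to-retrain loop under stated assumptions: a drift signal compares live predictions against training-time behavior, and a retraining hook (a stand-in name) fires when it crosses a threshold.

```python
# Hedged sketch of the alert-to-retrain loop described above, NOT New Relic's
# API: `population_shift`, the threshold and the `retrain` hook are all
# stand-in names for whatever a real MLOps pipeline would wire in.

from statistics import mean

def population_shift(train_preds, live_preds):
    """Crude skew/staleness signal: how far live predictions have drifted
    from what the model produced on its training-time distribution."""
    return abs(mean(live_preds) - mean(train_preds))

def maybe_retrain(train_preds, live_preds, threshold=0.1,
                  retrain=lambda: print("alert: kicking off retraining")):
    drift = population_shift(train_preds, live_preds)
    if drift > threshold:  # fires whether drift took seconds or years
        retrain()
    return drift

print(maybe_retrain([0.48, 0.52, 0.50], [0.71, 0.69, 0.73]))  # drift ~ 0.21
```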
Automation on steroids
Over the next few years, it is reasonable to expect AI and observability to usher in a new era of “hyperautomation,” according to Douglas Toombs, research vice president at Gartner. In an interview with RT Insights, he noted that a fully realized AIOps environment is key to Gartner’s long-predicted “Just-in-Time Infrastructure,” in which datacenter, colocation, edge and other resources can be assembled in response to business needs within a cohesive but broadly distributed data ecosystem.
In a way, observability is AI transforming the parameters of monitoring and management just as it transforms other aspects of the digital enterprise: by making them more inclusive, more intuitive and more self-operational. Whether the task is charting consumer trends, predicting the weather or overseeing the flow of data, AI’s job is to provide granular insight into complex systems and to chart courses of action based on those analyses, some of which it can implement on its own and some of which must be approved by an administrator.
Observability, then, is yet another way in which AI will take on the mundane tasks that humans do today, creating not just a faster and more responsive data environment, but one that is far more attuned to the real environments it is attempting to interpret digitally.