This article was contributed by Berk Birand, CEO of Fero Labs.
Is the hype around AI finally cooling?
That’s what some recent surveys would suggest. Most executives now say the technology is more hype than reality, and 65% report zero value from their AI and machine learning investments.
However, these statements often reflect a fundamental misunderstanding. Many executives don’t differentiate generic black box AI from related technologies such as explainable machine learning. As a result, they’re missing out on a crucial pathway to smarter and more efficient decision-making that can drive more enterprise value.
Black boxes, or software programs that spit out mysterious answers without revealing how they got there, are the algorithms that power the world’s top tech companies. You have no way to know how a black box comes up with its result. Occasionally, the results are amusing, as when Google’s image recognition software erroneously identifies a cat as guacamole, or when Netflix recommends a bad show. In those cases, the stakes are low. A mistake on Netflix’s part costs, at most, a few wasted minutes.
But for complex, high-stakes sectors like healthcare, criminal justice, and manufacturing, it’s a different story. If an AI system tells a steel engineer to add the wrong quantity of alloys, producing metal with the wrong density, buildings could collapse.
In areas like healthcare, where a single decision can literally make the difference between life and death, professionals may be particularly reluctant to trust the recommendations of a mysterious black box algorithm. Or, worse, they might follow those recommendations blindly, leading to potentially catastrophic results.
Explainable machine learning
Unlike black box software, any AI solution that can properly call itself “explainable” should reveal how each input affects the output. Take autopilot software, for example: the algorithm controlling the steering needs to know how much the aircraft will tilt if a sensor detects northwest winds of 50 miles per hour, and the user must be able to understand how that information shapes the algorithm’s predictions. Without this ability, the software would fail to serve its intended purpose and end up destroying value rather than creating it.
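To make the idea concrete, here is a minimal, purely illustrative sketch in Python: a simple linear model fit on made-up sensor readings, whose coefficients show exactly how much each input moves the prediction. The feature names and numbers are hypothetical and are not drawn from any real autopilot or Fero Labs system.

```python
# Illustrative sketch only: a linear model whose coefficients explain
# how each input moves the prediction. All data below is invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical sensor inputs: wind speed (mph) and wind direction (degrees)
X = np.array([[50, 315], [30, 270], [10, 90], [40, 300]])
y = np.array([4.2, 2.5, 0.4, 3.3])  # observed tilt in degrees (made-up)

model = LinearRegression().fit(X, y)

# Each coefficient is directly interpretable: "one extra mph of wind adds
# roughly this much tilt," so a user can see why a prediction changed.
for name, coef in zip(["wind_speed_mph", "wind_dir_deg"], model.coef_):
    print(f"{name}: {coef:+.3f} degrees of tilt per unit")

print("predicted tilt at 50 mph NW wind:", model.predict([[50, 315]])[0])
```

A black box model might predict the tilt just as accurately, but only an explainable one lets the user trace the prediction back to the inputs that drove it.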
Furthermore, explainable software should provide some kind of measurement indicating its confidence in each prediction, allowing for safe and precise decision-making. In healthcare, for example, a doctor wouldn’t just be told to use a certain treatment. Rather, they’d be told the probability of the desired result, as well as the confidence level. In other words, is the software very confident in its prediction, or is the prediction more of a guess? Only with this kind of information can the doctor make informed and safe decisions.
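As a rough illustration of that second requirement, the sketch below attaches an 80% prediction interval to each output using quantile regression on synthetic data, so a user can see whether a prediction is confident or closer to a guess. The features, thresholds, and numbers are invented for the example and are not a medical tool or anyone’s actual product.

```python
# Illustrative sketch only: attach a confidence band to each prediction
# via quantile regression. All data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))        # hypothetical input feature
y = 2.0 * X.ravel() + rng.normal(0, 2, 200)  # hypothetical outcome

# Fit three models: the median prediction plus lower and upper quantiles.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.1, 0.5, 0.9)
}

x_new = np.array([[5.0]])
lo, mid, hi = (models[q].predict(x_new)[0] for q in (0.1, 0.5, 0.9))
print(f"predicted outcome: {mid:.1f} (80% interval: {lo:.1f} to {hi:.1f})")
# A narrow interval signals a confident prediction;
# a wide one signals that the prediction is closer to a guess.
```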
How can you apply explainable machine learning to drive smarter decision-making in your company?
If you want to build a tool internally, know that it is difficult. Explainable machine learning is complex and requires deep statistical knowledge to develop. One sector that’s done this well is pharmaceuticals, where companies often have scores of Ph.D.s doing in-house explainable data science and analysis.
If you want to buy software, you’ll need to do some due diligence. Look at the real use cases the vendor provides, not just taglines. Look at the background of the science and research team: are they proficient in explainable machine learning? What evidence do they offer that their technology actually works?
Most importantly? Use your judgment. The great thing about explainable machine learning is that it can be, well, explained. If you don’t get it, it probably won’t drive value for your company.
Berk Birand is the CEO of Fero Labs, an industrial AI software company based in New York.