Despite the gains artificial intelligence has already brought to the enterprise, there is still much hand-wringing over its potential for unintended consequences. While the headlines tend to focus on AI running amok and destroying all mankind, the practical reality is that current generations of AI are more likely to wreak havoc on business processes — and profits — if not managed properly.

But how can you control something that, by its nature, is supposed to act autonomously? And by doing so, won’t the enterprise be hampering the very thing that makes AI such a valuable asset in the workplace?

AI gone wrong

OnCorps CEO Bob Suh offers a good overview of the harm AI can cause, even when employed with the best of intentions. The experiences of the social media platforms illustrate how poorly designed algorithms can produce one set of positive results (more sharing) while simultaneously driving negative ones (misinformation and manipulation). It’s fair to say that executives at Facebook and Twitter did not intend for their platforms to foster society-wide conflict and animosity, but they did focus their efforts on maximizing profits, so that is what their reinforcement learning agents (RLAs) did.

RLAs are based on simple reward functions in which positive results prompt the agent to expand and refine its actions. Without a countervailing reward mechanism, the agent will continue to get better at what it does even if it no longer produces a desirable outcome. In this way, the social media bots were highly successful at achieving their programmed objectives even as they were being manipulated by some in the user community to weaponize public opinion and sow discord throughout the population.
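
To make the dynamic concrete, here is a minimal sketch, in Python, of how a reward function that optimizes a single engagement metric can lock onto harmful content. The content categories, engagement rates, and epsilon-greedy loop are hypothetical illustrations, not any platform’s actual system.

```python
import random

CATEGORIES = ["news", "sports", "outrage_bait"]   # hypothetical content types

# Engagement probabilities are assumptions for illustration; inflammatory
# content often engages best, which is exactly the problem.
ENGAGEMENT_RATE = {"news": 0.10, "sports": 0.15, "outrage_bait": 0.40}

value = {c: 0.0 for c in CATEGORIES}   # estimated reward per category
count = {c: 0 for c in CATEGORIES}

def reward(category: str) -> float:
    # Reward is engagement alone. A countervailing term, e.g. a penalty
    # for flagged misinformation, would go here; without one the agent
    # simply gets better and better at promoting whatever engages.
    clicked = random.random() < ENGAGEMENT_RATE[category]
    return 1.0 if clicked else 0.0

for _ in range(10_000):
    if random.random() < 0.1:                        # explore occasionally
        choice = random.choice(CATEGORIES)
    else:                                            # exploit the best-known
        choice = max(CATEGORIES, key=value.get)
    r = reward(choice)
    count[choice] += 1
    value[choice] += (r - value[choice]) / count[choice]   # running mean

print({c: round(value[c], 3) for c in CATEGORIES})
# Converges on "outrage_bait": the reward function never asked the agent
# to care about anything else.
```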

These same problems can emerge in sales, marketing and other functions. And unfortunately, few organizations are equipped to identify and correct algorithms that are driving undesirable outcomes until the damage is done. This can take the form of everything from lost revenue and missed market opportunities to security breaches and damage to internal structures and processes.

At its heart, implementing the proper controls on AI is a form of risk management. Due to its autonomous nature, however, AI requires a little more attention than standard IT, say McKinsey partners Juan Aristi Baquero, Roger Burkhardt, Arvind Govindarajan, and Thomas Wallace. For one thing, AI introduces a number of unfamiliar risks across multiple disciplines, including compliance, operations, legal, and regulatory. Banks, for example, have long worried about bias from their human employees, but if those employees start making recommendations based on what a biased AI tells them, the institution has now systematized that bias into its decision-making process.

Another problem is the way AI is rapidly becoming decentralized across the enterprise, which makes it difficult to track and monitor. And as the various AI implementations of multiple vendors, partners, and other entities start to communicate with one another, the potential to introduce new risks increases. A new tool within a vendor’s CRM platform, for example, could create data privacy and compliance risks across multiple geographical regions.

Up-front management

The best way to manage these risks is to implement the proper controls before AI becomes woven into the fabric of the enterprise, according to Todd Bialick, digital assurance and transparency leader at PwC. To do that, you’ll need to conduct a full-stack review of everything AI touches as it seeks to fulfill its mandate. This includes data-layer policies governing input and set selection, oversight and transparency in algorithm and model development, continual review of output and decisions, and full control over logical security, compute operations, and program change and development.
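
As an illustration of the “continual review of output and decisions” control, here is a minimal sketch that watches a model’s recent approval rate for drift against a human-reviewed baseline. The thresholds, window size, and alert hook are assumptions for illustration, not PwC’s methodology.

```python
from collections import deque

BASELINE_APPROVAL_RATE = 0.62   # rate signed off at the last model review (assumed)
DRIFT_TOLERANCE = 0.10          # acceptable deviation before humans step in (assumed)
WINDOW = 500                    # number of recent decisions to watch

recent = deque(maxlen=WINDOW)   # rolling window of 1/0 decisions

def alert(rate: float) -> None:
    # In practice this would page the model-risk team; here we just log.
    print(f"Drift alert: approval rate {rate:.2f} vs baseline "
          f"{BASELINE_APPROVAL_RATE:.2f}; human review needed.")

def record_decision(approved: bool) -> None:
    recent.append(1 if approved else 0)
    if len(recent) == WINDOW:
        rate = sum(recent) / WINDOW
        if abs(rate - BASELINE_APPROVAL_RATE) > DRIFT_TOLERANCE:
            alert(rate)

# Example: the model's behavior shifts sharply, triggering the alert.
for _ in range(450):
    record_decision(approved=True)
for _ in range(50):
    record_decision(approved=False)
```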

Training AI to behave ethically is another nascent field that has drawn the interest of both private- and public-sector organizations. One of the key difficulties is that ethics is a highly subjective discipline, in which actions that are ethical in one set of circumstances can be unethical in another. But as James Kobielus, senior director of research for data management at TDWI, points out, ultimate ethical responsibility must remain with humans, not with inanimate objects, no matter how intelligent they seem. To prevent AI from “going rogue,” it should always incorporate human interests as a core element, and this can manifest itself in everything from the decisions it makes to the way it looks. Likewise, transparency in AI inferencing lineage may be necessary to ensure there is an audit trail back to the humans who created it.
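
As one way to picture that audit trail, here is a minimal sketch of inference-lineage logging, where every prediction is recorded with enough metadata to trace it back to a model version and an accountable human owner. The field names and the loan-screening example are hypothetical illustrations, not a reference to any particular standard.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class InferenceRecord:
    model_name: str
    model_version: str
    owner: str          # the accountable human or team
    input_hash: str     # hash only, so sensitive inputs aren't stored raw
    output: str
    timestamp: float

def log_inference(model_name: str, model_version: str, owner: str,
                  raw_input: str, output: str,
                  path: str = "inference_audit.log") -> None:
    # Append one JSON line per prediction; the log is the audit trail.
    record = InferenceRecord(
        model_name=model_name,
        model_version=model_version,
        owner=owner,
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        output=output,
        timestamp=time.time(),
    )
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage: a loan-screening model defers to a human reviewer.
log_inference("loan_screener", "2.3.1", "credit-risk-team",
              "applicant features for case 1042", "refer_to_human")
```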

In the end, of course, managing AI is a matter of trust. Organizations would be wise to treat AI just like any other employee: Give it a limited set of responsibilities, see how it performs, and then promote it to higher levels of authority after it has proven itself both capable and worthy. Just as you wouldn’t appoint a recent graduate as your new CIO, you shouldn’t place AI in charge of C-level sales management or HR.
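
That graduated-trust idea can be encoded directly. Below is a minimal sketch, with illustrative thresholds, in which a model’s decisions are routed to a human until its reviewed track record clears a bar, after which it is allowed to act on its own.

```python
class GatedAgent:
    """Route decisions to a human until the model earns autonomy."""

    def __init__(self, autonomy_threshold: float = 0.95,
                 min_reviewed: int = 1000):
        self.correct = 0              # human-confirmed correct decisions
        self.reviewed = 0             # total human-reviewed decisions
        self.autonomy_threshold = autonomy_threshold
        self.min_reviewed = min_reviewed

    @property
    def trusted(self) -> bool:
        # Promotion requires both a long track record and high accuracy.
        return (self.reviewed >= self.min_reviewed and
                self.correct / self.reviewed >= self.autonomy_threshold)

    def decide(self, model_decision: str) -> str:
        # Act autonomously only after proving itself; otherwise escalate.
        return model_decision if self.trusted else f"HUMAN_REVIEW({model_decision})"

    def record_review(self, was_correct: bool) -> None:
        self.reviewed += 1
        self.correct += int(was_correct)

agent = GatedAgent()
print(agent.decide("approve_discount"))  # -> HUMAN_REVIEW(approve_discount)
```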

While the risk of unintended consequences can never be reduced to zero, the odds are good that in some way, somehow, humans and AI will find a way to work together. And since both forms of intelligence provide strengths that compensate for the other’s weaknesses, it is more than likely their relationship will be mutually beneficial, not hostile.
