

We’re at the dawn of the fourth industrial revolution. The choices we make today about how to govern AI will determine whether our future resembles dystopian science fiction or something akin to Star Trek, where trustworthy AIs help bring out the very best in humanity. In the past, when technologists and business leaders spoke of “digital transformation,” they had mobile applications or cloud computing in mind. Now they mean AI. But AI is a powerful tool that affects people and their trust, and we are only beginning to understand the profound responsibility that comes with these self-learning systems.

Fear of disruptive, confusing, offensive, and even dangerous AI led the city of San Francisco last month to ban the use of facial recognition technology by law enforcement. While bans aren’t the answer, it’s hard to fault San Francisco’s leaders. The unrestrained use of AI will erode democratic rights and individual liberties. And as one member of the city’s Board of Supervisors explained, “We have an outsize responsibility to regulate the excesses of technology precisely because they are headquartered here.”

Those closest to AI are in the best position to understand the threat it poses when deployed without ethics. Software development in general has long needed better global ethical standards, but with AI the need is greater because the stakes are greater. A study of 1,010 UK tech workers found that 90 percent consider technology a force for good, yet 59 percent of those working in AI said they had worked on projects they felt might be harmful to society; 18 percent quit their jobs over ethical concerns.

But the question isn’t whether we should ban AI — we shouldn’t, and arguably it’s impossible to put the genie back in the bottle. We need to ask how governments can identify an ethical way forward that embraces progress while safeguarding citizens.

Citizens will expect governments to keep pace with technological innovation when it comes to public functions such as healthcare, policing, and taxation. At the same time, governments will have to ensure that private sector AI solutions comply with the values of civil society. In both cases, the key is for governments to model best practices by establishing ethics councils that can build trust in AI. Specifically, trust is built by making AI’s decision-making functions transparent enough that the general public can see how the technology aligns with societal values.

Canada’s Directive on Automated Decision-Making offers a leading model for governments looking to create a framework for transparent AI. Using a tool called the Algorithmic Impact Assessment (AIA), the Canadian government has created a process for evaluating the use of AI in both the public and private sectors.

The AIA tool brings together all relevant stakeholders, including the public, to ask a range of important questions, such as:

  • What does the AI do?
  • What types of decisions result from this AI program? Food or product safety? Crime prevention or investigation? Licensing or permitting?
  • Are the decisions based on objective criteria, or is there room for discretion and subjectivity?
  • Have all stakeholders been consulted?
  • Is there a recourse mechanism in case people have questions or complaints?
  • What are the processes you’ve put in place?
  • How explainable are machine determinations?
  • Are you monitoring for unintentional outcomes?
  • Do you have a clear audit trail around decisions?

The answers to these questions (along with many others) are scored across various categories, and those raw scores are made available to the public so that citizens can assess the tradeoffs between innovation and risk.
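To make the scoring step concrete, here is a minimal sketch in Python of how an AIA-style questionnaire might convert answers into a published risk rating. The question names, weights, and thresholds below are illustrative assumptions for this example; the real AIA defines its own question set and rubric, though it does map scores onto impact levels from I to IV.

```python
# A minimal sketch of how an AIA-style questionnaire might be scored.
# The questions, weights, and impact-level thresholds are illustrative
# assumptions, not the actual rubric of Canada's Algorithmic Impact
# Assessment.

# Each answer contributes a weighted risk score; mitigations subtract.
ANSWERS = {
    "decision_type": ("crime_prevention", 3),  # higher-stakes domains score higher
    "discretion": ("fully_automated", 3),      # no human in the loop
    "recourse_mechanism": ("yes", -1),         # a complaint process reduces risk
    "explainability": ("partial", 2),
    "outcome_monitoring": ("no", 2),
    "audit_trail": ("yes", -1),
}

def raw_score(answers: dict) -> int:
    """Sum the weighted contributions of every answer."""
    return sum(weight for _, weight in answers.values())

def impact_level(score: int) -> str:
    """Map a raw score onto a coarse impact level (thresholds assumed)."""
    if score <= 2:
        return "Level I (little to no impact)"
    if score <= 5:
        return "Level II (moderate impact)"
    if score <= 8:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

score = raw_score(ANSWERS)
print(f"Raw score: {score} -> {impact_level(score)}")
```

Publishing the raw answers alongside the computed level, as Canada does with completed assessments on its open government portal, lets the public audit the tradeoff rather than take the rating on faith.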

Grappling with these questions is difficult work for government regulators, technologists, private companies, and most of all, individual citizens. But abdicating our collective responsibility — whether through outright bans or willful ignorance — isn’t an option.

To ensure that social good remains our North Star, governments must lead boldly and move at the speed of the technological revolution, so that long-held values are woven into the DNA of tomorrow’s innovations.

Manoj Saxena is Executive Chairman of CognitiveScale and Chairman of AI Global.

