Every day brings considerable AI news, from breakthrough capabilities to dire warnings. A quick scan of recent headlines shows both: an AI system said to predict dengue fever outbreaks up to three months in advance, and an opinion piece from Henry Kissinger arguing that AI will end the Age of Enlightenment. Then there’s the father of AI, who doesn’t believe there’s anything to worry about. Meanwhile, Robert Downey Jr. is developing an eight-part documentary series about AI to air on Netflix.

AI is more than just “hot”; it’s everywhere, a steamroller of a trend. But industry and government need to do more to ensure that AI is applied ethically.

Seeing through a glass, darkly

In his article in The Atlantic, Kissinger notes that until now the printing press was the most important technological breakthrough, providing the ability to capture and share empirical knowledge and creating a new world order. He views AI as an even more sweeping technological revolution and suggests we do not fully understand its consequences. The implication is that we can only imagine this future as if seeing through a glass, darkly. Given the potential power of AI, in weaponry and otherwise, it is clear why a former national security adviser and secretary of state would hold that view.

Certainly, others have sounded an alarm, though the distinction between narrow AI and artificial general intelligence (AGI) is often lost in the din of debate. Narrow AI is generally machine learning: algorithms that learn patterns from data in order to perform a specific task. There are many forms of machine learning, but all fall within the narrow AI category. AGI is also referred to as superintelligence, or the singularity: the point at which machine intelligence surpasses our own.

Every AI application that exists today, or is expected in the near term, is narrow (or weak) AI, designed to perform a single, well-defined task. This covers image and facial recognition, natural language processing, internet search, and autonomous vehicles. All the AI wizardry to date, whether AlphaGo beating the world’s best Go players or the growing list of applications in medical diagnostics, is narrow AI.
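To make the distinction concrete, here is a minimal sketch of what “learning a narrow task” typically means in practice: a model fits its parameters to labeled examples and becomes competent at exactly one job. The sketch is illustrative only, not drawn from any system mentioned in this article; it assumes Python with scikit-learn and uses that library’s bundled handwritten-digits dataset.

```python
# Narrow AI in miniature: a model that learns one task (recognizing
# handwritten digits) from labeled examples, and nothing else.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 8x8 grayscale images of the digits 0-9, each flattened to 64 features.
X, y = load_digits(return_X_y=True)

# Hold out a test set to measure how well the learned rule generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Learning" here is just fitting parameters that map pixels to labels.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The result is competent at this one narrow task and useless for any other.
print(f"Digit-recognition accuracy: "
      f"{accuracy_score(y_test, model.predict(X_test)):.2%}")
```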

For all the benefits we associate with narrow AI, the dangers are equally well documented: social dislocation and lost jobs, advanced weaponry threats, increased cybersecurity risks, and “deepfakes” that erode what’s left of trust. As valid as these concerns are, AGI raises still greater ones. As Elon Musk puts it, narrow AI is “not a fundamental species level risk, whereas digital super intelligence is.”

AI could end critical thought

Kissinger worries that our drive for AI is seemingly unstoppable, what Kevin Kelly refers to as inevitable. Beyond that, Kissinger sees this as “a potentially dominating technology in search of a guiding philosophy”:

Philosophers and others in the field of the humanities who helped shape previous concepts of world order tend to be disadvantaged, lacking knowledge of AI’s mechanisms or being overawed by its capacities. In contrast, the scientific world is impelled to explore the technical possibilities of its achievements, and the technological world is preoccupied with commercial vistas of fabulous scale.

More specifically, Kissinger sees AI development leading to a “transformation of the human condition.” Is this a euphemism for Musk’s species-level risk? It could be, but perhaps there are multiple levels to this.

Musk seems to be talking about essential survival. In his way of thinking, echoed by many others, an intelligence greater than ours could very well regard humans as irrelevant. Superintelligent machines could then decide to ignore us, or to kill us. But Kissinger sees a more imminent danger, one posed even by narrow AI, that is just as grave. Once machines can solve complex, seemingly abstract problems, would people lose the incentive to study and learn? Worse, would we lose the capacity for critical thought? He writes that by trying to emulate the math within an AI or simply accepting the results of the algorithm, “we are in danger of losing the capacity that has been the essence of human cognition.”

Which is to say: AI thinks, so I don’t have to. And if I don’t think, who am I? This speaks to our eternal quest for meaning and gets at the imminent existential threat posed even by narrow AI.

The Age of Thinking Machines

If this is the end of the Enlightenment, what comes next? Perhaps it is the Age of Thinking Machines, with a nod to Ray Kurzweil and The Age of Intelligent Machines, his 1990 anthology on machine intelligence. To shape this next age, Kissinger argues for a presidential commission of eminent thinkers to help develop a national vision. “This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.”

No doubt he is right that our collective brainpower should be just as focused on the implications of AI as on its practical application and implementation. Some in the industry are already doing this as part of their ongoing AI work, including setting ethical and philosophical boundaries. But ad hoc measures may not be enough, as not everyone is equally conscientious or rigorous. Kissinger is right to call for leadership at the national and international levels, with a focus on what is good for society at least as much as what is good for business.

Both Homo sapiens and Homo technicus

In the meantime, amid these legitimate concerns, the day-to-day benefits of narrow AI continue to mount, and the overall AI push is gaining momentum. According to a recently published study by Ernst & Young, organizations that have enabled AI at the enterprise level are increasing operational efficiency, making more informed decisions more quickly, and innovating new products and services.

It’s not just the enterprise where AI marvels are on display, as recent AI-generated paintings show. Multiple scientific fields are similarly finding value; for example, researchers have applied AI to develop a new way to look inside living human cells, allowing them to see the whole cell for the first time. On the technical innovation front, Google Duplex is perhaps the most significant recent advancement, as it demonstrates a leap in the ability of machines to understand and speak human language. It is easily the most natural-sounding bot yet developed, and it heralds the day when human-to-machine voice interaction is an everyday reality.

As the software rapidly develops, the pressure to use it to drive greater advantage — whether for commercial or military purposes — will grow apace. In his book The Master Algorithm, computer scientist and University of Washington professor Pedro Domingos assures us that “humans are not a dying twig on the tree of life. On the contrary, we are about to start branching. In the same way that culture coevolved with larger brains, we will co-evolve with our creations. We always have: Humans would be physically different if we had not invented fire or spears. We are Homo technicus as much as Homo sapiens.”

Hopefully Domingos is right, though providing guardrails for AI development seems only prudent, lest we drive ourselves over a cliff. Never has a technology so profound been surrounded by so much uncertainty.

Gary Grossman is a futurist and public relations and communications marketing executive with Edelman.
