Progress in technology and increased levels of private investment in startup AI companies are accelerating, according to the 2021 AI Index, an annual study of AI impact and progress developed by an interdisciplinary team at the Stanford Institute for Human-Centered Artificial Intelligence. Indeed, AI is showing up just about everywhere. In recent weeks, there have been stories of how AI is used to monitor the emotional state of cows and pigs, dodge space junk in orbit, teach American Sign Language, speed up assembly lines, win elite crossword puzzle tournaments, assist fry cooks with hamburgers, and enable “hyperautomation.” Soon there will be little left for humans to do beyond writing long-form journalism — until that, too, is replaced by AI. The text generation engine GPT-3 from OpenAI is potentially revolutionary in this regard, leading a New Yorker essay to claim: “Whatever field you are in, if it uses language, it is about to be transformed.”

AI is marching forward, and its wonders are increasingly evident and widely applied. But the outcome of an AI-forward world is still up for debate. At present, that debate focuses primarily on data privacy and on how bias can harm different social groups. Another, potentially greater, concern is that we are becoming dangerously dependent on our smart devices and applications. This reliance could leave us less inquisitive and more inclined to accept whatever information we are given as accurate and authoritative.

Or that, as in the animated film WALL-E, we will be glued to screens, distracted by mindless entertainment, literally and figuratively fed empty calories without lifting a finger while an automated economy carries on without us. In this increasingly plausible near-future scenario, people will move through life on autopilot, just like our cars. Or perhaps we have already arrived at such a place.

Caption: Humans on the Axiom spaceship in Pixar’s WALL-E

Welcome to Humanity 2.0

If smartphone use is any indication, there is cause for worry. Nicholas Carr wrote in The Wall Street Journal about research suggesting our intellect weakens as our brain grows dependent on phone technology. The same could likely be said of any information technology that delivers content to us without requiring us to work to learn or discover it on our own. If that’s true, then AI applications, which increasingly present content tailored to our specific interests, could create a self-reinforcing syndrome that not only locks us into our information bubbles through algorithmic editing, but also weakens our ability to engage in critical thought by spoon-feeding us what we already believe.

Tristan Greene argues that humans are already cyborgs, human-machine hybrids heavily dependent on information flowing from the Internet. During a weekend without that constant connection, he writes: “I found myself having difficulty thinking. By Sunday evening I realized that I use the almost-instant connection I have with the internet to augment my mental abilities almost constantly. … I wasn’t aware of how much I rely on the AI-powered internet as a performance aid.” This is perhaps why Elon Musk believes we will need to augment our brains with instantaneous access to the Internet to compete effectively with AI, the initial rationale behind his Neuralink brain-machine interface company.

The AI revolution could be different

I’ve read many analyses from AI pundits arguing that AI will be no different from other technology innovations, such as the transition from the horse economy to the automobile. Usually these arguments are made in the context of AI’s impact on jobs and conclude that there will be social displacement in the short term for some but long-term growth for the collective whole. The thinking is that new industries will birth new jobs and malleable people will adapt.

But there is a fundamental difference with the AI revolution. Previous revolutions involved replacing brute force with labor-saving automation. AI is different in that it outsources more than physical labor; it also outsources cognition: thinking and decision-making. Shaun Nichols, professor in the Sage School of Philosophy at Cornell University, said in a recent panel discussion on AI: “We already outsource ethically important decisions to algorithms. We use them for kidney transplants, air traffic control, and to determine who gets treated first in emergency rooms.” As the World Economic Forum puts it, we are increasingly subject to decisions made with the assistance of, or even entirely by, AI systems.

Are we losing our agency?

Algorithms now shape our thoughts and increasingly make decisions on our behalf. Wittingly or not, AI is doing so much for us that some are dubbing it an “intelligence revolution,” which forces the question: have we already become de facto cyborgs, and if so, do we still have agency? Agency is the power of humans to think for ourselves and act in ways that shape our experiences and life trajectories. Yet the algorithms driving search, social media platforms, and book and movie recommendations regularly shape what billions of people read and see. If this were thoughtfully curated for our betterment, it might be okay. But as film director Martin Scorsese states, their purpose is only to increase consumption.

It seems we have already outsourced agency to algorithms designed to increase corporate well-being. This may not be overtly malicious, but it is hardly benign. Our thoughts are being molded, either by our existing beliefs being reinforced by algorithms that infer our interests, or by intentional and unintentional biases built into the various information platforms. In other words, our ability to think critically is both constrained and shaped by the very systems meant to aid and, hopefully, stimulate our thinking. We are entering a recursive loop where thinking coalesces into ever tighter groupings — the often-discussed polarization — that reduce variability and hence the diversity of opinion.
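To make that recursive loop concrete, here is a minimal, hypothetical Python sketch. The update rule and every parameter (BUBBLE_WIDTH, NUDGE, DRIFT, and so on) are assumptions made purely for illustration, not a description of any real recommendation system: it simply shows how a recommender that only serves content near a user’s current view, with a slight bias toward the side the user already leans, can pull a broad spread of opinions into tight, polarized camps.

```python
import random

# A minimal, hypothetical sketch of a recommendation feedback loop.
# Opinions live on a scale from -1.0 to 1.0. The "recommender" only serves
# content close to each user's current view, nudged slightly toward whichever
# side the user already leans, and the user then drifts toward that content.
# Every parameter here is an illustrative assumption, not a model of any real platform.

random.seed(7)

NUM_USERS = 1000
ROUNDS = 100
BUBBLE_WIDTH = 0.2   # recommender only surfaces content this close to the user's view
NUDGE = 0.1          # engagement bias toward the side the user already leans
DRIFT = 0.1          # how strongly a user moves toward recommended content


def bin_counts(opinions):
    """Count users in four opinion bands, from strongly negative to strongly positive."""
    counts = [0, 0, 0, 0]
    for o in opinions:
        if o < -0.5:
            counts[0] += 1
        elif o < 0.0:
            counts[1] += 1
        elif o < 0.5:
            counts[2] += 1
        else:
            counts[3] += 1
    return counts


# Start with opinions spread broadly across the spectrum.
opinions = [random.uniform(-1.0, 1.0) for _ in range(NUM_USERS)]
print("before:", bin_counts(opinions))

for _ in range(ROUNDS):
    updated = []
    for o in opinions:
        # Serve content from inside the user's bubble ...
        content = o + random.uniform(-BUBBLE_WIDTH, BUBBLE_WIDTH)
        # ... exaggerated slightly toward the nearest extreme.
        content += NUDGE if content > 0 else -NUDGE
        content = max(-1.0, min(1.0, content))
        # The user's opinion drifts toward what was consumed.
        updated.append(o + DRIFT * (content - o))
    opinions = updated

print("after: ", bin_counts(opinions))
```

Run the sketch and the two middle opinion bands tend to empty out while the outer bands fill, a toy version of the tightening, polarized clusters described above.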

It is as if we are the subjects in a grand social science experiment, with the resulting clusters of human opinion determined by AI-powered inputs and the outputs discerned by machine learning. This is qualitatively different from an augmentation of intelligence; instead it augurs a merger of humans and machines that is creating the ultimate groupthink.

It has never been easy to confront large societal problems, but they will become more challenging still if humanity continues down the path of outsourcing its thinking to algorithms that do not serve our collective best interests. All of which raises the question: do we control AI technology, or are we already being controlled by it?

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.

