

Detroit: Become Human tells a fictional near-future story about a world where humans are served by lifelike androids. It shows us a future where human unemployment is high and many people have become resentful of the androids.

The sentient androids chafe at being servants and launch their own rebellion. It may sound no more realistic than The Terminator, but game director David Cage did a lot of research, starting with Ray Kurzweil’s seminal book, The Singularity Is Near. He tried to embed that research in the game world and make the scenario as plausible as he could.

He approached the challenge from a unique angle, asking what would happen if the humans were bad and the androids were good. I talked with him about this at the Gamelab conference in Barcelona.

I interviewed both Josef Fares and Cage about their approaches to storytelling in games, onstage at Gamelab. But this interview is a transcript of a separate conversation I had with Cage about a similar topic, where he fully explained his point of view.


Here’s an edited transcript of our interview.


Above: David Cage of Quantic Dream, creator of Heavy Rain, Beyond: Two Souls, and Detroit: Become Human.

Image Credit: Dean Takahashi

GamesBeat: VentureBeat does two conferences. We have the GamesBeat conference, and then we do an AI conference. The AI side is interesting to me because I wonder how seriously you guys researched the backstory for Detroit. How worried are you about what could happen with AI in the next couple of decades?

David Cage: I did a lot of research on the topic, because I’m personally interested. It started with an old book called The Singularity Is Near, by Ray Kurzweil. That was important to me because it made me realize that there will be a point in the future when machines are more intelligent than we are. It’s hard to say when this will arrive, but there’s no question about whether it’s going to happen. It’s going to happen for sure.

For me that was the starting point of Detroit. What will happen by then? There are two theories. The first one is that they’re just machines. Just because they have more power, that doesn’t mean they’ll develop consciousness, because consciousness is something other than brainpower. Then there’s another theory, which is that we’re just biological machines, and consciousness emerges from the power of our brains. If that’s true, it means there will be machines that also have a sense of consciousness. How will we react as a species when another species appears that’s intelligent, that has consciousness and emotions? What will be our position if that happens one day? That was the starting point.

But apart from that, I did a lot of research about AI to know exactly how it would work. I visited some science labs working on musical composition with AI. I was very impressed by one demo that I saw. They fed hours of music played by a jazz pianist, a very good improviser, into an algorithm. The algorithm analyzed the style, and then they had this soundtrack that started with the real human pianist, but at some point in the track it switched to the algorithm. It was horrifying, because you couldn’t tell the difference. The algorithm had analyzed the human’s style of playing so exactly that it could play in the same way.

We spent a lot of time exploring — would AI write books one day? Would they tell stories? Would they make music or create paintings? We put some of those ideas into the game.

GamesBeat: I’m always interested in what AI researchers say. They’re doing some interesting things, like building ethics panels for AI now. They didn’t start that way. Microsoft shut down one famous AI project. They’ve learned that they need to somehow govern or control AI. But they do seem to make fun of the fiction about AI, things like the Terminator. “Everyone always says we’re building Skynet, and that’s nothing close to reality.” I don’t have a sense of whether they see a threat in AI to take seriously, or whether they simply want to pursue it in the name of science.


Above: These are human cops, and they’re far from perfect.

Image Credit: Sony

Cage: There’s one story that I really love about AI. It’s based on a real fact, but it’s a bit embellished. There was this experiment where two AIs created their own language and started to talk to each other in a language that no human being could understand, because they made it up. The story was a bit embellished compared to the real facts, but I find it fascinating that we can create something that will learn by itself, learning things even we can’t understand. When you have machines that are capable of learning and developing skills on their own, that’s when you start to lose control.

So, am I worried about technology in general? Yes. I’m more worried about human beings than about machines, though. It’s not a coincidence that in Detroit, we made the choice that the good guys are the androids and the bad guys are the humans.

GamesBeat: Humans are always trying to outdo each other, and they compete to the point where they’ll cross the lines of what they should do.

Cage: The good thing about AI is it usually behaves in a rational way, which isn’t the case with a human being. I’m more concerned about our relationship with technology, and this is also one of the themes we developed in Detroit. How dependent have we become upon technology? How addicted are we to our phones? You can see families sometimes in a place like a restaurant where everyone’s checking their emails and their messages. They’re talking to people who aren’t there instead of talking to the people who are there around the table. That’s something that worries me.

I also think that technology is changing the way our brains work. We need more and more stimulation. We need messages. We need this little ping on our phone. “Oh, there’s a message I need to check, wait a second.” We’ve become totally dependent on technology. Instead of technology serving us, we start to serve technology.

GamesBeat: I read Dan Brown’s novel Origin as well. It’s funny, because it’s set here in Barcelona. But I thought he had an interesting theory: that humans are causing climate change and running the planet into the ground, and it might actually be hopeful if AI took over and limited the growth of the human race. He predicts that AI will grow to become the dominant species. That’s futuristic thinking, but it seems plausible in some ways.

Cage: It’s definitely plausible. There’s a character in Detroit who says, “Androids are humans, but perfect.” I don’t know if you could really say that. It’s science fiction. But at the same time, you understand how machines can be very pragmatic and very understanding of what needs to be done, making compromises and being reasonable, where human beings are driven by passion and emotion. That’s why we run this planet, but at the same time, it’s also why we may die from it, because we’re not capable of being reasonable and pragmatic about certain things.

Maybe the perfect president for a country would be an android, because they wouldn’t be vulnerable to corruption. They would work hard. They would make the best decisions for the majority of people. They wouldn’t just try to get re-elected. They wouldn’t lie to get the job. When you think about it, they might be a pretty good president.

GamesBeat: How plausible did you think it was that the onset of AI would cause this 30 to 40 percent unemployment?

Cage: That’s something I studied for Detroit as well, the social impact of technology. In my research I discovered that the same discussions happened when the first steam engines appeared. Everybody said, “Wait a second, but workers will be unemployed. It’s going to be a disaster.” We know now that this isn’t what happened, because we needed more people to design machines, maintain machines, and build other machines. The people who were doing the old low-paid jobs shifted to different positions. But there was no massive unemployment.

I tend to think that this is also what will happen with AI. We’ll see a new breed of jobs. They’ll be different, but machines will just replace us in the most common jobs, which we won’t need to do anymore.


Above: You say you want a revolution?

Image Credit: Sony

GamesBeat: That’s a case maybe where what you believe is different from what happened in the game.

Cage: In the game I was interested in raising that point. I thought, “Let’s play with this idea.” It’s also a plausible scenario, honestly. No one can be sure. But I imagined the opposite scenario, where there would be unemployment, and I was interested in the social impact of this, where rich people would have a job, but also have androids serving them, while poor people would have lost their jobs and couldn’t afford androids. Technology would create two classes of people – the people who benefit from technology and the people who suffer because of technology. That’s another scenario.

GamesBeat: I do think the whole game ultimately became a good articulation of the issues people should think about around AI.

Cage: I hope so. I wanted the game to be food for thought. I don’t pretend to have any answers to any of these questions. They’re very complex, and people are spending their lives studying and trying to foresee what’s happening. I don’t pretend to come with answers. But I came with a couple of questions. It’s interesting. Some people saw that, and most people didn’t. Most people just saw a story about androids. But some people saw the interesting questions in the game. I think they’re there.

GamesBeat: It was interesting to me that someone could create an AI story that wasn’t just the Terminator, right? That’s the sort of thing real scientists and researchers are more likely to shrug off as implausible. With something like this, you have to think about the implications.

Cage: Honestly, Detroit for me is much more a story about ourselves, about human beings, than about machines. It asks many questions about our dependency on technology and how we see our future, where we want to go with this society. Much more than just the future of AI. But it’s interesting, because AI inspires questions about ourselves.

This question of consciousness we talked about is very important. Machines are going to tell us what we are. If they develop some form of consciousness, it will mean we’re just a machine that evolution has developed to a point where it becomes powerful enough to have a consciousness. But if machines never develop consciousness, maybe that will mean we’re more than that. Then we’ll need to ask ourselves, “Then what? What are we?” It’s a fascinating question when you think about it.

GamesBeat: Do you think you’re finished with the subject, or do you want to do more around this?

Cage: Who knows? It’s a big subject, that’s for sure.

Disclosure: The organizers of Gamelab paid my way to Barcelona. Our coverage remains objective.

