Detroit: Become Human tells a fictional near-future story about a world where humans are served by lifelike androids. It shows us a future where human unemployment is high and many people have grown resentful of the androids.

The sentient androids chafe at being servants and launch a rebellion. It may sound no more realistic than The Terminator, but game director David Cage did a lot of research, starting with Ray Kurzweil's seminal book, The Singularity Is Near. He tried to embed that research in the game world and make the scenario as plausible as he could.

He approached the challenge from an unusual angle, asking: what if the humans were bad and the androids were good? I talked with him about this at the Gamelab conference in Barcelona.

I interviewed both Josef Fares and Cage about their approaches to storytelling in games onstage at Gamelab. But this is a transcript of a separate conversation I had with Cage on a similar topic, where he explained his point of view more fully.

Here’s an edited transcript of our interview.


Above: David Cage of Quantic Dream, creator of Heavy Rain, Beyond: Two Souls, and Detroit: Become Human.

Image Credit: Dean Takahashi

GamesBeat: VentureBeat does two conferences. We have the GamesBeat conference, and then we do an AI conference. The AI side is interesting to me because I wonder how seriously you guys researched the backstory for Detroit. How worried are you about what could happen with AI in the next couple of decades?

David Cage: I did a lot of research on the topic, because I'm personally interested. It started with an old book called The Singularity Is Near, by Ray Kurzweil. That was important to me because it made me realize that there will be a point in the future when machines are more intelligent than we are. It's hard to say when this will arrive, but there's no question about whether it's going to happen. It's going to happen for sure.

For me that was the starting point of Detroit. What will happen by then? There are two theories. The first one is that they're just machines. Just because they have more power, that doesn't mean they'll develop consciousness, because consciousness is something other than brainpower. Then there's another theory, which is that we're just biological machines, and consciousness emerges from the power of our brains. If that's true, it means there will be machines that also have a sense of consciousness. How will we react as a species when another species appears that's intelligent, that has consciousness and emotions? What will be our position if that happens one day? That was the starting point.

But apart from that I did a lot of research about AI to know exactly how it would work. I visited some science labs working on musical composition with AI. I was very impressed by one demo that I saw. They fed hours of music played by a jazz pianist, a very good improviser, into an algorithm. The algorithm analyzed the style, and then they had this soundtrack that started with the real human pianist, but at some point in the track it switched to the algorithm. It was horrifying, because you couldn't tell the difference. The algorithm had analyzed the human's style of playing exactly and could play in the same way.

We spent a lot of time exploring — would AI write books one day? Would they tell stories? Would they make music or create paintings? We put some of those ideas into the game.

GamesBeat: I’m always interested in what AI researchers say. They’re doing some interesting things now, like building ethics panels for AI. They didn’t start that way. Microsoft shut down one of its famous AI projects. They’ve learned that they need to somehow govern or control AI. But they do seem to make fun of the fiction about AI, things like The Terminator. “Everyone always says we’re building Skynet, and that’s nothing close to reality.” I don’t have a sense of whether they see AI as a threat to take seriously, or whether they simply want to pursue it in the name of science.


Above: These are human cops, and they’re far from perfect.

Image Credit: Sony

Cage: There’s one story that I really love about AI. It’s based on a real event, but it’s been embellished a bit. There was an experiment where two AIs created their own language and started talking to each other in a language that no human being could understand, because they made it up. The real facts were a bit less dramatic than the story, but I find it fascinating that we can create something that will learn by itself, learning things even we can’t understand. When you have machines that are capable of learning and developing skills on their own, that’s when you start to lose control.

So, am I worried about technology in general? Yes. I’m more worried about human beings than about machines, though. It’s not a coincidence that in Detroit, we made the choice that the good guys are the androids and the bad guys are the humans.

GamesBeat: Humans are always trying to outdo each other, and they compete to the point where they’ll cross lines they shouldn’t.

Cage: The good thing about AI is that it usually behaves in a rational way, which isn’t the case with a human being. I’m more concerned about our relationship with technology, and this is also one of the themes we developed in Detroit. How dependent have we become on technology? How addicted are we to our phones? You sometimes see families in a restaurant where everyone’s checking their email and their messages. They’re talking to people who aren’t there instead of talking to the people around the table. That’s something that worries me.

I also think that technology is changing the way our brains work. We need more and more stimulation. We need messages. We need this little ping on our phone. “Oh, there’s a message I need to check, wait a second.” We’ve become totally dependent on technology. Instead of technology serving us, we start to serve technology.

GamesBeat: I read Dan Brown’s novel Origin as well. It’s funny, because it’s set here in Barcelona. But I thought he had an interesting theory: that humans are causing climate change and running the planet into the ground, and it might actually be hopeful if AI takes over and limits the growth of the human race. He makes the prediction that AI will grow to become the dominant species. That’s futuristic thinking, but it seems plausible in some ways.

Cage: It’s definitely plausible. There’s a character in Detroit who says, “Androids are humans, but perfect.” I don’t know if you could really say that. I