Well, AlphaGo has been superseded several times. But what was interesting about AlphaGo’s success against Lee Sedol, the Korean Go champion, a few years ago, was the manner in which it won. It played in a completely different style. It made moves that made people fall about laughing, but the seemingly hilarious, idiotic moves proved to be the sensational ones. And I think that opened the possibility of all kinds of things.

I remember raising the question with a very leading AI expert about whether there could be a program that could write novels. Not just novels that would pass some sort of Turing test, but novels that would really move people or make people cry. I thought that was interesting.

What did the expert say?

Well, OK, I was talking to Demis Hassabis [cofounder of DeepMind], and he was quite interested in this idea. We talked about it over a number of conversations, and I think the key question here is: Can AI actually achieve that empathy, understanding human emotions well enough to control them through something like a work of art?

Once it gets to the point where an AI program, AlphaTolstoy or whatever, can actually make me laugh and cry and see the world in a different way, I think we’ve reached an interesting point, if not quite a dangerous one. Never mind Cambridge Analytica. If it can do that to me, then it understands human emotions well enough to run a political campaign. It can identify the frustrations, angers, and emotions in the nation, or in the world at large, and know what to do with that.

The novel also considers how a person’s personality might be captured and re-created algorithmically. Why are you interested in that?

Klara and the Sun just accepts a world in which big data and algorithms have become so much a part of our lives. And in that world, human beings are starting to look at each other in a different way. Our assumptions about what a human individual is and what’s inside each unique human individual, what makes them unique, are a little bit different, because we live in a world where we see all these possibilities of excavating and mapping out people’s personalities.

Is that going to change our feelings toward each other, particularly when we’re under pressure? When you actually face the prospect of losing somebody you love, I think then you really, really start to ask that question, not just intellectually but emotionally. What does this person mean? What is this loss? What kind of strategies can I put in place to defend myself from the pain?

I think the question becomes something very, very real then. It’s not just an abstract philosophical question about, you know, the ghost in the machine, whether you have some sort of traditional religious idea of a soul or a more modern idea of a set of things that can be reduced to algorithms, albeit vast and complicated ones.

So it becomes a very human and very emotional question. What the hell is a human being? What’s inside their mind, and how irreplaceable is any one human? Those are the questions that, as a novelist, I’m interested in.

Artificial intelligence isn’t yet close to this. Should we still worry about what it can do?

In general, that question, about human oversight, is one that we need to be thinking about right now. In the popular discourse, the debate seems to revolve around whether the robots are going to take us over, a kind of crazy zombie-vampire scenario, except featuring sophisticated AI robots. That might be a serious concern, I don’t know, but it’s not one of the things that I’m particularly worried about. I think there are other things much closer to our doorstep that we have to worry about.

The nature of this generation of machine learning, which I understand is called reinforcement learning, is quite different to the old forms [of AI], never mind just programming a computer. You give an AI program a goal, and then we kind of lose control of what it does thereafter. And so I think there is this issue of how we could hardwire the prejudices and biases of our age into black boxes that we won’t be able to unpack. Ideas that seemed perfectly decent and normal a few years ago we now object to as grossly unjust, or worse. But we can go back on them because we can actually see how they were made. What about when we become very dependent on recommendations, advice, and decisions made by AI?