The technology behind ChatGPT has been around for several years without drawing much notice. It was the addition of a chatbot interface that made it so popular. In other words, it wasn’t a development in AI per se but a change in how the AI interacted with people that captured the world’s attention.
Very quickly, people started thinking about ChatGPT as an autonomous social entity. This is not surprising. As early as 1996, Byron Reeves and Clifford Nass looked at the personal computers of their time and found that “equating mediated and real life is neither rare nor unreasonable. It is very common, it is easy to foster, it does not depend on fancy media equipment, and thinking will not make it go away.” In other words, people fundamentally expect technology to behave and interact like a human being, even when they know it is “only a computer.” Sherry Turkle, an MIT professor who has studied AI agents and robots since the 1990s, stresses the same point and claims that lifelike forms of communication, such as body language and verbal cues, “push our Darwinian buttons”—they have the ability to make us experience technology as social, even if we understand rationally that it is not.
If these scholars saw the social potential—and risk—in decades-old computer interfaces, it’s reasonable to assume that ChatGPT has a similar, and probably stronger, effect. It uses first-person language, retains context, and provides answers in a compelling, confident, and conversational style. Bing’s implementation of ChatGPT even uses emojis. This is quite a step up on the social ladder from the more technical output one would get from searching, say, Google.
Critics of ChatGPT have focused on the harms that its outputs can cause, like misinformation and hateful content. But there are also risks in the mere choice of a social conversational style and in the AI’s attempt to emulate people as closely as possible.
The Risks of Social Interfaces
New York Times reporter Kevin Roose got caught up in a two-hour conversation with Bing’s chatbot that ended with the chatbot declaring its love for him, even though he repeatedly asked it to stop. That kind of emotional manipulation can be highly disturbing for the user, and it would be even more harmful to vulnerable groups, such as teenagers or people who have experienced harassment. Using human terminology and emotional signals, like emojis, is also a form of emotional deception. A language model like ChatGPT does not have emotions. It does not laugh or cry. It doesn’t even understand the meaning of such actions.
Emotional deception in AI agents is not only morally problematic; humanlike design also makes such agents more persuasive. Technology that acts in humanlike ways is more likely to persuade people to act, even when its requests are irrational, come from a faulty AI agent, or arrive in an emergency. That persuasiveness is dangerous because companies can exploit it in ways users do not want or do not even know about, from convincing them to buy products to influencing their political views.
As a result, some have taken a step back. Robot design researchers, for example, have promoted non-humanlike designs as a way to lower people’s expectations for social interaction. They suggest alternatives that do not replicate the ways people interact with one another, thereby setting more appropriate expectations of the technology.
Defining Rules
Some of the risks of social interactions with chatbots can be addressed by designing clear social roles and boundaries for them. Humans choose and switch roles all the time: the same person can move back and forth between their roles as parent, employee, or sibling. With each switch, the context and the expected boundaries of the interaction change too. You wouldn’t use the same language when talking to your child as you would when chatting with a coworker.
In contrast, ChatGPT exists in a social vacuum. Although there are some red lines it tries not to cross, it doesn’t have a clear social role or area of expertise, nor a specific goal or predefined intent. Perhaps this was a conscious choice by OpenAI, the creators of ChatGPT, to promote a multitude of uses or a do-it-all entity. More likely, it reflects a lack of understanding of the social reach of conversational agents. Whatever the reason, this open-endedness sets the stage for extreme and risky interactions. The conversation could go in any direction, and the AI could take on any social role, from efficient email assistant to obsessive lover.
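For developers building on top of these models, one way to make such a role concrete is to state it, along with its boundaries, at the start of every conversation. The sketch below is only an illustration of that idea, not a prescription from any vendor: it assumes OpenAI’s Python SDK (v1 or later), an API key in the environment, and a hypothetical scheduling-assistant role and model name chosen purely for the example.

```python
# A minimal sketch of giving a chatbot a narrow social role and explicit
# boundaries through a system message. Assumes the OpenAI Python SDK (v1+)
# and an OPENAI_API_KEY in the environment; the role text and model name
# are illustrative assumptions, not recommendations.
from openai import OpenAI

client = OpenAI()

ROLE_DEFINITION = (
    "You are an appointment-scheduling assistant for a dental clinic. "
    "Only help with appointments, opening hours, and directions. "
    "Do not claim to have feelings, use terms of endearment, or discuss "
    "personal topics. If asked to step outside this role, decline politely "
    "and restate what you can help with."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model would do
    messages=[
        {"role": "system", "content": ROLE_DEFINITION},
        {"role": "user", "content": "Can I move my cleaning to next Tuesday?"},
    ],
)

print(response.choices[0].message.content)
```

A role definition like this does not guarantee the model will stay in character, but it narrows the space of plausible conversations and gives users a clearer sense of what the agent is for.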