Facebook owner Meta is planning to introduce chatbots with distinct personalities to its social media app. The launch could come as soon as this September and would be a challenge to rivals like ChatGPT, but there are concerns that there could be serious implications for users’ privacy.

The report comes from the Financial Times, which says the move is an attempt to boost engagement among Facebook users. The new tool could do this by providing fresh search capabilities or recommending content, all through humanlike conversations.

The Facebook app icon on an iPhone home screen, with other app icons surrounding it. Brett Johnson / Unsplash

According to sources cited by the Financial Times, the chatbots will take on different personas, including “one that emulates Abraham Lincoln and another that advises on travel options in the style of a surfer.”

This wouldn’t be the first time we’ve seen chatbots take on their own personalities or converse in the style of famous people. The Character.ai chatbot, for example, can adopt dozens of different personalities, including those of celebrities and historical figures.

Privacy concerns

Mark Zuckerberg. Josh Edelson / Getty Images / Meta

Despite the promise Meta’s chatbots could show, fears have also been raised over the amount of data they will likely collect — especially considering Facebook has an abysmal record at protecting user privacy.

Ravit Dotan, an AI ethics adviser and researcher, was quoted by the Financial Times as saying “Once users interact with a chatbot, it really exposes much more of their data to the company, so that the company can do anything they want with that data.”

This not only raises the prospect of far-reaching privacy breaches but allows for the possibility of “manipulation and nudging” of users, Dotan added.

A big risk

A Meta Connect 2022 screenshot showing Mark Zuckerberg's avatar. Meta

Other chatbots, such as ChatGPT and Bing Chat, have a history of “hallucinations” — moments where they share incorrect information, or even misinformation. The potential damage caused by misinformation and bias could be much greater on Facebook, which has nearly four billion users, than on rival chatbots.

Meta’s past attempts at chatbots have fared poorly, with both BlenderBot 2 and BlenderBot 3 quickly devolving into misleading content and inflammatory hate speech. That history might not give users much hope for Meta’s latest effort.

With September fast approaching, we may not have long to wait to see whether Facebook can surmount these hurdles, or whether we will get another hallucination-riddled launch akin to those suffered elsewhere in the industry. Whatever happens, it will be interesting to watch.
