Facebook might get chatbots — and that could be a problem
Mar 30, 2026 6:45 PM

  Facebook owner Meta is planning to introduce chatbots with distinct personalities to its social media app. The launch could come as soon as this September and would be a challenge to rivals like ChatGPT, but there are concerns that there could be serious implications for users’ privacy.

Contents

Privacy concerns
A big risk

News of the plan comes from the Financial Times, which reports that the move is an attempt to boost engagement among Facebook users. The new tool could do this by providing fresh search capabilities or recommending content, all through humanlike conversations.

Brett Johnson / Unsplash

According to sources cited by the Financial Times, the chatbots will take on different personas, including “one that emulates Abraham Lincoln and another that advises on travel options in the style of a surfer.”


  This wouldn’t be the first time we’ve seen chatbots take on their own personalities or converse in the style of famous people. The Character.ai chatbot, for example, can adopt dozens of different personalities, including those of celebrities and historical figures.


  

Privacy concerns

Josh Edelson / Getty Images / Meta

Despite the promise Meta’s chatbots could show, fears have also been raised over the amount of data they are likely to collect, especially given Facebook’s abysmal record of protecting user privacy.

Ravit Dotan, an AI ethics adviser and researcher, was quoted by the Financial Times as saying, “Once users interact with a chatbot, it really exposes much more of their data to the company, so that the company can do anything they want with that data.”

  This not only raises the prospect of far-reaching privacy breaches but allows for the possibility of “manipulation and nudging” of users, Dotan added.

  

A big risk

Meta

Other chatbots like ChatGPT and Bing Chat have a history of “hallucinations,” moments where they confidently present incorrect information, or even misinformation. The potential damage caused by misinformation and bias could be much greater on Facebook, which has nearly four billion users, than on rival chatbot platforms.

  Meta’s past attempts at chatbots have fared poorly, with the company’s BlenderBot 2 and BlenderBot 3 both quickly devolving into misleading content and inflammatory hate speech. That might not give users much hope for Meta’s latest effort.

With September fast approaching, we might not have long to wait to see whether Facebook can surmount these hurdles, or whether we will get another hallucination-riddled launch like those suffered elsewhere in the industry. Whatever happens, it will be worth watching.
