Welcome to AI This Week, Gizmodo’s weekly roundup where we do a deep dive on what’s been happening in artificial intelligence.

This week, Forbes reported that a Russian spyware company called Social Links had begun using ChatGPT to conduct sentiment analysis, the creepy practice by which cops and spies collect and analyze social media data to gauge how web users feel about things. It's one of the sketchier use cases for the little chatbot to emerge yet.

Social Links, which was previously kicked off Meta’s platforms for alleged surveillance of users, showed off its unconventional use of ChatGPT at a security conference in Paris this week. The company weaponized the chatbot’s knack for text summarization and analysis to trawl through large chunks of data and digest them quickly. In a demonstration, the company fed data collected by its own proprietary tool into ChatGPT; the chatbot then analyzed the data, which related to online posts about a recent controversy in Spain, and rated the posts “as positive, negative or neutral, displaying the results in an interactive graph,” Forbes writes.
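
To make that workflow concrete, here’s a minimal sketch of what an LLM-based sentiment pass over scraped posts might look like, written with the OpenAI Python client. The model name, prompt, and sample posts are hypothetical stand-ins; Social Links’ proprietary collection tool and actual pipeline aren’t public.

```python
# Hypothetical sketch of LLM sentiment tagging; not Social Links' actual pipeline.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for posts gathered by a separate collection tool.
posts = [
    "This ruling is a disgrace.",
    "Honestly, I think it was the right call.",
    "Does anyone know when the next vote happens?",
]

def classify(post: str) -> str:
    """Ask the model for a one-word sentiment label."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Label the sentiment of the user's post with exactly one "
                        "word: positive, negative, or neutral."},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content.strip().lower()

# Tally labels across the batch, the kind of aggregate a dashboard would graph.
tally = Counter(classify(p) for p in posts)
print(tally)
```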

Obviously, privacy advocates have found this more than a little disturbing—not merely because of this specific case, but for what it says about how AI could escalate the powers of the surveillance industry in general.

Rory Mir, associate director of community organizing with the Electronic Frontier Foundation, said that AI could help law enforcement broaden their surveillance efforts, allowing smaller teams of cops to surveil larger groups with ease. Already, police agencies frequently use fake profiles to embed themselves in online communities; this kind of surveillance has a chilling effect on online speech, Mir said. He added: “The scary thing about stuff like ChatGPT is that they can scale up that kind of operation.” AI can make it “easier for cops to run analysis quicker” on the data they collect during these undercover operations, meaning that “AI tools are [effectively] enabling” online surveillance, he added.

Mir also noted a glaring problem with this kind of use of AI: chatbots have a pretty bad track record of messing up and delivering bad results. “AI is really concerning in high-stakes scenarios like this,” Mir said. “It’s one thing to have ChatGPT read a draft of your article so that you can ask it ‘How acceptable is this?’ But when it moves into the territory of, say, determining if somebody gets a job, or gets housing, or, in this case, determines whether someone gets undue attention from police or not, that is when those biases become, not just a thing to account for, but a reason not to use it in that way [at all].”

Mir added that the “black box” of AI training data means that it’s hard to be sure whether the algorithm’s response will be trustworthy or not. “I mean, this stuff is trained on Reddit and 4chan data,” he chuckled. “So the biases that come from that underlying data are going to reappear in the mosaic of its outputs.”

Question of the day: WTF did Sam Altman do?

Photo: Justin Sullivan (Getty Images)

In what has to be one of the most shocking upsets in recent tech history, Sam Altman has been ousted from his position as CEO of OpenAI. On Friday, the company released a statement announcing an abrupt leadership transition: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.” In the immediate power vacuum opened up by this shocking turn of events, the board has apparently selected Mira Murati, the company’s chief technology officer, to serve as interim CEO, the press release states. So far, it’s entirely unclear what Altman might have done to trigger such a catastrophic career nose-dive. You have to seriously screw up to go from being Silicon Valley’s prince of the city to pariah in the course of a day. I am waiting on pins and needles to hear what exactly happened here.

More headlines from this week

  • Automated healthcare sounds like a certifiable nightmare. A new lawsuit claims that UnitedHealthcare is using a deeply flawed AI algorithm to “override” doctors’ judgments when it comes to patients, allowing the insurance giant to deny coverage to elderly and ailing patients. The lawsuit, which was filed in US District Court in Minnesota, claims that NaviHealth, a UnitedHealth subsidiary, uses a closed-source AI algorithm, nH Predict, to deny patients coverage, despite the model having a track record of being wrong much of the time. Ars Technica has the full story.
  • Microsoft seems to have been “blindsided” by the abrupt Sam Altman exit at OpenAI. A new report from Axios claims that Microsoft, OpenAI’s pivotal business partner (and funder) was “blindsided” by the fact that its head exec is now being ejected with extreme prejudice. The report doesn’t say much more than that and only cites a “person familiar with the situation.” Suffice it to say everybody is still pretty confused about this.
  • The UK might not be regulating AI after all. It appears that Big Tech’s charm offensive across the pond has worked. In recent weeks, some of the biggest figures in the AI industry—including Elon Musk—traveled to the United Kingdom to attend an AI summit. The general tenor among the executives who attended was: AI could destroy the world, but please, let’s not do anything about it for now. This week, the country’s minister for AI and intellectual property, Jonathan Camrose, told the press that, “in the short term,” the UK did not want to implement “premature regulation” and wanted to steer clear of “stifling innovation.”
