Headlines This Week

  • Meta’s AI-generated stickers, which launched just last week, are already causing mayhem. Users swiftly realized they could use them to create obscene images, like Elon Musk with tits, ones involving child soldiers, and bloodthirsty versions of Disney characters. Ditto for Microsoft Bing’s image generation feature, which has set off a trend in which users create pictures of celebrities and video game characters committing the 9/11 attacks.
  • Another person has been injured by a Cruise robotaxi in San Francisco. The victim was initially hit by a human-operated car but was then run over by the automated vehicle, which stopped on top of her and refused to budge despite her screams. Looks like that whole “improving road safety” thing that self-driving car companies have made their mission isn’t exactly panning out yet.
  • Last but not least: a new report shows that AI is already being weaponized by autocratic governments all over the world. Freedom House has revealed that leaders are taking advantage of new AI tools to suppress dissent and spread disinformation online. We spoke with one of the researchers behind the report for this week’s interview.

The Top Story: AI’s Creative Coup

Sam Altman, CEO of OpenAI.
Photo: jamesonwu1972 (Shutterstock)


Though the hype-men behind the generative AI industry are loath to admit it, their products are not particularly generative, nor particularly intelligent. Instead, the automated content that platforms like ChatGPT and DALL-E poop out with intensive vigor could more accurately be characterized as derivative slop—the regurgitation of an algorithmic puree of thousands of real creative works made by human artists and authors. In short: AI “art” isn’t art—it’s just a dull commercial product produced by software and designed for easy corporate integration. A Federal Trade Commission hearing, held virtually via live webcast, made that fact abundantly clear.

This week’s hearing, “Creative Economy and Generative AI,” was designed to give representatives from various creative vocations an opportunity to express their concerns about the recent technological disruption sweeping their industries. From all quarters, the resounding call was for impactful regulation to protect workers.


This desire for action was probably best exemplified by Douglas Preston, one of dozens of authors currently listed as plaintiffs in a class action lawsuit against OpenAI over the company’s use of their material to train its algorithms. During his remarks, Preston noted that “ChatGPT would be lame and useless without our books” and added: “Just imagine what it would be like if it was only trained on text scraped from web blogs, opinions, screeds, cat stories, pornography and the like.” He concluded: “This is our life’s work; we pour our hearts and our souls into our books.”


The problem for artists seems pretty clear: how are they going to survive in a market where large corporations can use AI to replace them—or, more accurately, whittle down their opportunities and bargaining power by automating large parts of creative services?


The problem for the AI companies, meanwhile, is that there are unsettled legal questions when it comes to the untold bytes of proprietary work that companies like OpenAI have used to train their artist/author/musician-replacing algorithms. ChatGPT would not be able to generate poems and short stories at the click of a button, nor would DALL-E have the capacity to unfurl its bizarre imagery, had the company behind them not gobbled up tens of thousands of pages from published authors and visual artists. The future of the AI industry, then—and really the future of human creativity—is going to be decided by an argument currently unfolding within the U.S. court system.

The Interview: Allie Funk on How AI is Being Weaponized by Autocracies


Photo: Freedom House


This week we had the pleasure of speaking with Allie Funk, Freedom House’s Research Director for Technology and Democracy. Freedom House, which tracks issues connected to civil liberties and human rights all over the globe, recently published its annual report on the state of internet freedom. This year’s report focused on the ways in which newly developed AI tools are supercharging autocratic governments’ approaches to censorship, disinformation, and the overall suppression of digital freedoms. As you might expect, things aren’t going particularly well in that department. This interview has been lightly edited for clarity and brevity.  

One of the key points you talk about in the report is how AI is aiding government censorship. Can you unpack those findings a little bit?


What we found is that artificial intelligence is really allowing governments to evolve their approach to censorship. The Chinese government, in particular, has tried to regulate chatbots to reinforce its control over information. They’re doing this through two different methods. The first is that they’re trying to make sure that Chinese citizens don’t have access to chatbots created by companies based in the U.S. They’re forcing tech companies in China not to integrate ChatGPT into their products…they’re also working to create their own chatbots so that they can embed censorship controls within the training data. Government regulations require that the training data for Ernie, Baidu’s chatbot, align with what the CCP (Chinese Communist Party) wants and with core elements of socialist propaganda. If you play around with it, you can see this. It refuses to answer prompts about the Tiananmen Square massacre.

Disinformation is another area you talk about. Explain a little bit about what AI is doing to that space.


We’ve been doing these reports for years, and what is clear is that government disinformation campaigns are just a regular feature of the information space these days. In this year’s report, we found that, of the 70 countries covered, at least 47 governments deployed commentators who used deceitful or covert tactics to try to manipulate online discussion. These [disinformation] networks have been around for a long time. In many countries, they’re quite sophisticated. An entire market of for-hire services has popped up to support these kinds of campaigns. So you can just hire a social media influencer or some other similar agent to work for you, and there are so many shady PR firms that do this kind of work for governments.

I think it’s important to acknowledge that artificial intelligence has been a part of this whole disinformation process for a long time. You’ve got platform algorithms that have long been used to push out incendiary or unreliable information. You’ve got bots that are used across social media to facilitate the spread of these campaigns. So the use of AI in disinformation is not new. But what we expect generative AI to do is lower the barrier to entry into the disinformation market, because it’s so affordable, easy to use, and accessible. When we talk about this space, we’re not just talking about chatbots; we’re also talking about tools that can generate images, video, and audio.


What kind of regulatory solutions do you think need to be looked at to cut down on the harms that AI can do online?

We think there are a lot of lessons from the last decade of debates around internet policy that can be applied to AI. Many of the recommendations we’ve already made around internet freedom could be helpful in tackling AI. So, for instance, governments could force the private sector to be more transparent about how its products are designed and what their human rights impact is. Handing over platform data to independent researchers, meanwhile, is another critical recommendation we’ve made; those researchers can study what impact the platforms have on populations and on human rights. The other thing I would really recommend is strengthening privacy regulation and reforming problematic surveillance rules. One thing we’ve looked at previously is regulations to make sure that governments can’t misuse AI surveillance tools.


Catch up on all of Gizmodo’s AI news here, or see all the latest news here. For daily updates, subscribe to the free Gizmodo newsletter.
