Everybody knows that online disinformation is a huge problem—one that has arguably torn communities apart, manipulated elections, and caused certain segments of the global population to lose their minds. Of course, nobody seems particularly concerned about actually fixing this problem. In fact, the institutions most responsible for online disinformation (and thus, the ones most well-placed to do something about it)—that is to say, tech companies—seem intent on doing everything they can to make the problem exponentially worse.

Case in point: OpenAI launched Sora, its new text-to-video generator, on Thursday. The model is designed to let web users generate high-quality AI videos with nothing more than a text prompt. The application is currently wowing the internet with its bizarre variety of visual imagery—whether that’s a Chinese New Year parade, a guy running backward on a treadmill in the dark, a cat in a bed, or two pirate ships swirling around in a coffee cup.

At this point, despite its “world-changing” mission, it could be argued that OpenAI’s biggest contribution to the internet has been the instantaneous generation of countless terabytes of digital crap. All of the company’s open and public tools are content generators, the likes of which, experts have warned, are primed to be used in fraud and disinformation campaigns.

In its blog post about Sora, OpenAI’s team openly acknowledges that there could be some potential downsides to its new app. The company said that it’s working on watermarking technologies to flag content that its generator has created, and that it’s in the process of consulting knowledgeable people to figure out how to make the inevitable deluge of AI-generated crap that Sora will unleash less toxic. Sora isn’t open to the public yet and, in the meantime, OpenAI says it’s creating systems that will deny users who want to generate violent or sexual imagery. The statement notes:

We’ll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology. Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it.

This sort of framing of the problem is sorta hilarious because it’s already totally obvious how OpenAI’s new tool could potentially be abused. Sora will generate fake content on a gargantuan scale—that much is clear. Some of that content, it seems likely, will be used for the purposes of online political disinformation; some of it could, hypothetically, be used to aid in a variety of fraud and scams; and some of it could be used to generate toxic content. OpenAI has said it wants to put meaningful limits on violent and sexual content, but web users and researchers have shown how savvy they can be at jailbreaking AI systems to generate the kinds of content that disobey companies’ use policies. All of this Sora content is obviously going to flood social media channels, making it harder for everyday people to distinguish between what’s real and what’s fake, and making the internet, in general, a whole lot more annoying. I don’t think it requires a global panel of experts to figure that out.

There are a number of other obvious downsides, too. For one thing, Sora—and others of its ilk—probably won’t have the greatest environmental impact. Researchers have shown that text-to-image generators are significantly worse, environmentally speaking, than text generators, and that creating a single AI image can take as much energy as fully charging your smartphone. For another thing, new text-to-video generation technologies will likely hurt the video creator economy, because why should companies pay people to make visual content when all that’s necessary to create a video is clicking a button?

As far as the corporate class in this country goes, nothing really matters except money. Fuck the environment, fuck artists, fuck an internet that is disinformation-free, fuck the health of political discourse, fuck anything that gets in the way of the profit motive. Anything that can be squeezed to make money should be squeezed, even if it’s a software program whose only real utility is that it can generate a video of a cowboy hamster riding a dragon. As one X user put it: “This is what the morons sacrifice the environment for. Stupid. Shit. Like. This.”
