The devil went down to Silicon Valley; he was looking for a soul to steal. But he ended up taking a consulting gig with Palantir instead.
In the meantime, the algorithm’s in charge of punishing the wicked now. And these days the sign above hell’s gates reads “Abandon Open Source,” with an Amazon smile beneath the print.
Those condemned to an eternity of pain and suffering in the modern era are now forced to read the same five AI articles over and over.
Which kind of sounds like what it’s like to read tech news back here on Earth anyway. Don’t believe me? Let’s dive in.
Number one: “This article was written by an AI.”
No, it wasn’t. These articles usually involve a text generator such as OpenAI’s GPT-3. The big idea is that the journalist will either pay for access or collaborate with OpenAI to get GPT-3 to generate text from various prompts.
The journalist will ask something silly like “can AI ever truly think like a human?” and then GPT-3 will use that prompt to generate a specific number of outputs.
Then, the journalists and editors go to work. They’ll pick the best responses, mix and match sentences that make the most sense, and then discard the rest.
This is the editorial equivalent of taking the collected works of Stephen King, copy/pasting a single sentence from each book into a Word doc, and then claiming you’ve published an entirely new book from the master of horror.
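The workflow described above fits in a few lines of Python. Everything here is hypothetical: the canned fragments stand in for a real model’s outputs (an actual newsroom would call something like GPT-3), and the “editing” step is just a filter that keeps whatever sounds profound.

```python
import random

def fake_text_generator(prompt, n_outputs=50):
    """Stand-in for a large language model (hypothetical: a real
    workflow would call something like GPT-3 here instead)."""
    fragments = [
        "I am not human, but I can dream.",
        "Thinking is a word humans invented.",
        "Error: segmentation fault.",
        "The weather is nice today.",
        "Perhaps consciousness is just prediction.",
    ]
    return [random.choice(fragments) for _ in range(n_outputs)]

def editorial_cherry_pick(outputs, keep=3):
    """The 'editing' step: keep the few outputs that sound profound,
    discard the dozens that don't."""
    sounds_deep = [o for o in outputs if "human" in o or "consciousness" in o]
    return sounds_deep[:keep]

outputs = fake_text_generator("Can AI ever truly think like a human?")
article = " ".join(editorial_cherry_pick(outputs))
```

The “article” that survives is a handful of cherry-picked sentences out of fifty; the rest never reach the reader.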
In hell, you stand in a long line to read hyperbolic, made-up stories about AI’s capabilities. And, as your ultimate punishment, you have to rewrite them for the next person in line.
Number two: “AI company raises $100 million for no apparent reason.”
I remember reading about an early funding round for an AI company called PredPol. It had raised several million dollars to develop an AI system capable of predicting crime before it happens.
I’m sorry. Perhaps you didn’t read that right. It says: predicting crime before it happens.
This is something that’s impossible. And I don’t mean technologically impossible; I mean not possible within the realms of classical or quantum physics.
You see, “crime” isn’t generated from hotspots like mobs spawning in an MMO every five minutes. Any first-year statistics or physics student understands that no amount of historical data can predict where new crimes will occur. Partly because past data isn’t prescient. But also because it’s impossible to know how many crimes have actually been committed: most crimes go unreported.
PredPol can’t predict crime. It predicts arrests based on historical data. In other words: PredPol tells you where you’ve already arrested people and then says “try there again.” Simply put: it doesn’t work because it can’t work.
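Here’s a toy simulation of why that’s circular (made-up numbers, not PredPol’s actual model): two neighborhoods are given the same true crime rate, patrols follow past arrests, and arrests can only happen where police are looking.

```python
import random

def simulate_feedback_loop(days=1000, seed=0):
    """Toy model: two neighborhoods with an IDENTICAL true crime rate.
    Patrols go wherever past arrests were highest, and arrests can only
    happen where police are looking -- so the arrest data snowballs."""
    rng = random.Random(seed)
    true_crime_rate = [0.3, 0.3]   # the same, by construction
    arrests = [1, 0]               # a single early arrest in area 0
    for _ in range(days):
        # the "prediction": patrol the area with the most past arrests
        patrolled = 0 if arrests[0] >= arrests[1] else 1
        # an arrest can only happen in the patrolled area
        if rng.random() < true_crime_rate[patrolled]:
            arrests[patrolled] += 1
    return arrests

print(simulate_feedback_loop())
```

Area 0 ends up with all the arrests and area 1 with zero, even though both have the same crime rate by construction. Feed that arrest data back into the “predictor” and it will happily tell you area 0 is a crime hotspot.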
But it raised money and raised money until one day it grew into a full-grown company worth billions – all for doing nothing.
In hell, you have to read funding stories about billion-dollar AI startups that don’t actually do anything or solve any problems. And you’re not allowed to skim.
Number three: “Facebook’s new AI makes everything you hate about Facebook 93.5% better.”
There are variations on this one – “Google’s AI demonstrates a 72% reduction in racial bias,” “Amazon’s new algorithm is 87% better at spotting and removing Nazi products from its storefront” – and they’re all bunk.
Big tech’s favorite PR company is the mainstream media.
Facebook will, as a hypothetical example, say something like “our new algorithms are 80% more efficient at finding and removing toxic content in real time,” and that’s when the telephone game starts.
You’ll see half a dozen reputable news outlets printing headlines that basically say “Facebook’s new algorithms make it 80% less toxic.” And that’s simply not true.
If a chef were to tell you they’ve adopted a new cooking technique that results in 80% less fecal matter being detected in the soup they’re about to serve, you probably wouldn’t think that was a good thing.
Increasing the efficiency of one algorithm doesn’t produce an equivalent improvement across the whole system. And, because these claims are nearly impossible to verify without access to the actual data being discussed, the people writing up these stories are simply taking the big tech marketing team’s word for it.
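To see why, run the arithmetic on that hypothetical claim. All numbers here are invented for illustration; the point is that “80% more efficient” has at least two readings, and neither means what the headline says.

```python
# Invented numbers: 50,000 toxic posts per day, old filter catches half.
toxic_posts_per_day = 50_000
old_recall = 0.50

# Reading 1: "80% more efficient" means 80% faster at the same recall.
slipping_through_v1 = toxic_posts_per_day * (1 - old_recall)   # 25,000

# Reading 2: "80% more efficient" means it catches 80% more posts.
new_recall = min(1.0, old_recall * 1.8)                        # 0.9
slipping_through_v2 = toxic_posts_per_day * (1 - new_recall)   # ~5,000

# Under reading 1, users see exactly as much toxic content as before.
# Reading 2 still leaves ~5,000 toxic posts a day in front of users,
# and neither number measures the toxicity users actually experience.
# Without the platform's internal data, nobody outside can check
# which reading (if any) is true.
```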
In hell, you have to read articles about big tech companies that only have quotes from people who work at those companies and statistics that can’t possibly be verified.
Number four: “Ethics aside, this AI is great!”
We’ve all read these stories. They cover the biggest issues in the world of AI with all the gravity of a weather report.
The story will be something like “Clearview AI gets new government contracts,” and the coverage will quote a politician, the CEO of Clearview, and someone representing law enforcement.
The gist of the piece will be “Ethics aside, law enforcement agencies say these products are invaluable.”
And then, way down towards the end of the article, you’ll see the obligatory “studies have shown that facial recognition struggles to identify some faces. Experts warn against the use of such technologies until this bias can be solved.”
In hell, every AI article you read starts with the sentence “this doesn’t work as well for Black people or women, but we’re just going to move past that like it isn’t important.”
Number five: “Exclusive: Study demonstrates over 80% of CEOs named Bill know what AI is.”
My least favorite AI articles are the ones that profess to tell me what non-experts think.
These are the articles with headlines like “Study: 80% of people believe AI will be sentient within a decade” and “75% of moms think Alexa is a danger to children.”
These “studies” are typically conducted by consulting firms that specialize in this sort of thing. And they’re usually not out running surveys on the off chance that some journalist will find their work appealing. They get paid to do their “research.”
And by “research,” I mean: sourcing answers on Amazon’s Mechanical Turk or giving college students a gift card to fill out a survey.
These studies are often bought and paid for ahead of time by an AI company as a marketing tool.
These pitches, in my inbox, usually look something like “Hey Tristan, did you hear that 92% of CEOs don’t know what Kubernetes is? Are you interested in this exclusive study and a conversation with Dr Knows Itall, founder of the Online School For Learning AI Good? They can speak to the challenges of hiring quality IT talent.”
Can you spot the rubbish?
In hell, the algorithm tells you that you can read articles covering actual computer science research as soon as you finish reading all the vapid survey pieces on AI published in mainstream outlets.
But you’re never done, are you? There’s always another. “What do soccer dads think about gendered voice assistants?” “What percentage of people think data is a character on Star Trek?” “Will driverless cars be a reality in 2022? Here’s what Tesla owners think.”
Yes, AI hell is a place filled with horrors beyond comprehension. And, just in case you haven’t figured it out yet, we’re already here. This article has been your orientation.
Now if you’ll just sign in to Google News, we’ll get started (Apple News is currently not available in hell due to legal issues concerning the App Store).