AI is often considered a threat to democracies and a boon to dictators. In 2025 it is likely that algorithms will continue to undermine the democratic conversation by spreading outrage, fake news, and conspiracy theories. In 2025 algorithms will also continue to expedite the creation of total surveillance regimes, in which the entire population is watched 24 hours a day.

Most importantly, AI facilitates the concentration of all information and power in one hub. In the 20th century, distributed information networks like the USA functioned better than centralized information networks like the USSR, because the human apparatchiks at the center just couldn’t analyze all the information efficiently. Replacing apparatchiks with AIs might make Soviet-style centralized networks superior.

Nevertheless, AI is not all good news for dictators. First, there is the notorious problem of control. Dictatorial control is founded on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is officially defined as a “special military operation,” and referring to it as a “war” is a crime punishable by up to three years’ imprisonment. If a chatbot on the Russian internet calls it a “war” or mentions the war crimes committed by Russian troops, how could the regime punish that chatbot? The government could block it and seek to punish its human creators, but this is much more difficult than disciplining human users. Moreover, authorized bots might develop dissenting views by themselves, simply by spotting patterns in the Russian information sphere. That’s the alignment problem, Russian-style. Russia’s human engineers can do their best to create AIs that are totally aligned with the regime, but given the ability of AI to learn and change by itself, how can the engineers ensure that an AI that got the regime’s seal of approval in 2024 doesn’t venture into illicit territory in 2025?

The Russian Constitution makes grandiose promises that “everyone shall be guaranteed freedom of thought and speech” (Article 29.1) and “censorship shall be prohibited” (29.5). Hardly any Russian citizen is naive enough to take these promises seriously. But bots don’t understand doublespeak. A chatbot instructed to adhere to Russian law and values might read that constitution, conclude that freedom of speech is a core Russian value, and criticize the Putin regime for violating that value. How might Russian engineers explain to the chatbot that though the constitution guarantees freedom of speech, the chatbot shouldn’t actually believe the constitution nor should it ever mention the gap between theory and reality?

In the long term, authoritarian regimes are likely to face an even bigger danger: instead of criticizing them, AIs might gain control of them. Throughout history, the biggest threat to autocrats has usually come from their own subordinates. No Roman emperor or Soviet premier was toppled by a democratic revolution, but they were always in danger of being overthrown or turned into puppets by their own subordinates. A dictator who grants AIs too much authority in 2025 might become their puppet down the road.

Dictatorships are far more vulnerable than democracies to such algorithmic takeover. It would be difficult for even a super-Machiavellian AI to amass power in a decentralized democratic system like the United States. Even if the AI learns to manipulate the US president, it might face opposition from Congress, the Supreme Court, state governors, the media, major corporations, and sundry NGOs. How would the algorithm, for example, deal with a Senate filibuster? Seizing power in a highly centralized system is much easier. To hack an authoritarian network, the AI needs to manipulate just a single paranoid individual.
