A new poll of more than 1,200 registered voters provides some of the clearest data yet illustrating the public’s desire to rein in AI.

54% of registered US voters surveyed in a new poll conducted by The Tech Oversight Project agreed Congress should take “swift action to regulate AI” in order to promote privacy and safety and ensure the tech provides the “maximum benefit to society.” Republicans and Democrats expressed nearly identical support for reining in AI, a rare sign of bipartisanship hinting at a growing consensus about the rapidly evolving technology. 41% of the voters said they’d rather see that regulation come from government intervention, compared to just 20% who thought tech companies should regulate themselves. The polled voters also didn’t seem to buy arguments from tech executives who warn new AI regulation could set the US economy back. Just 15% of respondents said regulating AI would stifle innovation.

“While the new technology of artificial intelligence—and the public’s understanding of it—is evolving rapidly, it is deeply telling that a majority of Americans do not trust Big Tech to prioritize safety and regulate it, and by a two-to-one margin want Congress to act,” Tech Oversight Project Deputy Executive Director Kyle Morris told Gizmodo.

The poll drops at what could turn out to be an inflection point for government AI policy. Hours before the poll’s release, the Biden administration met with the heads of four leading AI companies to discuss AI risks. The administration also revealed the National Science Foundation would provide $140 million in funding to launch seven new National AI Research Institutes.

Recent pushback against AI

Even without polling, there are some clear signs the national conversation surrounding AI has shifted away from mild amusement and excitement around AI generators and chatbots toward potential harms. What exactly those harms are, however, varies widely, depending on who you ask. Last month, more than 500 tech experts and business leaders signed an open letter calling on AI labs to immediately pause development of all new large language models more powerful than OpenAI’s GPT-4 over concerns the technology could pose “profound risks to society and humanity.” The signatories, who included OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak, said they’d support a government-mandated moratorium on the tech if companies refused to willingly play ball.

Other leading researchers in the field, like University of Washington Professor of Linguistics Emily M. Bender and AI Now Institute Managing Director Sarah Myers West, agree AI needs more regulation but balk at the increasingly common trend of ascribing human-like characteristics to machines that are essentially playing a highly advanced game of word association. AI systems, the researchers previously told Gizmodo, aren’t sentient or human, but that doesn’t matter. They fear the technology’s tendency to make up facts and present them as truth could lead to a flood of misinformation, making it even more difficult to determine what’s true. The tech’s baked-in biases from discriminatory datasets, they say, mean negative impacts could be even worse for marginalized groups. Conservatives, fearful of “woke” biases in chatbot results, meanwhile, have applauded the idea of Musk creating his own politically incorrect “BasedAI.”

“Unless we have policy intervention, we’re facing a world where the trajectory for AI will be unaccountable to the public, and determined by the handful of companies that have the resources to develop these tools and experiment with them in the wild,” West told Gizmodo.

A new wave of AI bills is on the way

Congress, a legislative body not known for keeping up with new tech, is scrambling to pick up the pace when it comes to AI policy. Last week, Colorado Sen. Michael Bennet introduced a bill calling for the formation of an “AI Task Force” to identify potential civil liberty issues posed by AI and provide recommendations. Days before that, Massachusetts Sen. Ed Markey and California Rep. Ted Lieu introduced their own bill attempting to prevent AI from having control over nuclear weapons launches, a scenario they worry could lead to a Hollywood-style nuclear holocaust. Senate Majority Leader Chuck Schumer similarly released his own AI framework attempting to increase transparency and accountability around the tech.

“The Age of AI is here, and here to stay,” Schumer said in a statement. “Now is the time to develop, harness, and advance its potential to benefit our country for generations.”

This week, the Biden administration signaled its own interest in the area by meeting with the heads of four leading AI companies to discuss AI safety. FTC Chair Lina Khan, one of the country’s top regulatory enforcers, recently published her own New York Times editorial with a clear, direct message: “We must regulate AI.”

Much of that sudden movement, according to lawmakers quoted in a recent Politico article, stems from a strong public response to ChatGPT and other popular emerging chatbots. The mass popularity of the apps, and general confusion around their ability to create convincing and sometimes disturbing responses, has reportedly struck a nerve in ways few other tech issues have.

“AI is one of those things that kind of moved along at ten miles an hour, and suddenly now is 100, going on 500 miles an hour,” House Science Committee Chair Frank Lucas told Politico. “It’s got everybody’s attention, and we’re all trying to focus.”

Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators and Everything We Know About OpenAI’s ChatGPT.
