Connecticut Sen. Chris Murphy stirred the AI researcher hornets’ nest this week by repeating a number of ill-informed but increasingly popular claims about advanced AI chatbots’ capacity to achieve human-like understanding and teach themselves complex topics. Top AI experts speaking with Gizmodo said Murphy’s claims were nonsense detached from reality, and risked distracting people from real, pressing issues of data regulation and algorithmic transparency in favor of sensationalist disaster porn.

In a tweet on Sunday, Murphy claimed ChatGPT had “taught itself to do advanced chemistry,” seemingly without any input from human creators. The tweet went on to imbue ChatGPT, OpenAI’s hotly hyped large language model chatbot, with uniquely human characteristics like advanced, independent decision-making. ChatGPT, according to Murphy, looked like it was actually in the driver’s seat.

“It [chemistry] wasn’t built into the model,” Murphy added. “Nobody programmed it to learn complicated chemistry. It decided to teach itself, then made its knowledge available to anyone who asked.”

Top AI researchers descended on Murphy like white blood cells swarming a virus.

“Please do not spread misinformation,” AI researcher and former co-lead of Google’s Ethical AI team Timnit Gebru responded on Twitter. “Our job countering the hype is hard enough without politicians jumping in on the bandwagon.”

That sentiment was echoed by Santa Fe Institute Professor and Artificial Intelligence author Melanie Mitchell, who called Murphy’s off-the-cuff characterization of ChatGPT “dangerously misinformed.”

“Every sentence is incorrect,” Mitchell added.

Murphy attempted damage control, releasing another statement several hours later that shrugged off the criticisms as efforts to “terminology shame” policymakers on tech issues.

“I’m pretty sure I have the premise right,” Murphy said. Twitter later appended a context note to the original tweet informing viewers that “readers added context they thought people might want to know.”

New York University associate professor and author of More Than a Glitch Meredith Broussard told Gizmodo the entire back and forth was a prime example of a lawmaker “learning in public” about a complex, fast-moving technical topic. As with social media before it, lawmakers of all stripes have struggled to stay informed about tech and keep pace with its development.

“People have a lot of misconceptions about AI,” Broussard said. “There’s nothing wrong with learning and there’s nothing wrong with being wrong as you learn things.”

Broussard acknowledged it’s potentially problematic for people to believe AI models are “becoming human” (they aren’t) but said public spats like this were nevertheless an opportunity to collectively learn more about how AI works and the biases inherent to it.

Why Murphy’s argument is full of shit

University of Washington Professor of Linguistics Emily M. Bender, who’s written at length on the issue of attributing human-like agency to chatbots, told Gizmodo that Murphy’s statement includes several “fundamental errors” of understanding when it comes to large language models (LLMs). First, Murphy described ChatGPT as an independent, autonomous entity with its own agency. It isn’t. Rather, Bender said, ChatGPT is simply an “artifact” designed by humans at OpenAI. ChatGPT achieved its apparent proficiency in chemistry the same way it was able to pass medical licensing exams or business school tests: it was simply fed the proper distribution of words and symbols in its training datasets.

“ChatGPT is set up to respond to questions (about chemistry or otherwise) from the general public because OpenAI put that kind of interface on it,” Bender wrote.

Even that may overstate things. Yes, ChatGPT can draw on chemistry problems and other documents in its training data to respond impressively to users’ questions, but that doesn’t mean the model actually understands what it’s doing in any meaningful way. Just as ChatGPT doesn’t actually understand the concept of love or art, it does not truly understand chemistry, large dataset or not.
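
To make Bender’s point concrete, here is a deliberately tiny sketch in Python, emphatically not OpenAI’s actual code, of how a system can emit plausible-sounding “chemistry” purely from word statistics. The training text, the bigram model, and the output are all invented for illustration; real LLMs use neural networks with billions of parameters, but the underlying principle, predicting the next token from patterns in the training data, is the same:

```python
import random
from collections import defaultdict

# Toy "training data" invented for this example.
training_text = (
    "the acid reacts with the base to form a salt and water "
    "the base reacts with the acid to form a salt"
)

# Count which word tends to follow which (a bigram model).
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Generate" chemistry: sample each next word from the observed distribution.
word = "the"
output = [word]
for _ in range(10):
    word = random.choice(follows[word])  # frequency stands in for probability
    output.append(word)

print(" ".join(output))  # e.g. "the acid reacts with the base to form a salt and"
```

Nothing in that loop knows what an acid or a base is; it only knows which words have followed which. Scale the same idea up by many orders of magnitude and you get text that looks like expertise.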

“The main thing the public needs to know is that these systems are designed to mimic human communication in language, but not to actually understand the language, much less reason,” Bender told Gizmodo.

AI Now Institute Managing Director Sarah Myers West reiterated that sentiment, telling Gizmodo some of the more esoteric fear associated with ChatGPT rests on a core misunderstanding of what’s actually going on when the tech answers a user’s query.

“Here’s what’s key to understand about ChatGPT and other similar large language models,” West said. “They’re not in any way actually reflecting the depth of understanding of human language—they’re mimicking its form.” West admitted ChatGPT will often sound convincing, but even at its best the model simply lacks the “crucial context of what perspectives, beliefs, and intentions ChatGPT’s tool reflects.”

LLMs: Rationality vs. Probability

Bender has written at length about this tricky illusion of rationality presented by chatbots and even helped coin the term “stochastic parrot” to describe it. In a paper of the same name, Bender and her co-authors describe a stochastic parrot as something “haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning.” That, in a nutshell, is ChatGPT.
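
For readers who want to see what “stochastic” means in practice, here’s a minimal Python sketch of the parrot at work. The prompt and the next-token probabilities below are made up for illustration; a real LLM computes a distribution like this over tens of thousands of tokens using a neural network, then samples from it:

```python
import random

# Hypothetical next-token probabilities after the prompt "The acid is".
# These numbers are invented; a real model would compute them.
next_token_probs = {
    "corrosive": 0.40,
    "strong": 0.30,
    "blue": 0.20,    # fluent-sounding but chemically dubious
    "lonely": 0.10,  # nonsense: linguistic form without meaning
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sample a continuation several times; the output varies run to run.
for _ in range(5):
    print("The acid is", random.choices(tokens, weights=weights)[0])
```

Run it a few times and the continuation changes, because the choice is random, and nothing in the process checks whether “the acid is blue” is actually true. That is the haphazard stitching Bender describes.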

That doesn’t mean ChatGPT and its successors won’t lead to impressive and interesting advancements in tech. Microsoft and Google are already laying the groundwork for future search engines potentially capable of offering far more personalized, relevant results than thought possible only a few years ago. Musicians and other artists are similarly sure to toy with LLMs to create transgressive works that were previously unimaginable.

Still, Bender worries thinking like Murphy’s could lead people to mistake seemingly coherent responses from these systems for human-like learning or discovery. AI researchers fear that core misunderstanding could pave the way for real-world harm. Humans, drunk on the idea of god-like machines, could fall into the trap of placing undue trust in these systems. (ChatGPT and Google’s Bard have already shown a willingness to regularly lie through their teeth, leading some to call them the “Platonic ideal of the bullshitter.”) That complete disregard for truth or reality, at scale, means an already shit-clogged information ecosystem could be flooded with waves of AI-generated “non-information.”

“What I would like Sen. Murphy and other policymakers to know is that systems like ChatGPT pose a large risk to our information ecosystem,” Bender said.

AI doomsday fears can distract from solvable problems

West similarly worries we’re currently experiencing a “particularly acute cycle of excitement and anxiety.” This era of sensationalized LLM hype risks blinding people to more pressing issues of regulation and transparency staring them directly in the face.

“What we should be concerned about is that this type of hype can both over-exaggerate the capabilities of AI systems and distract from pressing concerns like the deep dependency of this wave of AI on a small handful of firms,” West told Gizmodo in an interview. “Unless we have policy intervention, we’re facing a world where the trajectory for AI will be unaccountable to the public, and determined by the handful of companies that have the resources to develop these tools and experiment with them in the wild.”

Bender agreed, saying the tech industry “desperately needs” smart regulation on issues like data collection, automated decision-making, and accountability. Instead, Bender added, companies like OpenAI appear more interested in keeping policymakers busy tearing their hair out over “doomsday scenarios” involving sentient AI.

“I think we need to clarify accountability,” Bender said. “If ChatGPT puts some non-information out into the world, who is accountable for it? OpenAI would like to say they aren’t. I think our government could say otherwise.”
