Hello! We are officially launching a THING. It’s going to be a weekly roundup about what’s happening in artificial intelligence and how it affects you.

Headlines This Week


The Top Story: Zoom’s TOS Debacle and What It Means for the Future of Web Privacy


Illustration: tovovan (Shutterstock)


It’s no secret that Silicon Valley’s business model revolves around hoovering up a disgusting amount of consumer data and selling it off to the highest bidder (usually our own government). If you use the internet, you are the product—this is “surveillance capitalism” 101. But, after Zoom’s big terms-of-service debacle earlier this week, there are some signs that surveillance capitalism may be shape-shifting into some terrible new beast—thanks largely to AI.

Zoom was brutally pilloried earlier this week for a change to its terms of service. That change actually happened back in March, but people didn’t really notice the new policy until this week, when a blogger pointed out the shift in a post that went viral on Hacker News. The change, which came at the height of AI’s hype frenzy, gave Zoom broad rights to use customer data to train future AI models. More specifically, Zoom claimed a “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license” to users’ data, which was widely interpreted to include the contents of videoconferencing sessions. Suffice it to say, the backlash was swift and thunderous, and the internet really spanked the company.

Now that the initial storm clouds have passed, Zoom has promised that it isn’t, in fact, using videoconferencing data to train AI and has even updated its terms of service (again) to make this explicitly clear. But whether Zoom is gobbling up your data or not, this week’s controversy points to an alarming new trend: companies are now using all the data they’ve collected via “surveillance capitalism” to train nascent artificial intelligence products.

They’re then turning around and selling those AI services back to the very same users whose data helped build the products in the first place, thus creating an endless, self-propagating loop. It makes sense that companies are doing this, since any fleeting mention of the term “AI” now sends tech company investors and shareholders into a tizzy. Still, the biggest offenders here are companies that already own vast swaths of the world’s information, making it a particularly creepy and legally murky situation. Google, for instance, has recently made it known that it is scraping the web to train its new AI algorithms. Big AI vendors like OpenAI and Midjourney, meanwhile, have also vacuumed up most of the internet in an effort to amass enough training data. Helpfully, the Harvard Business Review just published a “how-to” guide for companies that want to transform their collected data troves into new AI algorithm juice, so I’m sure we can expect more offenders in the future.


So, uh, just how worried should we be about this noxious brew of digital privacy violations and automation? Katharine Trendacosta, director of policy and advocacy at the Electronic Frontier Foundation (and a former Gizmodo employee), told Gizmodo she doesn’t necessarily think that generative AI is accelerating surveillance capitalism. That said, it’s not decelerating it, either.

“I don’t know if it [surveillance capitalism] can be more turbocharged, quite frankly—what more can Google possibly have access to?” she says. Instead, AI is just giving companies like Google one more way to monetize and utilize all the data they’ve amassed.


“The problems with AI have nothing to do with AI,” Trendacosta says. The real problem is the regulatory vacuum around these new technologies, which allows companies to wield them in a blindly profit-driven, obviously unethical way. “If we had a privacy law, we wouldn’t have to worry about AI. If we had labor protections, we would not have to worry about AI. All AI is a pattern recognition machine. So it’s not the specifics of the technology that is the problem. It is how it is used and what is fed into it.”

Policy Watch


Illustration: Barbara Ash (Shutterstock)


As often as possible, we’re going to try to update readers on the state of AI regulation (or lack thereof). Given the hugely disruptive potential of this technology, it just makes sense that governments should pass some new laws. Will they do that? Eh…

DEEPFAKES IN POLITICAL ADS: OBVIOUSLY A PROBLEM.

The Federal Election Commission can’t decide whether AI-generated content in political advertising is a problem or not. A petition sent to the agency by the advocacy group Public Citizen has asked it to consider regulating “deepfake” media in political ads. This week, the FEC decided to advance the group’s petition, opening up the potential rule-making to a public comment period. In June, the FEC deadlocked on a similar petition from Public Citizen, with some regulators “expressing skepticism that they had the authority to regulate AI ads,” the Associated Press reports. The advocacy group was then forced to come back with a new petition laying out why the federal agency did, in fact, have the jurisdiction to do so. Some Republican regulators remain unconvinced of their own authority—maybe because the GOP has, itself, been having a field day with AI in political ads. If you think AI shouldn’t be used in political advertising, you can write to the FEC via its website.


THE FRONTIER MODEL: A SELF-REGULATION SCAM

Last week, a small consortium of big players in the AI space—namely, OpenAI, Anthropic, Google, and Microsoft—launched the Frontier Model Forum, an industry body designed to guide the AI boom while also offering up watered-down regulatory suggestions to governments. The forum, which says it wants to “advance AI safety research to promote responsible development of frontier models and minimize potential risks,” is based on a weak regulatory vision promulgated by OpenAI itself. The so-called “frontier AI” model, which was outlined in a recently published study, focuses on AI “safety” issues and makes some mild suggestions for how governments can mitigate the potential impact of automated programs that “could exhibit dangerous capabilities.” Given how well Silicon Valley’s self-regulation model has worked for us so far, you’d certainly hope that our designated lawmakers would wake up and override this self-serving, profit-driven legal roadmap.


You can compare the U.S.’s predictably sleepy-eyed acquiescence to corporate power to what’s happening across the pond, where Britain is prepping to host a global summit on AI. The summit also follows the fast-paced development of the European Union’s “AI Act,” a proposed regulatory framework that carves out modest guardrails for commercial artificial intelligence systems. Hey America, take note!

NEWS ORGS TO GOVERNMENT: PLEASE REGULATE AI BEFORE IT DESTROYS OUR ENTIRE INDUSTRY

This week, a number of media conglomerates penned an open letter urging that regulations be passed. The letter, signed by Gannett, the Associated Press, and a number of other U.S. and European media companies, says they “support the responsible advancement and deployment of generative AI technology, while believing that a legal framework must be developed to protect the content that powers AI applications as well as maintain public trust in the media that promotes facts and fuels our democracies.” Those in the media have good reason to be wary of new automated technologies. News orgs (including the ones that signed this letter) have been scrambling to position themselves as best they can in relation to a new industry that seems liable to eat traditional news media.


Question of the Day: Whose Job is Least at Risk of Being Stolen by a Robot?


Illustration: graficriver_icons_logo (Shutterstock)


We’ve all heard that the robots are coming to steal our jobs, and there’s been a lot of chatter about whose head will be on the chopping block first. But another question worth asking is: who is least likely to be laid off and replaced by a corporate algorithm? The answer, apparently, is barbers. That answer comes from a recently published Pew Research Center report that looked at the jobs considered most “exposed” to artificial intelligence (meaning they’re most likely to be automated). In addition to barbers, the people least likely to be replaced by a chatbot include dishwashers, child care workers, firefighters, and pipe layers, according to the report. Web developers and budget analysts, meanwhile, are at the top of AI’s hit list.

The Interview: Sarah Myers West on the Need for a “Zero Trust” AI Regulatory Framework


Screenshot: AI Now Institute/Lucas Ropek


Occasionally, we’re going to include an interview with a notable AI proponent, critic, wonk, kook, entrepreneur, or other such person who is connected to the field. We thought we’d start off with Sarah Myers West, who has had a decorated career in artificial intelligence research. In between academic stints, she recently served as a consultant on AI for the Federal Trade Commission and, these days, serves as managing director of the AI Now Institute, which advocates for industry regulation. This week, West and others released a new strategy for AI regulation dubbed the “Zero Trust” model, which advocates for strong federal action to safeguard against the more harmful impacts of AI. This interview has been lightly edited for brevity and clarity.

You’ve been researching artificial intelligence for quite some time. How did you first get interested in this subject? What was appealing (or alarming) about it? What got you hooked?


My background is as a researcher studying the political economy of the tech industry. That’s been the primary focus of my core work over the last decade, tracking how these big tech companies behave. My earlier work focused on the advent of commercial surveillance as a business model of networked technologies. The sorta “Cambrian” moment of AI is in many ways a byproduct of those dynamics of commercial surveillance—it sorta flows from there.

I also heard that you were a big fan of Jurassic Park when you were younger. I feel like that story’s themes definitely relate a lot to what’s going on with Silicon Valley these days. Relatedly, are you also a fan of Westworld? 


Oh gosh…I don’t think I made it through all the seasons.

It definitely seems like a cautionary tale that no one’s listening to.

The number of cautionary tales from Hollywood concerning AI really abounds. But in some ways I think it also has a detrimental effect, because it positions AI as this sort of existential threat, which is, in many ways, a distraction from the very real ways AI systems are affecting people in the here and now.


How did the “Zero Trust” regulatory model develop? I presume that’s a play off the cybersecurity concept, which I know you also have a background in.

As we’re considering the path forward for how to seek AI accountability, it’s really important that we adopt a model that doesn’t foreground self-regulation, which has largely characterized the [tech industry] approach over the past decade. In adopting greater regulatory scrutiny, we have to take a position of “zero trust” in which technologies are constantly verified [that they’re not doing harm to certain populations—or the population writ large].


Are you familiar with the Frontier Model Forum, which just launched last week?

Yeah, I’m familiar and I think it’s exactly the exemplar of what we can’t accept. I think it’s certainly welcome that the companies are acknowledging some core concerns but, from a policy standpoint, we can’t leave it to these companies to regulate themselves. We need strong accountability and to strengthen regulatory scrutiny of these systems before they’re in wide commercial use.


You also lay out some potential AI applications—like emotion recognition, predictive policing, and social scoring—as ones that should be actively prohibited. What stood out about those as being a big red line? 

I think that—from a policy standpoint—we should curb the greatest harms of AI systems entirely…Take emotion recognition, for example. There is widespread scientific consensus that the use of AI systems that attempt to infer anything about your inner state (emotionally) is pseudo-scientific. It doesn’t hold any meaningful validity—there’s robust evidence to support that. We shouldn’t have systems that don’t work as claimed in wide commercial use, particularly in the kinds of settings where emotion-recognition systems are being put into place. One of the places where these systems are being used is cars.


Did you say cars?

Yeah, one of the companies that was pretty front and center in the emotion recognition market, Affectiva, was acquired by a car technology company. It’s one of the developing use cases.


Interesting…what would they be using AI in a car for?

There’s a company called Netradyne and they have a product called “Driveri.” They are used to monitor delivery drivers. They’re looking at the faces of drivers and saying, “You look like you’re falling asleep, you need to wake up.” But the system is being instrumented in ways that seek to determine a worker’s effectiveness or their productivity…Call centers are another domain where [AI] is being used.


I presume it’s being used for productivity checks?

Sorta. They’ll be used to monitor the tone of voice of the employee and suggest adjustments. Or [they’ll] monitor the voice of the person who is calling in and tell the call center worker how they should be responding…Ultimately, these tools are about control. They’re about instrumenting control over workers or, more broadly speaking, AI systems tend to be used in ways that enhance the information asymmetry.


For years, we’ve all known that a federal privacy law would be a great thing to have. Of course, thanks to the tech industry’s lobbying, it’s never happened. The “Zero Trust” strategy advocates for strong federal regulations in the near-term but, in many ways, it seems like that’s the last thing the government is prepared to deliver. Is there any hope that AI will be different than digital privacy?

Yeah, I definitely understand the cynicism. That’s why the “Zero Trust” framework starts with the idea of using the [regulatory] tools we already have—enforcing existing laws through the FTC across different sectoral domains is the right way to start. There’s been an important signal from the enforcement agencies: the joint letter from a few months ago, which expressed their intention to do just that. That said, we definitely are going to need to strengthen the laws on the books, and we outline a number of paths forward that Congress and the White House can take. The White House has expressed its intention to use executive actions in order to address these concerns.


Catch up on all of Gizmodo’s AI news here, or see all the latest news here. For daily updates, subscribe to the free Gizmodo newsletter.
