Apple’s new Apple Intelligence system is designed to infuse generative AI into the core of iOS. The system offers users a host of new services, including text and image generation as well as organizational and scheduling features. Yet while it provides impressive new capabilities, it also brings complications. For one thing, Apple Intelligence relies on a huge amount of iPhone users’ data, presenting potential privacy risks. At the same time, its substantial need for computational power means that Apple will have to rely increasingly on the cloud to fulfill users’ requests.

Apple has historically offered iPhone customers unparalleled privacy; it’s a big part of the company’s brand. Part of those privacy assurances has been the option to choose when mobile data is stored locally and when it’s stored in the cloud. While an increased reliance on the cloud might ring some privacy alarm bells, Apple has anticipated these concerns and created a startling new system it calls Private Cloud Compute, or PCC. It is, at heart, a cloud security system designed to keep users’ data away from prying eyes while it’s being used to help fulfill AI-related requests.

On paper, Apple’s new privacy system sounds really impressive. The company claims to have created “the most advanced security architecture ever deployed for cloud AI compute at scale.” But what looks like a massive achievement on paper could ultimately cause broader issues for user privacy down the road. And it’s unclear, at least at this juncture, whether Apple will be able to live up to its lofty promises.

How Apple’s Private Cloud Compute Is Supposed to Work

In many ways, cloud systems are just giant databases. If a bad actor gets into that database, they can look at the data contained within. Apple’s Private Cloud Compute (PCC), however, brings a number of safeguards designed to prevent that kind of access.

Apple says it has implemented its security system at both the software and hardware levels. The company created custom servers to house the new cloud system, and those servers go through rigorous screening during manufacturing to ensure they are secure. “We inventory and perform high-resolution imaging of the components of the PCC node,” the company claims. The servers are also outfitted with physical security mechanisms such as a tamper-proof seal. iPhone users’ devices can only connect to servers that have been certified as part of the protected system, and those connections are end-to-end encrypted, meaning the data being transmitted is pretty much untouchable while in transit.
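
Apple hasn’t published client-side code, but the flow it describes (a device that will only hand data to a node it can verify as certified, over an end-to-end encrypted channel) can be roughly sketched. The Swift below is a hypothetical illustration: the attestation structure, the allowlist of certified image hashes, and the key-agreement scheme are all assumptions made for clarity, not Apple’s actual implementation.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch: before sending an AI request, the device checks that the
// PCC node presents a software-image hash it recognizes as certified, then
// encrypts the payload to that node's key so it is unreadable in transit.
// All names and structures here are illustrative, not Apple's actual API.

struct NodeAttestation {
    let publicKey: Curve25519.KeyAgreement.PublicKey  // per-node key advertised by the server
    let imageHash: String                              // hash of the software image the node claims to run
}

struct PCCClient {
    let certifiedImageHashes: Set<String>  // hashes of published, certified software images

    // Returns the encrypted request, or nil if the node is not certified.
    func encryptRequest(_ plaintext: Data, for node: NodeAttestation) throws -> Data? {
        guard certifiedImageHashes.contains(node.imageHash) else { return nil }

        // Ephemeral key agreement plus an AEAD cipher gives end-to-end encryption:
        // only the attested node can decrypt the payload.
        let ephemeral = Curve25519.KeyAgreement.PrivateKey()
        let secret = try ephemeral.sharedSecretFromKeyAgreement(with: node.publicKey)
        let key = secret.hkdfDerivedSymmetricKey(
            using: SHA256.self, salt: Data(), sharedInfo: Data(), outputByteCount: 32)
        let sealed = try ChaChaPoly.seal(plaintext, using: key)
        return ephemeral.publicKey.rawRepresentation + sealed.combined
    }
}
```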

Once the data reaches Apple’s servers, there are more protections to ensure that it stays private. Apple says its cloud is leveraging stateless computing to create a system where user data isn’t retained past the point at which it is used to fulfill an AI service request. So, according to Apple, your data won’t have a significant lifespan in its system. The data will travel from your phone to the cloud, interact with Apple’s high-octane AI algorithms—thus fulfilling whatever random question or request you’ve submitted (“draw me a picture of the Eiffel Tower on Mars”)—and then the data (again, according to Apple) will be deleted.
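
To make the “stateless” idea concrete, here is a toy Swift sketch of a request handler in which the user’s payload exists only for the lifetime of a single call and is never written anywhere persistent. It illustrates the property Apple is describing, not the company’s actual server code.

```swift
import Foundation

// Toy sketch of stateless request handling: the prompt exists only inside the
// scope of a single call. There is no logging, caching, or database write, so
// nothing about the request survives once the response is returned.
// This is a conceptual illustration, not Apple's server code.

struct AIRequest {
    let prompt: Data
}

struct AIResponse {
    let result: Data
}

func handleStatelessly(_ request: AIRequest, using model: (Data) -> Data) -> AIResponse {
    // The only thing that leaves this function is the response itself.
    AIResponse(result: model(request.prompt))
}

// Usage: the caller gets an answer back; the handler keeps no copy of the prompt.
let answer = handleStatelessly(AIRequest(prompt: Data("Eiffel Tower on Mars".utf8))) { $0 }
```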

Apple has instituted an array of other security and privacy protections that can be read about in more detail on the company’s blog. These defenses, while diverse, all seem designed to do one thing: prevent any breach of the company’s new cloud system.

But Is This Really Legit?

Companies make big cybersecurity promises all the time, and it’s usually impossible to verify whether they’re telling the truth. FTX, the failed crypto exchange, once claimed it kept users’ digital assets in air-gapped servers. Later investigation showed that was pure bullshit. But Apple is different, of course. To prove to outside observers that it’s really securing its cloud, the company says it will launch something called a “transparency log,” which will include full production software images (basically copies of the code the system runs). It plans to publish these logs regularly so that outside researchers can verify that the cloud is operating just as Apple says.
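
As a rough illustration of what that verification could look like, the hypothetical Swift sketch below checks whether the hash of a software image appears in a published log. The log structure and hashing choice are assumptions for illustration; Apple’s actual format may differ.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch of transparency-log verification: the published log holds
// hashes of production software images, and anyone can recompute the hash of an
// image a server claims to run and confirm it appears in the log.
// The log format and hashing choice are assumptions for illustration.

struct TransparencyLog {
    let publishedImageHashes: Set<String>

    func contains(imageData: Data) -> Bool {
        let digest = SHA256.hash(data: imageData)
        let hex = digest.map { String(format: "%02x", $0) }.joined()
        return publishedImageHashes.contains(hex)
    }
}

// A device (or an outside researcher) would treat a node as trustworthy only if
// its claimed software image is present in the log, so user data is only handled
// by software that has been made public for review.
```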

What People Are Saying About the PCC

Apple’s new privacy system has notably polarized the tech community. While the sizable effort and unparalleled transparency that characterize the project have impressed many, some are wary of the broader impacts it may have on mobile privacy in general. Most notably—aka loudly—Elon Musk immediately began proclaiming that Apple had betrayed its customers.

Simon Willison, a web developer and programmer, told Gizmodo that the “scale of ambition” of the new cloud system impressed him.

“They are addressing multiple extremely hard problems in the field of privacy engineering, all at once,” he said. “The most impressive part I think is the auditability—the bit where they will publish images for review in a transparency log which devices can use to ensure they are only talking to a server running software that has been made public. Apple employs some of the best privacy engineers in the business, but even by their standards this is a formidable piece of work.”

But not everybody is so enthused. Matthew Green, a cryptography professor at Johns Hopkins University, expressed skepticism about Apple’s new system and the promises that went along with it.

“I don’t love it,” said Green with a sigh. “My big concern is that it’s going to centralize a lot more user data in a data center, whereas right now most of that is on people’s actual phones.”

Historically, Apple has made local data storage a mainstay of its mobile design, because cloud systems are known for their privacy deficiencies.

“Cloud servers are not secure, so Apple has always had this approach,” Green said. “The problem is that, with all this AI stuff that’s going on, Apple’s internal chips are not powerful enough to do the stuff that they want it to do. So they need to send the data to servers and they’re trying to build these super protected servers that nobody can hack into.”

He understands why Apple is making this move, but doesn’t necessarily agree with it, since it means a higher reliance on the cloud.

Green says Apple also hasn’t made it clear whether it will explain to users what data remains local and what data will be shared with the cloud. This means that users may not know what data is being exported from their phones. At the same time, Apple hasn’t made it clear whether iPhone users will be able to opt out of the new PCC system. If users are forced to share a certain percentage of their data with Apple’s cloud, it may signal less autonomy for the average user, not more. Gizmodo reached out to Apple for clarification on both of these points and will update this story if the company responds.

To Green, Apple’s new PCC system signals a shift in the phone industry to a more cloud-reliant posture. This could lead to a less secure privacy environment overall, he says.

“I have very mixed feelings about it,” Green said. “I think enough companies are going to be deploying very sophisticated AI [to the point] where no company is going to want to be left behind. I think consumers will probably punish companies that don’t have great AI features.”
