Sensitive data needs to be encrypted both at rest and in transit, that is, both when it’s passively stored and when it’s being sent from one place to another. Covering those two bases protects information much of the time, but it still leaves out a third state: the moment data is actually in use. Now Google Cloud Services, which counts PayPal, HSBC, and Bloomberg as customers, is working to fill that crucial gap.
When you’re storing tons of data in the cloud, you typically don’t just move it into place and then leave it. Organizations generally want to actively process the information they hold, meaning cloud customers want to comb and index their data, train machine learning models with it, or otherwise crunch some numbers. With that priority in mind, Google is today introducing a “confidential computing” feature known as Confidential Virtual Machines that will allow customers to keep their data encrypted and inaccessible even while it’s being processed. Any entity that runs its own data center might want this protection, but it’s especially valuable when organizations entrust their data to rented infrastructure from cloud providers like Google. Without confidential computing mechanisms to process data privately, Google needs unencrypted, cleartext access to that data while it’s in use, a gap that law enforcement or attackers could exploit to reach information stored in the cloud.
“It’s kind of obvious for our customers to look into cloud for capacity, but then the bigger problem is the lines are blurring,” says Nelly Porter, a senior product manager at Google Cloud. “Who is in control? How can I assure that I can protect my data? Data has to be processed, so you load it in memory in clear text and don’t have any additional protection. And confidential computing is trying to solve this. Despite the fact that we have very strong protections and isolation, cryptographic isolation is much stronger.”
The new feature uses second-generation “Epyc” processors from AMD as its foundation. In a cloud platform where lots of customers share the same infrastructure, “virtual machines” are a common tool for keeping customers separate, giving each of them their own “computers” and “servers” through virtualization rather than through dedicated physical hardware. The virtual machines Google and AMD launched for Google Cloud Services in February (known as N2D VMs) have special security features that allow AMD’s processors to generate and manage encryption keys that stay on the chip. In this way, the system can encrypt a customer’s virtual machines so they’re inaccessible from Google Cloud Services’ general infrastructure, but can still decrypt data for processing inside a walled garden that only the customer can access.
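To make the setup concrete, here is a minimal sketch, assuming the google-cloud-compute Python client and placeholder project, zone, instance, and image values (the function name is illustrative, not an official Google sample), of how a customer might request one of these Confidential VMs. The confidential_instance_config setting is the piece that asks Compute Engine to encrypt the VM’s memory with a key generated and held on the AMD processor.

```python
# Minimal sketch (not an official Google sample): request a Confidential VM
# on an N2D machine type, which runs on second-generation AMD Epyc chips.
# Project ID, zone, instance name, and boot image below are placeholders.
from google.cloud import compute_v1


def create_confidential_vm(project: str, zone: str, name: str) -> None:
    client = compute_v1.InstancesClient()

    instance = compute_v1.Instance(
        name=name,
        # Confidential VMs are offered on the N2D (AMD Epyc) machine types.
        machine_type=f"zones/{zone}/machineTypes/n2d-standard-2",
        # The switch for confidential computing: memory stays encrypted with a
        # key managed by the AMD chip rather than by Google's infrastructure.
        confidential_instance_config=compute_v1.ConfidentialInstanceConfig(
            enable_confidential_compute=True
        ),
        # Encrypted VMs can't be live-migrated, so host maintenance terminates them.
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    # Placeholder image; Confidential VMs need an OS image that supports them.
                    source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts"
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )

    # Create the instance and wait for the operation to finish.
    client.insert(project=project, zone=zone, instance_resource=instance).result()
```

Everything else in the request is an ordinary VM definition; the confidentiality guarantee comes from that one configuration block and the AMD hardware behind it.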
“If I look at today, an admin has the ability to peer in and see what’s going on in each one of those VMs. And if I have a bad actor on one of those VMs there are tools that they can use to break out into neighbors’ VMs, peer inside and see the data, because it’s all unencrypted,” says Greg Gibby, senior product manager at AMD. “But now as the admin spins up VMs, they can no longer peer into those VMs and see the data. And if I have a bad actor in those VMs and they break into another one they can’t see the data that’s encrypted.”
With Confidential Virtual Machines turned on, data is decryptable on the chip itself but remains encrypted to everyone else, including Google, which can’t access the decryption keys stored only on the chip.
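As a small follow-on sketch, again with placeholder names and the same assumed Python client, a customer can read an instance’s configuration back to confirm that the memory-encryption flag is actually set:

```python
# Small follow-on sketch (placeholder project/zone/name): read an instance back
# and check whether the Confidential VM memory-encryption setting is enabled.
from google.cloud import compute_v1


def is_confidential_vm(project: str, zone: str, name: str) -> bool:
    instance = compute_v1.InstancesClient().get(
        project=project, zone=zone, instance=name
    )
    # Defaults to False for ordinary VMs that never requested the feature.
    return bool(instance.confidential_instance_config.enable_confidential_compute)
```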
Leaning on the processor this heavily, though, could make the chip’s own security a single point of failure.
“The high-level goal of ‘confidential computing’ is to protect data even when it is ‘in use.’ That’s a fantastic goal and would have a major impact on security and privacy,” says Brown University researcher Seny Kamara. “That being said, I’m really skeptical about providing confidential computing based on special hardware. We’ve seen so many attacks already against purpose-built chips like Intel SGX, and there’s no reason to believe the attacks will stop.”