As cloud adoption gains traction, it’s clear that security teams have been left playing catch-up. In diverse hybrid-cloud and multi-cloud environments, encrypting data at rest and in transit isn’t enough; it needs to be encrypted in use, too. This is where confidential computing comes in.
Today, the Open Confidential Computing Conference (OC3) gathered IT industry leaders to discuss the development of confidential computing. Hosted by Edgeless Systems, the event welcomed more than 1,200 attendees, including technologists and academics.
What confidential computing is — and isn’t
One of the core panel discussions at the event, led by Mark Russinovich, CTO of Microsoft Azure, centered on defining what confidential computing is — and isn’t.
In practice, a confidential computing vendor provides secure hardware that stores encryption keys within an encrypted trusted execution environment (TEE). The TEE encrypts data and code while in use so they can’t be modified or accessed by unauthorized third parties.
“Data in use means that, while an application is running, it’s still impossible for a third party — even the owner of the hardware the application is running [on] — from ever seeing the data in the clear,” said Mark Horvath, senior director analyst at Gartner.
Encrypting data-in-use, rather than at rest or in transit, means that organizations can confidentially and securely process personally identifiable information (PII) or financial data with AI, ML and analytics solutions without exposing it in memory on the underlying hardware.
It also helps protect organizations from attacks that target code or data in use, such as the memory-scraping and malware-injection attacks launched against Target and the Ukraine power grid.
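Conceptually, the boundary works like the sketch below: ciphertext flows in and out, and plaintext exists only inside the enclave and on the data owner’s side. The XOR keystream here is a toy stand-in (illustration only, not real cryptography), and the `ToyEnclave` class is invented for this example rather than any actual confidential-computing API.

```python
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR keystream derived from SHA-256 -- illustration only, NOT secure."""
    stream = b"".join(
        hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        for i in range(len(data) // 32 + 1)
    )
    return bytes(a ^ b for a, b in zip(data, stream))

class ToyEnclave:
    """Plaintext and the processing key exist only inside this boundary."""
    def __init__(self, key: bytes):
        self._key = key  # provisioned by the data owner after (omitted) attestation

    def process(self, ciphertext: bytes) -> bytes:
        plaintext = toy_cipher(self._key, ciphertext)  # decrypted only in here
        result = plaintext.upper()                     # computation on data in use
        return toy_cipher(self._key, result)           # re-encrypted before leaving

owner_key = secrets.token_bytes(32)
enclave = ToyEnclave(owner_key)
ct_in = toy_cipher(owner_key, b"pii: jane doe")  # encrypted in transit
ct_out = enclave.process(ct_in)                  # never in the clear outside
print(toy_cipher(owner_key, ct_out))             # b'PII: JANE DOE'
```

The point of the sketch is the shape of the flow, not the cipher: at no step does readable data appear outside the enclave object or the key holder.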
Introducing the confidential cloud
One of the underlying themes at the OC3 event, particularly in a presentation by Intel CTO Greg Lavender, was how the concept of the confidential cloud is moving from niche to mainstream as more organizations experiment with use cases at the network’s edge.
“The use cases are expanding rapidly, particularly at the edge, because as people start doing AI and machine learning processing at the edge for all kinds of reasons [such as autonomous vehicles, surveillance infrastructure management], this activity has remained outside of the security perimeter of the cloud,” said Lavender.
The traditional cloud security perimeter is based on the idea of encrypting data-at-rest in storage and as it transits across a network, which makes it difficult to conduct tasks like AI inferencing at the network’s edge. This is because there’s no way to prevent information from being exposed during processing.
“As the data there becomes more sensitive — particularly video data, which could have PII information like your face or your driver’s [license] or your car license [plate] number — there’s a whole new level of privacy that intersects with confidential computing that needs to be maintained with these machine learning algorithms doing inferencing,” said Lavender.
In contrast, adopting a confidential cloud approach enables organizations to run workloads in a TEE, securely processing and inferencing data across the cloud and at the network’s edge, without leaving PII, financial data or biometric information exposed to unauthorized users and compliance risk.
This is a capability that early adopters are aiming to exploit. After all, in modern cloud environments, data isn’t just stored and processed in a ring-fenced on-premises network with a handful of servers, but in remote and edge locations with a range of mobile and IoT devices.
The next level: Multi-party computation
Organizations that embrace confidential computing unlock many more opportunities for processing data in the cloud. For Russinovich, some of the most exciting use cases are multi-party computation scenarios.
These are scenarios “where multiple parties can bring their data and share it, not with each other, but with code that they all trust, and get shared insights out of that combination of data sets with nobody else having access to the data,” said Russinovich.
Under this approach, multiple organizations can share data sets to process with a central AI model without exposing the data to each other.
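As a rough illustration of the pattern Russinovich describes, the sketch below simulates that arrangement in Python. The `SimulatedEnclave` class and its hash-based “measurement” are invented for illustration and stand in for real TEE machinery; the key property is that parties submit data to code they have all audited, and only the shared result comes back out.

```python
import hashlib

class SimulatedEnclave:
    """Stands in for a hardware TEE: parties trust the code, not each other."""
    def __init__(self, trusted_code):
        self._code = trusted_code
        self._datasets = []
        # A hash of the trusted code, loosely analogous to a TEE measurement.
        self.measurement = hashlib.sha256(trusted_code.__code__.co_code).hexdigest()

    def submit(self, dataset):
        self._datasets.append(dataset)  # raw submissions never leave the enclave

    def run(self):
        return self._code(self._datasets)  # only the shared insight is returned

def average_of_all(datasets):
    """The code every party audits and agrees to trust."""
    values = [value for dataset in datasets for value in dataset]
    return sum(values) / len(values)

enclave = SimulatedEnclave(average_of_all)
enclave.submit([70, 80])  # party A's readings
enclave.submit([90])      # party B's readings
print(enclave.run())      # 80.0 -- neither party sees the other's raw data
```

In a real deployment the measurement would be produced and signed by the hardware rather than computed in software, but the trust relationship is the same: each party checks what code will run before contributing data.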
One example of this is Accenture’s confidential computing pilot developed last year. This used Intel’s Project Amber solution to enable multiple healthcare institutions and hospitals to share data with a central AI model to develop new insights on how to detect and prevent diseases.
In this particular pilot, each hospital trained its own AI model before sending information downstream to be aggregated within a centralized enclave, where a more sophisticated AI model processed the data in more detail without exposing it to unauthorized third parties or violating regulations like HIPAA.
It’s worth noting that in this example, confidential computing differs from federated learning because it provides attestation that the data and code inside the TEE are unmodified, which enables each hospital to trust the integrity and legitimacy of the AI model before handing over regulated information.
The state of confidential computing adoption in 2023
While interest in confidential computing is growing as more practical use cases emerge, the market remains in its infancy, with Absolute Reports estimating its value at $3.2 billion in 2021.
However, for OC3 moderator Felix Schuster, CEO and founder of Edgeless Systems, adoption of confidential computing is rapidly “deepening.”
“Everything is primed for it,” said Schuster, who pointed out that Greg Lavender recently spoke in front of 30 Fortune 500 CISOs, only two of whom had heard of confidential computing. After the presentation, 20 of them followed up to learn more.
“This unawareness is a paradox, as the tech is widely available and amazing things can be done with it,” said Schuster. “There is consensus between the tech leaders attending the event that all of the cloud will inevitably become confidential in the next few years.”
Broader adoption will come as more organizations begin to understand the role it plays in securing decentralized cloud environments.
Considering that members of the Confidential Computing Consortium include Arm, Facebook, Google, Nvidia, Huawei, Intel, Microsoft, Red Hat, AMD, Cisco and VMware, the solution category is well poised to grow significantly over the next few years.
Why regulated industries are adopting confidential computing
So far, confidential computing adoption has largely been confined to regulated industries, with more than 75% of demand driven by sectors including banking, finance, insurance, healthcare, life sciences, the public sector and defense.
As the Accenture pilot indicates, these organizations are experimenting with confidential computing as a way to reconcile data security with accessibility so that they can generate insights from their data while meeting ever-mounting regulatory requirements.
Keeping up with regulatory compliance is one of the core drivers of adoption among these organizations.
“The technology is generally seen as a way to simplify compliance reporting for industries such as healthcare and financial services,” said Brent Hollingsworth, director of the AMD EPYC Software Ecosystem.
“Instead of dedicating costly efforts to set up and operate a secure data processing environment, organizations can process sensitive data in encrypted memory on public clouds — saving costs on security efforts and data management,” said Hollingsworth.
In this sense, confidential computing gives decision-makers both peace of mind and assurance that they can process their data while minimizing legal risk.