Making Data Better

Confidential Computing: Protecting Data and Code in Use

January 29, 2024 Lockstep Consulting Pty Ltd

Data provides the basis for how we make decisions. An enemy of security these days, from our point of view, is plain text. We need better than that. We need device-assisted support for proving where data comes from and how it's been handled. We need systems that keep data (and code) from being altered without cause, that give us the ability to trace the change history of data. 

Confidential computing is a new compute paradigm that provides a hardware-based foundation for running code and the data it manipulates. It safeguards data and code (it's all data; it's all code) in its most vulnerable state: while it's being processed.

In this episode of Making Data Better Steve and George are joined by Anjuna's Mark Bauer to dive into this new model's high impact on security and low impact on cloud app development.

Mark dissects the mechanics behind this approach including how it strengthens the software supply chain through hardware-based attestation. He addresses its fit in modern cloud infrastructure including Kubernetes, data loss prevention (DLP), API scanning and more.

The conversation addresses the initial major use cases for confidential computing. High-risk environments including defense, banking, and healthcare are obvious. Not so obvious is securing multi-party data sets in the cloud for machine learning and AI-based applications.

So take a listen to this episode of Making Data Better and learn how hardware-based security can harden the cloud. 

Speaker 1:

Welcome to Making Data Better, a podcast about data quality and the impact it has on how we protect, manage and use the digital data critical to our lives. I'm George Peabody, partner at Lockstep, and thanks for joining us. With me is Lockstep founder, Steve Wilson. Hey, Steve.

Speaker 2:

Good day George.

Speaker 1:

How are you going? Good to see you. Happy New Year. Here's to a great 2024 for all of us. So, Steve, today we're going to talk about an important technology that has much to offer for those concerned with security but, at least as far as I'm concerned, is not well understood or even known. It's called confidential computing. We're talking about this on our podcast about making data better because data is the basis on which we make decisions, and we need systems that keep data from being altered without cause, that give us the ability to trace the change history of data. And, as you and I both talk about all the time, the enemy of security these days, from our point of view, is plain text. We need better than that. We need device-assisted support for proving where data comes from and how it's been handled. So here's this new notion: confidential computing. We went to the trade group called the Confidential Computing Consortium, and here's the definition they posted. It says confidential computing protects data in use by performing computation in a hardware-based, attested, trusted execution environment. We're going to get to all those terms. These secure and isolated environments prevent unauthorized access or modification of applications and data while in use, thereby increasing the security assurance for organizations that manage sensitive and regulated data. So you can see the domain here. I know, Steve, you're going to like it, and I like it too, because it's converging software and hardware. We think the two need to go together when we're talking about security. Absolutely.

Speaker 2:

I love confidential computing, George. It's an idea whose time has finally come. We've had a go as an industry at hardware-based security for a long time. Trying not to get too technical, most listeners would be familiar with the idea of encryption, and the concept that we need encryption over data when it's in storage, at rest, and we need encryption over data when it's moving, in motion. Now confidential computing is about yet another idea: using encryption over data when it's in use. There are a number of ways of doing this. There's homomorphic encryption, there's secure multi-party computation. I think confidential computing is the best go that we've had at this so far, and I'm really looking forward to diving into some of those details. I'm just going to flag attestation, the idea that somebody's got your back about the quality of the data, the quality of the stories, the quality of the algorithms, and the quality of the computing environment that we're talking about. It's a bit Wild West at the moment. There are all sorts of secure chips and enclaves and secure elements and God knows what. How do you know that any of this hardware is actually in a proper state? How do we know that it's fit for purpose? That's what we're going to get into today, and how do these characteristics become available to the enterprise?

Speaker 1:

To take us deeper into this topic, I'm delighted to welcome Mark Bauer, who is VP of Product at the confidential computing company Anjuna. Mark joined me on episode three of Making Data Better, where I asked him to set his way-back machine to the aughts and his role in helping Heartland Payments increase the security of its systems after a card breach. Mark, I'm delighted to have you come back to Making Data Better and to talk about what you're actually doing today.

Speaker 3:

Yeah. So, George and Steve, good to meet you again, and thank you for having me back. I'm happy to be here. So, yes, I think Steve's absolutely right. Confidential computing is a relatively new technology, though the concepts have been around for some time. Here at Anjuna we have the vision that anybody, be it enterprises, software companies, SaaS providers, should have access to this technology because of the bar-raising capabilities that it has in reducing risk. If you think about all of the breaches that we've had over the many years, they often start in memory or in software that is vulnerable, and being able to reduce those vulnerabilities really has to come down to taking it down to the hardware itself, which is what confidential computing is about.

Speaker 1:

Wow. So before we get into all of that, Mark, tell us a little bit about your backstory. How did you come to this role and interest?

Speaker 3:

Yeah, so it actually goes back to the story that we told in the previous episode, where we think about the Heartland breach, how it came about, and how it was resolved. It was ultimately systems that were vulnerable, attackers getting in, getting access to things that they should have had zero access to, and exploitation of vulnerabilities that resulted in hundreds of millions of credit cards being exposed. Heartland reacted in a really positive way to turn that from being a potential disaster into a change that the industry had to embrace, which was introducing hardware to protect data as it's acquired from the card readers themselves, which were always quite secure but weren't used in a way that protected card data, and then keeping that data secure all the way to the backend systems. That was in the days of what we thought of as end-to-end security for card data. When you think about the problems we're dealing with now, when organizations are embracing machine learning, massive amounts of data, video feeds, live audio coming in, you need a better way to handle not just the data but the code itself. I'd been watching this technology come about over the last few years. I'd had it on my product roadmaps in the past, and I'd spent time at Amazon in the hardware security group as well. So when I saw the opportunity to jump into a vendor that was spearheading the development of this technology at a very early stage in the market, I jumped across, because it is a shift in the way compute will take place. I think within five years, and it could be even sooner than that, it'll almost become the default way we compute, whether we like it or not, because of the demands on security, the risks that we're dealing with, the regulations, and the privacy requirements that people ultimately expect. So I think there's a lot of change coming as a result, and I want to be part of that.

Speaker 1:

Cool. This is actually kind of hurting my head, because I'm trying to square it with how we do business today. The folks who've been breached in the past, they just store data. There's no such thing as data they don't like.

Speaker 3:

Right this is a data opportunity.

Speaker 1:

Right, and, as we know, the downside is it's also an opportunity for fraudsters. Beyond the breach liability, it's an opportunity for fraudsters to steal it. Where does confidential computing come in? I mean, we think data minimization is a great idea, but I take it that you're applying confidential computing principles to the storage of all that data.

Speaker 3:

Well, when you think about it, and Steve alluded to this, we've had data-at-rest protection for many years, and today you almost wouldn't go into a cloud or even into your own data center without data-at-rest protection. For the most part, if it's there, it should be turned on. In fact, we've abstracted it to the hardware in a lot of cases. Same with data in motion. Data in motion is about the integrity of the data, the confidentiality of the data, and making sure you know who you're talking to at either end. But all too often, all of those environments are still processing data in memory during use. You read data from a database into memory to process it, and now that data is in the clear, and it's remarkably easy, if you have the right level of privilege, which you can gain from malware and from vulnerable software, to get access to that memory. There have been many attacks as a result that have compromised systems, because you can pull things straight out of memory: you can pull out the personal data without having to attack the systems themselves. Or worse, we see encryption keys being stolen out of memory. Maybe it's a memory dump that gets moved to another system, and that key then gets extracted by an attacker in a lower-trust environment and used to decrypt data that's in a production environment. So the data-in-use problem is very, very real, and it's been very difficult to solve until you take a hardware-based approach, which is essentially what confidential computing is. At its simplest, it's taking the memory, encrypting it during use, and restricting the processing to processes that themselves can decrypt that content.
At the very core it's encrypting the registers, the caches, everything else, so that you're shrinking the attack surface to only the computational thing, the CPU core itself, which also has its own security properties, and you're eliminating unauthorized access to memory. If you can do that, then data that is protected at rest or transported out of that environment can also be protected from within the most trusted place, essentially the CPU environment itself. That's fundamentally what it's about, at the highest level.
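Mark's point, that memory contents stay ciphertext to everything outside the CPU boundary, can be sketched in a few lines. This is a toy illustration only: real platforms use hardware AES memory encryption with keys fused into the processor, whereas the key derivation and XOR keystream below are stand-ins invented for the example, not real cryptography.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (toy stand-in for the
    hardware memory-encryption engine)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_memory(data: bytes, cpu_key: bytes) -> bytes:
    """XOR with the keystream; applying it twice decrypts."""
    ks = keystream(cpu_key, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

cpu_key = secrets.token_bytes(32)          # held inside the CPU, never exported
page = b"ACCOUNT=12345678;BALANCE=9000"    # sensitive data "in use"
ciphertext = encrypt_memory(page, cpu_key)

# An attacker dumping RAM sees only ciphertext...
assert ciphertext != page
# ...while the CPU core, holding the key, transparently decrypts.
assert encrypt_memory(ciphertext, cpu_key) == page
```

The design point is that the key never leaves the processor, so a memory dump, the attack Mark describes, yields nothing usable.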

Speaker 1:

So you're taking advantage of the hardware that's built into the CPU. The TEE, the trusted execution environment.

Speaker 3:

Yeah, very much so. In most modern processors now, or in some of the hyperscalers that have their own proprietary versions of this, there are instructions that have been added to the processors to more or less isolate the processing. So either creating a virtual machine instance that is isolated from the rest of the processor, instrumenting a process itself so that it cannot be seen by any of the other cores, or even by the rest of the operating system, or creating an isolated instance that is locked down. So you have techniques like memory encryption and isolation, then things like process isolation, plus, on top of that, attestation, as Steve mentioned, which I'll get to. In essence, it's new capabilities in the processors that give you what were typically things you'd do using, say, hardware security modules, very bespoke pieces of kit that you would rack and stack for things like payments encryption and key management, but for more general-purpose computation. That gives you the elasticity you need, the scale that you need, as opposed to running in a very confined hardware box that did one thing well and secured it well. Now you can run any application in this kind of environment with the right enabling software.

Speaker 1:

And are you able to virtualize the TEE, then?

Speaker 3:

Essentially, yeah. So there are hardware capabilities, there are mechanisms called hardware roots of trust, and there are hardware modules in the processors that handle the encryption, that offload the encryption of memory. It's not done on the regular processor; it's all offloaded, so it runs at full speed, if you will. But the notion of having hardware roots of trust gives you this ability to always come back and prove that the software that you're running is in this trusted environment, which gives you a different way to start to prove that you're running in an environment where you can at least establish there was an acceptable level of security before you started to use it. That's something you just don't get with regular processors at all; there, you make an assumption of trust.

Speaker 2:

So what's different now, Mark? We throw some jargon around, like trusted platform modules and secure elements and hardware security modules. I think most people have got a sense of this. A hardware security module is like a shoebox-size piece of kit that runs in a computer rack, and it costs $50,000, and it's a great idea, but unless you're a bank you can't afford it. We've had commoditized versions of this before. The trusted platform module was supposed to be a chip on the motherboard of every PC, and in fact most PCs have got them, but they're not turned on. What's happened, leading question, to make confidential computing accessible, where we've had all these dead ends before?

Speaker 3:

Yeah, so confidential computing itself has actually been around for more than five years. In fact it started probably almost a decade ago, with very early processors that tended to look more like the hardware security module: small memory footprints, limited capabilities, very complex to use, just like HSMs, in fact. These days, though, that's changed. Think about the demands of modern workloads. You're running an AI workload that needs to process billions of points in a model, or a core banking system that you want to run in these kinds of environments. Then you need the same computation capability that you get with regular instances in the cloud. I need 200 processors, I need 10,000 processors; the technology is scalable to that level, number one. Availability is also huge now. Nearly every AWS instance has what's called AWS Nitro Enclaves, which is their term for confidential computing.

Speaker 2:

It's extensions to run isolated workloads, and so this is much more than what Amazon and others have had, like cloud HSMs, for a long time. It's going beyond that, isn't it?

Speaker 3:

Yes. The HSMs are great for storing keys and signing transactions, like PKI: I've got a document that I need to sign with a digital certificate in a high-integrity environment. It's great for that. It's not great if you want to run core banking in one of those things, because they're just not designed for it, and you'd have to re-architect the app and everything about it. Proprietary operating systems; it's a mess. These days you want to run Kubernetes applications, and to run them in an isolated environment so that insiders have no access, so your admins have no access to them.

Speaker 2:

Re-architecting with special SDKs and scripting languages.

Speaker 3:

Exactly, exactly. That's what our role here at Anjuna is: making this very, very simple, so that you're not only taking advantage of this environment because it's there and you can, which is just a good thing generally, better security overall. But it starts to get into that problem of how do I trust something before I use it? If you think about that, when you go to a cloud today and you get an instance, it's like getting a server. You assume that the BIOS is high integrity and has no backdoors. You assume that the hypervisor you're using is good. You assume that the operating system is good, and you look at the certifications that say this organization went through PCI, HIPAA, SOC 2, etc., which is an assessment done by a human at some point in time. But it doesn't give you a way to actually measure and prove mathematically that this is a piece of hardware that meets this level of security and has this level of firmware and BIOS and microcode, so I know it hasn't been tampered with. Confidential computing lets you do those things. You can ask the hardware: tell me what state you are in, and prove it. Tell me what software you're running, and is it the same software that I built in my secure environment over here in the data center, so I know that it hasn't been tampered with. Think of SolarWinds, that attack where there was manipulation of compilers and libraries being inserted into the supply chain. You could prevent that kind of situation by showing that this code hasn't been tampered with before I run it, and I can prove that, and the hardware can tell me. It's not a piece of software that could be manipulated; it's the processor itself that can prove it. That's a game changer when it comes to real things like zero trust, where we're not just relying on one-way assumptions.

Speaker 2:

To remind the audience, SolarWinds was the so-called software supply chain attack. It was almost like a black swan event. It shouldn't have been. I mean, we should have known all along that software is incredibly complicated. It's got its own life story, it's got its own supply chain, and what happened with SolarWinds was that the attackers were very smart about elements of that supply chain. They found the software module sub-providers, the subcontractors that were most vulnerable, and they attacked those. Then the software modules come back into the mainstream, everything's recompiled, everything runs, and you've got that vulnerability lurking in the supply chain that has been exploited. That is really why George and I are so interested in confidential computing, foundationally, because it helps us tell the story behind the data. Code is data, and it helps us improve that confidence in the backstory of the code and the data that we're all depending on.

Speaker 1:

Yeah, exactly. Are you taking some fingerprint of the compiled code on the development system and then using that to compare against the running code that's inside your system?

Speaker 3:

It's a little more than that. When you think about all of the things that you can measure, what constitutes something that you're going to run? Well, you've got things like the bootloader and the firmware, you've got the microcode version in the processor, then you've got the actual code itself for the application, which may be a containerized application, multiple things, and then you've also got the initial memory state you expect it to start from. All of those things constitute, essentially, what you expect to run. So when you've gone through your build process, and typically you'll have good security practices around the build process, you've got to make sure that what you built is actually what you run: that you don't have an unexpected hypervisor that's actually leaking data out to a third party, that you don't have an operating system with a backdoor that you didn't expect to be there, that you're not using something outside the build process that has a vulnerability that somehow crept in, or malware being injected somewhere. So essentially, you're taking secure hashes of all of those components. You can choose which ones you want to do, depending on your sophistication and risk tolerance, but it's all about making measurements of things, computing hashes, and then asking the hardware to compute those hashes for me. It will give you those measurements back, and you can then look at those measurements and compare them to what you built. The upside of this, too, is that the hardware has digitally signed those measurements with a key that's embedded into the secure processor at manufacture time, so this becomes evidence of what you're running, and that evidence can also be used as identity, like a machine identity, to then do things like pull a secret into that application.
And so the old way of having to give a cred to an app after you start it, and then hope that it's in a secure environment and hope that that cred doesn't get stolen by an insider, that kind of threat can be eliminated.
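The measurement-and-compare flow Mark describes can be sketched as follows. This is a hypothetical illustration, not any platform's real attestation API: the component names, the HMAC used as a stand-in for the processor's fused signing key, and all the byte values are invented for the example. Real platforms (Intel SGX/TDX, AMD SEV-SNP, AWS Nitro) do the measuring and signing in silicon.

```python
import hashlib
import hmac

def measure(components: dict) -> dict:
    """Hash each boot component, as the hardware would at launch."""
    return {name: hashlib.sha256(blob).hexdigest()
            for name, blob in components.items()}

def attest(measurements: dict, hw_key: bytes) -> bytes:
    """Sign a canonical report of the measurements (HMAC stands in for
    the device key embedded at manufacture time)."""
    report = "|".join(f"{k}={v}" for k, v in sorted(measurements.items()))
    return hmac.new(hw_key, report.encode(), hashlib.sha256).digest()

# Recorded during the secure build process.
expected = measure({"bootloader": b"bl-v2", "firmware": b"fw-1.7",
                    "app": b"app-binary"})

# At launch, the platform re-measures what is actually about to run.
running = measure({"bootloader": b"bl-v2", "firmware": b"fw-1.7",
                   "app": b"app-binary"})
tampered = measure({"bootloader": b"bl-v2", "firmware": b"fw-1.7",
                    "app": b"app-binary-with-backdoor"})

assert running == expected    # matches the secure build: OK to trust
assert tampered != expected   # a SolarWinds-style swap is caught

hw_key = b"embedded-at-manufacture"   # stand-in for the fused device key
signature = attest(running, hw_key)   # signed evidence a verifier can check
assert hmac.compare_digest(signature, attest(expected, hw_key))
```

The signed report is what makes the evidence portable: a remote verifier who trusts the manufacturer's key can check it without trusting any software on the machine.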

Speaker 2:

And "cred" means credential?

Speaker 3:

Everything's got a credential these days.

Speaker 2:

Yeah, yeah, thank you.

Speaker 3:

Yeah, so the power of attestation comes down to proving trust in something that you're running, and then also using that information to bind the identity to other things, like a key management system, so you can then pull in secrets or pull in configurations. If you can do all of that automatically, then confidential computing becomes very, very seamless, and you can instrument it into your dev processes, as opposed to building encryption tools or having to turn things on at the application level, which is the way we do it today.

Speaker 2:

Nice to trust, but it's better to verify.

Speaker 3:

I'd hold that it's trust but verify. But the question is, well, how do you really verify if you can't verify the thing that's doing the verifying, which is software for the most part? So abstract that to the hardware. You do still have to trust the hardware, yes, but we trust AMD, we trust Intel, we trust Arm. We trust the manufacturers, and they have very, very strong processes and trust mechanisms around this.

Speaker 1:

Mark, what's the process of deploying this? It sounds complex. And my experience is that when encryption gets involved, there's overhead, even just processor overhead. For those of us who think about CPU performance, there's a performance hit. How do you sell around that?

Speaker 3:

Well, the thing is, you don't have to, because of the modern processors. If I look at, say, an AMD SEV processor and Intel TDX, these are the current processors, and TDX is also fairly new, the benchmarks of these show that the performance impact of the full confidential computing capabilities is somewhere on the order of 5% on average, even under serious load, which is a very acceptable number when you're getting this amount of value from encrypting data in use. And the question that you asked is exactly the right one. Traditionally, software-based encryption mechanisms always dedicated a fair chunk of the processor to encrypting the data, whereas here it's offloaded into hardware accelerators on the processors, for the memory and then also for things like I/O and disk encryption. So you can get some very high performance, and you have elasticity, so you can scale horizontally, set your limits on Kubernetes, run them in a confidential pod, and you're away; you don't have to worry so much about performance.

Speaker 1:

So, Steve, I know you've got, I suspect, more technical questions, but I want to know how the heck you get this into the marketplace. Who is buying this? What are the objections you run into, Mark? How do you knock those objections down?

Speaker 3:

Yeah, good question. Number one is always: I haven't heard of this technology before; this sounds too good to be true. It is a very powerful technology, and so there's definitely education needed, which is why we're doing this podcast, and why we spend a lot of time with customers in workshops and things like that, educating on the technology. That'd be the first one. Then, coming back to your last question about how you get this in: traditionally, confidential computing was quite an onerous task for organizations. They had to build applications to it. That's no longer the case, especially with what we're doing. It's a matter of essentially taking applications and processes and putting them into a confidential computing environment, and you can do that in very short order. If you think about the old way of protecting data, the best practice was typically to encrypt at the application tier, so we'd use a toolkit and we'd have key management and we'd have to think about the data flows and so on. That was weeks to months of effort, typically per application, in a typical enterprise. Confidential computing, including the ability to not only protect data in use but enforce protection at rest and in motion, can be instrumented into CI/CD, and so then it becomes a question of, well, how are you building software?

Speaker 1:

What's CI/CD?

Speaker 3:

Sorry, good point. CI/CD is the development pipeline process, so you can actually turn on confidential computing when you need to use it, as opposed to coding it in, which is how you used to build it in. You used to build application security with SDKs; RSA BSAFE is the classic one, and there are loads of vendors that do this sort of stuff. With confidential computing, it doesn't have to be built into the app. It can be instrumented by operations and turned on, so that the instance becomes confidential and the application then runs confidentially, at least the way we implement it. So it's easy.

Speaker 2:

So I'm imagining that a confidential computing element, like a TEE, a trusted execution environment, is essentially a hardened processor, a computing environment. I'm imagining that you're talking about taking enterprise software in its current state and, in a sense, recompiling it or running it in a virtual machine inside the TEE.

Speaker 3:

Yeah, it's more like the latter. The confidential computing environment is going to be regular processors. It's not a special-purpose processor like the TPM, which was a dedicated processor that managed keys. So you can get an Intel Xeon with TDX or SGX extensions, or you can have AWS Nitro Enclaves. These are all the brands, if you will, of confidential computing. In essence, the way we look at it is that you should be able to take your applications and run them in confidential computing without change, without re-architecting, not even a line of code change or even recompiling. It's taking the binary and running it virtualized so that it can run in a confidential computing environment, a run-time environment. Essentially, an operating system gives the application what it needs to run. Where the confidential computing environment was missing networking and storage, we filled in that gap, so it just appears like a regular application environment. Then think about, say, starting a web server. One of the things a web server needs is the TLS key, so it can decrypt traffic coming in from the browser. A simple key, but a very important key, because if it's exposed, all sorts of compromise can happen. If you can securely inject that into the enclave as it starts, so that it just looks like a file the server would normally pick up, then it can run just as it would in a regular environment. So confidential computing has to be about the simplicity of the application, and then also simplicity for the processes around it: managing configurations and keys, orchestrating, integrating into Kubernetes, so that it's just seamless. That is how this works. It is almost a virtualized way of looking at the world, and an application that was formerly insecure can now be run securely without changing it.
That's how you get to do this in the development-to-operations process, as opposed to in the coding process.
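The TLS-key injection Mark describes can be sketched as a policy check: a key manager releases the secret only to an enclave whose attestation measurement matches the approved build, then drops it in as an ordinary file the unmodified server picks up. Everything here is hypothetical: the binary names, the `APPROVED_MEASUREMENT` value, the placeholder key material, and the `release_secret` function are invented for the example, not Anjuna's or any vendor's API.

```python
import hashlib
import os
import tempfile

APPROVED_MEASUREMENT = hashlib.sha256(b"webserver-binary-v3").hexdigest()
VAULT = {"tls-key": b"-----BEGIN PRIVATE KEY----- (placeholder)"}

def release_secret(measurement: str, secret_name: str) -> bytes:
    """Key manager policy: only an attested, approved enclave gets keys."""
    if measurement != APPROVED_MEASUREMENT:
        raise PermissionError("attestation failed: unapproved code")
    return VAULT[secret_name]

# Enclave boots; the hardware measures the binary it is actually running.
measurement = hashlib.sha256(b"webserver-binary-v3").hexdigest()
key = release_secret(measurement, "tls-key")

# Inject it as a normal-looking file so the unmodified app just picks it up.
path = os.path.join(tempfile.mkdtemp(), "server.key")
with open(path, "wb") as f:
    f.write(key)
assert open(path, "rb").read() == VAULT["tls-key"]

# A tampered binary measures differently and gets nothing.
bad = hashlib.sha256(b"webserver-binary-with-backdoor").hexdigest()
try:
    release_secret(bad, "tls-key")
    assert False, "should have been rejected"
except PermissionError:
    pass
```

This is the "no cred handed to the app after start" property from earlier in the conversation: the secret exists only inside an environment that has already proven what it is running.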

Speaker 2:

Got it.

Speaker 1:

So that strikes me that part of your value proposition is that you're eliminating the need for, I guess, deep-dive security audits of all the source code to make sure every protection technique is employed. Now you can take code that you've been running, code you knew had some technical debt associated with it, move it into your environment, and, because of that connection back to the hardware, you can attest it.

Speaker 3:

You can certainly raise the bar on security for existing applications like that. You have to be mindful and realistic about expectations, too. For example, if you have a database that you want to run in confidential computing, you absolutely can and should, so that the operations on the database are not visible to insiders, they can't memory-dump it, and the data at rest is protected inside the walled garden of the enclave, the secure execution environment. But if you have a SQL interface that is vulnerable and you can query it, you'll still have a vulnerable interface, so you have to be mindful of how you use it and the expectations around it. It's definitely a very powerful technology, but you might also want to think about the overall threat model of what you're instrumenting into the cloud or your data center and what mitigations you can apply. We've done analysis against the MITRE ATT&CK matrix, which is basically the typical threats that people have to deal with in running applications, and there are at least 77 MITRE ATT&CK techniques, as in vulnerabilities, that are immediately eliminated by running in hardware, which is a pretty big chunk of high risk that's reduced just out of the gate without doing anything.

Speaker 2:

So what are the use cases that you're seeing the most traction in, from the people that are coming to you? Is it beyond regulated and sensitive data?

Speaker 3:

Yeah, it's some really interesting stuff that we're seeing and it's probably like three categories. I'd say One is the regulated industries, the obvious suspects. You know the defense agencies, as you do. Others often started banking, healthcare as being the next obvious ones, and it's just unblocking things like cloud migration, where you know the irony of confidential computing is it's widely available in the cloud but it isolates you from the CSP. So if you don't trust the cloud, you can now control what you're putting into it. So it gives you hardware controls that you might have had in the data center that weren't there before. So now you can move things to the cloud. So banking, you know healthcare. But I think the two interesting categories we're seeing are especially machine learning and AI where, if you think about AI, you're often dealing with multiple entities of data, so people that don't often trust each other. So in a bank, you might have two lines of business that could, you know, merge data together, but the merger of that data could be very high risk from a breach risk perspective, like high net worth data above and beyond, you know, typically regulated, it's the crown jewels of most banks. Yet being able to analyze that with machine learning might give you something that's brand new, and you want to make sure that you have integrity over the execution of the model itself. So you want to make sure that you have integrity over the dilemma, exactly. And so now you can start to shrink the attack surface of execution of the AI model and create environments where you can have multiple parties come together into what is a trusted ecosystem that can't leak and you can control what comes out of that. So let's get the results out. So let's target this individual with these products much more granularly than I could with just anonymized data. 
Or, you know, minimized data. AI and AI-as-a-service inside organizations that just want to explore it, given all the popularity and excitement, and so on. That's a really big one. And then the one that's really cropped up is third parties that for years have been dealing with security and have visibility into data that is very high value. Imagine all of those DLP vendors, the ones that sit at the perimeter and inspect the data coming in and out.
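The multi-party scenario Mark describes rests on attestation: a data owner releases its secrets only to an enclave whose code measurement matches something it approved in advance. Here is a minimal, purely illustrative Python sketch of that idea; the function names and the key-release flow are hypothetical, and real platforms (Intel SGX, AMD SEV-SNP) have the hardware sign the measurement rather than comparing bare hashes.

```python
import hashlib

# Measurement the data owner approved ahead of time: a hash of the exact
# model code it is willing to let touch its data. (In real hardware this
# value comes out of a signed attestation report, not a plain digest.)
EXPECTED_MEASUREMENT = hashlib.sha256(b"model_v1.bin").hexdigest()

def measure(enclave_code: bytes) -> str:
    """Stand-in for the hardware's measurement of the loaded code."""
    return hashlib.sha256(enclave_code).hexdigest()

def release_key(enclave_code: bytes, data_key: bytes):
    """Hand over the data-decryption key only to the approved code."""
    if measure(enclave_code) == EXPECTED_MEASUREMENT:
        return data_key
    return None  # unknown or tampered code: the data stays sealed

# The approved model binary gets the key; a modified binary does not.
key = release_key(b"model_v1.bin", b"secret-key")
denied = release_key(b"model_v1_backdoored.bin", b"secret-key")
```

Each party in a multi-party computation can run this check independently, which is what lets mutually distrusting lines of business share a single enclave without sharing raw data.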

Speaker 2:

DLP, these are the data loss prevention tools? Yeah, data loss prevention.

Speaker 3:

Or there are companies that do things like API scanning to see what application behavior looks like, detecting risks based on the behavior of users and systems. Those tools have privilege over what's coming in and out, and they also hold the API tokens. So being able to secure and lock that down closes a gap in what is actually security infrastructure itself, and that's helping those organizations move more of their customers to the cloud, customers who have concerns over that level of access and visibility running on infrastructure they don't own. So there are some really interesting scenarios that span everything from banks to healthcare to highly regulated industries, and then brand new businesses forming on the back of this technology as well, like multi-party computation that's finally practical.

Speaker 1:

Well, Mark, I think we're gonna leave it there. This has been really super interesting, unless there's something else you'd like to address.

Speaker 3:

No, well, I come back to the fact that, irrespective of whether you're thinking about the cloud or computation in the data center, over the next few years confidential computing is gonna be on people's radars and it's gonna end up on their roadmaps, whether they like it or not. The stakes are too high, especially as we get into higher-value data that needs to be processed by machine learning and AI. We're gonna hear about trusted AI. We're gonna hear more and more about trust, and I know Steve's been talking about this for years. All of these things come together with trust, and that trust has to come back to hardware somewhere; from there, everything else can be trusted on top, and without that it's a house of cards. So this is absolutely in people's futures, and I think what it'll mean is that security resolves to being about denial of service and human error, if we can lock down computation in this way. That's a very profound, forward-looking statement, but I think in five years we'll look back and think, you know what, that might've been the right way to see it.

Speaker 2:

And then I think we'll take the confidential off the front of this and just make it all computing.

Speaker 3:

I think that is 100% correct.

Speaker 2:

It's understandable that we triage at this time, because this is expensive, and we triage sensitive and regulated data to be the first use cases for confidential computing. But as computing gets commoditized, I mean, I'm sitting here in a room with, for all I know, a software-controlled LED light bulb with a couple of thousand lines of code, and there are millions of these things around the world and they're all points of attack. That idea of attestation and hardware roots of trust, even in light bulbs and vacuum cleaners and automobiles, this stuff needs to be spread far and wide. And it's great to see the practicality of the deployment that you talk about, that existing compute and existing workflows can be picked up and moved into a secure environment. It's gotta be the way to go, yeah, absolutely.

Speaker 1:

I agree, so I might just stop right there, but I'm gonna say for myself that, while this is tremendously powerful, I'm also struck by the continued need for existing approaches. Mark, you were talking about how you protect a SQL interface and the database where that data is normally stored in plain text; the techniques of good security hygiene still have to apply there. Similarly, you were talking about AI data. So much of the data that an LLM has been trained on is bullshit.

Speaker 3:

You have to think about integrity.

Speaker 1:

So integrity, and the regulatory discussions around bias that's built into the data sets: confidential computing can't do anything about that. There's still a tremendous amount of work that needs to be done around the data itself. Oh, sure.

Speaker 3:

Yeah, yeah, absolutely. I mean, you can make sure that it's coming from somewhere you trust, but you have to make sure that the data itself is also reasonable.

Speaker 1:

And that's what's so powerful about what you're talking about. Yeah, exactly. Being able to prove that a trusted device is manipulating your bits and bytes. Exactly. Oh, Mark, thank you very much. We'll look forward to getting you back sometime to hear about your progress. Reach out to us and let us know when something hot happens. We'd love to have you back.

Speaker 3:

Absolutely anytime. All right, good to see you again, guys.

Speaker 2:

Thanks, Mark. So good. Thanks everyone, cheers. We'll see you next time.

Confidential Computing
Confidential Computing and Verifying Software Trust
Benefits and Applications of Confidential Computing