
Trusted Execution Environment (TEE)

Definition

A Trusted Execution Environment (TEE) is a hardware-enforced isolated execution context whose contents — memory, register state, and (with modern extensions) accelerator state — are inaccessible to the host OS, hypervisor, and cloud-operator control plane. A TEE typically provides:

  1. Memory confidentiality + integrity — contents encrypted + authenticated by the CPU; the host cannot read or silently tamper.
  2. Execution isolation — the TEE runs outside the hypervisor's trust boundary.
  3. Hardware root of trust — the CPU produces a signed attestation of the loaded binary's measurement that a remote verifier can check.
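The attestation property above can be sketched as follows. This is a hypothetical illustration, not any vendor's actual API: a real TEE signs the measurement with an asymmetric hardware-rooted key (such as an AMD attestation key or an Intel quoting enclave), whereas HMAC with a shared key stands in here purely to keep the sketch self-contained.

```python
import hashlib
import hmac

def measure(binary: bytes) -> bytes:
    """Measurement = cryptographic digest of the loaded binary."""
    return hashlib.sha256(binary).digest()

def sign_report(hw_key: bytes, measurement: bytes) -> bytes:
    """The CPU's root of trust signs the measurement (simulated with HMAC)."""
    return hmac.new(hw_key, measurement, hashlib.sha256).digest()

def verify_report(hw_key: bytes, measurement: bytes, signature: bytes,
                  expected_digests: set[bytes]) -> bool:
    """Remote verifier: check the signature is genuine AND that the
    measured binary is one the verifier actually trusts."""
    genuine = hmac.compare_digest(sign_report(hw_key, measurement), signature)
    return genuine and measurement in expected_digests

# Illustrative usage: the verifier holds an allowlist of trusted digests.
hw_key = b"hardware-rooted-signing-key"  # stand-in for the hardware key
trusted = {measure(b"audited inference binary v1.2")}

m = measure(b"audited inference binary v1.2")
sig = sign_report(hw_key, m)
assert verify_report(hw_key, m, sig, trusted)

# A different binary produces a different measurement and is rejected,
# even though its report signature is valid.
t = measure(b"backdoored binary")
assert not verify_report(hw_key, t, sign_report(hw_key, t), trusted)
```

Note the two separate checks: the signature proves the report came from genuine hardware, while the allowlist comparison decides whether the measured binary is one the verifier is willing to talk to.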

TEEs come in several shapes — enclave-style (e.g. Intel SGX: an application-level protected memory region inside a normal process), VM-style (e.g. AMD SEV-SNP, Intel TDX: the protected boundary is a whole virtual machine; see CVMs), and emerging accelerator-side (e.g. NVIDIA Hopper Confidential Computing mode on GPUs).

What TEEs are for

TEEs address the "data in use" gap: classical cryptography protects data at rest (disk encryption) and in transit (TLS), but once the data is decrypted for computation, it sits as plaintext in process memory — visible to a privileged attacker who compromises the host OS, hypervisor, or datacentre operator. TEEs close that gap by letting code run over plaintext inside a boundary the host cannot see into.

Why TEEs are load-bearing for private AI inference

Server-side LLM inference over private user content has historically required the user to accept that the inference provider can see the plaintext. WhatsApp Private Processing is this wiki's canonical example of using a TEE to run large-model inference while preserving end-to-end encryption: the device encrypts the request with an ephemeral key bound to a specific CVM whose binary digest has been attested against a published ledger; only the device and that CVM can decrypt.
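The key-binding step can be illustrated as below. This is a sketch of the idea, not WhatsApp's actual protocol: it assumes the real key agreement (e.g. an HPKE-style exchange) has already produced `shared_secret`, and names like `PUBLISHED_LEDGER` are hypothetical. The point is that the attested measurement is mixed into the key derivation, so only a CVM running exactly that binary can arrive at the same key.

```python
import hashlib
import hmac

def measure(binary: bytes) -> bytes:
    return hashlib.sha256(binary).digest()

# Hypothetical transparency ledger of acceptable CVM image digests.
PUBLISHED_LEDGER = {measure(b"cvm image v7, reproducibly built")}

def derive_request_key(shared_secret: bytes, cvm_measurement: bytes) -> bytes:
    """HKDF-style extract-then-expand: including the attested measurement
    in the expand step binds the request key to one specific binary."""
    if cvm_measurement not in PUBLISHED_LEDGER:
        raise ValueError("CVM measurement not in published ledger")
    prk = hmac.new(b"salt", shared_secret, hashlib.sha256).digest()
    return hmac.new(prk, b"request-key|" + cvm_measurement + b"\x01",
                    hashlib.sha256).digest()

# Illustrative usage.
secret = b"output of the device<->CVM key agreement"
key = derive_request_key(secret, measure(b"cvm image v7, reproducibly built"))

# An image absent from the ledger is rejected before any key is derived.
try:
    derive_request_key(secret, measure(b"tampered image"))
except ValueError:
    pass
```

Because the measurement is an input to the derivation, a host that swaps in a different binary cannot simply replay the attestation: the device's ledger check fails first, and even a skipped check would yield a mismatched key.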

What TEEs are NOT

TEEs do not by themselves:

  • Prove code correctness — attestation proves which binary is running, not that it does what it claims. Pair with transparency logs, open-source, and third-party audit.
  • Defeat side-channel attacks — TEE research repeatedly demonstrates speculative-execution, timing, and power side-channels. Defence-in-depth is required.
  • Defeat physical attacks on the host — encrypted DRAM closes many, not all. Meta explicitly notes this residual risk and layers physical datacentre controls on top.
  • Solve confidentiality of the application's own logic — a buggy app inside a TEE can still leak data. Containerisation, log-filtering, and minimised input surfaces still apply.
  • Eliminate the need for defence-in-depth — the TEE is one layer; everything above, below, and beside it still needs hardening.
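One concrete instance of the side-channel and application-logic caveats above: even inside a TEE, ordinary application code can leak secrets through timing. The sketch below (hypothetical, not from any cited source) shows the classic early-exit comparison bug and the constant-time fix; this kind of hardening is part of the defence-in-depth the TEE itself does not supply.

```python
import hmac

def leaky_check(tag: bytes, candidate: bytes) -> bool:
    # `==` on bytes can exit at the first mismatched byte, so response
    # time reveals how many leading bytes of the candidate were correct.
    return tag == candidate

def hardened_check(tag: bytes, candidate: bytes) -> bool:
    # compare_digest takes time independent of where the bytes differ.
    return hmac.compare_digest(tag, candidate)

tag = bytes.fromhex("aa" * 16)
assert hardened_check(tag, tag)
assert not hardened_check(tag, bytes.fromhex("ab" + "aa" * 15))
```

Both functions return the same boolean; the difference is only in what an attacker can infer from how long the call takes, which is exactly the class of leak hardware isolation does not address.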

Composition with the rest of the stack

In the Private Processing architecture, the TEE is the innermost trust boundary, composed with the other layers of the stack.
