CONCEPT
Trusted Execution Environment (TEE)¶
Definition¶
A Trusted Execution Environment (TEE) is a hardware-enforced isolated execution context whose contents — memory, register state, and (with modern extensions) accelerator state — are inaccessible to the host OS, hypervisor, and cloud-operator control plane. A TEE typically provides:
- Memory confidentiality + integrity — contents encrypted and authenticated by the CPU; the host can neither read them nor tamper with them undetected.
- Execution isolation — the TEE runs outside the hypervisor's trust boundary.
- Hardware root of trust — the CPU produces a signed attestation of the loaded binary's measurement that a remote verifier can check.
TEEs come in several shapes — enclave-style (e.g. Intel SGX: an application-level protected memory region inside a normal process), VM-style (e.g. AMD SEV-SNP, Intel TDX: the protected boundary is a whole virtual machine; see CVMs), and emerging accelerator-side (e.g. NVIDIA Hopper Confidential Computing mode on GPUs).
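The hardware-root-of-trust property can be sketched in miniature. The toy below uses an HMAC as a stand-in for the CPU vendor's hardware-rooted signing key (real schemes use asymmetric signatures and vendor certificate chains); all names and values here are illustrative, not any vendor's actual API:

```python
import hashlib
import hmac

# Hypothetical stand-in for the CPU's hardware-rooted attestation key.
# Real TEEs use an asymmetric key the host software can never read.
CPU_KEY = b"hardware-root-of-trust"

def attest(binary: bytes) -> dict:
    """CPU-side: measure the loaded binary and sign the measurement."""
    measurement = hashlib.sha256(binary).hexdigest()
    signature = hmac.new(CPU_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify(report: dict, allowed: set, cpu_key: bytes) -> bool:
    """Remote verifier: check the signature, then check the measurement
    against the set of binary digests the verifier trusts."""
    expected = hmac.new(cpu_key, report["measurement"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["signature"]):
        return False
    return report["measurement"] in allowed

binary = b"enclave-app-v1"
report = attest(binary)
ok = verify(report, {hashlib.sha256(binary).hexdigest()}, CPU_KEY)
```

Note the two distinct checks: the signature proves the report came from genuine hardware; the allow-list membership proves the *expected* binary is the one running.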
What TEEs are for¶
TEEs address the "data in use" gap: classical cryptography protects data at rest (disk encryption) and in transit (TLS), but once the data is decrypted for computation, it sits as plaintext in process memory — visible to a privileged attacker who compromises the host OS, hypervisor, or datacentre operator. TEEs close that gap by letting code run over plaintext inside a boundary the host cannot see into.
Why TEEs are load-bearing for private AI inference¶
Server-side LLM inference over private user content has historically required the user to accept that the inference provider can see the plaintext. WhatsApp Private Processing is the canonical wiki instance of using a TEE to run large-model inference while preserving end-to-end encryption: the device encrypts the request with an ephemeral key bound to a specific CVM whose binary digest has been attested against a published ledger; only the device and that CVM can decrypt.
What TEEs are NOT¶
TEEs do not by themselves:
- Prove code correctness — attestation proves which binary is running, not that it does what it claims. Pair it with transparency logs, open-source code, and third-party audits.
- Defeat side-channel attacks — TEE research repeatedly demonstrates speculative-execution, timing, and power side-channels. Defence-in-depth is required.
- Defeat physical attacks on the host — encrypted DRAM closes many avenues, but not all. Meta explicitly notes this residual risk and layers physical datacentre controls on top.
- Protect against flaws in the application's own logic — a buggy app inside a TEE can still leak data. Containerisation, log-filtering, and minimised input surfaces still apply.
- Eliminate the need for defence-in-depth — the TEE is one layer; everything above, below, and beside it still needs hardening.
Composition with the rest of the stack¶
In the Private Processing architecture, the TEE is the innermost trust boundary, composed with:
- Remote attestation + RA-TLS — gate the client session on a verified TEE binary digest.
- Verifiable transparency log of acceptable digests — so *what* may run is auditable, not just *whether* something attested.
- OHTTP + third-party relay — route-level non-targetability so the operator cannot map a user to a specific TEE host.
- Anonymous credentials — application-level authentication without identification.
- Log-filtering egress pipeline — observability without content leakage.
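The log-filtering egress layer can be illustrated with a minimal allow-list filter. The field names and redaction policy below are assumptions for illustration, not Meta's actual pipeline:

```python
# Egress filter sketch: operational metadata may leave the TEE, content may
# not. Only explicitly allow-listed fields survive; everything else is
# dropped, so new fields are private by default.
ALLOWED_FIELDS = {"request_id", "latency_ms", "model_version", "status"}

def filter_log_record(record: dict) -> dict:
    """Keep only allow-listed fields; user content never reaches the logs."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {"request_id": "r-1", "latency_ms": 84, "prompt": "secret text", "status": "ok"}
safe = filter_log_record(raw)
```

An allow-list (rather than a deny-list) is the design choice that matters here: a field the filter has never heard of fails closed instead of leaking.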
Seen in¶
- sources/2025-04-30-meta-building-private-processing-for-ai-tools-on-whatsapp — TEE (specifically a CVM + Confidential-Compute-mode GPU) is the core primitive underneath Private Processing. Canonical wiki instance of TEE-for-private-AI-inference.
Related¶
- concepts/confidential-computing — the industry-wide posture that TEEs instantiate.
- concepts/remote-attestation — the proof-of-binary-identity mechanism.
- concepts/ra-tls — the TLS composition that turns attestation into a session-gate.
- concepts/blast-radius — the TEE bounds blast radius on host compromise.
- concepts/defense-in-depth — TEEs are one layer, not a panacea.
- systems/cvm-confidential-virtual-machine — the VM-granularity realisation.
- patterns/tee-for-private-ai-inference — the architectural pattern.
- patterns/attestation-before-session-key-release — how TEEs are gated.