CONCEPT
Cryptographic inventory (crypto-inventorying)¶
Definition¶
Cryptographic inventory is the organisation-wide mapping of where cryptographic primitives are used — which algorithms, which keys, which call-sites, which services, which hardware. "The process of mapping all the usages of cryptography within an organization is called Crypto Inventorying."
Crypto inventory is the load-bearing prerequisite for any targeted cryptographic change programme: PQC migration, algorithm deprecation, key rotation, compliance attestation, incident response after a primitive is broken.
Cryptography algorithms' strength decays with time. [...] The continuous need to replace cryptographic algorithms requires at the very minimum an understanding of where cryptography is being used. The problem is cryptography is ubiquitous, and finding all instances of a cryptographic primitive in a large infrastructure and codebase is inherently challenging. (Source: sources/2026-04-16-meta-post-quantum-cryptography-migration-at-meta-framework-lesson)
Why it's hard¶
Cryptography is embedded everywhere: TLS stacks, authentication libraries, signing daemons, HSMs, firmware, certificates, CI/CD pipelines, third-party SaaS, shadow IT, build-system plugins, internal RPC middleware, service-mesh sidecars, password-hashing libraries, KMS bindings. At hyperscale:
- Millions of call-sites across thousands of services.
- Multiple languages and libraries (even with a unified library there is always a long tail).
- Dynamic dispatch + binary-linked dependencies defeat naive source-code grep.
- Third-party components + shadow dependencies are invisible without runtime observation.
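The dynamic-dispatch point above can be made concrete with a toy sketch (the config file and function names are hypothetical, not from any Meta system): when the algorithm is chosen by name at runtime, grepping the source for `sha1` finds nothing, yet SHA-1 runs in production.

```python
import hashlib

# Hypothetical runtime config (e.g. loaded from a deployment file).
# A source-code grep for "sha1" never matches this module, yet SHA-1
# is exactly what executes in production.
config = {"digest_algorithm": "sha1"}

def fingerprint(data: bytes) -> str:
    # Dynamic dispatch: the primitive is selected by name at runtime,
    # which is why runtime observation beats static scanning here.
    h = hashlib.new(config["digest_algorithm"])
    h.update(data)
    return h.hexdigest()

print(fingerprint(b"hello"))
```

Binary-linked dependencies are worse still: the algorithm choice may not appear in any source the organisation can grep at all.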
Two complementary inventory strategies¶
Meta names the two mechanisms that together close the coverage gap — neither is sufficient alone:
1. Automated discovery¶
Monitoring tools that "autonomously map cryptographic primitives used in production" — canonical instance: Meta's Crypto Visibility service built on FBCrypto's aggregating buffered logger (the 2024-12-02 ingest). "This provides high-fidelity data on active usage within our primary libraries."
Strengths:
- No sampling: every cryptographic operation is counted.
- Runtime truth: dynamic dispatch, config-driven choices, binary-linked libraries all show up.
- Quantitative: call-volume informs migration prioritisation.
- Low marginal cost: one instrumentation point inside the shared library covers the whole fleet.
Limits:
- Only primary libraries: if a service links a non-standard crypto library, automation misses it.
- Only active usage: dormant code paths never fire, so they stay invisible.
- No intent capture: knows that something uses X25519, not why.
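The "no-sampling" and "low marginal cost" properties both come from aggregation: the hot path does an in-memory increment rather than emitting a log line per call, and only aggregated rows are shipped. A minimal sketch of that aggregating-buffered-logger idea (class and method names are illustrative, not FBCrypto's actual API):

```python
import threading
from collections import Counter

class AggregatingCryptoLogger:
    """Buffer per-(algorithm, caller) counts in memory and flush aggregates,
    so every operation is counted without one log line per call."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._counts: Counter = Counter()

    def record(self, algorithm: str, caller: str) -> None:
        # Hot path: a dictionary increment, not an I/O operation.
        with self._lock:
            self._counts[(algorithm, caller)] += 1

    def flush(self) -> list[dict]:
        # Called periodically to ship aggregated rows to the telemetry
        # pipeline (Scribe/Scuba at Meta; here we just return them).
        with self._lock:
            rows = [{"algorithm": a, "caller": c, "count": n}
                    for (a, c), n in self._counts.items()]
            self._counts.clear()
        return rows

logger = AggregatingCryptoLogger()
for _ in range(3):
    logger.record("X25519", "service_a.handshake")
logger.record("Ed25519", "service_b.sign")
print(logger.flush())
```

The per-key counts this produces are also what makes the key-overuse detection mentioned later cheap to compute.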
2. Developer reporting¶
"Because monitoring cannot capture every edge case or shadow dependency, we supplement automation with developer reporting. This process captures cryptographic intent for new architectures and uncovers legacy usage in systems outside standard monitoring paths."
Strengths:
- Edge-case capture: third-party-linked crypto, bespoke libraries, CI/CD-time signing, firmware.
- Intent: lets teams declare what they're doing and why.
- New architectures: captures future primitive needs before runtime telemetry exists.
Limits:
- Labour cost: not free — teams have to report.
- Incomplete by default: unless enforced, teams forget.
- Accuracy drift: reports become stale without re-validation.
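A developer report is ultimately a structured record of intent. One way to sketch such an entry, with a staleness check that addresses the accuracy-drift limit above (the schema and field names are hypothetical, not Meta's reporting format):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CryptoUsageReport:
    """One developer-filed inventory entry; fields are illustrative."""
    service: str
    primitive: str            # e.g. "RSA-2048", "ML-KEM-768"
    purpose: str              # the intent that automation cannot see
    library: str              # including bespoke or third-party libraries
    reported_on: date
    revalidate_after_days: int = 180

    def is_stale(self, today: date) -> bool:
        # Guards against accuracy drift: old reports need re-validation.
        return (today - self.reported_on).days > self.revalidate_after_days

report = CryptoUsageReport(
    service="firmware-signer",
    primitive="RSA-2048",
    purpose="boot image signing in CI/CD",
    library="vendor-sdk",          # outside standard monitoring paths
    reported_on=date(2025, 1, 10),
)
print(report.is_stale(date(2026, 1, 10)))
```

Making the report a typed record rather than free text is what lets the cross-check against runtime telemetry be automated.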
The complementary-by-design property¶
The two mechanisms' failure modes are disjoint:
| What automation misses | What reporting misses |
|---|---|
| Third-party libraries | Dynamic runtime behaviour |
| Dormant code paths | Exact call-sites + volumes |
| Future primitives | Libraries engineers forgot about |
| CI/CD-time signing | Third-party-linked crypto (in theory covered, often missed) |
Running both in parallel and cross-checking is what achieves true coverage — which is why the pattern name is automated-discovery + developer-reporting rather than either alone.
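The cross-check itself reduces to set arithmetic over (service, primitive) pairs from each mechanism. A sketch with made-up service names:

```python
# Hypothetical (service, primitive) pairs from each inventory mechanism.
automated = {("svc-a", "X25519"), ("svc-a", "AES-256-GCM"), ("svc-c", "SHA-1")}
reported  = {("svc-a", "X25519"), ("svc-b", "RSA-2048"), ("svc-c", "SHA-1")}

confirmed     = automated & reported  # seen by both: high confidence
runtime_only  = automated - reported  # usage the owners never declared
declared_only = reported - automated  # dormant, third-party, or CI/CD-time

print(sorted(confirmed))
print(sorted(runtime_only))    # follow up: ask owners to file reports
print(sorted(declared_only))   # follow up: verify dormancy or coverage gap
```

The two difference sets are precisely the disjoint failure modes in the table above, which is why running both mechanisms turns each one's blind spot into the other's work queue.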
Downstream consumers¶
Once an inventory exists, it powers:
- PQC migration scoping — the prioritisation framework needs to know which primitive each call-site uses to classify it into High / Medium / Low risk.
- PQC Migration Levels assessment — PQ-Aware is defined as "has completed an initial assessment" which is structurally an inventory operation.
- Algorithm-deprecation drives (SHA-1, TLS 1.0, RSA-1024) — same mechanism as PQC but for classical deprecations.
- Key rotation at scale — identifies every call-site using a given key.
- Emergency migration — when a primitive is broken, inventory identifies blast radius in minutes instead of weeks.
- Key-overuse detection — cumulative per-key operation counts.
- Compliance attestation — FIPS inventories, PCI scope.
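The emergency-migration consumer is the simplest to picture: once the inventory is a queryable table, blast radius is a filter plus a sort. A sketch over hypothetical inventory rows (the data and volumes are invented for illustration):

```python
# Hypothetical inventory rows, merged from monitoring and reports.
inventory = [
    {"service": "auth", "caller": "token.sign",    "primitive": "Ed25519",  "calls": 9_000_000},
    {"service": "cdn",  "caller": "tls.handshake", "primitive": "X25519",   "calls": 120_000_000},
    {"service": "ci",   "caller": "artifact.sign", "primitive": "RSA-1024", "calls": 40_000},
    {"service": "mail", "caller": "dkim.sign",     "primitive": "RSA-1024", "calls": 2_500_000},
]

def blast_radius(primitive: str) -> list[dict]:
    """All call-sites using a (newly broken) primitive, highest volume first."""
    hits = [row for row in inventory if row["primitive"] == primitive]
    return sorted(hits, key=lambda row: row["calls"], reverse=True)

for row in blast_radius("RSA-1024"):
    print(row["service"], row["caller"], row["calls"])
```

The same query shape serves the deprecation-drive and key-rotation consumers; only the filter predicate changes.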
Seen in¶
- sources/2026-04-16-meta-post-quantum-cryptography-migration-at-meta-framework-lesson — canonical wiki framing of crypto-inventorying as the load-bearing prerequisite for PQC migration, with the two-strategy (automated + reporting) model explicitly named.
- sources/2024-12-02-meta-built-large-scale-cryptographic-monitoring — the automated-discovery half of the inventory story: FBCrypto's aggregating buffered logger + Scribe + Scuba + Hive. The 2026-04-16 post re-anchors this as "automated discovery" and adds the "developer reporting" half.
Related¶
- concepts/cryptographic-monitoring — the automated-discovery mechanism.
- concepts/post-quantum-cryptography — the primary downstream consumer that forced inventory investment.
- concepts/pqc-migration-levels — the ladder that depends on inventory for PQ-Aware assessment.
- concepts/pqc-prioritization-framework — what the inventory feeds into.
- patterns/automated-discovery-plus-developer-reporting — the operational pattern.
- systems/fbcrypto — Meta's unified crypto library, the instrumentation point for automated discovery.