Compiled vs dynamic plugin tradeoff¶
Pattern¶
When extending a host application with plugins, ship both an in-process compiled-plugin path (host's native language, linked into the binary, called at native speed) and an out-of-process dynamic-plugin path (any language, running as a subprocess, called over gRPC or similar IPC). Document the performance vs flexibility tradeoff explicitly and let users choose. Do not deprecate the compiled path when introducing the dynamic path.
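Both paths can satisfy one plugin contract, with only the dispatch mechanism differing. A minimal Go sketch of the shape, not Redpanda Connect's actual API: the `Processor` interface and both struct names are hypothetical, and a stdin/stdout pipe stands in for the real gRPC-over-Unix-socket transport.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Processor is the plugin contract the host exposes. The interface
// name is hypothetical; the real host's plugin API is richer.
type Processor interface {
	Process(msg string) (string, error)
}

// UpperProcessor stands in for a compiled plugin: host-language code
// linked into the binary and invoked as a direct function call.
type UpperProcessor struct{}

func (UpperProcessor) Process(msg string) (string, error) {
	return strings.ToUpper(msg), nil
}

// SubprocessProcessor stands in for a dynamic plugin: the host talks
// to an external executable over IPC. The real transport is gRPC over
// a Unix socket; a plain pipe keeps the sketch short.
type SubprocessProcessor struct {
	Path string
	Args []string
}

func (p SubprocessProcessor) Process(msg string) (string, error) {
	cmd := exec.Command(p.Path, p.Args...)
	cmd.Stdin = strings.NewReader(msg)
	out, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return strings.TrimRight(string(out), "\n"), nil
}

func main() {
	// Compiled path: a plain method call, nanosecond-scale dispatch.
	var p Processor = UpperProcessor{}
	out, _ := p.Process("hello")
	fmt.Println(out)

	// Dynamic path: same contract, but every call pays subprocess and
	// IPC cost (`tr` plays the external plugin here; Unix only).
	p = SubprocessProcessor{Path: "tr", Args: []string{"a-z", "A-Z"}}
	out, _ = p.Process("hello")
	fmt.Println(out)
}
```

Because both implementations satisfy the same interface, the host's pipeline code is indifferent to which model a user picked; only the per-call cost and failure domain change.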
Canonical wiki instance¶
Redpanda Connect's 2025-06-17 dynamic-plugin launch (Source: sources/2025-06-17-redpanda-introducing-multi-language-dynamic-plugins-for-redpanda-connect). Explicit guidance in the post:
"Use compiled plugins for most standard use cases and best performance. Use dynamic plugins when you need language flexibility or to wrap existing libraries not available in Go."
And the reverse framing:
"For performance-critical workloads where every microsecond counts, the best approach remains using native Go plugins compiled directly into the Redpanda Connect binary. Dynamic plugins shine for flexibility and language choice, while compiled plugins offer maximum performance."
Dynamic plugins are additive, not a replacement. Both models ship; guidance tells users which to pick per workload.
The tradeoff in detail¶
| Axis | Compiled plugin (in-process) | Dynamic plugin (subprocess) |
|---|---|---|
| Language | Host language only (Go in Redpanda) | Any language with a gRPC SDK |
| Call cost | Native function call (ns) | Protobuf + Unix socket + context switch |
| Fault isolation | None — plugin crash kills host | Full — subprocess crash contained |
| Deployment | Rebuild the host binary | Drop-in executable, no host restart |
| Library ecosystem | Host language's ecosystem | Any language's ecosystem (Python ML, etc.) |
| Memory footprint | Shared with host | Separate runtime per plugin subprocess |
| Suitable for hot path | ✓ | ✗ (IPC cost dominates small calls) |
| Suitable for ML inference | ✗ (no Python ecosystem in Go) | ✓ (Python + Hugging Face, PyTorch, etc.) |
| Linker-level cost | Go `plugin` import disables dead-code elimination | None (not linked into host) |
The host's explicit "use X for Y workload" guidance is the load-bearing part of the pattern: without it, users pick the wrong model and either hit a performance wall (used dynamic for a hot path) or can't ship (tried to compile Python into Go).
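The batch-amortization mechanism listed under related patterns below is what keeps the dynamic path viable beyond one-call-per-second workloads: the host frames a whole batch through one subprocess round trip instead of paying the IPC cost per message. A sketch under the same assumptions as before (`ProcessBatch` is a name invented here; newline framing over a pipe stands in for gRPC streaming):

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// ProcessBatch sends an entire batch of messages to a dynamic plugin
// in one subprocess round trip, amortizing the per-call IPC cost that
// makes message-at-a-time dynamic plugins slow on hot paths.
func ProcessBatch(path string, args []string, msgs []string) ([]string, error) {
	cmd := exec.Command(path, args...)
	// Newline-framed batch: one write covers N messages.
	cmd.Stdin = strings.NewReader(strings.Join(msgs, "\n") + "\n")
	out, err := cmd.Output()
	if err != nil {
		return nil, err
	}
	var results []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		results = append(results, sc.Text())
	}
	return results, nil
}

func main() {
	// One exec + one pipe round trip for the whole batch, instead of
	// three of each (`tr` plays the external plugin; Unix only).
	out, err := ProcessBatch("tr", []string{"a-z", "A-Z"}, []string{"a", "b", "c"})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(out)
}
```

The fixed per-round-trip cost is paid once per batch rather than once per message, so throughput approaches the plugin's own processing rate as batch size grows.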
Related patterns and contrasts¶
- patterns/grpc-over-unix-socket-language-agnostic-plugin — the architectural shape used for the dynamic-plugin half of this tradeoff.
- concepts/subprocess-plugin-isolation — the fault-containment property the dynamic half buys.
- concepts/batch-only-component-for-ipc-amortization — the mechanism making the dynamic half viable for non-trivial throughput.
- concepts/go-plugin-dynamic-linking-implication — a cost the compiled half may impose on the host binary (Go-specific: importing `plugin` disables linker dead-code elimination across the host).
Prior art¶
- nginx ships with a compiled-module system (recompile nginx with `--add-module=...`) and, since 1.9.11, a dynamic-module system (`load_module` at runtime) — same compiled / dynamic split.
- HashiCorp Terraform supports compiled providers (vendored into the binary via a `replace` in `go.mod`) and subprocess-plus-gRPC dynamic providers (the normal case). The compiled path is for core providers; dynamic is for third-party.
- Databases with UDFs — Postgres `CREATE FUNCTION ... LANGUAGE C` vs `LANGUAGE plpython3u`. Native C is fast but shares the backend's address space (a crash takes down the backend); plpython3u runs through an embedded Python interpreter, with overhead. Same axis of tradeoff.
When a host should adopt this pattern¶
- The host has a performance-critical core path where compiled plugins are already the norm.
- The host wants to broaden its contributor base — data scientists, ML engineers, non-host-language developers who won't learn the host's language just to ship a plugin.
- The host is in a domain where ecosystem libraries are not host-language-native — ML frameworks in Python, crypto libraries in C, data-science tools in R, specialized DSLs.
- The host's users run a mix of pipelines with wildly different performance budgets — tight-loop stream processors and one-call-per-second enrichment processors both exist, and forcing both onto one plugin model is wasteful either way.
When not to adopt¶
- Single-language ecosystem, no cross-language demand. Don't build the dynamic-plugin path if no one needs it.
- Host is already one shape. If the host is a pure library imported by user code (not a server with a plugin abstraction), the compiled / dynamic split doesn't map.
- Security model forbids subprocesses. Embedded or sandboxed environments where spawning subprocesses isn't allowed.
Anti-pattern: one-model deprecation¶
A common failure mode when launching the dynamic-plugin model is to position it as *the* plugin model and push users off compiled plugins — leaving hot-path users without a good option. Redpanda Connect avoids this explicitly: the post's guidance says compiled plugins "remain" the right choice for performance-critical paths. The tradeoff is documented as both/and, not either/or.
Seen in¶
- sources/2025-06-17-redpanda-introducing-multi-language-dynamic-plugins-for-redpanda-connect — canonical wiki instance: explicit written guidance on the tradeoff; both plugin models shipped in the same binary; dynamic plugins framed as additive to compiled plugins, not a replacement.