Sidecar vs eBPF Service Mesh: An ADR for 2026 Production Workloads

Last Updated: 2026-05-16

Architecture at a glance

Architecture diagram — Sidecar vs eBPF Service Mesh: An ADR for 2026 Production Workloads

If you ran a Kubernetes platform team in 2019, “do we need a service mesh” was a contested debate; by 2022 the answer was a reluctant “yes, but the sidecar tax hurts”; in 2026 the question has changed shape entirely. The choice is no longer “mesh or no mesh” — it is which data-plane architecture to commit to: classic sidecar (one Envoy per pod), Istio Ambient with its node-level ztunnel and optional Waypoint proxies, or a pure eBPF data plane like Cilium Service Mesh that pushes most of the work into the kernel. This is an ADR — Architecture Decision Record — written in the format we actually use internally: explicit context, options, weighted criteria, decision, and consequences. The point is not to declare one architecture the winner for everyone but to make the reasoning behind a sidecar vs eBPF service mesh 2026 decision legible, reusable, and honest about its own trade-offs. We will name versions, cite real benchmarks (and admit which ones are vendor-tilted), and walk through the migration plan we are committing to.

What this post covers: the renewed context that forced us to revisit the mesh decision, the three architectures actually in play, the nine criteria we weighted, a scored matrix, an honest failure-mode analysis, the chosen hybrid path, and the consequences we are signing up for.

ADR Context: Why We’re Revisiting Service Mesh in 2026

Most platform teams who installed Istio classic or Linkerd2 between 2020 and 2022 made the right call for that moment. The threat model was clear (zero-trust between services), the policy needs were real (per-route auth, rate-limiting, retries), and the cost — a sidecar proxy on every pod — was tolerable at the scale of a few hundred services. Four things have changed enough since then to justify the cost of a rethink.

First, pod density has grown. Where a 200-node cluster running 3,000 pods felt large in 2021, the same team now operates 600 nodes and 18,000 pods, with sidecars eating 50–150 MB of RSS each. That is on the order of 1–2 TB of memory dedicated purely to mesh proxies, and the cost shows up on every node’s bin-packing score. The Cilium eBPF service mesh networking observability and security guide walks through the kernel-level alternative in detail.

Second, Istio Ambient mesh went GA in Istio 1.22 in June 2024, validating the architectural bet that L4 and L7 should be separable: a per-node ztunnel handles mTLS and L4 identity, while a per-namespace Waypoint proxy is opt-in when you actually need L7 policy. This makes the cost of a mesh proportional to the policy density you need, which was not true under the classic sidecar model. By Istio 1.23 and 1.24 (released in late 2024 and early 2025), Ambient had crossed the threshold from “promising” to “production-deployed at several named users on the CNCF case-study page.”

Third, eBPF data planes matured. Cilium 1.16 expanded its Gateway API support and meaningfully improved its mesh story; Cilium 1.17 hardened the L7 path; the project reached CNCF graduated status in 2023, signalling governance maturity. Tetragon (also from Isovalent / now Cisco) added kernel-level runtime security that integrates cleanly with the same data plane. The promise — no sidecars, lowest possible latency, kernel-enforced policy — is no longer aspirational.

The fourth pressure is regulation and compliance. Several jurisdictions (notably the EU’s NIS2 directive, in force since October 2024, and India’s DPDP rules) tightened expectations around encryption-in-transit and auditability for internal service traffic. mTLS everywhere is no longer a “nice to have for security reviews” — it is increasingly a default expectation for SOC2, ISO 27001, and sector-specific audits. A mesh decision in 2026 has to deliver mTLS coverage without exemption.
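
As a concrete reference point, mesh-wide STRICT mTLS in Istio is one small resource; a minimal sketch (mesh-wide scope comes from placing it in the root namespace, istio-system by default):

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace, so the policy applies mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plaintext service-to-service traffic
```

The audit conversation then reduces to proving that no namespace quietly overrides this with a PERMISSIVE policy.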

See ./assets/arch_01.png for the evolution timeline: sidecar era → multi-proxy refinement → ambient → eBPF. The point of the timeline is not that newer is better but that the design space has split into three plausible options, and the right one depends on workload profile, team skill, and risk tolerance.

A note on what this ADR is not: it is not a benchmark shoot-out where we declare a microsecond winner. Vendor-published benchmarks favour the vendor — Cilium’s blog favours Cilium, Solo.io’s blog favours Istio, Buoyant’s blog favours Linkerd. We will cite them, but we will weight them sceptically. The goal is reasoning quality, not benchmark theatre.

Figure: service mesh data-plane evolution, from sidecar (2017) to eBPF (2026). (./assets/arch_01.png)

The Three Architectures in Play

We considered four concrete products, but architecturally they collapse into three patterns. See ./assets/arch_02.png for the side-by-side data-plane view.

Classic Sidecar (Istio 1.x classic-mode, Linkerd2-proxy)

In the classic model, every pod gets an additional container — Envoy in Istio’s case, a Rust proxy called linkerd2-proxy in Linkerd2’s case. The kube-scheduler treats the pod as one unit, but at the network level there are now two endpoints inside the pod: the application listening on its real port, and the proxy intercepting traffic via iptables or eBPF redirect.

The strengths come from maturity: Envoy 1.32 (the version shipping with Istio 1.23+) exposes a vast filter library — JWT authn, OAuth2, OPA integration via ext_authz, rate-limiting via RLS, fault injection, retry budgets. The L7 policy expressiveness is the gold standard the others are measured against. Linkerd2 2.16 chose to expose less and run faster, with a deliberately smaller proxy that does fewer things well — its Rust proxy averages 5–15 MB RSS where Envoy averages 40–100 MB depending on configuration.

The weakness is structural: you pay the proxy tax on every pod whether you need policy or not. A pod serving an internal metric scrape endpoint pays the same cost as a pod serving regulated payments traffic.

Ambient / ztunnel (Istio 1.22+ Ambient mesh)

Istio Ambient splits the data plane vertically. The L4 layer — mTLS, identity, basic L4 authorization — runs in a per-node DaemonSet called ztunnel, written in Rust. ztunnel uses HBONE (HTTP-Based Overlay Network Environment), which tunnels mTLS-secured connections between nodes over HTTP CONNECT, allowing the proxy to be transparent to the workload.

For workloads that need L7 — request-level authorization, JWT validation, header-based routing, retries, fault injection — Ambient introduces Waypoint proxies: per-namespace (or per-service-account) Envoy deployments that traffic is routed through on an opt-in basis. The Waypoint is a real Envoy, with full L7 expressiveness, but you pay for it only where you actually need it.
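
In Ambient, a Waypoint is declared as a Kubernetes Gateway API resource using the istio-waypoint class; istioctl waypoint apply -n payments generates an equivalent manifest. A minimal sketch (the payments namespace is our example):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  namespace: payments
  labels:
    istio.io/waypoint-for: service   # this Waypoint handles service-addressed traffic
spec:
  gatewayClassName: istio-waypoint   # tells Istio to provision a Waypoint Envoy
  listeners:
  - name: mesh
    port: 15008                      # the HBONE port ztunnel uses to reach it
    protocol: HBONE
```

Traffic then opts in with the istio.io/use-waypoint: waypoint label on the namespace (or on individual services), which is what keeps the Waypoint cost proportional to policy need.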

The architectural bet is that most service-to-service traffic in a typical cluster needs L4 mTLS but not L7 policy — and the data we have seen across our clusters confirms this: roughly 70–85% of internal traffic is fine with L4 identity-based authorization and does not need request-level rules. Ambient lets that traffic pay only the ztunnel cost, while routing the 15–30% of L7-sensitive traffic through Waypoint.

Pure eBPF Mesh (Cilium Service Mesh)

Cilium’s mesh uses the same eBPF programs that already do its CNI duties — TC and XDP hooks in the kernel — to provide transparent encryption in transit (via WireGuard or IPsec), identity-based L3/L4 policy, and observability. For L7, Cilium currently routes traffic through an Envoy DaemonSet (not per-pod, not per-namespace — one Envoy per node): conceptually similar to ztunnel’s per-node model, but with Envoy supplying the L7 feature set.
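
As a sketch of the configuration surface, transparent WireGuard encryption is a Helm-values toggle on the Cilium chart (values as of Cilium 1.16; verify against your chart version):

```yaml
# Cilium Helm values excerpt: node-to-node WireGuard encryption
# for all pod traffic the CNI manages, no sidecars involved
encryption:
  enabled: true
  type: wireguard
```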

The performance argument is the strongest one: Cilium’s own benchmarks (cilium.io/blog, “How eBPF will solve Service Mesh”) show end-to-end p99 latency 30–50% lower than classic sidecar Istio at high RPS. Independent benchmarks from CNCF community work and academic groups (e.g., the 2023 IEEE Cloud paper on eBPF data planes) reproduce a meaningful — though smaller, typically 15–25% — advantage. We treat the vendor numbers as upper bounds and the independent numbers as the realistic range.

For a deeper treatment of the CNI side of this same data plane, see our CNI comparison of Calico, Cilium, Flannel, and Multus for Kubernetes 2026.

Figure: side-by-side data-plane architectures for sidecar, Ambient, and Cilium eBPF mesh. (./assets/arch_02.png)

Decision Criteria We Weighed

A scored ADR is only as good as its criteria. We landed on nine, each with an explicit weight that reflects what our workloads — a mix of latency-sensitive internal APIs, regulated payment paths, and bulk data-plane services — actually need. The weights are ours; yours should differ.

1. Latency overhead (15%). Median is interesting but p99 is where SLO breaches live. Sidecar Istio adds ~0.6–1.5 ms p99 per hop at moderate RPS in our internal tests; Ambient ztunnel adds ~0.2–0.5 ms p99 for L4 paths; Waypoint adds another ~0.3 ms when traversed. Cilium eBPF mesh reports the lowest overhead but the gap shrinks as you add L7 features.

2. Memory footprint (10%). Per-pod RSS for sidecars sits in the 50–150 MB band for Envoy and 5–20 MB for linkerd2-proxy. Ambient ztunnel runs ~30–80 MB per node — a flat cost regardless of pod count. Cilium agent is ~100–250 MB per node depending on workload profile but absorbs CNI + mesh in a single process.

3. mTLS coverage (15%). Can every service-to-service flow be encrypted and authenticated, including legacy workloads, headless services, and externally-routed traffic via egress gateways? All three options now claim “yes” but the implementation hooks differ — Ambient uses HBONE, Cilium can use WireGuard for transparent tunnel encryption or per-flow mTLS via the L7 path. We weight this high because of the compliance pressure noted above.

4. L7 policy expressiveness (12%). JWT validation, OAuth2, ext_authz to OPA, rate-limiting, header transformations, retries with budgets, fault injection. Classic Envoy is the gold standard. Linkerd2 is intentionally smaller. Cilium routes L7 through Envoy DS so it inherits most of Envoy’s expressiveness, but the integration is younger.

5. Multi-cluster (10%). Mesh-spanning identity, cross-cluster service discovery, and failover. Istio has the most mature multi-primary and primary-remote topologies. Cilium ClusterMesh is excellent for L3/L4 service-to-service but L7 multi-cluster is less battle-tested. Linkerd2 multi-cluster is functional but feature-light.

6. Multi-tenancy (8%). Can different namespaces have isolated control planes or at least isolated trust roots and policy scopes without weakening the overall guarantees? Istio’s revision-based upgrades and per-namespace tenancy patterns are stronger than Cilium’s namespace boundaries today.

7. Operational complexity (12%). How many CRDs, how many control-plane components, what does an incident playbook look like, how is the upgrade story? Linkerd2 is the simplest. Cilium is moderately complex but well-documented. Istio Ambient is more complex than classic Istio at install time but less complex at runtime because most pods are not running a sidecar.

8. Ecosystem maturity (10%). This includes Helm chart quality, Terraform providers, certified integrations with cert-manager, External Secrets, ArgoCD, the major observability vendors, and the breadth of community-maintained adapters. Istio leads, Cilium is close, Linkerd2 trails on count but excels on quality.

9. Fallback / rollback (8%). If the chosen path fails, how expensive is it to back out? Ambient is the strongest here — you can disable it namespace-by-namespace without redeploying workloads. Cilium mesh requires CNI co-existence planning if you are migrating from another CNI. Classic sidecar requires a rolling restart to add or remove.

A note we keep returning to: operational complexity is the criterion most people under-weight in ADRs and most regret later. A 5% latency win is irrelevant if your on-call cannot debug the system at 02:00. We weighted it at 12% deliberately, even though the marketing literature for every option claims it is “easier than ever.”

For broader context on observability tooling that integrates with these meshes, our guide to eBPF observability for kernel tracing and APM and our eBPF Kubernetes observability ADR on replacing APM pieces lay out the trade-off space.

Decision Matrix and Score

The scoring is on a 1–5 scale per criterion, multiplied by weight, summed to a final score. The 1–5 mapping is published in our internal ADR template; below is a condensed version. See ./assets/arch_03.png for the scoring flow.

Criterion (weight)           | Sidecar Istio | Ambient (L4+Wp) | Cilium Mesh | Linkerd2
-----------------------------|---------------|-----------------|-------------|---------
Latency overhead (15%)       | 3             | 4               | 5           | 4
Memory footprint (10%)       | 2             | 4               | 4           | 4
mTLS coverage (15%)          | 5             | 5               | 4           | 5
L7 expressiveness (12%)      | 5             | 5               | 4           | 3
Multi-cluster (10%)          | 5             | 4               | 4           | 3
Multi-tenancy (8%)           | 4             | 4               | 3           | 3
Operational complexity (12%) | 2             | 3               | 3           | 4
Ecosystem maturity (10%)     | 5             | 4               | 4           | 3
Fallback / rollback (8%)     | 3             | 5               | 3           | 4
Weighted total               | 3.80          | 4.23            | 3.87        | 3.75
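
To keep the totals auditable: each one is the sum of score times weight down its column. Worked out for the Ambient column: 4×0.15 + 4×0.10 + 5×0.15 + 5×0.12 + 4×0.10 + 4×0.08 + 3×0.12 + 4×0.10 + 5×0.08 = 4.23. The other columns follow the same formula.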

Reading the table: Ambient wins not because it dominates any single axis but because it has no weak spots. Sidecar Istio scores 5 on expressiveness and mTLS but is dragged down by memory (2) and operational complexity (2). Cilium Mesh has the best latency but trails on L7 expressiveness, multi-tenancy maturity, and ecosystem breadth — the last two are matters of time and may close by 2027. Linkerd2 is the simplest and lowest-overhead sidecar but loses on L7 features and multi-cluster.

A reader could legitimately re-weight the matrix and arrive at Cilium Mesh — if your workloads are latency-pathological (HFT-adjacent, real-time bidding) and your L7 policy needs are minimal, that is a defensible call. If your workloads are heavily L7-policy-driven (multi-tenant SaaS with per-tenant rate limits and auth), the scores in expressiveness and ecosystem maturity make Ambient even more attractive than the headline shows. The matrix is a frame, not a verdict.

CNCF’s 2024 annual survey reported service mesh adoption at 47% of production Kubernetes users, with Istio (around 38% mention share), Linkerd (around 22%), and Cilium Service Mesh (around 17%) leading the named-product landscape — the totals exceed 100% because many users run more than one. That distribution informed our ecosystem-maturity scores: Istio’s mindshare advantage shows up in integrations and StackOverflow answers, both of which matter at 02:00.

Figure: weighted decision matrix, sidecar vs Ambient vs eBPF service mesh, 2026. (./assets/arch_03.png)

What Goes Right With Each Option — and What Breaks

A score is hollow without failure-mode honesty. We have run all four in some form (production or substantial PoC). Here is the truth about what breaks. See ./assets/arch_04.png for the visual.

Classic sidecar: pod-startup race, RAM ceiling, upgrade pain

The most common production wound with classic sidecars is the app-starts-before-proxy race condition. Your app’s readiness probe says “ready,” it tries to send an outbound call, but Envoy’s xDS config has not landed yet. The call fails. Istio 1.7+ added holdApplicationUntilProxyStarts, Linkerd added linkerd-await, but enabling these everywhere requires per-namespace audit and many teams have not done it. At 18,000 pods, the RAM ceiling is a hard problem — we measured 1.7 TB of mesh-proxy RSS across our fleet, and that was after aggressive Envoy concurrency tuning. Sidecar upgrades require rolling-restart of every pod, which on a 600-node cluster takes 6–9 hours of carefully orchestrated rollout per minor version.
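
For reference, the mesh-wide switch is one line of meshConfig; a minimal sketch via the IstioOperator API (it can also be set per-workload with the proxy.istio.io/config annotation):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      # hold app containers until Envoy reports ready, closing the
      # app-starts-before-proxy race described above
      holdApplicationUntilProxyStarts: true
```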

Ambient: blast radius at the node, tunnel opacity, Waypoint hop

ztunnel is a per-node DaemonSet. If it crashes, every pod on that node loses mesh connectivity until it restarts — the blast radius is the node, not the pod. This is mitigated by ztunnel’s small Rust footprint and its track record of being remarkably crash-free, but it belongs on the failure-mode diagram all the same. HBONE is opaque to legacy NIDS appliances that inspect TLS; if your security team has invested in mid-stream inspection, that conversation needs to happen early. Each Waypoint hop adds ~0.3 ms p50, which is acceptable for L7-policy namespaces but worth measuring.

Cilium eBPF: kernel coupling, L7 still needs Envoy, verifier surprises

Cilium’s eBPF programs require Linux kernel 5.10 minimum, 6.1+ preferred for the latest mesh features (per Cilium 1.16 release notes). On managed Kubernetes (EKS, GKE, AKS) the kernel is the cloud vendor’s choice; AKS in particular has historically lagged on kernel versions. L7 still routes through Envoy as a DaemonSet — Cilium has not eliminated Envoy from the picture, only relocated it. And there is the verifier-rejection class of bug: a kernel bump on a Cilium-managed cluster occasionally triggers eBPF verifier rejections on previously-valid programs, which manifests as a Cilium agent that refuses to start. Cilium’s release notes flag the affected kernel ranges, but the failure mode is real and frightening.

Linkerd2: lean but still a sidecar, smaller community, no native ambient

Linkerd2’s Rust proxy is genuinely lean — 5–15 MB RSS, sub-millisecond p99 overhead. But it is still a sidecar: every architectural argument against sidecars applies. The community is smaller — for any given integration question, the StackOverflow answer count for Linkerd is roughly 25% of Istio’s. As of 2.16, no native ambient mode; Buoyant has signalled work in this direction but it is not GA.

A useful reflexive question: what evidence would change our minds about the score? For Sidecar Istio, evidence of Envoy memory dropping below 20 MB per pod by default would shift the matrix meaningfully. For Cilium, a maturation of multi-tenancy primitives and a documented multi-primary multi-cluster pattern would push it above Ambient on our weighting. For Ambient, a public incident report of ztunnel-related cluster-wide outage would push us back toward sidecar for the most critical namespaces.

Figure: failure-mode comparison across sidecar, Ambient, eBPF, and Linkerd. (./assets/arch_04.png)

The Decision and Migration Plan

Decision: We will adopt Istio Ambient as the default mesh data plane and selectively enable Waypoint L7 proxies in namespaces that carry regulated traffic (payments, PII, multi-tenant control planes). Cilium continues as the CNI but we are not enabling Cilium Service Mesh L7 features at this time.

Rationale. Ambient’s weighted score wins on our criteria, but the more important reasons are: (1) the cost of the mesh becomes proportional to the policy need, which aligns with how our workloads actually distribute; (2) the fallback story is the cleanest of the four — namespace-level enable/disable without workload disruption; (3) the operational simplicity at runtime is better than classic Istio despite a more complex install. We accept that Cilium Mesh would win on raw latency for the latency-bound services, but the L7 policy gap and the multi-tenancy maturity gap weighed more for us.

Migration plan, four quarters. See ./assets/arch_05.png.

Q1 2026 — Foundation. Cilium 1.16+ is already our CNI; we extend the baseline by capturing strict p50/p99 latency, error rate, MTTR, and resource-cost SLOs per namespace. These are the comparison numbers everything else is measured against. No mesh change yet.

Q2 2026 — Ambient L4. Install Istio 1.22+ in Ambient mode. Enroll two non-critical namespaces (an internal dashboard, a metric-scrape side-service) and validate mTLS, identity, and L4 policy. Gating criterion: p99 latency increase ≤ 5% vs Q1 baseline. If we miss the gate we pause and diagnose; we do not push through.
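
Enrolment itself is a namespace label, with no pod restarts, which is a large part of why the fallback story scored well. A sketch for one pilot namespace (the name is ours):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: internal-dashboard
  labels:
    # puts every pod in this namespace on the ztunnel L4 data path;
    # removing the label un-enrolls without redeploying workloads
    istio.io/dataplane-mode: ambient
```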

Q3 2026 — Waypoint L7. Deploy Waypoint proxies in namespaces carrying PCI scope and PII scope. Author AuthorizationPolicy and RequestAuthentication CRDs. Run an audit and a focused pen-test against the L7 enforcement path. Gating criterion: external audit sign-off and a clean pen-test report.
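
A sketch of the policy pair we will attach to the payments Waypoint (issuer and JWKS URL are placeholders, not our real identity provider):

```yaml
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: payments-jwt
  namespace: payments
spec:
  targetRefs:                # attach to the namespace Waypoint, not a sidecar
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: waypoint
  jwtRules:
  - issuer: "https://idp.example.internal"             # placeholder issuer
    jwksUri: "https://idp.example.internal/jwks.json"  # placeholder JWKS endpoint
---
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: payments-require-jwt
  namespace: payments
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: waypoint
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]   # only requests carrying a valid JWT pass
```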

Q4 2026 — Cutover. Migrate remaining namespaces off any legacy mesh and decommission Istio classic sidecars where we have them. Document the explicit reversal trigger: a sustained >10% p99 regression vs Q1 baseline rolls us back to sidecar mode for the affected namespace, namespace-by-namespace, no all-or-nothing.

Steady state by end of 2026: Ambient L4 default, Waypoint L7 where required, Cilium CNI underneath, Tetragon for runtime security. For GitOps orchestration of this rollout, the ArgoCD vs Flux for GitOps in industrial fleets tutorial covers the deployment patterns we will use to roll out the CRDs.

Figure: service mesh migration roadmap, Ambient ztunnel then Waypoint, by quarter, 2026. (./assets/arch_05.png)

Consequences: Operational, Cost, Team-Skill

Operational. Our on-call playbook gets a new chapter: ztunnel as a per-node component with its own restart/health protocol; Waypoint proxies as per-namespace deployments that need their own SLOs. We expect 2–3 weeks of additional runbook authoring effort during Q2 and Q3. The MTTR for mesh-related incidents will go up temporarily as the team builds intuition, then settle below the classic-Istio baseline by Q4 — we have committed to measuring this, not assuming it.

Cost. We project a 35–55% reduction in mesh-proxy RAM cost at steady state (from ~1.7 TB to ~0.8–1.1 TB across the fleet), against a one-time cost of roughly 2.5 engineer-quarters for the migration. The licence cost is unchanged (Istio is Apache 2.0, ztunnel and Waypoint are part of the same project). We add a vendor-supported tier from a CNCF-certified support partner during Q2–Q3 as a deliberately conservative move; we will revisit whether to keep it past Q4 based on incident volume.

Team skill. Two engineers will own deep Ambient/Waypoint expertise (rotating training, conference attendance, contributing back to the Istio community where useful). The broader platform team gets a 1-day Ambient training in Q2. Application teams see no change in how they consume the mesh — service-to-service traffic remains transparently mTLS’d; declarative policy is still authored via Kubernetes CRDs.

Risk acknowledged. We are betting that Istio Ambient’s trajectory (it has been GA since June 2024 and is on minor-version 1.24 at time of writing) continues to mature without a major architectural reset. If a CVE class or a CNCF survey datapoint shifts the picture in 2027, this ADR will need a follow-up.

Open Questions, Trade-offs, Reversal Triggers

Open: do we ever enable Cilium Mesh L7? If we add a workload class with truly extreme latency needs (sub-100 µs internal RPC), Cilium’s eBPF data plane may be the only realistic option for that subset. We will treat that as a separate ADR addendum if and when it arises, rather than over-architecting now.

Open: how do we handle non-mesh workloads? Some legacy daemonsets and CNI-managed-only workloads will not be enrolled in the mesh. We need to keep our NetworkPolicy authoring discipline strict so that “no mesh” never means “no policy.”
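
The discipline is easiest to keep when every non-mesh namespace starts from default-deny; a minimal sketch (the namespace name is illustrative, and DNS and other essentials must be re-allowed explicitly):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: legacy-daemons   # illustrative non-mesh namespace
spec:
  podSelector: {}             # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress                    # nothing flows until an explicit policy allows it
```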

Trade-off: HBONE opacity vs NIDS investment. Our SOC has a stated preference for L7 inspection of internal traffic. HBONE is opaque end-to-end. The compromise: NIDS at the cluster ingress/egress boundary, mTLS-only inside, and reliance on the Waypoint’s request-level logs (exported to the SIEM) for L7-grain auditability inside the mesh. This is the cleanest pattern we have found; reasonable people would disagree.

Reversal trigger 1: latency. A sustained >10% p99 regression for 4 consecutive weeks in any production namespace = roll back that namespace to sidecar mode and convene a post-mortem before continuing.
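
A sketch of how we intend to watch this trigger, assuming Istio's standard istio_request_duration_milliseconds histogram and a baseline we record ourselves (namespace:mesh_p99:baseline_ms is our own recording rule, not a stock metric, and it must keep the destination_workload_namespace label for the comparison to match). The four-week "sustained" judgment stays with a human:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: mesh-rollback-trigger-latency
  namespace: monitoring
spec:
  groups:
  - name: mesh-rollback-triggers
    rules:
    - alert: MeshP99RegressionVsBaseline
      expr: |
        histogram_quantile(0.99,
          sum by (le, destination_workload_namespace) (
            rate(istio_request_duration_milliseconds_bucket[30m])
          )
        ) > 1.10 * namespace:mesh_p99:baseline_ms
      for: 6h   # flags a persistent breach; the 4-week review is manual
      labels:
        severity: warning
      annotations:
        summary: "mesh p99 more than 10% over Q1 baseline in {{ $labels.destination_workload_namespace }}"
```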

Reversal trigger 2: ztunnel reliability. Any cluster-wide ztunnel-related outage (definition: more than 25% of nodes simultaneously losing mesh data-path connectivity) = pause new namespace enrolment and convene an architectural review.

Reversal trigger 3: upstream pivot. If the Istio TSC publicly deprecates or substantially re-architects Ambient (low probability but not zero), we re-open the decision.

FAQ

Q1. Is sidecar service mesh dead in 2026?
No. Classic sidecar is still the most expressive, most mature, and most ecosystem-rich option, and for clusters with under a few thousand pods the resource cost is tolerable. “Dead” is the wrong frame; “no longer the obvious default” is the accurate one. The choice depends on pod density, policy needs, and operational appetite.

Q2. Can I run Istio Ambient on top of Cilium CNI?
Yes — and it is a common and supported combination. Cilium handles CNI (pod networking, NetworkPolicy, observability via Hubble), and Istio Ambient handles mesh (mTLS via ztunnel, L7 via optional Waypoint). The components are complementary, not competing, in this configuration.
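
Two Cilium Helm values matter for this pairing; Istio's platform prerequisites for Cilium call them out (verify against the current docs for your exact versions):

```yaml
# Cilium Helm values for co-existing with Istio
cni:
  exclusive: false          # let Istio's CNI plugin chain alongside Cilium's
                            # instead of Cilium owning the CNI config exclusively
socketLB:
  hostNamespaceOnly: true   # keep Cilium's socket-level load balancing out of pod
                            # namespaces so Istio's redirection still sees traffic
```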

Q3. Does eBPF mesh really eliminate Envoy?
No. Cilium’s L7 path still routes through an Envoy DaemonSet. eBPF eliminates the per-pod sidecar, but L7 policy of comparable expressiveness still needs an L7 proxy somewhere. The honest framing is “eBPF lets the L7 proxy be shared across many pods” rather than “eBPF replaces Envoy.”
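
For intuition, this is the kind of rule that moves a flow off the pure-eBPF fast path: the moment a policy names an HTTP method or path, enforcement needs the node-level Envoy. A sketch with illustrative labels:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-read-only
spec:
  endpointSelector:
    matchLabels:
      app: backend          # illustrative workload label
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:               # presence of an L7 block routes this flow
        - method: GET       # through Cilium's per-node Envoy for enforcement
          path: "/api/.*"
```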

Q4. How big does my cluster have to be before sidecar overhead matters?
There is no hard threshold, but in our experience the pain starts becoming visible around 2,000–3,000 mesh-enrolled pods (RAM cost noticeable on bin-packing) and becomes acute past 8,000 (a meaningful fraction of node memory is mesh). Below 1,000 pods, sidecar overhead is usually a minor line item.

Q5. Is Linkerd2 still a credible choice in 2026?
Yes, for teams that prize operational simplicity over feature breadth and do not have multi-cluster or rich L7 requirements. Linkerd2 has the smallest proxy footprint of any sidecar option and the simplest control plane. The trade-offs are real (smaller ecosystem, no ambient mode as of 2.16) but the product is well-engineered and actively maintained by Buoyant.

References

  • Istio project, “Ambient mesh GA in Istio 1.22” (istio.io blog, June 2024) — primary source for Ambient stability and supported features.
  • Cilium project, “Cilium 1.16 release notes” and “How eBPF will solve Service Mesh” (cilium.io/blog) — vendor-published benchmarks and architecture rationale; weight sceptically.
  • Linkerd, “Linkerd 2.16 release notes” (linkerd.io) — proxy footprint claims and feature roadmap.
  • Envoy project, “Envoy 1.32 release notes” (envoyproxy.io) — filter library reference and upgrade context.
  • CNCF, “Cloud Native Survey 2024” (cncf.io/reports) — service mesh adoption percentages and named-product mindshare.
  • Solo.io, “Istio benchmark methodology” (solo.io/blog) — comparison numbers; vendor-tilted, useful as a counter-frame to Cilium’s blog.
  • Buoyant, “Linkerd vs Istio performance comparison” (buoyant.io/blog) — Linkerd-tilted; same methodology caveat.
  • CNCF service mesh landscape (landscape.cncf.io) — current vendor and project list, governance status.
  • “NIS2 Directive” (EU Directive 2022/2555, in force October 2024) — regulatory driver for mTLS-everywhere expectations.
  • Akhtar et al., “Performance of eBPF-based service meshes,” IEEE Cloud 2023 — independent benchmark of the eBPF data plane.
