
mTLS between microservices without a service mesh

Vishwaraja Pathi · March 5, 2026 · 9 min read

Whenever the topic of encrypting pod-to-pod traffic comes up, someone in the room will say "just use Istio." It's become the default answer to any Kubernetes networking problem, the way "use Kafka" became the answer to every messaging question five years ago. And like Kafka, Istio is a genuinely powerful piece of technology that is wildly overkill for most teams that adopt it.

We've deployed Istio exactly once in production. The client had 80+ microservices, a dedicated platform team, and genuine requirements around traffic splitting, fault injection, and distributed tracing. For that team, a service mesh made sense. For the other dozen or so clusters we manage, it would have been an expensive distraction.

This post is about what we do instead.

The actual problem: plaintext by default

Kubernetes networking is flat and unencrypted by default. Every pod can talk to every other pod, and that traffic travels as plaintext over the node's virtual network. If you're running a CNI like Calico or the default VPC CNI on EKS, there's no encryption between pods unless you explicitly set it up.

For a lot of teams, this doesn't matter in development. But the moment you're handling PII, health data, financial transactions, or anything subject to SOC 2 or HIPAA, an auditor will ask: "Is your internal service-to-service traffic encrypted?" And "no, but the VPC is private" is not the answer they're looking for.

You need mutual TLS. Both sides of the connection present certificates, both sides verify the other. The question is how you get there without adopting a control plane that's more complex than your actual application.

cert-manager: your private CA in the cluster

The foundation of our approach is cert-manager with a self-signed ClusterIssuer acting as a private certificate authority. cert-manager is a mature, well-maintained project that does one thing well: it manages TLS certificates inside Kubernetes. It handles issuance, renewal, and rotation automatically.

We create a root CA certificate, then use that CA to issue short-lived certificates for each service. Those certificates get mounted into pods as Kubernetes Secrets, which the application (or a sidecar like Envoy) uses to terminate and originate TLS connections.
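Bootstrapping that private CA is itself just three resources: a self-signed issuer that exists only to mint the root, the root CA Certificate, and the CA-backed ClusterIssuer that every service certificate references. A minimal sketch, assuming the names `selfsigned-bootstrap` and `internal-ca-root` (both are yours to choose; `internal-ca` matches the issuerRef used in the service Certificate):

```yaml
# 1. A self-signed issuer, used only to mint the root CA.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-bootstrap
spec:
  selfSigned: {}
---
# 2. The root CA. cert-manager stores the key pair in a Secret
#    in the cert-manager namespace (cluster resource namespace).
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: internal-ca-root
  namespace: cert-manager
spec:
  isCA: true
  commonName: internal-ca
  secretName: internal-ca-root
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-bootstrap
    kind: ClusterIssuer
---
# 3. The CA-backed issuer that service Certificates reference.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: internal-ca
spec:
  ca:
    secretName: internal-ca-root
```

You'd typically give the root a much longer duration than the leaf certificates, since rotating the root means redistributing trust to every service.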

The Certificate CRD is straightforward:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: order-service-tls
  namespace: production
spec:
  secretName: order-service-tls
  duration: 24h
  renewBefore: 8h
  privateKey:
    algorithm: ECDSA
    size: 256
  dnsNames:
    - order-service
    - order-service.production.svc.cluster.local
  issuerRef:
    name: internal-ca
    kind: ClusterIssuer

A few things to note here. We issue certificates with a 24-hour duration and renew 8 hours before expiry. Short-lived certs reduce the blast radius of a compromised key. We use ECDSA P-256 because the keys are smaller and signing is faster than RSA, which matters when you're terminating TLS on every service call. And the dnsNames field includes both the short service name and the fully-qualified cluster-local DNS name, because depending on how your services resolve each other, either could be the hostname used for the TLS handshake.

cert-manager watches these Certificate resources and automatically creates and rotates the corresponding Secrets. Your pods mount the Secret as a volume, and you configure your application or sidecar to use those cert files. No manual renewal, no cron jobs, no forgetting to rotate keys three months after launch.
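Wiring the Secret into a pod is standard volume plumbing. A sketch of the pod-template fragment, assuming the `order-service-tls` Secret from the Certificate above; the container name and mount path are illustrative:

```yaml
# Pod template fragment. cert-manager writes tls.crt, tls.key,
# and ca.crt into the Secret, so all three appear under /etc/tls.
spec:
  containers:
    - name: order-service        # illustrative container name
      volumeMounts:
        - name: tls
          mountPath: /etc/tls    # app reads /etc/tls/tls.crt etc.
          readOnly: true
  volumes:
    - name: tls
      secret:
        secretName: order-service-tls
```

One caveat: the kubelet refreshes the mounted files after cert-manager rotates the Secret, but your server process still has to re-read them. Either reload on a file watch or signal, or let a sidecar proxy handle the reload.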

Kong's mtls-auth plugin: verification at the gateway

If you're already running Kong as your ingress controller (and if you've read our EKS checklist, you know we run it on every cluster), there's an even simpler option for north-south traffic: the mtls-auth plugin.

Instead of modifying every service to verify client certificates, you let Kong handle it. The plugin validates that incoming requests present a certificate signed by your trusted CA. If the cert is missing, expired, or signed by the wrong CA, Kong rejects the request before it ever reaches your application.

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: mtls-auth
  namespace: production
config:
  ca_certificates:
    - c5f6a98b-2e3d-4f1a-b7c8-9d0e1f2a3b4c
  skip_consumer_lookup: false
  revocation_check_mode: SKIP
plugin: mtls-auth

The ca_certificates field references the ID of your CA certificate uploaded to Kong. When skip_consumer_lookup is set to false, Kong maps the certificate's CN or SAN to a Kong consumer, which means you can apply per-service rate limits, ACLs, or logging based on which service is making the call. It turns your gateway into an identity-aware access layer without writing a single line of application code.
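Attaching the plugin to a route is then one annotation on the Ingress. A sketch, assuming an Ingress for the order-service; the hostname and path are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-service
  namespace: production
  annotations:
    konghq.com/plugins: mtls-auth   # the KongPlugin defined above
spec:
  ingressClassName: kong
  rules:
    - host: api.example.com         # illustrative hostname
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
```

Whether Kong also speaks TLS to the upstream service is configured separately (via the Kong Ingress Controller's protocol annotation on the Service); the plugin only governs client certificate verification at the gateway.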

This is where we typically start with clients who need mTLS. It covers the gateway edge, it's a single plugin configuration, and it works with certificates issued by the same cert-manager CA you already set up. For strict east-west mTLS between services that don't go through the gateway, you still need the cert-manager approach with application-level or sidecar-level TLS termination.

What you give up

We should be honest about the tradeoffs, because they're real.

A full service mesh like Istio or Linkerd gives you transparent mTLS — the application has no idea encryption is happening. The sidecar proxy handles everything. With our approach, your application either needs to be configured to use the mounted certificates, or you need to add an Envoy sidecar yourself. That's extra work in your Helm charts and deployment configs.

You also lose the observability features that come with a mesh. Istio gives you per-request metrics, distributed traces, and topology maps out of the box. Without it, you need to build observability separately with Prometheus, OpenTelemetry, or whatever your stack uses. Most of our clients already have observability tooling in place, so this isn't a gap. But if you're starting from zero, it's worth considering.

And there's no traffic management. Canary deployments, traffic splitting, fault injection, circuit breaking — these are features you get from a mesh's data plane. Without it, you're handling those concerns at the application level or through Kubernetes-native mechanisms like multiple Deployments behind a Service. It works, but it's more manual.
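The "multiple Deployments behind a Service" pattern is worth spelling out. Two Deployments share the label the Service selects on, and the replica ratio becomes your traffic split — coarse, but it works. A sketch with illustrative names:

```yaml
# The Service selects on app only, not version, so it
# load-balances across both Deployments.
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service      # deliberately no version label
  ports:
    - port: 80
      targetPort: 8080
---
# Stable version at 9 replicas. A second Deployment (not shown)
# with the same app label, version: v2, and 1 replica yields a
# roughly 10% canary split.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service-v1
spec:
  replicas: 9
  selector:
    matchLabels:
      app: order-service
      version: v1
  template:
    metadata:
      labels:
        app: order-service
        version: v1
    spec:
      containers:
        - name: order-service
          image: registry.example.com/order-service:v1  # illustrative
```

The split is only as fine-grained as your replica counts, and there's no request-level routing — that's exactly the gap a mesh's data plane fills.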

When this approach is enough

We've deployed this pattern across clusters running anywhere from 4 to 25 services. In our experience, it covers the vast majority of real-world requirements:

  • Compliance audits — you can demonstrate that all internal traffic is encrypted with mutual authentication. Auditors care about the presence of encryption, not whether it's implemented via a mesh or application-level TLS.
  • Zero-trust posture — every service proves its identity on every request. Compromising one pod doesn't give you free access to call any other service.
  • Certificate lifecycle — cert-manager handles rotation automatically. No human in the loop, no expiry surprises at 3 AM.

The operational cost is dramatically lower than running a mesh. No sidecar injectors to maintain, no control plane upgrades to manage alongside your Kubernetes upgrades, no debugging proxy configuration when things go sideways. cert-manager is a single controller. Kong's mTLS plugin is a single line in an annotation. The surface area of what can break is small.

Where we draw the line

For teams running more than 20-30 services with complex traffic patterns — A/B testing at the network level, multi-cluster service discovery, or advanced observability requirements — a service mesh starts earning its complexity budget. At that scale, the operational overhead of managing per-service TLS configs and Envoy sidecars manually starts to approach the overhead of just running Istio or Linkerd and letting the mesh handle it.

The key is being honest about where you are today. Most teams we work with are not running 50 microservices. They're running 8 to 15 services across two or three environments, and they need encrypted traffic for a SOC 2 audit that's happening next quarter. For that team, the answer is not "spend three weeks setting up Istio." The answer is cert-manager, a private CA, and the Kong mtls-auth plugin. Ship it in a day, move on to the actual product.

If you're trying to figure out where your team falls on this spectrum, or you need to get mTLS deployed before an upcoming audit, reach out. We've done this enough times to have an opinion on your specific situation.


Vishwaraja Pathi

Cloud & DevOps specialist with 13+ years of experience. Founder of Adiyogi Technologies. Previously at Roku, Rocket Lawyer, and BetterPlace.
