Mutual TLS (mTLS) made easy with OpenShift Service Mesh, Part 1

Voravit L
7 min read · Jan 29, 2021

Enabling mutual TLS for application-to-application communication involves tedious tasks for both Dev and Ops teams. OpenShift Service Mesh makes it easier for both to enable mutual TLS for their applications.

Security is a must for your applications and for your platform. Kubernetes provides many features to harden both the cluster itself and your applications, e.g. network policies, egress firewalls, etc.

In many cases, there will be requirements to secure communication between your microservices with TLS or mutual TLS.

This can be done at the application level: developers can add code and configuration to handle TLS/mTLS themselves, but this is tedious and requires significant ongoing effort to manage private keys and certificates.

This is one of many areas where OpenShift Service Mesh (Istio) can help. (Check my previous blog, OpenShift Service Mesh vs Istio.)

Istio can automatically secure service-to-service connections within the mesh with mutual TLS. This is possible because, inside the mesh, each pod communicates through its sidecar proxy (Envoy). When a pod connects to a service that does not have a sidecar, the connection falls back to plain text without mutual TLS, which lets you adopt mutual TLS incrementally.

Setup Service Mesh Control Plane

We need Istio installed and configured on your cluster. For OpenShift, check my previous blog, How to Get Traffic into Your OpenShift Service Mesh (just follow the section “Setup OpenShift Service Mesh”), or use the following steps to set up the control plane and data plane.

Note that you need to install the following operators from OperatorHub before proceeding:

  • ElasticSearch Operator (by Red Hat)
  • Jaeger Operator (by Red Hat)
  • Kiali Operator (by Red Hat)
  • OpenShift Service Mesh Operator (by Red Hat)

Follow the instructions below to create a control plane and configure the data plane namespace as a member of this control plane; a sketch of the two resources follows.
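A minimal sketch of these two resources, assuming the control plane lives in a namespace called control-plane (the namespace, names, and version here are assumptions, not the exact manifests from the repository):

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: control-plane      # assumed control plane namespace
spec:
  version: v2.0
  tracing:
    type: Jaeger
  addons:
    kiali:
      enabled: true
---
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default                 # the member roll must be named "default"
  namespace: control-plane
spec:
  members:
    - data-plane                # namespace that joins the mesh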

Deploy Applications and Istio Config

Deploy the sample applications to the data-plane namespace. A sketch of the backend deployment is shown below; the frontend follows the same pattern.
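The image name and labels here are assumptions for illustration, but the sidecar.istio.io/inject annotation is required because OpenShift Service Mesh injects the Envoy sidecar only into annotated pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-v1
  namespace: data-plane
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
      version: v1
  template:
    metadata:
      labels:
        app: backend
        version: v1
      annotations:
        sidecar.istio.io/inject: "true"     # ask OpenShift Service Mesh to inject the Envoy sidecar
    spec:
      containers:
        - name: backend
          image: quay.io/voravitl/backend:v1   # assumed image name for illustration
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: data-plane
spec:
  selector:
    app: backend
  ports:
    - name: http
      port: 8080
      targetPort: 8080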

Istio provides two configurations to control traffic flow: a VirtualService defines how traffic is routed to your service, based on information provided by DestinationRules.

Create a DestinationRule and a VirtualService for the backend application.
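A minimal sketch of both resources, assuming the backend pods are labeled app: backend and version: v1:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend
  namespace: data-plane
spec:
  host: backend                 # Kubernetes service name
  subsets:
    - name: v1
      labels:
        version: v1             # pods with this label form subset v1
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: backend
  namespace: data-plane
spec:
  hosts:
    - backend
  http:
    - route:
        - destination:
            host: backend
            subset: v1          # route all traffic to subset v1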

For the frontend application, we need an additional Istio configuration: a Gateway, which describes a load balancer at the edge of the mesh. Basically, for any service that you expose with an Ingress or an OpenShift Route, you need to create a Gateway.

The VirtualService bound to this Gateway must reference the Gateway by name and use a host that matches your cluster's domain.

Check the following Gateway and VirtualService configuration.

The Gateway's hosts field contains the FQDN that is exposed with a Route, and the VirtualService references the Gateway by name in the format <namespace>/<name>.
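A sketch of the two resources, assuming a Gateway named frontend-gateway and a placeholder cluster domain apps.example.com (replace the host with frontend.<your cluster domain>):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: frontend-gateway
  namespace: data-plane
spec:
  selector:
    istio: ingressgateway                  # bind to the default istio-ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - frontend.apps.example.com        # FQDN exposed by the OpenShift Route
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
  namespace: data-plane
spec:
  hosts:
    - frontend.apps.example.com            # must match the Gateway's host
  gateways:
    - data-plane/frontend-gateway          # Gateway reference in <namespace>/<name> format
  http:
    - route:
        - destination:
            host: frontend
            port:
              number: 8080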

The following instructions retrieve your cluster's domain and create the Gateway, DestinationRule, and VirtualService for the frontend application.

Here is the high-level architecture of our application.

Log in to the OpenShift Developer Console to check the deployed applications.

Log in to the Kiali Console and check the Istio configuration. You can view and edit Istio config from this console.

Test our application.

# Use cURL to connect to frontend application
# DOMAIN is your cluster Domain
curl http://frontend.$DOMAIN

# Sample output
Frontend version: 1.0.0 => [Backend: http://backend:8080/version, Response: 200, Body: Backend version:v1, Response:200, Host:backend-v1-58ff89cccc-pchmp, Status:200, Message: ]

Secure backend service with mTLS

Lock down the backend service with mTLS by configuring a PeerAuthentication policy. PeerAuthentication controls how traffic is tunneled to the sidecar: DISABLE (mTLS is off), PERMISSIVE (either plain text or mTLS is accepted), STRICT (only mTLS is accepted), and UNSET (inherit from the parent; same as PERMISSIVE if no parent policy is specified).

We will set the backend service to accept only mTLS by using STRICT mode.

Create a PeerAuthentication policy for the backend service.
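A minimal sketch, assuming the backend pods carry the label app: backend:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: backend
  namespace: data-plane
spec:
  selector:
    matchLabels:
      app: backend              # apply only to the backend workloads
  mtls:
    mode: STRICT                # accept mTLS connections only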

Test again with curl. You will get an error similar to “upstream connect error or disconnect/reset before headers”:

Frontend version: 1.0.0 => [Backend: http://backend:8080/version, Response: 503, Body: upstream connect error or disconnect/reset before headers. reset reason: connection termination

Configure the backend destination rule with mTLS.
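This is done by adding a trafficPolicy with ISTIO_MUTUAL to the existing DestinationRule, roughly as follows:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend
  namespace: data-plane
spec:
  host: backend
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL        # originate mTLS with the mesh-managed certificates
  subsets:
    - name: v1
      labels:
        version: v1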

Test again with curl. This time you will get a response with return code 200.

Frontend version: 1.0.0 => [Backend: http://backend:8080/version, Response: 200, Body: Backend version:v1, Response:200, Host:backend-v1-58ff89cccc-pchmp, Status:200, Message: ]

Test that a pod without a sidecar cannot access the backend service by deploying another pod without a sidecar and connecting to the backend service.

Check the backend service's port:

oc get svc/backend -n data-plane

# Sample output
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
backend   ClusterIP   172.30.77.122   <none>        8080/TCP   3h22m

Execute curl from a pod without a sidecar. This will fail because the pod does not connect with TLS and a valid client certificate.

oc run test-station -n data-plane -i --image=quay.io/voravitl/backend-native:v1 --rm=true --restart=Never -- curl -vs http://backend:8080

# Sample output
* Rebuilt URL to: http://backend:8080/
*   Trying 172.30.77.122...
* TCP_NODELAY set
* Connected to backend (172.30.77.122) port 8080 (#0)
> GET / HTTP/1.1
> Host: backend:8080
> User-Agent: curl/7.61.1
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
pod "test-station" deleted

Secure frontend service with mTLS

Repeat the same steps for the frontend service.

Test that a pod without a sidecar cannot access the frontend service by deploying another pod without a sidecar and connecting to the frontend service.

Check the frontend service's port:

oc get svc/frontend -n data-plane

# Sample output
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
frontend   ClusterIP   172.30.57.100   <none>        8080/TCP   21h

Execute curl from a pod without a sidecar:

oc run test-station -n data-plane -i --image=quay.io/voravitl/backend-native:v1 --rm=true --restart=Never -- curl -vs http://frontend:8080

# Sample output
* Rebuilt URL to: http://frontend:8080/
*   Trying 172.30.57.100...
* TCP_NODELAY set
* Connected to frontend (172.30.57.100) port 8080 (#0)
> GET / HTTP/1.1
> Host: frontend:8080
> User-Agent: curl/7.61.1
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
pod "test-station" deleted
pod data-plane/test-station terminated (Error)

Note that specifying mTLS in the destination rule would not be needed if we configured automatic mTLS in the control plane. It is intentionally disabled here for demonstration purposes.

The following snippet is extracted from our ServiceMeshControlPlane resource: data plane security is configured with both auto mTLS and mTLS set to false.
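The relevant part of the control plane resource looks roughly like this (resource name and namespace are assumptions):

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: control-plane      # assumed control plane namespace
spec:
  security:
    dataPlane:
      automtls: false           # do not automatically configure client-side mTLS
      mtls: false               # do not enforce STRICT mTLS mesh-wide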

How about the Health Check Probe?

We just noticed that our backend deployment does not have liveness and readiness probes, so we will add both probes to the backend deployment, as sketched below.
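A sketch of the probes added to the backend container spec; the probe path and timings are assumptions:

livenessProbe:
  httpGet:
    path: /                     # assumed probe path
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /                     # assumed probe path
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10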

Wait a few seconds and check the status of the deployment with the oc command. You will find that the new backend pod restarts repeatedly because its liveness probe fails.

oc get pods -n data-plane

# Sample output
NAME                           READY   STATUS             RESTARTS   AGE
backend-v1-58ff89cccc-pchmp    2/2     Running            0          3h32m
backend-v1-6c8dbdd97b-96tcw    1/2     CrashLoopBackOff   5          99s
frontend-v1-6c7cb4d996-hj5xl   2/2     Running            0          3h32m

You can also check this from the Developer Console.

Both readiness and liveness probes fail because the kubelet does not have a sidecar, so its plain-text probe requests are rejected by the STRICT mTLS policy. To fix this issue, you can tell the sidecar to rewrite incoming HTTP probes by annotating your deployment's pod template with:

sidecar.istio.io/rewriteAppHTTPProbers: "true"
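In the deployment's pod template, the annotation sits alongside the sidecar injection annotation, for example:

spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
        sidecar.istio.io/rewriteAppHTTPProbers: "true"   # sidecar rewrites kubelet HTTP probes so they are not blocked by STRICT mTLS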

Open the Kiali Console and navigate to Graph. In the Display menu, select Show “Traffic Animation” and Show Badge “Security”.

Then generate some load on our application. In my environment, I used “siege” as the load testing tool. It's a simple command line tool:

siege -c 10 http://frontend.$DOMAIN

Kiali will display the graph as follows. Notice that the lower part of the graph (the dark blue line) is traffic from the kubelet performing health check probes against the backend application, and the green line is traffic from siege to the frontend application via the istio-ingressgateway.

Summary

With OpenShift Service Mesh (Istio), you can enable mTLS for communication between pods within the mesh at the platform level, without any modification to your application.

All configuration is done by creating, updating, and deleting custom resources (via YAML files), so you can integrate this with your infrastructure-as-code tooling, CI/CD pipelines, or even GitOps workflows.

How about securing the application we exposed at the ingress with mutual TLS? Part 2 of this series will discuss this.

Last but not least, you can find the YAML files and a shell script for automating the whole demo setup here:

https://github.com/voraviz/openshift-service-mesh-ingress-mtls.git

You can also learn and play with Istio from your browser with the following interactive lab.

OpenShift: Interactive Learning Portal

Have fun with Service Mesh :)
