How to get traffic into OpenShift Service Mesh

Voravit L
8 min read · Nov 20, 2020

Service Mesh can simplify your microservices application by pushing complexity down to the infrastructure, managed with Infrastructure-as-Code capabilities.

Service Mesh can do a lot for you, e.g. service resilience with timeouts, retries, and circuit breakers, request-based routing, or even securing microservice communication with mutual TLS.

This blog will walk you through deploying OpenShift Service Mesh, getting traffic into your mesh with OpenShift’s route, and then applying Istio policies for chaos engineering and canary deployment.

Demo Applications

This demonstration code simulates a common use case: an application consisting of a frontend (e.g. your API) and a backend (e.g. a service that accesses your backend system).

The frontend and backend apps will be deployed in the namespace data-plane. The frontend is configured to call the backend app by service name, and the backend is configured to call an external site, https://httpbin.org/status/200.

Follow these steps to deploy and test frontend and backend applications.

Deploy sample applications

Setup OpenShift Service Mesh

Next, we will create the control plane in the namespace control-plane, join the namespace data-plane to the control plane, and inject sidecars into the frontend and backend applications.

The overall process for setting up OpenShift Service Mesh is quite simple:

Install required operators from OperatorHub — ElasticSearch, Jaeger, Kiali and OpenShift Service Mesh

Create a namespace for the control plane.

Create the control plane by creating a ServiceMeshControlPlane custom resource.

Join the data plane to the control plane by creating a ServiceMeshMemberRoll custom resource.

Follow these steps to create a control plane and join namespace data-plane into service mesh.

Control Plane Creation

Step 8 waits for control plane creation. The initial control plane consists of 12 pods. This number is based on OpenShift Service Mesh 1.1.10, which is based on Istio 1.4.8.

The number of pods in the control plane is significantly reduced in OpenShift Service Mesh 2.0, which uses Istio 1.6 as its upstream project and introduces istiod, a more monolithic approach.

Sidecar Injection

OpenShift Service Mesh controls automatic sidecar injection at the pod level. The control plane automatically injects a sidecar into any pod with the annotation sidecar.istio.io/inject set to "true".

You can do this by editing the deployment with the oc edit or kubectl edit command. Alternatively, you can run the following command to patch the existing deployment.

sidecar injection
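The patch could look like this (a sketch against a live cluster; the deployment names frontend-v1 and backend-v1 are taken from the sample output below, and the annotation goes on the pod template so new pods get the sidecar):

```shell
# Add the sidecar injection annotation to the pod template of each deployment
oc patch deployment/frontend-v1 -n data-plane \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"true"}}}}}'
oc patch deployment/backend-v1 -n data-plane \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/inject":"true"}}}}}'
```

Patching the pod template triggers a rollout, so both deployments restart with the sidecar injected.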

Check that the frontend and backend pods each contain 2 containers. The READY column shows 2/2, meaning there are 2 containers in the pod and both are in the ready state.

oc get pods -n data-plane
#Sample output
NAME READY STATUS RESTARTS AGE
backend-v1-5c45fb5d76-gg8sc 2/2 Running 0 68s
frontend-v1-546485fc46-q2nnf 2/2 Running 0 92s

At this stage, our application will look like the following diagram and traffic is represented with a blue line.

Test application again with cURL

curl -s -w"\n Response Code:%{http_code}\n"  $(oc get route frontend -n data-plane -o jsonpath='{.spec.host}')

This time you will get HTTP response code 503, Service Unavailable. Why?

Network Policies

Why can’t you access your application via the route? This is because OpenShift Service Mesh automatically added a network policy that allows traffic from OpenShift’s router only if the pod carries the label

maistra.io/expose-route: "true"

You can verify this by running the following command:

oc get networkpolicy/istio-expose-route-basic-install -n data-plane -o yaml

Traffic from namespaces with the label network.openshift.io/policy-group: ingress is allowed only if the pod carries the label maistra.io/expose-route: "true".

network policy: istio-expose-route
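Reconstructed from the description above, the generated policy looks roughly like this (a sketch; the exact spec may differ slightly between Service Mesh versions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: istio-expose-route-basic-install
  namespace: data-plane
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress   # OpenShift router namespaces
  podSelector:
    matchLabels:
      maistra.io/expose-route: "true"                  # only pods with this label are reachable
  policyTypes:
  - Ingress
```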

Then we will label the frontend deployment with maistra.io/expose-route: "true" so that the frontend’s route can reach the frontend’s pod.

#Patch deployment
oc patch deployment/frontend-v1 -p '{"spec":{"template":{"metadata":{"labels":{"maistra.io/expose-route":"true"}}}}}' -n data-plane
#Test with cURL again
curl $(oc get route frontend -n data-plane -o jsonpath='{.spec.host}')
#Sample output
Frontend version: v1 => [Backend: http://backend:8080, Response: 200, Body: Backend version:v1, Response:200, Host:backend-v1-5c45fb5d76-gg8sc, Status:200, Message: Hello, Quarkus]

Using Istio Ingress Gateway along with OpenShift Route

If you route traffic into the Service Mesh with OpenShift’s router alone, you can access your deployed applications that are part of the mesh, but you cannot apply policies to incoming traffic at the edge of the mesh, because OpenShift’s router does not have a sidecar and Service Mesh needs a sidecar to do its magic.

In what case do you need Istio to do the jobs for you?

For example, you want to launch a new UI feature and test it with only a portion of users, e.g. only users on the Firefox browser, or you want to implement a zero-trust network by validating the JWT token of each incoming request.

So, to apply Istio’s policies at the edge of the mesh, we need to drive incoming traffic through Istio’s ingress gateway in the control plane.

To do this, we will create an Istio gateway, an Istio virtual service, and an OpenShift route for the frontend application, as in the following diagram.

Gateway

Let’s start with the Gateway. We will create a wildcard gateway in the control-plane namespace with the following YAML.

Please note that you need to replace SUBDOMAIN in the hosts field with your cluster subdomain.

wildcard-gateway
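A sketch of what the wildcard gateway could look like (the resource name, port, and host pattern are assumptions; replace SUBDOMAIN with your cluster subdomain):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: wildcard-gateway
  namespace: control-plane
spec:
  selector:
    istio: ingressgateway      # bind to the default istio-ingressgateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*.apps.SUBDOMAIN'       # wildcard host covering routes under the cluster subdomain
```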

OpenShift’s Route

Next, the Route. We will create a route in the control-plane namespace with the following YAML.

Please note that you need to replace SUBDOMAIN in the host field with your cluster subdomain, and that the target service of this route points to the istio-ingressgateway service.

OpenShift’s Route for Istio Ingress
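A sketch of the route (the name and targetPort are assumptions; replace SUBDOMAIN with your cluster subdomain):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
  namespace: control-plane
spec:
  host: frontend.apps.SUBDOMAIN   # replace SUBDOMAIN with your cluster subdomain
  port:
    targetPort: http2             # assumed port name on the istio-ingressgateway service
  to:
    kind: Service
    name: istio-ingressgateway    # send all traffic for this host to Istio's ingress gateway
```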

Virtual Service

Last, we need to create an Istio virtual service for the frontend in the data-plane namespace. In the following YAML, the hosts field needs to match the route’s URL from the previous step, so replace SUBDOMAIN with your cluster’s subdomain.

The gateways field needs to match the gateway we already created, and the target service for this virtual service points to the frontend service.

frontend virtual service
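A sketch of the virtual service (the gateway reference and service port are assumptions; replace SUBDOMAIN with your cluster subdomain):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
  namespace: data-plane
spec:
  hosts:
  - frontend.apps.SUBDOMAIN          # must match the route's host
  gateways:
  - control-plane/wildcard-gateway   # the gateway created earlier (name assumed)
  http:
  - route:
    - destination:
        host: frontend               # the frontend Kubernetes service
        port:
          number: 8080               # assumed service port
```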

Run the following command to create gateway, route and virtual service.

SUBDOMAIN=$(oc whoami --show-console | awk -F'apps.' '{print $2}')
curl -s https://raw.githubusercontent.com/voraviz/openshift-service-mesh-istio-gateway/main/wildcard-gateway.yaml | sed 's/SUBDOMAIN/'"$SUBDOMAIN"'/' | oc apply -n control-plane -f -
curl -s https://raw.githubusercontent.com/voraviz/openshift-service-mesh-istio-gateway/main/frontend-virtual-service.yaml | sed 's/SUBDOMAIN/'"$SUBDOMAIN"'/' | oc apply -n data-plane -f -
curl -s https://raw.githubusercontent.com/voraviz/openshift-service-mesh-istio-gateway/main/frontend-route-istio.yaml | sed 's/SUBDOMAIN/'"$SUBDOMAIN"'/' | oc apply -n control-plane -f -

Now you can access the frontend application via a new route through Istio’s ingress gateway.

#Test with cURL again
curl $(oc get route frontend -n control-plane -o jsonpath='{.spec.host}')

Chaos Engineering with Istio

Next let’s play with some of Istio’s policies. We will configure the frontend virtual service for chaos engineering: it will return HTTP code 500 when the request contains the header foo with value bar.

This virtual service returns response code 500 for every request whose foo header matches the value bar.

Fault Injection
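A sketch of the fault-injection virtual service (built from the description above; the gateway reference and host are assumptions carried over from the earlier virtual service):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
  namespace: data-plane
spec:
  hosts:
  - frontend.apps.SUBDOMAIN          # replace SUBDOMAIN with your cluster subdomain
  gateways:
  - control-plane/wildcard-gateway
  http:
  - match:
    - headers:
        foo:
          exact: bar                 # only requests carrying header foo: bar
    fault:
      abort:
        httpStatus: 500              # abort with HTTP 500
        percentage:
          value: 100                 # for 100% of matching requests
    route:
    - destination:
        host: frontend
  - route:
    - destination:
        host: frontend               # all other requests pass through normally
```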

Replace existing frontend virtual service and test with cURL again.

oc apply -f https://raw.githubusercontent.com/voraviz/openshift-service-mesh-istio-gateway/main/istio/frontend-virtual-service-chaos.yaml -n data-plane
#cURL with header foo:bar
curl -H foo:bar $(oc get route frontend -n control-plane -o jsonpath='{.spec.host}') -w "\nResponse Code:%{http_code}\n"
#Sample output
fault filter abort
Response Code:500
#cURL with header foo:bar1
curl -H foo:bar1 $(oc get route frontend -n control-plane -o jsonpath='{.spec.host}') -w "\nResponse Code:%{http_code}\n"
#Sample output
Frontend version: v1 => [Backend: http://backend:8080, Response: 200, Body: Backend version:v1, Response:200, Host:backend-v1-5c45fb5d76-gg8sc, Status:200, Message: Hello, Quarkus]
Response Code:200

A/B Testing Deployment

Let’s say we have frontend-v2 and want to launch this version for Firefox only. We need to do 2 things.

First, we modify our virtual service to route based on the HTTP header User-Agent; second, we split the frontend into 2 subsets, one for v1 and one for v2.

To do this we need to create an Istio Destination Rule. A Destination Rule provides information the virtual service uses for routing, e.g. destination subsets, load balancing algorithm, connection pool configuration, etc.

Let’s take a look at our Destination Rule. It defines a subset named v1 for pods with label app equal to frontend and version equal to v1, and a subset named v2 for pods with label app equal to frontend and version equal to v2.
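A sketch of the Destination Rule described above (the resource name is an assumption):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend
  namespace: data-plane
spec:
  host: frontend           # the frontend Kubernetes service
  subsets:
  - name: v1
    labels:
      app: frontend
      version: v1          # pods labeled app=frontend, version=v1
  - name: v2
    labels:
      app: frontend
      version: v2          # pods labeled app=frontend, version=v2
```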

A/B Testing deployment by Header User-Agent

Overwrite the existing frontend virtual service with the following YAML. It defines that every incoming request whose User-Agent header matches Firefox is routed to frontend subset v2; everything else is routed to subset v1.

Remember that you need to replace SUBDOMAIN in the hosts field with your cluster’s subdomain.
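A sketch of the A/B-testing virtual service (the regex match on User-Agent is an assumption consistent with the cURL tests below, where a Firefox User-Agent reaches v2 and a Chrome/Edge User-Agent reaches v1):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
  namespace: data-plane
spec:
  hosts:
  - frontend.apps.SUBDOMAIN          # replace SUBDOMAIN with your cluster subdomain
  gateways:
  - control-plane/wildcard-gateway
  http:
  - match:
    - headers:
        user-agent:
          regex: .*Firefox.*         # Firefox user agents
    route:
    - destination:
        host: frontend
        subset: v2                   # Firefox users get v2
  - route:
    - destination:
        host: frontend
        subset: v1                   # everyone else gets v1
```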

Let’s test this

#Deploy frontend-v2
oc create -f https://raw.githubusercontent.com/voraviz/openshift-service-mesh-istio-gateway/main/istio/frontend-v2-deployment.yaml -n data-plane
#Create Destination Rule
oc apply -f https://raw.githubusercontent.com/voraviz/openshift-service-mesh-istio-gateway/main/istio/frontend-destination-rule.yaml -n data-plane
#Replace virtual service
oc apply -f https://raw.githubusercontent.com/voraviz/openshift-service-mesh-istio-gateway/main/istio/frontend-virtual-service-ab-testing.yaml -n data-plane
#Test with header User-Agent contains "Firefox"
curl -H "User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0" $(oc get route frontend -n control-plane -o jsonpath='{.spec.host}')
#You will get response from frontend-v2
Frontend version: v2 => [Backend: http://backend:8080, Response: 200, Body: Backend version:v1, Response:200, Host:backend-v1-5c45fb5d76-gg8sc, Status:200, Message: Hello, Quarkus]
#Test again with a User-Agent that does not contain "Firefox"
curl -H "User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 Edg/79.0.309.71" $(oc get route frontend -n control-plane -o jsonpath='{.spec.host}')
#You will get response from frontend-v1
Frontend version: v1 => [Backend: http://backend:8080, Response: 200, Body: Backend version:v1, Response:200, Host:backend-v1-5c45fb5d76-gg8sc, Status:200, Message: Hello, Quarkus]

Summary

With OpenShift’s route, you can let traffic into OpenShift Service Mesh in 2 ways.

The first method is using OpenShift’s Route the same way you would without Service Mesh. This works, but you cannot apply Istio’s policies at the edge of the mesh. For example, you cannot do A/B testing deployment based on the incoming request, such as routing Firefox requests to one frontend version and everything else to another. You can still apply Istio policies within the mesh, though: for example, creating a virtual service and destination rule for the backend application with a circuit breaker policy, so the frontend knows when something has happened to the backend service.

The second method is using Istio Gateway along with OpenShift’s Route. This is the preferred method to leverage Service Mesh capabilities including the edge of the mesh.

Have fun with Service Mesh :)

You can find all YAML files here
