Intelligent Traffic Routing with Istio

In this article, I will describe, step-by-step, how to achieve intelligent traffic routing with Istio by writing a simple Spring Boot Microservice. For those of you not familiar with it, Istio is a Service Mesh: I like to think of it as a set of infrastructure services that get injected alongside your Microservices.

Pre-requisites

Before proceeding, your environment should meet the following pre-requisites. The example might work with other versions, but it has only been tested with the stack below:

  • Java 11.0.1+
  • Docker 18+ (Installed via the Docker client for Mac)
  • Kubernetes 1.10.1+ (Installed via brew)
  • Istio 1.0.5+ (Installed following the instructions here)
    • Make sure to install the Istio sidecar injector
    • Make sure you enable the injector by default by following the instructions here.
    • Make sure you install Istio with strict Mutual TLS. This example relies on it.
  • Git 2.20.1+
  • Maven 3.6.0+

Setting the environment up

I’ve used IntelliJ and a Mac for this example. You might want to choose a different Java IDE / OS, although I haven’t tested this example on, say, Windows.

Run the following command:

kubectl get pods -n istio-system

And make sure that all the Pods in the istio-system namespace are in the Running state.

Open a command prompt at your favourite location and clone the source code from GitHub with the following command:

git clone git@github.com:mtedone/istio-experiments.git

Alternatively, you can download a zip of the repository directly from GitHub.

You should now have an istio-experiments folder on your file system. These are its main folders and what they contain:

  • Docker: Contains the Dockerfile
  • istio: Contains the Gateway, the VirtualService and a script to ping the Microservice
  • k8s: Contains the Kubernetes deployment file
  • src: Contains the Spring Boot Microservice.

The Spring Boot Microservice

The Microservice is defined by the IndexController class. It exposes a GET method for the path /hello and returns the value: “Hello from version <version>”, where <version> is injected into the application through an environment variable defined in the Kubernetes deployment file. This follows best practices according to the Twelve-Factor App, as explained in one of my previous blogs. A sketch of what this controller might look like is shown below.
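
The controller itself isn't reproduced in this article, but based on the description above it might look roughly like the following sketch. Only the class name IndexController, the /hello path and the response format come from the article; the annotations, the constructor injection and the bean wiring (see AppConfig later on) are assumptions.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical sketch of the controller described above.
@RestController
public class IndexController {

    // Version string exposed as a Spring Bean by AppConfig (shown later)
    private final String version;

    public IndexController(String version) {
        this.version = version;
    }

    @GetMapping("/hello")
    public String hello() {
        return "Hello from version " + version;
    }
}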

Building the Microservice and the Docker image

From the command prompt within the istio-experiments folder, type the following command:

mvn clean package && docker build . -t istio-experiments:v1 -f ./Docker/Dockerfile  

Please note that the first build might take some time, as the base Java image (AdoptOpenJDK version 11) has to be pulled. To verify that your image has been built successfully, you can type the following command:

docker images | grep istio-experiments

And you should see the istio-experiments image listed with the v1 tag.

Since the version string is passed through an environment variable, there’s no need to change the code to create a second version. We can simply repeat the earlier command, this time tagging the Docker image with v2, as follows:

mvn clean package && docker build . -t istio-experiments:v2 -f ./Docker/Dockerfile  

Now, by typing the command:

docker images | grep istio-experiments

You should now see both the v1 and v2 tags listed.

In real life, you would probably have made some changes to your Microservice before building v2.

Deploying the Kubernetes Service and Deployments

Now we will proceed with the deployment of the Kubernetes Service and Pods for v1 and v2. Below you can find the file containing the deployment directives (k8s/istio-experiments-deployment.yml).

apiVersion: v1
kind: Service
metadata:
  name: istio-experiments-service
spec:
  ports:
  - port: 8080
    protocol: TCP
    name: http
  selector:
    app: istio-experiments
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-experiments-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: istio-experiments
        version: v1
    spec:
      containers:
      - name: istio-experiments-v1
        image: istio-experiments:v1
        imagePullPolicy: IfNotPresent
        env:
          - name: VERSION
            value: VERSION-1
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-experiments-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: istio-experiments
        version: v2
    spec:
      containers:
      - name: istio-experiments-v2
        image: istio-experiments:v2
        imagePullPolicy: IfNotPresent
        env:
          - name: VERSION
            value: VERSION-2
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

The file above contains the following elements:

  • A Kubernetes Service declaration named istio-experiments-service. It exposes port 8080 and selects all Pods carrying the label app: istio-experiments.
  • Two Kubernetes Deployment definitions, one for each version of the Microservice. Please note that each Deployment passes the VERSION environment variable to its container, with the values VERSION-1 and VERSION-2 respectively. This environment variable is declared in the application.properties file under src/main/resources, with a default value of 1.0 in case no environment variable is passed. The AppConfig class in the com.devopsfolks.istio.demo.istiodemo.config package exposes the value as a Spring Bean, which then gets injected into the IndexController (a minimal sketch of this wiring is shown below).
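
The configuration class is not reproduced in this article; the sketch below shows one plausible way to wire the property described above. The property name application.version and the @Value expression are assumptions; only the class name AppConfig, its package and the 1.0 default come from the text.

package com.devopsfolks.istio.demo.istiodemo.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical sketch: application.properties would contain something like
//   application.version=${VERSION:1.0}
// so the VERSION environment variable overrides the 1.0 default.
@Configuration
public class AppConfig {

    @Value("${application.version:1.0}")
    private String version;

    // Exposes the version as a Spring Bean, which is injected into IndexController
    @Bean
    public String version() {
        return version;
    }
}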

To deploy the Service and Deployments, run the following command:

kubectl apply -f k8s/istio-experiments-deployment.yml

Watch the progress of the deployment with the command:

watch -n 3 kubectl get pods

You should see the two Pods, one per version, in the Running state.

Please note that each Pod reports 2/2 ready containers. This is very important, because it indicates that the Istio Sidecar Injector is enabled: each Pod now runs the Istio sidecar proxy alongside our container (istio-experiments v1/v2). If you have forgotten to do this, you can enable automatic injection with the command:

kubectl label namespace default istio-injection=enabled

Then delete the existing Pods so that they are recreated with the sidecar injected:

kubectl delete pod <pod-name>  # e.g. istio-experiments-v1-7b66c6d889-r9nqq for v1

Installing Istio Gateway and VirtualService

The containers we have deployed are not directly reachable, because the Service is of type ClusterIP (the default), which is not reachable from outside the cluster.

In order to make our service reachable from outside the cluster, we need to deploy an Istio Gateway and a VirtualService.

From the command prompt, run the following command to install the gateway:

kubectl apply -f istio/gateway.yml

The file contains the following content:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-experiments-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

It binds to the default Istio ingress gateway and opens port 80 for all host names.

To deploy the VirtualService and the associated DestinationRule, run the following command:

kubectl apply -f istio/virtualservice.yml

The file contains the following content:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-experiments-vs
spec:
  hosts:
    - "*"
  gateways:
    - istio-experiments-gateway
  http:
  - route:
    - destination:
        host: istio-experiments-service
        subset: v1
        port:
          number: 8080
      weight: 90
    - destination:
        host: istio-experiments-service
        subset: v2
        port:
          number: 8080
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istio-experiments
spec:
  host: istio-experiments-service
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      app: istio-experiments
      version: v1
  - name: v2
    labels:
      app: istio-experiments
      version: v2

The file above does the following:

  • First, it defines a VirtualService that applies to all hosts (to match the Gateway). The VirtualService is bound to the Istio Gateway we deployed above and defines two routes, one for v1 and one for v2 of our Microservice.
  • Notice that the host for both routes is the name of the Kubernetes Service: this is how Istio resolves the destination through the cluster's internal DNS. Also notice the two subsets, v1 and v2, defined towards the bottom of the file in the DestinationRule section. We assign 90% of the traffic to v1 and 10% to v2. This could be useful, for instance, for canary releases or blue/green deployments.
  • In the DestinationRule section, we define which Pods each subset maps to: v1 applies to the Kubernetes Deployment labelled app: istio-experiments and version: v1, and similarly for v2. We also specify mutual TLS as the traffic policy. This is very important: since we installed Istio with strict mutual TLS, calls will fail without this declaration.

Testing Istio intelligent routing

From the command prompt, type the following command:

./istio/ping-service.sh

The file contains the following content:

#! /bin/bash

while true; do
  curl -s http://localhost/hello
  echo ""
  sleep 1
done

exit 0

The script keeps invoking http://localhost/hello at one-second intervals and prints the responses. On my system, v1 answers far more often than v2, roughly 9 times out of 10, which matches the 90/10 weights we configured.
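
If you prefer to verify the traffic split from Java rather than bash, here is a small, self-contained sketch. This class is not part of the repository; it assumes Java 11's built-in java.net.http.HttpClient and the same http://localhost/hello endpoint used by the script above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: calls the Microservice a fixed number of times
// and counts how often each version responds.
public class RoutingCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost/hello"))
                .build();

        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < 100; i++) {
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            counts.merge(body, 1, Integer::sum);
        }

        // With the 90/10 weights in the VirtualService we expect roughly
        // 90 "Hello from version VERSION-1" and 10 "... VERSION-2" responses.
        counts.forEach((response, count) -> System.out.println(count + "x " + response));
    }
}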

Conclusions

Why is Istio intelligent routing important? Thanks to this technique, it’s possible to execute canary releases (only a small number of clients matching some criteria get access to a new feature) or blue/green deployments (both versions stay live and traffic is gradually shifted to the new version, until v1 can be retired). This is how modern applications in the digital era are deployed to guarantee 24/7 uptime and frequent production releases. Microservices, Cloud and Service Mesh together make it possible to architect systems for this type of modern deployment, and customers get to continuously enjoy delightful features.