Intelligent Traffic Routing with Istio

In this article, I will describe, step by step, how to achieve intelligent traffic routing with Istio by writing a simple Spring Boot Microservice. For those of you not familiar with it, Istio is a Service Mesh: I like to think of it as a set of services that get injected alongside your Microservices.


Before proceeding, make sure your environment meets the following pre-requisites. This example might work with other versions, but it has only been tested with the stack below:

  • Java 11.0.1+
  • Docker 18+ (Installed via the Docker client for Mac)
  • Kubernetes 1.10.1+ (Installed via brew)
  • Istio 1.0.5+ (Installed following the instructions here)
    • Make sure to install the Istio sidecar injector
    • Make sure you enable the injector by default by following the instructions here.
    • Make sure you install Istio with strict Mutual TLS. This example relies on it.
  • Git 2.20.1+
  • Maven 3.6.0+

Setting the environment up

I’ve used IntelliJ and a Mac for this example. You might want to choose a different Java IDE / OS, although I haven’t tested this example on, say, Windows.

Run the following command:

kubectl get pods -n istio-system

And make sure that the following pods are running:

Open the command prompt at your favourite location and clone the source code on GitHub with the following command:

git clone

Or alternatively you can directly download a zip of the repository from GitHub.

You should now have an istio-experiments folder on your file system. The project file system should look like the following:

This is a list of what the folders contain:

  • Docker: Contains the Dockerfile
  • istio: Contains the gateway, virtual service and a file to ping the Microservice
  • k8s: Contains the Kubernetes deployment file
  • src: Contains the Spring Boot Microservice.

The Spring Boot Microservice

The Microservice is defined by the IndexController class. It exposes a GET method for the path /hello and returns the value: “Hello from version <version>”, where <version> is injected into the application through an environment variable defined in the Kubernetes deployment file. This follows best practices according to the 12 Factors App, as explained in one of my previous blogs.
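Its behaviour can be sketched in plain Java as follows (a sketch: the class and method names here are illustrative; the real IndexController wires the value through Spring and exposes it on GET /hello):

```java
// Plain-Java sketch of what the endpoint returns (illustrative names;
// the real IndexController exposes this via Spring on GET /hello).
public class HelloMessage {

    // VERSION is set by the Kubernetes deployment; "1.0" is the
    // documented default when the variable is absent.
    static String hello(String versionEnv) {
        String version = (versionEnv == null || versionEnv.isEmpty()) ? "1.0" : versionEnv;
        return "Hello from version " + version;
    }

    public static void main(String[] args) {
        System.out.println(hello(System.getenv("VERSION")));
    }
}
```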

Building the Microservice and the Docker image

From the command prompt within the istio-experiments folder, type the following command:

mvn clean package && docker build . -t istio-experiments:v1 -f ./Docker/Dockerfile  

Please note that the first time it might take some time to execute as the Java image is based on AdoptOpenJDK version 11. To verify that your image has been built successfully, you can type the following command:

docker images | grep istio-experiments

And you should see something like the following:

Since the version string is passed in through an environment variable, there’s no need to change the code to create a second version. We can simply repeat the earlier command, this time tagging the Docker image with v2, as follows:

mvn clean package && docker build . -t istio-experiments:v2 -f ./Docker/Dockerfile  

Now, by typing the command:

docker images | grep istio-experiments

You should see something like the following:

In real life, you would have probably made some changes to your microservice and then built v2.

Deploying the Kubernetes Service and Deployments

Now we will proceed with the deployment of the Kubernetes Service and Pods for v1 and v2. Below you can find the file containing the deployment directives (k8s/istio-experiments-deployment.yml).

apiVersion: v1
kind: Service
metadata:
  name: istio-experiments-service
spec:
  ports:
  - port: 8080
    protocol: TCP
    name: http
  selector:
    app: istio-experiments
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-experiments-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: istio-experiments
        version: v1
    spec:
      containers:
      - name: istio-experiments-v1
        image: istio-experiments:v1
        imagePullPolicy: IfNotPresent
        env:
          - name: VERSION
            value: VERSION-1
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-experiments-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: istio-experiments
        version: v2
    spec:
      containers:
      - name: istio-experiments-v2
        image: istio-experiments:v2
        imagePullPolicy: IfNotPresent
        env:
          - name: VERSION
            value: VERSION-2
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

The file above contains the following elements:

  • A Kubernetes Service declaration. Its name is: istio-experiments-service, it exposes port 8080 and it applies to all containers with label: app: istio-experiments
  • Two Kubernetes Deployment definitions, one for each version of the Microservice. Please note that each Deployment passes to the container it defines the environment variable VERSION, with values of VERSION-1 and VERSION-2 respectively. The declaration of this environment variable can be found in the file under src/main/resources; its default value is 1.0, in case no environment variable is passed. The AppConfig class in the com.devopsfolks.istio.demo.istiodemo.config package exposes the value of this environment variable as a Spring Bean, which then gets injected into the IndexController.
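The wiring could look something like this in application.properties (a sketch; the exact property name is an assumption):

```properties
# src/main/resources/application.properties (sketch)
# Read VERSION from the environment, defaulting to 1.0 when absent.
application.version=${VERSION:1.0}
```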

To deploy the Service and Deployments, run the following command:

kubectl apply -f k8s/istio-experiments-deployment.yml

Watch the progress of the deployment with the command:

watch -n 3 kubectl get pods

You should see something like the following:

Please note that for each Pod it says 2/2. This is very important, because it indicates that the Istio Sidecar Injector has been enabled. For each Pod, we now have the Istio Sidecar Proxy and our container (Istio Experiments v1/v2). If you forgot to enable the injector, you can run the command:

kubectl label namespace default istio-injection=enabled

This enables automatic injection. Then kill the pods with the command:

kubectl delete pod <pod-name>  # where <pod-name>, with reference to the example above, would be istio-experiments-v1-7b66c6d889-r9nqq for v1

Installing Istio Gateway and VirtualService

The containers we have installed are not directly reachable, because we have deployed the Service as a ClusterIP service (not reachable from outside the cluster).

In order to make our service reachable from outside the cluster, we need to deploy an Istio Gateway and a VirtualService.

From the command prompt, run the following command to install the gateway:

kubectl apply -f istio/gateway.yml

The file contains the following content:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-experiments-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

It basically opens a gateway on port 80 for all host names.

To deploy the VirtualService and the associated Destination Rules, run the following command:

kubectl apply -f istio/virtualservice.yml

The file contains the following content:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-experiments-vs
spec:
  hosts:
    - "*"
  gateways:
    - istio-experiments-gateway
  http:
  - route:
    - destination:
        host: istio-experiments-service
        subset: v1
        port:
          number: 8080
      weight: 90
    - destination:
        host: istio-experiments-service
        subset: v2
        port:
          number: 8080
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istio-experiments
spec:
  host: istio-experiments-service
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      app: istio-experiments
      version: v1
  - name: v2
    labels:
      app: istio-experiments
      version: v2
  • First, it defines a VirtualService that applies to all hosts (to match the gateway). The VirtualService applies to the Istio Gateway we deployed above and it defines two routes, one for v1 and one for v2 of our Microservice.
  • Notice that the host for both routes is the name of the Kubernetes service. This is how Istio resolves the host name through internal DNS. The other thing to notice is that we have defined two subsets (defined towards the bottom of the file in the DestinationRule section): v1 and v2. We assign 90% of the traffic to v1 and 10% to v2. This could be useful, for instance, for canary releases or green/blue deployments.
  • In the DestinationRule section, we define where to apply the subsets: v1 applies to the Kubernetes Deployment with labels (app: istio-experiments and version: v1), and similarly for v2. We also specify that the traffic policy is mutual TLS. This is very important: without this declaration the calls are going to fail.

Testing Istio intelligent routing

From the command prompt, run the ping script located in the istio folder.

The file contains the following content:

#!/bin/bash

while true; do
  curl -s http://localhost/hello
  echo ""
  sleep 1
done

exit 0

Basically it keeps invoking http://localhost/hello at one-second intervals and prints the output. On my system, I see the following output:

As you can see, v1 is invoked far more often than v2 (roughly 9 times out of 10, matching the 90/10 weights).


Why is Istio intelligent routing important? Thanks to this technique, it’s possible to execute canary releases (only a small number of clients matching some criteria get to access a new feature) or green/blue deployments (it’s possible to have both versions live and gradually drive more traffic to the new version, until v1 can be retired). This is how modern applications in the digital era are deployed to guarantee 24/7 uptime and frequent production releases. Microservices, Cloud and Service Mesh together allow us to architect systems for this type of modern deployment, and customers get to continuously enjoy delightful features.

Spring Boot Microservices on GKE

In this blog I’m going to explain step by step how I have deployed Pulse, a responsive, multi-channel, feedback service. Below you can see Pulse’s landing page:

Pulse Landing Page
Welcome to Pulse

Pulse has been deployed following a Microservices-based architecture. In particular, I’ve used Spring Boot, Docker, Kubernetes and Google Kubernetes Engine (GKE) to deploy this application.

From a high-level perspective, Pulse architecture looks like the following:

  • A User enters the URL: in the browser
  • I’ve purchased this domain from Google Domains and mapped it to a static IP address that I’ve created in Google Cloud (GCP)
  • The static IP address forwards requests to the pulse-ui Microservice. This is a Kubernetes Service of type Load Balancer, which forwards all requests from port 443 to port 8443. Port 8443 is where the Spring Boot pulse-ui micro service is running. This has been configured to run with SSL enabled.
  • The UI appears as above and the user interacts with the application. All data comes from the pulse-backend Spring Boot Microservice, which exposes REST APIs. The UI interacts with the database only through the pulse-backend microservice, and the pulse-backend microservice interacts with the MySQL database (itself running as a MySQL Docker container).
  • Neither the pulse-backend microservice nor the MySQL service is publicly accessible outside the Kubernetes cluster, making this solution quite secure.

Pre-requisites to build and deploy the application on Google Cloud

  • First, I’ve purchased the domain from Google Domains. You can of course choose a different provider. Make sure that your provider allows you to edit the DNS settings for your domain. In particular you will need to map your chosen domain to a static IP address, as explained later.
  • You will also need an SSL certificate. There are various ways of obtaining one: you could either purchase one from a Certificate Authority (CA) or create a self-signed certificate yourself. For production I’d suggest getting a CA certificate; otherwise users will be warned that the traffic from your website is not private and they will likely leave.
  • I’ve deployed my application on GCP, using Load Balancers, Volumes, Docker Image Registry and GKE. There is of course a cost associated with running things in the cloud, especially the Kubernetes Cluster nodes and the Load Balancer. If you don’t want to spend money, this blog post is probably not for you. The current cost all included is about $70/month.
  • You will need to install a Docker native client on your machine. Instructions can be found here. Make sure that whatever client you install allows you to run a local Kubernetes cluster on your desktop, useful for development.
  • You will need the kubectl, docker and gcloud (for the latter instructions can be found here) command line tools. There are plenty of guides online on how to set these up.

Creating a static IP address on GCP

The first thing you will want to do is to create a static IP address on GCP. Once you have an account and have set up a project, the command to obtain it is straightforward (see below). Instructions can be found here.

gcloud compute addresses create pulse-web-static-ip --region <gcp region>

To verify the IP address that GCP has created, you can run the following command:

gcloud compute addresses describe pulse-web-static-ip --region europe-west2

The result can be seen below:

creationTimestamp: '2019-01-06T04:34:40.052-08:00'
description: ''
id: '----'
kind: compute#address
name: pulse-web-static-ip
networkTier: PREMIUM
status: IN_USE

Take note of the public IP address as you will need it next.

Mapping a domain to the static IP address

This step is relatively easy. Below I’m pasting the screenshot from my Google Domains screen that shows the mapping for the domain.

Just map the hostname and the DNS A record to the static IP address and the CNAME www to the root of your domain (e.g.

Obtaining an SSL certificate

As mentioned above, you have two options to get an SSL certificate. You can create and self-sign your own certificate or you can purchase a CA SSL certificate.

Creating your own SSL certificate

There’s a nice blog post on how to do this.

Purchasing your own SSL certificate

There are many SSL certificate providers. Your company might even give one to you. As this application is non-commercial and I’m paying out of my own pocket, I found that SSLMate provides an excellent service: for just $16/year it provides a DV (Domain Validation) SSL certificate. Basically you subscribe on the website, provide your credit card details, install their command line tool (with brew in my case) and in under 60 seconds you can get a fully functional SSL certificate. Once the process is complete, you will obtain the following files:

  • <domain-name>.chain.crt
  • <domain-name>.chained.crt
  • <domain-name>.crt
  • <domain-name>.key

The above picture shows how they look for me. Of these, the two most important ones are the .crt (your certificate) and your .key (the private key used to sign the certificate). Save all files to a secure place as you will now use them to create the SSL store for your Spring Boot UI Microservice.

Generating a PKCS12 keystore file

After much browsing and searching, trial and error, I’ve found this brief but very useful article on how to generate a PKCS12 store file from your SSL certificates. Basically it all boils down to the following command:

openssl pkcs12 -export -out server.p12 -inkey <domain-name>.key -in <domain-name>.crt 

In my case I didn’t have to add the -certfile CACert.crt option. You will of course need to substitute <domain-name> with your own values. In my case the command looked like the following:

openssl pkcs12 -export -out server.p12 -inkey -in

You will be asked to enter a password. Make sure that you keep note of this password as we will need it later, when setting up SSL for the Spring Boot Frontend microservice.

The above command will generate the server.p12 file. Copy this file to the src/main/resources folder of your Spring Boot UI app, but make sure that if you are committing code to GitHub, you add this file to the .gitignore file. It’s best not to commit any sensitive files (e.g. certificates) to a code repository.

Create a Kubernetes Cluster on GKE

Instructions can be found here.

Creating the Kubernetes secret for MySQL

Before installing the MySQL database on GKE, we need to create a Kubernetes secret to hold the MySQL database root password plus the username and password details that the application will use to connect to the database.

There are various ways to achieve this and more details can be found here. We will create a mysql-local-secrets Kubernetes secret.

To create a secret, first you need to create the base64 version of your secrets. Below you will find the commands that I’ve used to create base64 (fictitious) values for, respectively:

  • rootPassword
  • db_user
  • password
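For example (fictitious values as above; the -n flag stops echo from appending a newline, which would otherwise corrupt the encoded value):

```shell
# Base64-encode the (fictitious) secret values.
echo -n 'rootPassword' | base64   # cm9vdFBhc3N3b3Jk
echo -n 'db_user' | base64        # ZGJfdXNlcg==
echo -n 'password' | base64       # cGFzc3dvcmQ=
```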

Then, you want to create a YAML file (I named mine mysql-local-secrets.yml). It looks like the following:
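A sketch of such a file (MYSQL_ROOT_PASSWORD is the key used later by the deployment; the other key names are assumptions following the same pattern):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-local-secrets
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: cm9vdFBhc3N3b3Jk  # base64 of rootPassword
  MYSQL_USER: ZGJfdXNlcg==               # base64 of db_user
  MYSQL_PASSWORD: cGFzc3dvcmQ=           # base64 of password
```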

In a production environment you’d use much stronger passwords.

MYSQL_XXX are keys of the secret file that we will use later when installing the MySQL service on GKE. These files should never be committed to a source code repository or stored in an unsafe place! 

To create the secret, open a command prompt and point to the folder where you have created the YAML file. First ensure that you are pointing to your GKE cluster. I do this by running the command:

kubectl config get-contexts

If your current context (marked by *) starts with gke_, then you’re pointing at your GKE cluster. Otherwise you can point to it by running the command:

kubectl config use-context gke_...

To create the secret for MySQL, now simply run the command:

kubectl apply -f mysql-local-secrets.yml

Obviously, substitute your own file name. After this, you can check that the secret has been created by typing the command:

kubectl get secrets

You can indeed see that mysql-local-secrets has been created.

Creating the MySQL service on GKE

To create the MySQL service on GKE, I created a Kubernetes Deployment that also defines a Service. It’s all contained in a YAML file, which I’ve named: mysql-deployment.yml. I’m going to paste parts of this file (as it’s too long otherwise).

MySQL Deployment file: the Volumes section

To avoid your data being lost every time the MySQL container gets recreated, it’s normally advisable to store data in external volumes. To do this, first we ask GKE to create a Persistent Volume and a Persistent Volume Claim.
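That section might be sketched as follows (the claim name and size are assumptions; on GKE, a PersistentVolumeClaim against the default storage class dynamically provisions the underlying Persistent Volume):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```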

GKE creates a volume and a volume claim for you when you apply this file. To check them, you can access the “Storage” section of your GKE cluster.

And if you click on it, you can see the details:

MySQL Deployment file: The Service and Deployment

The last part of the file defines the MySQL Service and Deployment.

This file defines a Kubernetes Service of type ClusterIP, listening on port 3306 (this can only be reached from within the GKE cluster) and a Kubernetes Deployment that deploys the official MySQL image mysql:5.7 from Docker Hub.

A few things of interest here:

  • I had to pass the args with "--ignore-db-dir=lost+found", otherwise the database wouldn’t start. This might not be the case for you.
  • The environment variable (env -> name) MYSQL_ROOT_PASSWORD and how to use it is documented in the official MySQL image documentation on Docker Hub. The database will be created with user root and the password you have defined here. Remember the secret we created earlier? Here we are saying to GKE that the value for this environment variable must be taken from the mysql-local-secrets secret with key MYSQL_ROOT_PASSWORD. This is an application of the 12 Factors as explained in one of my previous blogs.
  • Notice that we are using the volumes we have defined at the beginning of this file.
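Based on the points above, the Service and Deployment section might be sketched like this (the mysql-pv-claim and label names are assumptions; the image, port, args and secret reference are as described):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ClusterIP
  ports:
  - port: 3306
  selector:
    app: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        args: ["--ignore-db-dir=lost+found"]
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-local-secrets
              key: MYSQL_ROOT_PASSWORD
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
```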

To create the database, just run the command:

kubectl apply -f mysql-deployment.yml

To watch the progress, you can run the command:

watch -n 3 kubectl get pods

And once the pod has been created, you can check the database instance with the following command (it assumes your pod has been named mysql-8498c4899d-w5xxc):

kubectl exec -it mysql-8498c4899d-w5xxc -- /bin/bash

This should land you on the bash shell of your MySQL container. You can then try to login to the database by executing the following command:

mysql -u root -p 

And when asked for the password, use the “clear text” one that you have created earlier as part of the secret (if you’ve followed this blog, you would enter rootPassword).

Et voila’, you are inside a fully working MySQL server.

Deploying the Backend Microservice with Spring Boot REST Data JPA

Now that we have a database in place we can spin up the Microservice that will manage the data in and out. I’ve used a Spring Boot REST Data JPA Microservice. The advantage of this kind of service is that one defines the JPA entities and the JPA repositories and Spring Boot generates all the CRUD REST APIs automatically.

Of course, as the application becomes more complex, there might be the need for some customisations, but the majority of the functionality is there. So this Microservice reads JSON from the UI Microservice (described later) to query/write to the database and returns JSON with database data. The UI Microservice never interacts directly with the database, an important concept in a Microservice-based architecture.

The YAML defines a Service and a Deployment.

The Kubernetes service is of type NodePort, because I don’t actually need this service to be accessible from outside the cluster. Its port (30002) maps to the Spring Boot port (8080). This means that whenever the UI Microservice invokes http://pulse-backend:30002/, the request ultimately reaches the Spring Boot app listening on port 8080.
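The Service part might be sketched as follows (the selector label is an assumption; the 30002 -> 8080 mapping is as described above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pulse-backend
spec:
  type: NodePort
  ports:
  - port: 30002       # the port the UI microservice calls in-cluster
    targetPort: 8080  # the Spring Boot port
  selector:
    app: pulse-backend
```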

Two important things to notice here:

  • Look at the JDBC connection string: it references mysql as the server host. Normally no such host name would be resolvable, but because we have deployed our MySQL service with the name mysql, that is all the pulse-backend Microservice needs in order to resolve the server name through Kubernetes’ internal DNS. Nobody outside the cluster can resolve the same address. One has to love the magic of Docker and Kubernetes!
  • The second part is the environment variables we pass to our Spring Boot app. Three are of particular importance: the prod profile (as I have different configurations based on the active profile), spring_datasource_username and spring_datasource_password. Remember the MySQL secret we set up earlier? We are using it here to pass Spring Boot the username and password it should use to initialise its connection to the database. Again, this is an implementation of the 12 Factor App: my Spring Boot app doesn’t have a single piece of sensitive configuration (in fact, no sensitive data at all) in the source code; it’s all managed through secrets. This way I can write the code once and deploy it anywhere. This also follows good DevOps principles because it can be highly automated.

Deploying the SSL Frontend Microservice with Spring Boot

For the frontend, I also used Spring Boot, this time in conjunction with Thymeleaf, Bootstrap and Spring Security. The job of the frontend Microservice is very simple: display the UI to the users and ask the pulse-backend Microservice to store/read data to/from the database in JSON format, by invoking its REST APIs.

The UI must be accessible externally. I’ve tried various configurations through trial and error.

  • First I tried with the Spring Boot app running on HTTP, fronted by a Kubernetes Ingress running on SSL. That worked except for the security bit: when the user tried to perform admin tasks, the /login action was somehow redirected to HTTP instead of HTTPS, so I had to abandon this avenue.
  • The second configuration was instead to deploy the frontend microservice as a Kubernetes Service of type LoadBalancer. In this configuration, I protected the Spring Boot app with SSL and the Kubernetes Service of type LoadBalancer simply mapped port 443 (HTTPS) to port 8443 (which is the port I’ve told the Spring Boot app to run on). This configuration worked and it’s the one that we’re going to see now. With this configuration, GKE automatically creates a Load Balancer for you and even a public IP address if you haven’t specified one. Very seamless.

As mentioned in the opening of this blog post, you should now have a server.p12 store file in your file system. Make sure to place this under src/main/resources of your Spring Boot app, as the following image shows:

Then make sure your file contains the following:
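The relevant entries can be sketched as follows (these are the standard Spring Boot SSL property names):

```properties
server.port=8443
server.ssl.key-store-type=PKCS12
server.ssl.key-store=classpath:server.p12
server.ssl.key-store-password=${SSL_PASSWORD}
```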

Let’s see what happens here:

  • First I told Spring Boot to run the app on port 8443
  • Secondly, I instruct the SSL engine that the store type is PKCS12
  • Then I give the SSL engine the path to the server.p12 file as per above. Notice the classpath: prefix to instruct Spring Boot to look into the class path (src/main/resources is in the class path).
  • When we created the server.p12 file at the beginning of this blog post, we had to enter the password. The server.ssl.key-store-password property must have that value. Now, following 12 Factors and good practices, we don’t want to enter any sensitive configuration information in the source code, therefore we use an environment variable ${SSL_PASSWORD}. Can you guess how we will be passing this value to the Microservice? Through a Kubernetes secret, exactly!

Creating the Pulse UI Kubernetes Secret

Similarly to what we have done for the MySQL database, I created another YAML file with some secrets I want to pass to my Spring Boot microservices. The file (with obviously fake values) looks like the one below:
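A sketch with fake values (the secret name is an assumption; SSL_PASSWORD is the key discussed below):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pulse-ui-secrets
type: Opaque
data:
  SSL_PASSWORD: ZmFrZS1zc2wtcGFzc3dvcmQ=  # base64 of 'fake-ssl-password'
```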

The SSL_PASSWORD key is the one I will use when deploying this UI Microservice as Kubernetes Service, as explained below.

Deploying the UI Microservice

Similarly to the pulse-backend Microservice, I deployed this Spring Boot Microservice as a Kubernetes Service and Deployment, however with one important difference. This Kubernetes Service is of type LoadBalancer and I specify the static IP address that I created at the beginning of this post.

Here are the things to notice:

  • The Service type is LoadBalancer
  • I specified my static IP address with the loadBalancerIP property
  • The service redirects traffic from 443 (HTTPS) to 8443 (the port I asked Spring Boot to run on). Since my domain is mapped to my static IP address, this allows me to invoke: which will ultimately result in https://pulse-ui:8443/
  • Again, I’m passing the active profile as an environment variable and I pass a couple of passwords through secrets.
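The Service definition might be sketched as follows (the selector label is an assumption; substitute your own static IP):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pulse-ui
spec:
  type: LoadBalancer
  loadBalancerIP: <your-static-ip>
  ports:
  - port: 443         # HTTPS, exposed by the load balancer
    targetPort: 8443  # the Spring Boot SSL port
  selector:
    app: pulse-ui
```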

That’s all there is to it folks!

‘Till the next time, roger and out!

My personal mission statement

My mission is to bring joy, happiness, meaning and vocational purpose in the life of others.

To fulfil this mission, I will abide by the following foundations: 

  • Love. Love is the most important foundation of all. It’s what drives everything I do. It’s what connects all that has ever been and that will ever be. It’s humanity’s social glue.
  • Kindness. Kindness is one of Love’s manifestations. Being kind to other human beings brings them joy and happiness. 
  • Selflessness. In order to bring joy and happiness in the lives of others, one has to place other people’s lives at the centre of one’s existence.
  • Peace. Love can only be shared when peace reigns in people’s hearts and mind. 
  • Righteousness. Love is true and to love also means to be righteous and stand firm for what is right. 
  • Forgiveness. To give is to receive and forgiveness is, after personal sacrifice, the highest form of giving. 
  • Know-how. I will continually seek to increase my knowledge of the things that can help me fulfil this mission.
  • Anchorage. I will be a reliable anchorage for my family to feel loved, safe and valued.
  • Followership. I will serve my leaders to help them be successful in their mission.
  • Leadership. I will provide my followers with effective leadership to empower them to be successful.
  • Health. I will do whatever it takes to live a healthy life so as to bring positive energy in the people I interact with.

I will achieve my mission through the following roles:

  • Husband. My wife is the most important person in my life and her happiness is my priority.
  • Father. My children are, after my wife, the most important people in my life. My mission is to nurture them with love and create the right environment for them to find their mission in life, live with joy, happiness and purpose. I seek to be remembered as a loving, caring, dedicated and inspirational father. 
  • Technology Leader. I will lead and grow the highest performing technology teams in the world to coach them as agents of positive change in themselves and others.
  • Social brother.  I will help my fellow social brothers and sisters who are seeking joy, purpose and happiness through taking concrete actions, sharing my foundations, leadership and followership experiences.
  • Student and Teacher.  I will keep learning for as long as I am able, with the goal of sharing these learnings with my fellow brothers and sisters.
  • Amateur Athlete. I will keep myself healthy through physical exercise to increase the chances to stay around my loved ones for as long as I can as well as to live a more productive life.
  • Elder. I will live and share this mission with my family and society until the day I will return to the Creator. I will spend my late years scanning the future for inspiration, remembering the past for closure and enjoying the present as a legacy for those who I love and leave behind.

Using the break to brush up on skills

Like probably many of you, I’m enjoying the Christmas break. It’s been a busy year, with 2019 looking even busier, and a two-week holiday break was exactly what was needed.

This time of the year is particularly welcome as we get to spend more time with our families and do all the things that we normally have no time to do.

This period of the year is also welcome because I can spend some quality time brushing up my skills. For example, during this break, I’ve decided to follow a particular learning path, with the overall outcome of learning various branches of AI, including Machine Learning and Deep Learning. In my line of work, keeping up to date with technology is a must, and technology has never evolved at a faster pace than it does today.

I have to thank the company that employs me for making available a Safari Book Online (SBO) account. I’d strongly recommend any company wishing to have the best human capital to do the same. The quantity and quality of courses on SBO is impressive.

My Learning Path to AI

AI is a generic term which encapsulates several concepts. In my line of work it’s ultimately about machines learning from data, whether supervised (learning from labelled training data) or unsupervised (finding structure in unlabelled data). If machines have the ability to learn from data and therefore modify their behaviour based on such learning, it’s possible to apply the outcomes to a number of fields, from medical science, to fighting financial crime, to auto-trading and so on.

AI topics are quite difficult in nature, involving a lot of Math (mostly probability and statistics but also advanced algebra, combinatorial Math and so on). There’s no denying that the pioneers in this space had a lot to do to even get to the basics, but like everything in science, new ideas are built on the “shoulders of giants” and today’s AI landscape looks a lot more approachable than at its inception.

More approachable doesn’t mean easy. While the major public cloud providers like Google, Amazon and Microsoft all support AI activities with targeted services, for any non-basic use case there is still a lot to learn. In the case of Google, they have even created a dedicated hardware processor, the TPU, for their TensorFlow library. Some of their services (for example Natural Language Processing (NLP)) don’t even require users to know anything about AI; they offer APIs that provide “AI as a service”, such as the overall sentiment of a piece of text (which, for example, can be used to automate some call centre classification, reducing processing costs), or image categorisation.

If one looks at the AI landscape today, a few areas emerge as essential to learn:

  • Machine Learning and Deep Learning
  • Python
  • R
  • Math

I’ve decided to start with Python as I didn’t personally know this language at all. In particular I’m going through the following path:

  • Learn the basics of Python
  • Learn Object Oriented Programming with Python
  • Learn Web Development with Python
  • Learn Database management and Data Analysis with Python
  • Learn AI with Python
  • Learn / re-learn Algebra and Probability, potentially some combinatorial Math as well
  • Obtain the Python PCEP, PCAP and eventually the PCPP certifications
  • Learn Google and AWS AI services, including TensorFlow
  • Learn R

Looking at Safari Book Online, below is a table of the courses I’m going through:

| Topic | Learning Academy | Course | Comments | Completed | Rating |
|---|---|---|---|---|---|
| Python Basics | Safari Book Online | MTA 98-381: Introduction to Programming Using Python | For beginners; introduces all the basics. It can be studied as part of the MTA 98-381 exam preparation | Yes | *** |
| Python Object Oriented Programming | Safari Book Online | Python Beyond The Basics – Object Oriented Programming | An essential course for anyone wanting to work with Python professionally. Personally, I’d have liked the course to be based on Python 3, but it was based on Python 2 | Yes | **** |
| Python Programming Language | Safari Book Online | Python Programming Language | Potentially the best all-round Python course I have found on SBO so far. Highly recommended to anyone wanting either to learn Python 3 in its entirety or to brush up their skills | In Progress | ***** |
| Learn Web Development with Python | Safari Book Online | Web Development in Python with Django: Building Backend Web Applications and APIs with Django | | In Progress | |
| | Safari Book Online | Web Applications with Python and the Pyramid Framework | | | |
| Python RESTful APIs | Safari Book Online | Building REST APIs with Python | | | |
| | Safari Book Online | Building RESTful Python Web Services with Flask | | | |
| | Safari Book Online | Building RESTful Python Web Services with Django | | | |
| Python Data Science | Safari Book Online | Data Acquisition and Manipulation with Python | | | |
| | Safari Book Online | Master the Fundamentals of SQL with Python | | | |
| | Safari Book Online | Working with Big Data in Python | | | |
| | Safari Book Online | Data Science Fundamentals Part 1: Learning Basic Concepts, Data Wrangling, and Databases with Python | | | |
| Python and R | Safari Book Online | Learning Path: Step-by-Step Programming with Python and R | | | |
| R | Safari Book Online | Learning Path: R Programming for Data Analysts | | | |
| Python Data Visualisation | Safari Book Online | Data Visualization with Python: The Complete Guide | | | |
| Python Data Science | Safari Book Online | Python Data Science Essentials | | | |

I will update the list as I go along. I hope you found this article useful.

How about you? What does your learning path look like for 2019?

Happy Festivities!

How to run Pods and Services locally on Kubernetes

In order to run something on k8s we need to create a runnable unit: a Pod. Pods can then be exposed to clients via a Service.

What is a Kubernetes Pod?

A Kubernetes Pod is the smallest deployable unit on Kubernetes. It is made up of one or more Docker containers, generally tightly coupled, which are meant to be part of the same deployable unit. As such, containers within a Pod share the Pod’s IP address, can communicate with each other via localhost, and share other resources, e.g. the network namespace and volumes.

What is a Kubernetes Service?

A Kubernetes Service is an abstraction that exposes a logical set of Pods. It is generally used as the entry point for clients of exposed containers within a Pod.
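As a minimal sketch (the name helloworld-service and the app: helloworld selector are illustrative, matching the labels used for the Pod later in this post), a Service definition might look like this:

```yaml
# Minimal Service sketch (names illustrative): routes traffic on
# port 3000 to any Pod carrying the label app: helloworld
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  selector:
    app: helloworld
  ports:
  - port: 3000        # port the Service listens on
    targetPort: 3000  # container port traffic is forwarded to
```

The selector is what binds a Service to its Pods: any Pod whose labels match becomes a backend for the Service.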

How to define a Pod?

We describe Pods via YAML files. For this post I’ve created a pod-helloworld.yml file.

apiVersion: v1
kind: Pod
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-course
    image: alzamabar/k8s-course:v1
    ports:
    - containerPort: 3000

The above definition creates a Pod named helloworld that runs the Docker image alzamabar/k8s-course:v1, with the container listening on port 3000.

Having a running k8s engine

There are various ways one can run k8s locally. As of the time of writing (July 2018) I’m using the Docker Edge native client for Mac (which will eventually make its way to the stable version). However, other options are available, e.g. Minikube.

Once you’ve chosen the k8s engine you want to use, make sure it’s started.

Creating the Pod from its definition

In order to create the runnable unit on k8s, the following command can be used (assuming there is a pod-helloworld.yml file in the current directory):

kubectl create -f ./pod-helloworld.yml


Accessing the pod through port-forwarding

There are various ways of exposing the Pod. One is through port-forwarding. The command is the following:

kubectl port-forward <pod-name> 8081:3000

Running that on my laptop, output similar to the following appears on the command line:

Forwarding from 127.0.0.1:8081 -> 3000
Forwarding from [::1]:8081 -> 3000

K8s is now listening on localhost:8081 and will forward all traffic to port 3000 (the container port, as defined in the Pod definition file above).

Opening the address http://localhost:8081 in my browser yielded the following output:


Exposing a service on Kubernetes running locally

If you’re running k8s locally (e.g. via the Docker native client), you can create a service by exposing the Pod’s port with type NodePort as follows:

kubectl expose pod helloworld --type=NodePort --name=nodehello-service

This command creates a k8s service by exposing the Pod we defined earlier as a NodePort type and assigning the service the name nodehello-service.

Upon running this command I get the following output:


I can now inspect the service by running the following command:

kubectl get service


I can see that the k8s service is now up and running. If I want to access it, I need to access localhost on the service port, shown under the PORT(S) column. The NodePort here is 30189; therefore I can access this service by visiting http://localhost:30189 in a browser.
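If you want the NodePort in a script rather than reading it off the table, you can parse the PORT(S) column. The sketch below runs against a captured sample of the kubectl get service output (the cluster IP and age are illustrative); on a live cluster you would pipe kubectl get service nodehello-service into the same awk expression:

```shell
# Sample `kubectl get service` output (values illustrative)
sample='NAME                TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
nodehello-service   NodePort   10.104.73.48   <none>        3000:30189/TCP   1m'

# PORT(S) is the 5th column; the NodePort is the number between ":" and "/"
node_port=$(printf '%s\n' "$sample" | awk 'NR==2 { split($5, p, /[:\/]/); print p[2] }')
echo "$node_port"   # prints 30189
```

The same split works whatever port Kubernetes happens to assign, since the NodePort always appears between the colon and the slash.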


You can achieve the same by viewing the service description through the following command:

kubectl describe service nodehello-service

On my laptop I get the following output:


As you can see, the LoadBalancer Ingress is localhost and the NodePort is 30189.
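For reference, the imperative kubectl expose command used above is roughly equivalent to applying a declarative manifest along these lines (a sketch: the ports are assumed from the Pod definition, and the actual NodePort, such as 30189 here, is normally auto-assigned by Kubernetes from its NodePort range):

```yaml
# Declarative sketch of the service created by `kubectl expose`
apiVersion: v1
kind: Service
metadata:
  name: nodehello-service
spec:
  type: NodePort
  selector:
    app: helloworld   # assumed label on the Pod
  ports:
  - port: 3000        # Service port inside the cluster
    targetPort: 3000  # container port
```

Keeping the Service as a versioned YAML file makes the setup repeatable, which is generally preferable to ad-hoc expose commands.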

Deleting the Kubernetes Pod and Service

To delete the Pod, it’s enough to run the following command (assuming there is a pod-helloworld.yml file in the current directory):

kubectl delete -f ./pod-helloworld.yml


At this point the Pod is deleted but the service is still there. If you want to delete the service as well, you can run the command:

kubectl delete service nodehello-service