How to run Pods and Services locally on Kubernetes

In order to run something on k8s we need to create a runnable unit. The smallest such unit is a Pod, which we can then expose through a Service.

What is a Kubernetes Pod?

A Kubernetes Pod is the smallest deployable unit on Kubernetes. It consists of one or more Docker containers, generally tightly coupled, which are meant to be part of the same deployable unit. As such, containers within a Pod share the Pod's IP address, can communicate with each other via localhost, and share other resources, e.g. network and volumes.

What is a Kubernetes Service?

A Kubernetes Service is an abstraction that exposes a logical set of Pods. It is generally used as the entry point for clients of exposed containers within a Pod.

How to define a Pod?

We describe Pods via YAML files. For this post I’ve created a pod-helloworld.yml file.

apiVersion: v1
kind: Pod
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-course
    image: alzamabar/k8s-course:v1
    ports:
    - containerPort: 3000

The above manifest creates a Pod named helloworld that runs the Docker image alzamabar/k8s-course:v1 and exposes container port 3000.

Having a running k8s engine

There are various ways one can run k8s locally. As of the time of writing (July 2018) I’m using the Docker Edge native client for Mac (which will eventually make its way to the stable version). However, other options are available, e.g. Minikube.

Once you’ve chosen the k8s engine you want to use, make sure it’s started.

Creating the Pod from its definition

In order to create the runnable unit on k8s, the following command can be used (assuming there is a pod-helloworld.yml file in the current directory):

kubectl create -f ./pod-helloworld.yml


Accessing the pod through port-forwarding

There are various ways of exposing the Pod. One is through port-forwarding. The command is the following:

kubectl port-forward <pod-name> 8081:3000

Running that on my laptop, kubectl confirms on the command line that forwarding is active: k8s is now listening on localhost:8081 and will forward all traffic to port 3000 (which is the container port, as defined in the above Pod definition file).

Opening the address http://localhost:8081 in my browser displayed the application’s response.


Exposing a service on Kubernetes running locally

If you’re running k8s locally (e.g. via the Docker native client), you can create a service by exposing the Pod’s port with type NodePort as follows:

kubectl expose pod helloworld --type=NodePort --name=nodehello-service

This command creates a k8s service by exposing the Pod we have already described as a NodePort type, assigning the service the name nodehello-service.
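The same service can also be defined declaratively rather than via kubectl expose. Here is a minimal sketch of such a manifest, assuming the Pod carries the app: helloworld label from the Pod definition above (the file name below is my own choice):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodehello-service
spec:
  type: NodePort
  # Route traffic to any Pod carrying this label.
  selector:
    app: helloworld
  ports:
  - port: 3000        # port the Service listens on inside the cluster
    targetPort: 3000  # container port defined in the Pod manifest
```

Saved as, say, service-helloworld.yml, it can be created with kubectl create -f ./service-helloworld.yml. Kubernetes assigns a random port from the NodePort range (30000–32767 by default) unless one is pinned explicitly with a nodePort field.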

Upon running this command, kubectl confirms that the service has been created.


I can now inspect the service by running the following command:

kubectl get service


I can see that the k8s service is now up and running. If I want to access it, I need to access localhost on the service port, indicated under the PORT(S) column. In my case the port is 30189, therefore I can reach this service by visiting the following address in a browser: http://localhost:30189


You can achieve the same by viewing the service description through the following command:

kubectl describe service nodehello-service

On my laptop, the service description shows that the LoadBalancer ingress is localhost and the NodePort is 30189.

Deleting the Kubernetes Pod and Service

To delete the Pod, it’s enough to run the following command (assuming there is a pod-helloworld.yml file in the current directory):

kubectl delete -f ./pod-helloworld.yml


At this point the Pod is deleted but the service is still there. If you want to delete the service as well, you can run the command:

kubectl delete service nodehello-service


Kawasaki Z1000SX vs Versys 1000 for long commutes

This morning I had the pleasure of test-riding the Kawasaki Z1000SX. The reason I wanted to try it is that I wasn’t too satisfied with the turbulence and fuel consumption of my Kawasaki Versys 1000, and I wanted to know whether a sportier motorbike would be better for long commutes.

For commuting, what I’m looking for in a motorbike are the following properties:

  • Low or zero turbulence at speeds of 60+ mph
  • Riding comfort (especially butt and lower back)
  • Ease of handling on urban roads, especially when busy with traffic
  • Fuel consumption

First part of my commute: Versys wins 3-0

In the first part of the test ride, I rode all the way to my workplace. The commute involved motorways as well as urban roads. The catch is that I rode the Z1000SX as if it were my Versys and compared the two.

The Versys was better at the following: turbulence, riding comfort and ease of handling. The Z1000SX is “harder”, more “Spartan”, more aggressive and probably more “powerful” than the Versys, although they share the same engine. While these qualities might make it fun to ride for more “exciting needs”, the Versys is a more comfortable motorbike for long commutes.


Second part of my commute: Z1000SX wins on the motorway

The second part of my commute, riding back to the Kawasaki dealer, is when I discovered the true appeal of the Z1000SX. While on the motorway, I tried “ducking in”: pushing my bottom towards the pillion seat, lowering my head, closing in my arms. I then opened the throttle and… WOW! What I experienced was amazing. This is when the Z1000SX comes into its element and is most fun to ride. I felt no turbulence, the motorbike shot off like a rocket, it was stable on the road even at high speeds and I could handle it fine.

I then realised that during the first part of my commute I was riding the Z1000SX as if it were my Versys 1000, but the two motorbikes are different. The centre of gravity of the Versys 1000 is higher than the Z1000SX’s, therefore with the latter one *has* to duck in to get the best from it.

Reviewing the four parameters above, I’d say that the two motorbikes have both pros and cons:

  • Low or zero turbulence at speeds of 60+ mph. The Versys 1000, having a higher screen, hits turbulence at about 65+ mph; the Z1000SX at 55+ mph. With the Z1000SX this can easily be fixed by ducking in.
  • Riding comfort (especially butt and lower back). The Versys seat felt more comfortable overall. However, comfort seats (with gel inserts) are available for the Z1000SX. The suspension was also better on the Versys 1000, delivering a more “plush” experience over road bumps and the like.
  • Ease of handling on urban roads, especially when busy with traffic. Here the two were pretty much equivalent, with one noticeable difference. The Versys 1000 tips to the side (as the effect of counter-steering) much more easily than the Z1000SX. I guess that’s due to two factors: the height and the handlebar width (both greater on the Versys 1000). Coming from the Versys 1000, I felt I really had to “force” the counter-steering for the Z1000SX to dip, which makes it more work to handle, especially on urban roads. The Z1000SX felt more stable at higher speeds overall, again I believe due to the lower centre of gravity, which made me feel safer at higher speeds.
  • Fuel consumption. Here the two were practically equivalent. I got about 48 mpg from both, although I have to say I pushed the Z1000SX much harder than my Versys.

My motorcycle commute to work

Hi, below you’ll find my first Video Blog (or vlog). As this is my first attempt, there are many things that could be improved, e.g. the camera angle was tilted a bit to the right.

In this vlog I wanted to discuss and show a few topics:

  • Why I chose a one-litre bike as my first bike
  • A review of the Kawasaki Versys 1000
  • How does the Kawasaki Versys 1000 fare as a commuting vehicle?
  • How does the Kawasaki Versys 1000 handle on motorways and urban roads?
  • Why motorbikes for commuting at all?
  • My motorcycle cleaning routine and some suggestions for products
  • General thoughts

The video is over 1 hour long, so be prepared.

How to keep a forked Git repository in sync with the original

A (self) reminder

I’m writing this brief note more as a (self) reminder than anything else. The reality is that I arrived at a point in my professional evolution when I do less coding and more product and people leadership.

This does not mean, however, that I don’t code at all. Whenever my day-to-day activities leave me some energy, I try to learn a bunch of things, from Kubernetes, to Angular, to Node, to Machine/Deep Learning and AI, to the latest in Cloud technology.

So my current pattern looks something like the following infinite loop:

  • Start learning a new technology on my bucket list
  • Start cloning the instructor’s code examples from Github
  • [Something else happens, e.g. business travel, major work delivery, tiredness, keep the house lights running and so on]
  • Forget everything you learnt at the beginning
  • [Some more spare time makes its way into my life]
  • Start from the beginning

When starting to learn a new technology, I tend to rely on video courses because I find them easier and lighter to follow. Generally the instructor has some code on Github, and the course begins by asking students to clone the author’s Github repository.

However, I normally want to make changes to the author’s code in order to better understand what I’m learning, so I fork it and create my own repository; this way I can push changes and experiments while keeping track of everything I’ve changed. This is where knowing how to keep a forked Github repo in sync with the original author’s version is not only handy but necessary.

The procedure for doing so is really easy and Github documents it in two articles: Configuring a remote for a fork, and Syncing a fork.

I don’t intend to rewrite those two articles. I’m writing this post mainly to remind myself (and possibly others) of a quick way to condense them into one set of instructions, so that when I go back to learning something I temporarily set aside, I don’t need to go chasing Github articles around; I can refer to my own blog.

Configuring a remote for a fork

The first thing I will normally do is clone my own fork, with a command similar to the following (note that I use SSH; some users might prefer HTTPS — substitute your own fork’s address):

git clone git@github.com:<my-username>/my-project.git

The command above will create a “my-project” folder at the path where the command was run.

Adding the remote to the author’s original Github repo

The next step is to add to my cloned repository the information of where the author’s original Github repository is located. This is done by adding a remote to the Git project configuration. One needs to decide what to call the author’s original repository; the Github articles mentioned above suggest the name upstream. So we add remote information to our project that says: the code you were forked from resides at this address, and I’ll call it upstream. The command to add such a remote is thus:

git remote add upstream git@github.com:<author-username>/my-project.git

Now that my local Github repository configuration knows where the original code resides, I want to fetch all the original code with the following command:

git fetch upstream

This command downloads all commits, files and refs from the remote repository to the local repository. If you want to know more about what git fetch does, refer to this article from Atlassian.

At this point I’m normally in the situation where the original author’s repository has moved ahead of my fork, and I want to bring my branch (normally master) in sync with the author’s. I then execute the following commands (assuming I want to sync the master branch):

git checkout master            --> This is my local master branch
git merge upstream/master      --> This merges the author's original with mine

These commands merge all code from the original author’s master branch into my own master branch. Finally, if I’m happy with the latest changes, I can push them to my repository with the command:

git push
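The whole sync flow above can be condensed into one runnable sketch. To keep it self-contained (no Github account needed), two throwaway local repositories stand in for the author's original repo and my fork; all names and commit messages here are invented for illustration:

```shell
#!/bin/sh
# Simulate the fork-sync flow with two throwaway local repositories.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# The author's original repository, with an initial commit.
git init -q -b master upstream-repo
git -C upstream-repo -c user.email=author@example.com -c user.name=author \
    commit -q --allow-empty -m "author: initial commit"

# "Forking" it: my copy starts as a clone.
git clone -q upstream-repo my-fork

# Meanwhile the author moves ahead with new work.
git -C upstream-repo -c user.email=author@example.com -c user.name=author \
    commit -q --allow-empty -m "author: new work"

# Sync the fork: add the upstream remote, fetch, check out master, merge.
cd my-fork
git remote add upstream "$workdir/upstream-repo"
git fetch -q upstream
git checkout -q master
git -c user.email=me@example.com -c user.name=me merge -q upstream/master

# The fork's master now contains the author's latest commit.
git log --oneline
```

The final git log lists both the author's initial commit and the new one pulled in through upstream, exactly the state that git push would then publish to my fork.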


Where to start with a new idea?

I’ve been off the wire for quite some time.

A demanding year at work, a new life enriching our family and a busy life in general all meant that I arrived at home just with enough strength to help my daughter with her homework, eat and go to bed.

If you’re thinking that, since you’re reading this article, the situation must be different now, I’ll have to disappoint you. However, I chose to make an extra effort and keep learning, as that’s the only way to move forward.

To help myself in this journey I decided to implement a new product. Its concept is very simple but I wanted it to touch most of the technology aspects that I want to stay current with. My goal is to learn new technologies and techniques, re-learn and validate rusty ones and prepare material for a new development training course. Additionally I’d like to offer this product to the public as a SaaS (Software As A Service).

The product will have the following characteristics:

  • Use an API-led and Microservices based architecture
  • Use Docker and Kubernetes as container technology
  • Be Cloud native, deployed as a SaaS
  • Use Bootstrap 4 for the front-end
  • Use some database in the cloud (I haven’t decided yet if NoSQL or relational)

When thinking which activity to tackle first, I found myself going backwards and forwards between API, front-end, Cloud setup, requirements, Continuous Delivery and so on.

I started tackling the API but, after upgrading my whole API tech stack to the latest versions, I thought about it again and decided that maybe I should wire-frame the solution first to help with the journeys.

Ultimately, though, I decided that my first activity would be to define the user journeys this product will address. As for the technique, I decided to use the User Story Mapping technique introduced by Jeff Patton in his book:


In one of my Twitter conversations with Jeff I asked if he could recommend any tools to work with User Story Mapping and his reply was:


The full Tweet can be found here: Tweet with Jeff Patton

So I decided to try Cardboardit. One of the primary reasons was that it was free for public boards and there’s nothing better than free when you’re learning.

So I created an account and created a board for my product, which I’ll call Pulse. Here’s a screenshot:


You can view the board online here.

So, why requirements first? I think they have several advantages:

  • They provide the foundations to build a shared understanding between all stakeholders involved in Product delivery
  • If written properly, they are independent of the technology solution. They should survive evolving technologies
  • They can be understood easily by people new to the team, thus making the onboarding of new members easier
  • They are the foundation on which to build Behaviour Driven Development (BDD) automated acceptance criteria. We should indeed write journeys like the ones above as the result of conversations with interested stakeholders. They should emerge by asking the right questions, typically “What if…”.
  • They are long-term. With this I don’t mean that requirements don’t change, as they inevitably do. What I mean is that I can go back to them some time later and remember with ease what I wanted to build.


I’ll stop here for now. The next step for me will be to implement the simplest end-to-end journey, so I can prove the infrastructure and technology to lay the foundations for my Continuous Delivery Pipeline.