In one of my previous posts I showed how to write a bash script to automatically log in to your AWS boxes. However, there’s much more that a typical user would want to do with AWS infrastructure. For example, I’m hosting this WordPress website on AWS. It runs on two servers in two different Availability Zones. The code is synchronised between the two servers through S3. Two cron jobs ensure that the content is synchronised from the WordPress servers to S3 and that the code from S3 is synchronised back to each server (in case I want to add additional servers or need to spin up a new one because an existing one died).

When I install or update a WordPress plugin, the relevant code ends up on one of the two servers. Depending on where the load balancer routes the next request, I might see content from the updated server or from the one that hasn’t got the updated plugin yet. This leads to inconsistency. So I needed a way to stop the cron job on all servers before running the plugin update, as well as a way to run commands on the remote AWS boxes to synchronise the content of my WordPress website with S3.

In this post I’ll show you how I achieved this. All the code presented in this post is intended for Mac (OS X) but with minor adaptations it could run on any Unix-based system.

Prerequisites
  • Install Ansible. Instructions on how to install Ansible can be found here
  • Install AWS CLI. Instructions on how to install the AWS CLI can be found here
  • Configure AWS CLI. Instructions on how to configure the AWS CLI can be found here
  • Both ansible and aws commands should be available in the $PATH
  • You’ve created a $HOME/.ansible.cfg file specifying the location of your inventory. The latest default ansible.cfg file can be found here. I’ve saved the content of that file as $HOME/.ansible.cfg and the only line I’ve changed is the inventory line, pointing it at the Ansible hosts file I want to use: inventory = $HOME/runtime/ansible/hosts. Remember to uncomment the “inventory” entry or Ansible won’t recognise it. For more information on where Ansible looks for configuration files, you can visit this page
  • Add your .pem keys to your account by running: ssh-add <pem file>

Downloading and installing the scripts from GitHub

I’ve made available all my scripts on GitHub.

To install them, run the following command from your $HOME folder (or fork the repo and run the command with your own user):

git clone <repository URL> bin

This will create a bin folder under your $HOME folder. Make sure to add this folder to your path. On Mac, this can be achieved by editing your ~/.bash_profile as follows:

export PATH=$HOME/bin:$PATH

Running the scripts

To see if the scripts work, you can run the login script from anywhere, passing <your profile name> as the only argument.

So, if your ~/.aws/credentials file contains a [default] profile entry, you would pass default as the argument.

This command should open an ssh session for every IP address contained in your inventory file.


Folder structure and function of each script

The Ansible playbooks are:
  • assign-apache-owner.yml. This playbook makes sure that my WordPress folder under /var/www/html/ is owned by apache. This is necessary / useful because installing WP plugins might leave files owned by root
  • startcron.yml. It starts the cron service on all my WP boxes
  • stopcron.yml. It stops the cron service on all my WP boxes
  • sync-from-s3.yml. It runs the aws s3 command to sync the content of my S3 folder to the /var/www/html/ folder
  • sync-to-s3.yml. It runs the aws s3 command to sync the content from the /var/www/html/ folder to S3
  • update-wordpress-os.yml. It updates the OS on the AWS WP boxes, by running: yum update -y

The shell scripts are:
  • Wrapper for assign-apache-owner playbook
  • It opens an ssh connection to each AWS instance in a new Terminal window
  • Invoked by the previous script to connect to each box
  • [Ignore]. This is not related to the functionality described in this post
  • Writes the Ansible inventory file by adding the IP addresses of the AWS instances running under the profile passed as argument
  • [Ignore]. This is not related to the functionality described in this post
  • Wrapper for startcron.yml playbook
  • Wrapper for stopcron.yml playbook
  • Wrapper for sync-from-s3.yml playbook
  • Wrapper for sync-to-s3.yml playbook

Happy Cloud!


In this post I’ll show how simple it is to use the DynamoDB database with the Java AWS SDK. Amazon has got amazing documentation on what DynamoDB is and how it works. The source code is available below: it has been packaged as a Maven project which you should be able to run from your command line. This code is for demonstration purposes only and it’s not meant for production.

Prerequisites
  • You have an AWS account, possibly still within the free tier.
  • You have Java and Maven installed
  • You can reach the internet from your PC
  • You know Java
  • You’ve configured the AWS client on your PC and have at least one profile in your ~/.aws folder

The Model

As a developer, when it comes to databases, I don’t want to care too much about the target data sink. Whether it’s relational, noSQL or in-memory, all I care about is having a fast, reliable, ubiquitous data sink which allows me to perform CRUD (Create, Retrieve, Update, Delete) operations. So when coding applications, I like to start from the Model, i.e. the data I want to work with expressed as Plain Old Java Objects (POJOs), and delegate the persistence to an underlying layer which takes care of the details.

The DAO (Data Access Object) pattern is the classic way to achieve this sort of thing. As developers, we work with an interface which exposes the data sink CRUD operations; the implementations of that interface take care of handling the data sink details. So I might have a single interface whose underlying implementations handle different database types. If we focus on the Model expressed as POJOs, the interface can accept the Model and delegate to the various implementations how it should be handed off to the database.
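As a rough illustration (the interface below is hypothetical and not taken from the accompanying project), such a contract might look like this:

// A hypothetical, generic DAO contract. Implementations hide the data sink
// details (relational, noSQL, in-memory, DynamoDB, ...); callers only ever
// deal with the Model.
public interface CrudDao<T, K> {

    void create(T entity);

    T retrieve(K key);

    void update(T entity);

    void delete(K key);
}

A DynamoDB-backed implementation would translate these calls into DynamoDBMapper operations, a JPA-backed one would delegate to an EntityManager, and the calling code wouldn’t need to change.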

This concept is so popular that a number of frameworks and technologies were born to allow just that. JEE, Hibernate, Spring and others all allow us to annotate our Model with information that they retrieve at runtime to correctly handle interactions with the data sinks. Not surprisingly, Amazon baked this functionality into DynamoDB as well. More details can be found here.

The Model in this post is purposely simple. I want to store a Customer with an Address.

A peek at the Customer class shows how simple it is (please note the DynamoDB annotations):



/**
 * Created by tedonema on 14/11/2015.
 */
@DynamoDBTable(tableName = Main.TABLE_NAME)
public class Customer {

    private long id;
    private String firstName;
    private String lastName;
    private Address address;

    @DynamoDBHashKey(attributeName = "Id")
    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    @DynamoDBAttribute(attributeName = "firstName")
    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    @DynamoDBAttribute(attributeName = "lastName")
    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    @DynamoDBAttribute(attributeName = "Address")
    public Address getAddress() {
        return address;
    }

    public void setAddress(Address address) {
        this.address = address;
    }

    @Override
    public String toString() {
        final StringBuilder sb = new StringBuilder("Customer{");
        sb.append("id=").append(id);
        sb.append(", firstName='").append(firstName).append('\'');
        sb.append(", lastName='").append(lastName).append('\'');
        sb.append(", address=").append(address);
        sb.append('}');
        return sb.toString();
    }
}
The Address class, contained within Customer, is even simpler (note the DynamoDB annotation):



/**
 * Created by tedonema on 14/11/2015.
 */
@DynamoDBDocument
public class Address {

    private String address1;
    private String address2;
    private String city;
    private String postCode;
    private String county;
    private String country;

    public String getCountry() {
        return country;
    }

    public void setCountry(String country) {
        this.country = country;
    }

    public String getCounty() {
        return county;
    }

    public void setCounty(String county) {
        this.county = county;
    }

    public String getPostCode() {
        return postCode;
    }

    public void setPostCode(String postCode) {
        this.postCode = postCode;
    }

    public String getCity() {
        return city;
    }

    public void setCity(String city) {
        this.city = city;
    }

    public String getAddress2() {
        return address2;
    }

    public void setAddress2(String address2) {
        this.address2 = address2;
    }

    public String getAddress1() {
        return address1;
    }

    public void setAddress1(String address1) {
        this.address1 = address1;
    }

    @Override
    public String toString() {
        final StringBuilder sb = new StringBuilder("Address{");
        sb.append("address1='").append(address1).append('\'');
        sb.append(", address2='").append(address2).append('\'');
        sb.append(", city='").append(city).append('\'');
        sb.append(", postCode='").append(postCode).append('\'');
        sb.append(", county='").append(county).append('\'');
        sb.append(", country='").append(country).append('\'');
        sb.append('}');
        return sb.toString();
    }
}
The Main Class

The Main class performs the following operations:

  • Populates a collection of customers, using Podam, my open source project to automatically fill POJO graphs
  • It creates a DynamoDB table, but only if it doesn’t already exist
  • It inserts customers into the table
  • It retrieves and displays all customers from the table
  • Optionally, it deletes the table once you’re done (if you uncomment the deleteCustomerTable method in the main method of the Main class).
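Roughly, the main method wires these steps together along the following lines (this is just a sketch; the exact wiring lives in the downloadable project, and the record count is the one used for the timings later in this post):

public static void main(String[] args) {

    Main main = new Main();

    // 1000 records: the same volume used for the timings shown later in this post
    List<Customer> customers = main.populateInMemoryData(1000);

    main.createCustomerTable();
    main.insertCustomers(customers);
    main.listCustomers();

    // main.deleteCustomerTable();   // uncomment to drop the table once you're done
}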

Populating a collection of customers using Podam

Podam makes the task of populating the Customer POJO and its associated Address POJO extremely easy:

private List<Customer> populateInMemoryData(int nbrCustomers) {

    PodamFactory factory = new PodamFactoryImpl();
    List<Customer> customers = new ArrayList<Customer>();
    for (int i = 0; i < nbrCustomers; i++) {
        Customer item = factory.manufacturePojo(Customer.class);
        customers.add(item);
    }
    return customers;
}
No annotations required, no fuss. Once you’ve got a PodamFactory, populating a POJO is a one-liner.

Creating the DynamoDB table

We obviously want to create the DynamoDB table before starting to use it, and only if it doesn’t already exist. This is easily accomplished with the following code:

private void createCustomerTable() {

    DynamoDB dynamoDB = DynamoDbFactory.getDynamoDbClient(PROFILE);

    boolean alredyExist = doesTableAlredyExist(dynamoDB);

    if (!alredyExist) {

        System.out.println("Starting the creation of the Customer table...");

        List<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
        attributeDefinitions.add(new AttributeDefinition().withAttributeName(KEY_NAME).withAttributeType("N"));

        List<KeySchemaElement> keySchemaElements = new ArrayList<KeySchemaElement>();
        keySchemaElements.add(new KeySchemaElement().withAttributeName(KEY_NAME).withKeyType(KeyType.HASH));

        CreateTableRequest request = new CreateTableRequest()
                .withTableName(TABLE_NAME)
                .withAttributeDefinitions(attributeDefinitions)
                .withKeySchema(keySchemaElements)
                .withProvisionedThroughput(new ProvisionedThroughput().withReadCapacityUnits(READ_CAPACITY_UNITS)
                        .withWriteCapacityUnits(WRITE_CAPACITY_UNITS));

        // stopWatch is a timing helper field of Main (not shown in these snippets)
        stopWatch.start("createTable");
        Table table = dynamoDB.createTable(request);
        try {
            // Wait until the table becomes ACTIVE; this is the call that can throw InterruptedException
            table.waitForActive();
            stopWatch.stop();
            System.out.println("It took: " + stopWatch.getLastTaskInfo().getTimeSeconds() + " to create a table");
            System.out.println("Table created successfully");
        } catch (InterruptedException e) {
            System.out.println("Error creating table");
        }
    } else {
        System.out.println("Table " + TABLE_NAME + " already exists.");
    }
}


The Main class defines a number of constants which are used in the snippet above, some of which you’ll need to change according to your needs.

public static final String TABLE_NAME = "Customers";
public static final String KEY_NAME = "Id";

/** Change this to your profile */
private static final String PROFILE = "devopslead";

// 25 is the free usage tier limit
public static final long READ_CAPACITY_UNITS = 25L;
public static final long WRITE_CAPACITY_UNITS = 25L;
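
The DynamoDbFactory helper used throughout these snippets isn’t shown in the post. A minimal sketch of what it could look like with the v1 Java SDK is below (the class layout and the hard-coded region are my assumptions; the real factory is part of the downloadable project):

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;

public final class DynamoDbFactory {

    private DynamoDbFactory() { }

    /** Low-level client (used by DynamoDBMapper); credentials come from the named profile in ~/.aws. */
    public static AmazonDynamoDB getAmazonDynamoDBClient(String profile) {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient(new ProfileCredentialsProvider(profile));
        // Assumed region: change it to the region you want your table created in
        client.setRegion(Region.getRegion(Regions.EU_WEST_1));
        return client;
    }

    /** Document API wrapper, used above for table creation and deletion. */
    public static DynamoDB getDynamoDbClient(String profile) {
        return new DynamoDB(getAmazonDynamoDBClient(profile));
    }
}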

Adding data to the DynamoDB table

Adding Customers to DynamoDB is extremely easy:

private void insertCustomers(List<Customer> customers) {

    System.out.println("Starting the batch save...");
    DynamoDBMapper mapper = new DynamoDBMapper(DynamoDbFactory.getAmazonDynamoDBClient(PROFILE));
    stopWatch.start("batchSave");
    mapper.batchSave(customers);
    stopWatch.stop();
    System.out.println("It took " + stopWatch.getLastTaskInfo().getTimeSeconds() + " seconds to save " + customers.size() +
            " customers to the table");
    System.out.println("All customers have been saved to the DB");
}


In order to save all the Customers at once, we use the DynamoDBMapper class (which maps our Model to attributes to store in our DynamoDB table) and its batchSave method. That’s it!

Retrieving data from the DynamoDB table

Now that we’ve stored our Customers in the DynamoDB table, we want to retrieve them to see whether it all worked. DynamoDB allows various ways of searching for data. Normal business operations would just search by primary key (or a set of keys) and possibly by secondary indexes. The details of how these different methods work are outside the scope of this post and they are explained at length in the Amazon documentation. In order to retrieve all records in a table, we use the scan operation, which is a lazy-load operation, i.e. it provides data as you request it.


private void listCustomers() {

    DynamoDBScanExpression expression = new DynamoDBScanExpression();
    DynamoDBMapper mapper = new DynamoDBMapper(DynamoDbFactory.getAmazonDynamoDBClient(PROFILE));
    stopWatch.start("scan");
    List<Customer> customers = mapper.scan(Customer.class, expression);
    stopWatch.stop();
    System.out.println("Retrieved: " + customers.size() + " customers in " + stopWatch.getLastTaskInfo().getTimeSeconds() + " " +
            "seconds");
    for (Customer customer : customers) {
        System.out.println(customer);
    }
}
Doing so is extremely easy. We need a DynamoDBScanExpression to optionally limit the range of data we want and a mapper. The scan operation does everything else for us. An example of the above output can be seen below:

Customer{id=83847448181563, firstName='4jRgDJorsC', lastName='XA8vpF9H0X', address=Address{address1='Dy_eIGMTJa', address2='ukGpKhWZRs', city='KqMPqSIwFt', postCode='pSNq5wCBu7', county='JXHJdIV1Wm', country='PpQkXX3Tq6'}}
Customer{id=85947743819987, firstName='0CnPEbiNch', lastName='ZxvBvyeZPT', address=Address{address1='NMLDczb8vy', address2='S9Uhlyzf8E', city='Bn1XxHHyHC', postCode='6dSUKnL31h', county='US8Fz7mPFf', country='QE2Gc0A0HO'}}
Customer{id=83847297858262, firstName='5i4ZqY86hr', lastName='WiGeDllrIe', address=Address{address1='D5UrPCU_Wx', address2='9cmzpzwCds', city='kPnHOVNQk_', postCode='moFQ6tH6As', county='PdJNoVVU7G', country='DqZrFl1_yq'}}
Customer{id=83650072252268, firstName='TaXi4JJPHr', lastName='lZIrhMPAWH', address=Address{address1='0JTQknw1Or', address2='iJYOpVAloM', city='22VJCDEza9', postCode='qkwQ6653Dn', county='lC2Cwv9u53', country='_3PK5coxcL'}}

As you can see, the data contain both Customer and Address data. I didn’t have to code complex persistence APIs to get the job done. I just had to annotate my model and the mapper API took care of everything else.

Deleting the table

If you’re playing with DynamoDB you’ll want to delete the table once you’re done. The code to accomplish this is really easy:

private void deleteCustomerTable() {

    DynamoDB dynamoDB = DynamoDbFactory.getDynamoDbClient(PROFILE);
    Table table = dynamoDB.getTable(TABLE_NAME);
    if (null != table) {
        stopWatch.start("deleteTable");
        table.delete();
        stopWatch.stop();
        System.out.println("Table deleted successfully in " + stopWatch.getLastTaskInfo().getTimeSeconds() + " seconds.");
    }
}


Some performance metrics

So why DynamoDB? Amazon documentation explains really well all the advantages of DynamoDB. If I had to choose a few, I’d say that its main advantages are:

  • Speed. DynamoDB is fast (see some metrics examples below)
  • Location agnostic. DynamoDB is a noSQL database in the cloud. You don’t need to set up servers, maintain them, upgrade OS versions, etc. In fact, you don’t have a DynamoDB server to log in to. All you have is an interface (or API, or command line) to set up your tables and your primary and secondary keys, and you’re off.
  • Simplicity. As shown above using DynamoDB is extremely easy
  • Scalability. How fast you want your DynamoDB to perform is entirely up to you and it’s driven by READ and WRITE capacity. There are of course some limitations around the size of your reads and writes, but the API offers mechanisms to make your application resilient (e.g. if you upload data above your WRITE capacity, or read more data in one go than the read limit per unit allows, you can always re-process the records that weren’t uploaded / retrieved; see the sketch after this list). Each write capacity unit defines how many writes you get per second: so if you want 10 writes per second you can request 10 write capacity units; if you want more, you can just request more. If you request 10 read capacity units you are requesting 10 strongly consistent reads per second, or 20 eventually consistent reads per second (consistency is typically achieved within 1 second)
  • Availability / Resiliency. When you write data to DynamoDB, all data is automatically replicated to different Availability Zones within a region.
  • Pay as you go. With DynamoDB you only pay for what you use, e.g. the READ and WRITE capacity units you use.
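
For example, with the DynamoDBMapper used in this post, batchSave reports any batches that DynamoDB could not process, so the caller can retry them. A minimal sketch, reusing the mapper and customers list from the earlier snippets:

// batchSave returns the batches that could not be written (for example when the
// provisioned WRITE capacity is exceeded), so they can be re-processed.
List<DynamoDBMapper.FailedBatch> failedBatches = mapper.batchSave(customers);

for (DynamoDBMapper.FailedBatch failed : failedBatches) {
    // getUnprocessedItems() is keyed by table name and holds the rejected write requests
    System.out.println("Unprocessed items: " + failed.getUnprocessedItems()
            + ", cause: " + failed.getException());
    // A real application would re-submit these write requests, ideally with exponential back-off
}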

Running the Main class the first time (i.e. when the table needs to be created) to create 1000 Customer records leads to the following timings on my PC (with 25 read and write capacity units):

It took: 5.16 to create a table
Table created successfully
Starting the batch save...
It took 2.647 seconds to save 1000 customers to the table
All customers have been saved to the DB
Retrieved: 1000 customers in 0.406 seconds
Table deleted successfully in 0.13 seconds.

Disabling the table delete and running the example twice, on the second run I get the following (notice that I got an alarm from AWS alerting me that I had exceeded my write capacity units):

Table Customers already exists.
Starting the batch save...
It took 2.449 seconds to save 1000 customers to the table
All customers have been saved to the DB
Retrieved: 2000 customers in 0.461 seconds

It saved one Customer record every 2.5 milliseconds and it took half a second to retrieve 2000 Customers (it also retrieved the 1000 records from the first run). Given this was achieved going over the internet, the performance is pretty amazing if you ask me.

Possible disadvantages of using DynamoDB

There are a few potential disadvantages to using DynamoDB.

  • Once you start using it, you have made a commitment to an Amazon service and therefore you’ll need to keep using it. However, if you implement the DAO pattern, it can be relatively easy to switch to another DB provider
  • One needs to be careful about the pricing model and the size limitations for each read and write as well as the number of secondary indexes which one defines for each table. Each secondary index consumes additional read / write capacity units.

But that’s pretty much it. If your organisation is committed to using AWS, then DynamoDB is definitely one option to look into.

Happy Cloud!


The need

I wanted to log in to my WordPress EC2 instances to perform some routine maintenance, but I didn’t want to log in to the console, click on the EC2 dashboard, then on Instances, then jot down the public IP addresses… Do you see where I’m going with this?

So I created a couple of scripts which do this automatically.

Assumptions
  • You’re working on a Mac (but what’s described in this post can be easily adapted to any Unix-like environment)
  • You’ve installed the AWS client
  • You’ve configured at least the default profile
  • You’ve added the .pem file(s) you’re using to ssh into your EC2 instances to the keychain through ssh-add -K <pemfile>. If you haven’t done so, you’ll need to modify the second script to pass the .pem key as argument

The Solution

The solution consists of a couple of scripts:

  • The first queries AWS for EC2 instance details, extracts the public IP addresses and stores the information in a temporary file. It then reads all the IP addresses in this file and for each one it opens a new Terminal window asking to execute the second script, passing the IP as argument
  • The second script simply opens an ssh connection to the IP address

The master script

The first, master script looks like this:


#!/bin/bash

function usage() {
cat <<- _EOF_
Usage (* denotes mandatory arguments): $0 <profile>*
<profile> is the AWS profile name which must match an entry in ~/.aws/credentials
_EOF_
}

function connect() {
  box=$1
  # Opens a new Terminal window and asks it to run the "slave" script against the given IP
  osascript <<-END
  tell application "Terminal"
    do script with command "~/bin/ $box"
  end tell
END
}

if [ -z "$1" ]; then
  usage
  exit 1
fi

profile=$1

aws ec2 describe-instances --profile $profile | grep PublicIpAddress | awk '{print $2}' | sed 's/"//g' | sed 's/,//g' > ~/tmp/wordpress-instances.txt
while read ip; do
  connect "$ip"
done < ~/tmp/wordpress-instances.txt

The “slave” script

The second script is much simpler:


#!/bin/bash
if [ -z "$1" ]; then
  echo "No IP passed"
  exit 1
fi
box=$1
ssh ec2-user@$box

Happy Cloud!




Just a few weeks ago I knew very little about the Cloud. I knew what it was; I knew that Amazon, Google and Microsoft offered Cloud solutions and, like probably most people, I created an account, launched an instance and felt very powerful at the idea that I could work with my own hardware without having to buy a PC. I also used the Simple Email Service (SES) to send myself an email every time a visitor to my website left a feedback comment.

In recent months, given my increased interest in DevOps as a mindset to deliver Quality Software @ Speed, it became clear that the Cloud was necessary to achieve that goal, and I knew that Amazon was regarded as a leader in the Cloud space. So I asked myself: “Is it possible to become Amazon Certified?”.

Why Amazon?

Of the three providers I knew, the Google Cloud experience didn’t leave me satisfied; Microsoft wasn’t an option because, as an IT professional, I try to steer clear of them whenever I can, although Gartner places them as the major Amazon competitor as of May 2015, with Google following in third position.

So I decided to give Amazon a try.

Running the magic search in Google was a life-changing event. Not only did I find that Amazon does indeed offer a wealth of certifications for the Cloud, but I also found that there were numerous courses available to prepare for it. I had to choose between the Solution Architect, Developer or SysOps as initial courses, although my mind was already spinning towards the DevOps Engineer – Professional. Given my current role, the Solution Architect seemed the most obvious choice, so I decided to go for it.


Being already an Udemy student, I searched for AWS certification courses and found a set of courses run by Ryan Kroonenburg, a guru when it comes to AWS, so I purchased the course.

Pros
  • It’s cheap. Since I was already an Udemy student, I was able to purchase the course for £12, although the current price is £62.
  • Ryan does a great job of introducing newbies to the AWS Cloud. Since he holds numerous AWS Certifications, he knows what’s required for the exam and he tailors his courses with that goal in mind
  • By the end of the course, the student has a thorough understanding of AWS services and a good knowledge base to enter the exam with
  • There’s a community external to Udemy where one can share questions, experience, etc.
  • Short videos to show theory in practice

Cons
  • The course teaches the fundamentals but I felt it wasn’t going beyond that.
  • The quizzes don’t explain why an answer is wrong; they just calculate the final score. Only some questions come with an explanation

Cloudacademy: your best chance to pass the certification exam

Again, I asked our friend Google for other course providers and after some digging I found Cloudacademy. Let me tell you that this provider is, by far, the best I’ve found at training people to pass the AWS exams.

Pros
  • Richness and completeness of content. With the yearly package, one gets all-you-can-eat courses. It’s a social network for Cloud students. It offers Certification Paths, Targeted Quizzes, Labs, Paths and a Community Forum
  • Modern website content. Cloudacademy offers an amazing website with a personal Dashboard which tracks your progress and compares it with the rest of the community, personalised learning paths, Labs on the real AWS cloud without one having to create an AWS account or spend money, a Leaderboard, and a “memory” of your interaction with the material so that it intelligently keeps adapting questions and content based on your progress
  • Right content for the right level. If you choose the “Advanced” material, you’ll find the toughest questions to prepare you for the exam. Initially you’ll sweat blood and tears as you won’t have a clue how to answer, but thanks to an explanation linked to the relevant AWS documentation for every single question, you’ll quickly learn AWS’s deepest secrets.
  • Amazing customer support. I highlighted a couple of mistakes in the questions and for each one I was given a free month of membership. Now that’s what I call leadership: encouraging feedback and using it to improve
  • Short videos to show theory in practice

Cons
  • It’s not the cheapest of services (at the time of writing the yearly subscription is $395, reduced from $495). The latest pricing can be found here. However, considering that for that amount you can prepare for any Cloud certification, it seems good value for money

My Tips

  • Practice makes perfect. Whatever course you decide to do, get your hands dirty on the AWS platform. Spin up boxes, create VPCs, set up your own domain, create your own highly available website (devopsfolks runs on the AWS cloud, with a high-availability, scalable setup which I did all by myself – after learning from the masters of course). Practice, practice, practice. I can’t emphasise it enough. The mistakes you’ll make on the platform will help you provide the correct answers during the exam
  • Read Amazon’s documentation. Amazon’s AWS documentation is probably the best example of thorough documentation about a service. The AWS Cloud is immense: Amazon offers many services, each with its own precise characteristics, and the documentation explains them in detail, offers guided examples, suggestions for best practices, etc. The majority of exam questions are based on the documentation and hands-on experience. It’s really up to you
  • Follow each lecture thoroughly. Lessons on both Udemy and Cloudacademy are organised as short videos. Concentrate for the duration of each video. Try to keep distractions at bay (having a young and playful daughter, I know a bit about distractions)
  • Never be satisfied with what you know. Once you know something, aim at knowing more. Read the online documentation, try the most difficult quizzes and tests, interact with the community, ask for and provide help
  • Give yourself a deadline and commit to it. Although seemingly in contradiction with the previous point, I found it helped to give myself a target date and commit to it. The way you commit is by purchasing the exam ($150, non-refundable). The risk otherwise is that you’ll try to perfect your knowledge forever and will keep postponing the exam.

That’s it DevOps folks! I hope that soon you too will be able to display this logo on your blog.


Good Luck!

We are witnessing another IT revolution

We live in a fantastic time… Not because the need to deliver quality software @ speed has changed, but because today we can combine people, processes and technology to make it happen.

There has always been the need to release quality software @ speed in order to deliver business value, but the gap between IT methodologies and technology left this vacuum unbridged.

Today we can answer the question: “How can we turn business requirements into fully working production systems in the shortest possible time frame?”

This is possible thanks to the technology enablers which, in turn, have opened the doors to a change in how people interact, processes are followed and ultimately IT systems are delivered.

Today, after the Turing machine and the Internet, we are witnessing another IT revolution… It’s called DevOps.

Technology is the enabler

We got to this point in small steps: first, computers were invented. These machines could perform calculations much faster than human beings, which created new possibilities, and as a consequence people became more demanding. Computers were soon used in space programs, the military, education and, of course, business.

But computers and programmers weren’t cheap. Mistakes were costly and we adapted by creating software delivery methodologies with the goal of preventing the delivery of buggy IT systems. This is when the Waterfall methodology bloomed. There was the need to get the requirements absolutely right, then the design, then the development and finally the testing. This was all driven by the illusion that, since computer programs are combinations of zeros and ones, we could decide in a similarly binary fashion what we wanted out of them, until we discovered that, as humans, we are not as binary. The Waterfall gated approach didn’t deliver on the expectations. The majority of IT programmes driven by this methodology failed or delivered much later than required, often providing functionality that was different from the original specs and at a cost that was significantly higher than what had been budgeted.

Technology, once again, was the enabler. With computers getting faster and cheaper and the advent of the Internet, a number of new software systems emerged. Programmers were no longer part of a restricted elite: programming became mainstream. With more developers empowered to transform their ideas into reality, the amount of “utility software” grew as well. We also saw best practices and patterns emerge; concepts such as Source Code Management (SCM), Test Driven Development (TDD) and Continuous Integration aimed at breaking some of the barriers represented by the Waterfall gated approach. With these barriers partially removed, the need arose for greater collaboration between those who requested features (the business) and those who delivered them (IT). So in the late 90s a new mindset made its way into the IT industry: Agile. Its goal was to bring business and IT together, to favour dialogue over gates and collective responsibility over finger pointing, and to break complex problems into small deliveries which, as a consequence, were faster, more responsive to business needs and of a higher quality.

But we were still a long way from delivering quality systems at pace. The problem, once again, was that we started from the details rather than looking at the bigger picture.

The Bigger Picture

Yes, fundamentally the bigger picture has never changed since IT made its appearance. We need to deliver entire systems, not just parts of a system.

IT software delivery can be compared to a manufacturing plant: there are work centres that depend on each other. Part A enters the first work centre and, once processed, becomes part A2; this is then delivered to the next work centre which processes it to become, say, part B, and so on. Relating this to software delivery, we can easily see that business requirements enter the manufacturing line as part A, engineering processes them producing an IT business delivery, say part B, which is then passed on to operations for deployment into production.

Every business can be thought of as the combination of three fundamental parts:

  • Inventory
  • Operational cost
  • Throughput

In order to be successful, every business needs to:

  • Reduce Inventory
  • Reduce Operational cost
  • Maximise Throughput

When we produce artefacts, whether parts or software systems, we are producing Inventory. Operational cost is the cost of the people, machines and processes that created the artefact. It’s only when the part is sold or, by analogy, the software system goes live, that Inventory becomes Throughput. Any barrier to a product delivery, therefore, affects the business negatively and, by contrast, every product delivery increases business success.

A system will only be delivered at the speed of its slowest work centre, a.k.a. the Theory of Constraints. What we’ve done with computers first, and IT methodologies such as Waterfall and Agile later, has only speeded up one work centre (engineering), not the entire system. Much of the disillusionment involving Agile nowadays is caused by the lack of vision for the bigger picture. The Business is asking: why is IT still delivering software systems so slowly even though we are using Agile? Wasn’t Agile meant to deliver software quicker?

The answer is that Agile has speeded up the “engineering” work centre but systems are the result of the collaboration of various actors in the process: business for requirements, engineering for development and testing, operations for Continuous Delivery. Software systems are delivered at the speed of their slowest work centre. In order to speed up software delivery, we need to identify the slowest work centres (a.k.a. the bottlenecks), subordinate everything we do to the bottleneck, elevate the bottleneck until we break it and start all over again, all the while making sure that Inertia doesn’t become the bottleneck.

So what are the bottlenecks to break?

The answer varies from organisation to organisation. I’m pretty sure that people reading this article will be familiar with at least one of the following concepts:

  • Infrastructure provisioning takes too long
  • Developers and Operations work in isolation, as distinct teams. Developers think that their job is done when they’ve finished coding. Operations, dreading that moment, want smooth, reproducible, automated and reliable production releases
  • Lack of test automation (or lack of testing in general)
  • Manual software and database deployments
  • Different environment landscapes, e.g. Dev is different from Integration, which is different from QA, which is different from UAT, which is different from Production
  • Change Requests for production releases take too long
  • Server software installed / maintained by external teams
  • Lack of Continuous Integration tooling
  • Lack of support for operations in the code (logging, resiliency)
  • Lack of monitoring
  • External teams imposing on developers the tools they must use
  • Inadequate tooling (e.g. SCM, CI, Testing automation, etc)
  • Poor requirements (e.g. not following a BDD mindset, lack of executable acceptance criteria)
  • Red tape, e.g. controls imposed by organisations to safeguard the stability of production systems
  • Unskilled staff (e.g. in development and Scrum)

By operating with a DevOps mindset, organisations are able to break these constraints and therefore to deliver Quality @ Speed.

DevOps to the rescue

If you look at all of the above bottlenecks you’ll notice that they’ve all got one thing in common: lack of collaboration between people. DevOps is a mindset which aims to increase the collaboration between people by getting them to work together as one team. A team is composed of all the actors who collaborate towards the delivery of a Business Capability. By collaborating, they can ensure that requirements are written following a BDD approach, code is developed with Operations’ needs in mind, infrastructure is provisioned and configured automatically, code is continuously built, tested and deployed to integration environments, and the delivered solution addresses non-functional requirements (NFRs) and is of high quality. The picture below gives a high-level overview of the DevOps mindset.




In order to deliver on the DevOps vision, all stakeholders in a project need to collaborate, each delivering their part. The cornerstones of a successful DevOps strategy are:

  • Technology as an enabler, e.g. being able to use the Cloud allows for scalability and elasticity
  • Automation. Every phase in the SDLC needs to be automated, from requirements validation, to testing, to infrastructure provisioning, to configuration management to Continuous Delivery
  • Repeatability. By automating, we remove the chance of human error. By being able to repeatedly configure our infrastructure and validate our system’s correctness and quality, we achieve consistent and fast feedback, which in turn allows us to react to changes in scenario. Additionally, repeatability allows us to treat any phase of the SDLC as BAU. A deployment to an integration development environment should not be different from a deployment to production.
  • Consistency. Automation and Repeatability allow our activities to be consistent. Consistency is the key to transparency and speed. Transparency allows us to identify improvement opportunities fast and therefore to adapt the way we operate, continuously identifying and elevating our constraints to increase overall throughput.
  • Collaboration. If all people involved in a Business Capability delivery collaborate as one team, technology and people can deliver the best results.