Choose the simplest possible journey
If you look at the User Story Mapping board where I collected the requirements for Pulse, you’ll notice a number of user journeys (green cards represent themes, white cards represent journeys).
I want to start with the simplest possible journey with the following outcomes:
- Validate the end-to-end technology architecture to lay a sound foundation for the Continuous Delivery pipelines to follow
- Learn and reinforce my knowledge of some technologies I’m curious about
- Move the needle towards a Minimum Viable Product (MVP) that I can start using
- Fail fast by moving fast. By implementing small end-to-end journeys, I can quickly learn what works and what doesn’t, and change direction when I fail.
As I want to start with the simplest possible journey, I’ll pick up the journey: “Automated Creation of Organisation upon subscription”, belonging to the “Offer Pulse as SaaS” theme.
The journey might read something like: “Jenny visits the Pulse home page and she’s presented with various subscription options: Personal, Professional, Small Business, Enterprise. She chooses one subscription, enters some necessary details for the subscription, gives her consent to process and store her data and proceeds to the payment. After a short time, she receives an email with the following information:
- Organisation name
- Default Campaign info
- Her subscription
- How to get support
- What to do next.”
The journey above covers the entire on-boarding process. I’m thinking of exposing it as a Process API that orchestrates a number of System APIs underneath.
In this journal I’ll define the steps to go through one of the System APIs, namely the one exposing CRUD functionalities for the Organisations data.
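To make the Process/System API split concrete, here is a minimal sketch of how the onboarding Process API could orchestrate the System APIs beneath it. The class and method names are my own illustrative assumptions, not the real Pulse services.

```python
# Hypothetical sketch: a Process API orchestrating System APIs for onboarding.
# All names (Subscription, OrganisationsSystemApi, EmailSystemApi) are
# illustrative assumptions, not the actual Pulse implementation.
from dataclasses import dataclass

@dataclass
class Subscription:
    plan: str                # e.g. "Personal", "Professional"
    email: str
    organisation_name: str

class OrganisationsSystemApi:
    """System API exposing CRUD for Organisations (the one built in this journal)."""
    def create(self, name: str) -> dict:
        return {"id": 1, "name": name}   # stubbed response

class EmailSystemApi:
    """System API sending the welcome email described in the journey."""
    def send_welcome(self, to: str, organisation: dict) -> None:
        print(f"Welcome email to {to} for {organisation['name']}")

def onboard(subscription: Subscription) -> dict:
    """Process API: coordinates the System APIs for the onboarding journey."""
    organisations = OrganisationsSystemApi()
    email = EmailSystemApi()
    organisation = organisations.create(subscription.organisation_name)
    email.send_welcome(subscription.email, organisation)
    return organisation
```

The Process API owns the journey; each System API stays a narrow, reusable building block.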
In order to implement this journey I’ll need the following:
- Have a Cloud database in which to store organisations. I chose the Cloud because I want to build a Cloud Native application
- Design the Data Domain Model for the Organisations table and represent it as a JSON Schema and in JSON format
- Create a RESTful API to expose CRUD operations for Organisations
- Create the API fulfilment through a Microservices-based architecture. In this case I’ll choose Docker and Kubernetes as container and orchestration technologies respectively
- Create a Continuous Delivery pipeline to deliver changes to production quickly and automatically
A few technology choices need to be evaluated. I like to keep experimenting and learning while developing a product, rather than committing myself to long-term choices. If an experiment goes well I might decide to invest in that technology for the long term; otherwise I’ll choose a different one. Key to this flexibility will be designing the fulfilment system around well-known design patterns, such as interface-based design and the DAO pattern.
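As a sketch of what that flexibility looks like in practice, here is the DAO pattern with interface-based design: callers depend on an abstract interface, so the storage technology behind it (MySQL today, perhaps Neo4j or MongoDB later) can be swapped without touching them. The class and method names are illustrative assumptions.

```python
# DAO pattern sketch: an abstract interface plus one concrete implementation.
# Names are illustrative; a MySqlOrganisationDao would implement the same interface.
from abc import ABC, abstractmethod
from typing import Optional

class OrganisationDao(ABC):
    """Interface: callers depend on this, never on a concrete store."""
    @abstractmethod
    def save(self, organisation: dict) -> dict: ...

    @abstractmethod
    def find_by_id(self, organisation_id: int) -> Optional[dict]: ...

class InMemoryOrganisationDao(OrganisationDao):
    """Simple in-memory implementation, handy for experiments and tests."""
    def __init__(self):
        self._rows: dict = {}
        self._next_id = 1

    def save(self, organisation: dict) -> dict:
        organisation = {**organisation, "id": self._next_id}
        self._rows[self._next_id] = organisation
        self._next_id += 1
        return organisation

    def find_by_id(self, organisation_id: int) -> Optional[dict]:
        return self._rows.get(organisation_id)
```

Switching databases then means writing one new class, not rewriting the services that use it.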
You might be thinking: “What about automated tests?”. In this phase of the product development I don’t want to invest too much time writing automated tests: I don’t yet know whether the product makes sense, or whether the end-to-end technology stack will hold together. This is really just an experiment and I’m not aiming to deliver a 100% perfect product, rather something functional enough for friendly customers to evaluate and provide feedback on. If the idea makes sense and customers like it, I will invest in a full suite of automated tests to validate both the product and its internal workings.
Let’s start with the database. Nowadays we essentially have three big families of databases: relational, NoSQL and graph. Each family has its advantages and use cases. For tabular data whose format never (or very rarely) changes, relational databases are the obvious option. For variable formats, typically stored as JSON documents, NoSQL document databases like MongoDB are the natural fit. Where relationships between elements are central, and element properties need flexible formats, graph databases are the way to go.
For this journey all I need is an Organisation table, so for now I’ll go with a relational database and choose MySQL: it’s easy to run with Docker, which means I can test on my laptop without spending money, and the same container will run just as easily in the Cloud.
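To give a flavour of the table, here is a possible DDL for Organisations. The columns are my assumptions for illustration, and SQLite stands in for MySQL so the sketch runs anywhere (the real MySQL DDL differs only in details, e.g. `AUTO_INCREMENT` instead of `AUTOINCREMENT`).

```python
# Illustrative Organisation table; column names are assumptions.
# SQLite is used as a local stand-in for MySQL.
import sqlite3

DDL = """
CREATE TABLE organisation (
    id           INTEGER PRIMARY KEY AUTOINCREMENT,
    name         TEXT NOT NULL UNIQUE,
    subscription TEXT NOT NULL,        -- Personal, Professional, ...
    created_at   TEXT DEFAULT CURRENT_TIMESTAMP
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute("INSERT INTO organisation (name, subscription) VALUES (?, ?)",
             ("Acme", "Personal"))
row = conn.execute("SELECT id, name FROM organisation").fetchone()
```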
The Data Domain Model
Next I’ll need to define the Data Domain Model. As I want to build Pulse as an API-based product, I’ll use RESTful APIs and JSON to exchange data. Therefore, after designing the data model with my tool of choice, I’ll represent it as a JSON Schema. Another reason for using a JSON Schema is that I’m going to use MuleSoft and RAML for the API definition and gateway, and RAML allows data types to be defined as JSON Schemas.
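A possible JSON Schema for the Organisation data type might look like the one below; the field names and subscription values are my assumptions. In RAML, such a schema can be attached directly to a data type. The helper is a deliberately minimal stand-in for a full JSON Schema validator, just to show the idea.

```python
# Illustrative JSON Schema for Organisation; field names are assumptions.
import json

ORGANISATION_SCHEMA = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Organisation",
    "type": "object",
    "required": ["name", "subscription"],
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "subscription": {
            "type": "string",
            "enum": ["Personal", "Professional", "Small Business", "Enterprise"],
        },
    },
}

def missing_required(document: dict) -> list:
    """Minimal check (a real validator would enforce types and enums too)."""
    return [f for f in ORGANISATION_SCHEMA["required"] if f not in document]
```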
Designing the RESTful API
As I’ve decided to build Pulse as an API-based solution, the obvious architecture of choice for APIs will be REST. There are numerous reasons for it but in a nutshell:
- RESTful is a stateless architecture, thus leading to APIs that are easy to scale horizontally in the Cloud
- RESTful is typically based on the HTTP protocol, therefore saving me the effort to reinvent methods and status codes
- RESTful typically exchanges data through JSON (although XML is also possible) and JSON is much lighter than, say, SOAP
- RESTful is overall very lightweight and easy to understand, thus delivering a better developer experience
- Finally, I want to keep practising with REST and RESTful APIs, as this is where technology architectures are heading.
I will also use MuleSoft as the API platform, mainly because I want to learn and re-learn more about it. It is not excluded that after my experiment I will try to deploy the same API to AWS API Gateway and Google Cloud Endpoints. One positive aspect of API and Microservice-based architectures is deployment flexibility, as the architectures are quite standard and portable.
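Before defining the API in RAML, it helps to sketch the CRUD operations the Organisations System API should expose. The dispatcher below is a toy, in-memory stand-in (the real fulfilment sits behind MuleSoft); paths and status codes follow common REST conventions.

```python
# Toy sketch of the /organisations CRUD resource: maps (method, path) pairs
# to responses over an in-memory store. Purely illustrative.

STORE: dict = {}
NEXT_ID = [1]

def handle(method: str, path: str, body=None):
    """Return (status_code, response_body) for the /organisations resource."""
    parts = path.strip("/").split("/")
    if parts[0] != "organisations":
        return 404, None
    if method == "POST" and len(parts) == 1:
        org = {**body, "id": NEXT_ID[0]}
        STORE[NEXT_ID[0]] = org
        NEXT_ID[0] += 1
        return 201, org                      # Created
    if len(parts) == 2:
        org_id = int(parts[1])
        if method == "GET":
            return (200, STORE[org_id]) if org_id in STORE else (404, None)
        if method == "PUT" and org_id in STORE:
            STORE[org_id] = {**body, "id": org_id}
            return 200, STORE[org_id]
        if method == "DELETE":
            return (204, None) if STORE.pop(org_id, None) else (404, None)
    if method == "GET" and len(parts) == 1:
        return 200, list(STORE.values())     # collection listing
    return 405, None                         # Method Not Allowed
```

Each HTTP verb maps onto one CRUD operation, which is exactly what the RAML definition will formalise.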
The API fulfilment
As the API fulfilment, I’ll initially choose Docker and Kubernetes as the container and orchestration technologies for a Microservice-based architecture. There are many reasons for this:
- Docker is the world leading container technology
- Kubernetes is the world leading Docker orchestration engine
- Docker’s mantra is that one can build a Docker application once and run it anywhere: on-prem, on a public Cloud or in a hybrid model. Kubernetes offers the same capabilities.
- It’s very easy to experiment with Docker. For example, running MySQL takes only a few minutes to set up, and if later I want to switch to a different database, e.g. Neo4j or MongoDB, it’s equally easy. This allows for faster experimentation and learning, which is at the core of my objectives for this initiative.
- Docker containers can scale very well horizontally
- Docker and Kubernetes are widely adopted, with major Public Cloud providers like AWS, Google Cloud and Azure all offering services to run Docker and Kubernetes as a service
Create a Continuous Delivery pipeline
The final step in this journey will be to create a Continuous Delivery pipeline that builds and deploys each component to production. Since I’m using APIs and Microservices, I’ll also be able to follow a blue-green deployment approach: deploying to production after every code commit while retaining the freedom to decide when to release a new feature to customers.
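The essence of blue-green deployment can be sketched in a few lines: both versions are deployed side by side, and a single switch decides which one serves customers, so deploying and releasing become separate decisions. The environment names and version strings below are purely illustrative.

```python
# Blue-green routing sketch: deploy is separate from release.
# Names and version strings are illustrative assumptions.

ENVIRONMENTS = {
    "blue": "organisations-api v1 (live)",
    "green": "organisations-api v2 (deployed, not yet released)",
}
LIVE = {"colour": "blue"}

def serve() -> str:
    """Customers always hit whichever environment is currently live."""
    return ENVIRONMENTS[LIVE["colour"]]

def release(colour: str) -> None:
    """Flipping the switch releases the already-deployed version."""
    LIVE["colour"] = colour
```

In Kubernetes this switch is typically a Service selector pointing at one of two Deployments.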