My thoughts on the AWS re:Invent 2016 keynote by Werner Vogels

In this blog post I'll share my first impressions of Werner Vogels' keynote at AWS re:Invent 2016.

In my previous post I shared my impressions of the keynote by Andy Jassy (AWS CEO).

Werner’s keynote theme was around Transformation, particularly in three areas:

  • Development
  • Data
  • Compute

Development

Werner talked about the need to adhere to good practices when architecting for the Cloud. In particular, he pointed to the Twelve-Factor App principles and the AWS Well-Architected Framework.

He then talked of the need for Operational Excellence, clearly linking it to a DevOps operating model. If you want to know more about DevOps you can refer to my article on Lean Enterprise.

When it comes to DevOps and Operational Excellence, there are three main focus areas:

  • Prepare
  • Operate
  • Respond

Let’s see what Werner had to say about those.

Prepare

The prerequisite for operating within a DevOps model is preparation, which includes creating and configuring new infrastructure. Werner talked of improvements to the AWS CloudFormation service, which lets you manage infrastructure as code, one of the key principles of DevOps. CloudFormation now supports the YAML format and, perhaps most importantly, exposes its schemas publicly, so that third parties can build tools on top of the service.
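
As a minimal sketch of what this looks like in practice, here is a YAML template driven from Python with boto3 (the stack and bucket names are hypothetical):

```python
# A minimal sketch, assuming boto3 credentials are configured; the stack
# and bucket names are hypothetical.
import boto3

TEMPLATE = """\
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation")

# TemplateBody now accepts YAML as well as JSON.
response = cloudformation.create_stack(
    StackName="demo-stack",
    TemplateBody=TEMPLATE,
)
print(response["StackId"])
```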

If CloudFormation is about spinning up new infrastructure, Chef (like Ansible, Puppet and SaltStack) is about configuring that infrastructure so that it's consistent and managed as code. Werner announced a fully managed Chef service (AWS OpsWorks for Chef Automate) and Amazon EC2 Systems Manager, a collection of AWS tools for package installation, patching, resource configuration and task automation. It's clear that AWS has fully embraced a DevOps operating model and is providing its customers with the primitives to build a fully scalable, managed DevOps capability in the Cloud.
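
For example, here is a hedged sketch of driving Systems Manager's Run Command from boto3 to patch an instance (the instance ID is a placeholder):

```python
import boto3

ssm = boto3.client("ssm")

# Run an ad-hoc patching command on an instance managed by the SSM agent.
# The instance ID is a placeholder.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunShellScript",  # a built-in Systems Manager document
    Parameters={"commands": ["sudo yum update -y"]},
)
print(response["Command"]["CommandId"])
```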

Operate

Once the infrastructure has been created and configured, it's about shipping valuable software into the hands of customers. This activity typically consists of four phases:

  • Commit code
  • Build it
  • Deploy it
  • Monitor it

DevOps is at the heart of a Lean Enterprise way of working, which in turn is based on a startup way of thinking, e.g.:

  • Someone has a hypothesis that they believe will generate some value
  • Let’s deploy that hypothesis in production and ask a number of friendly customers to help validate it
  • Measure the results
  • Adapt based on the feedback

This is very different from, say, what Scrum suggests, i.e. a potentially shippable capability at the end of each Sprint. To be truly successful, hypotheses (and therefore capabilities) need to be validated by real users in production. The four phases highlighted by Werner also link very well with the Scientific Method and the PDCA Deming cycle (Plan Do Check Act). A true DevOps operating model must allow an organisation to identify the 5-10% of ideas that generate value. This means that if an organisation is able to execute 100 production deployments per day, it will find 5-10 valuable capabilities every day. If it executes one deployment per year, it will wait 10 to 20 years, on average, to find a single valuable idea (the arithmetic is sketched below).
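
A quick back-of-the-envelope check of those numbers in Python:

```python
# If 5-10% of ideas generate value, valuable discoveries scale linearly
# with deployment frequency.
for label, deployments_per_year in [("100/day", 100 * 365), ("1/year", 1)]:
    low, high = 0.05 * deployments_per_year, 0.10 * deployments_per_year
    print(f"{label}: {low:g} to {high:g} valuable ideas per year")

# 100/day: 1825 to 3650 valuable ideas per year (i.e. 5-10 per day)
# 1/year:  0.05 to 0.1 per year, i.e. one every 10 to 20 years
```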

On the Operate side, Werner covered the whole Delivery Pipeline, from AWS CodeCommit (a managed, Git-compatible source control service) to AWS CodeDeploy. In doing so, he announced a new service, AWS CodeBuild, which provides a fully managed, scalable code build tool in the Cloud; it can also run unit and integration tests. Gone are the days when a bunch of Jenkins instances becomes a bottleneck as the number of tests grows over time: AWS CodeBuild automatically scales with demand and you pay only for what you use.
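
Kicking off a build is a one-liner; here is a hedged sketch with boto3 (the project name is a placeholder):

```python
import boto3

codebuild = boto3.client("codebuild")

# Start a build for an existing CodeBuild project; the project name is a
# placeholder. CodeBuild provisions and scales the build fleet on demand.
build = codebuild.start_build(projectName="my-service-build")
print(build["build"]["id"], build["build"]["buildStatus"])
```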

As for monitoring, Werner talked of AWS X-Ray, a tool that allows for the analysis and debugging of distributed applications in production. Monitoring is a key aspect of the PDCA cycle: it lets us validate hypotheses by measuring what actually happened versus what we thought would happen. It's only by experimenting outside our threshold of knowledge (the perimeter defined by what we already know from data) that we can hope to make discoveries. If we only experiment within what we know, we will just take measurements.
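
As an illustration, here is a minimal sketch of instrumenting a unit of work with the AWS X-Ray SDK for Python so that it shows up in the service map (segment and subsegment names are hypothetical):

```python
# A minimal sketch using the AWS X-Ray SDK for Python (aws-xray-sdk);
# segment and subsegment names are illustrative.
from aws_xray_sdk.core import xray_recorder

xray_recorder.begin_segment("checkout-experiment")
try:
    xray_recorder.begin_subsegment("validate-hypothesis")
    # ... the code under measurement would run here ...
    xray_recorder.end_subsegment()
finally:
    xray_recorder.end_segment()
```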

Respond

Once a capability is deployed into production, we need to measure it and respond to feedback from users. We also must be able to quickly respond to disruptions affecting our production environments.

One of the major threats all companies face is the Distributed Denial of Service (DDoS) attack. There are three main types of DDoS attack:

  • Those targeting the network layer (Layer 3 of the network stack). These account for 64% of the total, according to Werner. The aim is to “flood the pipe”, i.e. to take network capacity away.
  • Those targeting the transport layer (Layer 4). These aim at state exhaustion, e.g. creating state for every TCP connection until all resources are consumed; they account for 18% of the total.
  • Those targeting the application layer (Layer 7). These attack our applications as users see them, and they too account for 18% of the total.

Werner announced AWS Shield and AWS Shield Advanced. The former is a free AWS service, enabled by default: any application running on AWS is automatically protected against network- and transport-layer DDoS attacks. The latter is a collaboration between the customer and the AWS Shield team, with the intent of keeping large, sophisticated application-layer DDoS attacks at the periphery of the Cloud, thus limiting the damage. The advanced tier also looks at capping Route 53 costs incurred during a DDoS attack.

Data

Werner then presented the product family for data:

  • Amazon Elastic MapReduce (EMR) for data processing
  • Amazon Athena for SQL queries on S3 data (see the sketch after this list)
  • Amazon Redshift for Data Warehousing
  • Amazon QuickSight for Business Intelligence
  • Amazon Elasticsearch Service for Search and Analytics
  • Amazon AI for Artificial Intelligence capabilities. This includes:
    • Amazon Lex, a natural language understanding and speech recognition system
    • Amazon Polly, for text-to-speech
    • Amazon Rekognition, for image processing and analysis
  • Amazon Machine Learning for Machine Learning activities
  • Amazon Pinpoint, a new analytics service that provides targeted push notifications for mobile apps. It helps you understand customer behaviour, decide whom to engage, deliver a campaign and track the results.
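
As promised above, here is a hedged sketch of running an Athena query over data held in S3 with boto3 (the database, table and results bucket are placeholders):

```python
import boto3

athena = boto3.client("athena")

# Run a SQL query directly against data held in S3; the database, table
# and results bucket are placeholders.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM requests GROUP BY status",
    QueryExecutionContext={"Database": "logs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])
```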

Werner then talked about the concept of a Modern Data Architecture and introduced AWS Glue, a service which he believes can address the 10 factors of a Modern Data Architecture:

  1. Automated and Reliable Data Ingestion
  2. Preservation of the Original Source of Data (some call this a Data Lake)
  3. Lifecycle Management and Cold Storage
  4. Metadata Capture
  5. Manage Governance, Security, Privacy
  6. Self-service discovery, search and access
  7. Managing Data Quality
  8. Preparing for Analytics
  9. Orchestration and job scheduling
  10. Capturing Data Change

AWS Glue is a fully managed data catalogue and ETL service. It integrates with S3, Redshift, RDS and any JDBC-compliant data store. With it one can:

  • Build the data catalogue
  • Generate and edit transformations
  • Schedule and run jobs
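
Glue was only announced at the keynote, so treat this as a hedged sketch of what driving the catalogue and jobs from boto3 could look like (the database and job names are hypothetical):

```python
import boto3

glue = boto3.client("glue")

# Browse the data catalogue; the database name is hypothetical.
tables = glue.get_tables(DatabaseName="orders")
for table in tables["TableList"]:
    print(table["Name"])

# Kick off a previously defined ETL job; the job name is hypothetical.
run = glue.start_job_run(JobName="nightly-orders-etl")
print(run["JobRunId"])
```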

Werner also talked of another new service, AWS Batch, a fully managed service for running batch jobs at scale.
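
Submitting work to it is a single API call; here is a hedged sketch with boto3 (queue, job and job definition names are placeholders):

```python
import boto3

batch = boto3.client("batch")

# Submit a job to an existing queue and job definition (names are
# placeholders); AWS Batch provisions the compute needed to run it.
job = batch.submit_job(
    jobName="render-frames-0001",
    jobQueue="default-queue",
    jobDefinition="render-frames:1",
)
print(job["jobId"])
```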

Compute

Finally, Werner addressed the compute part of the Transformation journey. He made it really clear that the future of computing resides in containers, microservices and serverless architectures. He praised Amazon ECS (Elastic Container Service) as a platform to run containers and microservices, and made it clear that future architectures will be a combination of the three. At this point he introduced Blox, a collection of open source projects for container management and orchestration.

He also introduced a new Task Placement Engine for ECS, which lets customers define policies for how tasks are placed onto container instances, e.g. by availability zone, instance type and so on.
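
Here is a hedged sketch of what those placement policies look like via boto3 (cluster and task definition names are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Run tasks with explicit placement policies: spread across availability
# zones, then bin-pack on memory. Cluster and task definition names are
# placeholders.
response = ecs.run_task(
    cluster="production",
    taskDefinition="orders-service:3",
    count=4,
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
)
for task in response["tasks"]:
    print(task["taskArn"])
```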

Werner then proceeded to talk about Lambda as the main AWS primitive, to the point that even AWS uses it for its own services. He introduced AWS Lambda@Edge, i.e. Lambda functions that run from CloudFront edge locations.
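
To give a flavour, here is a hedged sketch of an edge handler; note that Lambda@Edge launched targeting Node.js, with Python support arriving later, so treat this as illustrative only:

```python
# A hedged sketch of a Lambda@Edge-style handler. CloudFront passes the
# request in the event, and the function can inspect or rewrite it at the
# edge before it reaches the origin. (Lambda@Edge launched with Node.js;
# Python support arrived later, so treat this as illustrative only.)
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    # Example: tag the request with a custom header.
    request["headers"]["x-edge-processed"] = [
        {"key": "X-Edge-Processed", "value": "true"}
    ]
    return request
```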

Finally, he introduced something that I believe most developers will welcome with open arms: AWS Step Functions, a service that lets developers graphically combine Lambda functions into a state machine and creates an execution flow from the visual representation.
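
Under the hood the state machine is expressed in the Amazon States Language; here is a hedged sketch of creating one from boto3 (the Lambda ARNs and IAM role are placeholders):

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# An Amazon States Language definition chaining two Lambda functions;
# the ARNs and IAM role are placeholders.
definition = {
    "StartAt": "Validate",
    "States": {
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "Process",
        },
        "Process": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process",
            "End": True,
        },
    },
}

machine = sfn.create_state_machine(
    name="order-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",
)
print(machine["stateMachineArn"])
```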

Conclusions

AWS is the best public Cloud offering; there is no doubt about that. Many enterprises with huge and diverse requirements are migrating to it, with many now considering an “All In AWS” strategy. I believe this is because of its approach to business: listen to customers, improve existing services and create new ones to address customer needs, and release small and often.

Werner Vogels was very clear on the importance of such an approach to transforming one's business, and on the critical need to work with a DevOps operating model. AWS now offers a fully managed, cloud-based Delivery Pipeline infrastructure, from code management to code build and deployment. It offers configuration management as a service through managed Chef servers. Finally, it offers monitoring capabilities that allow organisations to observe their infrastructure, applications and customer behaviour in real time and at scale, enabling protection against attacks, automated recovery of failing systems and changes of strategy to address customer behaviour. The AWS offering allows for an operating model that observes the Three Ways:

  • Increase the flow of work from left to right
  • Increase feedback from right to left (e.g. from monitoring)
  • Practise continuous experimentation and learning

AWS provides a full suite of services to handle data, from small use cases to huge enterprise requirements with petabytes of data being continuously processed. From Big Data platforms, to RDS (with Aurora now supporting PostgreSQL), to SQL queries on S3 data, AWS offers services to address any need.

Finally, in Amazon's view customer data and customer security are critical priorities, and the new services announced during the keynotes give us hope for a brighter future in the public Cloud.
