by QA

Why DevOps is great

To give you an idea of what’s possible in terms of productivity – Amazon have their pipeline set up to allow for over 50 million deployments a year.

Each organisation will have its own pipeline – there’s no one true way of doing this – and new tools are being released all the time, always changing what’s possible.

Humans and automation: getting the balance

In an ideal world, there is very little human interaction in the DevOps pipeline. Code is fed in at one end and (after going through several processes) it comes out deployed at the other.

But before you start – before any code is written – you do need some human input here. The customer and (usually) the Business Analyst will meet to work out what the requirements of the project are – and manage expectations. Bear in mind, a good set of requirements is one that can be tested automatically.

After the meeting, the Business Analyst will go to the development team and talk through what’s required. The Business Analyst also becomes an important go-between for the customer if the customer isn’t working directly with the developers. Or, if that go-between role isn’t needed, they’ll stay on site and help with the project.

To get it right, automation is key. Wherever possible, humans should now be taken out of the equation for the middle steps of the pipeline. These are repeatable processes that computers can do for you – and automating them is where you get the most out of adopting DevOps.

DevOps in five steps

To explain, we think about DevOps tools in five steps:

  1. Source control
  2. Build tools
  3. Containerisation
  4. Configuration management
  5. Monitoring

Step 1: Source control

There are very few organisations out there that can cope without some kind of source control method.

Source control methods come in two types: centralised and distributed.

The aim is to have a single place where everyone can get a copy of the code for a project. Multiple people can work on it at the same time, adding their changes, and then all the changes are merged together into one place. Conflicts between different people's work are sorted straight away.

We’ll look at Git as an example – one of the most popular tools. It uses a distributed model. There’s no single point of failure, but the model does mean more merging, and so more merge conflicts to resolve.
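
As a minimal sketch of that workflow in Git (the repository URL and branch name here are hypothetical):

  # every clone is a complete copy of the repository – that's what makes the model distributed
  git clone https://git.example.com/team/project.git
  cd project

  # work on a change in a branch of your own
  git checkout -b feature/login-page
  git commit -am "Add the login page"

  # bring in everyone else's changes; if the same lines have been edited,
  # Git stops and asks you to resolve the conflict before completing the merge
  git pull origin main
  git push origin feature/login-page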

Source control is a key part of the DevOps pipeline because it’s where the developer submits code and tests into the pipeline. Everything that’s required to build and test (and later, even deploy) a project should be stored in the source repositories.

When new code is submitted to the repository, this sets off the next step of the pipeline – build. This can be done by the Git server sending a notification (a webhook) which tells the build machine to start, or the build machine can be set up to poll Git for changes.
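
As one illustration of the notification route, the Git server can run a post-receive hook that calls the build server whenever new commits arrive. This is only a sketch – the webhook URL is hypothetical:

  #!/bin/sh
  # hooks/post-receive on the Git server: tell the build machine to start a build
  curl -fsS -X POST "https://build.example.com/webhooks/project" >/dev/null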

Step 2: Build tools

There are two types of tools grouped together in this step:

  1. Local build tools and dependency managers such as Maven, Gradle, sbt or npm.
  2. Server based solutions such as Jenkins, Travis CI or Bamboo.

Local build tools all work in a similar way. There is a file which describes how to build your project. Maven uses the Project Object Model file – pom.xml – whereas sbt uses a build.sbt file.

The contents are formatted differently, but the effect is the same. The build file describes:

  • How to compile the software
  • What dependencies are required
  • Where to get them from

The build file can also look at code style and report test coverage statistics.
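
As a rough sketch of what such a build file looks like, here is a stripped-down Maven pom.xml – the project details and the single dependency are purely illustrative:

  <project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>

    <!-- identifies this project to Maven and to anything that depends on it -->
    <groupId>com.example</groupId>
    <artifactId>demo-app</artifactId>
    <version>1.0.0</version>

    <!-- the libraries the project needs; Maven fetches the named versions automatically -->
    <dependencies>
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.13.2</version>
        <scope>test</scope>
      </dependency>
    </dependencies>
  </project>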

Importantly, these build tools always build the project in the same way – using the same building blocks. This removes issues with different versions of libraries being used, as the build tool will pick up the specified versions automatically.

Server based build tools work alongside the local build tools. They will notice when a change is made to the source repository, clone a copy of the code and run the Maven, Gradle or sbt jobs to build the project. They can also set off pre- or post-build steps, such as deploying to a server, or informing another service that the built project is ready.

Jenkins is one of the most popular options out there for build management. It was originally used mostly for Java projects, but it has a very active community behind it. Its popularity means there will be a plugin for just about any language, reporting system, test or deployment tool.
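
To give a feel for how the server side is wired up, here is a minimal sketch of a Jenkins declarative pipeline (a Jenkinsfile, which can itself live in source control). The stage names and the Maven commands are illustrative:

  pipeline {
    agent any
    triggers {
      // fall back to polling the repository every few minutes if no webhook is set up
      pollSCM('H/5 * * * *')
    }
    stages {
      stage('Build') {
        steps {
          // hand the actual build over to the local build tool
          sh 'mvn -DskipTests clean package'
        }
      }
      stage('Test') {
        steps {
          sh 'mvn test'
        }
      }
    }
  }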

After the project is built, the pipeline splits. Some organisations deploy their code directly from the build server. Others send a notification to a configuration management tool to pick up the built code and deploy it.

Step 3: Containerisation

Software containers are like real containers. A box doesn’t care about the contents inside it. If there are 10 boxes, all the same size, they will still stack the same – whether they contain the same thing or vastly different things.

Containers for software are boxes which contain code and everything that code relies on to run correctly. They are small, self-contained units which can be loaded and unloaded on any operating system which has an engine to do it. The most popular tool for this is Docker.

Docker builds a container from a script known as the Dockerfile. This script is simply a text file, so it can be stored along with the code in source control. Build systems (like Jenkins) can even set off the Docker build process as a post-build step. Containers are designed to run on any system – the container doesn’t care about the infrastructure, and the infrastructure doesn’t care what’s inside the container. This means a container can be built on a Windows desktop machine and transferred to a Linux machine in the cloud – it will still run in the same way on both systems. All the files, libraries, settings and even the operating system files the container needs are inside it.
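
A Dockerfile for the kind of Java project built above might look something like this minimal sketch (the base image and jar name are illustrative):

  # start from a small base image that already contains a Java runtime
  FROM eclipse-temurin:17-jre

  # copy the artefact produced by the build step into the container
  COPY target/demo-app.jar /app/demo-app.jar

  # the command the container runs when it starts
  CMD ["java", "-jar", "/app/demo-app.jar"]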

The best practice with containers is to make them as small as possible, so they are quick to start and quick to transfer between machines. Containers should be able to be started, stopped and removed at will – so no data should be stored inside them.

Docker and the other containerisation groups have started a standard for containers – runC, the reference runtime of the Open Container Initiative. This means that in the future, coders will be able to create containers with one system and move them to another.

This leads to the next step: configuration management. Containers can be deployed directly (see Docker Machine, Swarm or Kubernetes) or they can be passed over to the configuration management tools.
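
Deploying directly can be as simple as building the image from the Dockerfile and running it on a machine with the engine installed – the image name and port here are hypothetical:

  # build an image from the Dockerfile in the current directory, then run it in the background
  docker build -t demo-app:1.0 .
  docker run -d -p 8080:8080 demo-app:1.0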

Step 4: Configuration management

Configuration (config) management is all about making sure that servers (or other machines) are in the state they’re expected to be in. Tools like Chef, Puppet or Ansible are the key here.

Config management tools generally involve a master server of some kind, which holds the configuration for all the agents. The two styles here are push and pull.

Push style systems send a notification to all the agents, telling them they need to update or to check that their current configuration is in line with what’s required. Pull style systems are where the agent checks in periodically – typically every half an hour – to see if there are any changes (Chef and Puppet operate like this).

Again, all the configuration is held in a series of text files. Chef uses Ruby, and Puppet uses its own Ruby-like language, to express how a machine should be set up. As these files are all just plain text, they can also be stored in source control, alongside the code, the tests and the Docker container build file.
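
As a minimal sketch of what that looks like in Puppet (the package and service chosen here are purely illustrative):

  # make sure the web server package is installed...
  package { 'nginx':
    ensure => installed,
  }

  # ...and that its service is running and starts on boot
  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],
  }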

There are modules for config management tools that also deal with the idea of Infrastructure as Code. This means even the number of servers, and how they are networked together, is expressed in terms of files. There are different systems for different clouds: AWS uses CloudFormation and Azure uses Resource Manager templates. Both do the same job, describing the setup for each of the machines in the network. There are also systems that offer an abstraction layer on top of this, such as Terraform, which can link together systems in different clouds or even in data centres.
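
A Terraform description of a single machine might look like this sketch – the region, image ID and instance type are all hypothetical placeholders:

  # which cloud and region to talk to
  provider "aws" {
    region = "eu-west-2"
  }

  # one virtual machine, described as code – applying this file makes the cloud match it
  resource "aws_instance" "web" {
    ami           = "ami-0123456789abcdef0"   # hypothetical image ID
    instance_type = "t3.micro"
  }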

There is a Docker plugin for Puppet which will pull and start up containers on a base system. People who are running only microservices can use something like Docker throughout, while people who run legacy systems, or anything where data needs to be stored, will still rely on config management. Container and config management systems have a lot of crossover, but both are worth having to cover different situations.

Step 5: Monitoring

The last step is monitoring. Monitoring is often seen as something to leave until the end – but getting it right is key in any successful DevOps pipeline. Monitoring done right produces some clever automated results.

Changes can be made to the infrastructure through changes to text files, and it’s the monitoring tools that show whether a change was beneficial. This is why monitoring is so important.

For instance, monitoring tools can tell developers if a machine is overloaded with requests – and then instruct the config management tools to create some more machines – and add them to the load balancer until the traffic calms down again.
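
The pipeline described here doesn’t name a particular monitoring tool, but as one illustration, a Prometheus-style alerting rule for that overload case might look like this – the metric name and threshold are hypothetical:

  groups:
    - name: capacity
      rules:
        - alert: HighRequestRate
          # fire if the request rate stays above 1000 per second for ten minutes
          expr: rate(http_requests_total[5m]) > 1000
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Request rate is unusually high – consider adding machines"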

Monitoring is also the first line of defence against something going wrong. In an ideal world, the monitoring system will pick up any problems with a system before users have a chance to report that anything is wrong. Then either automated systems can kick in to fix the problem, or a notification is sent to the people running the pipeline so they can take a look and solve it.

The monitoring systems should be available for anyone to look at. The developer should be just as interested in why part of their program is taking a long time to complete as the database staff who are trying to speed up data retrieval, and the ops staff who are trying to make sure there’s 99.999% uptime on the system. These metrics can also feed back to the customer and the Business Analyst as they sit down and talk about the next step of the pipeline.

A final note

This is an example pipeline, and doesn't cover everything. So much is possible with DevOps. For example, there are programs out there that will allow you to visualise and manage your pipelines. For some organisations, repository management takes on a special importance – as they want to make sure that everything being used by a project is from a trusted, in-house source.

Even security can and should be part of the pipeline. Rather than waiting until the end of the process and having the security team sign off a change, why not automate as many security checks as possible too? There are tools available to do some of the standard checks, and these should be integrated into the pipeline before anything is pushed through to deployment.

The DevOps pipeline is just a tool, made up of other tools – but it’s a very powerful one. Using it allows for faster, safer and more successful software delivery.

Want to learn more about DevOps?

Check out our training courses. We have DevOps professional training courses, software and development apprenticeships and Graduate Academy programmes. Discover more about them to see which is right for you.
