Katrina McIvor | 3 June 2016
One of the main principles of DevOps is breaking down silos. Rather than having two separate teams for Development and Operations with conflicting goals, there is one group, all pulling in the same direction. But DevOps is about more than two teams now. Testing should happen throughout, not just be plugged in at the end, with automated tests at every stage of the pipeline, and the quality assurance department should be involved earlier to ensure that everything is on track.
So what about security? Where does security fit in this wonderful 'everyone pulling together in one direction' world?
The DevOps pipeline is designed to make the process of deploying software faster. After the initial check-in to source control by a developer, the code is pulled into a Continuous Integration and build system, such as Jenkins, which then resolves dependencies, builds the code and runs the automated tests. A step further is for something to take the artefacts created by the build server and push them through to deployment. The tests run through the pipeline ensure its success. This is part of how we learn to trust and rely on the pipeline. If the tests have all gone green then the software is considered safe to deploy – it can just go live without any further human contact.
Despite the fact that we automate all the unit tests, automate as many of the UI tests as we can, and even use infrastructure as code to automate building and configuring our servers, the security process is still very manual.
But what if we looked at this again? The DevSecOps argument is that by using the idea of "Rugged DevOps" we can add security into the mix earlier. Why manually test that a security policy is being followed? Why leave the “security stuff” until the end? If we invite the security people to the stand-ups then they can head off problems before they become something big; they can work with everyone involved to ensure that the application is secure and follows all the compliance regulations, rather than just saying no at the end.
The DevSecOps community have written a manifesto which, while not as snappy as the Agile Manifesto, has the ability to change how security is seen within a business.
One of the key ideas is that security should not sit in its ivory tower, separate from the rest of the process. Security people should be involved in all aspects of software development and deployment, and in the wider community – working with the developer and ops people in a company, but also sharing information with other organisations rather than hiding it away. The first tenet is that security needs to lean in, rather than saying no to new ideas. By working with people from stage one rather than being left until the end, there is a higher chance of security concerns and compliance issues being factored in from the beginning.
The rallying cry of the DevSecOps conference last year was that we should start expressing security as code.
Until now, security and compliance have been something of a tick-box exercise. There are pages and pages of requirements that all need checking before something can be signed off and sent to production, and most of this would be checked by a person. Now I’m pretty sure we can agree that most humans are terrible at repetitive actions. They get bored, they go make a drink, they lose their place, they go to sleep and forget what they were doing at all. So why, in a world where we have computers, do we bother with all this boring stuff ourselves?
Rather than having long and dull meetings interpreting security policy, why not write the security policy as code? If it is code then it can be run – say, put into a deployment pipeline and always checked as part of the deployment stage. Code can be version controlled. A CI server won’t get bored running tests; they will always be run. So rather than having your security manuals ignored by developers, write some tests. Automate everything you can!
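The principle can be sketched in plain Ruby, independent of any particular tool: express each rule as data plus a check, and let the pipeline evaluate them all. The rule names and the fake system state below are invented purely for illustration.

```ruby
# A toy sketch of "policy as code": each rule is a name plus a predicate,
# evaluated against a snapshot of system state. A CI job can fail the build
# if any rule returns false.
POLICY = {
  'apache installed' => ->(state) { state[:packages].include?('apache2') },
  'git absent'       => ->(state) { !state[:packages].include?('git') },
  'port 80 open'     => ->(state) { state[:open_ports].include?(80) },
  'port 8080 closed' => ->(state) { !state[:open_ports].include?(8080) }
}

# Evaluate every rule and return a name => pass/fail map.
def evaluate(policy, state)
  policy.map { |name, check| [name, check.call(state)] }.to_h
end

state = { packages: ['apache2', 'git'], open_ports: [80] }
puts evaluate(POLICY, state)
```

Because the policy is just code, it can live in version control next to the application and run on every commit, which is exactly the point.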
I know, I know, not everything can be done this way. This approach has the same drawbacks as any testing approach: you can only test for things you know about – otherwise we would not still be finding security holes in old code. Automating social engineering attacks would also be somewhat tricky.
More tools are becoming available to help with this – the DevSecOps toolkit, the Security Content Automation Protocol (SCAP), OpenSCAP's container compliance project and OWASP's ZAP project, to name a few. These will check your host system against a set of known vulnerabilities and best practices.
But what about your own security policies?
There are other options out there, with a lot built on Serverspec as a test framework.
Our Security Policy:
We’re going to look at two sides of security – general security of the host, and specific security based on our own requirements. Security policies will normally be more complex than this, but it's good to have something to start from. So our security policy is:
- The host machine should be configured according to best practices
- Apache should be installed
- Git should not be installed
- Port 80 should be open
- Port 8080 should not be open
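The policy above can be expressed almost line for line as Serverspec tests. This is a sketch assuming the serverspec gem and a Debian-family host (hence the package name `apache2`); adjust names for your platform.

```ruby
# spec/localhost/policy_spec.rb - our security policy as Serverspec tests,
# run against the local machine via the :exec backend.
require 'serverspec'

set :backend, :exec

describe package('apache2') do
  it { should be_installed }
end

describe package('git') do
  it { should_not be_installed }
end

describe port(80) do
  it { should be_listening }
end

describe port(8080) do
  it { should_not be_listening }
end
```

Run with `rspec` like any other spec file; a failing rule fails the build.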
Security in Configuration Management (using Chef)
This cookbook goes through the CIS benchmark list, a very long and detailed set of requirements for ensuring that your data and systems are compliant with their standards which, according to their website, “eliminate 80-95% of known security vulnerabilities”.
We can add this recipe to the run list for a node and check that best practices are being observed. It is very unlikely that you will ever be 100% compliant (especially in our case, as Apache should not be installed according to the benchmarks, but it's on our list of things that must exist!)
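Pulling a hardening cookbook into your own cookbook is a one-liner. As a sketch, using the community `os-hardening` cookbook as the benchmark cookbook (substitute whichever CIS cookbook you have adopted):

```ruby
# recipes/hardening.rb - wrapper recipe that applies a baseline hardening
# cookbook to the node. The cookbook name is an assumption; declare it as a
# dependency in metadata.rb ("depends 'os-hardening'") as well.
include_recipe 'os-hardening::default'
```

Adding this wrapper recipe to a node's run list means every converge re-checks and re-applies the baseline.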
On the specific side we can turn to InSpec, a testing framework that can be used alongside existing recipes. So in this cookbook we have two recipes, default and test.
For those that don't know Chef so well, this will install Apache and start the service for us (good) and then install Git (bad).
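The original snippet isn't reproduced here, but a default recipe doing what's described would look something like this (again assuming a Debian-family node, hence `apache2`):

```ruby
# recipes/default.rb - install and start Apache (good, per our policy),
# then install Git (bad - deliberately violating the policy so the
# security tests have something to catch).
package 'apache2'

service 'apache2' do
  action [:enable, :start]
end

package 'git'
```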
The test for this is in the other recipe:
We can define our security policy here as a series of rules which will be checked. They are very similar to RSpec or Serverspec (almost as if it is all written in Ruby, and there are only so many ways of defining a more human-readable DSL).
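As a sketch, the specific half of our policy written as InSpec controls might look like this (control names, titles and impact values are invented; the `package` and `port` resources are standard InSpec DSL):

```ruby
# recipes/test.rb - our security policy as InSpec controls. Each control has
# an impact factor, which InSpec uses to weight failures when scoring a node.
control 'web-1' do
  impact 0.7
  title 'Apache must be installed and serving on port 80'
  describe package('apache2') do
    it { should be_installed }
  end
  describe port(80) do
    it { should be_listening }
  end
end

control 'hygiene-1' do
  impact 0.5
  title 'Git must not be installed and port 8080 must be closed'
  describe package('git') do
    it { should_not be_installed }
  end
  describe port(8080) do
    it { should_not be_listening }
  end
end
```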
We can test this with Chef’s Test Kitchen and it will flag the problem areas in our policy:
A company can have a security cookbook that lists all their policies for servers, have them tailored for specific roles and then they just need to be included in the recipe list for that machine. Inspec can be used to test local or remote servers, and with impact factors a score can be calculated per node. This means that we can continuously check a machine, even after deployment. So when should we be testing?
All. The. Time.
Before you deploy run the tests locally - test kitchen, server spec, build it into your Gradle or npm tests. After it has been committed have your CI server run the tests again, plus a battery of other ones - check for default passwords left in. Check for random other packages added. Let Jenkins, Travis, GoCD or whatever you use do the hard work for you. Then after your software is deployed let the monitoring system take over, running tests as part of the security checks, or let something like Chef Compliance take over for you.
This is a very Chef-specific example, but similar tests can be written for whatever configuration management system you use. Containers can also be tested – have a look at the Docker benchmark, or just use Serverspec again. Serverspec can be run against any machine we can get a remote connection to, so the possibilities are endless!
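For example, pointing Serverspec at a remote host only means switching the backend. This is a sketch assuming the serverspec and net-ssh gems; the hostname and user are placeholders.

```ruby
# spec/remote/policy_spec.rb - the same policy checks run over SSH against a
# remote machine. Host and user below are hypothetical examples.
require 'serverspec'
require 'net/ssh'

set :backend, :ssh
set :host, 'web01.example.com'
set :ssh_options, user: 'deploy', keys: ['~/.ssh/id_rsa']

describe port(80) do
  it { should be_listening }
end

describe port(8080) do
  it { should_not be_listening }
end
```

The same spec files can then run from a CI server or a monitoring job on a schedule, which is how the "all the time" testing above becomes practical.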