Graeme Batsman | 14 November 2017
Over the years I have worked across various sectors, from single-person companies to giant central government departments and well-known FTSE 100 corporations. The one thing they all have in common is poor security processes, little interest in security, and only a basic understanding of security controls, risks and threats.
Project Managers and Directors have a lot on their plate, and many of their deliverables are required for the organisation's gating process. This all leads to the number one (arguably wrong) concern: the go-live date. Fitting everything around a go-live date inevitably means work gets rushed; even antivirus firms rush out new products which have not been fully tested.
Across the numerous engagements I have been on over the last seven years, there is often no security review, no security resource and no testing regime. It is not unheard of for contracts to be signed with zero security resource budgeted. Then, many months or even two years later, the general resources start asking questions about security, or an internal audit raises concerns, and voilà, a budget is assigned for a security architect or test manager.
The buck stops with top management or whoever is in charge of the programme, since they approve resourcing plans and budgets. You may think "well this only happens in projects worth £200,000 or less". Wrong. Even programmes worth millions or tens of millions can steam ahead with no security resource to review the ADD (Architecture Design Document), write hardening guides (which even if written are often ignored) or book a pen test.
Organisations get breached because new rollouts may have sloppy code, default user accounts or poor single-factor authentication for employees accessing an 'internal' service over the internet. These new services may not be breached in the first year, but problems can arise years later, long after the project team has moved on.
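One of the pitfalls above, default user accounts left enabled after a rollout, can be caught with even a very simple check. The sketch below is illustrative only: the account names and the list of well-known defaults are my own assumptions, and a real review would query the actual identity store or directory rather than a hard-coded list.

```python
# Illustrative sketch: flag enabled accounts that still use well-known
# vendor-default names after a new service goes live.

# Assumed list of common default account names (not exhaustive).
DEFAULT_ACCOUNTS = {"admin", "root", "guest", "test"}

def flag_default_accounts(enabled_accounts):
    """Return any enabled account names that match well-known defaults,
    ignoring case, sorted for stable output."""
    return sorted(a for a in enabled_accounts if a.lower() in DEFAULT_ACCOUNTS)

# Hypothetical account list pulled from a new rollout:
print(flag_default_accounts(["Admin", "j.smith", "guest", "svc_backup"]))
# -> ['Admin', 'guest']
```

A check like this is trivial to run before go-live; the point is that no one is tasked with running it when a programme has no security resource.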
Even when a large programme has a security resource (Architect or Test Manager) who liaises with corporate information security, they can still be wrongly bypassed or overruled. Take pen test findings, or other general programme stoppers. When these are handed to a Technical Security Specialist, they can work out:
- the chance of the finding being exploited
- whether it can be exploited externally, and
- what could happen if it were
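The triage steps above can be sketched as a simple scoring exercise. This is a minimal, hypothetical sketch, not any standard methodology: the 1–5 scales, the doubling for external exposure and the thresholds are all my own illustrative assumptions (real-world triage would typically use a framework such as CVSS).

```python
# Hypothetical triage of a pen test finding: likelihood, external
# exploitability and impact combined into a go-live recommendation.
# All scales and thresholds below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    likelihood: int              # 1 (unlikely) .. 5 (almost certain)
    externally_exploitable: bool # reachable from the internet?
    impact: int                  # 1 (negligible) .. 5 (severe, e.g. data breach)

def triage(finding: Finding) -> str:
    """Return a coarse recommendation for the programme board."""
    score = finding.likelihood * finding.impact
    if finding.externally_exploitable:
        score *= 2  # internet-facing defects are weighted more heavily
    if score >= 20:
        return "block go-live until fixed"
    if score >= 10:
        return "go live only with extra mitigations"
    return "accept risk with documented sign-off"

# Example: default credentials on an internet-facing admin page
print(triage(Finding("default admin password", 4, True, 5)))
# -> block go-live until fixed
```

The exact numbers matter far less than the point of the article: this reasoning only happens at all if the finding actually reaches someone qualified to do it.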
They will form opinions on the above and may suggest delaying go-live so that the defects can be fixed, or extra mitigations put in place to counter them. Now imagine the problem is hidden from the security resource, or simply pushed up the management chain for a risk acceptance note to be approved, shelving the defect temporarily or forever. Disheartening as it is, this does happen.
If higher management approve minor changes to the functional requirements (say, a 'pink' login box instead of the stipulated 'blue' one), it is unlikely to make front-page news next year. Accepting a security defect is different. Next year tens of thousands of customer records might be leaked, and an external investigation may find that the ADD specified two-factor authentication yet it was never implemented. The risk acceptance note is dug up, bearing a Director's signature approving it. The FCA or ICO (or of course Europe and GDPR) will question why it was approved despite the technical risk.
Cyber security in the boardroom is a big issue. Security, both technical and GRC (Governance, Risk and Compliance), must be driven from the top, because people listen to leadership. Board members and Directors need to understand the risks and threats better, and view security as an enabler rather than a hindrance, helping to prevent future breaches (and bad publicity).
At QA we have developed the most comprehensive end-to-end Cyber Security training portfolio, providing training for the whole organisation, from end-user to executive-board-level courses, as well as advanced programmes for security professionals.
Visit www.qa.com/cyber for more information