Over the years I have worked across various sectors, from single-person companies to giant central government departments and well-known FTSE 100 corporations. The one thing they all have in common is poor security processes, little interest in security, and only a basic understanding of security controls, risks and threats.
Project Managers and Directors have a lot on their plate, for example:
- documentation approval
- operational acceptance testing
- user acceptance testing
- performance volume testing
- penetration testing (or 'IT health checks', as government prefers to call them)
Often many of these are required by the organisation's gating process, which all leads to the number one (arguably wrong) concern: the go-live date. Fitting everything around a go-live date clearly leads to work being rushed; even antivirus firms rush to push out new products that have not been fully tested.
On the numerous engagements I have been on over the last seven years, there is often no security review, resource or testing regime. It is not unheard of for contracts to be signed with zero security resource budgeted. Then many months, or even two years, later the wider project team starts asking questions about security, or an internal audit raises concerns, and voilà, a budget is assigned for a security architect or test manager.
The buck stops with top management or whoever is in charge of the programme, since they approve resourcing plans and budgets. You may think "well this only happens in projects worth £200,000 or less". Wrong. Even programmes worth millions or tens of millions can steam ahead with no security resource to review the ADD (Architecture Design Document), write hardening guides (which even if written are often ignored) or book a pen test.
Organisations get breached because new rollouts may have sloppy code, default user accounts or poor single-factor authentication for employees accessing an 'internal' service over the internet. These new services may not be breached in the first year, but problems can arise years later, long after the project team has moved on.
Even when a large programme has a security resource (Architect or Test Manager) who liaises with corporate information security, they can still be wrongly bypassed or overruled. Let's take pen test findings, or general programme stoppers like:
- no two-factor authentication
- requests to disable operating system updates to stop the application falling over
- missing patches
- XSS (cross site scripting) defects
When these findings are handed to a Technical Security Specialist, they can work out:
- the chance of this being exploited
- if it can be exploited externally, and
- what could happen if it was
They will have opinions on the above and may suggest delaying go-live so the issues can be fixed, or extra mitigations put in place to counter the defect. Imagine if the problem is hidden from the security resource, or simply pushed up the management chain for a risk acceptance note to be approved, thus ignoring the defect either temporarily or forever. Disheartening as this is, it does happen.
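The triage a specialist performs can be sketched as a simple scoring function. This is an illustrative sketch only: the names, weights and thresholds below are assumptions for demonstration, not any organisation's or standard's actual risk methodology.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    likelihood: int             # 1 (unlikely) to 5 (almost certain) chance of exploitation
    externally_exploitable: bool
    impact: int                 # 1 (negligible) to 5 (severe) consequence if exploited

def triage(finding: Finding) -> str:
    """Suggest an action for a pen test finding using a basic risk matrix."""
    score = finding.likelihood * finding.impact
    if finding.externally_exploitable:
        score *= 2  # internet-facing defects carry extra weight
    if score >= 20:
        return "delay go-live: fix before release"
    if score >= 10:
        return "add compensating mitigations before release"
    return "track and fix in a later release"

# Example: an XSS defect reachable from the internet
print(triage(Finding("reflected XSS on login page", likelihood=4,
                     externally_exploitable=True, impact=3)))
# → delay go-live: fix before release
```

The point of such a model is not precision but forcing the three questions above to be answered explicitly, so a risk acceptance note cannot quietly skip them.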
If higher management approve minor changes to the functional requirements (e.g. a 'pink' login box instead of the stipulated 'blue' one), it is unlikely to make front-page news next year. Accepting a security defect is different. Next year tens of thousands of customer records might be leaked, and an external investigation may find that the ADD specified two-factor authentication yet it was never implemented. The risk acceptance note is dug up, complete with a Director's signature approving it. The FCA or the ICO (or, under GDPR, European regulators) will question this and ask why it was approved despite the technical risk.
Cyber security in the boardroom is a big issue, and security, both technical and GRC (Governance, Risk and Compliance), must be driven from the top, because people listen to leadership. Board members and Directors need to understand the risks and threats better, and view security as an enabler rather than a hindrance, helping to prevent future breaches (and the bad publicity that follows).
At QA we have developed the most comprehensive end-to-end Cyber Security training portfolio providing training for the whole organisation, from end user to executive board level courses as well as advanced programmes for security professionals.
Visit www.qa.com/cyber for more information
Graeme joined QA in 2017 and has worked in security on and off for 15 years. His last role was as a Senior Technical Security consultant at Capgemini covering the public and private sector.
From the age of 17, he was running investigations into online scams and phishing. Today he teaches and/or has written: CEH, OSINT, CTF (conventional or OSINT), CyberFirst, practical encryption and Security+. Graeme is an avid writer with 130+ articles to his name and a chapter in a published book.
He loves thinking like a hacker to review and tweak settings with a fine-tooth comb.