The UK’s National Audit Office (NAO) has just published the findings of its investigation into the WannaCry cyber attack that significantly disrupted the National Health Service (NHS) in May this year. To those who have followed this story from the start, it won’t be a surprise that the NAO concluded that NHS trusts had not acted on a 2014 warning from central government to patch or migrate away from vulnerable older software.
Following WannaCry, cyber security specialists emphasised the need for upgrades and patches to be applied on a timely basis. Much the same was said after Petya, and it will probably be repeated after the next large-scale cyber attack.
All of this makes for great headlines, but to say that systems should be upgraded from vulnerable older versions and that patches should be implemented immediately is to ignore the reality of the modern world.
Let’s take system upgrades first – these cost money. Hardly a revelation, but in an NHS that is already strapped for cash, nobody in senior management is going to authorise a costly IT upgrade when the money could be spent on primary care instead. Imagine the headlines and criticism from the Daily Mail et al if/when a system upgrade runs late or goes wrong, and you can see why the money will go into primary care.
While not as exposed to media criticism, the private sector faces a similar issue, as funds available for investment are also finite. Company directors have to choose where these funds go – into a project that will grow revenues and increase profits, or into a project that has no immediate impact on the bottom line and exists solely to protect against a future cyber attack that may or may not happen? I know which way that choice usually goes.
In contrast, patches are free, as they are provided by the software vendor, but that is not to say there is no cost. The majority of IT networks in both the public and private sectors have been cobbled together over time, a legacy of mergers, demergers, staff changes and previous investment decisions. As a result, these networks operate on a knife edge, given the general lack of compatibility between different applications and the bespoke software code that is usually required to make the whole thing work. A software patch may alter how these applications interact, forcing IT teams to write yet more bespoke code to ensure that the network continues to work and the lights stay on.
However, not every company has the luxury of a test environment in which to assess these issues safely. There is therefore a real risk that the need for additional bespoke code is not identified before a patch is implemented. If this risk crystallises, the house of cards that is most networks will collapse. How long the new code will take to write is anyone’s guess and, in the meantime, staff can’t work and the company is losing money. Given that, if you are head of IT, are you going to implement each and every patch as soon as it is issued, knowing that if it doesn’t work as expected, you might be out of a job?
Keeping software up to date and implementing patches on a timely basis is clearly a good idea, and I’m not arguing a contrary position. However, the reality of modern business, and the way in which decisions are made in both the public and private sectors, cannot be ignored.
Who knows where on earth this leaves cyber insurers, though, given that some policy wordings require companies to use their best endeavours to keep the network up to date. I can’t wait for the discussion on the claim arising from a failed software patch that was addressing a security issue but instead caused a complete network outage lasting two weeks, with the associated loss of profit. Particularly if the incident could have been avoided by simply doing nothing…