There are two things about security and compliance that bother me. The first is security and the second is compliance; the first clearly isn’t very effective and the second is a nonsense.
The problem with both is that they are abstract ideas that have little meaning in reality. If you try to define the concept of being secure it really boils down to not being insecure. Sure, you can say that security is the maintenance of availability, confidentiality, integrity and this, that and the otherity – but it really means nothing because our knowledge of security is quantified only by its loss. We could spend £1 million per month on security and not be secure; we could spend nothing on security and be secure. The difference is solely defined by whether we are currently compromised or breached; and that, empirically, has little to do with the size of our security budget.
In some ways, compliance is a bureaucratic methodology to ensure that we at least do something. The purpose is to try to ensure that we are secure by regulation. There are two approaches: one is to say you must be secure or else; while the other says you must do this, and this and this or else. In the first instance, just like security itself, a company is compliant regardless of what it does right up until a breach proves that it is not compliant – so what is the point? In the second instance, doing this and this and this to be compliant will not make you secure, which is the purpose of compliance – so what is the point?
The danger comes when you put the two together. You have to be compliant even if it is pointless; that, frequently, is the law. Compliance exists to provide security, yet all too often compliance is all that is done in the name of security. Security thus becomes a tick-box compliance exercise, which won’t make us secure but will at least keep us legal. The danger in compliance is that it can lower the bar on security.
So is there no hope? Should we all just accept our insecurity; simply tick the minimum number of boxes necessary to be compliant and hope for the best? Well, no – there is hope; but it’s coming from the practitioners (CSOs) rather than the theorists (security industry) and compliance legislators (governments). What is happening is the slow realisation that security is not a thing in and of itself, but nothing more than an aspect of business risk management. It is not a thing to be acquired, but a concept to be managed.
A new report from the Wisegate community of IT executives – including CSOs – demonstrates that security theory is being replaced by risk management methodologies. Rather than a blanket desire to ‘be secure’, CSOs are starting to manage the business risk. Instead of security being a meaningless concept protected by numerous discrete and leaky band-aids, it is becoming part of the continuous management of the business’s level of risk tolerance. Within this approach, compliance becomes an aspect of risk management; security becomes a process within risk management; and people become as important as products.
The report is called Moving From Compliance to Risk-Based Security: CISOs Reveal Practical Tips – it’s worth a look.
One thing that RSA week always brings is dozens of new surveys and research reports. I looked at three for Infosecurity Magazine on Friday:
- 2013 Security Report (Check Point)
- Targeted attacks and how to defend against them (Trend Micro/Quocirca)
- Managing information security: Public sector survey report (Clearswift/SPS)
They are all looking at different issues, but there is a common finding in all of them – a disconnect between recognising a threat and taking adequate action to mitigate it. More specifically, they all say that the public sector is the worst offender.
From Check Point we learn that government is the leading offender in the use of high risk applications (remote admin, file storage and sharing, P2P file sharing, and anonymizers). In particular, government is more likely than any other sector to suffer an incident that could lead to data loss at least once every week; and it is also the leading offender in sending credit card information to external resources.
From Clearswift we learn that “Despite 93% of [UK public sector] organisations sharing sensitive information with external partners, 30% don’t view information security as a high priority when selecting a partner.”
Trend Micro, commenting on its own report, says, “Public sector respondents were guilty of a worrying level of complacency, with over a third claiming targeted attacks are not a concern, despite 74 per cent of such organisations having been a victim of these attacks in the past.”
Put quite simply, government cannot and must not be trusted with our personal information. In the UK, this is the government that plans to build a national DNA database within the NHS; and that wishes to be able to intercept our private communications at will. For the sake of our security, it must be stopped.
Compliance – at least European regulatory compliance – bothers me. Whenever I speak to a security expert, those concerns are allayed for as long as we talk; and then they come back again.
The problem is that Europe passes principle-based legislation (the US is more likely to pass rule-based legislation). The former tells you what must be achieved (the principle), while the latter tells you how it must be done (the rules).
The European Data Protection Directive is a perfect example of principle-based legislation. It says that personal information must be held securely; but it doesn’t tell you how it should be done.
Here’s my problem. Data that hasn’t been lost or stolen has, de facto, been held securely and the company is in compliance – even if it spends nothing on compliance. Data that has been lost or stolen has not, de jure, been held securely and the company fails compliance – even if it has spent many millions of pounds on compliance. The existence or lack of infosecurity defences is irrelevant: if you lose the data, you are in breach of the act; if you do not lose the data, you are not.
I’m not interested in claims that proof you spent money on security will make the ICO (a marketing man, mark you – not a lawyer) go easy on you. That’s just marketing dross to hide the underlying contradiction.
What I want to know is quite simple. How can it possibly be right to frame a law under which someone who tries to comply can fail compliance, while someone who ignores it entirely can remain compliant? The result is that there is no logical reason to spend money on securing personal data – just hope you don’t get hacked. This is aggravated by the common and growing perception that if you are targeted, you will be breached. So if you are targeted, you will have failed compliance whether you tried to comply or not. Why bother?