There can be little doubt that there were huge security failings at Target when it was breached late last year and 40 million credit card details were stolen. It had, just two months previously, been assessed compliant with the PCI DSS security standard. While we are not privy to the details of the assessment, two months later Target was clearly no longer compliant (it had failed to adequately segment its networks, had stored the cards’ security codes, and more).
Now Target’s PCI qualified security assessor (QSA), Trustwave, is being sued for failures that led to ‘monumental’ damage. That is going to be difficult to prove. Trustwave will have followed the PCI DSS guidelines, and proving that it did not will be difficult. In fact, it will be easier to demonstrate failings in PCI DSS than in Trustwave’s commitment to it. “Several years ago,” Ilia Kolochenko, CEO at High-Tech Bridge, told me by email, “I notified the PCI Council about vulnerabilities (including a critical one) on its own website. Obviously, the PCI DSS standard is continuously improving, but I think that practically speaking it’s still far from being perfect today.”
Now Target’s security monitoring firm, Trustwave, is being sued for failures that led to ‘monumental’ damage. That is not merely something difficult to prove, it is monumentally absurd. If we were to sue our anti-virus supplier every time it fails to stop a virus, we would very soon have no security industry at all. Perhaps we should sue the police for not stopping that car theft in Much-Binding-in-the-Marsh in 1952…
No, this lawsuit is absurd and deserves to fail monumentally. It is just the banks doing what they do best – attempting to spread the blame elsewhere, and make someone else pay for their own errors. For make no mistake, the fault here lies with the finance industry more than anyone else. It lies with the banks for not insisting that bank cards switch to the far more secure EMV (chip and PIN) standard, and with the PCI Security Standards Council for spreading the lie that conforming to PCI DSS will ensure security.
(As an aside, some four weeks ago I wrote to the PCI Security Standards Council – the group that develops the PCI Data Security Standard – and asked: “Do all the recent US retail breaches prove that PCI DSS doesn’t work?” I did not get a reply.)
So, is Trustwave faultless? Certainly not. Trustwave provided both the security assessment and the security mitigation. This is clearly wrong: if it did not pick up problems in the assessment, it would not be looking for them in the mitigation. But it did so because it was allowed to. So once again it is PCI that is at fault. You cannot blame Trustwave for making money where it is permitted to do so; it is PCI that must ensure that assessment and mitigation are segregated.
Until we escape from the wrong and blinkered view that compliance provides security, there will be more Targets in the future.
So, in the final analysis, compliance – a concept designed and intended to improve corporate security levels – actually does the opposite. In promising something it cannot deliver, it draws resources from the one organization that has any chance of succeeding: your own, in-house full-time professional security staff.
I expand in a post on Lumension’s blog: Compliance is bad for security
There are two things about security and compliance that bother me. The first is security and the second is compliance; the first clearly isn’t very effective and the second is a nonsense.
The problem with both is that they are abstract ideas that have little meaning in reality. If you try to define the concept of being secure it really boils down to not being insecure. Sure, you can say that security is the maintenance of availability, confidentiality, integrity and this, that and the otherity – but it really means nothing because our knowledge of security is quantified only by its loss. We could spend £1 million per month on security and not be secure; we could spend nothing on security and be secure. The difference is solely defined by whether we are currently compromised or breached; and that, empirically, has little to do with the size of our security budget.
In some ways, compliance is a bureaucratic methodology to ensure that we at least do something. The purpose is to try to ensure that we are secure by regulation. There are two approaches: one is to say you must be secure or else; while the other says you must do this, and this and this or else. In the first instance, just like security itself, a company is compliant regardless of what it does right up until a breach proves that it is not compliant – so what is the point? In the second instance, doing this and this and this to be compliant will not make you secure, which is the purpose of compliance – so what is the point?
The danger comes when you put the two together. You have to be compliant even if it is pointless; that, frequently, is the law. The law’s purpose is to provide security, yet all too often concentrating on compliance is all that is done in the name of security. Security thus becomes a tick-box compliance effort, which won’t make us secure but will at least keep us legal. The danger in compliance is that it can lower the bar on security.
So is there no hope? Should we all just accept our insecurity; simply tick the minimum number of boxes necessary to be compliant and hope for the best? Well, no – there is hope; but it’s coming from the practitioners (CSOs) rather than the theorists (security industry) and compliance legislators (governments). What is happening is the slow realisation that security is not a thing in and of itself, but nothing more than an aspect of business risk management. It is not a thing to be acquired, but a concept to be managed.
A new report from the Wisegate community of IT executives – including CSOs – demonstrates that security theory is being replaced by risk management methodologies. Rather than a blanket desire to ‘be secure’, CSOs are starting to manage the business risk. Instead of security being a meaningless concept protected by numerous discrete and leaky band-aids, it is becoming part of the continuous management of the business’s level of risk tolerance. Within this approach, compliance becomes an aspect of risk management; security becomes a process within risk management; and people become as important as products.
The report is called Moving From Compliance to Risk-Based Security: CISOs Reveal Practical Tips – it’s worth a look.
Compliance – at least European regulatory compliance – bothers me. Whenever I speak to a security expert, those concerns are allayed for just so long as we talk; and then they come back again.
The problem is that Europe passes principle-based legislation (the US is more likely to pass rule-based legislation). The former tells you what must be achieved (the principle), while the latter tells you how it must be done (the rules).
The European Data Protection Directive is a perfect example of principle-based legislation. It says that personal information must be held securely; but it doesn’t tell you how it should be done.
Here’s my problem. Data that hasn’t been lost or stolen has, de facto, been held securely, and the company is in compliance – even if it spends nothing on compliance. Data that has been lost or stolen has not, de jure, been held securely, and the company fails compliance even if it has spent many £millions on compliance. The existence or lack of infosecurity defences is irrelevant: if you lose that data, then you are in breach of the act; if you do not lose the data, then you are not in breach of the act.
I’m not interested in claims that proof you spent money on security will make the ICO (a marketing man, mark you – not a lawyer) go easy on you. That’s just marketing dross to hide the underlying contradiction.
What I want to know is quite simple. How can it possibly be right to frame a law that states someone who tries to comply can fail compliance, while someone who ignores compliance can be compliant? The result is that there is no logical reason to spend money on securing personal data – just hope you don’t get hacked. This is aggravated by the common and growing perception that if you get targeted, you will get breached. So if you get targeted, you will have failed compliance whether you try to comply or not. Why bother?
Sometimes you just have to laugh for fear of crying. The Information Commissioner’s Office (ICO) strategy for 2012 makes me do just that. It is a 17-page, purple-prose, self-aggrandizing Declaration of Independence, declaring itself to be independent of political, public and media pressure. It should simply say: ‘we will uphold the law in our role defined by the law.’
But it doesn’t do that. It seems more concerned to distance itself from the letter of the law by defining its own interpretation of the law, and to align itself with that interpretation. It has, in short, evolved an overblown idea of its function, which it attempts to define in this rather long and mis-titled public-relations document. I give just a few examples:
we will neither be exclusively an educator nor exclusively an enforcer. We are both, even though we prefer to deliver our desired outcomes through help and encouragement rather than force. This means we are primarily a facilitator…
In the time-honoured liberal tradition it has failed to understand that facilitation is delivered by enforcement, not enforcement delivered by facilitation.
We cannot address all risks to the upholding of information rights equally nor should we attempt to do so.
Yes, it most certainly should attempt to address all risks to the upholding of information rights equally.
we will treat all cases that come to us fairly and properly but not necessarily pursue them with equal vigour.
This is perhaps one of the most worrying comments. The ICO is declaring that it will decide, arbitrarily, whether your complaint is worth its attention. Not the law, not the judiciary, not parliament, not you, but its own self will pre-judge a case and decide whether or not to pursue it with vigour.
we will devote particular effort to investigating, analysing and ultimately enforcing in those cases that we see as contributing most to the delivery of our desired outcomes and not just those presenting the biggest risk…
Not just those presenting the biggest risk. It really does say that it, the body responsible for enforcing the Data Protection Act, is not necessarily going to spend its effort on the biggest risk.
Laugh or cry? You decide.
Security and compliance go together like love and marriage – you can’t have one without the other. That is the common perception (we’re talking of course solely about the infosec aspects of compliance). But is it true? Are security and compliance synonymous? If you are secure, will you be compliant? If you are compliant, will you be secure? What, in short, is the relationship between the two?
Here’s my problem. The purpose of the infosec aspects of the Data Protection Act is to keep personal data secure. But how can you be compliant with this requirement? If you have the strongest security in the world, you still cannot guarantee that the data won’t be lost. If you have virtually no security, you might never lose the data. The only empirical test for conformance, or at least the lack of it, is whether you keep personal data safe. If you lose the data you are not compliant, regardless of your security. If you do not lose the data, you are compliant, regardless of your security.
This leads to an important question: if compliance is purely a legal requirement effectively disconnected from security, will it lead companies to concentrate on legal compliance to the detriment of true security? To help me understand, I spoke to a number of security experts.
Lars Davies, CEO at Kalypton and a one-time visiting fellow at the Centre for Commercial Law Studies, Queen Mary, University of London, is clear on the relationship. “The problem comes from the fact that compliance and security are not commutative,” he told me. “One does not necessarily infer the other. Compliance infers security. Security does not infer compliance… Compliance tells you what you need to achieve. Good security is simply one of a set of components that you need to achieve the goal.”
Infosec in this sense is a tool for compliance, not a required effect of compliance; although confusion comes from the need to use security (and therefore gain security) in order to achieve compliance.
“If you are compliant then you must be secure; your security must be fit for purpose,” continued Lars. “You simply cannot not end up with the lowest common denominator at all and still remain vulnerable. If you are vulnerable then you cannot, by definition, be compliant.” So, “If you are compliant you must, by definition also be secure… Compliance and security are like pregnancy, you either are or you are not.”
This gives me a problem, since I believe it is impossible to be secure.
“You define security as the ability to avoid compromise,” replies Lars. “That is one definition. However, it does not say avoid compromise absolutely. It is impossible to avoid compromise if you are the subject of a targeted attack. However, you can make such attacks extremely difficult, and you can put in monitoring processes and procedures to try to detect and thus counter those attacks. That is also part of achieving security. You must continually refresh and update your security tools based on your on-going assessment of their suitability to meet your requirements. That is what you need to do as part of your efforts to achieve compliance.”
This is the view of Edy Almer, VP Marketing and Business Development at Safend. “The reality is that to be ‘secure’ is a continuum not a discrete state. Compliance mandates acceptable risk points along that continuum. If you are compliant there is a very reasonable possibility that your risk is lower than it would otherwise be.”
David Emm, senior security researcher at Kaspersky Lab, comes to a slightly different conclusion from the same argument. “Security is a bit like housework, by which I mean it’s a process, rather than a fixed set of actions or tools implemented in an organisation. Regulations are invariably static and may not keep pace with technological developments – either positive ones or those that attackers make use of. I think there’s a parallel here with health and safety legislation. A company may be compliant for the annual inspection; but if it plays fast-and-loose with safety for the rest of the year, how ‘compliant’ is it in reality?”
Howard Sklar, senior corporate counsel at Recommind and advisor to the InfoRiskAwareness Project, takes a slightly different view. “Being ‘compliant’ doesn’t necessarily mean secure. ‘Compliance’ means ensuring that your people, process, and technology all work together to meet standards or policies. To turn compliance into security, you need to make sure that the standards you set are sufficient to keep you secure. If your policies allow for open access for everyone, including the public, then having totally insecure computers would still be compliant: you’re meeting the requirements that you set out. They’re just the wrong requirements.”
Paul Davis, Director-Europe at FireEye, has a more traditional security-centric view. “Simply put,” he says, “compliance is a necessary step towards better security; but inadequate by itself to protect against advanced malware and sophisticated cyber criminals. Compliance regulations set the minimum requirements for organisations to meet by only accounting for generally well-known cyber attack tactics and threats. We’ve all heard of the successful attacks on ‘compliant’ organisations like Epsilon marketing and even computer security companies, like RSA. Today’s advanced malware can bypass traditional and next-generation firewalls, AV, IPS, and Web gateways easily. Being ‘compliant’ does not mean the network has been ‘secured’, but rather that it has taken the first step towards protecting customer data, intellectual property, and sensitive information. Compliance is only one of the first steps towards a secure IT infrastructure.”
Mehlam Shakir, CTO at NitroSecurity, sees the danger in treating compliance as the winning line rather than just ‘one of the first steps’. “For many businesses it is a vital necessity that they are compliant with regulations such as PCI DSS, GPG13 or CoCo; but there is a rapidly emerging trend of organisations just thinking about what needs to be achieved to reach compliance – which is undermining and negating the security measures that should be in place as a first port of call. This means that more and more businesses are finding themselves at risk because basic security measures are either not in place or not up-to-date.”
“Being compliant to a standard is important to having better security; however it doesn’t always guarantee that the network is secure,” agrees Alex Teh, Commercial Director, Vigil Software. “What I mean is that quite often being compliant to a particular standard like PCI DSS relates only to the part of the network that is holding credit card information and not security in general. Quite often the role of a QSA is to limit the extent of the network that needs to be PCI compliant. This often means ruling out major parts of the network to reduce cost.”
And there’s another potential by-product. Compliance requirements could persuade companies to become ‘early adopters’ of apparently relevant new technologies. “But if the organization is one of the ‘late majority’ in the technology adoption lifecycle,” explains independent governance and risk consultant Roger Southgate, “they may be significantly less vulnerable than organizations that are early adopters of new technologies, and in effect the trail blazers in identifying what security requirements are most appropriate for their risk appetite.” ‘Don’t be the guinea-pig’ has always been good business advice.
Am I any more clear about the relationship between security and compliance? No, I am not. The main issue is well described by Frank Coggrave, General Manager EMEA, Guidance Software. “Compliance is backward facing and security should be forward facing,” he explains. “Compliance is about adherence to rules that have been set in the past (by definition) that reflect the thoughts, worries and concerns that created the desire to have the rule. Although they can try to take account of future expectations they will always fail to do so, to some greater or lesser extent. If compliance was perfect why would we have a set of financial rules called Basel III – Basel I should have been enough. Security is about responding to today’s and tomorrow’s threats and concerns. It needs to be more reactive than a compliance cycle. Compliance is important to ensure you don’t leave yourself exposed to the old stuff, but it’s no security blanket – there are too many moths active out there.”
So after all of this I can come to only one conclusion. If security and compliance are like love and marriage – we need a divorce. Ensure compliance for the sake of compliance regardless of security, and seek security for the sake of security regardless of compliance. Don’t let one influence the other and you will be more successful in both.
The Institute of Directors – talking net neutrality, compliance and breach notification with Richard Swann
Richard Swann is head of IT at the Institute of Directors (IoD), an organization that needs no introduction to anyone in the UK. To non-UK readers it is a non-party political (yet highly political) independent body formed by royal charter to foster excellence in business – and its members include 43,000 of the UK’s leading businessmen. Given the IoD’s pre-eminent position as a champion of all business and an influencer of government, I asked Richard if he would talk about some of the more contentious business/computing issues of the day. He agreed; and we started with ‘net neutrality’.
“We’re in favour of net neutrality,” he said. “We try to represent business to the government, and we have a fear that loss of net neutrality could discriminate against small businesses if the big boys are able to buy, shall we say, advantageous service.” Richard has other concerns: how, for example, would ISPs decide which traffic to restrict? In the USA, the country’s largest ISP famously, or infamously, started to restrict the bandwidth for P2P traffic. Other companies apparently also use deep packet inspection to recognise and discriminate against P2P. It’s a slippery slope. “We’re talking about examining the data that you’re passing. That leads to the possibility that some entities could decide for themselves, well this looks a bit iffy, we should maybe examine things a bit closer on who is originating this traffic, and do something about it.” That’s a dangerous direction.
But if we’re talking about deep packet inspection, what, I asked him, about behavioural advertising? “That’s a difficult one to answer,” he replied. Bear in mind that the IoD has to represent the interests of all of its members – and that includes those perfectly legitimate companies that would dearly love to have access to, and would use responsibly, behavioural data in order to market their products. “Not speaking for the IoD, but from a purely personal point of view, I don’t have too many problems with behavioural analysis – provided that it is consensual; provided that the user can clearly opt in or out of the process.”
We turned to one of the issues that bothers me considerably: the increasing and arbitrary powers of the police in the UK. One current debate is happening with Nominet, the UK company that maintains the official register of .uk domain names. SOCA, the UK’s Serious Organised Crime Agency, requested that Nominet take down .uk domains on its own say-so. “The rate of change brought about by the internet has been phenomenal,” said Richard, “but at the same time it has brought about an increasing amount of criminal and fraudulent activity. We will support any effort that will provide a consistent and controlled response leading to the take down of fraudulent and criminal sites; but there has got to be judicial oversight. A police force or a police body cannot set themselves up as being the deciding factor. It’s a bit like a search warrant,” he continued. “The police cannot just turn up and search a private property without justifying themselves to the court and getting a search warrant signed by a judge. So, to me, yes, by all means if they have the evidence they should be able to get a website taken down – but there has got to be that judicial oversight.”
It is, of course, not simply the police who want a greater say in what can and cannot happen on the internet. Increasing government regulation has spawned an entire new security industry: compliance – complying with legal requirements for the use and storage of data. Is this, I asked Richard, a problem for business in the UK? “That’s a funny one, actually, because I’ve just been speaking to our policy expert who deals with this area. It’s true that we are very much in favour of cutting red tape and making life easier, especially for the SMEs; but one of the things he said to me was that compliance with things like data protection doesn’t seem to be much of an issue with our members.” This did surprise me. I had expected that, since so many of the regulations fail to define what you have to do, only what you have to achieve, this would require business to spend more time and effort than might strictly be necessary. But no. “I think the principle behind these laws, the need to protect people’s privacy, the need to prevent bribery, are so well accepted that most businesses don’t have a problem with them.”
Which just left my final question: breach notification. Take Sony, I suggested. It took the company rather a long time before it came clean about the breach. Should this be allowed? Should we have a law requiring that as soon as a breach is discovered, anyone affected must be notified immediately? “I think we should,” said Richard. “In this instance the number of people concerned is incredible; and in that week before the loss was publicised, think of the damage that could be done. The sooner people know that their financial details might have been compromised, then the sooner they can do something about it. If that was my personal details, I would want to be protecting myself as quickly as I possibly could.” And while we both believe we need a European Data Breach Notification Directive, neither of us could see why we haven’t got one. “In fact,” added Richard, “from a personal point of view you would think that it would be quite easy to implement – it would only take a relatively small addition to the existing act that requires us to protect the data in the first place.”