So, in the final analysis, compliance – a concept designed and intended to improve corporate security levels – actually does the opposite. In promising something it cannot deliver, it draws resources away from the one group that has any chance of succeeding: your own in-house, full-time professional security staff.
I expand in a post on Lumension’s blog: Compliance is bad for security
There are two things about security and compliance that bother me. The first is security and the second is compliance; the first clearly isn’t very effective and the second is a nonsense.
The problem with both is that they are abstract ideas that have little meaning in reality. If you try to define the concept of being secure it really boils down to not being insecure. Sure, you can say that security is the maintenance of availability, confidentiality, integrity and this, that and the otherity – but it really means nothing because our knowledge of security is quantified only by its loss. We could spend £1 million per month on security and not be secure; we could spend nothing on security and be secure. The difference is solely defined by whether we are currently compromised or breached; and that, empirically, has little to do with the size of our security budget.
In some ways, compliance is a bureaucratic methodology to ensure that we at least do something. The purpose is to try to ensure that we are secure by regulation. There are two approaches: one is to say you must be secure or else; while the other says you must do this, and this and this or else. In the first instance, just like security itself, a company is compliant regardless of what it does right up until a breach proves that it is not compliant – so what is the point? In the second instance, doing this and this and this to be compliant will not make you secure, which is the purpose of compliance – so what is the point?
The danger comes when you put the two together. You have to be compliant even if it is pointless – that, frequently, is the law. The law's purpose is to provide security; so all too often concentrating on compliance is all that is done in the name of security. Security thus becomes a tick-box compliance effort, which won't make us secure but will at least keep us legal. The danger in compliance is that it can lower the bar on security.
So is there no hope? Should we all just accept our insecurity; simply tick the minimum number of boxes necessary to be compliant and hope for the best? Well, no – there is hope; but it’s coming from the practitioners (CSOs) rather than the theorists (security industry) and compliance legislators (governments). What is happening is the slow realisation that security is not a thing in and of itself, but nothing more than an aspect of business risk management. It is not a thing to be acquired, but a concept to be managed.
A new report from the Wisegate community of IT executives – including CSOs – demonstrates that security theory is being replaced by risk management methodologies. Rather than a blanket desire to ‘be secure’, CSOs are starting to manage the business risk. Instead of security being a meaningless concept protected by numerous discrete and leaky band aids, it is becoming part of the continuous management of the business’ level of risk tolerance. Within this approach, compliance becomes an aspect of risk management; security becomes a process within risk management; and people become as important as products.
The report is called Moving From Compliance to Risk-Based Security: CISOs Reveal Practical Tips – it’s worth a look.
Compliance – at least European regulatory compliance – bothers me. Whenever I speak to a security expert, those concerns are allayed for just so long as we talk; and then they come back again.
The problem is that Europe passes principle-based legislation (the US is more likely to pass rule-based legislation). The former tells you what must be achieved (the principle), while the latter tells you how it must be done (the rules).
The European Data Protection Directive is a perfect example of principle-based legislation. It says that personal information must be held securely; but it doesn’t tell you how it should be done.
Here’s my problem. Data that hasn’t been lost or stolen has, de facto, been held securely and the company is in compliance – even if it spends nothing on compliance. Data that has been lost or stolen has not, de jure, been held securely and the company fails compliance even if it has spent many millions of pounds on compliance. The existence or lack of infosecurity defences is irrelevant: if you lose that data, then you are in breach of the Act; if you do not lose the data then you are not in breach of the Act.
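The paradox can be stated in a few lines of code. This is a toy illustration, not a legal model: under a purely outcome-based reading of the Directive, the compliance test never looks at the security budget at all.

```python
# Toy illustration of outcome-based compliance (not a legal model):
# the judgment depends only on whether a breach occurred.
def is_compliant(security_spend_gbp: int, data_breached: bool) -> bool:
    # Note that security_spend_gbp never enters the test.
    return not data_breached

# Spend millions and get breached anyway: non-compliant.
assert is_compliant(5_000_000, data_breached=True) is False
# Spend nothing and stay lucky: compliant.
assert is_compliant(0, data_breached=False) is True
```

The unused parameter is the whole point: no amount of spending changes the verdict, only the breach does.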
I’m not interested in claims that proof you spent money on security will make the ICO (a marketing man, mark you – not a lawyer) go easy on you. That’s just marketing dross to hide the underlying contradiction.
What I want to know is quite simple. How can it possibly be right to frame a law that states someone who tries to comply can fail compliance, while someone who ignores compliance can be compliant? The result is that there is no logical reason to spend money on securing personal data – just hope you don’t get hacked. This is aggravated by the common and growing perception that if you get targeted, you will get breached. So if you get targeted, you will have failed compliance whether you try to comply or not. Why bother?
Sometimes you just have to laugh for fear of crying. The Information Commissioner’s Office (ICO) strategy for 2012 makes me do just that. It is a 17-page, purple-prose, self-aggrandising Declaration of Independence, declaring itself to be independent of political, public and media pressure. It should simply say that ‘we will uphold the law in our role defined by the law.’
But it doesn’t do that. It seems more concerned to distance itself from the letter of the law by defining its own interpretation of the law, and to align itself with that interpretation. It has, in short, evolved an overblown idea of its function, which it attempts to define in this rather long and mis-titled public-relations document. I give just a few examples:
we will neither be exclusively an educator nor exclusively an enforcer. We are both, even though we prefer to deliver our desired outcomes through help and encouragement rather than force. This means we are primarily a facilitator…
In the time-honoured liberal tradition it has failed to understand that facilitation is delivered by enforcement, not enforcement delivered by facilitation.
We cannot address all risks to the upholding of information rights equally nor should we attempt to do so.
Yes it most certainly should attempt to address all risks to the upholding of information rights equally.
we will treat all cases that come to us fairly and properly but not necessarily pursue them with equal vigour.
This is perhaps one of the most worrying comments. The ICO is declaring that it will decide, arbitrarily, whether your complaint is worth its attention. Not the law, not the judiciary, not parliament, not you, but its own self will pre-judge a case and decide whether or not to pursue it with vigour.
we will devote particular effort to investigating, analysing and ultimately enforcing in those cases that we see as contributing most to the delivery of our desired outcomes and not just those presenting the biggest risk…
Not just those presenting the biggest risk. It really does say that it, the body responsible for enforcing the Data Protection Act, is not necessarily going to spend its effort on the biggest risk.
Laugh or cry? You decide.
Security and compliance go together like love and marriage – you can’t have one without the other. That is the common perception (we’re talking of course solely about the infosec aspects of compliance). But is it true? Are security and compliance synonymous? If you are secure, will you be compliant? If you are compliant, will you be secure? What, in short, is the relationship between the two?
Here’s my problem. The purpose of the infosec aspects of the Data Protection Act is to keep personal data secure. But how can you be compliant with this requirement? If you have the strongest security in the world, you still cannot guarantee that the data won’t be lost. If you have virtually no security, you might never lose the data. The only empirical test for conformance, or at least the lack of it, is whether you keep personal data safe. If you lose the data you are not compliant, regardless of your security. If you do not lose the data, you are compliant, regardless of your security.
This leads to an important question: if compliance is purely a legal requirement effectively disconnected from security, will it lead companies to concentrate on legal compliance to the detriment of true security? To help me understand, I spoke to a number of security experts.
Lars Davies, CEO at Kalypton and a one-time visiting fellow at the Centre for Commercial Law Studies, Queen Mary, University of London, is clear on the relationship. “The problem comes from the fact that compliance and security are not commutative,” he told me. “One does not necessarily infer the other. Compliance infers security. Security does not infer compliance… Compliance tells you what you need to achieve. Good security is simply one of a set of components that you need to achieve the goal.”
Infosec in this sense is a tool for compliance, not a required effect of compliance; although confusion comes from the need to use security (and therefore gain security) in order to achieve compliance.
“If you are compliant then you must be secure; your security must be fit for purpose,” continued Lars. “You simply cannot settle for the lowest common denominator and still remain vulnerable. If you are vulnerable then you cannot, by definition, be compliant.” So, “If you are compliant you must, by definition also be secure… Compliance and security are like pregnancy, you either are or you are not.”
This gives me a problem, since I believe it is impossible to be secure.
“You define security as the ability to avoid compromise,” replies Lars. “That is one definition. However, it does not say avoid compromise absolutely. It is impossible to avoid compromise if you are the subject of a targeted attack. However, you can make such attacks extremely difficult, and you can put in monitoring processes and procedures to try to detect and thus counter those attacks. That is also part of achieving security. You must continually refresh and update your security tools based on your on-going assessment of their suitability to meet your requirements. That is what you need to do as part of your efforts to achieve compliance.”
This is also the view of Edy Almer, VP Marketing and Business Development at Safend. “The reality is that to be ‘secure’ is a continuum not a discrete state. Compliance mandates acceptable risk points along that continuum. If you are compliant there is a very reasonable possibility that your risk is lower than it would otherwise be.”
David Emm, senior security researcher at Kaspersky Lab, comes to a slightly different conclusion from the same argument. “Security is a bit like housework, by which I mean it’s a process, rather than a fixed set of actions or tools implemented in an organisation. Regulations are invariably static and may not keep pace with technological developments – either positive ones or those that attackers make use of. I think there’s a parallel here with health and safety legislation. A company may be compliant for the annual inspection; but if it plays fast-and-loose with safety for the rest of the year, how ‘compliant’ is it in reality?”
Howard Sklar, senior corporate counsel at Recommind and advisor to the InfoRiskAwareness Project, takes a slightly different view. “Being ‘compliant’ doesn’t necessarily mean secure. ‘Compliance’ means ensuring that your people, process, and technology all work together to meet standards or policies. To turn compliance into security, you need to make sure that the standards you set are sufficient to keep you secure. If your policies allow for open access for everyone, including the public, then having totally insecure computers would still be compliant: you’re meeting the requirements that you set out. They’re just the wrong requirements.”
Paul Davis, Director-Europe at FireEye, has a more traditional security-centric view. “Simply put,” he says, “compliance is a necessary step towards better security; but inadequate by itself to protect against advanced malware and sophisticated cyber criminals. Compliance regulations set the minimum requirements for organisations to meet by only accounting for generally well-known cyber attack tactics and threats. We’ve all heard of the successful attacks on ‘compliant’ organisations like Epsilon marketing and even computer security companies, like RSA. Today’s advanced malware can bypass traditional and next-generation firewalls, AV, IPS, and Web gateways easily. Being ‘compliant’ does not mean the network has been ‘secured’, but rather that it has taken the first step towards protecting customer data, intellectual property, and sensitive information. Compliance is only one of the first steps towards a secure IT infrastructure.”
Mehlam Shakir, CTO at NitroSecurity, sees the danger in treating compliance as the winning line rather than just ‘one of the first steps’. “For many businesses it is a vital necessity that they are compliant with regulations such as PCI DSS, GPG13 or CoCo; but there is a rapidly emerging trend of organisations just thinking about what needs to be achieved to reach compliance – which is undermining and negating the security measures that should be in place as a first port of call. This means that more and more businesses are finding themselves at risk because basic security measures are either not in place or up-to-date.”
“Being compliant to a standard is important to having better security; however it doesn’t always guarantee that the network is secure,” agrees Alex Teh, Commercial Director, Vigil Software. “What I mean is that quite often being compliant to a particular standard like PCI DSS relates only to the part of the network that is holding credit card information and not security in general. Quite often the role of a QSA is to limit the extent of the network that needs to be PCI compliant. This often means ruling out major parts of the network to reduce cost.”
And there’s another potential by-product. Compliance requirements could persuade companies to become ‘early adopters’ of apparently relevant new technologies. “But if the organization is one of the ‘late majority’ in the technology adoption lifecycle,” explains independent governance and risk consultant Roger Southgate, “they may be significantly less vulnerable than organizations that are early adopters of new technologies, and in effect the trail blazers in identifying what security requirements are most appropriate for their risk appetite.” ‘Don’t be the guinea-pig’ has always been good business advice.
Am I any more clear about the relationship between security and compliance? No, I am not. The main issue is well described by Frank Coggrave, General Manager EMEA, Guidance Software. “Compliance is backward facing and security should be forward facing,” he explains. “Compliance is about adherence to rules that have been set in the past (by definition) that reflect the thoughts, worries and concerns that created the desire to have the rule. Although they can try to take account of future expectations they will always fail to do so, to some greater or lesser extent. If compliance was perfect, why would we have a set of financial rules called Basel III? Basel I should have been enough. Security is about responding to today’s and tomorrow’s threats and concerns. It needs to be more reactive than a compliance cycle. Compliance is important to ensure you don’t leave yourself exposed to the old stuff, but it’s no security blanket – there are too many moths active out there.”
So after all of this I can come to only one conclusion. If security and compliance are like love and marriage – we need a divorce. Ensure compliance for the sake of compliance regardless of security, and seek security for the sake of security regardless of compliance. Don’t let one influence the other and you will be more successful in both.
The Institute of Directors – talking net neutrality, compliance and breach notification with Richard Swann
Richard Swann is head of IT at the Institute of Directors (IoD), an organization that needs no introduction to anyone in the UK. To non-UK readers it is a non-party political (yet highly political) independent body formed by royal charter to foster excellence in business – and its members include 43,000 of the UK’s leading businessmen. Given the IoD’s pre-eminent position as a champion of all business and an influencer of government, I asked Richard if he would talk about some of the more contentious business/computing issues of the day. He agreed; and we started with ‘net neutrality’.
“We’re in favour of net neutrality,” he said. “We try to represent business to the government, and we have a fear that loss of net neutrality could discriminate against small businesses if the big boys are able to buy, shall we say, advantageous service.” Richard has other concerns: how, for example, would ISPs decide which traffic to restrict? In the USA, the country’s largest ISP famously, or infamously, started to restrict the bandwidth for P2P traffic. Other companies apparently also use deep packet inspection to be able to recognise and discriminate against P2P. It’s a slippery slope. “We’re talking about examining the data that you’re passing. That leads to the possibility that some entities could decide for themselves, well this looks a bit iffy, we should maybe examine things a bit closer on who is originating this traffic, and do something about it.” That’s a dangerous direction.
But if we’re talking about deep packet inspection, what, I asked him, about behavioural advertising? “That’s a difficult one to answer,” he replied. Bear in mind that the IoD has to represent the interests of all of its members – and that includes those perfectly legitimate companies that would dearly love to have access to, and would use responsibly, behavioural data in order to market their products. “Not speaking for the IoD, but from a purely personal point of view, I don’t have too many problems with behavioural analysis – provided that it is consensual; provided that the user can clearly opt in or out of the process.”
We turned to one of the issues that bothers me considerably: the increasing and arbitrary powers of the police in the UK. One current debate is happening with Nominet, the UK company that maintains the official register of .uk domain names. SOCA, the UK’s Serious Organized Crime Agency, requested that Nominet take down .uk domains on its own say-so. “The rate of change brought about by the internet has been phenomenal,” said Richard, “but at the same time it has brought about an increasing amount of criminal and fraudulent activity. We will support any effort that will provide a consistent and controlled response leading to the take down of fraudulent and criminal sites; but there has got to be judicial oversight. A police force or a police body cannot set themselves up as being the deciding factor. It’s a bit like a search warrant,” he continued. “The police cannot just turn up and search a private property without justifying themselves to the court and getting a search warrant signed by a judge. So, to me, yes, by all means if they have the evidence they should be able to get a website taken down – but there has got to be that judicial oversight.”
It is, of course, not simply the police who want a greater say in what can and cannot happen on the internet. Increasing government regulation has spawned an entire new security industry: compliance – complying with legal requirements for the use and storage of data. Is this, I asked Richard, a problem for business in the UK? “That’s a funny one, actually, because I’ve just been speaking to our policy expert who deals with this area. It’s true that we are very much in favour of cutting red tape and making life easier, especially for the SMEs; but one of the things he said to me was that compliance with things like data protection doesn’t seem to be much of an issue with our members.” This did surprise me. I had expected that, since so many of the regulations fail to define what you have to do, only what you have to achieve, this would require business to spend more time and effort than might strictly be necessary. But no. “I think the principle behind these laws, the need to protect people’s privacy, the need to prevent bribery, are so well accepted that most businesses don’t have a problem with them.”
Which just left my final question: breach notification. Take Sony, I suggested. It took the company rather a long time before it came clean about the breach. Should this be allowed? Should we have a law requiring that as soon as a breach is discovered, anyone affected must be notified immediately? “I think we should,” said Richard. “In this instance the number of people concerned is incredible; and in that week before the loss was publicised, think of the damage that could be done. The sooner people know that their financial details might have been compromised, then the sooner they can do something about it. If that was my personal details, I would want to be protecting myself as quickly as I possibly could.” And while we both believe we need a European Data Breach Notification Directive, neither of us could see why we haven’t got one. “In fact,” added Richard, “from a personal point of view you would think that it would be quite easy to implement – it would only take a relatively small addition to the existing act that requires us to protect the data in the first place.”
The Data Protection Act: the ICO demonstrates that the cost of compliance is greater than the cost of non-compliance
The Information Commissioner, Christopher Graham, is being decidedly unfair to the security industry. Consider this: fear sells. Government does it all the time. It keeps us in constant fear of terrorists, paedophiles, drug runners, gun runners, Katie Price, identity thieves and the Russian Mafia so that we will buy its lies about the need to curtail our liberty to keep us safe on the street. Security vendors do the same – they keep us in constant fear of cyber terrorists, online purveyors of child abuse, money mules, Katie Price, identity thieves and the Russian Mafia so that we will buy their products to keep ourselves safe online.
But we have to be afraid, or none of it works.
Enter the Information Commissioner. Last April he gained the power to enforce his responsibility for the Data Protection Act by levying fines of up to £500,000. What music to the ears of the security industry – something else for us to be afraid of! Another reason to buy security products; this time to help us comply with the Data Protection Act.
But what a let down Mr Graham has been!
Of the 2,565 data leaks reported to the watchdog in the past year, the ICO has only taken action in 36 cases and handed out only four fines, according to data revealed by ViaSat UK under the Freedom of Information Act.
ICO acts on only 1% of reported data breaches
I’m not sure of the maths here – 36 of 2,565 is nearer 1.4% – but never mind. The point is very clear – if you breach the Data Protection Act you are overwhelmingly likely to get away with it. So what does that do? It tells us that the cost of compliance is considerably greater than the cost of non-compliance. In other words, don’t bother about the Data Protection Act. And don’t bother buying any security products to help with compliance.
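For what it's worth, the arithmetic behind the "1%" headline works out like this (using only the figures ViaSat reported):

```python
# Figures from the ViaSat UK Freedom of Information request.
reported = 2565   # data leaks reported to the ICO in the year
actions = 36      # cases in which the ICO took action
fines = 4         # monetary penalties actually issued

action_rate = 100 * actions / reported
fine_rate = 100 * fines / reported
print(f"action rate: {action_rate:.1f}%")   # about 1.4%
print(f"fine rate:   {fine_rate:.2f}%")     # about 0.16%
```

So "1%" rounds the action rate down; the fine rate is an order of magnitude smaller still – which only strengthens the point.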
He’s so unfair!
The Bribery Act is yet another compliance requirement – sort of. If you take PCI compliance, then there are specific things you have to do. For example:
Requirement 1: Install and maintain a firewall configuration to protect cardholder data
Payment Card Industry (PCI) Data Security Standard, V2.0
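This is what makes a rule-based regime different in kind: a rule like Requirement 1 can, at least in part, be tested mechanically. The sketch below is hypothetical – the rule format, the approved-port policy and the function name are invented for illustration, not taken from the PCI DSS – but it shows the shape of a check that a principle-based law simply cannot have.

```python
# Hypothetical rule-based compliance check, in the spirit of PCI DSS
# Requirement 1 (firewall configuration). The rule format and policy
# below are illustrative inventions, not part of the standard.
ALLOWED_INBOUND = {443}  # e.g. HTTPS only into the cardholder segment

def ruleset_compliant(rules: list) -> bool:
    """Each rule is {'action': 'allow'|'deny', 'port': int|'any'}."""
    # A mechanical test: the ruleset must end with an explicit
    # deny-all, and every 'allow' must name an approved port.
    if not rules or rules[-1] != {"action": "deny", "port": "any"}:
        return False
    return all(r["port"] in ALLOWED_INBOUND
               for r in rules[:-1] if r["action"] == "allow")

good = [{"action": "allow", "port": 443}, {"action": "deny", "port": "any"}]
bad = [{"action": "allow", "port": 23}, {"action": "deny", "port": "any"}]
assert ruleset_compliant(good) and not ruleset_compliant(bad)
```

With a principle-based law there is nothing equivalent to assert against – which is exactly the complaint about the Data Protection Directive above.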
The Bribery Act is different. Section 1 makes it an offence for one individual to bribe another; but the Bribery Act 2010 also creates a new offence under Section 7:
7. Failure of commercial organisations to prevent bribery
(1) A relevant commercial organisation (“C”) is guilty of an offence under this section if a person (“A”) associated with C bribes another person intending…
In effect, it says what you mustn’t do (be corrupt) without ever defining what it is (corruption) that you mustn’t do (at which point, M’lud, I refer you to The Weasel Words Principle that underpins British law-making). A quick example: what on earth is ‘facilitation’? It’s not defined. But it is recognised as potentially a necessary evil. You could even make the case for suggesting that facilitation that lands an important contract that will benefit the UK is acceptable and legal; while facilitation that invokes international condemnation is bribery and therefore illegal.
Back to the point. Effectively, in order to comply with the Bribery Act, you have to be able to prove you didn’t do what isn’t clear; which is a bit like proving a negative (some experts say you can do it, while others say you can’t do it; but either way ‘you can’t prove a negative’ is a negative…).
So what can you do to comply with this negative requirement? Luckily there is some help in Ken Clarke’s Guidance Notes issued last week. Frank Coggrave of Guidance Software points to Paragraph 12 as being of particular relevance:
The application of bribery prevention procedures by commercial organisations is of significant interest to those investigating bribery and is relevant if an organisation wishes to report an incident of bribery to the prosecution authorities – for example to the Serious Fraud Office (SFO) which operates a policy in England and Wales and Northern Ireland of co-operation with commercial organisations that self-refer incidents of bribery (see ‘Approach of the SFO to dealing with overseas corruption’ on the SFO website). The commercial organisation’s willingness to co-operate with an investigation under the Bribery Act and to make a full disclosure will also be taken into account in any decision as to whether it is appropriate to commence criminal proceedings.
Guidance about procedures which relevant commercial organisations can put into place to prevent persons associated with them from bribing
In other words, compliance with the Bribery Act requires a two-pronged approach: you make visible efforts to ensure that the stable-door remains tightly closed; but if you suddenly discover that the horse has already bolted, you grass it up and hang it out to dry.
Compliance with the Bribery Act in effect revolves around how you tackle this two-pronged approach. The first part will involve establishing policies and procedures designed to prevent the possibility of giving or receiving bribes (that is, closing the stable door). “The sort of things companies can do,” says Mark Burgess of Blackhawk Investigations, “include delivering relevant training to their staff, and developing a clear communication strategy that this sort of behaviour is culturally unacceptable through both the induction process and ongoing anti-fraud training and initiatives.” You could say that Bribery Act compliance must not merely be done, it must be seen to be done.
But if all of this fails (that is, the horse has already bolted) and you have a rogue employee who either gives or receives bribes, then you need to be able to proactively discover this rogue and turn him in to the authorities (grass him up and hang him out to dry) in order to avoid corporate complicity. And that will probably involve the use of eDiscovery and forensic software such as Guidance Software’s EnCase. “Our software,” says Frank Coggrave, “is used for many different compliance requirements, freedom of information requests, and litigation issues. But you can repurpose the software to do regular scans of your environment designed to turn up malpractice [and rogues]. That would be a very positive statement in your favour in case someone, possibly a competitor, suddenly found that you had made excessive facilitation payments. If you are taking proactive measures both in training and employing things like eDiscovery software to make those sweeps, you will be less likely to be prosecuted.”
A conversation with Ed Macnair, chief executive officer of Overtis
“The challenge for companies moving into the cloud,” says Ed Macnair, CEO of Overtis, “is that the traditional IT model gets turned upside down and inside out. We’re outsourcing data; but we’re also outsourcing responsibility.” That gives us two problems: we lose control of our data; and we lose control of who can access it.
The first is because we no longer know where that data resides. “Most of the big SaaS players are American – so if we use them we’ve got the whole EU Data Protection thing to worry about.” The problem with the cloud is that the more we use it and the more we maximise the value we get from it, the more we abdicate control. And without that control we can be neither secure nor compliant.
The second problem is exacerbated by the new wave of consumerisation within computing. “The security officers I speak to,” says Ed, “are having kittens because employees are demanding, not asking but demanding, to be able to use their own devices. And it’s being allowed – which makes good business sense in a lot of areas. But we now have this plethora of different devices – iPhones and iPads, Androids, netbooks, Mobile 7 and more – all accessing our corporate data from we don’t know where. And how do we control that? And how do we know who’s holding those different devices?”
The traditional silo model for security, says Ed, has failed. “The silo model is all about point products. We can have email security products and web security products, and a firewall and our intrusion detection and prevention systems; and they can all only look at their own specific area. But they don’t understand the user. SIM and SEM try to paper over the cracks but still don’t provide end-to-end visibility of what the user is doing.” If silo doesn’t work in traditional computing, how on earth will it work in cloud computing?
Ed’s new product (VigilancePro Web Application Manager) takes a fresh approach. To get security in this new world, he says, “we have to invest security into the browser.” It’s the only common point across all the different access devices and all the different data locations. “And that’s what we’re doing,” he says. “VigilancePro is a secure browser plug-in currently available for Internet Explorer and Firefox (with Safari to follow soon).”
The basic premise of this new product is that the user doesn’t know and doesn’t need to know his or her own secure log-in credentials – it’s all managed by the plug-in. “A new user coming into the organisation gets sent a link to a site from where he or she downloads and installs the browser plug-in. That browser plug-in has the user credentials and the user permissions that control which applications can be used and what can be done with them.” Complete security provisioning in a single step. “By logging into the browser plug-in, the plug-in automatically logs the user into all the web applications he or she is entitled to use. It doesn’t bypass any strong two factor authentication, it simply acts as a secure single sign-on to all the web applications that the user is entitled to use. And the plug-in has to be present before the user can access those applications.”
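The mechanism Ed describes – authenticate once to the plug-in, which then holds per-application credentials the user never sees – can be sketched roughly as below. To be clear, the class, names and structure here are my own invention for illustration; they are not taken from VigilancePro.

```python
# Hypothetical sketch of the single sign-on idea: the plug-in holds a
# vault of per-application credentials that are opaque to the user.
# All names here are invented for illustration, not VigilancePro APIs.
class SsoPlugin:
    def __init__(self, user_id: str, entitlements: dict):
        self.user_id = user_id
        # app name -> stored credential; the user never sees these.
        self._vault = dict(entitlements)

    def login(self, app: str) -> str:
        # No plug-in entitlement, no access: the credential is only
        # released (and injected into the app's login) if it exists.
        if app not in self._vault:
            raise PermissionError(f"{self.user_id} not entitled to {app}")
        return self._vault[app]

    def deprovision(self) -> None:
        # Central revocation (e.g. triggered from Active Directory):
        # emptying the vault removes all application access at once.
        self._vault.clear()

plugin = SsoPlugin("alice", {"crm": "s3cret-token"})
assert plugin.login("crm") == "s3cret-token"
plugin.deprovision()  # leaver: every web application is cut off
```

The `deprovision` method mirrors the Active Directory revocation described below: because access flows only through the vault, clearing it centrally cuts off everything in one step.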
Needless to say, de-provisioning is just as easy. “Revocation is done centrally,” says Ed. “If someone leaves the company, a link to Active Directory decommissions the plug-in; and the user loses all access to the restricted areas.”
But useful as this is, it would be very wrong to think of VigilancePro as just a single sign-on system. Since it lies at the heart of the browser, it can provide tight control over what can be done via that browser; and detailed reporting on what has been done. “By implementing this as a browser plug-in, we not only get web single sign-on, but we get really granular management of all interactivity between the user and the web application – a full audit trail as to which page the user went to, and what he or she did on that page. But we also have the ability to block and actually prevent certain actions. We can control access to any tab, any URL or any view on a web page; and we can control the use of any HTML component. We can control any of the browser menu options – such as export, print, copy, cut, save as, and so on; and we have the ability to mask any regulated or sensitive data. We think this is a complete game-changer. So far we’ve been trying to manage identity in the cloud – but now we can manage user activity in the cloud.”
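The granular control Ed describes amounts to a policy lookup: given a URL and an attempted action, the plug-in either allows it or blocks it (and records it for the audit trail). A minimal sketch of that deny-by-default logic – the names and the policy structure here are hypothetical illustrations, not VigilancePro’s actual design – might look like this:

```python
# Hypothetical sketch of per-URL action policy, in the spirit of the
# browser plug-in controls described above. Deny-by-default: anything
# not explicitly permitted for a matching URL pattern is blocked.
from dataclasses import dataclass, field
from fnmatch import fnmatch


@dataclass
class Policy:
    # Maps a URL glob pattern to the set of actions permitted there.
    rules: dict = field(default_factory=dict)

    def is_allowed(self, url: str, action: str) -> bool:
        for pattern, actions in self.rules.items():
            if fnmatch(url, pattern):
                return action in actions
        return False  # no matching rule: block (and log for audit)


# Example: this user may view, copy and print on the CRM,
# but may only view pages on the payroll application.
policy = Policy(rules={
    "https://crm.example.com/*": {"view", "copy", "print"},
    "https://payroll.example.com/*": {"view"},
})

print(policy.is_allowed("https://crm.example.com/contacts", "copy"))      # True
print(policy.is_allowed("https://payroll.example.com/reports", "print"))  # False
```

The deny-by-default choice matters: a plug-in that fails open would quietly grant access whenever a rule is missing, which defeats the purpose of putting the control point in the browser.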
The problem with the cloud is that you cannot secure your data because you don’t know where it is. Nor can you secure the users because you don’t know who they are. But you can secure the channel used by the users to get to the data; and that channel invariably goes through the browser. Control the browser and you can control the user. Control the user, and it doesn’t matter where the data resides. In short, by controlling the browser you can get both security and compliance in the cloud. Almost all public cloud computing is done via the browser. Add security to the browser and you secure almost all public cloud computing.
In April of this year Kroll Fraud Solutions released a report showing that the healthcare industry tends to be more reactive than proactive when it comes to data security. This, frankly, is not good enough. The move to electronic health records (EHRs) through HITECH (the Health Information Technology for Economic and Clinical Health Act) is gathering pace; and at the same time the associated data protection regulations are becoming more stringent. In particular, a more rigid mandatory breach notification scheme imposes a naming and shaming regime wherever companies regulated by HITECH ‘lose’ personal health information; and it’s an industry where the ‘shaming’ aspect could be catastrophic.
Far better to abandon the old reactive stance and become proactive: both to comply with the regulations and to avoid the breach notification requirements by avoiding a breach. Brian Lapidus, the chief operating officer for Kroll’s Fraud Solutions division, offers five security tips to this end. If you are defined as a covered entity under HITECH:
Protect outsourced data. You must know exactly where and how your data is stored with all of your third-party vendors; because even if it is they that suffer the breach, it is you who must notify the individuals and the appropriate federal entities.
Make sure all portable media devices are fully encrypted. The bottom line is that encrypted data cannot be ‘lost’ as far as the Act is concerned.
Train your staff. “Employee training,” says Lapidus, “is the most important thing an organization can do to assure that its privacy and security policies are correctly implemented. The most successful organizations make training part of the culture as compared to those organizations who limit training to reviewing a manual and signing an agreement.”
Plan for an event, and then test your plan. The HITECH Act specifies that notification must occur without unreasonable delay and in no case later than 60 calendar days after discovery of the breach. “Let’s face it,” says Lapidus, “from the moment you uncover a breach, every second counts. That’s why all healthcare organizations are under pressure to develop and implement a breach preparedness and actionable incident response plan.”
Understand the complexity of breach response and notification requirements. Even though the new HITECH requirements are federal, your organization will still be required to comply with state laws that govern the breach of PII and PHI. Depending upon the number of affected individuals, among other variables, your notification requirements under HITECH (and other applicable state laws) could include notifying Department of Health and Human Services (HHS), Centers for Medicare and Medicaid Services (CMS), local media, state attorneys general offices, as well as affected businesses. Missing deadlines could result in hefty penalties or fines.
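The 60-calendar-day ceiling in the fourth tip is concrete enough to encode directly, which makes it a natural candidate for an automated check inside an incident response plan. A minimal sketch – the function name is mine, not drawn from HITECH or Kroll’s material, and real plans must also track the stricter state-law deadlines mentioned above:

```python
# Sketch of the federal HITECH notification deadline: "without
# unreasonable delay and in no case later than 60 calendar days
# after discovery of the breach".
from datetime import date, timedelta

HITECH_NOTIFICATION_WINDOW_DAYS = 60


def notification_deadline(discovery: date) -> date:
    """Latest permissible notification date under the federal 60-day cap.

    State laws may impose shorter windows; a real plan should take
    the strictest applicable deadline, not just the federal one.
    """
    return discovery + timedelta(days=HITECH_NOTIFICATION_WINDOW_DAYS)


deadline = notification_deadline(date(2010, 6, 1))
print(deadline)  # 2010-07-31
```

Note that the statute counts calendar days, not business days, so a naive working-day calculation would overshoot the legal deadline.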