Archive

Archive for January, 2010

Another day, another (terrorist) threat, another (freedom) loss

January 29, 2010 1 comment

Now I’m no cynic, so I take everything that any government, especially the UK and US governments, tell me at face value.

On 22 January, Home Secretary Alan Johnson warned us that “The Joint Terrorism Analysis Centre has today raised the threat to the UK from international terrorism from SUBSTANTIAL to SEVERE. This means that a terrorist attack is highly likely.” He also pointed out that “In his statement to parliament on security and counter terrorism earlier this week, the Prime Minister said that the first and most important duty of government is the protection and security of the British people.”

Oh dear. I and the rest of the British people must expect another terrorist attack imminently. That’s what he said. But that nice Mr Brown also said that the government is duty bound to do everything to protect me. I must accept their help and help them to help me.

Yesterday, I also noticed a separate and obviously unrelated report in the Register:

Exclusive The Home Office has created a new unit to oversee a massive increase in surveillance of the internet, The Register has learned…

“More recently, [a Home Office spokesman said] we have been considering how, in a changing communications environment, lawful acquisition of communications data and interception of communications can continue to save lives, to counter terrorism, to detect crime and prosecute offenders, and to protect the public.”

Officials envisage communications providers will maintain giant databases of everything their customers do online, including email, social networking, web browsing and making VoIP calls. They want providers to process the mass of data to link it to individuals, to make it easier for authorities to access.

Oh dear, oh dear. Here, despite it being International Data Privacy Day (also yesterday) I can see that I shall have to abandon any outrage against the creeping tyranny and galloping authoritarianism of Comrade Brown, MI5, their NSA paymasters, the Grand Committee of the New World Order, and all the other bastards and accept the total loss of all my data privacy in order to protect myself from this new SEVERE terrorism threat.

After all, this government has never, ever, ever lied to me before. Has it?

Jobs’ megalomania: the fatal flaw of a tragic hero

January 28, 2010 5 comments

The big news of the week is the iPad, Apple’s new tablet computer. I have to say something, so here it is: I’m worried. I’m as impressed as ever with Apple design and innovation. I had the first publicly owned Mac+ in the UK (I was reviewing it for the FT, liked it so much that I made Apple an offer and refused to give it back). But the new Apple is now doing the same as the old Apple.

Back then, Apple was on the verge of dominating the world’s desktops. Then they took a wrong turn: they kept the Apple software proprietary to the Apple hardware and tried to own the applications through associated companies like Claris. Meantime, Microsoft was almost giving away its own software, probably with the intent to kill off Digital Research and the CP/M family but with the added bonus of halting Apple in its tracks. Apple went into hibernation; Steve Jobs left, and as far as I could tell, the company very nearly didn’t survive.

Meanwhile, I switched from the Mac to Windows. I still preferred the Mac, but as a freelance journalist I had to follow the market.

Then Jobs came back. The iMac and the MacBook followed. It looked as if Apple had learnt its lesson: it retained its innovative design but became more open. My first misgiving came when Apple tried to prosecute bloggers – that’s not the old Apple that I knew and loved. But the adoption of Unix as the basis of OS X seemed to confirm the new openness. But look closer and look elsewhere. Jobs’ inherent desire to own the world is showing through again. Look at how Apple controls music through the iPod. Consider how closed the iPhone is, and how difficult and restrictive it is for new developers. If you don’t get approved and into the App Store, you don’t get onto the iPhone. And now the iPad is just as restrictive, if not more so.

The OS is still closed – if your app isn’t approved by Apple, you can’t get it on the tablet; and you can only get your books from the iBook Store. And Jobs has closed the usual back door – this thing has no standard ports (just like the iPhone). It will only connect to other Mac products.

Jobs has done it again. Closed shop. Apple will own the world. Only it won’t. He tried before and he failed. And, eventually, he will fail again. Take the book app on the iPad. If you decide the iPad is a bit bulky to read on the train, and think a Kindle would be better, guess what: you can’t copy the electronic book you bought from one to the other. Eventually, there will be open alternatives to all the Apple products – and when that happens the current Apple devotees, me included, will desert in droves, feeling betrayed by Jobs’ megalomania.

Jobs’ megalomania is the hero’s fatal flaw. As all students of literature know, the fatal flaw is the cause of the tragedy; and it is Apple and all Apple lovers that will suffer.

Categories: All, General Rants

They know who you are: browser fingerprinting

January 28, 2010 Leave a comment

I have long been a fan of the Electronic Frontier Foundation (EFF). This is an example of why they have full Hero status with me…

Most of us are concerned about the amount of information that marketing companies are collecting on each of us. We assume that the main methods are the use of tracking cookies and the even more intrusive use of deep packet inspection interception/surveillance (DPI). We circumvent the former by turning off cookies – or at least removing them when we close the browser. And although the UK Home Office controversially advised Phorm that DPI was probably legal, most legal experts and the EU itself seem to think it is not. Game won? Oh, no. Why on earth should we assume that?

It should come as no surprise, then, to discover that the marketers are looking at other ways they can get what they want: information about us. One approach is browser fingerprinting: the analysis of the data that your browser leaks when it visits a website. It turns out that when this information is captured and analysed, it forms a fingerprint of your browser. Needless to say, the more information gathered, the closer your fingerprint comes to being unique.
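
To make this concrete, here is a toy sketch of how a tracker could reduce a handful of leaked browser attributes to a single identifier. The attribute names and values are invented for illustration; real fingerprinting gathers far more (fonts, plugins, screen details, timezone and so on).

```python
import hashlib

# Illustrative attributes a site can read without any cookies at all.
# These values are made up; a real tracker collects many more.
attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
    "accept_language": "en-GB,en;q=0.9",
    "screen": "1920x1080x24",
    "timezone_offset": "0",
    "installed_fonts": "Arial,Calibri,Consolas,Georgia,Verdana",
}

def fingerprint(attrs: dict) -> str:
    """Concatenate the attributes in a fixed order and hash the result.

    The same browser configuration always produces the same identifier,
    which is exactly what makes it usable for tracking without cookies.
    """
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(fingerprint(attributes))
```

The point is not the hash itself but its stability: clear your cookies as often as you like, and this identifier does not change.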

EFF has started an experiment, called Panopticlick, that will help it to “evaluate the capabilities of Internet tracking and advertising companies, who are in the business of finding as many ways as possible to record your online activities. They develop these methods in secret, and don’t always tell the world what they’ve found. But this experiment should give us more insight into the future of online tracking, and what web users can do to protect themselves.”

Put very briefly, Panopticlick will, with your approval and anonymously, analyse the information given out by your browser and tell you how common your browser fingerprint probably is. If it is unique, of course, they (the marketing companies) have got you. Rather than them giving you a tracking cookie to follow you around the Internet, you are presenting them with your own unique business card every time you visit one of their ‘participating websites’ (inevitably all adult and betting sites, and almost certainly all big commercial organizations).

At this point I would not be surprised if you were to shrug your shoulders and say, impossible – they can’t possibly tell who I am just from me anonymously visiting a website. If you think that, please read EFF’s A Primer on Information Theory and Privacy. It explains the information theory behind the process in terms that even the most unscientific of us, including me, can understand.
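
The arithmetic behind the primer is simple enough to sketch. A fact shared by one person in N carries log2(N) bits of identifying information, and independent facts add; around 33 bits are enough to single out one person among the roughly 6.6 billion alive at the time. The numbers below are illustrative probabilities, not EFF’s measured figures.

```python
import math

world_population = 6_600_000_000  # roughly the 2010 figure

# Bits needed to single out one person on the planet: log2(6.6bn) ~ 32.6
print(math.log2(world_population))

def surprisal(probability: float) -> float:
    """Information content, in bits, of learning a fact with this probability."""
    return -math.log2(probability)

# A browser configuration shared by 1 in 10,000 visitors leaks ~13.3 bits
print(round(surprisal(1 / 10_000), 1))

# Independent observations add their bits together (illustrative odds)
bits = surprisal(1 / 10_000) + surprisal(1 / 20) + surprisal(1 / 50)
print(round(bits, 1))
```

Three ordinary-looking facts, and the tracker is already most of the way to 33 bits.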

Then, if any of this has got you concerned, or even just mildly interested, please go to Panopticlick itself. Take the test. I did. Now I think I’m pretty careful. I clear out my cookies every time I close the browser, and I have NoScript installed. Nevertheless, in a test database of about 10,000 users, I am as near as dammit unique. That’s worrying. It means that every time I visit a participating website they almost certainly know it’s me. It’s worrying to me as a consumer to think that marketing companies are gathering data on me; and it’s worrying to me as a citizen. If commerce can do it, you can be damn certain that Big Brother governments (not necessarily but including your own) are doing exactly the same thing.

New, radical, cutting-edge, innovative Government IT policy

January 27, 2010 1 comment

In one sentence Cabinet Office Minister Angela Smith fills me with dread: “Our new ICT Strategy is smarter, cheaper and greener and will save the public purse £3.2 billion annually,” she announced today. Government has never done anything with IT projects other than overspend on them, under-use them, fail to complete them and finally cancel them. But this is worse, and has the potential to throw government computing into chaos for years to come: the “I’m in charge and I know best so you just do what I tell you and things will be better because I say so and you know it makes sense” mentality of core socialist philosophy lives on.

Implementing a common desktop strategy.
A new set of common designs for desktop computers across the public sector. Historically each organisation has separately specified, built and designed its desktop computers. Creating one set of designs will lead to savings of £400 million per year.

That’s going to be well received in the small offices off the third corridor in rural town halls a hundred miles from Whitehall!

Creating a Government Applications store.
The Application Store will be a marketplace for sharing and reusing online computer programmes (like standard Office applications such as word processing and email) on a pay by use basis. It will speed up procurement and deliver savings of approximately £500 million per year.

More centralization. So what’s going on? John Suffolk, Government Chief Information Officer, explained: “We have seen a period of significant change over recent months and years. Technology has changed, the economy has changed and ICT in government must also change. This strategy sets out a new model for Government ICT which will deliver a secure and resilient ICT infrastructure that will enable faster, better services for the public.” Usual twaddle, a lot of noise, many promises and little information.

It’s actually very simple: the new policy is to move government computing into the Cloud; not the open Cloud but a closed and private Cloud:

Key measures of the Public Sector ICT Strategy include:
Establishing a Government Cloud or ‘G-Cloud’. The government cloud infrastructure will enable public sector bodies to select and host ICT services from one secure shared network.

There’s that centralized control thing again: one secure shared network. Will it work? I doubt it. Will it be secure? I doubt it. Will it save money? I doubt it.

There is one slightly hopeful comment. At the end of the announcement is the statement that “A revised version of the Open source, Open standards, Reuse policy has also been published.” And that report says “Often, Open Source is best – in our web services, in the NHS and in other vital public services. But we need to increase the pace and drive the principles of Open Source Open Standards and Reuse through all ICT enabled public services.” Sadly, ‘open source’ is mentioned nowhere else in today’s announcement; and the ethos of open source runs contrary to the ethos of centralized control.

So what can we make of this new shift in Government IT policy? Not much, I think. It is all sound and waffle signifying nothing. The main point of this announcement is to tell the electorate, just prior to an election, that this government will save £3.2 billion every year. It has more to do with the election than anything else.

Categories: All, Security News

Digital Economy Bill & Mandelson OMG (his next title)

January 26, 2010 Leave a comment

Mandelson! Can you believe it? We have an unelected puppet master running the country by pulling the strings of Gordon Puppet Brown, and jerking the strings of the rest of us, not merely writing a law that will control us for the rest of our lives, but writing into that law the ability to change it whenever he sees fit without any restraint from anyone. Not even from the elected Members of the Parliament that we choose to make and define the laws of this country. And not one of us has voted him the right even to enter Parliament, never mind run it. Where on earth does he get off? Where in the mentality of someone who claims to uphold democracy does he find the crass arrogance to believe he has the right to have any say in the laws of this country? Have we really descended into the realms of a banana republic dictatorship? The answer is yes: “In modern usage, the term ‘dictator’ is generally used to describe a leader who holds and/or abuses an extraordinary amount of personal power, especially the power to make laws without effective restraint by a legislative assembly.” (Wikipedia)

If anyone doesn’t know what I’m talking about, it is the UK Digital Economy Bill proposed by Mandelson. The next debate, scheduled for later today, includes a new clause requiring that the details of representations made to Mandelson to change the law be kept secret. So, if Hollywood wants Mandelson to increase penalties, make it easier to throttle the Internet, lock up copyright infringers under anti-terrorist laws or whatever – and he does so – he is not allowed to tell anyone why. Too much, do I hear you cry? Well, yes, even Mandelson agrees, because he has added another clause that insists that before he changes the law (without reference to Parliament, remember) he must consult with people of his own choosing. That’s alright then.

For pity’s sake – what has this country come to?

Categories: All, General Rants

Cloud: What Information Security has been waiting for?

January 25, 2010 Leave a comment

The advantages of cloud computing are clear: low running costs, little capital expenditure, no capacity issues, fewer depreciating assets and much more. The question, then, is why aren’t we all leaping in? The answer is simple: pure FUD.

    Doubt

because there have been so many computing panaceas over the years that have delivered little and cost much.

    Uncertainty

over what the cloud really is: is it cluster computing, grid computing, virtualisation, SaaS, PaaS, IaaS, public, private, community, hybrid, all or none of the above? And

    Fear

fear of the unknown, and especially fear over security issues.

According to IDC research published in August 2008 (IDC Enterprise Panel, n=244) security is ranked as the major challenge for cloud computing. Little has changed since then. In a more recent Osterman Research survey commissioned by Proofpoint to examine professional attitudes towards the cloud (August 2009), fifty percent of the IT professionals surveyed believe that sensitive data held in the cloud is inevitably at a higher risk of compromise, or likely to be in violation of government data protection laws, than that same data held on their own servers. “Because the public cloud is outside the firewall, there are concerns over security, data access, and privacy for enterprise customers. Public clouds also find it difficult to meet auditing, regulatory, and compliance requirements.” (Platform Computing, Enterprise Cloud Computing: Transforming IT, July 2009)

This article was first published by, and is reprinted here with kind permission of, Raconteur (Raconteur on Cloud Computing, the Times, 2 December 2009). For more information on special reports in The Times Newspaper, call Dominic Rodgers on +44 207 033 2106.

Security is clearly a genuine concern. But is it a valid concern, or more the result of that FUD that sticks to any new technology, never mind the huge paradigm shift that is a move from the local computer room to the remote and nebulous cloud? And yet most of us are already users of cloud computing. We happily use Googlemail or Hotmail to handle our email. We use Facebook to maintain links with our friends, and we use LinkedIn to manage our business contacts. In each of these cases we have moved a personal computing requirement onto the internet (that is, into the cloud) in order to leverage the power of the internet, to reduce our costs, and to improve the performance of the function. Security is, as it should always be, an issue. We know, for example, that Facebook is visible to the world, so we take care over what information we put on it. We know that email providers scan our emails; we know that our emails are stored on their servers – and yet we choose to trust them. If the email is particularly sensitive, we have the option of encrypting it. In both cases we weigh the risk with the threat and take the appropriate action.

The question is, if we move more and more of our business computing requirements onto the internet, can we weigh the risk with the threat and come up with a suitable response? To answer this, we need first to examine the threats. This is simple. They are the same threats we already face: data loss, insider threats, compliance requirements around data protection and privacy, hackers and so on.

Let’s consider the hacker threat first. In July 2009 there was a widely reported password hack into Twitter – or more specifically, the Googlemail account of one of its senior executives. This allowed the hacker access to sensitive messages and all of the executive’s Google Apps. Both Google and Twitter can be considered cloud applications, so the question is ‘does this prove the insecurity of the cloud?’


The solution is not to abandon the cloud, but to improve your password security.


It does not. It was a simple password breach that could have happened anywhere. If you have weak password security, you’ll get hacked, whether it’s on your desktop, in the computer room, or in the cloud. The solution is not to abandon the cloud, but to improve your password security. And the same basic principle applies to all security threats in the cloud.

As it happens, Gary Wood (research consultant at the Information Security Forum and co-author of a new ISF report called Security Implications of Cloud Computing) believes that direct password cracking is rarer than most people think. “If the hacker doesn’t have access to the back-end store or personal knowledge of the target, and provided it’s been set up to block access after three failed attempts, it’s near impossible to break into an account.” ‘Password hack’, he believes, is a term commonly used for a wide variety of password breaches; not the least being password-stealing malware dropped onto the desktop after a visit to a poisoned website. This, however, is a problem that cloud computing can help mitigate, and one we’ll come back to later.
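
For illustration only, here is a minimal Python sketch of the lockout behaviour Wood describes. The class, names and three-strike threshold are my own toy construction, not any real system; production services add delays, alerting and out-of-band unlock procedures.

```python
from collections import defaultdict

MAX_ATTEMPTS = 3  # block the account after three consecutive failures

class LockoutGuard:
    """Toy three-strikes lockout: after the limit, even the correct
    password is refused until the account is unlocked out of band."""

    def __init__(self):
        self.failures = defaultdict(int)
        self.locked = set()

    def attempt(self, user: str, password: str, real_password: str) -> str:
        if user in self.locked:
            return "locked"
        if password == real_password:
            self.failures[user] = 0  # a success resets the counter
            return "ok"
        self.failures[user] += 1
        if self.failures[user] >= MAX_ATTEMPTS:
            self.locked.add(user)
        return "denied"

guard = LockoutGuard()
for guess in ("letmein", "password", "123456", "correct-horse"):
    print(guard.attempt("alice", guess, "correct-horse"))
```

Note the fourth attempt: the guess is right, but the account is already locked, which is precisely why online guessing gets an attacker almost nowhere.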

Gary Wood, research consultant at the Information Security Forum

What about the insider threat? Clearly, if you don’t have control over who is employed, you have no real control over this threat. But while a valid concern, this doesn’t really reflect the true situation. Do you, for example, have full control over your existing employees? The answer can only be as good as the care and concern with which you vet your staff and the quality of your security policy. And if you move to the cloud, you effectively outsource that care and concern to the cloud provider.

It is almost invariably true that this cloud provider will have greater security expertise than your own organization. If you are a large corporation, this is still probably true. If you are an SME or start-up, this is most definitely true. The secret to security in the cloud therefore starts with your relationship with the cloud provider. You need to leverage its expertise to your advantage. The starting point is due diligence. You need to make sure that your preferred provider has all the ability and expertise to provide you with the trust you need for a secure operation. And then you need to cement that into a clear service level agreement. “The cloud is not a one-menu shop,” says Gary Wood. “If the menu isn’t what you want, you can go to a different restaurant. If you’re big enough, you could even persuade them to change the menu. And if you’re a small company and can’t change the menu it will still be better than the one you can get at home.”

Eric Baize, Senior Director, Secure Infrastructure Group, EMC Corporation

Having chosen a provider that will suit, you should next examine the potential for data loss, and any difficulties in legal compliance. Are these problems or opportunities? Eric Baize (Senior Director, Secure Infrastructure Group, EMC Corporation) has no doubts. He is excited by the cloud. “It is an opportunity to build security into the framework, rather than bolt it on from the outside as we have had to do in the past.” He uses content-aware storage as supplied by EMC’s Atmos, as an example. Many people fear that local data protection laws will be difficult to obey in the cloud. Baize believes that it is an opportunity to build security consciousness into systems at the data rather than hardware level. With Atmos, different rules can be applied to different categories of data. Specific rules could be applied, for example, to personally identifiable information. EU data could be forced to reside on virtual servers that are physically located within the EU to conform with EU regulations. Indeed, the whole concept of data loss prevention and compliance can be designed into the structure of the system at the data level.
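
As a rough illustration of the idea, classification-driven placement might look like the sketch below. The rule names, region labels and function are invented for the example; they are not Atmos’s actual interface.

```python
# Hypothetical content-aware placement: route data to storage regions based
# on its classification, not on which server happens to have space free.
PLACEMENT_RULES = {
    "eu_personal_data": {"regions": ["eu-west", "eu-central"], "replicas": 2},
    "public":           {"regions": ["us-east", "eu-west", "ap-south"], "replicas": 1},
}

def place(classification: str) -> dict:
    """Look up where data of this classification is allowed to live."""
    try:
        return PLACEMENT_RULES[classification]
    except KeyError:
        raise ValueError(f"no placement rule for {classification!r}")

rule = place("eu_personal_data")
# The compliance property holds at the data level: EU personal data can
# only ever land on EU-resident (virtual) servers.
assert all(region.startswith("eu-") for region in rule["regions"])
print(rule)
```

The attraction is that the regulation is expressed once, in the rules, rather than re-implemented by every application that touches the data.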

Peter Shillito, Lead Security Architect for cloud services provider Fujitsu, takes a similar view. “For years we have talked about ‘de-perimeterizing’ security. The cloud gives us the opportunity. Consider security information and event management (SIEM). Traditionally, it is all about gathering and interpreting data triggered by security devices such as firewalls.”

Traditionally, firewalls have been situated between our own servers in our own data centres and the internet on the outside. Really, they have been protecting routes into the room rather than the data, leaving the data itself exposed to anyone who finds a different way into the room. But the physical location of the data held within the cloud is not so easy to specify. While firewalls remain important, the development of cloud computing offers the opportunity for us to consider protecting the data rather than just its location.

Peter Shillito, Lead Security Architect for cloud services provider Fujitsu

“SIEM principles,” says Shillito, “can be expanded to manage data behaviour as well as security incidents. For example, if we look at content-aware storage we can specify where certain data should reside. But with cloud-based SIEM, we can also monitor who is looking at that data, and from where. The SIEM could traditionally warn us of an external hack attempt; but we can now develop a system that tells us if somebody inappropriate is accessing the data even if they are not triggering a traditional security event. By moving our storage into the cloud, we are forced to look at the data itself rather than its physical location, and this in turn gives us the opportunity to design security from the base up.” In other words, cloud development gives us the opportunity to monitor events at the data level rather than the perimeter level, and this will provide greater flexibility in our security options.
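
A toy sketch of that data-level idea: an access can be a perfectly valid login and still deserve an alert, because of which data it touches and from where. Every field, name and rule below is invented for illustration.

```python
# Hypothetical data-level monitoring rules: sensitive stores, their normal
# readers, and the address range we expect reads to come from.
SENSITIVE = {"payroll.db", "customers.db"}
ALLOWED_READERS = {"payroll.db": {"hr_app"}, "customers.db": {"crm_app"}}

def review(event: dict) -> str:
    """Return 'alert' for odd access to sensitive data, else 'ok'."""
    if event["resource"] not in SENSITIVE:
        return "ok"
    if event["principal"] not in ALLOWED_READERS[event["resource"]]:
        return "alert"  # authenticated, but not a normal reader of this data
    if not event["source_ip"].startswith("10."):
        return "alert"  # sensitive data read from outside the corporate range
    return "ok"

# A normal read, then the same data touched by the wrong application
print(review({"resource": "payroll.db", "principal": "hr_app",   "source_ip": "10.0.4.7"}))
print(review({"resource": "payroll.db", "principal": "mail_app", "source_ip": "10.0.4.7"}))
```

Neither event would trip a perimeter firewall; only rules attached to the data itself can tell the two apart.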

But there is an even easier way to ensure compliance within the cloud: encryption. Encryption is, of course, already available; but it is so rarely used. Time and again we hear of new cases where even government loses unencrypted data. Perhaps the problem is that encryption is too obvious; it has become familiar and we treat it with contempt. But in the cloud, encryption is not merely obvious, it is essential. “In [the de-perimeterized cloud] world, there is only one way to secure the computing resources: strong encryption and scalable key management… Safe harbor provisions in laws and regulations consider lost encrypted data as not lost at all.” (Security Guidance for Critical Areas of Focus in Cloud Computing, Cloud Security Alliance, 2009) Compliance becomes so much easier if you cannot lose the data!
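
A toy demonstration of that “lost encrypted data is not lost” idea: the cloud holds only ciphertext, the key stays with you, and a leaked copy reveals nothing but its length. This uses a one-time pad purely for brevity and self-containment; real deployments use AES with proper key management, not this.

```python
import secrets

plaintext = b"sensitive record: account 0000, balance unknown"

# The key never leaves your side; only the ciphertext goes to the cloud.
key = secrets.token_bytes(len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))

# What the cloud (or a thief) holds is indistinguishable from noise.
assert ciphertext != plaintext

# With the key, recovery is exact.
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == plaintext
print(recovered.decode())
```

Lose the laptop, lose the backup tape, lose the cloud account: so long as the key is safe, what was lost was noise.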

In our earlier example of the threat to data via password hacks, we said we’d come back to the threat from the desktop. The desktop is a bigger nightmare for the majority of system administrators than is the hacker, because it is usually the desktop (or more specifically, the behaviour of the desktop user – that is, you and me) that lets the hacker in. We break the rules. We don’t patch our PCs. We go where we shouldn’t go, and we do what we shouldn’t do. In short, we catch infections and then pass them on to the body corporate.


Hosted virtual desktops put full control in the hands of the IT administrator.


Once again, a move to cloud computing can help. “A hosted virtual desktop environment enabled by platforms such as VMware View,” explains Eric Baize, “separates the corporate desktop from the underlying hardware giving almost real-time control to the desktop administrators on desktop images. Furthermore, end-user data does not leave the data center even when it is used by the end-user, and virtualisation isolation characteristics ensure that the non-corporate use of the desktop does not interfere with its corporate use, thus greatly reducing the risk posed to corporate assets by infected desktops. Hosted virtual desktops do not change the end-user behavior but they put full control and visibility of the corporate desktop back in the hands of the IT administrator.”

It seems that everywhere we look at the challenges for security in cloud computing we find relatively easy solutions that don’t merely meet the challenge, they provide a level of security greater than we currently have. Security is not a problem in cloud computing, it is an opportunity. And here’s one, courtesy of Gary Wood, that is the icing on the cake. “Patching is one of the big headaches for administrators.” Doing it interferes with the users; not doing it leaves us insecure. The solution? Use the cloud. “Hire a few extra servers for a couple of days, mirror your system, install the patches without interfering with the users, test, and then flip.”
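
Wood’s mirror-patch-test-flip suggestion is worth sketching, because the ordering is the whole point: users never touch an untested patch. The `CloudProvider` class below is a stand-in for whatever API your provider actually exposes; every method name is hypothetical.

```python
# Skeleton of the "hire servers, mirror, patch, test, flip" flow.
# None of these methods correspond to a real provider's API.
class CloudProvider:
    def clone_environment(self, name: str) -> str:
        # Hire temporary servers and mirror the live system onto them.
        return f"{name}-staging"

    def apply_patches(self, env: str) -> bool:
        # Patch the mirror; live users are completely untouched.
        return True

    def run_smoke_tests(self, env: str) -> bool:
        # Verify the patched mirror before anyone depends on it.
        return True

    def swap(self, live: str, staging: str) -> str:
        # Atomic flip; the old environment is kept around for rollback.
        return staging

provider = CloudProvider()
live = "prod"
staging = provider.clone_environment(live)
if provider.apply_patches(staging) and provider.run_smoke_tests(staging):
    live = provider.swap(live, staging)
else:
    print("tests failed; users never saw the broken patch")
print(live)
```

If the patch breaks something, the failure happens on rented servers nobody is using, and the flip simply never occurs.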

Postscript
This article was written in the cloud using Google Docs. It meant that the author could continue research and writing whether at home with the iMac, in the office with the Dell, or on the road with the Netbook. Telephone interviews were conducted through the cloud with Skype. It required no additional hardware, nor any costly word processing or telephone software. Just access to the cloud.

Categories: All, Security Issues

Data.gov.uk. Where to start?

January 22, 2010 Leave a comment

What is it?
Announced yesterday, 21 January 2010, it is the UK government following Barack Obama’s initiative and creating an online searchable database of the information, facts and figures it collects about us, our locality, and everything else that they know about in this wonderful country. This has to be a good thing, because it is a practical example of something I believe in: freedom of information. But my immediate reactions were not joy, but ‘privacy’ and ‘security’.

Privacy
Let’s start with privacy. From the FAQ:

How were the datasets in data.gov.uk selected?
Excluding personal and sensitive information, all information created by public sector bodies is, in principle, available for re-use. In the past, different approaches were adopted by local and regional authorities and individual agencies. The government is now widely encouraging all previously inaccessible public information to be made accessible through this website.

That means my privacy will be protected, right? Well, yes and no. It doesn’t seem as if my personal data held in the electoral register and by the DVLA will be made available through data.gov. So that’s a good thing. It will only be made available to anybody willing to pay for it, as it is already. Is it privacy or commerce that keeps it out of data.gov?

OK, I admit I’m a cynic. I do not believe that this government does anything much other than seek its own re-election; and I don’t trust it as far as I could throw the Lower Chamber. But in principle, data.gov has got to be a good thing, provided that government is consistent. If it keeps data out of data.gov because it might be personal or sensitive information, then that data must not be sold for a profit elsewhere. And if it is willing to sell some of its data, then that same data should be included for free access on data.gov. (Incidentally, while trying to find more about this project, I came across the following: “People want government to be there for them, to help them succeed and make the very best of life and the new opportunities the world offers. They do not want a government that leaves them to face these challenges alone.” Well, no. I, for one, am fed up with government telling me what I want. Frankly, I just want government to butt out and leave me alone.)

Security
What about security? I notice that Tim Berners-Lee was at the announcement and apparently played a strong part in the development of the system. Choice of an open source CMS, Drupal, rather than some fiercely protected proprietary database system was probably down to him. Respect to Sir Tim. What he has done in developing and promoting freely available information makes him a hero. But here he was just a visual political sound-bite paraded for maximum political effect. Where, I wonder, was someone like Ross Anderson, who would be able to talk about the security of this whole project? Could it be that they couldn’t find a respectable security expert who would be guaranteed to stay on song?

For there really are serious security considerations here. I’m not talking about the usual concern of accessing private data – by definition, if the data is there it should be accessible. I’m talking about the potential for data.gov to be compromised, and for a popular data.gov subsequently to ‘own’ computers across the greater part of the kingdom. The more likely route will be for the bad guys either to develop their own apps for data.gov containing their own malware, or to attack and compromise existing apps. Since it seems that anyone can develop apps, and anyone isn’t necessarily security-savvy, the potential is frightening.

Actually, I wonder if it’s all one gigantic plot for the government to do just that; to develop government malware in the form of data.gov apps that drop government spyware on every PC in the country. They could justify it as their way of protecting us from terrorists, keeping our kids safe from pedos, preventing benefit fraud, attacking money laundering, and fighting crime in general. The malware could switch on the web cams and find illegal smokers, spot domestic violence, catch truants, and just get on with the job which is what we want them to do. The fines they could levy would pay off the national debt in no time!

Conclusion
But seriously, the potential for problems is massive, and I would really like to hear what is being done on the security side of things. In the meantime, I for one will not be using it.

Categories: All, Security News

New types of Malware need a new type of Defence

January 21, 2010 Leave a comment

Growth of the poisoned website
The traditional method of infecting a PC with malware that can be used to steal passwords and account details has been to send the target an email with an infected attachment. Over the years, ISPs, anti-virus companies and users themselves have become more adept at recognizing such threats, so the success rate for the attacker has diminished.

Now attackers have changed their primary modus operandi. Today, one of the strongest and most worrying trends is the growth of what are often called 'drive-by downloads'. The approach is to infect a legitimate page on a legitimate website with malware, so that when an innocent visitor accesses that page, the visitor's browser passes the infection from the web page to the visitor's PC.
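A common form of page poisoning is an invisible iframe injected into otherwise legitimate HTML. As a rough, hypothetical illustration of what a defender might look for (real scanners are far more sophisticated than this, and the evil.example URL is invented), a few lines of Python can flag the most obvious markers:

```python
import re

# Hypothetical heuristic: flag pages containing an invisible iframe,
# one of the commonest markers of a drive-by injection.
HIDDEN_IFRAME = re.compile(
    r'<iframe[^>]*(?:width\s*=\s*["\']?0\b|height\s*=\s*["\']?0\b|'
    r'display\s*:\s*none)[^>]*>',
    re.IGNORECASE)

def looks_injected(html):
    """True if the page contains an invisible iframe marker."""
    return bool(HIDDEN_IFRAME.search(html))

poisoned = ('<p>Welcome</p>'
            '<iframe src="http://evil.example/x" width="0" height="0"></iframe>')
print(looks_injected(poisoned))  # True
```

A legitimate, visible iframe (say, a 300×250 advertising widget) passes the check; only the zero-size or hidden variety trips it.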


Attack methods: email decreasing, poisoned webs increasing

This initial infection is often quite small. Its function is primarily to open a communication channel from the PC to the attacker’s own computer; and once this channel has been opened, the attacker can transfer larger and more sophisticated malware to the user – such as spyware, a rootkit, a keylogger, a remote administration tool or a combination of all four.

Now, while users and professionals have become more adept at recognizing attachment-borne viruses, installed rootkits are a completely different matter. It is anyone's guess how many PCs have been 'rooted' without their owners knowing anything about it.

Methodology
How is this achieved? Well, it is often the sort of sophisticated attack that is thought to be behind the recent hack and theft of intellectual property from Google by Chinese hackers. It starts with locating and compromising a legitimate website, or with developing a site that appears legitimate but is not. In practice, there is little need to do the latter because it is so easy to do the former.

Because the infection does nothing to the host (the infected legitimate website) it will often go undetected. But that's only half the job – the attackers' next task is to get users to visit that site and get infected themselves. This is usually done by 'phishing'. An email is sent to users with the sole purpose of persuading them to visit the poisoned site. It can be very effective, because a well-crafted email looks entirely innocent: the user isn't asked to do anything or download anything, just to have a look at something on the site. It is claimed that a variation of this called 'spear phishing' trapped the Google employees. Spear phishing is simply highly targeted phishing: not mass emails sent to snare as many people as possible, but specific emails sent to specific targets – and consequently all the more persuasive.

Modern anti-virus products can detect much of the malware used to poison websites, but there is very little defence against zero-day malware (malware exploiting flaws not yet known to vendors or defenders), such as that used in the Aurora Google/China attacks. As I write this, news is appearing about extensive disruption at Exeter University caused by a zero-day virus. Alastair Revell, Managing Consultant of Revell Research Systems in Exeter, has blogged: “I remain concerned about the zero-day virus threat. A virus that spreads quickly and easily such as this one, that exploits a flaw such as the one in Internet Explorer that saw Google hacked in China, with a drive-by infection capability on a site such as any of the international versions of Google would lead to huge economic disruption across the globe.”

The effect of poisoned websites and drive-by downloads
If drive-by downloads are capable of global disruption, the effect on individuals can be catastrophic. A visitor whose PC is infected by a poisoned website can say goodbye to all the sensitive data on his computer: his passwords, his bank details, those private emails to his secretary. Or, he might simply and quietly be recruited into the massed ranks of the botnets that deliver so much of the Internet’s spam. The user becomes a spammer without ever knowing about it.

As for the website, its owner will lose credibility when the infection is finally discovered – as eventually it will be. Who wants to do business with a company that has been infecting all of its visitors? Brands can be destroyed, reputations ruined, and revenue disrupted. At the very least, the website will be blacklisted by ISPs and even by browsers.

Clearly, the solution to this problem is to prevent the infection of the websites in the first place – but that is easier said than done, is probably impossible, and doesn’t help the hundreds of thousands of sites already infected. The priority here is to find those sites and help the individual webmasters sanitize their pages. After that they can start to close the holes that let the infection in. But there will always be holes and there will always be attackers.

One solution
Dasient is a new company with a new approach: a combination of products and services to combat the threat of drive-by downloads. The main product, Web Anti-Malware (WAM), periodically scans its customers' websites looking for malware. Alternatively, if a webmaster whose site is hosted by a remote ISP becomes concerned about one or more of his pages, he can send the relevant URLs, or the complete domain URL, for Dasient to instigate a remote scan. In both cases the webmaster gets an alert whenever a poisoned page is discovered, which also states whether the site has already been blacklisted.


Malware alert from WAM; also confirms the site has been blacklisted by Google

At the heart of the system is the Malware Analysis Platform. It scans millions of websites looking for new attacks, learns from them, and catalogues the attack strings. The resulting knowledgebase, combined with algorithms that look for malicious behaviour, enables WAM to detect and isolate poisoned pages. Based on the knowledgebase of known infections, the system can either filter out the malware or simply block the pages concerned, so that visitors can neither access them nor get infected – and the website will not be blacklisted.
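To illustrate the general idea – and this is a hypothetical sketch, not Dasient's actual implementation, with invented signatures – signature-based filtering of this kind might look like:

```python
# Hypothetical sketch of signature-based page filtering, loosely in the
# spirit of a scanning knowledgebase -- not Dasient's actual code.
KNOWN_ATTACK_STRINGS = [
    'eval(unescape(',            # obfuscated-payload marker
    'document.write(unescape(',  # another catalogued injection pattern
]

def scan_page(html):
    """Return the catalogued attack strings found in a page."""
    return [sig for sig in KNOWN_ATTACK_STRINGS if sig in html]

def serve_or_block(html):
    """Serve the page unchanged, or block it if a signature matches."""
    return '403 blocked: page quarantined' if scan_page(html) else html
```

The real value of such a platform lies in keeping the knowledgebase current, since a static signature list is exactly what zero-day attacks defeat.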


Figure 4: a WAM scan report giving the all clear

It is one of those products that passes the KISS-test. For the customer it all looks very easy: easy to use and easy to understand. Its sophistication, and it is sophisticated, is hidden away behind an effective GUI. It won’t stop you getting infected (there is nothing that could guarantee this), but it could stop your business being disrupted.

Addendum
While writing this piece, Dasient has

  • announced a new version (v0.2) of mod_antimalware_lite, an open-source extension to Apache that will block infected web pages from being served to users.
  • launched a free blacklist monitoring service. It’s good to know that you won’t be the last to learn about your infection!
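mod_antimalware_lite itself is an Apache extension; as a rough analogy of the blocking behaviour described above – not the module's actual mechanism, and with invented paths and messages – the same idea expressed as Python WSGI middleware might be:

```python
# Hypothetical sketch: WSGI middleware that refuses to serve pages a
# scanner has flagged, analogous in spirit to an Apache filter module.
QUARANTINED = {'/promo.html'}  # paths flagged as infected (invented)

def antimalware_middleware(app):
    """Wrap a WSGI app so that quarantined paths return 403."""
    def wrapped(environ, start_response):
        if environ.get('PATH_INFO') in QUARANTINED:
            start_response('403 Forbidden',
                           [('Content-Type', 'text/plain')])
            return [b'Page temporarily unavailable.']
        return app(environ, start_response)
    return wrapped
```

The point of doing this at the server rather than in the scanner is that visitors stop receiving the infection the moment a page is flagged, before the webmaster has cleaned anything up.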

Get more information on Dasient and WAM from http://wam.dasient.com/wam/

Categories: All, Security News

And just what is YOUR password?

January 21, 2010 Leave a comment

Imperva has analysed the 32 million passwords that were stolen in the RockYou breach at the end of last year. The hacker thoughtfully posted the passwords on the Internet for all to see, but showed greater security awareness than RockYou had: RockYou had stored the passwords in clear text alongside the relevant usernames and associated accounts (Bebo, MySpace, Gmail etc); the hacker posted just the passwords, with no personally identifiable information.

Anyway, the report makes depressing reading: by far the majority of the passwords are classified as weak and could easily have been cracked even without the breach. We never learn.

By far the most popular password in use (and we can guess that this translates to everywhere) is '123456'. In order, the next most used are '12345', '123456789' and 'Password'. Damn! Means I'd better change both of my passwords.
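This kind of frequency analysis is trivial to reproduce. A toy sketch in Python (the list below is invented for illustration; it is not RockYou data):

```python
from collections import Counter

# Toy frequency analysis of a leaked password list.
# The entries and counts here are invented for illustration.
leaked = ['123456', '12345', '123456', 'Password', '123456789',
          '123456', 'iloveyou', '12345']

top = Counter(leaked).most_common(3)
print(top)
```

Against the real 32-million-entry list, the same few lines are essentially all that is needed to produce the rankings in the report.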

The report quotes Bruce Schneier’s advice on developing a strong but personally memorable password: “Take a sentence and turn it into a password. Something like ‘This little piggy went to market’ might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary.”
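For what it's worth, the mechanical core of that trick – taking the first letter of each word – is a one-liner in Python. This is illustration only: it deliberately does not reproduce the hand-tuned capitalisation and digit substitution of 'tlpWENT2m', and those personal tweaks are precisely what keep the result out of dictionaries.

```python
def mnemonic(sentence: str) -> str:
    """First letter of each word -- the starting point of the
    sentence-to-password trick; tweak case and digits by hand."""
    return ''.join(word[0] for word in sentence.split())

print(mnemonic('This little piggy went to market'))  # 'Tlpwtm'
```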

An alternative for Windows users would be to use the free Password Safe, originally developed by Schneier himself and now an open source project: http://passwordsafe.sourceforge.net/.

You can get Imperva’s report here: www.imperva.com/ld/password_report.asp.

Categories: All, Security News

Google/China syndrome version 2

January 20, 2010 Leave a comment

A new strain of cynicism has emerged:

  1. why weren’t Google employees using the Google Chrome browser and operating system instead of IE and Windows; and why on earth the ancient IE6 for God’s sake? Perhaps it was a complex plot to discredit Microsoft and boost use of Chrome (certainly both Firefox and Opera reported huge download increases in the aftermath of the German and French advice to ditch IE at least for the time being).
  2. Google showed remarkable gullibility and lax security controls to allow foreign national insiders to breach their systems; and they therefore deserve all they got.
  3. Google engineered the whole situation so that their righteous indignation could help repair their worldwide battered ‘do no evil’ brand image.

There are elements of truth in the first two of these suggestions, although I’m not sure I would go along with the conclusions. I doubt the extent of the third; but Google would be stupid not to take the moral high ground once it had happened. Nevertheless, there is still much in this story that doesn’t add up, and we still haven’t heard the last of it.

Categories: All, Security News