Why go to the trouble of hacking if you can find an easier method? Why not just pay mobile phone company employees to simply give you the codes that can unlock users’ unique SIM cards? That is what, apparently, has been happening for the last five years in France. The outside crooks paid the inside crooks €3 for each code, and then sold them on to hackers for €30.
The first thing you have to ask is how can this possibly happen in 2010? Security professionals have been shouting for years that the insider is as big a threat as the outside hacker. And we have solutions.
A database activity monitoring system that looks at the rate at which data is taken out of the database would have detected this problem. But it is not enough to have a simple monitoring solution: since access to the database is usually through an application, you need to be able to maintain end-to-end visibility through all the different tiers. The system should alert on any abnormal amount of data retrieved from the database, and should also apply geo-location analysis and alert on illogical database access: a user who should not be accessing the data so many times, or who retrieves a large number of details in a single session.
Amichai Shulman, CTO, Imperva
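The volume-and-frequency alerting Shulman describes is straightforward in principle. Here is a minimal sketch; the thresholds, names and structure are entirely illustrative, not Imperva's product:

```python
from collections import defaultdict

# Illustrative thresholds; a real system would baseline these per user and role.
MAX_ROWS_PER_SESSION = 500
MAX_SESSIONS_PER_DAY = 20

rows_fetched = defaultdict(int)    # session_id -> rows retrieved so far
sessions_today = defaultdict(int)  # user_id -> sessions opened today

def record_fetch(user_id, session_id, row_count):
    """Log a result set; alert if the session's total volume is abnormal."""
    rows_fetched[session_id] += row_count
    if rows_fetched[session_id] > MAX_ROWS_PER_SESSION:
        return [f"user {user_id}: abnormal data volume in session {session_id}"]
    return []

def record_session(user_id):
    """Log a new session; alert if the user's daily frequency is abnormal."""
    sessions_today[user_id] += 1
    if sessions_today[user_id] > MAX_SESSIONS_PER_DAY:
        return [f"user {user_id}: abnormal session frequency"]
    return []
```

The hard part, as Shulman says, is not the threshold check but attributing each query to a real end user through the application tier, so that who is pulling how much remains visible end to end.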
OK – it shouldn’t have happened. But it did; so there are other questions we need to consider. Mobile phones are increasingly used as authentication devices for mobile banking. Just how serious can this get? I spoke to Jonas Thulin, VP of Sales Engineering at FireID, for his views.
FireID uses mobile phones to provide two factor authentication to banks, and I asked if the code theft was a worry. “Not for us,” he told me. “It doesn’t affect our customers at all, because we don’t link our application to the SIM card on the phone. To generate the one-time password, we have a shared secret, a seed number, that we store on the phone. This gets encrypted by a PIN number that the user configures when he installs the application. Basically, there simply isn’t enough information on the phone to successfully decrypt the PIN code in order to steal OTPs.
“However, where these thefts can cause problems,” he added, “is where the 2FA isn’t really 2FA at all – but more properly it uses the second factor as an alternative rather than an addition to the first factor. A good example is Google’s new 2FA for Google Apps where the authenticating code is sent to the handset as an SMS message. More worryingly, a lot of banks also still do this. Where this happens, hackers with access to the stolen SIM codes can also get access to bank access codes.”
In short, where the mobile phone authentication mechanism is genuinely two-factor, such as that supplied by FireID, you’ll be OK just so long as the bad guys don’t get hold of both the SIM details and your PIN code. But if your bank simply sends you a text password – then you should be concerned. The moral is that genuine two-factor authentication works; pseudo two-factor authentication falls short.
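Thulin’s description – a stored seed that is only ever recovered by a key derived from the user’s PIN – can be illustrated with a toy sketch. This is not FireID’s actual scheme; the cipher here is a simple PBKDF2-derived keystream, purely to show why the handset alone is not enough:

```python
import hashlib

def pin_key(pin: str, salt: bytes, length: int) -> bytes:
    """Derive a keystream from the user's PIN (stdlib PBKDF2)."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000, dklen=length)

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

salt = b"per-device-salt"
seed = b"0123456789abcdef"  # the shared secret used to generate OTPs

# What actually sits on the handset: the seed encrypted under the PIN.
stored = xor(seed, pin_key("4711", salt, len(seed)))

# The right PIN recovers the seed; a wrong PIN yields plausible-looking garbage,
# and nothing stored on the phone tells an attacker which guess was right.
assert xor(stored, pin_key("4711", salt, len(seed))) == seed
assert xor(stored, pin_key("0000", salt, len(seed))) != seed
```

Steal the handset’s storage, or the SIM unlock codes, and you still hold only ciphertext; the second factor lives in the user’s head.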
Elcomsoft, a Russian cryptanalysis company, has a history of upsetting the West. Way back in 2001, Dmitry Sklyarov, an Elcomsoft programmer, was arrested in the USA after presenting at DEF CON. He had developed a product, The Advanced eBook Processor, that would decrypt encrypted Adobe e-books. He had not broken any US laws while in the USA, nor was his product illegal in Russia. But it certainly upset Adobe and other western publishers at the time.
Today we have a new Elcomsoft product: the Elcomsoft Wireless Security Auditor, complete with WPA2 brute force password cracking. And they’re still upsetting people. Idappcom’s CTO, Roger Haywood, has commented:
…the reality is that the software can brute force crack as many as 103,000 WiFi passwords per second – which equates to more than six million passwords a minute – on an HD5390 graphics card-equipped PC. Furthermore, if you extrapolate these figures to a multi-processor, multiple graphics card system, it can be seen that this significantly reduces the time taken to crack a company WiFi network, to the point where a dedicated hacker could compromise a corporate wireless network.
Our observation at Idappcom is that this is another irresponsible and unethical release from a Russian-based company that has clearly produced a ‘thinly disguised’ wireless network hacking tool with the deliberate intention of brute force hacking wireless networks.
The solution is clearly and intentionally priced within the grasp of any hacker or individual intent on malicious wireless attacks. Assuming you have no password and access control recovery system, if you do forget the password to a wireless network that you own, how difficult do you think it is to walk over to the device and press the reset button? In most situations resetting a wireless device, restoring a configuration and setting a new password is a process that can be achieved in minutes.
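Haywood’s arithmetic is easy to sanity-check, and it also shows where the real defence lies: password length and character variety, not secrecy of the tool. The rate below is his quoted single-card figure; the password policies are my own examples:

```python
def crack_days(alphabet_size: int, length: int, guesses_per_sec: float) -> float:
    """Days to exhaust the full keyspace at a given guessing rate."""
    return (alphabet_size ** length) / guesses_per_sec / 86_400

RATE = 103_000  # WPA2 guesses/sec on one GPU, per Haywood's figures

# Eight lowercase letters: weeks of work for a single card.
print(f"{crack_days(26, 8, RATE):,.1f} days")   # ~23.5 days

# Eight mixed-case alphanumerics: decades, even before adding symbols.
print(f"{crack_days(62, 8, RATE):,.0f} days")
```

Even a ten-fold multi-GPU rig only divides these figures by ten, which is why a long random passphrase keeps WPA2 out of practical reach while a short, simple one does not.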
Haywood’s is an absolutely valid viewpoint. But I’d like to suggest an alternative view. Was Adobe’s encryption weak in 2001 because of Dmitry Sklyarov; or could Dmitry Sklyarov produce his software only because Adobe’s encryption was weak? Adobe’s security is far stronger today. Is that partly because of Elcomsoft?
And now, does the Elcomsoft EWSA product create insecure networks, or merely demonstrate that those networks are already insecure? One thing we can be sure of; the security of those WiFi networks will now have to improve. Is that a bad thing?
There is a similarity here with the full disclosure debate. And I suspect that people will take similar sides. You may have guessed that, on balance, I believe that security is improved by full disclosure; and by companies like Elcomsoft. Those who believe that full disclosure is irresponsible disclosure will probably believe that Elcomsoft is irresponsible.
And never the twain shall meet.
This year’s computing buzzword is probably ‘consumerisation’. It is used to describe the growing influence of staff in the choice and use of corporate computing devices – and one effect of this consumerisation is that the demarcation lines between corporate and personal use are at best blurred and more likely non-existent.
Nigel Hawthorn, VP EMEA marketing at Blue Coat, believes this growing consumerisation may have another effect – a strain on corporate network bandwidths. Hawthorn has a history of predicting the unexpected. Back at the beginning of the summer, he suggested that some company networks would be overwhelmed by their own staff watching the World Cup live on company bandwidth (FIFA World Cup: the world’s biggest ever DoS?). He was confident enough to declare that if wrong, he’d eat his shirt; and he did not have to eat his shirt.
Now he points out that the latest version of the BBC’s iPlayer, coupled with consumerisation, has the potential to place a growing and sustained strain on UK bandwidths. “The iPlayer’s new version has some great new features on it,” he told me. “and there are two that are particularly important from a network manager’s point of view. Firstly, iPlayer can now support HD. What’s that – 3.2 Mb per second; where it’s just 1.6 for non-HD? And secondly, and probably more insidiously because users might not realise what they are doing, you can now set a particular programme or set of programmes as one of your favourites, and say that you want your PC to automatically download each new episode as soon as it is broadcast. So the PC sits there, boots up in the morning, and because the user has at some time in the past said ‘I love Eastenders’, it downloads last night’s Eastenders to the PC, even if that user never goes back and watches it.”
Network managers would do well to think about this. Consider the temptation on staff. Fewer people are seeing much difference between company computing and home computing. They check and respond to their company email at home on their own bandwidth; why shouldn’t they be just as relaxed at work with the company bandwidth? So what is wrong with downloading last night’s television to watch while having a sandwich lunch at your desk? Or to have something to watch on your laptop during the hour-long train commute home?
Then do the math. If just ten members of staff have the iPlayer app on their PCs and have set it to download one or more favourites, that could be something like 15GB of non work-related downloads, probably in one go when people boot up their PCs in the morning. And if those people haven’t downloaded, but all decide to watch streaming programmes during the same lunch-break, we’re talking about something like 32Mb/sec. Could your network cope?
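Those figures follow straight from the iPlayer rates quoted above; here is the arithmetic, for anyone who wants to adjust the assumptions for their own headcount:

```python
HD_MBPS = 3.2   # iPlayer HD stream rate, as quoted above
staff = 10      # employees with iPlayer favourites configured

# Lunchtime streaming: ten simultaneous HD viewers.
streaming_mbps = staff * HD_MBPS            # ~32 Mb/s of sustained load

# Morning boot-up: one hour-long HD episode auto-downloaded each.
episode_gb = HD_MBPS * 3600 / 8 / 1000      # Mb/s x seconds -> MB -> GB
download_gb = staff * episode_gb            # ~14.4 GB in one burst

print(round(streaming_mbps), round(download_gb, 1))
```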
The consumerisation of computing is just beginning. The use of the internet as the source of all video entertainment is growing. So the demand on company bandwidth from non work-related staff downloads is likely to grow exponentially over the next couple of years. This means that it will be essential to develop ‘acceptable use policies’ that are in themselves acceptable and yet enforceable. This will require a clear view into the use of your networks, and strong control over that use. And that requires the products of companies like Blue Coat.
The new two-factor authentication (2FA) for Google Apps has to be a good thing. But why now, I asked Eran Feigenbaum, director of security at Google Apps?
And without a hint of irony he answered, “At Google we are always looking for ways to improve users’ security and [wait for it] privacy. Since the weakest link in both is the password, 2FA is the logical next step.” Actually, I believe him.
This is how it works (if you want it). When you log in to Google Apps, a small app on your Android, iPhone or BlackBerry generates a separate, out-of-band, one-time six-digit code. You need this to complete the login. It makes it far more likely that you are who you say you are, because you also have access to your smartphone (the token, or second factor – which you have protected separately, right?). And if you don’t have a smartphone, Google will generate the code itself and SMS or voice it to your antiquated mobile phone. This is a massive security improvement, and raises Google Apps to a level of security similar to that used by many banks.
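The codes themselves are generated from a shared secret and the current time, in the style of the TOTP algorithm (RFC 6238). Google’s exact implementation is its own affair, but the general shape is this:

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Time-based one-time password: HMAC the current 30-second window,
    then dynamically truncate to a short decimal code (RFC 6238 style)."""
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Phone and server share the secret, so both can compute the same code;
# an attacker needs the secret itself, not just one observed code.
print(totp(b"12345678901234567890", now=59))  # RFC test secret -> "287082"
```

Because each code dies after one 30-second window, a shoulder-surfed or phished code is worth very little on its own.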
But it’s not enough. It only authenticates Google Apps. What about all of the other web applications? What about all of the social networks that do or will exist? What about the rest of the cloud? What we really need is free 2FA that can be integrated anywhere on the web.
And here’s my prediction. It’s coming; because, as Feigenbaum says, it’s the logical next step. I don’t know how or by whom it will be done. It might be Google or it might be one of the big internet security companies. It will probably involve Firefox. But it will definitely happen. Soon.
“I have been saying for years,” said Philippe Courtot, chairman and CEO of Qualys, “that we are simply not meant to be dependent on a huge complex operating system like Windows on the desktop; and that in the future, most of our computing will be done in the cloud.”
That prediction is now coming true with shrinking clients and expanding clouds. “Look at the audience of technology professionals in any conference,” he continued. “They’ve all got their iPads and/or their smartphones; and nothing else. You can get your email on your smartphone; and if you need to write a longer report you can use your iPad and Google Apps.” We no longer need, and probably never wanted, bloated operating systems on huge desktop computers that served primarily to shackle us to our desks.
We were actually talking about security and the cloud. Courtot’s point here is that because of the cloud, we now only need thin clients. This has two ramifications. Firstly, use of the cloud will, counterintuitively, make us more secure since thin clients can more easily be hardened; and secondly, tied-down clients have a head-start on open clients.
Think of this last point. As we enter the Second Computer Wars (the First Computer War, 25 years ago, was between Apple and Microsoft, and the theatre was The Desktop; this one is between Apple and Google, and the theatre is The Internet), we must remember that the weapons have changed. In the first war, Apple lost because it was closed. But in this war we must ask ourselves whether that very closed nature is now an advantage. Philippe Courtot certainly seems to think so.
Today it is closed Apple versus open Android/Chrome. “Microsoft and Nokia will be left in the dust,” adds Courtot; they each took wrong turnings. Microsoft thought it could carry on with its old philosophy while Nokia never really committed itself one way or the other.
Android versus iOS. Open versus closed. Logic leans towards closed; my heart hopes for open.
Psychology is strange. A threat from a master is far more acceptable than a threat from a novice – even though the danger is greater. Being phished by a novice is insulting: is that all they think of me?
Here’s one. Look at the grammar. Look at the spelling. Look at the punctuation. Surely I’m worth a bit more effort than that!
The site hidden under the link is ‘admemex dot com’. I thought I’d have a look – but Firefox tried to stop me. That was reassuring.
But I persisted – I wanted to see how good (or bad, considering the text grammar) a site forgery might be. This is where I landed; and as far as I went.
But compare the site forgery to the genuine page:
Not bad, eh? If the email author had spent as much effort in his (or her) text as was spent on the website forgery, then we might not be as safe as we are.
But remember this: curiosity compromised the cat. Don’t click on dubious links at home – go play in the road where it’s safer.
Last week, Qualys’ CTO Wolfgang Kandek told me that the “modern attacker has decided that the easiest thing to do is to attack the website that the user is going to visit rather than setting up special malicious sites and trying to drive users to them.” (The Top Cyber Security Risks Report) I found this quite disturbing, because it makes me wonder whether I am actually as safe on the internet as I had always thought I was.
You see, I use Firefox and NoScript. And NoScript will stop any script at all, whether benign or malicious, running in Firefox – unless I temporarily or permanently whitelist the page in question. This has to be a good thing. It means that when I visit a site and nothing much happens, I am forced to ask myself: do I trust this site? If I do, I can whitelist it and get the full experience. If I don’t, I can just move on confident that nothing untoward has happened.
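NoScript’s default-deny decision is conceptually simple: no script runs unless its origin is on my whitelist. A stripped-down sketch of that check (the real extension does far more, but this is the gist, with made-up whitelist entries):

```python
from urllib.parse import urlparse

whitelist = {"bbc.co.uk", "qualys.com"}  # origins I have chosen to trust

def allow_script(script_src: str) -> bool:
    """Default deny: run a script only if its host matches, or is a
    subdomain of, a whitelisted origin."""
    host = urlparse(script_src).hostname or ""
    return any(host == d or host.endswith("." + d) for d in whitelist)

assert allow_script("https://www.bbc.co.uk/player.js")
assert not allow_script("https://gethackedhere.example/payload.js")
```

The crucial property is that the decision is made per script origin, not per page: a trusted page can still carry a reference to an untrusted script, and the untrusted script stays blocked.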
But that was before Kandek’s comment. This sounds like a game-changer. If the bad guys compromise a good site, when I ask myself ‘do I trust this site’ I will probably say yes. And if the site in question was either TechCrunch or SongLyrics (two good sites recently hacked), I might have whitelisted a site that had been compromised.
Does this mean, I had to ask myself, that NoScript is no longer as useful as I thought? Well, who better to ask than NoScript’s developer, Giorgio Maone? “Had I visited TechCrunch a couple of weeks ago, even with NoScript, would I now be infected?” I asked him.
His answer, in essence: even when TechCrunch was compromised, it was not TechCrunch itself that was dangerous; it was the third-party site that was linked – let’s call it GetHackedHere.ru – that was dangerous. And so long as you don’t whitelist GetHackedHere.ru, NoScript will continue to keep you pretty safe.
But Maone didn’t stop there. Did you know, he asked, “that middle-clicking on site names shown in the NoScript menu opens a tab where a few tools are linked, giving information on that site?” I didn’t, so I tried it on TechCrunch. It gave me four options: the WOT Scorecard, the McAfee SiteAdvisor Rating, the Webmaster Tips Site Information, and Google’s Safe Browsing Diagnostic.
I clicked the last.
In the last 90 days, 58 pages on techcrunch.com have been compromised – although nothing since 6 September. But note that, confirming Maone’s comments, the actual malware was hosted on virtuellvorun.org, not on TechCrunch. So NoScript users would have remained protected even if they had whitelisted the compromised TechCrunch, because NoScript would still have disallowed any scripts from the non-whitelisted virtuellvorun.org.
I’m not quite as smug as I used to be – but I’m just as well protected by NoScript as I ever was. And I can and do wholeheartedly still recommend Firefox and NoScript to anyone who wants to stay safe on the Internet.
NoScript download (for Firefox users)
I asked Bruce Schneier one of the things that currently concerns me most. How can I be secure in the cloud?
“You can’t,” he replied. “In the cloud you’ve given your data to someone else. How can you secure what you don’t have? You don’t even know where it is.”
What about data tagging, I asked.
“Doesn’t work,” he said. “Technologically impossible. How can you tag a bit?”
Just for a moment, I thought I was being given a lesson by Heisenberg. “Wait a minute,” I said. “At a philosophic level exactly the same applies to the data on my desktop.”
“That’s right,” he replied.
And that’s when I realised. Schneier, in his trademark subtle-as-a-sledgehammer style, was giving me a lesson in security: it doesn’t exist and you can’t have it. What you can have, and must aim for, is an acceptable level of trust.
On your desktop you ask yourself: do I trust this hardware manufacturer not to have installed something nasty? Do I trust my software not to be full of bugs and phone-homes? Do I trust my security software to raise my level of trust to an acceptable level?
And in the cloud, you ask yourself: do I trust this supplier to protect my data, and to store it in the right place?
Maybe we should rename it: infotrust…
I was busy yesterday. Good job really – it meant that I didn’t even look at Twitter all day. In fact, the first I even knew of a problem was when Websense told me:
As of 3pm UK time Twitter Safety is reporting that the XSS flaw is no longer exploitable.
This morning we saw Proof Of Concepts of the Twitter command being posted by Twitter users and then began to see end users tweeting the code virally. There is the potential for malware authors to spread malicious tweets using the flaw to direct users to other Web sites.
As of writing, hundreds of new tweets per second are being published on twitter.com using the OnMouseOver flaw. Twitter users whose accounts have been affected by the flaw include journalists and high-profile celebrities.
One of these high-profile celebrities was, according to Graham Cluley, Sarah Brown.
Thousands of Twitter accounts have posted messages exploiting the flaw. Victims include Sarah Brown, wife of the former British Prime Minister.
It appears that in Sarah Brown’s case her Twitter page has been messed with in an attempt to redirect visitors to a hardcore porn site based in Japan. That’s obviously bad news for her followers – over one million of them.
Graham Cluley’s blog
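The underlying bug was a classic attribute-injection XSS: a tweeted URL was interpolated into the page’s markup without escaping, so a crafted URL could close the href attribute and smuggle in an onmouseover handler. Twitter’s actual templating code is not public; this toy renderer just illustrates the class of bug, and the one-line escaping fix:

```python
from html import escape

def render_link(url: str, sanitise: bool) -> str:
    """Render a tweeted URL as an anchor tag. With sanitise=False the raw
    string lands inside the attribute -- the class of bug behind the worm."""
    u = escape(url, quote=True) if sanitise else url
    return f'<a href="{u}">{u}</a>'

payload = 'http://x.example/@"onmouseover="alert(1)"'

# Unescaped, the attacker's handler becomes live markup in the page...
assert 'onmouseover="alert(1)"' in render_link(payload, sanitise=False)

# ...escaped, the quotes are neutralised and the attribute cannot be broken out of.
assert 'onmouseover="alert(1)"' not in render_link(payload, sanitise=True)
```

Because the payload travelled inside ordinary tweets, every mouse-over retweeted it, which is why it spread at hundreds of tweets per second.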
It is the speed with which threats can spread through services like Twitter that is worrying.
While most examples of the ‘onmouseover’ security flaw seem to be people playing around with code without specific malicious aim [the early position]… …there’s a possibility that bad actors may use this to direct end-users to malware and phish pages [which of course happened very quickly]. I’d like to think Twitter will have this under control before that happens [not really; but nearly]. However, we are surprised that Twitter has not suspended the main twitter.com web site while it works on a fix.
Christopher Boyd, senior threat researcher at GFI Software
The fact is, Twitter did not immediately suspend the site. And it is this ‘unreliability’ of other people that leads Lumension to say we need a fundamental rethink.
We simply can’t just rely on spotting malicious activity and then reactively try to stop it from affecting us – we need to take proactive steps to ensure that regardless of what is happening on the web, corporate environments are trusted and safe.
To steer clear of infections introduced by these types of unpredictable web events, businesses need to move from a threat-centric model that focuses on trying to prevent the bad; to a trust-centric model that only allows what is known to be good to run on the machine.
Don Leatham, senior director of solutions & strategy, Lumension
Last Thursday, Qualys (in conjunction with TippingPoint and SANS) published The Top Cyber Security Risks Report. I consider this report to be more valuable than most, because it
…features in-depth analysis and attack data from HP TippingPoint DVLabs, vulnerability data from Qualys and additional analysis provided by the Internet Storm Center and SANS.
The Top Cyber Security Risks Report
In short, it combines genuine data with the highest quality professional analysis. Compare this approach with the two recent ‘perception’ surveys I discuss here and here. Perception is, of course, highly valuable for marketing purposes; the danger is that other users might confuse the perception of what works with the reality of what works, and make bad choices. I put Wolfgang Kandek, CTO of Qualys, on the spot by asking him whether his experience of reality confirmed the general perception, held by both of the perception reports, that data loss prevention (DLP) and encryption are two of the best security controls for preventing security breaches.
“I haven’t seen that impact, I have to say,” he responded. “For me, encryption is very helpful on, let’s say, the laptop that is lost or stolen. It’s good then if it’s encrypted; it makes it very difficult for someone who finds or steals that laptop to actually get to the data. It’s also very useful between two points, if someone eavesdrops on the line or the internet connection. In these situations it is very, very useful. However, with the attacks we are seeing today, the attackers actually get into the end point where the data is unencrypted, where you actually write your emails, or where you submit your bank transfer before you type in your password. At that point it has to be unencrypted; and that is where the modern attackers are acting right now.
“DLP is again a useful technology for the unintentional leakage points; but I’m not sure how well it works against a determined attacker who is able to use encryption in his communications.” To illustrate his point, I could do no better than point to the section Analysis of a PDF attack in The Top Cyber Security Risks Report. It includes a series of graphics to illustrate the process of the attack – and I include the final graphic here. It shows the endgame. The attacker has compromised the victim’s network, and is communicating sensitive data back to home base. How effective, we have to ask ourselves, would DLP be if the attacker’s malware is able to encrypt the communications?
And we have to assume that today’s professional criminal is well able to do this.
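It takes very little for exfiltrated data to slip past a content-inspecting control. A toy illustration (the card-number regex and the XOR ‘encryption’ are deliberately simplistic stand-ins for what real DLP gateways and real malware do):

```python
import hashlib
import re

CARD = re.compile(r"\b\d{4}(?:[ -]?\d{4}){3}\b")  # toy payment-card pattern

def dlp_flags(outbound: bytes) -> bool:
    """Would a pattern-matching DLP gateway flag this outbound traffic?"""
    return bool(CARD.search(outbound.decode("latin-1")))

stolen = b"4111 1111 1111 1111"
assert dlp_flags(stolen)  # plaintext exfiltration is caught

# The attacker encrypts before sending (keystream XOR standing in for real crypto).
key = hashlib.sha256(b"attacker session key").digest()
ciphertext = bytes(a ^ b for a, b in zip(stolen, key))
assert not dlp_flags(ciphertext)  # same data, now invisible to the pattern
```

Nothing in the gateway’s view distinguishes that ciphertext from any other opaque binary upload, which is exactly Kandek’s point.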
One of the more alarming trends observed in the previous six months is the increased sophistication of attacks. Attackers have not only become more organized, they are also increasingly subversive and inconspicuous in the way they execute their attacks. The attacks are so sophisticated and subtle that few victims realize they are under attack until it is too late. It is increasingly common to hear of attackers remaining inside a compromised organization for months, gathering information with which they design and build even more sophisticated attacks. Once the desired information is obtained, the attackers launch attacks that are both more devastating and more covert.
The Top Cyber Security Risks Report
“What we’re seeing,” explains Kandek, “is that the modern attacker is moving away from emailing threats or malicious attachments and is instead attacking the tools that the user is using: the web browser, all the plug-ins, the web itself, and so on. The modern attacker has decided that the easiest thing to do is to attack the website that the user is going to visit rather than setting up special malicious sites and trying to drive users to them. We’ve learnt how to recognise bad sites and not to go there; so the bad guys are focusing their attention today on normal websites that people go to anyway, reasoning that if they could infect such a site with a little pointer that makes the visiting client do their bidding, well, that would be really good. The intent today is not to deface the website and publish a political message or something like that, but to put a little code or malware on the site that then infects the client browser that visits it.”
To prove Kandek’s point, it is worth mentioning that last week (6 September) the popular site TechCrunch was compromised and started serving its visitors with malware. And on 17 September, the day after the Qualys report was published, Websense announced that the music site Songlyrics.com had been compromised. Songlyrics gets something like 200,000 visitors each day, making it a far more attractive proposal (for the attackers) than creating a new site and trying to drive people to it.
Once a user accesses the main page of the song lyrics site, injected code redirects to an exploit site loaded with the Crimepack exploit kit. Attempted exploits result in a malicious binary (VT 39.5%) file that’s run on the victim’s computer. Once infected, the machine becomes another zombie-bot in the wild.
It is interesting to note that the malicious code injected on Songlyrics.com uses a similar obfuscation algorithm to Crimepack – a prepackaged commercial kit used by attackers to deliver malicious Web-based code. It appears that the majority of pages served by Songlyrics.com are compromised. Crimepack has become one of the best-selling exploit packs on the market due to its huge number of pre-compiled exploits, offering a great base for the “drive-by-download & execute” business.
Websense report: Singing a malicious song
So, in short, if you want to know what’s really happening out there so that you can work out how to stop it, then I cannot recommend strongly enough that you get and read The Top Cyber Security Risks Report.