Arguably, there is no security incident without end-user involvement: either the user actively does something he shouldn’t, or passively fails to do something he should. The criminals’ usual route is to socially engineer the target into doing something he shouldn’t (see The art of social engineering), like clicking a dubious link or opening a malicious attachment. This is basic phishing. The original mass phishing campaigns, which sent the same email to hundreds of thousands of targets, deliver a steadily diminishing return: users have become adept at spotting them. So today criminals choose higher-value targets and send personalized emails to an individual or small group of individuals. This is spear-phishing.
Criminals, whether individuals, organized criminal gangs or state-sponsored groups, are all selecting spear-phishing as the attack method of choice. A recent study by Trend Micro found that 91% of successful APT attacks start with a spear-phishing email, and that 94% of those emails carry a malicious attachment. To put this into perspective, many (not all) security experts believe that any organization targeted by an APT will fall to the APT. The corollary, and one that I accept, is that anybody targeted by a well-crafted and researched spear-phishing attack will succumb to that attack, or the next one, or the one after that.
This is because there is no guaranteed defence against spear-phishing. It is man versus man; technology alone won’t work. You can filter incoming emails, but you might miss one. You can filter the target URLs, provided you know about all of them; but that still misses the disguised malicious attachments.
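To see why URL filtering can only ever be partial, consider a minimal blocklist filter. This is a sketch, not any vendor’s real product, and the domains are hypothetical: it catches only what it already knows about, which is exactly the weakness described above.

```python
import re

# Hypothetical known-bad domains; a real blocklist is larger but has
# the same structural flaw: it cannot contain tomorrow's domains.
BLOCKLISTED_DOMAINS = {"evil-example.com", "phish-example.net"}

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def is_suspicious(email_body: str) -> bool:
    """Flag an email if any embedded URL points at a known-bad domain."""
    for domain in URL_RE.findall(email_body):
        if domain.lower() in BLOCKLISTED_DOMAINS:
            return True
    # A freshly registered domain, or a malicious attachment with no
    # URL at all, sails straight through.
    return False

print(is_suspicious("Click here: http://evil-example.com/login"))  # True
print(is_suspicious("Click here: http://brand-new-domain.biz/x"))  # False
```

A spear-phisher simply registers a new domain, or skips URLs entirely and attaches the payload, and the filter has nothing to match against.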
This all raises the question of why spear-phishing is so successful; and the answer is that the criminals do their homework. They treat the internet as their own big data playground, and harvest little snippets of information from different places to combine into a remarkably detailed profile of potential targets. There are huge criminal databases of stolen data. Just this week it emerged that the Nationwide insurance group in the US had personal details of 1.1 million Americans stolen, including “Social Security number, driver’s license number and/or date of birth and possibly marital status, gender, and occupation, and the name and address of their employer.” A couple of months ago, 3.6 million South Carolina taxpayers had details stolen (itself via a spear-phishing attack) from the state’s Department of Revenue.
What they don’t already have they get from the social networks and indeed the target’s company website. Email, personal interests, friends, position in company, age and location can all be found. From this profile it becomes relatively easy to compile a compelling email that looks 100% genuine and irresistible.
Indeed, the very way in which we do computing makes phishing very effective. A fascinating PhD thesis by Michele Daryanani (Desensitizing the User: A Study of the Efficacy of Warning Messages), made available this summer, draws a connection between hyperactive operating system warnings and desensitizing the user – including to phishing attacks.
So what can we do? The main defence is user education. There are specialist training companies; and PhishMe in particular specialises in teaching how to avoid being phished.
Yesterday, Rapid7 announced it is joining the battle with a new release, Metasploit Pro 4.5, introducing ‘advanced capabilities to simulate social engineering attacks’. HD Moore, the originator of Metasploit and chief security officer at Rapid7, describes it thus: “Many organizations already conduct end-user trainings and implement technical security controls to protect their data, but it’s hard to know how effective these measures are, or even if you’re focusing on the right things. Metasploit assesses the effectiveness of these measures, and provides metrics and management for each step in the chain of compromise to help you reduce your risk.” In other words, it allows you to test your users and see which of them fall to phishing under what circumstances – spear-training against spear-phishing, as it were.
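The core mechanics of such a simulation are simple enough to sketch. This is not Metasploit’s actual implementation, just a hypothetical illustration of the idea: each user in the test campaign gets a unique, unguessable token embedded in the simulated phishing link, so a click identifies exactly who fell for it.

```python
import secrets

def build_campaign(users):
    """Map a unique tracking token to each user in the test campaign."""
    return {secrets.token_urlsafe(8): user for user in users}

def record_click(campaign, token, clicked):
    """Called when a tracking link is hit; returns the user who clicked."""
    user = campaign.get(token)
    if user is not None:
        clicked.add(user)
    return user

# Run a tiny campaign against two (hypothetical) employees.
campaign = build_campaign(["alice", "bob"])
clicked = set()

# Simulate alice clicking the link in her test email.
token_for_alice = next(t for t, u in campaign.items() if u == "alice")
record_click(campaign, token_for_alice, clicked)

print(clicked)  # {'alice'} — alice is the one who needs follow-up training
```

The output of a real tool is richer, of course, but the principle is the same: measure who clicks, then target the training at them.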
But I’d like to add my own recommendation: governments should understand that, more often than not, education is better than legislation. If governments would spend a fraction of their security budgets, and a fraction of their energy, on educating users rather than legislating against choice, we would all be a lot safer. And happier.
My stories on Infosecurity Mag for 17 May 2012
25 civil servants reprimanded weekly for data breach
Government databases are full of highly prized and highly sensitive personal information. The upcoming Communications Bill will generate one of the very largest databases. The government says it will not include personal information.
Vulnerability found in Mobile Spy spyware app
Mobile Spy is covert spyware designed to allow parents to monitor their children’s smartphones, employers to catch time-wasters, and partners to detect cheating spouses. But vulnerabilities mean the covertly spied-upon can become the covert spy.
Governments make a grab for the internet
Although the internet is officially governed by a bottom-up multi-stakeholder non-governmental model, many governments around the world believe it leaves the US with too much control; and they want things to change.
My news stories on Infosecurity Magazine for Tuesday 20 March…
New twist in social engineering rogue AV
Rogue anti-virus products continue to be a major source of malware. The trick for the criminal is in getting the victim to click the link; and GFI has spotted a new development.
Cost of data breaches outstripping inflation
The average cost to UK business per record lost, according to the latest Symantec/Ponemon study, has increased from £47 in 2007 to £79 in 2011. Had it been inflation alone, it would have increased to just over £53.
Infosec human factor solved only by education
Information security is among the most popular of all the training courses offered by SkillSoft, with ‘An introduction to Information Security’ second only to the ‘Fundamentals of Networking’ in the top 100 IT courses, says the company.
Eighteen months ago we had news of a sophisticated attack against Google. It became known as the Aurora attack and it spawned a new term: advanced persistent threat, or APT. It may or may not have had the direction, connivance or knowledge of the Chinese government. But it made us rethink the threat landscape.
A year ago we heard about Stuxnet, a new intricate attack originally targeting the Iranian nuclear programme. This too may or may not have had government direction, connivance or knowledge. But again, we had to rethink the landscape: the unhackable, computers not even attached to the internet, had become hackable.
A few months ago, one of the world’s leading security companies, RSA, was breached and SecurID tokens were compromised. A while later, Lockheed Martin and Northrop Grumman, two leading US defence companies, were both attacked with the stolen RSA data. Another new development – the implication is that the RSA attack was a planned precursor of the defence attacks – and once again the finger has been pointed at China.
What can we conclude from all this? That cybercrime has been taken over by government cyber warfare agencies? Well, yes and no. Cybercrime today is a PPP, a public/private partnership, with freelance cybercriminals employed by and selling to government agencies. And these same criminals also work for highly organized criminal gangs.
Do we deduce, then, that our security industry has failed us? Again, yes and no. The security industry failed in these and many more instances. But without that industry, without the anti-malware companies, without our firewalls and filters and intrusion prevention, it would be chaos. The security industry stops far more than it lets through.
But what does get through is now so sophisticated that many security experts privately admit that there is no defence against a determined, targeted attack. And if the big companies, and even security companies, cannot defend themselves, what hope is there for the rest of us? Dr Kevin Curran, a lecturer in computer science and senior member of the IEEE, told me in a conversation about the recent Sony hacks, “There’s nothing we can do to stop a targeted attack. We’re all vulnerable.”
So, do we load ourselves up with layers of cyber defences, and then just hope? Do we have to accept that if our name is on the bullet, that’s it? That if a foreign government wants our inventions for its own industry we have to accept it? That if a criminal gang wants our card details for themselves they will take them?
No, we don’t have to, and shouldn’t, just give up. There is a common factor, a common weak link exploited by all hackers; and if we strengthen that link, we will do much to prevent the attacks. What is this weak link? It’s you. It’s me. It’s all of us. It’s Joe User.
Joe User is both the cause and the solution. We have to change our behaviour. Consider the breakdown of currently successful malware attacks published by the Spanish anti-malware company PandaLabs. Similar graphs could be drawn for the different types of email scam or spam, and others for the categories of phishing attack. Endless graphs could be drawn to help us understand the threats we face from the e-criminals. But there is one statistic always left off: 100% of all these attacks depend upon just one element. Joe User.
If we were to include Joe User’s involvement in these attack graphs, he would always stand at 100%. Think about this. Not one single successful hack from the nerd in his bedroom to the Russian Mafia to the secretive government cyberwarfare agency has ever succeeded without the conscious or unconscious connivance of Joe. Joe, of course, is the single user at his desk in the corner, or working on the train going home – but he is equally the body corporate. It may be that he doesn’t do what he should, or does do something he shouldn’t; he might do it willingly or unwillingly or in ignorance – but if that act of collusion doesn’t happen, then the hacker can’t get in.
The hacker is like a vampire at the door. If Joe doesn’t invite him in, he can’t get in. But if Joe does let him in, he’ll own you, and he’ll bleed you dry. And the good hacker won’t even leave a shadow while he’s doing it.
We can illustrate this with a reconstruction of the way in which the Aurora attack was probably perpetrated. The attackers first chose their target. How? Possibly by using a business network like LinkedIn. Try it yourself. Choose any company and check it on LinkedIn. You’ll get a list of many of the internet-active employees, and probably which department they work in or what they do. Choose the person most likely to have good access to the corporate network or have direct knowledge of the company information you want to steal. Then switch to Facebook. See if he is there – probably he, or she, is. You already know what Joe does; now you can find out what he likes. Who his friends are. What interests him outside of work.
Now you have to hack one of those friends. It’s not as hard as you would hope. For example, there are long lists of stolen passwords available to the criminal. Maybe an innocuous gaming site was hacked, and user details stolen. From Sony, perhaps; Sony seems to have stored Joe’s password in plaintext. If you can find your friend-target on one of these lists, the chances are (because we all do it, don’t we?) that he’s using the same password throughout the internet.
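Plaintext storage is what makes those stolen lists so valuable. A short sketch (hypothetical values, standard library only) shows what sites should do instead: salted hashing means the same reused password stored at two sites produces two unrelated digests, so a list stolen from one site cannot simply be matched against another.

```python
import hashlib
import os

def store_password(password: str):
    """Store a salted PBKDF2 digest rather than the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from the offered password and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest

# The same (weak, reused) password stored at two different sites:
salt_a, digest_a = store_password("hunter2")
salt_b, digest_b = store_password("hunter2")

print(digest_a != digest_b)                 # True: different salts, unrelated digests
print(verify("hunter2", salt_a, digest_a))  # True: the legitimate user still logs in
```

None of this helps Joe, of course, if the breached site stored his password in the clear; the only defence in his hands is not reusing it.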
So now we can own Joe User’s friend’s Facebook account. We already know what Joe does, and we now know what interests him – and we’re his friend.
The next step is to forge a personal message from the friend, based around something of mutual interest to both parties. The intent is to get Joe to visit a particular site that we have already compromised. Again, that’s not too difficult – drive-by downloading from compromised sites is one of the cybercriminals’ current weapons of choice. But this is where the hacker might play his trump card – the use of a zero-day vulnerability in Joe’s browser.
The problem with zero-day vulnerabilities is that the security industry doesn’t know anything about them. We don’t even know how many there are. In this instance it was an unknown vulnerability in the old browser (IE6) that Joe was still using; and it was just one of a string of doors left open. This open door allowed the hacker to install a Trojan on Joe’s network – a Trojan designed to find and quietly steal information.
Joe left the doors open – an open invitation to the hacker – and the hacker quietly slipped in. And we all do it, all of the time. We do the wrong things. We click on bad links in emails we receive, we open attachments and we respond to spam. On the internet we get carried away and visit dubious sites using old and unpatched browsers, and we allow scripts to run willy-nilly rather than blocking them with something like a combination of the latest version of Firefox and NoScript. In short, we trust the internet to do us no harm; when we really shouldn’t.
And then there’s social networking, a Pandora’s Box of goodies for the hacker. Where there are privacy options, we ignore them, and upload vast amounts of personal, sensitive and often embarrassing information. We indulge in ‘my Friend List is bigger than your Friend List’, becoming a friend or contact or follower of any stranger that asks – and then, because it’s a social network, we trust those strangers as if they really are long-lost buddies from school.
But it’s not just a case of actively doing the wrong thing.
We also fail to do the right thing. Too many of us are still not using adequate and up-to-date anti-malware and firewall defences. We forget to patch or update our software when the supplier issues an update to solve a vulnerability, leaving that software vulnerable to the hacker. In short, we behave with insufficient paranoia about the internet. Paranoia is the best security defence.
Joe Corporate is no better. He often fails to develop and enforce a strict security policy. He forgets the importance of adequate provisioning and deprovisioning procedures – sometimes giving Joe User greater privileges than necessary, and not taking them away again fast enough; allowing disaffected Joe User to become Joe Hacker. He almost invariably fails to encrypt sensitive data, and once again fails the paranoia test.
So are we saying that all cybercrime could be stopped if every Joe only did the right thing? Yes, we are. Are we saying it will ever happen? No. It won’t. But the fact remains that e-crime would be dramatically reduced if more of us users were less inviting to the criminals. We need to take a leaf out of physical policing and architecture: crime prevention through environmental design, known as CPTED. We make our systems so difficult to penetrate that the criminals go elsewhere. And if there’s nowhere else to go, they give up. That’s the theory. But if Joe continually opens or leaves open the doors, then no amount of other defences will help.
Security is a partnership – a partnership between the company defences supplied by the security industry, and Joe’s personal practices. We need anti-virus products, and firewalls and intrusion detection and content filters; but more than anything we need Joe User to behave in a responsible manner. Cybercrime, whether it emanates from the lone computer nerd in his bedroom or a nation state’s cyberwarfare agency, can only be defeated if Joe User closes the door in the face of hackers.
That means we need to take security awareness more seriously. The message is simple: to defeat cybercrime companies need to spend as much time, effort and money on educating Joe User as they do on buying security products. It’s not an either or situation. We need both. But at the moment, Joe User is the weakest link.
Rosario Valotta has published a new 0-day attack against every version of Internet Explorer on every version of Windows. It’s a variant of cookiejacking: stealing a victim’s cookies. If you can steal the cookies, you can steal the session token and access whatever the user is accessing.
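Why is a stolen cookie so damaging? Because a session cookie is a bearer credential. A toy sketch of server-side session handling (hypothetical, standard library only) makes the point: the server maps token to user, and whoever presents the token is treated as that user, with no second check.

```python
import secrets

sessions = {}  # server-side table: session token -> logged-in user

def login(user: str) -> str:
    """Authenticate the user once, then hand back a session token."""
    token = secrets.token_hex(16)
    sessions[token] = user
    return token  # the browser stores this as a cookie

def handle_request(cookie_token: str) -> str:
    """Every later request is authorised purely by the cookie."""
    user = sessions.get(cookie_token)
    return f"hello, {user}" if user else "please log in"

victim_cookie = login("joe")

# An attacker who 'cookiejacks' this value is indistinguishable from Joe:
print(handle_request(victim_cookie))  # hello, joe
```

Real sites add mitigations (HttpOnly flags, token expiry, IP checks), but the underlying model is this one, which is why cookie theft equals session theft.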
Now I’m not qualified to give a technical comment on this attack; others will do that soon enough. But what I do want to say is that Rosario’s attack still depends on social engineering. It still requires you to do something that, if you knew what it was, you wouldn’t. His technique involves a disguised ‘dragging’ process: the victim is led to believe he is dragging something innocuous, while in reality he is opening the door to his computer.
Rosario’s example includes a simple 4-piece jigsaw of an attractive lady, with the tag line: ‘Solve the jigsaw to watch Denise naked’
That’s the social engineering: it looks pretty innocuous, and promises (the men among us) a reward at the end. But the jigsaw is the disguised dragging. Solve it and you’ve been had.
Sooner or later most 0-day attacks are patched. Like I said, I cannot comment on the technical detail of Rosario’s attack – but my point is that our defence against almost any attack is to avoid being socially engineered. It’s not easy. But the offer of naked pictures is invariably a trick.
Do you need any further proof that the modern cybercriminal is a thinking animal? OK. There are two current spam/phishing campaigns I am particularly aware of: one aimed at the US and one aimed at the UK. Both are centred on the taxman.
I first reported on the UK campaign at the end of last month: The Inland Revenue owes me money. Hurrah!.
But simultaneously there is a tax phishing campaign underway in the USA. This from Websense:
Websense Security Labs™ ThreatSeeker™ Network has detected a wave of tax-themed malicious email. While the tax theme in spam email is common all year round, it is interesting to see the different strategies malicious authors use in their campaigns.
We have seen reports last June about email with the subject “Notice of Underreported Income”. Today, we have seen a couple of email having the same subject but with different attack strategies.
2010 Tax-Themed Malicious Emails
Notice the different themes. In the UK, the taxman is generally seen as the possible source of a windfall. In the US, the taxman is a figure of fear, more likely to deliver a penalty or a criminal action than anything else. The phisher is playing on the hopes of the UK citizen, and the fears of the US citizen. Psychology is an important part of social engineering.
‘Social engineering’ means the application of psychological manipulation to change the behaviour of the target; that is, you. It is what the con artist does online to help him steal from you. Here we’re going to look at the development of online social engineering from simple overt attacks through more complex covert attacks to the modern threat of targeted social engineering; and then see if there’s anything we can do about it.
Overt social engineering
“The depersonalized nature of Internet communications in general is at the moment most exploited by crooks aiming for a short con: small pay-offs, but lots of them, using and re-using tried-and-tested techniques based on social engineering,” says David Harley, ESET research fellow and director of malware intelligence. The initial approach will involve a plausible story to gain your trust, but will often include an element of panic to persuade you to act quickly or lose the opportunity. The story itself will appeal to one of the basic human instincts. It will offer you money for nothing (greed); it will solicit humanitarian aid either for a friend in trouble or for a population suffering after a natural catastrophe such as an earthquake, tsunami or hurricane (sympathy); or it will be outright threatening to persuade you to pay up or face the consequences (fear).
Current simple overt attacks include
- Advance fee fraud. You pay a little now to get a lot more later, which never materialises. Examples include Nigerian frauds and foreign lottery wins.
- Auction fraud. You bid for a bargain, pay the money but never get the goods.
- Counterfeit goods. Most commonly expensive watches and Viagra.
- Disaster appeals. Fake requests follow all natural disasters.
- Extortion. Pay up or suffer the consequences.
- Financial fraud. Typical scams include Ponzi and pump and dump schemes.
- Londoning. I’ve been mugged in London/Lagos/Belgrade: please send me the air fare home.
- Money laundering. You are offered a job as a shipping or finance agent for a foreign company. You end up shipping stolen goods (cybermule) or stolen money (money mule) abroad.
Complex covert social engineering
Covert social engineering attacks do not openly ask you for money; their purpose is to steal your financial details without your knowledge. The principles remain the same: a plausible story to gain trust, followed by an appeal to basic emotions – and once again there are some tried and tested methods that are repeatedly used by the attackers. The most common of these are phishing, drive-by malware, false codecs and rogue software.
Phishing persuades you to visit a false website and simply hand over your bank details. “Indiscriminate phishing,” explains Harley, “where deceptive emails are spammed/mass-mailed in the hope of tricking a percentage of users of the phished service into divulging sensitive data, usually exemplifies the re-use of malicious resources to attack high volumes of potential victims, though use of such techniques as dynamic DNS and botherding is intended to make it harder to track and close down malicious or compromised machines hosting those resources.”
The social engineering aspect here is to persuade you to visit a particular site. But attackers have already ‘poisoned’ that site. They have compromised it with their own malware that infects the computer, via the browser, of any visitor to the infected page. This malware will then open a covert channel to the attacker who can subsequently install more sophisticated malware, likely to be spyware, a keylogger, a rootkit capable of turning you into a zombie within a botnet, or a combination of all three.
A more subtle variation uses the technique known as ‘search engine poisoning’. This will likely involve a specially crafted website that contains the malware. As soon as an incident of international interest occurs, the attackers use search engine optimisation techniques to make this website appear high on the search engine returns. So, if there’s an earthquake or plane crash or ash cloud that gets your interest, don’t just click on the first few links that turn up in Google or Bing or Yahoo: they may be false links to a bad website. (Having said this, the search engines are very good at recognising this attack and removing the links – but some get through for long enough to be a threat.)
The false codec
Another attack can be the false codec. Pornography is the most common lure – but it could be anything that has video content. Nigel Hawthorn, VP EMEA marketing at Blue Coat, takes up the story: “One of the ‘old standby’ malware vectors has recently added a bit of extra bling to increase its believability,” he says. “I’m referring to what I call “fake codec” malware — a web page that presents you with what looks like a video player window, but then tells you that your computer needs a new video codec (or a Flash upgrade, or a new version of Windows Media Player, or whatever) in order to view the video. Since the typical victim is in hot pursuit of a supposed pornographic video clip, the bad guys are counting on them not taking too long to think about the setup. But a little extra bling never hurts, so the latest version actually has some random ‘scrambled video’ bits flashing through the window for a second or two before it announces that you need a software upgrade to see the porn.”
Whether you get to see the video becomes irrelevant: what you do get is infected.
Rogue software is almost always false anti-virus software. You are offered a free scan. This scan will locate fictitious malware on your computer. From here there are many variations. You may be offered the full anti-virus package for just a few pounds, and you may even get a disguised version of one of the genuine free AV packages. All it cost you was a few pounds – and of course your credit card details. Or you could be offered a free ‘repair’ tool, which may or may not fix the supposed infection but will also include hidden malware.
Targeted social engineering
Carl Leonard, Websense security research manager, EMEA, has published details of a social engineering attack that is current as this is written. It’s a mass spamming email specifically aimed at human resources staff with an attached résumé and the request, “Please review my CV”. The CV is disguised as a zip file and contains the Oficla bot. This in turn downloads and installs the rogue AV package known as Security Essentials 2010. “HR departments are used to receiving CVs over email and this kind of malicious activity is indicative of the modern-day hacker. The broad-brush approach to seeding malware is now out of favour; fraudsters know they can infect more computers, and steal more data, if they use techniques that fit the target.”
This attack shows the beginning of the move away from broad-brush mass malicious spam to a more targeted and direct form of social engineering. The key that unlocks targeted attacks is web 2.0 in general, and social networking in particular. “Social networks are just magic for the bad guys,” says David Marcus, director of security research and communications at McAfee. “You’re out there giving the con man everything he needs to be able to con you.”
Graham Cluley, senior technology consultant, Sophos, takes it up. “Take LinkedIn – one of the things you can do is get a company profile. This is effectively a corporate directory of that company – a list of everybody on LinkedIn that works for that company, with job title, and even those who have just joined the company. It is easy for a hacker to forge an email that appears to come from the head of HR to all new employees saying, “Welcome, congrats on joining our company. Click on this link to our company intranet and find out about all the wonderful advantages and opportunities.” It would, of course, be a false website containing drive-by malware. “The guys in HR are a prime target,” adds Cluley. “They have access to some of the most sensitive information in the company, often with the ability to log into payroll, personal info and so on.”
Marcus highlights the opportunities for the con artist on Twitter. “People tag words in their tweets to say this is a subject I’m talking about,” and Twitter itself tells you the most popular subjects at any point (it’s ‘Monaco Grand Prix’ at the moment). “There are 75 million people tweeting, and if one of the main subjects is (Monaco Grand Prix), that’s a magnificent piece of information for me as a social engineer. I can then send out into the twittersphere a tweet tagged with the phrase (Monaco Grand Prix) or whatever and a shortened link – and I can guarantee that I will get an almost one-to-one chance that most people who are following that word will click that link.” Gotcha.
Facebook has been a scammers’ fishing pool for some time. Ed Rowley, product manager at M86 Security, gives an example: “A Facebook scam originating from the Pushdo botnet in October 2009 showed two aims – to steal users’ Facebook account credentials and to distribute the Zbot (Zeus Bot) Trojan. This particular phishing scam diverts the user to the fake Facebook login page, allowing cybercriminals to phish the person’s Facebook account (first hit). Then, to add insult to injury, the user is taken to a page that informs them that they need to download the “Facebook update tool”, which is the Zbot trojan (second hit).”
But this is just the beginning. The full details of the attack against Google earlier this year are still unknown; but it is believed that the attackers researched the targets on social networks before sending them forged emails. That can be done by just about anyone: you find the right target within the right company on LinkedIn, and then you learn about their personal interests on Facebook. “The weak link in this is always the user,” says Luis Corrons, technical director at PandaLabs, “and in general the user is easy to fool – and that’s why so many people get infected. Even if you know about security, and you know you have to be careful on the internet, no-one is safe when something is really targeted at you.”

Defending against social engineering
“The best definition of social engineering is hacking the human brain,” says Marcus. “In a thousand years’ time,” adds Cluley, “we will still have social engineering attacks – they might be delivered by 3D holograms, but they will still be social engineering because we cannot upgrade and patch human brains.” The problem is that social engineering is not a technology problem, so it has no absolute technology solution. “Education, education, education,” says Marcus. He doesn’t want people to be paranoid but believes that ‘suspicion’ should be the keyword to everything we do on the internet. “You cannot effectively get off the grid anymore,” he says. “Your information is out there, and if you’re telling people about your interests on social networks, you’re inviting the bad guys to lure you with more than everything they need to be successful.” Harley advocates using your own social engineering techniques: you should change users’ bad habits into good habits by “countering malicious social engineering with constructive social engineering through education.” But in the final analysis we need to remember Corrons’ concern: “I’m not really optimistic – there is no way to be 100% safe – you can be pretty safe, but you cannot guarantee security. OK, you’ve got your anti-virus and it’s up to date, but they will know which anti-virus you’re using and they will test their trojan against your anti-virus to see if it is detected before they attack you with it. They will have studied your movements and know your weak points.”