Botnets. If we’re so clever, why are they so successful? A conversation with Kaspersky’s David Emm
I turned to David Emm, senior security researcher at Kaspersky Lab, for some answers; and it seems that there are two fundamental reasons. Firstly, the bad guys are getting pretty damn sophisticated themselves; and secondly, the good guys are hamstrung in their takedown efforts by international legal and ethical constraints.
The problems start at the very beginning. Botnets are not targeted: they’re not trying to crack specific well-defended systems, they simply seek to compromise any and every system that can be compromised. So a typical methodology will be to tempt the less savvy user to visit a malicious site, or a legitimate but compromised site that redirects to it. The less savvy user is also the user least likely to patch his systems. “Quite often,” explained David, “there will be an exploit bundle that will cycle through a range of different possible vulnerabilities: if WinZip present, apply this; else if IE6 present, apply this; else if QuickTime present, apply this… and so on.” There’s a fair possibility that if the user can be enticed to the website, he can be compromised on the website. And so it starts. And so it grows, because the fledgling botnet can then be used to tempt other users to the malicious sites in order to grow itself.
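The cycling David describes can be thought of as a simple fingerprint-then-select routine. The sketch below is purely illustrative — the component names, version strings and "exploit" labels are generic placeholders, not taken from any real exploit kit:

```python
def detect_components(user_agent: str, plugins: set) -> list:
    """Toy model of exploit-kit cycling: fingerprint the visiting
    browser, then return candidate attacks to try in order until
    one succeeds. Names here are illustrative placeholders."""
    candidates = []
    if "WinZip" in plugins:
        candidates.append("winzip_exploit")
    if "MSIE 6.0" in user_agent:          # e.g. an unpatched IE6
        candidates.append("ie6_exploit")
    if "QuickTime" in plugins:
        candidates.append("quicktime_exploit")
    return candidates
```

The point is that the kit never needs a specific target: whatever outdated component the visitor happens to have is enough.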
From then on, it becomes a game of cat and mouse between the good guys and the bad guys. The good guys are looking for the bad guys, while the bad guys seek to stay under the detection/nuisance radar. They do this in two ways: technical sophistication, and not being too greedy. “For example,” explains David, “the criminals will now quite often encrypt traffic between the zombie and the command server to make it difficult for us to look into that traffic and find out what’s going on. But more importantly, they will now often deploy a sort of P2P model where they pick domain names at random and try those for the C&C server – so in a sense any given domain could actually be the command server at any one time. Because it has a distributed model, it means that if one of them is taken down any number of others can take its place. It’s the classic spy structure where two or three people know two or three people and that’s your lot.”
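The “pick domain names at random” scheme David mentions is usually a domain generation algorithm (DGA): bot and operator share a seed, so both can derive the same candidate domains for any given day, and the operator only needs to register one of them. This is a generic illustration of the idea, not any specific botnet’s algorithm:

```python
import hashlib
from datetime import date

def candidate_domains(seed: str, day: date, count: int = 5) -> list:
    """Generic DGA sketch: derive candidate C&C domains from a shared
    seed and the current date. Bot and operator compute the same list;
    a takedown of one domain just moves traffic to the next candidate."""
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains
```

This is why seizing a single C&C domain achieves so little: tomorrow’s candidate list is already baked into every zombie.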
Put simply, the bad guys are using their own security techniques to defend themselves. Just as security researchers look for the bad guys, the bad guys watch out for the researchers. “They will specifically blacklist certain IP addresses that they know are related to research bodies like LEAs, AV companies and individual researchers. If one of these IPs tries to connect to their server, they’ll just block it. They’ll obfuscate their code. They’ll deploy techniques whereby the code will behave differently depending on where it is. So, if a researcher takes what he thinks is malware and executes it in a sandbox or a virtual machine to see what it does, it won’t do anything malicious. They deploy these and other techniques to try to make analysis difficult as well as trying to balance the overall operation of the network in order to stay under the radar.”
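“Behaving differently depending on where it is” boils down to checking the environment before doing anything incriminating. The toy sketch below shows the shape of the logic only — the indicator names are generic examples, and real malware checks far more signals (timing artefacts, CPU features, drivers, registry keys):

```python
def looks_like_analysis_env(env: set) -> bool:
    """Toy heuristic: report True if common analysis indicators
    (illustrative names only) are present in the environment."""
    indicators = {"vbox_guest_additions", "vmware_tools", "debugger_attached"}
    return bool(indicators & env)

def run(env: set) -> str:
    """Stay dormant under analysis; only act on a real victim machine."""
    if looks_like_analysis_env(env):
        return "benign behaviour"
    return "malicious behaviour"
```

The researcher’s sandbox sees an apparently harmless sample; the victim’s machine sees something else entirely.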
But it’s ‘balancing the overall operation of the network’ that is perhaps the most surprising. One way to avoid the wrath of the righteous is to avoid being too much of a nuisance – and this is something the bad guys do. “We know that criminals do this. Consider the people distributing fake anti-virus products. Clearly, if I’ve fallen for some scam, I might complain and ask to get my money back; and if I don’t get it back, I may go to the bank. Well, for obvious reasons, any bank is unlikely to raise merry hell over such issues unless there has been a sufficient volume of complaints. So if the cybercriminals keep the likely level of complaints to a certain level, then they’re not likely to be picked up and have their merchant credentials cancelled. To keep within limits, the criminals will even do refunds, which seems a bit bizarre. But if it’s going to cause difficulties with too many victims going to their banks to complain, the criminals will do it. Spammers and DDoS attackers will do the same thing – they will throttle the activity backwards and forwards to try to keep complaints to a minimum. ISPs do get involved with monitoring traffic, but unless something draws their attention to a specific IP address (could be law enforcement, could be researchers, etc) then looking for individual botnets is unlikely to be uppermost in their minds.”
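The throttling David describes is, mechanically, just a feedback loop: watch the complaint volume, back off when it approaches the level that would trigger a bank or ISP response, and quietly ramp back up when it falls. A minimal sketch of that idea, with made-up factors and thresholds:

```python
def adjust_send_rate(rate: float, complaints: int, threshold: int) -> float:
    """Toy model of complaint-aware throttling: halve activity when
    complaints approach the threshold that would attract attention,
    ramp up gently when well below it. All constants are illustrative."""
    if complaints >= threshold:
        return rate * 0.5       # back off hard to drop below the radar
    if complaints < threshold // 2:
        return rate * 1.1       # quietly increase activity again
    return rate                 # hold steady in the grey zone
```

The unsettling implication is that the volume of visible abuse is chosen by the criminals, not limited by their capacity.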
Given these difficulties, what can we do about it? What about co-operation between the different stakeholders, such as that between SOCA and Virgin Media?
“Co-operation can work, inter-agency, inter-jurisdiction,” said David, “but there are difficulties… There have been a couple of instances where the Dutch police have effectively taken control of the control servers – but having got control, what then? They have access to tens of thousands if not millions of compromised machines, unknown to the innocent users of those machines. How do you then mitigate that? On one occasion they pushed out messages to the compromised computers, saying your machine has been compromised, click here to get more information and a mechanism for removing this threat. But at this point you start to get into territory that is at least debatable, because you’re beginning to talk about legality (in this country we have the Computer Misuse Act, which says you mustn’t make an unauthorised modification to somebody’s computer without their knowledge and consent) and also the ethics of the situation. The counter argument is that in the off-line world, if you know a particular physical address is being used to carry out a crime, then LEAs can get a warrant, go into that property, seize what’s there and so on. Some would argue that the Dutch police action is just an extension of this into the virtual world. But there’s no nay or yea on this – these are issues that are debated backwards and forwards… I don’t think you’ll even get consensus within the research community, within the law enforcement community, let alone the user community…”
And just to confirm this, I recalled a recent briefing from David’s own company, Kaspersky. Vitaly Kamluk, one of the original members of the Conficker Working Group, recounted a personal anecdote. He believes that most users would be happy for an outside agency to step in and cleanse their systems. “My own parents,” he said, “became infected. Because of my background, we decided I would disinfect the machine myself. But the infection had become so deep, it was impossible – and the result was that they lost all access to the computer. But they weren’t angry. They knew that I had to try to disinfect the computer.” From this, Vitaly extrapolates that users will accept the need for external incursions without too much objection. Personally, I don’t think you can compare invited relatives to uninvited, unknown and undemocratic third parties who come in and break your computer. Other discussions from the same briefing suggested that LEAs should be able to enter an infected computer using the same principle as the physical world’s ‘hot pursuit’, and that ISPs should disconnect infected computers until they are clean (shades of Scott Charney’s Internet Health Certificate here).
Psychology tells us that we will interpret arguments to prove our preconceptions. David Emm is, I suspect, in a relative minority of security professionals who are aware of the ethical implications of intrusive security (with my apologies to the many others I know). In the main, security professionals tend to think security trumps ethics. I believe the opposite. Either way, my conversation with David Emm has led me to believe that the battle against crime, whether physical or cyber or botnet-specific, is one of containment. We will never end crime; so we shouldn’t believe we can end cybercrime. What we can and must do is everything possible to mitigate the extent and effect of that crime. Ethically.