Two things caught my eye over the last few days. Firstly, a paper produced by the Oxford Internet Institute (OII) and published in Scientific Reports on 15 December 2011: The Dynamics of Protest Recruitment through an Online Network. And secondly, an article by Clay Claiborne: The Year in Review: They should have left that street vendor alone!
The first is an academic study on the role of social media – specifically Twitter – in the dynamics of an evolving social protest: in this case, the Spanish anti-austerity protests of May 2011.
We study recruitment patterns in the Twitter network and find evidence of social influence and complex contagion… We find that early participants cannot be characterized by a typical topological position but spreaders tend to be more central in the network. These findings shed light on the connection between online networks, social contagion, and collective dynamics, and offer an empirical test to the recruitment mechanisms theorized in formal models of collective action.
Key to the spread of social contagion, it seems, is the involvement of central figures in each network; and the speed of contagion is linked to the number of different exposures received:
The existence of recruitment bursts indicates that the effects of complex contagion are boosted by accelerated exposure, that is, by multiple stimuli received from different sources that take place within a small time window… [providing] empirical evidence of what scholars of social movements have called, metaphorically, collective effervescence.
One interesting conclusion is that traditional media publicity has little effect on the spread of unrest. Depending upon personal prejudice, of course, this could be a good or bad thing: either that traditional media merely reports the news without exhortations one way or the other, or that traditional media is in the pocket of the Establishment.
But the paper does conclude with the rider that recent “events, like the riots in London in August 2011, suggest that different online platforms are being used to mobilize different populations. The question that future research should consider is if the same recruitment patterns apply regardless of the technology being used, or if the affordances of the technology (i.e. public/private by default) shape the collective dynamics that they help coordinate.”
I can’t help wondering, however, if we are already moving beyond the study of individual social networks. The growth of social media apps that automatically post your tweet to Facebook and LinkedIn and all the other social networks you inhabit would suggest that all networks need to be considered together.
Which brings us to the second article, which I simply recommend as an excellent read on the evolution of the Arab Spring, involving Twitter, Anonymous, Wikileaks and more. For example, it highlights Google stepping in to bypass Mubarak’s block on Twitter by providing the speak-to-tweet service. “We hope that this will go some way to helping people in Egypt stay connected at this very difficult time. Our thoughts are with everyone there,” said Google on 31 January 2011.
Social evolution, or revolution if you like, is already a complex issue involving all aspects of the internet. It’s another reason for not letting our own governments get control of our internet via the pretence of doing so to protect intellectual property and copyright.
Moving swiftly on from Stefan Viehböck’s published WPS vulnerability (see Vulnerability in WiFi’s WPS is likely to affect the majority of home users), Tactical Network Solutions has already released a WPS cracking tool called Reaver. Reaver, says the company,
is a capability that we at TNS have been testing, perfecting and using for nearly a year. But now that this vulnerability has been discussed publicly we have decided to announce and release Reaver, our WPS attack tool, to the open source community. Reaver is capable of breaking WPS pins and recovering the plain text WPA/WPA2 passphrase of the target access point in approximately 4-10 hours (attack time varies based on the access point).
According to TNS, attacking WPS is much faster than attacking WPA directly yet gets you the same results: the WPA passphrase. The disadvantage is that WPS can be disabled. “However,” says Tactical, “in our experience even security experts with otherwise secure configurations neglect to disable WPS; further, some access points don’t provide an option to disable WPS, or don’t actually disable WPS when the owner tells it to.”
On 27 December Stefan Viehböck disclosed a WiFi Protected Setup (WPS) vulnerability. WPS was developed by the WiFi Alliance in 2007. Its purpose is to provide easy WiFi security for home users. “I noticed a few really bad design decisions,” wrote Stefan, “which enable an efficient brute force attack, thus effectively breaking the security of pretty much all WPS-enabled Wi-Fi routers. As all of the more recent router models come with WPS enabled by default, this affects millions of devices worldwide.”
More details are provided in his paper Brute forcing Wi-Fi Protected Setup. He notes two basic design flaws in WPS:
As the External Registrar option does not require any kind of authentication apart from providing the PIN, it is potentially vulnerable to brute force attacks.
An attacker can derive information about the correctness of parts of the PIN from the AP’s responses.
The latter ‘flaw’ effectively shortens the PIN: the AP confirms the correctness of the first four digits independently of the rest, and since the eighth digit is merely a checksum of the first seven, a brute force attack requires at most 10^4 + 10^3 = 11,000 attempts rather than the 10^8 an eight-digit PIN would suggest. Stefan wrote a ‘proof of concept’ brute force attack. Such attacks are usually thwarted by a ‘lock-down’ facility; that is, further log-in attempts are automatically blocked after, say, three failures. But, he writes,
Some vendors did not implement any kind of blocking mechanism to prevent brute force attacks. This allows an attacker to try all possible PIN combinations in less than four hours (at 1.3 seconds/attempt).
On average an attack will succeed in half the time.
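Those figures are easy to check. A minimal sketch of the arithmetic, assuming the standard WPS PIN checksum algorithm (the eighth digit is derived from the first seven) and Stefan’s measured 1.3 seconds per attempt:

```python
# WPS brute-force arithmetic: the AP acknowledges the first half of
# the PIN separately, and the 8th digit is a checksum, so the search
# space collapses from 10^8 to 10^4 + 10^3 = 11,000 attempts.
FIRST_HALF = 10**4    # digits 1-4, confirmed independently by the AP
SECOND_HALF = 10**3   # digits 5-7; digit 8 is derived, not guessed
SECONDS_PER_ATTEMPT = 1.3  # Stefan's figure

def wps_checksum(pin7: int) -> int:
    """Checksum (8th) digit for a 7-digit WPS PIN, per the WPS spec."""
    accum = 0
    while pin7:
        accum += 3 * (pin7 % 10)
        pin7 //= 10
        accum += pin7 % 10
        pin7 //= 10
    return (10 - accum % 10) % 10

worst_case = FIRST_HALF + SECOND_HALF                 # 11,000 attempts
worst_hours = worst_case * SECONDS_PER_ATTEMPT / 3600
print(f"worst case: {worst_case} attempts, {worst_hours:.2f} hours")
print(f"average:    {worst_hours / 2:.2f} hours")
print(f"example valid PIN: 1234567{wps_checksum(1234567)}")
```

At 1.3 seconds per attempt the worst case comes to roughly 3.97 hours – Stefan’s ‘less than four hours’ – and the expected case to about two.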
Stefan’s vulnerability has now been accepted by CERT. CERT’s advisory comments:
We are currently unaware of a practical solution to this problem.
Although the following will not mitigate this specific vulnerability, best practices also recommend only using WPA2 encryption with a strong password, disabling UPnP, and enabling MAC address filtering so only trusted computers and devices can connect to the wireless network.
Ironic, isn’t it? The ‘official’ security solution often provided by default for non-technical home users requires a technical capability beyond the average home user in order to stop it being a weakness… But irony or no irony, the simple fact is that the majority of home users everywhere are likely to be vulnerable.
Sometimes you just have to laugh for fear of crying. The Information Commissioner’s Office (ICO) strategy for 2012 makes me do just that. It is a 17-page, purple-prose, self-aggrandizing Declaration of Independence, declaring itself to be independent of political, public and media pressure. It should simply say: ‘we will uphold the law in our role as defined by the law.’
But it doesn’t do that. It seems more concerned to distance itself from the letter of the law by defining its own interpretation of the law, and to align itself with that interpretation. It has, in short, evolved an overblown idea of its function, which it attempts to define in this rather long and mis-titled public-relations document. I give just a few examples:
we will neither be exclusively an educator nor exclusively an enforcer. We are both, even though we prefer to deliver our desired outcomes through help and encouragement rather than force. This means we are primarily a facilitator…
In the time-honoured liberal tradition it has failed to understand that facilitation is delivered by enforcement, not enforcement delivered by facilitation.
We cannot address all risks to the upholding of information rights equally nor should we attempt to do so.
Yes, it most certainly should attempt to address all risks to the upholding of information rights equally.
we will treat all cases that come to us fairly and properly but not necessarily pursue them with equal vigour.
This is perhaps one of the most worrying comments. The ICO is declaring that it will decide, arbitrarily, whether your complaint is worth its attention. Not the law, not the judiciary, not parliament, not you, but the ICO itself will pre-judge a case and decide whether or not to pursue it with vigour.
we will devote particular effort to investigating, analysing and ultimately enforcing in those cases that we see as contributing most to the delivery of our desired outcomes and not just those presenting the biggest risk…
Not just those presenting the biggest risk. It really does say that it, the body responsible for enforcing the Data Protection Act, is not necessarily going to spend its effort on the biggest risk.
Laugh or cry? You decide.
I am pretty much losing faith in the Leveson Inquiry. There have been a few niggles; but allowing Piers Morgan, surely absolutely central to an enquiry into newspaper phone hacking, to give evidence by video link is patently absurd.
I had thought that it was an enquiry into newspapers hacking private mobile phones. I’m wrong, of course, but I suspect the general opinion is just that. Really, Leveson has four aims:
- phone hacking
- press and police
- press and politicians
and, the missing bit
- Recommendations for a more effective policy and regulation that supports the integrity and freedom of the press while encouraging the highest ethical standards.
That’s the bit we’re all going to miss in the wonderfully enjoyable and sometimes tragic gossip and scandal about who did what, when and where to whom. It explains the otherwise strange questions that Leveson has sent to, and demanded replies from, Guido (a long-time thorn in the Establishment’s side). These include:
- what material your website “Guido Fawkes” publishes, and why;
- where are your servers located? Do you consider the UK courts to have jurisdiction over the way in which your website is operated in the UK, and how far does this jurisdiction extend?
- how do you consider yourself to be regulated?
- the Inquiry would also welcome your views on the extent to which the content of websites, and the manner in which you operate, can be regulated by a domestic system of regulation.
Clearly Leveson believes his scope extends way beyond phone hacking. I’m beginning to fear the Inquiry is a peg upon which the Establishment can hang wide-ranging restrictive regulations on the freedom of the press.
Consultants and statisticians have a similar function: to confirm the preferences and prejudices of the client.
On December 9, Accuvant LABS produced a security analysis of the different browsers – and demonstrated that Chrome is the most secure. Well, what do you expect? It was commissioned by Google. Now this is not to suggest for one moment that there is anything misleading in Accuvant’s report, nor that Google is attempting anything underhand: merely that prejudice will out.
But some have certainly cried ‘bias!’ – NSS Labs among them. “If you choose to read the Google/Accuvant report, do so with the understanding that the methodology appears to be skewed in Google’s favor, and does not reflect real world attack scenarios.” It is, of course, purely coincidental that NSS’ own browser test comes to a completely different conclusion – that IE9 is considerably, nay, very, very considerably, superior security-wise to Chrome. While NSS claims that Google is undermining Firefox in favour of Chrome, one could also suggest that NSS is undermining the Accuvant report in favour of its own.
Prejudice will always out.
Having said that, NSS certainly has a point. Firefox, once a close friend of Google, is now a pain. I tried Chrome for a few weeks because, as a user, I love its built-in searching capabilities. But I soon got fed up with the adverts Google was spraying all over the place – adverts that Firefox or its add-ons were seamlessly hiding from me; and I went back to Firefox. This hits Google below the financial belt – no adverts equals less revenue.
So what is the answer? How can we navigate our way through this minefield of well-funded unprovable prejudice? “Who has the manliest browser?” asks Rob Rachwald, Director of Security Strategy at Imperva.
Browsers are very much like cars only in earlier stage of their life cycle. In the beginning, the competition was on who has the best basic features (e.g., driving from point A to point B or showing web content). After the basic functionality was achieved, Maslow’s law of hierarchal needs sets in. Namely, users’ focus moves to functionality and efficiency (e.g., fuel consumption or speed of rendering).
However, when comparing security features, some of the logical conundrums that plague cars similarly plague browsers:
- If one car has ABS system and the other one has air bags – who is safer?
- If one browser runs flash in sandbox and the other has anti-XSS filter – who is the safer?
Rachwald points to some basic differences in the way the two tests were conducted: “The NSS study focused solely on malware blocking… The Accuvant study, by contrast, added and focused on other criteria. URL reputation and application reputation are barely considered. In fact, the category “URL Blacklisting” is – oddly – virtually ignored…”
But, he concludes:
If you’re a geek, go for security through obscurity: The best way to minimize accidents’ consequences is to avoid it altogether. The way to avoid cyber accident is by using a platform which is less targeted by hackers due to its small market share. Such an example would have been Firefox with Linux when Windows and IE dominated the web. At the time, Firefox wasn’t less vulnerable than IE but it was less exploited due to its marginal market share. This method is of course limited to tech geeks willing to invest in installing, learning and dealing with exotic platforms in rapid manner. But this won’t work for the masses who may not have the time nor expertise to learn a new browser.
For consumers, use newer browsers…
But I don’t know. Nothing I read changes my own prejudices. I want to believe in Firefox. I love its open source philosophy. I feel safe with its own and added security add-ons (especially NoScript!). And I couldn’t live without Scrapbook. Therefore, relying on the final arbiter, my own prejudice, I do believe in Firefox.
Who Makes the Manliest Browser? Imperva
The self-employed and small businesses of the UK have always felt, rightly or wrongly, victimised by Revenue and Customs. And the people of this nation have always believed that there are dual standards: one law for the rich and one for the poor; one set of rules for government and another for the governed.
They may not be wrong. Parliament’s Public Accounts Committee – Sixty-First Report is a bombshell. It states:
The Department [HMRC] is not being even handed in its treatment of taxpayers. It is unfair that large companies can settle their tax disputes with the advice of professionals at less than the full amount due and that they have been allowed up to 10 years to pay their tax liabilities, while small businesses and individuals on tax credits are not allowed similar leeway.
But that’s not the half of it. Other criticisms include:
- The Department chose to depart from normal governance procedures in several cases, which allowed Commissioners to sign off on settlements that they themselves negotiated.
- Governance procedures have lacked the independence and transparency needed to provide sufficient assurance to Parliament.
- The Department’s failure to comply with its own processes resulted in a substantial amount of money being lost to the Exchequer.
- Those at the top of the Department have not taken personal responsibility for serious errors.
- The Department has left itself open to suspicion that its relationships with large companies are too cosy.
I’d be interested to know the legal distinction between ‘cosy’ and ‘corrupt’. I suspect that a lawyer could explain a vast difference; but a taxpayer will discern very little.