Archive for February, 2010

Government censorship of the Internet begins: Cryptome taken down

February 25, 2010

If ever we needed absolute unarguable proof that we must

  • support Iceland’s attempt to provide a safe haven for investigative journalists
  • support WikiLeaks
  • stop the headlong descent into government control of the internet

it is this. John Young’s Cryptome has been taken down by a crass misuse of US copyright laws despite the American constitutional right to freedom of speech.

The villains? The US Digital Millennium Copyright Act (DMCA) and Microsoft.

The crime? Cryptome published a copy of a Microsoft document entitled

Microsoft(R) Online Services
Global Criminal Compliance Handbook

and marked

Microsoft Confidential For Law Enforcement Use Only

I have a few Urban Heroes. John Young/Cryptome is one. The EFF is another. I hope that the latter can come to the aid of the former.

Meanwhile, have no doubt that the UK’s Digital Economy Bill is our DMCA. Like the DMCA, it pretends to use the protection of intellectual property as its raison d’être. But its real purpose is to hand control of the internet in the UK to the UK government. And that is what will happen if it comes into force.

Consider this Microsoft document. It is a secret document that someone, somewhere, quite rightly to my mind, considered should in the public interest be made public. Microsoft (and obviously Law Enforcement, but it’s not their property) objected. So they used a law ostensibly designed to protect against the theft of intellectual property to shut down a whole website. But where is the theft? We’re talking about property. Where is the loss? If there is no loss, how can there be a theft?

This is a blatant misuse of the purpose of copyright. It is being used as a means of censorship, giving the authorities the ability to censor what they don’t like on the internet. And this is what Mandelson is bringing to the UK in the Digital Economy Bill. He and it must be stopped while we still have some semblance of democracy left in this country.


Mandelson’s Digital Economy Bill is really a sub-section of ACTA

February 24, 2010

Her Majesty’s Revenue and Customs (HMRC) used to accept a statement from an IP owner that goods infringed copyright at a UK border, and would then automatically seize those goods. It required no proof and no court process.

Last summer this policy was ruled to be incompatible with EU law. A new Statutory Instrument consequently comes into effect on 10 March revoking the old HMRC rules and bringing in new ones. But the really interesting bit is the letter that HMRC sent out to businesses last year following the EU ruling. It said

“We now accept that the burden of proof should be upon the rights holder who must confirm the infringing nature of the goods by taking legal proceedings…”

If legal proceedings are good enough for HMRC, why are they effectively (not, I agree, literally, but certainly effectively, Mr – sorry, Lord – Mandelson) not good enough for the Digital Economy Bill?

I think I’ve sussed it. We know little of the details of the ACTA negotiations. But it is fairly common knowledge that Rights Holders are demanding the equivalent of a three strikes rule, and they do not want to be hampered by the need for court proceedings. But at the same time, a repeated claim coming from government negotiators is that any ACTA treaty will not impose new requirements on or alterations to existing national law.

There is now a new leaked document that claims to be part of the ACTA negotiations, and is believed by many to be genuine. The heading is “Article 2.17: Enforcement procedures in the digital environment”, and the first paragraph reads:

Each Party shall ensure that enforcement procedures, to the extent set forth in the civil and criminal enforcement sections of this Agreement, are available under its law so as to permit effective action against an act of trademark, copyright or related rights infringement which takes place by means of the Internet, including expeditious remedies to prevent infringement and remedies which constitute a deterrent to further infringement.

Look at that again. Each party shall ensure that the enforcement sections of this agreement are available under their national law. In other words, ACTA is saying what our national laws should be. Not Parliament, which is specifically being kept ignorant of the ACTA negotiations.

Jump now to section 3 (b) (I)

an online service provider adopting and reasonably implementing a policy(6) to address the unauthorised storage or transmission of materials protected by copyright or related rights except that no Party may condition the limitations in subparagraph (a) on the online service provider’s monitoring its services or affirmatively seeking facts indicating that infringing activity is occurring; …

The important bit is note (6) which states: “An example of such a policy is providing for the termination in appropriate circumstances of subscriptions and accounts in the service provider’s system or network of repeat infringers.”

We don’t have that in our national law. Well, not until Mandelson gets his Digital Economy Bill on the books. It’s why he’s clinging on to the three strikes proposal – although in fairness he has altered the word ‘termination’ to ‘temporary suspension’ for a period that he will determine in each case without parliamentary discussion; i.e., termination. Only when the Digital Economy Bill becomes an Act will he be able to sign off ACTA.

So here’s an open letter to all our UK Members of Parliament: “Are you really such total wusses that you will let an unelected manufactured Lord allow largely foreign interests to dictate UK law without any reference to you or the electorate? Shame on you!”

Surprise message via Skype: Ignore, Block, Report

February 24, 2010

I got this Skype message from someone I don’t know. That means I’m going to ignore it.

Don't even go there!

It’s trying to sell me something I didn’t ask for. It could be genuine marketing, but it probably means it’s spam. So ignore it.

It’s offering me cheap OEM software. Statistically, that means it’s likely to be pirated. Block it.

The visible link is not identical to the link that appears when copied into a text editor. That would suggest it will take you to an evil site. Report it.
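That mismatch check is easy to automate. Here is a minimal Python sketch – the helper name and the example URLs are mine, purely illustrative – that flags a link whose visible text looks like a URL but whose real target is a different host:

```python
from urllib.parse import urlparse

def link_looks_suspicious(visible_text: str, actual_href: str) -> bool:
    """Flag a link whose visible text resembles a URL but points elsewhere."""
    visible = visible_text.strip().lower()
    # Only meaningful when the visible text itself looks like a URL.
    if not visible.startswith(("http://", "https://", "www.")):
        return False
    shown_host = urlparse(
        visible if visible.startswith("http") else "http://" + visible
    ).hostname or ""
    target_host = (urlparse(actual_href).hostname or "").lower()
    return shown_host != target_host

# A hypothetical example of the mismatch described above:
print(link_looks_suspicious("www.genuine-shop.com",
                            "http://evil.example.net/buy"))  # → True
```

A real mail client sees both the anchor text and the href, so a filter along these lines can raise the “report it” flag automatically.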

If you get a message like this, ignore it, block it, report it. If you have any doubt that it might possibly be a genuine message from genuine people, just ignore it and block it.

And am I paranoid? I do hope so. It’s the best security device I’ve got.

The perfect phishing email?

February 23, 2010

This morning I got a phishing email; and damn it was good. Usually, whenever I get an email from a bank – especially if I don’t have an account with that bank – I just dump it. But this time I decided to read it.

It had all the hallmarks of a good phish: from a bank; included the requisite security warning (“Always be wary of ‘phishing emails’. These may appear to be from your bank…”); had a zipped attachment that was too small to be a serious document or app (13K); and told me that the whole purpose was to improve my security. Got to be a phish.

Press Release announcing Rapport

Actually, it’s not a phish. It’s not even an email, in the traditional sense. It’s a press release, and that’s why I received a copy even though I don’t have an account with the bank. It’s the bank trying to be a good guy, offering a tied-down VPN to its customers for online banking.

Just goes to show how paranoia makes us doubt even good intentions. Still, better that than trust bad intentions.

Log Management: a necessary part of information security

February 22, 2010

Log Management (the use of software to automatically manage computer system logs) started as a good idea, became an important part of computer security, and now stands on the verge of being mandatory. This parallels the history of computer security itself: from ‘good idea’ through ‘essential’ to ‘legal requirement’. The reason for the similarity is simple: log management provides one of the fundamental pillars of information security itself: accountability (the principle that the cause of any event should be both identified and responsible)1. So, as the concept of security has become more important, so has log management; for it is primarily through the analysis of a computer system’s logs (the electronic audit trail) that the perpetrator of a security event can be identified, and that associated vulnerabilities can be found and fixed.

This paper was commissioned by ITproPortal.

It sounds very simple, the management of logs, but it is actually very hard. So we are going to look at why it is essential and what we need to look for before we invest in Log Management (LM).

The problems in LM

The primary task of log management is managing the collection, protection, organisation and storage of logs in a manner that will create a reliable forensic audit trail. That’s not just some of the logs, but all of the logs, all of the time, and under all conditions. So a log management solution must automatically manage the whole process of monitoring the status of disparate log files; rotating them at predetermined times or thresholds; collecting essential metadata to make them useful; and securing them and forwarding them to central collection points. It is not easy, and could not be achieved manually.

This whole process is then complicated firstly by the sheer size of the logs involved, and secondly by the complexity of their content. Every computer in the enterprise will generate its own logs. Within each server there will be operating system logs. Major applications, such as databases, will produce their own logs. Each different security device or system, including firewalls, anti-malware software, intrusion detection/prevention systems, the corporate VPN and so on will also produce their own logs. This very soon amounts to a huge amount of data, and when multiplied by every computer in the network, and indexed in order to be managed, it is clear that storage rapidly becomes a major and ever increasing issue.

The second difficulty is the complexity of content. We don’t mean that the log content is difficult to understand, but simply that each different system log is likely to be in a different format. Some are in text, some in binary; some are comma-separated files, some tab-delimited; some are SNMP and some are XML. The problem here is that the LM will need to recognise and relate events across multiple system formats from multiple system logs over multiple computers in order to create a single audit trail for any particular security incident — and that means understanding and correlating all of the different formats.
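To make the correlation problem concrete, here is a sketch of normalising two hypothetical log formats – a CSV firewall log and a JSON-lines application log, both invented for illustration – into one common event schema:

```python
import csv
import io
import json

def normalise_csv_row(row: list[str]) -> dict:
    """Hypothetical firewall log: timestamp,src,dst,action."""
    ts, src, dst, action = row
    return {"time": ts, "source": src, "target": dst, "event": action}

def normalise_json_line(line: str) -> dict:
    """Hypothetical application log: one JSON object per line."""
    rec = json.loads(line)
    return {"time": rec["ts"], "source": rec["host"],
            "target": rec.get("resource", ""), "event": rec["msg"]}

def normalise(stream: str, fmt: str) -> list[dict]:
    """Convert raw log text in a known format into common event records."""
    if fmt == "csv":
        return [normalise_csv_row(r) for r in csv.reader(io.StringIO(stream))]
    if fmt == "json":
        return [normalise_json_line(l) for l in stream.splitlines() if l.strip()]
    raise ValueError(f"unknown log format: {fmt}")
```

Only once everything is in a common schema can events from the firewall, the database and the VPN be lined up into a single audit trail for one incident.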

These two difficulties inevitably generate a third: the security of the logs themselves. This applies both to the data in transit from the source device to the amalgamated database, and to the data at rest in the database. This is particularly important given that rootkits, a major and growing security problem, specifically seek to alter log data in order to hide their presence.
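One common way to protect log integrity – sketched here in Python as an illustration, not a prescription – is to attach an HMAC to each entry, chained to the previous entry’s MAC, so that altering or deleting any record invalidates every record after it:

```python
import hashlib
import hmac

def chain_logs(lines: list[str], key: bytes) -> list[tuple[str, str]]:
    """Append an HMAC to each log line, chaining in the previous line's MAC
    so that tampering with any entry breaks the whole chain from that point."""
    prev = b""
    out = []
    for line in lines:
        mac = hmac.new(key, prev + line.encode(), hashlib.sha256).hexdigest()
        out.append((line, mac))
        prev = mac.encode()
    return out

def verify_chain(entries: list[tuple[str, str]], key: bytes) -> bool:
    """Recompute the chain and confirm every stored MAC still matches."""
    prev = b""
    for line, mac in entries:
        expected = hmac.new(key, prev + line.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, mac):
            return False
        prev = mac.encode()
    return True
```

A rootkit that quietly edits one line can no longer do so invisibly: verification fails at the altered entry and everywhere downstream of it.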

Why bother?

If LM is so difficult, why bother? Why not simply gather the logs into a central source and examine them manually when necessary? Well, there are three main reasons. The first is that it is simply impractical (it would take too long, cost too much and be too subject to inevitable human error that would invalidate any forensic reliability) to attempt to monitor logs without LM. The second reason is that LM provides a much improved corporate security stance. And the third is that we don’t really have a choice any more: compliance with legal, commercial or bureaucratic regulations means that we can no longer realistically conduct business without LM.

Improved security stance.
Automated analysis of aggregated logs provides a number of features, including:

  • the identification of security incidents as they happen. This makes it possible for a breach or vulnerability to be closed before any serious damage occurs; fraud to be stopped before huge losses transpire; and new policies to be developed to minimise future risk.
  • the monitoring of high risk activity in near-realtime. This allows us to take action to prevent the activity developing into a fully fledged security incident. Examples of such activity could include access to specific critical files and folders; escalation of enhanced privileges to normal users; suspicious configuration changes such as switching off auditing; alteration of network configuration files and many more.
  • the identification of policy violations. It’s no good having the perfect security policy if we don’t enforce it; and we can’t enforce it if we don’t know when it is violated and by whom. LM provides the perfect way to maintain, enforce and continually improve our security policy; and thereby improve our actual security.
  • the identification of operational problems. This, surprisingly, is still mainstream security, since another of its fundamental pillars is ‘availability’; and availability can be damaged as much by internal network overloads and staff malfeasance as it can by external denial of service attacks.
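A rule-based monitor for high-risk activity of the kind listed above can be sketched very simply. The rule names and the patterns below are invented for illustration; a real LM product ships with far richer rule sets:

```python
import re

# Hypothetical patterns for a few of the high-risk activities listed above.
HIGH_RISK_RULES = {
    "audit disabled": re.compile(r"auditd?\s+(stopped|disabled)", re.I),
    "privilege escalation": re.compile(r"user \S+ added to (sudo|wheel|admin)", re.I),
    "critical file access": re.compile(r"open\(.*/etc/shadow", re.I),
}

def scan(lines):
    """Yield (rule_name, line) for each event matching a high-risk rule."""
    for line in lines:
        for name, pattern in HIGH_RISK_RULES.items():
            if pattern.search(line):
                yield name, line
```

Run continuously over the aggregated stream, even a matcher this crude turns raw logs into near-realtime alerts rather than after-the-fact forensics.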

Compliance is adherence to the standards specified in legal, commercial and sometimes simply bureaucratic regulations. The one thing they have in common is that we cannot ignore them if we wish to continue in business.

  • legal requirements. Governments are obsessive about passing legislation that has a direct effect on information security. But it is rarely consistent. As usual, there is a fundamental difference between the United States and Europe. The US tends to produce specific laws for a specific purpose with a specific penalty. Europe tries to produce generalised laws that will catch everything and tend to be nonspecific. Thus we have the US Sarbanes-Oxley Act (SOX), which is quite precise in its requirements (including the requirement for an audit trail to be kept for seven years) and has significant sanctions against individual directors. (The UK’s equivalent, the Code of Conduct, merely says “comply or explain why you don’t”.)
    In Europe, however, we have national implementations of the very wide-ranging Data Protection Directive with its insistence that personal data be kept secure – without ever explaining what that means or how it should be achieved. (This is relevant to LM since logs could contain personal data which must therefore be held securely; that is, the confidentiality of the audit trail must be ensured.) The whole thing is then further complicated by the international nature of modern business. Any non-European company that wishes to do business with Europe needs to satisfy Europe’s data protection requirements. Similarly, any company anywhere in the world that is a subsidiary of a US company or is itself listed in the US must comply with SOX. And we need to add to this many different laws with different requirements over different lengths of time; and the ones in the pipeline; and the ones that we as yet know nothing about.

The seventh data protection principle of the UK’s Data Protection Act

Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data.

Schedule 1: The Data Protection Principles

  • commercial requirements. The most pressing commercial requirement is to comply with the Payment Card Industry’s Data Security Standard (PCI-DSS). It’s not required by law; but if we don’t comply we will not be able to process credit card payments: which makes it rather compelling. One of the requirements is that we “Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis…” There are also detailed instructions on how the audit trail should be managed.
  • bureaucratic requirements. Bureaucracy has always laid down its own rules. In the UK, the government’s Code of Connection (CoCo) mandates that organisations must create an audit trail of system activity in order to facilitate future investigations, and that they should do this by collecting and storing logs from relevant systems and devices. It’s not the law; but it is required by any public body (including the Law) or other approved organisation that wishes or needs to connect to the Government Secure Internet; which means that we cannot do business with government without CoCo even if we are government.

Compliance, as we can see, is a complex soup. Large companies could, in theory, employ their corporate lawyers to specify exactly what laws and regulations they need to follow in order to conduct their business. But it wouldn’t be very satisfactory, and few companies could afford to do so. So how can the average business manage its compliance requirements without breaking the bank? One solution could be to conform to the single most widely recognised and highly regarded set of security specifications currently available: ISO/IEC 27000. This has no validity in law; but it is widely believed that if we conform to ISO/IEC 27000, we will most likely comply with just about all legal, commercial and bureaucratic security requirements.

Summary: what to look for in LM

We have seen that LM will improve our security and is necessary for compliance with multiple targets that are often inconsistent and unspecified. But we have also concluded that to maximise our security and simultaneously obtain the widest possible compliance at the minimum effort we can follow the LM requirements specified in ISO/IEC 27000. Section 13 of 27002 concentrates on just this. It is entitled Information Security Incident Management, and specifies the necessary level of log management in considerable detail. What is particularly clear, however, is that great care should be taken not just in collecting and using event information, but also in preserving the integrity of that data. The driver is legally admissible forensic evidence. If the audit trail has been, or could have been, corrupted or altered between its source and its analysis, it cannot definitively prove the cause or perpetrator of the event. It is effectively, and legally, irrelevant.

But we are finally in a position to summarise what we need to look for in a Log Management product:

  • the original logs should be maintained in their original format to ensure forensic readiness, to allow re-analysis under different analysis rules, and to allow use by other applications
  • scalability: logs grow and continue to grow very rapidly; and if we grow the company as well, then we need a system able to handle a huge and increasing load – the logs of anything from ten to thousands of servers
  • minimising spurious event data: storage requirements can be reduced with a system that offers control over the level of auditing undertaken; but note that this requires considerable care over the audit and retention policies implemented
  • central store: logs should be collected from throughout the enterprise into a central database
  • the transfer of the logs from source to database must be done securely in a manner that retains forensic capability
  • the database must be secure, so that its own integrity, confidentiality, availability and accountability – and completeness – is maintained
  • features should be available to allow logs to be indexed to provide rapid and unstructured queries as required
  • extensible rules-based analysis should be available to allow alerts on new types of threat
  • it should be managed centrally in a virtualised environment to ensure ease and flexibility of use.

Choosing the right log management system will inevitably improve security, almost certainly help with compliance, and probably save us money.

1 The other pillars are availability (information should be available to authorised people when they need it), confidentiality (sensitive information should only be available to the people authorised to see it) and integrity (information should not be subject to any unauthorised alteration): and all four are relevant to any discussion of log management.

Categories: All, Security Issues

Is Twitter the new Terrorist?

February 19, 2010

Twitter and Facebook are the reason governments need to control the Internet. Far-fetched? Let’s see.

First off, though, it begs two questions.

  • Are governments seeking to control the internet?
  • Is Twitter/Facebook of any real importance?

Government control
China. We know about China. We expect an authoritarian regime to be paranoid about the Internet. So we discount China.

South Korea blocks certain North Korean websites. Again, that’s probably to be expected; so we needn’t worry too much.

Turkey blocks numerous websites, usually on religious or decency grounds. For example, Richard Dawkins, a western scientist, had his site blocked after complaints from a leading Muslim creationist. But, well, despite Turkey’s attempts to join the European Union, it is not really a western state with our western values of freedom of expression. So, no real threat there either.

Australia is in the process of implementing compulsory filtering on all its ISPs. If your website is on the government blacklist, Australians won’t be able to see it. But, then, Australia is on the other side of the world, so it doesn’t affect us and isn’t any of our business anyway.

France. Wait a minute, France is next door to us. In fact, I live closer to France than I do to London. Anyway, didn’t France donate the Statue of Liberty to the USA? The French will surely have nothing to do with censorship! But have you heard of LOPPSI2? No, you probably haven’t. I’ve just done a search on the Times, Guardian and Independent websites and found no mention of LOPPSI2. Is this a conspiracy of silence? Or is LOPPSI2 unimportant to its geographically adjacent neighbour and EU partner? You decide.

LOPPSI2 has already passed through the French Lower House and is expected to pass through the Senate in the next few weeks. Then it will be law. “LOPPSI2,” writes Nate Anderson in Ars Technica, “is a grab bag of security items that includes state-sanctioned computer Trojans, a massive new database of citizen data (dubbed “Pericles”), and a requirement that ISPs start censoring sites on a government blacklist.” Pure and simple state censorship and control.

Let’s finally cut to the chase. The internet is no more safe from government control in the UK than it is in France. In fact, I would contend that it is under even greater threat. Mandelson’s Digital Economy Bill is just the beginning of a long and slippery downward slope. If he gets it through in its current state he will be able to gradually increase its invasiveness without further reference to parliament. I simply do not believe its purpose is solely to protect our creative industries – most of which historically go abroad for funding anyway. And look around at all the other things this government is doing, is planning on doing, or has already done.

So I think the bottom line is clear. Governments, probably all governments, all round the world, are intent on taking control of the internet. That’s our first beggared question answered. The second beggar is Twitter/Facebook.

What about Twitter/Facebook?
I’m treating these as a single entity. The reason is that more and more users are combining the two; using special apps so that a comment on FB’s Wall is automatically and simultaneously tweeted, and vice versa. This is important. Facebook brings two things to Twitter. Firstly, it introduces a huge new audience (Twitter has a mere 75 million users while FB has something like 400 million – don’t hesitate to correct me if I’m wrong). And secondly it adds a permanence that Twitter tends to lose simply through the sheer volume of tweets. FB is user-centric; Twitter is data-centric. Combine the two and you have a powerful and almost unstoppable way of disseminating information to interested people in almost real-time.

The knock-on effect is that PR companies are having to reconsider their whole attitude to crisis management. Consider the recent Eurostar debacle. It would seem that the company first tried the old method of containment: say nothing. But the news rapidly escaped and spread through Twitter.

Consider the current Toyota catastrophe. Matthew DeBord wrote (17 Feb 2010) at The Big Money: “None of this, though, can contend with the breakneck, crowdsourced, unmediated reputation-wrecker that is the 140 characters of a tweet. As the recall story exploded last week and I pondered the collapse of the vaunted Toyota Way, I checked the #Toyota Twitter tag frequently. The tweet-rate was blistering: Dozens of new tweets every 30 seconds. Give it half an hour and you had a thousand more. Even the most hardened PR warrior would have looked at that and wet his pants.”

It is this that terrifies governments: the breakneck, crowdsourced, unmediated reputation-wrecker that is the 140 characters of a tweet. Would the UK government have got away with its claims that David Kelly committed suicide if it hadn’t been able to control the release of information through those same newspapers that now seem uninterested in LOPPSI2? Would the missing car in the French tunnel have really disappeared if a million French tweeters had been instantly looking for it? Would the doubts of the majority of American citizens over what happened on 9/11 really be just doubts if Twitter had been there, giving instant voice to the doubts that only emerged slowly over time?

Governments cannot control information with Twitter around. But most governments will not survive if they cannot control information. Ergo, governments need control of the internet.

And we cannot afford to let them have it.

Product testing: valuable or meaningless?

February 16, 2010

Last Thursday Mike Rothman wrote a thoughtful piece on The Death of Product Reviews (Securosis blog). I’d like to take up this argument. “Unfortunately, the patient (the product review) has died,” he says. “The autopsy is still in process, but I suspect the product review died of starvation. There just hasn’t been enough food around to sustain this legend of media output.” He’s right: there simply isn’t enough advertising revenue to pay for genuine magazine reviews.

But I think the subject has evolved rather than died. And it already has the worm of sickness at its heart. The product review has been replaced by the product certification, the product test. Government, as always, is to blame (although in fairness, what has happened is not government’s fault). Certifying, or testing, products is now essential because of government requirements. Because of this, the money that could have gone into the advertising that would support independent magazine reviews is now used to fund the far more expensive, but much more necessary, government certification schemes.

In the main, certification is less testing than validating: validating marketing claims. Here’s the problem: the vendor simply claims its product’s strengths and ignores its weaknesses – and hey presto, it gets a government-recognised seal of approval, irrespective of any otherwise glaring faults.

That’s not the worst of it. If the product in question is in any way anti-malware, the vendor can simply claim that the product kills 99% of all known germs. The validation process will inevitably prove it to be true and the company has a marketing bonus that is actually meaningless. Why? Because the product will inevitably be tested against the Wild List.

There are two problems here. Firstly, it takes several months for the latest Wild List to be compiled, and released; so it is unlikely to contain any seriously testing zero-day malware. Secondly, the company being tested is more than likely a member of the Wild List organization; and as such will have already seen the list before the test. Anything less than 100% success should be seen as incompetence.

The bottom line is that magazine reviews have been replaced by independent testing that purports to be more rigorous and exacting, but ultimately means less. The true value of certification is not what it says about the product, but merely that the manufacturer has sufficient financial stability to afford the cost.