Archive

Posts Tagged ‘privacy’

Don’t let them pull the Aircloak over our eyes

May 4, 2014

Big business wants our personal information so it can make even more money. Privacy laws say it cannot have our personal data, but do not define anonymized data as ‘personal’. Business-friendly privacy regulators, such as the UK’s Information Commissioner, specifically declare that anonymized personal data is not regulated by the data protection laws — and is therefore fair game and up for grabs by all and sundry. Academics say, hang on, it is impossible in the modern age to truly anonymize personal data — a determined adversary can always de-anonymize it. The whole issue becomes even more befuddled with the introduction of pseudonymized data, which is even more easily unwrapped — and the Information Commissioner would dearly like to, and probably will, declare pseudonymized personal data unprotected by law and equally up for grabs. Meanwhile, big business doesn’t want any anonymization at all, and is lobbying government ferociously to get what it wants.

It’s a mess; and there is no imminent solution.

Aircloak
Enter Aircloak. Aircloak is a company and product spun out of Max Planck Institute (computer science) research. It tries to turn the argument on its head. If we cannot, or at least cannot agree that we can, truly anonymize personal data, then let’s keep the data raw — but anonymize the statistical analysis from that data. Big business can therefore analyse the full data set, but will receive results that cannot identify any individuals. If this can be done, it will almost certainly be acceptable to the world’s privacy legislators.

Aircloak accepts data into the system, stores it in encrypted format, allows only authorized users to query it, and returns results that (in theory) cannot identify individuals. To provide trust in the Aircloak software and ‘guarantee’ that it is free of backdoors, the company provides the software as open source. It can thus be analysed and verified by anyone. Aircloak then uses the Trusted Computing Platform’s attestation option to ensure that the software being run is exactly the same as that which was verified.
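The principle of anonymizing the analysis rather than the data can be illustrated with a toy sketch. To be clear, this is my own illustration of the general idea, not Aircloak’s actual algorithm: a query interface that answers only aggregate questions, adds random noise to each count, and refuses to answer at all when the group queried is too small.

```python
import random

# Toy illustration of query-level anonymization (NOT Aircloak's real algorithm):
# the raw records stay inside the system; callers only ever see noisy aggregates.

RAW_RECORDS = [
    {"city": "Berlin", "age": 34},
    {"city": "Berlin", "age": 41},
    {"city": "Berlin", "age": 29},
    {"city": "Berlin", "age": 52},
    {"city": "Berlin", "age": 38},
    {"city": "Munich", "age": 61},
]

MIN_COUNT = 5  # suppress answers that cover too few people

def cloaked_count(predicate, seed=None):
    """Return a noisy count of matching records, or None if the
    true count is too small to report safely."""
    true_count = sum(1 for r in RAW_RECORDS if predicate(r))
    if true_count < MIN_COUNT:
        return None  # refusing to answer protects small groups
    rng = random.Random(seed)
    noise = rng.randint(-2, 2)  # crude noise; real systems use calibrated noise
    return max(0, true_count + noise)

# An aggregate query over a large-enough group is answered (noisily).
print(cloaked_count(lambda r: r["city"] == "Berlin", seed=1))

# A query that would single out one person is refused.
print(cloaked_count(lambda r: r["city"] == "Munich", seed=1))
```

The two defences (noise and suppression) work together: suppression stops direct singling-out, and noise stops an attacker from subtracting two overlapping queries to isolate one individual exactly.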

This sounds as if it should be acceptable — but it is not. Criminals, whether they be government, business or organized, will always find ways round or over if not through any defences. Anything that is made by software can be unmade by software. Open source does not guarantee secure — just think of Heartbleed. And government agencies are brilliant at finding ways round defences — consider, for example, the hardware on which the system will run.

The prize is huge — complete, raw, big data personal information — and any intelligence agency would be in dereliction of its duty if it did not try to get it.

The solution
The real solution to this problem is simple: ban all and any collection of personal data unless the user specifically and unequivocally allows it in the informed knowledge that anyone and everyone will subsequently have access to it for any purpose.

Big business will throw its hands up in horror at such a suggestion, claiming that it is either necessary or massively beneficial to society that they have unfettered access to everything. So let’s consider three business cases: personal data used for law enforcement, targeted advertising, and medical research.

Law enforcement
Law and intelligence agencies claim that access to communications big data (including content) enables them to locate criminals and terrorists and prevent crime and acts of terror. But they already have that access. All they need is to go to a judge with a reasonable argument that a specific target is a threat to society and they will be granted access to that target’s communications. I don’t think anybody objects to this — judicial oversight (but not by some secret and unaccountable court like FISC) is all we need.

Targeted advertising
The marketing companies claim that collecting personal data allows companies to deliver targeted (that is, relevant) adverts to internet browsers. They claim this is a benefit to the user. But they go further and suggest that without the ability to target, advertisers will desert the internet and the free internet will collapse. The argument is that advertisers need to be able to target.

This is codswallop. Advertisers could do, and did, basic targeting even before the internet — but it was based on location and subject matter. Sports goods were advertised on the sports pages, and shoes were advertised on the fashion pages. Not being able to display a shoe advert only to those readers known to be looking for shoes did not stop the advertisers from advertising. So why should it now? Shoes can be advertised on fashion sites, and sports goods on sports sites. It’s the same thing.

Medical research
The UK’s care.data is the perfect example (see Care.data, pseudonymised data and the ICO). The NHS wants to collect every English patient’s full and intimately personal health data into a single database for sale to researchers for medical research. Nobody doubts that one of the main customers for this data will be Big Pharma, the drug companies. The NHS argument is that this will result in better and faster development of new drugs and new drug campaigns for the benefit of all.

Here’s a passage from Waterstone’s blurb on Ben Goldacre’s book, Bad Pharma. Goldacre is an author, broadcaster, campaigner, medical doctor and academic; and is highly respected.

…companies run bad trials on their own drugs, which distort and exaggerate the benefits by design. When these trials produce unflattering results, the data is simply buried. All of this is perfectly legal. In fact, even government regulators withhold vitally important data from the people who need it most. Doctors and patient groups have stood by too, and failed to protect us. Instead, they take money and favours, in a world so fractured that medics and nurses are now educated by the drugs industry. The result: patients are harmed in huge numbers.
Bad Pharma

Consider this. Big pharma is not interested in our health — it is solely interested in its own profits. When independent research began to show that Chinese herbs are more efficacious than western drugs, it campaigned for and succeeded in getting them banned. Do we really want to hand our personal health information to such people? I think not.

The reality
The reality is that none of these organizations want our data for our benefit — they want it to increase their profits at any cost to society. They make up arguments that do not stand scrutiny — they argue that because something is technologically possible, it is necessary.

Big business controls government — consider the way in which Monsanto has infiltrated the relevant US agencies and now seems to control the UK government as well (see: Defra battles to keep public in the dark over GM industry influence on policy and media — GeneWatch); and how the US telecoms companies have got inside the FCC and reversed its position on net neutrality.

The only solution to the personal privacy problem is to forbid the collection of any personal data. Where it is part of the service provided — such as that provided by GPs — the data must remain with the GP, and the GP must be held responsible for keeping it safe.

Aircloak is a dangerous development because it offers the suggestion that things can be different. They cannot.

Categories: All, Politics, Security Issues

Google amends its Terms of Service

April 16, 2014

With most privacy laws you can pretty much do what you want provided you are up front about it. The key is the ‘informed consent’ of the user.

Google has been getting grief from legislators who claim that the complexity of its privacy policies make it impossible for users to be informed, and difficult for them to opt out if they do not consent.

One continuing argument is over Google’s scanning of email content in order to provide targeted advertising to Gmail users. The nub of the argument is that claimants say they have not given consent to this scanning while Google’s response is that consent is implied by use.

Now Google has made its practices explicit with a Monday addition to its terms of service. It has added a new paragraph:

Our automated systems analyze your content (including emails) to provide you personally relevant product features, such as customized search results, tailored advertising, and spam and malware detection. This analysis occurs as the content is sent, received, and when it is stored.
Google Terms of Service

I think Google was correct in reality if not legality when it claimed that consent was implicit in use — most if not all users are perfectly aware that email content is scanned electronically. The new paragraph makes this explicit: informed consent is now given by use of Google services.

What I still find interesting is that this consent is said to apply to received emails. If a non-Google user sends me a message, how is that user giving consent for the message to be scanned by Google? Is it realistic for non-Gmail users to read Google’s terms of service before emailing a Google user?

I don’t believe it is. So who owns the content: the sender or receiver? Copyright would suggest it is the sender — in which case this amendment to the terms of service will go some way, but not all the way, towards solving Google’s privacy issues.

Categories: All, Security Issues

United States Trade Representative threatens the EU

April 7, 2014

The United States is accustomed to getting its way internationally through trade threats. One method is the Special 301 Report Watch List, an annual list of countries which the US believes are failing in their duties towards copyright protection (specifically, US copyright protection). Once included in the Priority Watch List, a foreign country is liable for legal and/or trade sanctions. The Special 301 Report is compiled by the Office of the United States Trade Representative (USTR), and is seen as a method of bullying recalcitrant nations into conformity with US preferences.

This is not the only annual report from the USTR. It also produces the Section 1377 Review, which examines international compliance with telecommunications trade agreements. This too, perhaps because it has become entrenched in the USTR way of doing business, can take a bullying tone. The latest report was released on Friday — but I would suggest the USTR think again if it believes it can bully the European Union at this stage of EU/US relations.

Background
Following the Snowden revelations on NSA/GCHQ spying, the then head of Deutsche Telekom, René Obermann, proposed in November 2013 that Europe should establish a Schengen-routing and Schengen-cloud. The idea was that any communication from one point in Europe to another point in Europe should never leave Europe, and that personal European data should remain within Europe. The latter would effectively revoke the existing safe harbour agreement with the US.

‘Schengen’ was chosen specifically as a mechanism for excluding the UK. The Schengen Area comprises 26 European countries that have abolished border control for Europeans between common borders – the UK has always remained outside of this agreement. As Die Welt described in March, the ‘Schengen-routing’ is intended to be “a defensive measure against the encroachments of the Anglo-Saxon intelligence on European internet users.”
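The routing constraint itself is simple to state: every hop a packet takes between two intra-European endpoints must sit in a Schengen country. A minimal sketch of such a policy check (the per-hop country-code input format is my own illustration) might look like this:

```python
# Toy check of a "Schengen-routing" policy (illustrative only): a path between
# two intra-European endpoints is compliant only if every hop stays inside
# the Schengen Area. Country codes per hop are an assumed input format.

SCHENGEN = {
    "AT", "BE", "CH", "CZ", "DE", "DK", "EE", "ES", "FI", "FR",
    "GR", "HU", "IS", "IT", "LI", "LT", "LU", "LV", "MT", "NL",
    "NO", "PL", "PT", "SE", "SI", "SK",
}  # the 26 Schengen countries; note the UK is deliberately absent

def is_schengen_compliant(hop_countries):
    """True if every hop on the path is in a Schengen country."""
    return all(c in SCHENGEN for c in hop_countries)

# Berlin -> Paris routed entirely within Schengen: compliant.
print(is_schengen_compliant(["DE", "NL", "BE", "FR"]))   # True

# The same traffic detoured through London (GB) or the US: not compliant.
print(is_schengen_compliant(["DE", "GB", "US", "FR"]))   # False
```

The check makes the UK’s exclusion concrete: any route that touches a GB hop fails the policy, which is exactly the point of choosing ‘Schengen’ rather than ‘EU’ as the boundary.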

Germany’s Angela Merkel and France’s François Hollande (that is, the central axis of the European Union) have declared support for the idea.

USTR’s Section 1377 Review
At the end of last week the USTR released its 2014 Section 1377 Review. On cross-border data flows it has two concerns: Turkey and the EU.

In Turkey, in the run-up to the recent local elections (‘won’ by Prime Minister Erdogan’s AKP party) and ahead of the presidential elections in August, the government has been tightening its grip on and control over the internet. USTR is concerned over restrictions on data flows and will seek “to ensure that data flows supporting legitimate trade can expand unimpeded.”

In Europe, the report notes that

DTAG [Deutsche Telekom AG] has called for statutory requirements that all data generated within the EU not be unnecessarily routed outside of the EU; and has called for revocation of the U.S.-EU “Safe Harbor” Framework, which has provided a practical mechanism for both U.S companies and their business partners in Europe to export data to the United States, while adhering to EU privacy requirements.

Well, obviously, this is a false statement. The safe harbour agreement requires that US companies holding European data do not pass that data to any third party — but clearly they do pass it to the NSA and law enforcement. The report continues,

The United States and the EU share common interests in protecting their citizens’ privacy, but the draconian approach proposed by DTAG and others appears to be a means of providing protectionist advantage to EU-based ICT suppliers. Given the breath [sic] of legitimate services that rely on geographically-dispersed data processing and storage, a requirement to route all traffic involving EU consumers within Europe, would decrease efficiency and stifle innovation. For example, a supplier may transmit, store, and process its data outside the EU more efficiently, depending on the location of its data centers. An innovative supplier from outside of Europe may refrain from offering its services in the EU because it may find EU-based storage and processing requirements infeasible for nascent services launched from outside of Europe.

This is riddled with emotive language and inaccuracies. Draconian? Protectionist advantage? (Now I freely accept that DTAG will be looking for commercial opportunities, and that it is not a company I personally wish to use. From personal experience, I will never have dealings with T-Mobile again. But it is interesting that it seems to be willing to trade the US market for the European market.)

And the inaccuracies… First, Europeans would suggest that it is the US that has shown scant regard for anyone’s privacy, and the US that delivers protectionist advantage (sometimes via economic espionage) to its own companies. Second, the report completely misrepresents the proposals. European point-to-point communications should stay within Europe (that’s the ‘routing’); while personal data should not leave Europe (that’s the ‘cloud’). But the USTR is lumping the two together into some form of balkanised European intranet completely cut off from the rest of the internet. In reality, the proposals should have little effect on legitimate trade between the EU and US.

It is not, for example, nearly as draconian as the US exclusion of Huawei from the US markets without any proof of actual threat (other than economic).

Then comes the USTR threat:

Furthermore, any mandatory intra-EU routing may raise questions with respect to compliance with the EU’s trade obligations with respect to Internet-enabled services. Accordingly, USTR will be carefully monitoring the development of any such proposals.

In reality we should not take this too seriously. It’s a form of lobbying – perhaps the first of much more to come – and we already know that USTR is not averse to lobbying on behalf of US industry. But it does show that the US is beginning to take the Schengen threat seriously. The UK should too. In the meantime, it should be said that US industry is not without its European allies. Neelie Kroes, the European Commissioner in charge of the European Digital Agenda, has said: “It’s not realistic that we can keep data in the EU, and the trial could jeopardize the open Internet.” Neelie Kroes is the commissioner who recently tried to redefine ‘net neutrality’ to suit big telecoms companies, only to have her definition rejected by the European Parliament.

Categories: All, Politics, Security Issues

Microsoft’s new secret weapon: listening to its customers

March 29, 2014

I am not Microsoft’s greatest fan. It is a dinosaur stuck on the beach while the fleeter of foot are soaring through the clouds. The reality is that it has never had, and still lacks, any visionaries. Even its domination of the desktop was more down to luck and sharp practices than genuine vision.

It was lucky that Gary Kildall rejected IBM’s overtures, else there would never have been an MS-DOS; and it was sharp practices that killed off Digital Research — its one serious and technically superior competitor. It was lucky that Apple demonstrated the value of Xerox Parc research and paved the way for Windows. It was lucky Jobs was so far ahead of his time he thought he could have a walled garden in the ’80s; and almost destroyed Apple in the process.

But it was sheer arrogant blindness that made Gates think he could ignore the internet. For the last two decades Microsoft has been forced into playing catch up; but catch up never works if you don’t have the vision to get ahead of the competition.

Now, in just one area, Microsoft is showing visionary signs that could differentiate it from all of its competitors. Microsoft has started listening to its customers rather than imposing its will on its customers.

While Facebook is telling everyone that they don’t want privacy, Microsoft is listening and saying, OK, we will give you privacy. While Google is fighting the European Union over privacy and cloud storage, Microsoft is listening to the EU and saying, OK, we can accommodate and store European data in European data centres.

Now, it’s not as simple as that. The US government can still demand customer data from Microsoft’s European data centres simply because Microsoft is a US company. But it’s making that data much more defensible, and telling the EU that it is willing to cooperate rather than fight.

It is similar over privacy. When it became clear last week that Microsoft had, quite legally, searched the emails of one of its customers concerning the theft of Microsoft IP, it knew there would be privacy issues. It immediately said two things: first, that it would in future get a pseudo warrant from an independent lawyer who had previously been a judge; and second, that it would include its own searches in future ‘transparency reports’ (the ones that publish the number of law enforcement searches).

It wasn’t enough for the privacy advocates who pointed to the hypocrisy of criticising NSA warrantless surveillance and then doing its own.

To Microsoft’s great credit, within a week, it has listened, heard and understood. Brad Smith announced yesterday,

Effective immediately, if we receive information indicating that someone is using our services to traffic in stolen intellectual or physical property from Microsoft, we will not inspect a customer’s private content ourselves. Instead, we will refer the matter to law enforcement if further action is required.
We’re listening: Additional steps to protect your privacy

Is this a new Microsoft — the genuinely ‘listening’ company? It no longer dominates the world’s operating systems, and is losing ground on desktop office software. But it seems to be doing one thing that none of its competitors are doing. It is listening to its customers, and giving them what they want. That alone, over the next few years, could catapult Microsoft back into a leading position.

Categories: All, Security Issues