Archive

Posts Tagged ‘full disclosure’

Dropbox waits almost six months to fix a flaw that probably took less than a day

May 7, 2014 1 comment

Graham Cluley is a much respected security expert – but we don’t always agree. Full disclosure – the early public disclosure of a vulnerability whether or not the vendor has a fix available – is an example.

I believe that vendors should be notified when a flaw is discovered, and then given 7 days to fix it. After that, whether the fix has been made or not, the flaw should be made public.

Graham does not believe a flaw should ever be made public before the fix is ready. When I asked him, back in March this year, “What if the vendor does nothing or takes a ridiculously long time to fix it?”

Graham sticks to his basic principle. You still don’t go public. Instead, you could, for example, go to the press “and demonstrate the flaw to them (to apply pressure to the vendor) rather than make the intimate details of how to exploit a weakness public.”
Phoenix-like, Full Disclosure returns

This is exactly what happened with the newly disclosed and fixed Dropbox vulnerability. The flaw (not in the code, but in the way the system works) allowed third parties to view privately shared – and sometimes confidential and sensitive – documents. There were two separate but related problems. The first occurred if a user pasted a shared URL into a search box rather than the browser’s address bar: the owner of the search engine would receive the shared link as part of the referring URL.

The second problem occurred

if a document stored on Dropbox contains a clickable link to a third-party site, guess what happens if someone clicks on the link within Dropbox’s web-based preview of the document?

The Dropbox Share Link to that document will be included in the referring URL sent to the third-party site.
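The mechanics here are ordinary browser behaviour rather than anything exotic – which may be why Dropbox initially dismissed it. A minimal sketch of the leak (the share token, filename and helper function are invented for illustration):

```python
from urllib.parse import urlsplit

# When a user clicks a hyperlink inside Dropbox's web-based preview, the
# browser sends the preview page's address as the HTTP Referer header.
# A hypothetical share link (token and filename invented for illustration):
share_link = "https://www.dropbox.com/s/3xampl3t0k3n/board-minutes.pdf"

# What the third-party site receives when that click arrives:
request_headers = {"Referer": share_link}

def referer_leak(headers):
    """Return the leaked share link, if the Referer header exposes one."""
    ref = headers.get("Referer", "")
    if urlsplit(ref).netloc.endswith("dropbox.com") and "/s/" in ref:
        # The third-party site's operator now holds a working link
        # to the privately shared document.
        return ref
    return None

leaked = referer_leak(request_headers)
```

The point is that the third party need do nothing clever: the working link simply appears in its access logs.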

On 5 May 2014, Dropbox blogged:

We wanted to let you know about a web vulnerability that impacted shared links to files containing hyperlinks. We’ve taken steps to address this issue and you don’t need to take any further action.
Web vulnerability affecting shared links

On 6 May 2014 (actually the same day, once you take time differences into account), IntraLinks (who ‘found’ the flaw), the BBC and Graham Cluley all wrote about it.

But each of them wrote as if they had prior knowledge of the issue, and in greater depth than Dropbox revealed. So what exactly is the history of this disclosure?

From the IntraLinks blog we learn:

We notified Dropbox about this issue when we first uncovered files, back in November 2013, to give them time to respond and deal with the problem. They sent a short response saying, “we do not believe this is a vulnerability.”

So for almost six months Dropbox knew about this flaw but did nothing about it. Graham explained by email how it came to a head, and Dropbox was forced to respond:

Intralinks told Dropbox and Box back in November last year.

Intralinks told me a few weeks ago. My advice was to get a big media outlet interested. They went to the BBC.

The BBC spoke to me on Monday (the 5th) and contacted Dropbox. The BBC were due to publish their story that day, but Dropbox convinced them to wait until the following day (presumably they were responding).

Dropbox then published their blog in the hours before the BBC and I published our articles (Tuesday morning).

This seems to be the perfect vindication of Graham’s preferred disclosure route: use the media to force the vendor’s hand before public disclosure of a vulnerability.

But just to keep the argument going, it also vindicates my own position. Dropbox users were exposed to this vulnerability for more than four months longer than they need have been. There is simply no way of knowing whether criminals were already aware of and using the flaw, and we consequently have no way of knowing how many Dropbox users may have had sensitive information compromised during those four months. After all, the NSA knew about Heartbleed, and were most likely using it, for two years before it was disclosed and fixed.

Full Disclosure shuts down again

April 2, 2014 Leave a comment

Just how hard it can be to operate a controversial mailing list that frequently sails close to the legal wind was amply demonstrated yesterday when, for the second time in a month, the Full Disclosure mailing list was suddenly shut down. Fyodor, its new operator, announced yesterday:

Sorry I can’t do this anymore. List closed!
Hello everyone. I know I just started the new Full Disclosure list, but
it’s not working out :(. Everything may seem fine from the outside, but it
has been nonstop grief from here. I’m not just talking about the (normal
and expected) troll posts or all the petty complainers. We’ve already
gotten a DMCA takedown demand, two other legal threats, and a sustained DoS
attack which is disrupting all our other services too. And now our ISP is
threatening to shut us down!

It’s kind of embarrassing that John Cartwright lasted 12 years and I
couldn’t even handle a week, but there it is. I do appreciate the many of
you who were supportive.

List closed!
-Fyodor

Then he added, April Fool! “We already have 7,226 members and more than 100 posts in this first week. That includes numerous new vulns, from SQL injection to privacy issues and even a physical security problem. Please keep up the good work!”

Categories: All, Security Issues

Phoenix-like, Full Disclosure returns

March 30, 2014 Leave a comment

When the Full Disclosure mailing list suddenly closed down just over a week ago it took most people by surprise. The precise cause — although undoubtedly known to some — remains a mystery. It appears to have been just one problem too many for list moderator John Cartwright; made all the more unbearable because it came from within the research fraternity rather than from vendors.

Be that as it may, full disclosure has been and remains one of the longest-running contentious issues in security. If you discover a vulnerability, do you tell everyone (full disclosure); tell no-one (non-disclosure); or just tell the vendor (so-called ‘responsible’ disclosure)?

There are strong and strongly-held arguments for all options. Graham Cluley and I differ, for example; although perhaps more in degree than absolutes. “For my money, it’s always been more responsible to inform the vendor concerned that there is a security weakness in their product, and work with them to get it fixed rather than get the glory of an early public disclosure that could endanger internet users,” he told me when the mailing list shut down.

Graham’s view is that we should do nothing that might help criminals break into innocent users’ computers. So far we agree: always tell the vendors first, so that they can fix flaws before they become widely known. But what next? What if the vendor does nothing or takes a ridiculously long time to fix it?

Graham sticks to his basic principle. You still don’t go public. Instead, you could, for example, go to the press “and demonstrate the flaw to them (to apply pressure to the vendor) rather than make the intimate details of how to exploit a weakness public.”

There are ample examples to prove his point. When you combine full disclosure with the ‘full exploitation’ of Metasploit, all done before the vendor can fix it, then the bad guys have a ready-made crime-kit — and the general public has no defence.

The basic principle behind responsible disclosure is that if you don’t go public, the vulnerability is less likely to be exploited. But that’s my problem: ‘less likely’ is no defence. If the researcher has discovered the vulnerability, how many criminals have also already discovered the same vulnerability — and are already using it, or are ready to use it in earnest? To know about a vulnerability and not do everything possible to force the vendor to fix it is, in my opinion, irresponsible rather than responsible behaviour.

But, as Graham added, “it’s a religious debate, frankly, with strongly held opinions on both sides.”

So it will be with a mixed reception that we now learn that, like the Phoenix, the Full Disclosure mailing list is reborn, courtesy of Seclists’ Fyodor, who announced:

Upon hearing the bad news, I immediately wrote to John offering help. He said he was through with the list, but suggested: “you don’t need me. If you want to start a replacement, go for it.” After some soul searching about how much I personally miss the list (despite all its flaws), I’ve decided to do so! I’m already quite familiar with handling legal threats and removal demands (usually by ignoring them) since I run Seclists.org, which has long been the most popular archive for Full Disclosure and many other great security lists. I already maintain mail servers and Mailman software because I run various other large lists including Nmap Dev and Nmap Announce.
Full Disclosure Mailing List: A Fresh Start

I for one welcome its return. Full Disclosure is, to my mind, an essential part of the security landscape. You can sign up here.

Categories: All, Security Issues

Disclosure timeline for vulnerabilities under active attack

May 30, 2013 Leave a comment

This is the headline of a new Google blog: Disclosure timeline for vulnerabilities under active attack. It’s beautiful, and I like to think intentional. On the surface, it simply says that we, Google, are explaining our new timeline for the disclosure of vulnerabilities discovered by our engineers, if they are being actively exploited.

But underneath there is a subtle dig at Microsoft. Microsoft has always demanded a lengthy timeline; and would probably prefer indefinite non-disclosure. Google, however, has always championed a short timeline. It is oh so easy to read this headline as: Microsoft’s disclosure timeline for vulnerabilities is now under active attack by Google.

This new disclosure timeline for actively exploited vulnerabilities is seven days. You cannot fault the logic – with dissidents increasingly targeted by spyware, failure to disclose could potentially be life-threatening. Hell, I would say that it should be a 24-hour timeline. Be that as it may, Google has for now settled on seven days.

And it’s going to be contentious. But here’s the genius. If you’re gonna cause a ruckus, why not get in a sly dig, cloaked in the genius of ambiguous deniability, at the same time?

Categories: All, Security Issues

Do you believe in full Metasploit or responsible Metasploit?

February 22, 2013 Leave a comment

I did a blog posting for Lumension yesterday: Metasploit – Is it a Good Thing, or a Bad Thing?

I tried to give an idea of what the industry thinks, and it includes some interesting observations from luminaries such as HD Moore (the founder of Metasploit and CSO at Rapid7) and Rik Ferguson (VP of security research at Trend Micro).

One thing it doesn’t do is give my opinion. Assuming that we can relate Metasploit to ‘full disclosure’…

Question:
Do you believe in full disclosure or responsible disclosure?

Answer:
Unequivocally, categorically, yes.

It’s a neat marketing trick by some of the vendors: full disclosure is responsible disclosure. Delayed disclosure is irresponsible disclosure. I believe that full, immediate and responsible disclosure is the only way to improve security. Any other suggestion is a sleight of hand from the vendors.

Categories: All, Security Issues

Java vulnerability and ir/responsible disclosure

September 1, 2012 Leave a comment

Two forms of irresponsible disclosure are illustrated by the last week in the Java world. The first is to rush to full public disclosure as soon as a new vulnerability is discovered or a new exploit developed, without giving the vendor any time to fix it. The second is to refuse to disclose until after the vendor has produced a patch. Google’s approach – to give the vendor 30 days to fix the vulnerability before it is made public – is responsible disclosure. But I don’t want to defend Google; I want to nail the idea that it is somehow responsible to stay shtum until the fault is officially patched.

Last week a new Java 0-day exploit was made public and went ballistic. The problem is that Oracle knew about the problem from 2 April at the latest: it was a known 0-day vulnerability that Oracle then ignored. Oracle ignored it in its first round of quarterly patches, so the earliest it could fix it would be 16 October (or they could just ignore it again).

An exploit for this vulnerability went public last weekend and was rapidly added to and used by the Blackhole exploit kit – making the internet an even more dangerous place for Java users. But we know that an exploit was active in the wild before it became public knowledge, because both Kaspersky and Symantec have said so. What we don’t know is how extensively, or for how long, it had been in the wild.

So what we have is an actively exploited 0-day vulnerability that the vendor knew about but had no plans to patch for at least another six weeks – or put another way had already ignored for almost five months. That is unacceptable.

But then the vulnerability was publicly disclosed and shame was heaped upon Oracle. And in just a couple of days it was fixed. This would never have happened without full public disclosure.

So just as giving a vendor no time to fix a vulnerability is irresponsible, giving that vendor a blank cheque is even more so. Oracle and Java prove this – so next time a security researcher publicly discloses a 0-day exploit, don’t condemn the action: it may just save you a whole lot of grief.

Is Elcomsoft a force for Good or Evil? You decide

September 29, 2010 Leave a comment

Elcomsoft, a Russian cryptanalysis company, has a history of upsetting the West. Way back in 2001, Dmitry Sklyarov, an Elcomsoft programmer, was arrested in the USA after presenting at DEF CON. He had developed a product, The Advanced eBook Processor, that would decrypt encrypted Adobe e-books. He had not broken any US laws while in the USA, nor was his product illegal in Russia. But it certainly upset Adobe and other western publishers at the time.

Today we have a new Elcomsoft product: the Elcomsoft Wireless Security Auditor, complete with WPA2 brute force password cracking. And they’re still upsetting people. Idappcom’s CTO, Roger Haywood, has commented:

…the reality is that the software can brute force crack as many as 103,000 WiFi passwords per second – which equates to more than six million passwords a minute – on an HD5390 graphics card-equipped PC. Furthermore, if you extrapolate these figures to a multi-processor, multiple graphics card system, it can be seen that this significantly reduces the time taken to crack a company WiFi network to the point where a dedicated hacker could compromise a corporate wireless network.

Our observations at Idappcom is that this is another irresponsible and unethical release from a Russian-based company that has clearly produced a `thinly disguised’ wireless network hacking tool with the deliberate intention of brute force hacking wireless networks.

The solution is clearly and intentionally priced within the grasp of any hacker or individual intent on malicious wireless attacks. Assuming you have no password and access control recovery system, if you do forget the password to a wireless network that you own, how difficult do you think it is to walk over to the device and press the reset button? In most situations resetting a wireless device, restoring a configuration and setting a new password is a process that can be achieved in minutes.

This is an absolutely valid viewpoint. But I’d like to suggest an alternative view. Was Adobe’s encryption weak in 2001 because of Dmitry Sklyarov; or did/could Dmitry Sklyarov produce his software because Adobe’s encryption was weak? Adobe’s security is far stronger today. Is that partly because of Elcomsoft?

And now, does the Elcomsoft EWSA product create insecure networks, or merely demonstrate that those networks are already insecure? One thing we can be sure of; the security of those WiFi networks will now have to improve. Is that a bad thing?
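Idappcom’s own figure puts that question into perspective: at 103,000 guesses a second, it is passphrase strength, not the existence of the tool, that decides the outcome. A rough sketch (the rate is Idappcom’s; the character sets and lengths are my own illustrative assumptions):

```python
# Idappcom's quoted rate for one GPU-equipped PC:
RATE = 103_000  # password guesses per second

def years_to_exhaust(alphabet_size, length, rate=RATE):
    """Worst-case time, in years, to try every passphrase of a given length."""
    return (alphabet_size ** length) / rate / (3600 * 24 * 365)

# An 8-character passphrase of lowercase letters only falls in weeks:
weak = years_to_exhaust(26, 8)

# A 12-character passphrase of mixed-case letters and digits does not
# fall in any meaningful timeframe on a single machine:
strong = years_to_exhaust(62, 12)
```

Even allowing Idappcom’s extrapolation to a multi-GPU rig, the gap between those two numbers is the real lesson: the tool exposes weak passphrases, it does not break strong ones.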

There is a similarity here with the full disclosure debate. And I suspect that people will take similar sides. You may have guessed that, on balance, I believe that security is improved by full disclosure; and by companies like Elcomsoft. Those who believe that full disclosure is irresponsible disclosure will probably believe that Elcomsoft is irresponsible.

And never the twain shall meet.

Idappcom
Elcomsoft

Categories: All, Security Issues

Microsoft rushes out a patch for the LNK vulnerability: but what else does it tell us?

August 2, 2010 Leave a comment

I’m sorry to harp on about this, but, well frankly there are things I just don’t understand. I’m talking about Microsoft, vulnerabilities, Patch Tuesday and responsible disclosure. Microsoft usually delivers all of its security patches at the same time each month: Patch Tuesday, the second Tuesday of the month. There is, therefore, approximately a 30-day gap between each security update. This 30 days sits nicely with the standard definition of responsible disclosure that I have always understood: if you find a vulnerability, report it quietly to the vendor and give that vendor 30 days to fix it before you go public.

Ah, but what if you discover a Windows vulnerability immediately after Patch Tuesday? If you give Microsoft the 30 days, it might just miss the next Patch Tuesday and still be another 30 days until the following one. Surely what you do here then, to be responsible, is to give Microsoft 60 days, not 30 days, before you go public?
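The calendar arithmetic behind that reasoning can be sketched out. The Patch Tuesday dates below are the real 2010 ones; the reporting scenario is illustrative:

```python
import datetime as dt

def patch_tuesday(year, month):
    """Second Tuesday of the month -- Microsoft's regular patch day."""
    first = dt.date(year, month, 1)
    # weekday(): Monday is 0, so Tuesday is 1.
    first_tuesday = first + dt.timedelta((1 - first.weekday()) % 7)
    return first_tuesday + dt.timedelta(7)

# A flaw reported the day after July 2010's Patch Tuesday (13 July):
reported = patch_tuesday(2010, 7) + dt.timedelta(days=1)

# The next Patch Tuesday (10 August) is barely four weeks away -- likely
# too soon to build and test a fix -- so the realistic ship date is the
# one after that, in September:
gap = (patch_tuesday(2010, 9) - reported).days
```

The gap works out at 62 days: which is why a 60-day grace period, rather than 30, is the responsible offer in this scenario.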

But what do you do if you have reason to believe that the vulnerability you have discovered is already being, or about to be, exploited? Such knowledge would not be unusual. Security researchers can sit very close to the fence. Some were hackers; most still know hackers; pretty well all will liaise with hackers.

So, believing a vulnerability is about to be exploited, and having offered a 60-day period of grace before going public, what does the responsible security researcher do if he can get no commitment for a patch within 60 days? It seems to me perfectly responsible, for the security of the user, to force the vendor’s hand by going public immediately.

We are, of course, talking about Tavis Ormandy and his ‘infamous’ exposure of a vulnerability just five days after revealing it to Microsoft, but supposedly because Microsoft would not commit to fixing it within 60 days. For this he was castigated as ‘irresponsible’. See The case of Tavis Ormandy; and when does a blogger become a journalist? and GoogleWars! Is the Phoney War now over? for more information on this issue.

Why am I bringing this up again? Because today Microsoft is issuing a patch for the altogether different LNK vulnerability. This LNK vulnerability (see Some of the issues around the LNK zero-day vulnerability) was not disclosed to Microsoft before being used; it was discovered in use as an attack on SCADA systems. But Microsoft is not waiting for Patch Tuesday (8 more days away). So how can MS rush out a patch for this vulnerability in 20 days but not commit to fixing Ormandy’s vulnerability in 60 days?

It’s not as if relative complexity is an issue. As Lumension’s Paul Henry blogs about this LNK patch:

Some security experts question how effective an out-of-band patch will be. Microsoft has never implemented a security process around LNK files. This is not a matter of adjusting the security process in their use; it is a matter of attempting to insert a fix into a problem that does not have any security process currently in place – not a simple task.
Paul Henry

So what is left? Frankly, I’m pretty certain they’re playing politics with our security; and I’m pretty certain I don’t like it.

Categories: All, Security Issues

Forget full disclosure; forget responsible disclosure; sign up for Microsoft’s new Coordinated Disclosure

July 28, 2010 Leave a comment

This is clever. Microsoft has taken some stick over Tavis Ormandy and full disclosure (not as much as Tavis, but some): the whole issue has raised the possibility that companies like Microsoft might sit on vulnerabilities, sometimes for years, if the researcher doesn’t go fully public.

No company likes that sort of accusation floating around, so Microsoft has come up with a new disclosure policy. It’s not ‘responsible disclosure’ (a term and approach that is ridiculed by many serious security researchers), nor yet is it the fearful ‘full disclosure’ (a term and approach that is ridiculed by many serious security vendors). It’s a new one. It’s ‘coordinated disclosure’.

The idea is this:

Definition of coordinated vulnerability disclosure. Microsoft believes coordinated vulnerability disclosure is when newly discovered vulnerabilities in hardware, software and services are disclosed directly to the vendors of the affected product, to a CERT-CC or other coordinator who will report to the vendor privately, or to a private service that will likewise report to the vendor privately. The finder allows the vendor an opportunity to diagnose and offer fully tested updates, workarounds or other corrective measures before detailed vulnerability or exploit information is shared publicly. If attacks are underway in the wild, earlier public vulnerability details disclosure can occur with both the finder and vendor working together as closely as possible to provide consistent messaging and guidance to customers to protect themselves.

Frankly I can’t see much difference between this and responsible disclosure, except that a CERT-CC becomes involved. This is the clever bit. CERT-CCs are pretty well trusted by the general public (well, they’re often ‘government’, so they must be trustworthy, right?). It’s a bit like David Cameron inviting the LibDems into government: if it goes right, it’s the Conservatives what did it; but if it goes wrong, we can blame the LibDems. Ah, but it’s more than just sharing the blame. The CERTs I’ve come across all have a policy of not going public with a known vulnerability until the vendor produces a patch.

In other words, there is no actual difference between this new ‘coordinated disclosure’ and the old ‘responsible disclosure’ except that we are given the false impression that the whole process will be policed by a CERT. That’s clever. And, of course, it’s followed by pretty standard emotional blackmail:

Microsoft calls on the broader community — from security researchers to vendors — to move to coordinated vulnerability disclosure. The need for coordination and shared responsibility has never been greater, as the computing ecosystem faces an unprecedented level of threat from the criminal element. To overcome that element, we must work together to improve the security of the entire ecosystem — and, as always, making customer protection our highest priority.

I suspect this will make not the slightest difference in reality. Existing full disclosure proponents believe that full disclosure makes user protection the highest priority, just as responsible disclosure proponents believe in theirs. Coordinated disclosure is just meaningless new semantics: but it does make good PR.

Categories: All, Vendor News

GoogleWars! Is the Phoney War now over?

July 21, 2010 1 comment

Well, now. Things just get interestinger and interestinger.

Tavis Ormandy recently disclosed an MS zero-day bug on the Full Disclosure mailing list. This caused a bit of a stir. Much of the anti-malware industry was aghast. The anti-malware industry, in general, is not overkeen on what is called ‘full disclosure’. It prefers what it terms ‘responsible disclosure’, cleverly implying that anything that does not fall within the definition of ‘responsible disclosure’ is ‘irresponsible disclosure’.

Tavis was criticised on two counts: firstly that he was irresponsible, and secondly that he was Google trying to score points against Microsoft. Let’s look at these.

Irresponsible
From Kurt Wismer

i’m a little too late to the party to bother with vilifying him, but the arguments used to support him could stand and be reused in the future and those need to be addressed…
full disclosure as disarmament

and from Graham Cluley:

In my opinion, Ormandy irresponsibly disclosed the vulnerability before Microsoft had a chance to fix the problem, making it easy for cybercriminals to exploit the flaw and infect innocent users.

The good news is that Microsoft has now issued a fix for the problem. But I bet they (and countless other internet users and industry observers) wish that the first they had heard of this problem was when the patch was rolled out, rather than when Ormandy acted petulantly.
Patch Tavis Day

I am yet to be convinced that ‘full’ equates to ‘irresponsible’, or that Ormandy is petulant. He claimed that he chose the full disclosure route because Microsoft declined to commit to a patch within 60 days. ‘Responsible’ disclosure is often taken to mean giving the vendor 30 days to fix the problem before going public. Ormandy ‘offered’ 60 days; but because MS couldn’t/wouldn’t commit to the patch within that time, Ormandy disclosed within five days rather than waiting the 30 days.

As it happens, MS rolled out the patch in approximately 40 days – and probably had it ready in less than that but waited until its next ‘Patch Tuesday’. (Notice that it has not waited until the next Patch Tuesday to respond to the LNK 0-day flaw of its own making; although we must wait to see when the actual patch is rolled out.) So if we ask, ‘why did Tavis Ormandy not wait 30 days?’ we should equally ask ‘why did Microsoft not accept the 60 days offered?’ This leads us inexorably to the second issue: Tavis Ormandy’s employment orientation.

GoogleWars
Many of the early commentators who thought Ormandy was irresponsible dwelt somewhat on his employer, Google, suggesting that here was an attempt by Google to embarrass Microsoft. My own initial thoughts were that there was no evidence to support this. Now I am beginning to wonder. On 2 June I posted: Google dumps Windows: the first shot in the coming war

There is a battle looming. While many pundits see a contest between Google and Facebook for control of the internet, and Google and Apple for control of the airwaves, I suspect Google is aiming higher. Google is getting ready to take on Windows, head-on. Chrome and the Cloud beats Windows in almost every way: cost, agility, security, you name it.

So Google dumping Windows has little to do with security. It says to the big corporates, hey guys, we can live without Windows. You can too.

Now Ormandy has added his name – along with Chris Evans, Eric Grosse, Neel Mehta, Matt Moore, Julien Tinnes, Michal Zalewski and the Google Security Team – to a new post on the Google Online Security Blog: Rebooting Responsible Disclosure: a focus on protecting end users.

The article nowhere mentions Ormandy by name (other than in the credits), but comments:

So, is the current take on responsible disclosure working to best protect end users in 2010? Not in all cases, no. The emotionally loaded name suggests that it is the most responsible way to conduct vulnerability research – but if we define being responsible as doing whatever it best takes to make end users safer, we will find a disconnect. We’ve seen an increase in vendors invoking the principles of “responsible” disclosure to delay fixing vulnerabilities indefinitely, sometimes for years; in that timeframe, these flaws are often rediscovered and used by rogue parties using the same tools and methodologies used by ethical researchers. The important implication of referring to this process as “responsible” is that researchers who do not comply are seen as behaving improperly. However, the inverse situation is often true: it can be irresponsible to permit a flaw to remain live for such an extended period of time.

It goes on to “suggest that 60 days is a reasonable upper bound for a genuinely critical issue in widely deployed software.” The article is clearly a statement of Google’s disclosure policy while at the same time defending Ormandy’s disclosure. They are one and the same thing.

In the light of all this I have to revisit my initial thoughts. Was Ormandy irresponsible? Absolutely not: he offered 60 days to Microsoft. Was it a set-up to embarrass Microsoft? I don’t think it was a set-up. Did Ormandy/Google hope to embarrass Microsoft? Absolutely. Did Microsoft hope to embarrass Google? Absolutely. The war has started.

Categories: All, Security Issues