Archive

Posts Tagged ‘full disclosure’

Dropbox waits almost six months to fix a flaw that probably took less than a day

May 7, 2014

Graham Cluley is a much respected security expert – but we don’t always agree. Full disclosure – the early public disclosure of a vulnerability whether or not the vendor has a fix available – is an example.

I believe that vendors should be notified when a flaw is discovered, and then given 7 days to fix it. After that, whether the fix has been made or not, the flaw should be made public.

Graham does not believe a flaw should ever be made public before the fix is ready. Back in March this year I asked him: what if the vendor does nothing, or takes a ridiculously long time to fix it?

Graham sticks to his basic principle. You still don’t go public. Instead, you could, for example, go to the press “and demonstrate the flaw to them (to apply pressure to the vendor) rather than make the intimate details of how to exploit a weakness public.”
Phoenix-like, Full Disclosure returns

This is exactly what happened with the newly disclosed and fixed Dropbox vulnerability. This flaw (not in the coding, but in the way the system works) allowed third parties to view privately shared, and sometimes confidential and sensitive documents. There were two separate but related problems. The first would occur if a user put a shared URL into a search box rather than the browser URL box. In this instance, the owner of the search engine would receive the shared link as part of the referring URL.

The second problem occurred

if a document stored on Dropbox contains a clickable link to a third-party site, guess what happens if someone clicks on the link within Dropbox’s web-based preview of the document?

The Dropbox Share Link to that document will be included in the referring URL sent to the third-party site.
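Both problems come down to the same, very ordinary mechanism: the site the user ends up at receives the page they came from in the HTTP Referer header (or the query string), and in this case that page was the Dropbox share link itself. As a minimal sketch of the receiving end (the server, port and logging here are my own illustrative assumptions, nothing from Dropbox or IntraLinks), a third-party site needs little more than a request log to harvest leaked links:

from http.server import BaseHTTPRequestHandler, HTTPServer

class RefererLogger(BaseHTTPRequestHandler):
    # Log the Referer header of every incoming request. If the request came
    # from a click inside a web-based document preview, the document's share
    # link (a hypothetical https://www.dropbox.com/s/<token>/file.pdf) is
    # what would appear here.
    def do_GET(self):
        referer = self.headers.get("Referer", "(none)")
        print(f"GET {self.path} referred by: {referer}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # Any third-party site linked to from a shared document could log this.
    HTTPServer(("", 8080), RefererLogger).serve_forever()

The point is how little effort is involved: no exploit code, no breach of Dropbox itself, just a standard header arriving at a site that was never meant to see the link.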

On 5 May 2014, Dropbox blogged:

We wanted to let you know about a web vulnerability that impacted shared links to files containing hyperlinks. We’ve taken steps to address this issue and you don’t need to take any further action.
Web vulnerability affecting shared links

On 6 May 2014 (actually the same day, once you allow for time differences), IntraLinks (who ‘found’ the flaw), the BBC and Graham Cluley all wrote about it.

But each of them writes as if they had prior knowledge of the issue, and in greater depth than Dropbox itself revealed. So what exactly is the history of this disclosure?

From the IntraLinks blog we learn:

We notified Dropbox about this issue when we first uncovered files, back in November 2013, to give them time to respond and deal with the problem. They sent a short response saying, “we do not believe this is a vulnerability.”

So for almost six months Dropbox knew about this flaw but did nothing about it. Graham explained by email how it came to a head, and Dropbox was forced to respond:

Intralinks told Dropbox and Box back in November last year.

Intralinks told me a few weeks ago. My advice was to get a big media outlet interested. They went to the BBC.

The BBC spoke to me on Monday (the 5th) and contacted Dropbox. The BBC were due to publish their story that day, but Dropbox convinced them to wait until the following day (presumably they were responding).

Dropbox then published their blog in the hours before the BBC and I published our articles (Tuesday morning).

This seems to be the perfect vindication of Graham’s preferred disclosure route: use the media to force the vendor’s hand before public disclosure of a vulnerability.

But just to keep the argument going, it also vindicates my own position. Dropbox users were exposed to this vulnerability for more than four months longer than they need have been. There is simply no way of knowing whether criminals were already aware of and using the flaw, and we consequently have no way of knowing how many Dropbox users may have had sensitive information compromised during those four months. After all, the NSA knew about Heartbleed, and were most likely using it, for two years before it was disclosed and fixed.


Full Disclosure shuts down again

April 2, 2014

Just how hard it can be to operate a controversial mailing list that frequently sails close to the legal wind was amply demonstrated yesterday when, for the second time in a month, the Full Disclosure mailing list was suddenly shut down. Fyodor, its new operator, announced:

Sorry I can’t do this anymore. List closed!

Hello everyone. I know I just started the new Full Disclosure list, but it’s not working out :(. Everything may seem fine from the outside, but it has been nonstop grief from here. I’m not just talking about the (normal and expected) troll posts or all the petty complainers. We’ve already gotten a DMCA takedown demand, two other legal threats, and a sustained DoS attack which is disrupting all our other services too. And now our ISP is threatening to shut us down!

It’s kind of embarrassing that John Cartwright lasted 12 years and I couldn’t even handle a week, but there it is. I do appreciate the many of you who were supportive.

List closed!
-Fyodor

Then he added, April Fool! “We already have 7,226 members and more than 100 posts in this first week. That includes numerous new vulns, from SQL injection to privacy issues and even a physical security problem. Please keep up the good work!”


Phoenix-like, Full Disclosure returns

March 30, 2014

When the Full Disclosure mailing list suddenly closed down just over a week ago it took most people by surprise. The precise cause — although undoubtedly known to some — remains a mystery. It appears to have been just one problem too many for list moderator John Cartwright, made all the more unbearable because it came from within the research fraternity rather than from vendors.

Be that as it may, full disclosure has been and remains one of the longest-running contentious issues in security. If you discover a vulnerability, do you tell everyone (full disclosure), tell no-one (non-disclosure), or just tell the vendor (so-called ‘responsible’ disclosure)?

There are strong and strongly-held arguments for all options. Graham Cluley and I differ, for example; although perhaps more in degree than absolutes. “For my money, it’s always been more responsible to inform the vendor concerned that there is a security weakness in their product, and work with them to get it fixed rather than get the glory of an early public disclosure that could endanger internet users,” he told me when the mailing list shut down.

Graham’s view is that we should do nothing that might help criminals break into innocent users’ computers. So far we agree: always tell the vendors first, so that they can fix flaws before they become widely known. But what next? What if the vendor does nothing or takes a ridiculously long time to fix it?

Graham sticks to his basic principle. You still don’t go public. Instead, you could, for example, go to the press “and demonstrate the flaw to them (to apply pressure to the vendor) rather than make the intimate details of how to exploit a weakness public.”

There are ample examples to prove his point. When you combine full disclosure with the ‘full exploitation’ of Metasploit, all done before the vendor can fix it, then the bad guys have a ready-made crime-kit — and the general public has no defence.

The basic principle behind responsible disclosure is that if you don’t go public, the vulnerability is less likely to be exploited. But that’s my problem: ‘less likely’ is no defence. If the researcher has discovered the vulnerability, how many criminals have also already discovered the same vulnerability — and are already using it, or are ready to use it in earnest? To know about a vulnerability and not do everything possible to force the vendor to fix it is, in my opinion, irresponsible rather than responsible behaviour.

But, as Graham added, “it’s a religious debate, frankly, with strongly held opinions on both sides.”

So it will be with a mixed reception that we now learn that, like the Phoenix, the Full Disclosure mailing list is reborn, courtesy of Seclists’ Fyodor, who explains:

Upon hearing the bad news, I immediately wrote to John offering help. He said he was through with the list, but suggested: “you don’t need me. If you want to start a replacement, go for it.” After some soul searching about how much I personally miss the list (despite all its flaws), I’ve decided to do so! I’m already quite familiar with handling legal threats and removal demands (usually by ignoring them) since I run Seclists.org, which has long been the most popular archive for Full Disclosure and many other great security lists. I already maintain mail servers and Mailman software because I run various other large lists including Nmap Dev and Nmap Announce.
Full Disclosure Mailing List: A Fresh Start

I for one welcome its return. Full Disclosure is, to my mind, an essential part of the security landscape. You can sign up here.


Disclosure timeline for vulnerabilities under active attack

May 30, 2013

This is the headline of a new Google blog: Disclosure timeline for vulnerabilities under active attack. It’s beautiful, and I like to think intentional. On the surface, it simply says that we, Google, are explaining our new timeline for the disclosure of vulnerabilities discovered by our engineers, if they are being actively exploited.

But underneath there is a subtle dig at Microsoft. Microsoft has always demanded a lengthy timeline, and would probably prefer indefinite non-disclosure. Google, however, has always championed a short timeline. It is oh so easy to read this headline as: Microsoft’s disclosure timeline for vulnerabilities is now under active attack by Google.

This new disclosure timeline for actively exploited vulnerabilities is seven days. You cannot fault the logic – with dissidents increasingly targeted by spyware, failure to disclose could potentially be life-threatening. Hell, I would say that it should be a 24-hour timeline. Be that as it may, Google has for now settled on seven days.

And it’s going to be contentious. But here’s the beauty of it: if you’re gonna cause a ruckus, why not get in a sly dig, cloaked in ambiguous deniability, at the same time?


Do you believe in full Metasploit or responsible Metasploit?

February 22, 2013

I did a blog posting for Lumension yesterday: Metasploit – Is it a Good Thing, or a Bad Thing?

I tried to give an idea of what the industry thinks, and it includes some interesting observations from luminaries such as HD Moore (the founder of Metasploit and CSO at Rapid7) and Rik Ferguson (VP of security research at Trend Micro).

One thing it doesn’t do is give my opinion. Assuming that we can relate Metasploit to ‘full disclosure’…

Question:
Do you believe in full disclosure or responsible disclosure?

Answer:
Unequivocally, categorically, yes.

‘Responsible disclosure’ is a neat marketing trick by some of the vendors. In reality, full disclosure is responsible disclosure; delayed disclosure is irresponsible disclosure. I believe that full, immediate and responsible disclosure is the only way to improve security. Any other suggestion is a sleight of hand from the vendors.


Java vulnerability and ir/responsible disclosure

September 1, 2012

There are two forms of irresponsible disclosure, and both are illustrated by the last week in the Java world. The first is to rush to full public disclosure as soon as a new vulnerability is discovered or a new exploit developed, without giving the vendor any time to fix it. The second is to refuse to disclose until after the vendor has produced a patch. Google’s approach of giving the vendor 30 days to fix the vulnerability before it is made public is responsible disclosure. But I don’t want to defend Google; I want to nail the idea that it is somehow responsible to stay shtum until the fault is officially patched.

Last week a new Java 0-day exploit was made public and went ballistic. The trouble is that Oracle had known about the vulnerability since 2 April at the latest: it was a known 0-day that Oracle then ignored in its next round of quarterly patches, so the earliest it could now be fixed was 16 October (or Oracle could just ignore it again).

An exploit for this vulnerability went public last weekend and was rapidly added to and used by the Blackhole exploit kit – making the internet an even more dangerous place for Java users. But we know that an exploit was active in the wild before it became public knowledge, because both Kaspersky and Symantec have said so. What we don’t know is how extensively, or for how long, it had been exploited.

So what we have is an actively exploited 0-day vulnerability that the vendor knew about but had no plans to patch for at least another six weeks – or, put another way, had already ignored for almost five months. That is unacceptable.

But then the vulnerability was publicly disclosed and shame was heaped upon Oracle. And in just a couple of days it was fixed. This would never have happened without full public disclosure.

Just as giving a vendor no time to fix a vulnerability is irresponsible, it is even more irresponsible to give that vendor an open-ended deadline. Oracle and Java prove this – so next time a security researcher publicly discloses a 0-day exploit, don’t condemn the action – it may just save you a whole lot of grief.

Is Elcomsoft a force for Good or Evil? You decide

September 29, 2010

Elcomsoft, a Russian cryptanalysis company, has a history of upsetting the West. Way back in 2001, Dmitry Sklyarov, an Elcomsoft programmer, was arrested in the USA after presenting at DEF CON. He had developed a product, The Advanced eBook Processor, that would decrypt encrypted Adobe e-books. He had not broken any US laws while in the USA, nor was his product illegal in Russia. But it certainly upset Adobe and other western publishers at the time.

Today we have a new Elcomsoft product: the Elcomsoft Wireless Security Auditor, complete with WPA2 brute force password cracking. And they’re still upsetting people. Idappcom’s CTO, Roger Haywood, has commented:

…the reality is that the software can brute force crack as many as 103,000 WiFi passwords per second – which equates to more than six million passwords a minute – on an HD5390 graphics card-equipped PC. Furthermore, if you extrapolate these figures to a multi-processor, multiple graphics card system, it can be seen that this significantly reduces the time taken to crack a company WiFi network to the point where a dedicated hacker could compromise a corporate wireless network.

Our observations at Idappcom is that this is another irresponsible and unethical release from a Russian-based company that has clearly produced a `thinly disguised’ wireless network hacking tool with the deliberate intention of brute force hacking wireless networks.

The solution is clearly and intentionally priced within the grasp of any hacker or individual intent on malicious wireless attacks. Assuming you have no password and access control recovery system, if you do forget the password to a wireless network that you own, how difficult do you think it is to walk over to the device and press the reset button? In most situations resetting a wireless device, restoring a configuration and setting a new password is a process that can be achieved in minutes.
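Before coming back to the argument itself, it is worth turning that quoted rate into rough numbers. The back-of-the-envelope sketch below uses Idappcom’s figure of 103,000 guesses per second from the quote above; the password lengths and the four-card rig are illustrative assumptions of mine, not measurements:

# The quoted rate, and an assumed (hypothetical) four-GPU rig.
RATE_SINGLE_GPU = 103_000   # WPA2 guesses per second (Idappcom's figure)
GPUS = 4

def worst_case_days(alphabet_size: int, length: int, rate: float) -> float:
    """Days needed to exhaust the entire keyspace at the given guess rate."""
    keyspace = alphabet_size ** length
    return keyspace / rate / 86_400   # 86,400 seconds in a day

# An 8-character, lowercase-only passphrase: 26^8, roughly 2.1e11 guesses.
print(f"8 lowercase chars, 1 card : {worst_case_days(26, 8, RATE_SINGLE_GPU):,.1f} days")
print(f"8 lowercase chars, 4 cards: {worst_case_days(26, 8, RATE_SINGLE_GPU * GPUS):,.1f} days")
# A 12-character mixed-case-and-digit passphrase remains far out of reach.
print(f"12 mixed chars, 4 cards   : {worst_case_days(62, 12, RATE_SINGLE_GPU * GPUS):,.0f} days")

At that rate a short, simple WPA2 passphrase falls within days of GPU time, while a long mixed passphrase remains utterly impractical to brute force.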

Haywood’s is an absolutely valid viewpoint. But I’d like to suggest an alternative view. Was Adobe’s encryption weak in 2001 because of Dmitry Sklyarov, or was Dmitry Sklyarov able to produce his software because Adobe’s encryption was weak? Adobe’s security is far stronger today. Is that partly because of Elcomsoft?

And now, does the Elcomsoft EWSA product create insecure networks, or merely demonstrate that those networks are already insecure? One thing we can be sure of: the security of those WiFi networks will now have to improve. Is that a bad thing?

There is a similarity here with the full disclosure debate. And I suspect that people will take similar sides. You may have guessed that, on balance, I believe that security is improved by full disclosure; and by companies like Elcomsoft. Those who believe that full disclosure is irresponsible disclosure will probably believe that Elcomsoft is irresponsible.

And never the twain shall meet.

