Let me begin by making one thing very clear... Something can be a REALLY good idea, and still be wrong. These two things are NOT mutually exclusive.
The world of “security research” has recently begun overflowing with a sense of entitlement. I’m obviously generalizing, but researchers insist that the security battlefield is tipped against them, and therefore they should be above the law (sometimes, literally) because it’s in everyone’s best interest.
Many people in the Wide World of Twitter have recently been discussing the blog post “No, You Really Can’t” by Mary Ann Davidson, Oracle CSO. In her post, she discusses customers who report security vulnerabilities discovered by way of static analysis (those that almost always result from some sort of reverse engineering). The post has since been unpublished, but you can read the full text here. The response has overwhelmingly been in opposition to everything she had to say. And, I would argue, the response has been overwhelmingly wrong.
Let’s get a quick summary of her points to start off with:
- Reverse engineering Oracle code is a violation of the Oracle license agreement
- Customers are doing exactly that by running certain static analysis tools
- Often, customers have third parties do it for them, sometimes without realizing it
- Customers generally do this without inquiring about the security program at Oracle
- Customers often have no proof-of-concept of the “vulnerabilities” they find
- Oracle requires each issue to be reported individually
- Oracle will not accept a report from these tools en masse
- Oracle runs many of these tools anyway
- Oracle has access to the actual source and can verify issues they find
- Most of these tools have a “near 100% false positive rate”
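That last point is easy to illustrate. A purely pattern-based scanner flags every use of a "dangerous" API with no understanding of the surrounding logic, so it reports safe code and unsafe code alike. This is a hypothetical minimal Python sketch of such a scanner, not a model of any real tool:

```python
# Hypothetical illustration of why naive static analysis produces false
# positives: the scanner matches text patterns, not program semantics.

SNIPPET = """\
if (strlen(src) < sizeof(dst)) {
    strcpy(dst, src);   /* safe: length already checked */
}
strcpy(other, input);   /* potentially unsafe */
"""

def naive_scan(code: str, pattern: str = "strcpy(") -> list[int]:
    """Return the 1-based line numbers where the pattern appears."""
    return [i for i, line in enumerate(code.splitlines(), start=1)
            if pattern in line]

findings = naive_scan(SNIPPET)
print(findings)  # flags both calls; the first is a false positive
```

A tool like this has no proof-of-concept behind its findings, which is exactly the situation Davidson describes: the reporter hands over a list of flagged lines, and someone with source access still has to triage every one.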
To me, these are all perfectly valid points. The response from the security industry, however, seems best summed up by the first tweet I saw on the topic, which read: “My first assumption after reading this was that Oracle’s web server was hacked and this article is a parody.”
This echoes a much wider (what I consider) problem in the infosec industry - we assume we have a right to do whatever we want in the name of security. I’d like to repeat my opening statement now, in case you skipped it: something can be a REALLY good idea, and still be wrong; the two are NOT mutually exclusive.
It’s (at least sometimes) a REALLY good idea, as an author of software, to put your software out there and say “OK guys, have at it. Let me know if you find issues." This has been proven over and over in the Open Source community. The issue is that Oracle has a right to say “no, actually, we don’t want you to look at our code and look for issues,” regardless of whether it’s a good idea or not. They have a legal right to enforce their software license terms, and outside folks, especially customers who agree to the license agreement, have ZERO legal right to break the terms in the name of security.
It’s accurate to respond with “but bad guys don’t follow the license,” or any other “but, bad guys …” statement. Accurate, but irrelevant: when you agree to the license terms of Oracle, you agree to them. If you don’t like them, don’t buy the product.
Oracle, presumably, employs a team of security people who are responsible for examining Oracle products for security issues. According to the statistics listed in the blog post (believe them or don’t), 3% of new security issues are found by security researchers, 10% by customers, and the remaining 87% by internal Oracle audits. If this is to be believed (and if you don’t, well, you won’t believe anything I’m saying, so you might as well stop), then Oracle does indeed have a decent security team doing investigations and finding issues.
The assumption that Oracle must be incapable of finding security bugs because “look how insecure their products are” is a terrible one. Few of us outside of Oracle know their product development practices. The security team could be literally the best in the world, but if their development cycle is “ship first, pentest later,” it’s irrelevant.
Oracle is probably fighting a battle that a lot of software companies fight: they’ve got some technical debt and lots of old, legacy code that they’re trying to fix up, but, practically speaking, they can’t stop all new development while they go back through and fix it. Or maybe they really do “ship first, pentest later.”
We at Hurricane Labs may be a security vendor, but we are also a customer of many products and services. When we decide to purchase a product or, more recently, a cloud service, we make it clear to the potential vendor that we need to understand, and verify, their security program.
If the vendor is unwilling, we find a new vendor. We would never sign up, agree to their terms, and then violate them by performing a pentest. We welcome customers that would like to pentest the services we provide for them, provided they coordinate such tests with us. But we do not welcome anyone, customer or otherwise, who does so without our consent.
As an industry, we need to push a collaborative approach. We need to work WITH vendors to help them produce more secure products. Taking a stance of immunity does nothing good for our image as an industry, nor does it really, in the end, help the people we’re supposed to be protecting.
The software industry has come a long way in supporting the efforts of security researchers, and we need to continue to support companies that support us, not break legal agreements with those who do not.