Silently fixing security bugs - how dare they!
Over in "Random Things from Dark Places", Hellnbak posts about reducing vulnerability counts by applying the SDL (Security Development Lifecycle), and makes the very reasonable point that vulnerabilities found prior to release by a scan that is part of the SDL process cannot be counted as failures of the SDL process. What's more, those vulnerabilities can be silently fixed by the vendor before shipping / deploying the product being reviewed. [Obviously, not fixing them would be a really bad idea]
What intrigued me, though, was this line:
But, as Ryan [Naraine] said — issues found in public code that are fixed silently are a real issue. While I have picked on Microsoft specifically for this practice the sad reality (that I quickly learned after publicly picking on MS) is that pretty much all vendors do this.
So, let's see now... this is talking about a patch, hotfix, or service pack, that removes a security vulnerability from a product, but where the vulnerability (and its fix) does not get announced publicly.
There are two reasons not to announce a security vulnerability, in my view:
- You don't want to.
- You can't.
Let's subdivide reason 1, "You don't want to":
- You feel it would adversely affect public opinion, stock price, user retention...
Well, that's kind of bogus, isn't it? Given some of the vulnerability announcements that have appeared, what on earth could be worse than remote execution, elevation of privilege, and complete control over your system? The only way to make this accusation stick is to assert that the vendor randomly picks which vulnerabilities to announce, to somehow reduce the overall numbers - and then manages to do so in such a way that no one else notices the vulnerabilities that were silently fixed.
That's not security, and any vendor who did that would find its security staff soon revolting against the practice. There is no glut of security workers large enough to sustain a practice that assumes you can hire replacements for the disgusted ones who quit.
- You're tired of going through the process of documenting the bug, its workarounds and/or mitigations, and would rather be doing something else, like, oh, I don't know, fixing more vulnerabilities.
That's not good security - create a more streamlined and automated process for creating the announcements, and do both - find and fix more vulnerabilities and make announcements for the ones you find. If you're too busy to announce all the vulnerabilities in your product, you're too busy to fix them all.
- You found the vulnerability internally, and would like to prevent it from being exploited, by bundling its fix silently into a patch released for some other, announced fix and hoping people install it.
That's not terribly reliable as a patching policy. It makes some small sense for related fixes, but then why wouldn't you announce that as a related fix in the related announcement? Perhaps it makes sense for architectural fixes, where the only good fix is to go to the next level of service pack, but then wouldn't you want to publicise workarounds for those who can't apply the next service pack for one reason or another?
But the biggest reason not to do this is that when you release a patch, people will reverse-engineer it, to figure out how to exploit the unpatched version - and they'll find the change you didn't mention as well as the one you did, and will exploit both of them. But your users will only be aware of one problem that needs patching, and may have decided that they can mitigate that without patching.
So, pretty much bad security on that approach, too.
So, "You don't want to" comes out as bad security, and it's the sort of bad security that you would have to fix to employ - and continue to employ - a halfway decent security team.
What about "You can't" - how could that come about?
- You have a legal judgement or contract requirement forbidding you from disclosing vulnerabilities. Hey, Microsoft has some of the best and most expensive lawyers on the planet, but even they get stuck with tough legal decisions that they have to abide by, and can't do anything about. If a security vulnerability was considered to be a "threat to national security", the current administration (and possibly many others) would be only too quick to deem it so secret that no-one could reveal its presence. And once you accept that possibility, it isn't hard to imagine other circumstances where a company might be forced to keep a vulnerability quiet.
- You know enough to fix the code, but not enough to classify the vulnerability or explain its workarounds or mitigations.
Yeah, that's pretty much the truth for all the announced vulnerabilities, too - how many times have you seen a vulnerability announcement that says "this cannot be exploited remotely", followed a few days later by updated information revealing that, oh yes it can? This doesn't appear to be a good reason not to announce a vulnerability.
- You don't know the vulnerability is there, or you don't realise that you fixed a vulnerability.
Okay, that last one's the topper, isn't it? How can you announce a fix for a vulnerability that you don't know about?
Clearly, you can't.
Just as clearly, perhaps you're thinking, you can't fix a vulnerability that you don't know about, right?
Wrong. You can very easily fix a vulnerability about which you know nothing. Here are a couple of hypothetical examples:
After we moved into our new house, we changed all the locks on the doors. Why? Because the new locks were prettier. In doing so, we fixed a vulnerability (the former owner could have kept the keys, and exploited us through the old locks) - but we didn't intend to fix the vulnerability, we just wanted prettier locks.
Years ago, I needed a piece of functionality that wasn't provided by the Win16 API, so I wrote my own routine to do file path parsing. A couple of years back, I dropped support for Windows 3.1, and in a recent code review, I spotted that the file path parsing routine was superfluous. So I removed it. In removing it, I didn't spend a lot of time looking at the code - there was a vulnerability in there, but who does a code review of a function they're removing? So now, I've fixed a vulnerability that I didn't know existed.
Too many times, we assert evil intent for those actions that we disagree with. Ignorance is a far better explanation, along with incompetence, expediency, and just plain lack of choice. Note that ignorance is no bad thing - as in my hypothetical case, a genuine attempt to improve quality leads to a security improvement of which the developer was wholly ignorant.
Whether vendors don't want to disclose all of their vulnerabilities when patching, or simply can't, because they didn't realise the scope of a fixed vulnerability, it's important to stay current with patches wherever that would not interfere with your production applications. Because one day a flaw will be patched that your company will later be attacked through. If you didn't apply that patch, you will be owned.