There's no folk without some ire
[I was going to title this "PATRIOT - Piddling Around The Real Issues Of Terrorism", but I figured that'd be a little too inflammatory.]
The other day, I was listening to good-old-fashioned talk radio, and something the host said surprised me. He was blathering about how Democrats wanted to make friends with terrorists.
It sounds really stupid when you put it in those terms, but yes - that's essentially the approach that has to happen. Like a pyramid scheme, the terrorists at the top feed hatred down the chain and get power back up it. As long as that feed of hatred is accepted by their "down-line", the feed of power up the line continues. You don't stop terrorism by making friends with the guys at the top; you stop terrorism by making nice with the guys at the bottom. You remove the power-base by making it difficult for people to hate you.
So, how does that remotely connect to the usual topic of this blog, computer security? Like this:
Vendors [think Microsoft, but it also applies to small vendors like me] face this sort of behaviour, on a smaller level, when it comes to vulnerability reports. Rightly or wrongly, there's a whole pile of hatred built up among some security researchers against vendors, initially because, over the years, vendors have regularly ignored and dismissed vulnerability reports. As a result, those researchers believe that the only way they can get vendors to fix their vulnerabilities is to publicly shame them by posting vulnerability announcements without first contacting the vendor.
I'm really not trying to suggest that vulnerability researchers are akin to terrorists. They're akin to an oppressed and misunderstood minority, some members of which have been known to engage in acts which are inadvertently destructive.
Microsoft and others have been reaching out of late to vulnerability researchers, introducing them to the processes a vendor has to go through when it receives a vulnerability report, before a patch or official bulletin can be released. Some researchers are still adamant that immediate public disclosure is the only acceptable way; others have been brought over to what I think is the correct way of thinking - that it helps protect the users if the first evidence that exists in public is a bulletin and a patch.
The security industry gets regularly excited by the idea of a "zero-day exploit" - a piece of malware that exploits a vulnerability from the moment the vulnerability is first made public. I think it's about time we got just as excited about every release of a "zero-day patch" - a fix that's available before anyone outside the vendor knows there was a flaw.