US-CERT still lacks clues
I used to work there, so I know for a fact that US-CERT has some very questionable metrics, statistics, and opinions about what it considers Internet vulnerabilities, threats, and current activity on the Internet. This article points out that US-CERT failed to normalize its vulnerability database before announcing year-end totals, giving the false impression that some operating systems are more vulnerable to attack than others. Essentially, they neglected to remove updated entries from the database tally, often counting three or four entries for the same vulnerability.
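To make the counting problem concrete, here is a minimal sketch of the kind of normalization the article says was skipped: collapsing multiple revisions of the same record into a single entry before tallying per-OS totals. The field names (cve_id, revision, affected_os) are hypothetical, not US-CERT's actual schema.

```python
from collections import Counter

def normalize(entries):
    """Keep only the latest revision of each vulnerability (by CVE ID)."""
    latest = {}
    for entry in entries:
        key = entry["cve_id"]
        if key not in latest or entry["revision"] > latest[key]["revision"]:
            latest[key] = entry
    return list(latest.values())

def tally_by_os(entries):
    """Count vulnerability entries per affected operating system."""
    return Counter(e["affected_os"] for e in entries)

# Hypothetical data: one Windows vulnerability revised twice, one Linux vulnerability.
raw = [
    {"cve_id": "CVE-2005-0001", "revision": 1, "affected_os": "Windows"},
    {"cve_id": "CVE-2005-0001", "revision": 2, "affected_os": "Windows"},  # updated entry
    {"cve_id": "CVE-2005-0001", "revision": 3, "affected_os": "Windows"},  # updated again
    {"cve_id": "CVE-2005-0002", "revision": 1, "affected_os": "Linux"},
]

print(tally_by_os(raw))             # inflated: Counter({'Windows': 3, 'Linux': 1})
print(tally_by_os(normalize(raw)))  # normalized: Counter({'Windows': 1, 'Linux': 1})
```

Skip the deduplication step and one repeatedly updated vulnerability gets counted three times, which is exactly how a year-end total can make one platform look far worse than another.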
But it goes a bit deeper than “forgetting” to normalize the data. Why would US-CERT, part of the Department of Homeland Security, neglect to remove duplicate entries? After all, most of the vulnerability team operates out of Carnegie Mellon University in Pittsburgh and was formerly CERT/CC. These “propeller-heads” (if you met them, you would agree) can wax existential about the meaning of the word “severe” and use a complex twelve-step equation to determine the severity of a vulnerability, when normal people would quickly agree, “whoa, that looks bad.” So why would an organization of educated elites, which takes millions in Homeland Security funding from the government, make such a mistake?
It wouldn’t be to pad their stats, now would it? It wouldn’t be to fool the congressional committees that oversee the organization into thinking more work is being produced than actually is, now would it? After almost three years, there is still disagreement among the various internal factions of US-CERT, the government, the former CERT/CC, the CVE team run by MITRE, and the participating private-industry partners, over how best to secure the nation’s cyber infrastructure against attack. And it will continue until CERT/CC separates itself from DHS.