AfterBites: Incident Reporting and Science 101
I need to preface this with a disclaimer: I am not criticizing SANS for carrying the article. It's instructive, and that's always useful. I wish, however, that technology journalists were a bit more skeptical or clueful - and, as they say, "that's our story."
Reports of Cyber Incidents on the Rise
(February 17, 2009)
The number of cyber security incidents at federal civilian agencies reported to the US Department of Homeland Security's US-CERT has tripled since 2006. In fiscal 2008, 18,050 incidents were reported, compared with 12,986 in fiscal 2007 and 5,144 in fiscal 2006.
Agencies are required to report cyber security incidents under the Federal Information Security Management Act (FISMA); such incidents include unauthorized access, denial of service, malicious code, improper use, scans, probes and attempted unauthorized access. The significant increase over the last several years can be attributed to both an increase in malware and a heightened awareness of and ability to detect incidents.
Small Businesses Want Centralized Cyber Incident Reporting Organization
(February 19, 2009)
A report from the Federation of Small Businesses says that 54 percent of small businesses have experienced fraud or cyber crime over the last year. Although more than one-third of respondents do not report the incidents to police or to banks because they believe it would not make a difference, 53 percent of those surveyed would like specific information about how and where to report the incidents. Eighty-five percent of respondents said that they would make use of organizations established specifically to gather the information and use it to combat fraud. The average annual cost of cyber crime and fraud to small businesses in the UK is GBP 800 (US $1,140).
Let's start with the second article first, because it's less interesting. The headline should have said "UK small businesses" but that's a minor detail. Does this set off your stealth marketing alarm? It pegged the needle on mine, so I'd like to make a prediction: someone is out beating the bushes, right now, to start up that reporting center. Let's see if I'm right and, within the next year, someone announces that they're either member-funded (in which case they will quickly vanish) or government-funded, and are offering that capability. Those of you who've been around information security since the early 1990s will remember the spectacular rise and fall of break-in reporting in the US, with attrition.org, CERT, and CSI/FBI publishing various statistics that meant - uh - various things. Usually, what they meant, to me, was "security reporting is a hard problem." ... And that's the topic of the first article.
Aside from avoiding self-selected samples in statistics, scientists learn early on that they must always try to measure and report consistently: apples count against apples, and current results need to be normalized against past reporting practices. We've all seen what happens when reporting goes wrong - you get things like the apparent "epidemic" of autism in children; it's not an epidemic, it's that reporting standards have changed and doctors are more consistently diagnosing problems under one name, rather than under a disparate group of other developmental labels. Reporting variations cause all kinds of trouble in science, and aid and abet a lot of pseudoscience; this is why patients report all kinds of beneficial effects from self-medication with "alternative cures" but those cures perform on par with a placebo when subjected to clinical trials. Don't worry - I won't turn this into an extended anti-pseudoscience rant - the point is: if you're not careful with how things are reported, you can see trends that simply aren't there. And, from that, you can jump to all kinds of conclusions.
From the Federal Computer Week article:
"Federal civilian agencies reported three times as many cyber-related incidents in fiscal 2008 as they did in fiscal 2006 to the Homeland Security Department's office that coordinates defenses and responses to cyberattacks. Meanwhile, an official says the office suspects the actual number of cyber incidents is higher."
Uh-huh. It could be. But then again, maybe not. It depends on how well everyone reports incidents, and whether they are reporting consistently. If what you're reporting is incidents, then the crucial questions are:
- What is an "incident"?
- How consistently are "incidents" being counted?
As we search through the article for answers, we discover that the definition of "incident" is:
"cyber incidents, which are defined as acts that violate computer security or acceptable-use policies."
Wow <sarcasm>there's no room for interpretation, there!</sarcasm>!! So, one agency might report a count of users who got malware on their desktops, while another might just forward "critical" severity IDS log data? Unless there is a useful, common definition of "incident" for reporting purposes, they may as well be handing in their golf scores. If every agency had the same type of IDS, plugged into the same (logical) place in their Internet link, with the same rules enabled, then there might be some interesting data in the events reported - but even then, the results would be highly product-dependent.
More importantly, reporting such as this actually makes bad practices appear good - suppose Agency A reports 10,000 incidents while Agency B reports 3. It could be that Agency B only reported 3 because the only system that they monitor is the network manager's personal desktop machine and all the other systems are happy participants in a huge 'botnet. Or, since incidents aren't graded, it's possible that Agency B's 3 incidents consist of theft of nuclear weapons secrets, while Agency A's represent p0rn-surfing attempts.
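To make the Agency A versus Agency B point concrete, here's a minimal sketch - every number in it is invented for illustration - showing why raw incident counts are meaningless without knowing how much of each agency's network is actually being watched:

```python
# Hypothetical illustration (all figures invented): raw incident counts
# say nothing unless you also know what fraction of the agency's systems
# is monitored.

# Agency A monitors everything; Agency B monitors one desktop.
agency_a = {"monitored": 5000, "total": 5000, "incidents": 10_000}
agency_b = {"monitored": 1,    "total": 5000, "incidents": 3}

def coverage(agency):
    """Fraction of systems whose incidents could even be observed."""
    return agency["monitored"] / agency["total"]

def rate_per_monitored_host(agency):
    """Incidents seen per host that is actually watched."""
    return agency["incidents"] / agency["monitored"]

for name, agency in [("A", agency_a), ("B", agency_b)]:
    print(f"Agency {name}: {agency['incidents']:>6} incidents, "
          f"coverage {coverage(agency):.2%}, "
          f"{rate_per_monitored_host(agency):.1f} per monitored host")
```

Under these made-up numbers, Agency B reports 3,333 times fewer incidents than Agency A, yet has a *higher* incident rate on the hosts it actually watches - and 99.98% of its systems could be in a botnet without generating a single report. Severity weighting would be a second normalization on top of this, and the DHS numbers apply neither.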
Usually, when I attack pseudo-science in computer security, someone replies, "Yes, but some data is better than none at all!" Absolutely not true! Deceptive, inaccurate, and misleading data is worse than none at all, because it can encourage you to spend your time and energy barking up the wrong tree. In the case of computer security, this is a very real problem, since there are limited budget dollars to allocate and it's crucial to understand the problems an agency is experiencing, in order to form a technology strategy based on something more than wishful thinking. Most of the time, the person who says "some data is better than none at all" has "regional sales manager" on his business card. Go ahead: call me "cynic".
The reporting requirements were part of the Federal Information Security Management Act (FISMA) standards, and it shouldn't surprise any of you to know that FISMA compliance didn't immediately happen at all federal agencies. So, as more agencies began to put reporting in place, what do you think is going to happen? Of course: the number of incidents reported goes up! This article, from 2005, reports that 35% of agencies (I wonder how they calculated that!?) expected not to be compliant by the end of 2005 - and the contents of the article indicate that a lot of agencies were only going to be compliant with a few aspects of the standard. So we shouldn't be surprised by:
(reporting to US-CERT) a total of 18,050 incidents in fiscal 2008, compared with 12,986 in fiscal 2007 and 5,144 in fiscal 2006, according to DHS officials.
As I look at these pieces of "data" I can't help but wonder if what they're really measuring has more to do with the rate of FISMA compliance than anything to do with security incidents at all.
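You can demonstrate this effect in a few lines. The sketch below is purely hypothetical - the true incident rate and the compliance ramp are both invented - but it shows that if you hold the real number of incidents perfectly flat and only let reporting compliance rise, the reported totals "triple" all by themselves:

```python
# Hypothetical sketch: the true incident rate never changes; only the
# fraction of agencies complying with FISMA reporting rises. All the
# rates below are invented for illustration.

TRUE_INCIDENTS_PER_AGENCY = 200   # assumed constant every year
AGENCIES = 100

# Invented compliance ramp: fraction of agencies actually reporting.
compliance = {2006: 0.30, 2007: 0.65, 2008: 0.95}

reported = {year: int(AGENCIES * fraction * TRUE_INCIDENTS_PER_AGENCY)
            for year, fraction in compliance.items()}

for year, count in reported.items():
    print(f"{year}: {count} incidents reported")
print(f"apparent growth 2006->2008: {reported[2008] / reported[2006]:.1f}x")
```

The reported counts climb from 6,000 to 19,000 - a better-than-threefold "increase" - while, by construction, security got neither better nor worse. That's the shape of the US-CERT numbers, which is why they may be measuring FISMA adoption rather than attacks.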
I'll give the penultimate word to US-CERT:
Mischel Kwon, US-CERT’s director, said that the numbers represent both an increase in malware and improvements in the capabilities of US-CERT and agencies to detect and report cyber incidents.
“As we mature and become more robust, and we deploy more tools, incident numbers will go up,” she said. “Both parts of the story are true: there is an increase in mal events, and there is an increase in capabilities in order to detect those mal events.”
We don't really have any useful data, yet, but our magic 8-ball says "signs point to yes."