Ah yes, our favorite part of FISMA: the ongoing reporting of metrics to OMB. Last year’s guidance on what to report is in OMB Memo M-07-19. It’s worth the time to read, and you probably won’t follow the rest of this blog post if you don’t at least skim it to find out what kind of items get reported.
Still haven’t read it? Fer chrissakes, just look at pages 24-28, it’s a fast read.
If you look through the data that OMB wants, there are two recurring themes: what is the scope/extent/size of your IT systems, and how well are you doing what we told you to do to protect them? In other words, how effectively are you, Mr CISO, executing at the operational level?
We’re missing one crucial bit of process here: what are we actually going to do with scoping metrics and operational performance metrics at the national, strategic level? What we are collecting and reporting are primarily operational-level metrics that any good CISO should already know, or at least be able to estimate, in order to do their job. They’re not the type of metrics we need to be collecting at levels above the CISO unless our sole purpose is to watch over their shoulder.
As our metrics gurus will point out, the following are characteristics of good metrics:
- Easy to collect: I think the metrics that OMB is asking for are fairly easy to collect now that people know what to expect. Originally, they were not.
- Objective: Um, I’ll intentionally side-step this one. Suffice it to say that I’ve heard from several people a story whose punch line goes something like “Your security can’t be this good; we’ve already decided that you’re getting a ‘D.’”
- Consistent: Our consistency is inconsistent. Look at how many times the FISMA grading scale has changed, and we still wonder why people think it’s not rooted in any kind of reality. And yes, I’m advocating yet another change, so I’m probably more an accomplice than not.
- Relevant: We do a fairly good job at this. Scoping and performance metrics are fairly relevant. I have some questions about whether our metrics are relevant at the appropriate level, but I’ve already mentioned that.
- Actionable: This is where I think we fall apart because we’re collecting metrics that we’re not really using for anything. More on this later….
Now, as Dan Geer says in his outstanding metrics tutorial (caveat: 6-MB PDF), the key to metrics is to start measuring anything you can. The line of thought goes that if you can collect a preliminary set of data and do some analysis on it, it will tell you where you really need to be collecting metrics.
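In the "measure anything you can" spirit, a starter script can be almost trivially small: just log whatever counts are cheap to pull today and worry about analysis later. A minimal sketch follows; every metric name and number in it is invented for illustration, not a recommended taxonomy.

```python
import csv
import datetime

# Hypothetical starter metrics -- anything cheap to count today.
# Names and values are illustrative only.
snapshot = {
    "systems_inventoried": 412,
    "systems_with_current_ca": 377,   # certification & accreditation
    "open_poams": 1893,
    "poams_older_than_1yr": 240,
    "employees_trained_pct": 91.5,
}

def record(snapshot, path="metrics.csv"):
    """Append today's raw counts to a CSV; analysis comes later, per Geer."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for name, value in snapshot.items():
            writer.writerow([datetime.date.today().isoformat(), name, value])

record(snapshot)
```

The point isn’t the tooling; it’s that a few months of even crude time-series data will tell you which of these counts were worth collecting and which weren’t.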
The techie version of this is that you will blow away your first server install within six months, because by then you know better how you operate and what the configuration really needs to be.
Now ain’t that special? =)
So the question I pose is this: after 6 years, have we reached the watershed point where we’ve outgrown our initial set of metrics and are ready to tailor our metrics based on what we now know?
I think the answer is yes, and applying our criteria for good metrics, what we need is a good set of questions to answer:
- What national-level programs can reduce the aggregate risk to the government?
- What additional support do the agencies need and how do we translate that into policy?
- As an executive branch, are we spending too much or too little on security? Yes, I know what the analysts say, but their model is for companies, not the Government.
- What additional threats are there to government information and missions? Yes, I’m talking about state-sponsored hacking and some of the other things specific to the government. Is it cost-effective to blackhole IP ranges for some countries for some services?
- Is it more cost-effective to convert all the agencies to one single NSM/SIEM/$foo à la Einstein, or is it better to do it on a per-agency basis?
- What is the cost of implementing FDCC, and is it more cost-effective and risk-effective to do it immediately or to wait until the next tech refresh on desktops as we migrate to Vista or upgrade Vista to the next major service pack?
- What is the cost-benefit-risk comparison for the Trusted Internet Connections initiative, and why did we come up with 50 as a number versus 10 or 100?
- Is there a common theme in unmitigated vulnerabilities (long-term, recurring POA&Ms) across all the agencies that can be “fixed” with policy and funding at the national level? Say, for example, the fact that many systems don’t have a decent backup site, so why not a federal-level DR “Hotel”?
- Many more that are above and beyond my ability to generate today…
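To make the POA&M question concrete: the analysis it implies is basically "find long-term, unmitigated weaknesses that recur across agencies," which is a small aggregation job once the data is pooled. Here is a hedged sketch; the agencies, weakness categories, ages, and thresholds are all made up for illustration.

```python
from collections import Counter

# Hypothetical pooled POA&M records: (agency, weakness_category, age_in_days).
# All data here is invented for illustration.
poams = [
    ("Agency A", "no alternate processing site", 400),
    ("Agency B", "no alternate processing site", 520),
    ("Agency C", "weak audit log review", 90),
    ("Agency D", "no alternate processing site", 610),
    ("Agency B", "weak audit log review", 380),
]

LONG_TERM_DAYS = 365  # treat anything open more than a year as long-term

def recurring_themes(poams, min_agencies=2):
    """Return long-term weaknesses shared by several agencies -- the
    candidates for a policy-and-funding fix at the national level."""
    # Unique (agency, category) pairs among long-term items.
    long_term = {(agency, cat) for agency, cat, age in poams
                 if age > LONG_TERM_DAYS}
    counts = Counter(cat for _, cat in long_term)
    return [cat for cat, n in counts.items() if n >= min_agencies]

print(recurring_themes(poams))  # -> ['no alternate processing site']
```

Run against real government-wide POA&M data, the categories that pop out of a query like this are exactly where a federal-level fix (like the DR “Hotel”) would pay off.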
In other words, I want to see metrics that produce action or at least steer us to where we need to be. I’ve said it before, I’ll say it again: metrics without actionability means that what we’ve ended up doing is performing information security management through public shame. Yes, some of that is necessary to serve as a catalyst to generate public support which generates Congressional support which gets the laws on the books to initiate action, but do we still need it now that we have those pieces in place?
If I had my druthers, this is what I would like to see happen, maybe one day I’ll get somebody’s attention:
- OMB and GAO directly engage Mr Jaquith to help them build a national-level metrics program.
- We produce metrics that are actionable.
- We find a way to say what our problems are without overreacting. I don’t know if this can happen because of cultural issues.
- We share the metrics and the corresponding results with the information security management world because we’ve just generated the largest-scale metrics program ever.
And oh yeah, while I’m making wishes, I want a friggin’ pony for Christmas! =)