More GAO Testimony

Posted March 14th, 2008 by

GAO has delivered an updated version of the testimony from February 14th that I talked about here. I’m not going to rehash what I’ve already said, but I want to focus your attention on something I didn’t talk about then: incident statistics.

According to GAO, the number of incidents reported to US-CERT increased 259%, from 3,634 in FY2005 to 13,029 in FY2007 (*cue shock and awe*, and for the record the back-of-the-envelope math does check out: that’s roughly a 259% cumulative increase over the two years, or somewhere around 90% per year if you annualize it). OK, so the number is increasing. But there are several failures in GAO’s logic here that need to be pointed out:

“The need for effective information security policies and practices is further illustrated by the number of security incidents experienced by federal agencies that put sensitive information at risk.”

In other words, they’re indirectly drawing the conclusion that the high number of incidents is directly proportional to their audit findings. While this may be true in some (most?) respects, the comparison cuts the other way, too: if the number of implemented, tested, and integrated security controls has gone up over the past two years, you would expect the number of incidents to go down, not up.

So really, what’s the dealio?

The first thing that I would like to point out is that security policies and practices have an indirect impact on security incidents. You don’t have a solid one-for-one comparison that you can use, so I think GAO is doing itself an injustice by trying to correlate the two. However, you can use incident counts as a holistic measure of how well your information security program is doing, but overall it’s a very coarse method.

The second thing that I need to point out is the trend in the incident number itself. Anybody who starts tracking incident metrics has to ask themselves one question: now that we’re tracking the number of incidents, will we notice more incidents simply because we’re measuring them? It’s the incident response equivalent of Schrödinger’s cat and the Measurement Problem. =)

There are a couple of reasons that the incident count has increased 259% in just two years:

  • First is awareness of incidents. Government-wide, two things have happened in these two years that should have increased the number of reportable incidents: US-CERT has matured to the point that it can receive and categorize larger amounts of incident data, and the agencies have matured to the point that they have their own incident response and reporting procedures. In short: the infrastructure to respond and report now exists where it really didn’t three years ago.
  • Second is the PII effect. A series of high-profile incidents involving PII was followed by OMB mandating that all incidents related to PII be reported to US-CERT within one hour. As a result, anything that might possibly be an incident and might possibly involve PII now gets reported, because that’s the career-safe move: “When in doubt, report it up.” Whether they admit it or not, the people out in the agencies are now what we could call “gun-shy” about PII incidents, and that increases the number of reported incidents.
  • Third is the reporting criteria. The criteria for an incident are very broad and include “improper usage”, “scans/probes/attempted access”, and “investigation”, which is classified as “Unconfirmed incidents that are potentially malicious or anomalous activity deemed by the reporting entity to warrant further review”.

If this were a SIEM or IDS, I would say that we’re flagging on too many things and need to tune our systems down a little bit. Keep in mind that it’s the nature of Government to underreport (when they’re not required to report) and to overreport (when they are required to report).

You still need to track the aggregate number of incidents reported to US-CERT, and in theory this number should trend downward as we get better at governance at the national level, a sort of “trickle-down infosec economy”. Keep in mind that this number should peak within 5 to 10 years and then slowly come down as we fine-tune our reporting criteria and get better at securing information. Of course, I won’t be surprised if it doesn’t, given the threat environment, but that’s a conversation for another day.

However, what I propose is a middle ground on incident reporting: what we really need to pay attention to for the next couple of years is the number of “severe” incidents. Those are the incidents that actually have an impact that we care about. They are mentioned in the GAO report, and we should all be able to recall a handful of them without even seeing what GAO had to say.

Knowing this town, I propose we use “Rybolov’s Washington Post Metric”: how many security incidents were significant enough to be deemed “newsworthy” by the Washington Post and mentioned somewhere in the paper. For fine-tuning, you could weight, say, the daily front page vs. the Sunday supplement technology section.
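
If you really wanted to track it, here’s a minimal sketch; the weights and the sample incidents are completely made up for illustration:

```python
# Toy scoring for "Rybolov's Washington Post Metric": count incidents by
# where they surfaced in the Post, weighting placement for fine-tuning.
# The weights and sample data below are hypothetical, not an official scale.
PLACEMENT_WEIGHTS = {
    "daily_front_page": 5,       # the kind of incident everybody hears about
    "sunday_tech_section": 1,    # newsworthy, but only to us nerds
    "not_covered": 0,            # reported to US-CERT, ignored by the Post
}

def wapo_score(incidents):
    """Sum the placement weights for a year's worth of incidents."""
    return sum(PLACEMENT_WEIGHTS[i["placement"]] for i in incidents)

fy2007 = [
    {"name": "stolen laptop with PII", "placement": "daily_front_page"},
    {"name": "defaced public web server", "placement": "sunday_tech_section"},
    {"name": "port scan from somewhere unfriendly", "placement": "not_covered"},
]

print(wapo_score(fy2007))  # -> 6
```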

My parting shot for the FISMA-haters:  in the years of yore before FISMA (or GISRA if you want to go back that far), how many of these incidents would have been reported?  It seems like we’re failing if you take the numbers and the reports at face-value, but as GAO says in their title:  “Progress Reported, but Weaknesses at Federal Agencies Persist”.  What more do you need to know?




Posted in FISMA | 3 Comments »

On the Dangers of SP 800-26

Posted March 11th, 2008 by

OK, let’s kick it old-sk00l-FISMA-stylie.  Back in the day, there was Special Publication 800-26.  It was part of the first set of guidance to come from NIST on information security (for those of you who can’t count, as of today we’re up to 800-115).  I guess you could say that the original 800-26 was the primordial beginning of a catalog of controls combined with a self-assessment questionnaire.

The thing I liked about 800-26 was that it’s a thinly-disguised version of CMMI:  five levels of maturity, with level one being “do you have a policy that addresses this?” and the plateau level being “have you integrated this control by feeding the results of testing back into all the other levels?”  Hey, that sounds like fairly competent engineering and technical management practice (no, I’m not open to debating the merits and warts of CMMI today, tyvm), and it’s familiar enough that we can instinctively get the idea of what we’re doing with it.
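
For those who never saw the questionnaire, the ladder looked roughly like this; here’s a quick sketch of the idea (the level descriptions are paraphrased, and the scoring function is mine, not NIST’s):

```python
# The five 800-26 assessment levels, paraphrased (CMMI in a trenchcoat).
LEVELS = {
    1: "documented policy",
    2: "documented procedures",
    3: "implemented procedures and controls",
    4: "tested and reviewed procedures and controls",
    5: "fully integrated procedures and controls (results feed back into 1-4)",
}

def maturity(answers):
    """Return the highest level for which every lower level is also a 'yes'.

    `answers` maps level -> bool, i.e. the yes/no boxes on the questionnaire.
    """
    level = 0
    for n in sorted(LEVELS):
        if answers.get(n):
            level = n
        else:
            break
    return level

# A control with policy and procedures in place but nothing implemented yet:
print(maturity({1: True, 2: True, 3: False}))  # -> 2
```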

Now for the bad things:  some of the questions in 800-26 were, um… I guess the phrase would be “irrelevant”, “deprecated due to time”, or even “worn around the edges”.  The original 800-26 was good as a stop-gap measure; now it’s fallen into the class of “Cute, reminds me of the halcyon days of 2003 when we were so naive in our desire to rid the world of unencrypted telnet sessions”.

Our friends at NIST are going through a revision of 800-26 and have “pulled SP 800-26 off the market” for the time being.  Sometime in the future it will be a questionnaire based on SP 800-53, the catalog of controls we all know and love.  The idea is that if you have a low-impact/criticality system, you can do a self-assessment using the new and improved 800-26 and it satisfies quite a few of your security control requirements.  And hey, we all know that assessment of any IT system begins with self-assessment as a sort of gap assessment:  where are you now, where do you need to be, and how do you bridge the gap between those two points?

Of course, the concept of relying on self-assessment for security makes me cringe deep down inside, but keep in mind that this is only for low-criticality systems, which means they do not include PII, financial data, or classified information.  However, if you’re a FISMA-hater, you can always point to 800-26 and say “see, they think that by filling out a questionnaire, they’re making their IT more secure”.

Only here’s the problem:  I still see people on teh Intarweb telling others to go “Read the Fine Manual” that is 800-26.  I know of at least one agency that requires a completed self-assessment to be submitted as part of its C&A package, and usually it’s treated as a simple checkbox:  have you filled one out or not?

The CISO deep down inside of me still wants to know where the added value is.  Sounds to me like we have the typical case of “Security Wonks Gone Wild”: we’re so obsessed with filling out checklists and forms that we’ve lost track of what our original intent was.

Now if you know me, you’ll remember that I usually don’t complain about something without having an alternative.  In this case, my alternative is this:  don’t use 800-26 or recommend it to others, and please do point out to people who require you to use 800-26 that its use has been rescinded by NIST and that your organization’s policy should have changed to keep up.

This is the official story from NIST; keep the link handy for the future:

Status of NIST Special Publication 800-26, Security Self-Assessment Guide for Information Technology Systems

NIST SP 800-26 is superseded by NIST SP 800-53 and the draft NIST SP 800-53A.

Agencies are required to use FIPS 200/NIST Special Publication 800-53 for the specification of security controls and NIST Special Publication 800-53A for the assessment of security control effectiveness.




Posted in FISMA, What Doesn't Work | 5 Comments »

Towards Actionable Metrics

Posted March 4th, 2008 by

Ah yes, our favorite part of FISMA:  the ongoing reporting of metrics to OMB.  Last year’s guidance on what to report is in OMB Memo 07-19.  It’s worth the time to read, and you probably won’t be able to follow the rest of this blog post if you don’t at least skim it to find out what kinds of items get reported.

Still haven’t read it?  Fer chrissakes, just look at pages 24-28; it’s a fast read.

If you look through the data that OMB wants, there are 2 recurring themes:  What is the scope/extent/size of your IT systems, and how well are you doing what we told you to do to protect them?  In other words, how effectively are you, Mr CISO, executing at the operational level?

We’re missing one crucial bit of process here: what are we actually going to do with scoping metrics and operational performance metrics at the national, strategic level?  What we are collecting and reporting are primarily operational-level metrics that any good CISO should at least know, or be able to guess at, in order to do their job, but they’re not really the type of metrics that we need to be collecting at levels above the CISO unless our sole purpose is to watch over the CISO’s shoulder.

As our metrics gurus will point out, the following are characteristics of good metrics:

  • Easy to collect:  I think the metrics that OMB is asking for are fairly easy to collect now that people know what to expect.  Originally, they were not.
  • Objective:  Um, I’ll intentionally side-step this one.  Suffice it to say that I’ve heard the same story from several people, and the punch line goes something like “Your security can’t be this good, we’ve already decided that you’re getting a ‘D’.”
  • Consistent:  Our consistency is inconsistent.  Look at how many times the FISMA grading scale has changed, and we still wonder why people think it’s not rooted in any kind of reality.  And yes, I’m advocating yet another change, so I’m probably more an accomplice than not.
  • Relevant:  We do a fairly good job at this.  Scoping and performance metrics are fairly relevant.  I have some questions about whether our metrics are relevant at the appropriate level, but I’ve already mentioned that.
  • Actionable:  This is where I think we fall apart because we’re collecting metrics that we’re not really using for anything.  More on this later….

Now, as Dan Geer says in his outstanding metrics tutorial, the key to metrics is to start measuring anything you can (caveat: 6-MB PDF).  The line of thought goes that if you can collect a preliminary set of data and do some analysis on it, it will tell you where you really need to be collecting metrics.

The techie version of this is that you will blow away the first server install you do within six months, because by then you know better how you operate and what the configuration really needs to be.

Now ain’t that special?  =)

So the question I pose is this:  after 6 years, have we reached the watershed point where we’ve outgrown our initial set of metrics and are ready to tailor our metrics based on what we now know?

I think the answer is yes.  Applying our criteria for good metrics, what we need is a good set of questions to answer:

  • What national-level programs can reduce the aggregate risk to the government?
  • What additional support do the agencies need and how do we translate that into policy?
  • As an executive branch, are we spending too much or too little on security?  Yes, I know what the analysts say, but their model is for companies, not the Government.
  • What additional threats are there to government information and missions?  Yes, I’m talking about state-sponsored hacking and some of the other things specific to the government.  Is it cost-effective to blackhole IP ranges for some countries for some services?
  • Is it more cost-effective to convert all the agencies to one single NSM/SIEM/$foo ala Einstein or is it better to do it on a per-agency basis?
  • What is the cost of implementing FDCC, and is it more cost-effective and risk-effective to do it immediately or to wait until the next tech refresh on desktops as we migrate to Vista or upgrade Vista to the next major service pack?
  • What is the cost-benefit-risk comparison for the Trusted Internet Connections initiative, and why did we come up with 50 as a number v/s 10 or 100?
  • Is there a common theme in unmitigated vulnerabilities (long-term, recurring POA&Ms) across all the agencies that can be “fixed” with policy and funding at the national level?  Say, for example, the fact that many systems don’t have a decent backup site, so why not a federal-level DR “Hotel”?
  • Many more that are above and beyond my ability to generate today…

In other words, I want to see metrics that produce action, or at least steer us to where we need to be.  I’ve said it before, I’ll say it again:  metrics without actionability mean that what we’ve ended up doing is information security management through public shame.  Yes, some of that is necessary as a catalyst: it generates public support, which generates Congressional support, which gets the laws on the books to initiate action.  But do we still need it now that we have those pieces in place?

If I had my druthers, this is what I would like to see happen; maybe one day I’ll get somebody’s attention:

  • OMB and GAO directly engage Mr Jacquith to help them build a national-level metrics program.
  • We produce metrics that are actionable.
  • We find a way to say what our problems are without overreacting.  I don’t know if this can happen because of cultural issues.
  • We share the metrics and the corresponding results with the information security management world because we’ve just generated the largest-scale metrics program ever. 

And oh yeah, while I’m making wishes, I want a friggin’ pony for Christmas! =)




Posted in FISMA, What Doesn't Work, What Works | 3 Comments »

Been Off Teaching

Posted February 27th, 2008 by

I taught a two-day seminar on Certification and Accreditation (NIST SP 800-37) yesterday and the day before.  It’s fun but tiring, and yesterday I definitely got worked pretty hard, teaching 800-53, 800-53A, and C&A in the SDLC.

I like to teach because I always learn when I do it.  But then again, I learn when I blog, too.

Anyway, revelations from yesterday, and things that I don’t really have an answer to yet:

#1 We need a better tool than a POA&M because we’re trying to use it in two different ways.  For those of you who don’t govorit’ govie, a POA&M is a “Plan of Action and Milestones”, what you in the civilian world would call an action-item list, a punch-list, or even a list of vulnerabilities.  The problem is that we’re trying to use the POA&M both as a short-term task list and as a long-term strategic planning tool.  I need to be able to do both, and I’m not sure one POA&M list is the end-all be-all.  What I really need is two lists: one with a 30-60-90-day scope that is the ISSO/project team’s view, and one with a long-term 1-2-5-year scope to mitigate programmatic, enterprise-wide vulnerabilities that require CapEx or other investment, such as standing up a secondary data center to support DR/COOP/BCP/$FooFlavorOfTheMonth.
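
Here’s a rough sketch of what I’m picturing: one set of POA&M entries feeding two different views.  The field names and sample entries are mine, not anything out of OMB guidance:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PoamItem:
    weakness: str
    milestone: str
    due: date
    needs_capex: bool = False   # programmatic fixes usually need real money

poam = [
    PoamItem("Telnet enabled on legacy app server", "disable telnet, enforce SSH",
             date(2008, 4, 15)),
    PoamItem("No alternate processing site", "stand up secondary data center for DR/COOP",
             date(2010, 9, 30), needs_capex=True),
]

today = date(2008, 3, 1)

# View 1: the ISSO/project team's 30-60-90-day task list.
tactical = [i for i in poam if (i.due - today).days <= 90 and not i.needs_capex]

# View 2: the 1-to-5-year programmatic items that need planning and CapEx.
strategic = [i for i in poam if i.needs_capex or (i.due - today).days > 365]

print([i.weakness for i in tactical])   # short-term punch list
print([i.weakness for i in strategic])  # long-term investment list
```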

#2 We have two conflicting purposes in information security in Government.  One is presenting a zero-defects face to the world.  The other is being able to freely discuss problems so that we can get them fixed.  Understanding the dynamics between these two competing ideas is understanding why the Government succeeds in some areas and fails in others.  To be bluntly honest, I don’t think that security people, as a profession, have a good, valid model for dealing with this conflict, and until we do, we will have a significant cultural obstacle to get around.

#3 As a government (and as an industry), we are good at the tactical level and fairly good at the operational level, but where we need people’s thought-power to go is toward the strategic level.  This is my big heartburn about FISMA report cards:  what we should be doing is collecting Government-wide metrics in order to answer the questions we need to understand before we make strategic decisions.  As it is right now, our strategic moves are ad hoc and consist mostly of trying to upscale some good security concepts (FDCC, limiting Internet connections, etc.) into something that might or might not work at such a huge, megalithic scale.

#4 We’ve bought into the idea that CISOs work for the CIO.  This is old-school stylie, and I’m not convinced that this is the way to do it.  If you look at the security controls in SP 800-53, there are activities entirely out of scope for the IT department.  Usually these involve gates, guards, and guns; personnel security; and facilities management.  For the time being, the official response is “well, the CISO has to work with the people in charge of those areas to get the job done”, and I’m thinking that maybe we’ve done a disservice to the senior security officer in our agencies by not having them report directly to the agency head.  Maybe we need true CSOs to take care of the non-IT security aspects and a CISO to take care of the geekspace.

The funny thing to me is that some of our students come in expecting to get spoon-fed the one true way to do C&A, and what most of them walk away with is food for thought on the strengths and weaknesses of how we do business as Government information security people and how we can make it better.

The last thing that I noticed yesterday after all the classes were over:  we taught a C&A class and did not have a dedicated session on what a System Security Plan is and how to write one.  Deep down inside, I like this, because if you do the right things security-wise, you’ll find that the SSP practically writes itself.  =)




Posted in FISMA, NIST, Rants, What Doesn't Work | 2 Comments »

Government Can’t Turn on a Dime, News at 11

Posted February 27th, 2008 by

Are we done with the Federal Desktop Core Configuration yet? Are we compliant with OMB Memo 07-11?? Have we staved off dozens of script-kiddies armed with nmap and some ‘sploits they downloaded from teh Intarweb, all through hardening our desktops to the one true standard?

No? I didn’t think we would be. Of course, neither did the CISOs and other security managers out there in the agencies. It’s too much too fast, and the government is too large to turn on a dime. Or even a quarter, for that matter. =)

Now get ready for a blamestorm at the end of the month. By that time, all the agencies are supposed to report on their status to OMB. It’s not going to be pretty, but it’s hardly unexpected.

So why haven’t we finished this yet? Inquiring minds want to know.

Well, it all goes back to the big question of “how many directions can today’s government CISO be pulled in?” Think about it: you’ve got IPv6, HSPD-12, all the PII guidance (Memo 06-16 et al), reducing Internet connections down to 50, aligning your IT systems with the Federal Enterprise Architecture, getting your Internet connections monitored by Einstein, and the usual administrative overhead. That’s too many major initiatives at once, and a good way to be torn in too many directions at the same time. In government-speak, these are all what we call “unfunded mandates”, and one of those is bad enough to cripple your budget, much less a handful of them.

Where we’re at right now with FDCC is that the implementers are finding out which applications are broken, and we’re starting to impact operations: people not being able to get the job done. Yes, this is the desired effect; it puts pressure on the OS vendor and the application vendors, and it’s a good thing, IMO: we won’t buy your software if it doesn’t support our security model, and we’ll take our $75B IT budget with us. Suddenly, it’s the gorilla of market pressure throwing its weight around, and the BSOFH inside me likes this.

Now don’t get me wrong, I’m a big believer in FDCC (for the Government, and with a payoff for the civilian world, too), and I think it’s security-sound once it’s implemented, but in order for it to work, the following “infrastructure” needs to be in place:

  • An official image shared between agencies
  • Ability to buy a hardened FDCC OS as part of purchasing the hardware
  • Microsoft rolling FDCC into its standard COTS build that it offers to the rest of the world
  • Applications that are certified to run on the “one standard to rule them all” and on a list so I can pick one and know that it works
  • Security people who understand GPOs and that even though it’s a desktop configuration standard, it affects servers, too
  • An automated tool to validate technical policy compliance (there, I said it, and in this space it actually makes sense for a change); a bare-bones sketch of the idea follows this list
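
On that last item, here’s a bare-bones sketch of the idea: diff what a desktop reports against the baseline values.  The setting names and values below are placeholders, not the real FDCC baseline, and real tools do this with SCAP content rather than a hand-rolled dict:

```python
# A bare-bones version of "automated tool to validate technical policy
# compliance": compare the settings a box reports against baseline values.
# Setting names and values are hypothetical, for illustration only.
FDCC_BASELINE = {
    "MinimumPasswordLength": 12,
    "AccountLockoutThreshold": 5,
    "ScreenSaverTimeoutSeconds": 900,
}

def check_compliance(actual):
    """Return a list of (setting, expected, actual) deviations."""
    return [(k, expected, actual.get(k))
            for k, expected in FDCC_BASELINE.items()
            if actual.get(k) != expected]

desktop = {"MinimumPasswordLength": 8, "AccountLockoutThreshold": 5}
for setting, expected, found in check_compliance(desktop):
    print(f"DEVIATION: {setting} expected {expected}, found {found}")
```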

Until you have these things, what OMB is asking for means the agencies get squeezed between a vendor who can’t ship a default-hardened OS, lazy application vendors who won’t or can’t fix their software, and the 5+ levels of oversight watching over the shoulder of the average ISSO at the implementation level. In short, we’re throwing the implementers under the bus and making them do our dirty work because at the national level we have failed to build the right kind of influence over the vendors.

Gosh, it sounds like this would go so much better if we phased in FDCC along with the next tech refresh of our desktops, doesn’t it? That’s how the “sane world” would tackle something like this. Not a sermon, just a thought. =)




Posted in DISA, FISMA, NIST, Rants, Technical, What Doesn't Work, What Works | 1 Comment »

In Search of a Better Catalog of Controls

Posted February 25th, 2008 by

I’ve been thinking about SP 800-53 lately because almost all of our efforts in the information assurance world are revolving around the catalog of controls concept.

Advantages that you get with a catalog of controls:

  • Standardization (Important when you’re dealing with auditors)
  • A minimum level of due diligence “across the board”
  • Each control can have an objective/intent, implementation guidelines, and test cases to validate effectiveness
  • Easy to score (my cynical readers can retort with “auditor checklist” any time now)
  • Applicable to a workforce with varying degrees of training and ability (i.e., you don’t all have to have PhDs in IA)

And the dark underbelly of a catalog of controls:

  • People just do the bare minimum because that’s all they get credit for
  • Controls still need to be tailored by highly-educated people
  • Enforces a “one size fits all” mentality which doesn’t work in the real world
  • Your security is not streamlined because you’re doing extra where you don’t need it and not enough where you do need it

If you had to recreate your own catalog of controls for internal and external use, how would you go about it?  Well, this is the approach that NIST used to make 800-53:

  • Collect all the control requirements from the various applicable laws (FISMA, Privacy Act, etc)
  • Collect all the control requirements from other applicable standards, policies, and procedures (PDD-63, OMB memoranda)
  • Collect best practices from vendors and experienced security managers
  • Consider the control requirements from comparable control catalogs (27001, PCI, A-123, etc)
  • Lump all the requirements together and take the high-water mark
  • Add some implied housekeeping controls to bridge the current state with the desired state (document your security controls, perform a formal risk assessment, etc)

So far, this all makes sense and is exactly how most of us would do it, right?  NIST even did us a HUGE favor and gave us a traceability matrix in the back of the book to show us where each control requirement came from.

Except for one thing:  this is a compliance-based model, and I’ve just described how to build your own compliance framework.  No, that wasn’t what I set out to prove; I realized it as I recreated the process.

We live in a risk management world, where what I really need to provide effective and adequate security is a control framework based on threat, risk, and countermeasure.  A catalog of controls does the first two for me, and what I end up with as the security practitioner downstream is just the list of countermeasures, detached from what we’re really trying to accomplish: the intent.

So there are two approaches that NIST took to minimize some of the negatives of the catalog of controls approach.  The first one is that they allow you to categorize your system (FIPS-199) and pick a level of controls.  It’s not perfect by any means, but it does that first part of tailoring the controls into large, xtra-large, and jumbonormous-sized buckets so you can choose your own level of involvement.  Sadly, though, most often this process involves the managerial equivalent of throwing darts at a dartboard with H, M, and L written on it.  Yes, it’s easy, but the best thing you can do for security in the government is to pick the right size of security helping that you’re going to eat.
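
For reference, the dart-throwing is supposed to work roughly like this: rate confidentiality, integrity, and availability, then take the high-water mark to pick your bucket of controls.  A minimal sketch, with made-up impact levels:

```python
# FIPS-199-style categorization, sketched: rate confidentiality, integrity,
# and availability as LOW/MODERATE/HIGH, then take the high-water mark to
# decide which size bucket of 800-53 controls you're signing up for.
RANK = {"LOW": 1, "MODERATE": 2, "HIGH": 3}

def high_water_mark(confidentiality, integrity, availability):
    """Return the overall system categorization."""
    return max((confidentiality, integrity, availability), key=RANK.get)

# Example (made-up impact levels for a hypothetical system):
print(high_water_mark("LOW", "MODERATE", "LOW"))  # -> MODERATE
```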

The second thing that NIST gives you is the ability to tailor your controls.  If you’re not doing business impact assessments for where you undershoot and overshoot the untailored 800-53 controls, you’re doing yourself a great injustice.

However, just like the compliance-driven model that it supports, a catalog of controls is only a 75% solution.  The geek in me cringes that we would be using a rock chisel for rocket surgery, but in effect that’s what we’ve done so far as an industry.  And yes, 75% is better than 0%, but it’s still 25% short of perfection.

Will our catalog of controls be around in 5 years?  I don’t know.  There might be other ways to do what we want to do, and I’m sure a couple of bright and not-so-bright people would step forward with their opinions if you asked them.  I for one would like to see two control catalogs:  one based on the minimum level of compliance where you do not have an option to deviate (and kept very small), and a second catalog that breaks down into threat-risk-countermeasure tuples so that I can exclude controls when the threat, and therefore the risk, does not exist.
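
To make that second catalog concrete, here’s a quick sketch of what I’m picturing; the threats and countermeasures listed are arbitrary examples, not a proposed standard:

```python
# Second catalog as threat-risk-countermeasure tuples: if the threat doesn't
# exist for your system, the countermeasure drops out of scope, and you can
# document why instead of just checking a box. Entries are illustrative only.
CATALOG = [
    ("theft of mobile devices",  "loss of PII on stolen laptop",        "full-disk encryption"),
    ("internet-based attackers", "compromise of public web server",     "network IDS monitoring"),
    ("physical intrusion",       "unauthorized access to data center",  "gates, guards, and guns"),
]

def applicable_controls(threats_present):
    """Keep only the countermeasures whose threat applies to this system."""
    return [counter for threat, risk, counter in CATALOG if threat in threats_present]

# A system with no mobile devices and no public-facing services:
print(applicable_controls({"physical intrusion"}))  # -> ['gates, guards, and guns']
```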

After all, that’s the point of tailoring security controls:  to answer the age-old question “How much is enough?”




Posted in FISMA, NIST, Risk Management, What Doesn't Work, What Works | 3 Comments »


