Some Thoughts on POA&M Abuse

Posted June 8th, 2009

Ack, Plans of Action and Milestones.  I love them and I hate them.

For those of you who “don’t habla Federali,” a POA&M is basically an IOU from the system owner to the accreditor: yes, we will fix something, but for some reason we can’t do it right now.  Usually these come from Security Test and Evaluation (ST&E) or Certification and Accreditation (C&A) findings.  In fact, at some places I’ve worked, they won’t create new POA&Ms unless they’re traceable back to ST&E results.

Functions that a POA&M fulfills (sketched in code after the list):

  • Tracks issues through to resolution
  • Serves as a “risk register”
  • Justifies budget requests
  • Generates mitigation metrics
  • Can be data-mined to find common vulnerabilities across systems
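
Since one record gets pulled in all of those directions at once, it helps to picture what it actually has to carry.  Here is a minimal sketch–the field names and functions are my own invention, not taken from any particular tracking tool–of a POA&M record plus the roll-up metric and the cross-system data-mining people expect out of it:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class Poam:
    """One weakness IOU from the system owner to the accreditor."""
    system: str             # the accredited system it belongs to
    weakness: str           # usually traceable back to an ST&E/C&A finding
    risk: str               # "high", "moderate", or "low"
    milestones: list[str]   # what gets done, and in what order
    due: date               # scheduled completion date
    est_cost: float         # feeds the budget justification
    closed: bool = False

def open_counts_by_system(poams: list[Poam]) -> Counter:
    """The 'how many are still open' metric that enterprise reporting drives."""
    return Counter(p.system for p in poams if not p.closed)

def common_weaknesses(poams: list[Poam], min_systems: int = 2) -> list[str]:
    """Data-mining: weaknesses that show up on more than one system."""
    systems: dict[str, set[str]] = {}
    for p in poams:
        systems.setdefault(p.weakness, set()).add(p.system)
    return [w for w, s in systems.items() if len(s) >= min_systems]
```

The point of the sketch is that one little record is being asked to serve the tracking function, the metrics function, the budget function, and the data-mining function all at once–which is exactly where the trouble starts.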

But today, we’re going to talk about POA&M abuse.  I’ve seen my fair share of this.

Conflicting Goals: The basic problem is that we want POA&Ms to satisfy too many conflicting functions.  For example, if we use the number of open POA&Ms as a metric to judge whether our system owners are doing their jobs and closing out issues, but we also report those same numbers at the department or enterprise level to OMB, then the incentive is to close them as fast as possible–even if that means losing the ability to track things at the system level, or giving up the time it takes to do things that solve long-term security problems.  Our vulnerability/weakness/risk management process ends up forcing us into creating small, easy-to-satisfy POA&Ms instead of long-term projects.

Near-Term vs. Long-Term:  If we set up POA&Ms with 30-, 60-, and 90-day due dates (for high, moderate, and low risks respectively), we don’t really have time to turn those POA&Ms into budget support.  If we build the budget up to three years in advance and even the lowest-risk finding is due in 90 days, then POA&Ms will have exactly zero input into the budget–unless we can drag one out for a couple of years, which is much too long to leave a weakness sitting open.
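
To make that arithmetic concrete, here is a toy check–the 30/60/90-day rule and the three-year budget lead time are just the numbers from the paragraph above, not anything out of official guidance:

```python
from datetime import date, timedelta

DAYS_TO_FIX = {"high": 30, "moderate": 60, "low": 90}
BUDGET_LEAD_YEARS = 3   # the budget being built today funds work ~3 years out

def can_feed_budget(risk: str, opened: date) -> bool:
    """Is this POA&M still open when the money it would justify finally arrives?"""
    due = opened + timedelta(days=DAYS_TO_FIX[risk])
    funds_arrive = opened.replace(year=opened.year + BUDGET_LEAD_YEARS)
    return due >= funds_arrive

# Every risk level comes back False: the POA&M is (supposed to be) closed
# years before the budget it could have justified ever shows up.
print({risk: can_feed_budget(risk, date(2009, 6, 8)) for risk in DAYS_TO_FIX})
```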

Bad POA&Ms:  Let’s face it, the one-for-one mapping of ST&E, C&A, and risk assessment findings to POA&Ms sometimes means you get POA&Ms that are “bad”–by that I mean they can’t be satisfied, or they’re not really something you need to fix.

Some of the bad POA&Ms I’ve seen, paraphrased from the originals:

  • The solution uses {Microsoft|Sun|Oracle} products, which have a history of vulnerabilities.
  • The project team needs to tell the vendor to put IPV6 into their product roadmap
  • The project team needs to implement X which is a common control provided at the enterprise level
  • The System Owner and DAA have accepted this risk but we’re still turning it into a POA&M
  • This is a common control that we really should handle at the enterprise level but we’re putting it on your POA&M list for a simple web application

Plan of Action for Refresh Philly photo by jonny goldstein.

Keys to POA&M Nirvana:  So over the years, I’ve observed some techniques for success in working with POA&Ms:

  • Agree on the evidence/proof of POA&M closure when the POA&M is created
  • Fix it before it becomes a POA&M
  • Have a waiver or exception process that requires a cost-benefit-risk analysis
  • Start with “high-level” POA&Ms and work down to more detailed POA&Ms as your security program matures
  • POA&Ms are between the System Owner and the DAA, but the System Owner can turn around and negotiate a corresponding POA&M with an outsourced IT provider

And then the keys to Building Good POA&Ms:

  • Actionable–i.e., there is something specific you need to do
  • Achievable–they can be accomplished
  • Demonstrable–you can demonstrate that the POA&M has been satisfied
  • Properly-Scoped–absorbed at the agency level, the common control level, or the system level
  • They are SMART: Specific, Manageable, Attainable, Relevant, and within a specified Timeframe
  • They are DUMB: Doable, Understandable, Manageable, and Beneficial

Yes, I stole the last two bullets from the picture above, but they make really good sense, in the same way that “know thyself” is awesome advice from the Oracle at Delphi.




When Standards Aren’t Good Enough

Posted May 22nd, 2009

One of the best things about being almost older than dirt is that I’ve seen several cycles within the security community.  Just like fashion and ladies’ hemlines, if you pay attention long enough, you’ll see history repeat itself, or something that closely resembles history.  Time for a short trip “down memory lane…”

In the early days of computer security, all eyes were fixed on Linthicum and the security labs associated with the NSA.  In the late ’80s and early ’90s the NSA evaluation program was notoriously slow–glacial would be a word one could use.  Bottom line, the process just wasn’t responsive enough to keep up with the changes and improvements in technology.  Products would be in evaluation for years, and by the time they came out of the process their enabling technology was nearly obsolete.  It didn’t matter; it was the only game in town until NIST and the Common Criteria labs came onto the scene.  This has worked well, but the reality is that it’s not much better at vetting and moving technology from vendors to users.  The evaluation process takes time, and time means money–it also means that the code submitted for evaluation will most likely be several revisions old by the time it emerges.  Granted, it may only be 6 months, or it might take a year; regardless, this is far better than before.

So, practically speaking: if the base version of FooOS submitted for evaluation is, say, Version 5.0.1, several revisions–each solving operational problems affecting the organization–may have been released since then.  We may find that we need to run Version 5.6.10r3 in order to pass encrypted traffic on the network.  Because we encrypt traffic, we must use FIPS 140-2 Level 2 validated code–but in the example above, the validated version of FooOS will not work in our network.  What does the CISO do?  We’ll return to this in a moment; it gets better!

In order to reach levels of FIPS-140 goodness, one vendor in particular has instituted “FIPS Mode.”  What this does is require administration of the box from a position directly in front of the equipment, or at the length of your longest console cable.  Clearly, this is not suitable for organizations with equipment deployed worldwide to locations that do not have qualified administrators or network engineers.  Further, having to fly a technician to Burundi to clear sessions on a box every time it becomes catatonic is ridiculous at worst; at best, it’s not in accordance with the network concept of operations.  How does the CISO propose a workable, secure solution?


Standard Hill photo by timparkinson.

Now to my point.  (about time Vlad)   How does the CISO approach this situation?  Allow me to tell you the approach I’ve taken….

1. Accept the fact that once FooOS has achieved its FIPS-140 validation, the modules of code within the OS that implement cryptographic functionality have most likely not been changed in follow-on versions.  This also means you have to assume the vendor has done a good job of documenting the changes to their baseline in their release notes, and that they HAVE modular code…
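
A sketch of what that assumption looks like as record-keeping–the component names and version strings below are illustrative, not from any real product:

```python
# Hypothetical example: track where the fielded build has drifted from the
# FIPS-validated baseline so the deviation and its rationale can be documented.
VALIDATED = {"os": "FooOS 5.0.1", "crypto_module": "foo-crypto 1.2"}
RUNNING   = {"os": "FooOS 5.6.10r3", "crypto_module": "foo-crypto 1.2"}

def drift(validated: dict, running: dict) -> dict:
    """Components where the running version differs from the validated one."""
    return {name: (validated[name], running.get(name))
            for name in validated if running.get(name) != validated[name]}

# If the cryptographic module itself is unchanged, the argument in step 1 is
# that the validated crypto is still doing the encrypting -- write that down.
print(drift(VALIDATED, RUNNING))  # {'os': ('FooOS 5.0.1', 'FooOS 5.6.10r3')}
```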

2. Delve into the vendor documentation and FIPS-140 itself to find out exactly what “FIPS Mode” is, what it buys you, and what it requires.  Much of the written documentation in the standard deals with physical security of the cryptographic module itself (e.g., tamper-evident seals)–but most helpful is Table 1.

Summary of Security Requirements from FIPS 140-2 (Table 1):

  • Cryptographic Module Specification (Levels 1–4): Specification of cryptographic module, cryptographic boundary, Approved algorithms, and Approved modes of operation.  Description of cryptographic module, including all hardware, software, and firmware components.  Statement of module security policy.
  • Cryptographic Module Ports and Interfaces – Levels 1–2: Required and optional interfaces.  Specification of all interfaces and of all input and output data paths.  Levels 3–4: Data ports for unprotected critical security parameters logically or physically separated from other data ports.
  • Roles, Services, and Authentication – Level 1: Logical separation of required and optional roles and services.  Level 2: Role-based or identity-based operator authentication.  Levels 3–4: Identity-based operator authentication.
  • Finite State Model (Levels 1–4): Specification of finite state model.  Required and optional states.  State transition diagram and specification of state transitions.
  • Physical Security – Level 1: Production grade equipment.  Level 2: Locks or tamper evidence.  Level 3: Tamper detection and response for covers and doors.  Level 4: Tamper detection and response envelope; EFP or EFT.
  • Operational Environment – Level 1: Single operator.  Executable code.  Approved integrity technique.  Level 2: Referenced PPs evaluated at EAL2 with specified discretionary access control mechanisms and auditing.  Level 3: Referenced PPs plus trusted path evaluated at EAL3 plus security policy modeling.  Level 4: Referenced PPs plus trusted path evaluated at EAL4.
  • Cryptographic Key Management (Levels 1–4): Key management mechanisms: random number and key generation, key establishment, key distribution, key entry/output, key storage, and key zeroization.  Levels 1–2: Secret and private keys established using manual methods may be entered or output in plaintext form.  Levels 3–4: Secret and private keys established using manual methods shall be entered or output encrypted or with split knowledge procedures.
  • EMI/EMC – Levels 1–2: 47 CFR FCC Part 15, Subpart B, Class A (business use).  Applicable FCC requirements (for radio).  Levels 3–4: 47 CFR FCC Part 15, Subpart B, Class B (home use).
  • Self-Tests (Levels 1–4): Power-up tests: cryptographic algorithm tests, software/firmware integrity tests, critical functions tests.  Conditional tests.
  • Design Assurance – Level 1: Configuration management (CM).  Secure installation and generation.  Design and policy correspondence.  Guidance documents.  Level 2: CM system.  Secure distribution.  Functional specification.  Level 3: High-level language implementation.  Level 4: Formal model.  Detailed explanations (informal proofs).  Preconditions and postconditions.
  • Mitigation of Other Attacks (Levels 1–4): Specification of mitigation of attacks for which no testable requirements are currently available.

Bottom line–some “features” are indeed useful, but this particular vendor’s one-size-fits-all implementation tends to rule out using the feature at all in some operational scenarios (most notably, the one your humble author is dealing with).  BTW, changing vendors is not an option.

3. Upon analyzing the FIPS requirements against operational needs and (importantly) the environment the equipment is operating in, one has to draw the line between “operating in vendor FIPS Mode” and using FIPS 140-2 validated encryption.

4. Document the decision and the rationale.

Once again, security professionals have to help managers strike a healthy balance between “enough” security and operational requirements.  You would think that using approved equipment, operating systems, and vendors that have been through the CC evaluation process would be enough.  Reading the standard, we see the official acknowledgement that Your Mileage May Indeed Vary™:

“While the security requirements specified in this standard are intended to maintain the security provided by a cryptographic module, conformance to this standard is not sufficient to ensure that a particular module is secure. The operator of a cryptographic module is responsible for ensuring that the security provided by a module is sufficient and acceptable to the owner of the information that is being protected and that any residual risk is acknowledged and accepted.”  (FIPS 140-2, Section 15, Qualifications)

The next paragraph constitutes validation of the approach I’ve embraced:

“Similarly, the use of a validated cryptographic module in a computer or telecommunications system does not guarantee the security of the overall system. The responsible authority in each agency shall ensure that the security of the system is sufficient and acceptable.”  (Emphasis added.)

One could say, “it depends,” but you wouldn’t think so at first glance – it’s a Standard for Pete’s sake!

Then again, nobody said this job would be easy!

Vlad




NIST Framework for FISMA Dates Announced

Posted April 10th, 2009

Some of my friends (and maybe myself) will be teaching the NIST Framework for FISMA in May and June with Potomac Forum.   This really is an awesome program.  Some highlights:

  • Attendance is limited to Government employees only so that you can talk openly with your peers.
  • Be part of a cohort that trains together over the course of a month.
  • The course is 5 Fridays so that you can learn something then take it back to work the next week.
  • We have a Government speaker every week, from the NIST FISMA guys to agency CISOs and CIOs.
  • No pitching, no marketing, no product placement (OK, maybe we’ll go through DoJ’s CSAM, but only as an example of what kinds of tools are out there), no BS.

See you all there!




Analyzing Fortify’s Plan to “Fix” the Government’s Security Problem

Posted April 1st, 2009

So I like reading about what people think about security and the Government.  I know, you’re all surprised, so cue shock and awe amongst my reader population.

Anyway, this week it’s Fortify and a well-placed article in NextGov.  You remember Fortify, they are the guys with the cool FUD movie about how code scanning is going to save the world.  And oh yeah, there was this gem from SC Magazine: “Fortify’s Rachwald agrees that FISMA isn’t going anywhere, especially with the support of the paper shufflers. ‘It’s been great for people who know how to fill out forms. Why would they want it to go away?’”  OK, so far my opinion has been partially tainted–somehow I think I’m supposed to take something here personally, but I’m not sure exactly what.

Fortify has been trying to step up to the Government feed trough over the past year or so.  In a rare moment of being touchy-feely intuitive, I get the feeling from their marketing that Fortify is a bunch of Silicon Valley technologists who think they know what’s best for DC–digital carpetbagging.  Nothing new; all y’all have been doing this for as long as I’ve been working with the Government.

Now don’t get me wrong, I think Fortify makes some good products.  I think that universal adoption of code scanning, while not as foolproof as advertised, is a good thing.  I also think that software vendors should use scanning tools as part of their testing and QA.

Fortified cité of Carcassonne photo by http2007.

Now for a couple basic points that I want to get across:

  • Security is not a differentiator between competing products unless it’s the classified world. People buy IT products based on features, not security.
  • The IT industry is a broken market because there is no incentive to sell secure code.
  • In fact, software vendors are often rewarded by the market: if you arrive first to market and gain the largest market penetration, you become the de facto standard.
  • The vendors are abstracted from the problems faced by their customers thanks to the terms of most EULAs–they don’t really have to fix security problems since the software is sold with no guarantees.
  • The Government is dependent upon the private sector to provide it with secure software.
  • It is a conflict of interest for the vendors to accurately represent their flaws unless the Government is going to pay to have them fixed.
  • It’s been proposed numerous times that the Government use its “huge” IT budget to require vendors to sell secure products.
  • How do you determine that a vendor is shipping a secure product?

Or more to the point, how do I as a software vendor reasonably demonstrate that I have provided a secure product to the Government without making the economics infeasible for smaller vendors, creating an industry of certifiers a la PCI-DSS and SOX, or dramatically lengthening my development/procurement schedules?  Think of the problems with Common Criteria, because that was our previous attempt.

We run into this problem all the time in Government IT security, but it’s mostly at the system integrator level.  It’s highly problematic to make contract requirements that are objective, demonstrable, and testable yet still take into account threats and vulnerabilities that do not exist today.

I’ve spent the past month writing a security requirements document for integrated special-purpose devices sold to the Government.  Part of this exercise was the realization that I can require that the vendor perform vulnerability scanning, but it becomes extremely difficult to build any amount of common sense into requirements when it comes to deciding what to fix.  “That depends” keeps coming back to bite me in the buttocks time and time again.  At this point, I usually tell my boss how much I hate security folks, myself included, because of our indecisiveness.

The end result is that I can specify a process (Common Criteria for software/hardware, Certification and Accreditation for integration projects) and an outcome (certification, product acceptance, “go live” authorization), leave the decision-making authority with the Government, and put it in the hands of contracts officers and subject-matter experts who know how to manage security.  Problems with this technique:

  • I can’t find enough contracts officers who are security experts.
  • As a contractor, how do I account for the costs I’m going to incur since it’s apparently “at the whim of the Government”?
  • I have to apply this “across the board” to all my suppliers due to procurement law.  This might not be possible right now for some kinds of outsourced development.
  • We haven’t really solved the problem of defining what constitutes a secure product.
  • We’ve just deferred the problem from a strategic solution to a tactical process depending on a handful of clueful people.

Honestly, though, I think that’s as good as we’re going to get.  Ours is not a perfect world.

And as for Fortify?  Guys, quit trying to insult the people who will ultimately recommend your product.  It’s bad mojo, especially in a town where the toes you step on today may be attached to the butt you kiss tomorrow.  =)




Certification and Accreditation Seminar, March 30th and 31st

Posted March 13th, 2009

We’ve got another good US Government Security Certification and Accreditation (C&A) Seminar/Workshop coming up at the end of March with Potomac Forum.

Graydon McKee (Ascension Risk Management and its associated blog) and Dan Philpott (Fismapedia Mastermind and Guerilla-CISO Contributor) are providing the core of the instruction, with a couple of others thrown in to round it all out.  I might stop by if I have the time.

What we promise:

  • An opportunity to hear NIST’s version of events and what they’re trying to accomplish
  • An opportunity to ask as many questions as you possibly can in 2 days
  • Good materials, well put together
  • An update on some of the recent security initiatives
  • An opportunity to commiserate with security folks from other agencies and contractors
  • No sales pitches and no products

See you all there!




It’s a Blogiversary

Posted February 23rd, 2009

While I’ve been busy running all over the US and Canada, I missed a quasi-momentous date: the second anniversary of the Guerilla-CISO.  You can read the “Hello World” post if you want to see why this blog was started.

Blah Blah blah much has happened since then.  I swapped out blog platforms early on.  I started playing the didgeridoo.  I went on a zombie stint for 9 months.  I switched employers.  I added FISMA lolcats (IKANHAZFIZMA).  I started getting the one-liners out on twitter.  Most momentous is that I’ve picked up other authors.

  • Ian Charters (ian99), an international man of mystery, is a retired govie with a background in attacking stuff and doing forensics.
  • Joe Faraone (Vlad the Impaler), besides being a spitting image of George Lucas, is the guy who did one of the earliest certifications and accreditations and informally laid down some of the concepts that became doctrine.
  • Dan Philpott (danphilpott), Government 2.0 security pundit extraordinaire, is the genius behind Fismapedia.org and one of the sharpest guys I know.
  • Mini-Me, he’s short, he’s bald, and he guest-blogs from time to time about needing a hairdryer.

So in a way, I’ve become “the pusher”–the guy who harasses the other authors until they write something just to quiet me down for a couple of weeks.





