Federated Vulnerability Management

Posted July 14th, 2009

Why hello there, private-sector folks.  It’s no big surprise that I work in the US Federal Government space, and we have some unique challenges of scale.  Glad to meet you; I hear you’ve got the same problems, only not at the same kind of scale as the US Federal Government.  Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments.  We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”.  We’re also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-’80s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISO’s office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion.
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or, worse, a .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what’s been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something because our spreadsheet doesn’t match up with the “facts on the ground”.
  • It doesn’t tie into our other management tools like Plans of Action and Milestones (POA&M).  Those are usually managed in a different application than the technical parts, which means we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really what I want is for every agency to have an “orchestrator” à la Ed Bellis’s little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort.  Servers, workstations, laptops, all of it.  Yes, databases too.  Printers–yep.  If we can get network devices reporting config info via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too.  We don’t even really have to push patches using these tools–what I’m primarily concerned with at this point is the ability to pull reports.

I group all of my IT assets into buckets of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.
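Just so we’re clear on how simple this core really is, here’s a toy sketch of the registry I’m picturing.  Every name in it (Asset, Orchestrator, the sample systems and tools) is mine for illustration, not anybody’s real SCAP product API:

```python
# Toy sketch of an orchestrator asset registry; all names are illustrative.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Asset:
    hostname: str
    asset_type: str                     # server, workstation, printer, router...
    system: str                         # the FISMA "system" bucket it belongs to
    patch_source: Optional[str] = None  # which patch management tool reports on it

class Orchestrator:
    """Collects asset reports from every patch/config management tool."""

    def __init__(self) -> None:
        self._assets: Dict[str, Asset] = {}

    def report_in(self, asset: Asset) -> None:
        # Every tool that knows about an asset pushes a record here.
        self._assets[asset.hostname] = asset

    def by_system(self, system: str) -> List[Asset]:
        # Group by bucket so we know who owns the problem when one shows up.
        return [a for a in self._assets.values() if a.system == system]

# Usage: the workstation patch tool and the DBA team both report in.
orch = Orchestrator()
orch.report_in(Asset("ws-0042", "workstation", "GSS-LAN", "wsus"))
orch.report_in(Asset("db-01", "database", "Payroll-App", "dba-scripts"))
print([a.hostname for a in orch.by_system("Payroll-App")])  # ['db-01']
```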

We do periodic network scanning to identify everything on our network and feed them into the orchestrator.  We do regular vulnerability scans and any findings feed into the orchestrator.  The more data, the better aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices.  Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets.  Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it.  Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
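That correlation step is honestly just set arithmetic.  A minimal sketch, assuming we can dump hostnames out of both the scanner and the patch tools (the ticket format is invented):

```python
# Toy correlation: anything the scanner sees that no patch tool claims
# becomes an "unmanaged device" ticket. All formats here are invented.
from typing import List, Set

def find_unmanaged(scanned: Set[str], managed: Set[str]) -> List[str]:
    return sorted(scanned - managed)

# In real life these sets come from scan results and the patch
# management feeds; hardcoded here for illustration.
scanned_hosts = {"ws-0042", "db-01", "printer-7", "mystery-box"}
managed_hosts = {"ws-0042", "db-01"}

for host in find_unmanaged(scanned_hosts, managed_hosts):
    print(f"TICKET: unmanaged device on the wire: {host}")
# The more scan feeds pour in from all those segmented zones, the better
# the odds that mystery-box gets caught.
```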

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
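Roughly what I want that overnight run to look like, sketched with a made-up advisory format.  Real SCAP content would be OVAL definitions keyed to CVEs; this stand-in just matches product and version strings against the inventory:

```python
# Toy version of the overnight "are we vulnerable?" run. A real feed
# would carry OVAL/CVE content from NVD; this stand-in advisory just
# matches product and version strings against the inventory.
from dataclasses import dataclass
from typing import Iterator, List, Tuple

@dataclass
class Advisory:
    """The IAVA-style alert pushed down from the Department."""
    cve_id: str
    product: str
    affected_versions: List[str]

# What the patch management feeds say we're running (invented records).
inventory = [
    ("ws-0042", "acroread", "8.1.2"),
    ("srv-web1", "httpd", "2.2.11"),
]

def overnight_report(advisory: Advisory) -> Iterator[Tuple[str, str]]:
    for host, product, version in inventory:
        if product == advisory.product and version in advisory.affected_versions:
            yield host, advisory.cve_id

# By the time I wake up, status has already rolled upstream, no spreadsheet.
alert = Advisory("CVE-2009-XXXX", "acroread", ["8.1.2", "9.0"])
for host, cve in overnight_report(alert):
    print(f"{host}: vulnerable to {cve}")
```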

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.
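The hardening-guide piece rides on XCCDF: the scanner emits result XML with a pass/fail per rule, and everything that failed becomes a workflow item.  A minimal sketch, with a hand-made and heavily trimmed XCCDF 1.1 fragment standing in for real scanner output:

```python
# Minimal sketch: pull the failed rules out of an XCCDF result document
# and feed them to the workflow queue. The XML below is hand-made and
# heavily trimmed, not real scanner output.
import xml.etree.ElementTree as ET

NS = {"x": "http://checklists.nist.gov/xccdf/1.1"}

results_xml = """
<TestResult xmlns="http://checklists.nist.gov/xccdf/1.1">
  <rule-result idref="rule-password-max-age"><result>fail</result></rule-result>
  <rule-result idref="rule-telnet-disabled"><result>pass</result></rule-result>
</TestResult>
"""

root = ET.fromstring(results_xml)
for rule_result in root.findall("x:rule-result", NS):
    if rule_result.findtext("x:result", namespaces=NS) == "fail":
        # In the real workflow this opens a ticket automagically.
        print("WORKFLOW ITEM:", rule_result.get("idref"))
```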

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department, and the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government.  Want to be bold?  Let Vivek K and the Sunlight Foundation at the data feeds and have truly open and transparent, “Unbreakable Government 2.1”.  Who needs FISMA report cards when our vulnerability data is on display?
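And the rollup itself is plain aggregation.  A toy sketch of what the Department-level (or executive-branch) dashboard boils down to, with invented agency numbers:

```python
# Toy rollup: each agency's orchestrator exports a summary, and the
# Department (or US-CERT, or OMB) just aggregates. Numbers are invented.
from typing import Dict

agency_feeds: Dict[str, Dict[str, int]] = {
    "Agency-A": {"open": 120, "closed": 980},
    "Agency-B": {"open": 45, "closed": 310},
}

def roll_up(feeds: Dict[str, Dict[str, int]]) -> Dict[str, float]:
    total_open = sum(f["open"] for f in feeds.values())
    total_closed = sum(f["closed"] for f in feeds.values())
    pct = 100.0 * total_closed / (total_open + total_closed)
    return {"open": total_open, "closed": total_closed, "pct_complete": round(pct, 1)}

# One line on the executive-branch dashboard, current as of the last
# feed instead of the last spreadsheet meeting.
print(roll_up(agency_feeds))
```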

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration.  OK, I get the firewall pictures and zones of trust; now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere.  OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information, this is basic data flow stuff.  US-CERT in a future incarnation could be the top-level aggregator, maybe.  Right now I would be happy building aggregation up to the Department level because that’s the level at which we’re graded.

Understanding.  Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier through aggregation, correlation, status reporting, and dashboarding.  These are all concepts behind good IT management, so why shouldn’t we apply them to security management also?  Yes, I’ll have times when I’m behind on something or another, but guess what: I’m behind today and you just don’t know it.  However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time toward understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in.  It has to be reporting in 100% across the board, or you don’t have anything–back to spreadsheet hell for you.  And simply put, why don’t you have everything in the patch management system already?  Come on, that’s not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  And yes, that’s awesome: smaller, easily-satisfied POA&Ms are much easier to manage, provided that the administrative overhead for each of them is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof become easier because it’s all automated.  Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.
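A sketch of that retest loop, with states and a queue of my own invention; the point is that “fixed” is just a claim until the rescan agrees:

```python
# Toy retest loop; the states and queue are my invention. "Fixed" is
# only a claim until the rescan agrees.
from collections import deque
from typing import Callable

retest_queue: deque = deque()

def mark_fixed(finding_id: str) -> None:
    retest_queue.append(finding_id)   # claimed fixed, pending verification

def run_retests(scan: Callable[[str], bool]) -> None:
    while retest_queue:
        finding_id = retest_queue.popleft()
        if scan(finding_id):          # True = vulnerability still present
            print(f"{finding_id}: reopened, the fix didn't take")
        else:
            print(f"{finding_id}: closed, rescan output archived as evidence")

mark_fixed("CVE-2009-XXXX/srv-web1")
run_retests(scan=lambda finding: False)   # stub scanner for illustration
```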

Interfaces with existing FISMA management tools.  This one is tough.  But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results.  This class of software exists because none of the tools vendors really understand how the Government does security management, and I mean NONE of them.  There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly have become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups.  It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.



Posted in DISA, NIST, Rants, Technical | 12 Comments »

GAO’s 5 Steps to “Fix” FISMA

Posted July 2nd, 2009

Letter from GAO on how Congress can fix FISMA.  And oh yeah, the press coverage on it.

Now supposedly this was in response to an inquiry from Congress about “Please comment on the need for improved cyber security relating to S.773, the proposed Cybersecurity Act of 2009.”  This is S.773.

GAO is mixing issues and has missed the mark on what Congress asked for.  S.773 is all about protecting critical infrastructure.  It only rarely mentions government internal IT issues.  S.773 has nothing at all to do with FISMA reform.  However, GAO doesn’t have much expertise in cybersecurity outside of the Federal Agencies (they have some, but I would never call it extensive), so they reported on what they know.

The GAO report used the often-cited metric of an increase in cybersecurity attacks against Government IT systems growing from “5,503 incidents reported in fiscal year 2006 to 16,843 incidents in fiscal year 2008” as proof that the agencies are not doing anything to fix the problem.  I’ve questioned these figures before; they’re associated with the measurement problem and increased reporting requirements more than with an actual increase in attacks.  Truth be told, nobody knows if the attacks are increasing and, if so, at what rate.  I would guess they’re increasing, but we don’t know, so quit citing some “whacked” metric as proof.

Reform photo by shevy.

GAO’s recommendations for FISMA Reform:

Clarify requirements for testing and evaluating security controls.  In other words, the auditing shall continue until the scores improve.  Hate to tell you this, but really all you can test at the national level is whether the FISMA framework is in place; the execution of the framework (and by extension, whether an agency is secure or not) is largely untestable using any kind of framework.

Require agency heads to provide an assurance statement on the overall adequacy and effectiveness of the agency’s information security program.  This harkens back to the accounting roots of GAO.  Basically what we’re talking about here is for the agency head to attest that the agency has made the best effort it can to protect its IT.  I like part of this because part of what’s missing is “executive support” for IT security.  To be honest, though, most agency heads aren’t IT security dweebs; they would be signing an assurance statement based upon what their CIO/CISO put in the executive summary.

Enhance independent annual evaluations.  This has significant cost implications.  Besides, we’re getting more and more evaluations as time goes on, with a corresponding increase in audit burden.  IE, in the Government IT security space, how much of your time is spent providing proof to auditors versus building security?  For some people, it’s their full-time job.

Strengthen annual reporting mechanisms.  More reporting.  I don’t think it needs to get strengthened, I think it needs to get “fixed”.  And by “fixed” I mean real metrics.  I’ve touched on this at least a hundred times, go check out some of it….

Strengthen OMB oversight of agency information security programs.  This one gives me brain-hurt.  OMB has exactly the amount of oversight that they need to do their job.  Just like more auditing, if you increase the oversight and the people doing the execution have the same amount of people and the same amount of funding and the same types of skills, do you really expect them to perform differently?

Rybolov’s synopsis:

When the only tool you have is a hammer, every problem looks like a nail, and I think that’s what GAO is doing here.  Since performance in IT security is obviously down, they suggest that more auditing and oversight will help.  But then again, at what point does the audit burden tip to the point where nobody is really doing any work at all except for answering to audit requests?

Going back to what Congress really asked for, we run up against a problem.  There isn’t a huge set of information about how the rest of the nation is doing with cybersecurity.  There’s the Verizon DBIR, the Data Loss DB, some surveys, and that’s about it.

So really, when you ask GAO to find out what the national cybersecurity situation is, all you’re going to get is a bunch of information about how government IT systems line up and maybe some anecdotes about critical infrastructure.

Coming to a blog near you (hopefully soon): Rybolov’s 5 steps to “fix” FISMA.



Posted in FISMA | 2 Comments »

Some Thoughts on POA&M Abuse

Posted June 8th, 2009

Ack, Plans of Action and Milestones.  I love them and I hate them.

For those of you who “don’t habla Federali”, a POA&M is basically an IOU from the system owner to the accreditor that yes, we will fix something but for some reason we can’t do it right now.  Usually these are findings from Security Test and Evaluation (ST&E) or Certification and Accreditation (C&A).  In fact, some places I’ve worked, they won’t make new POA&Ms unless they’re traceable back to ST&E results.

Functions that a POA&M fulfills:

  • Tracks issues to resolution
  • Serves as a “risk register”
  • Justifies budget requests
  • Generates mitigation metrics
  • Supports data-mining to find common vulnerabilities across systems

But today, we’re going to talk about POA&M abuse.  I’ve seen my fair share of this.

Conflicting Goals: The basic problem is that we want POA&Ms to satisfy too many conflicting functions.  IE, if we use the number of open POA&Ms as a metric to determine whether our system owners are doing their jobs and closing out issues, but we also report those same POA&Ms at the enterprise level to OMB or the department, then there’s a conflict of interest: the pressure to close them out as fast as possible costs us the ability to track things at the system level or to spend time on work that solves long-term security problems.  Our vulnerability/weakness/risk management process forces us into creating small, easy-to-satisfy POA&Ms instead of long-term projects.

Near-Term v/s Long-Term:  If we set up POA&Ms with due dates of 30-60-90 days (for high, moderate, and low risks), we don’t really have time at all to turn these POA&Ms into budget support.  Well, if we manage the budget up to 3 years in advance and even low-risk findings get only 90 days, then that means we’ll have exactly 0 input into the budget from any POA&M unless we can delay the bugger for 2 years or so, much too long for it to actually be fixable.

Bad POA&Ms:  Let’s face it, sometimes the one-for-one mapping of ST&E, C&A, and risk assessment findings to POA&Ms means that you get POA&Ms that are “bad”, and by that I mean they can’t be satisfied or they’re not really something that you need to fix.

Some of the bad POA&Ms I’ve seen, these are paraphrased from the original:

  • The solution uses {Microsoft|Sun|Oracle} products, which have a history of vulnerabilities.
  • The project team needs to tell the vendor to put IPv6 into its product roadmap.
  • The project team needs to implement X, which is a common control provided at the enterprise level.
  • The System Owner and DAA have accepted this risk, but we’re still turning it into a POA&M.
  • This is a common control that we really should handle at the enterprise level, but we’re putting it on your POA&M list for a simple web application.

Plan of Action for Refresh Philly photo by jonny goldstein.

Keys to POA&M Nirvana:  So over the years, I’ve observed some techniques for success in working with POA&Ms:

  • Agree on the evidence/proof of POA&M closure when the POA&M is created
  • Fix it before it becomes a POA&M
  • Have a waiver or exception process that requires a cost-benefit-risk analysis
  • Start with “high-level” POA&Ms and work down to more detailed POA&Ms as your security program matures
  • POA&Ms are between the System Owner and the DAA, but the System Owner can turn around and negotiate a POA&M as a deliverable with an outsourced IT provider

And then the keys to Building Good POA&Ms:

  • Actionable–ie, they have something that you need to do
  • Achievable–they can be accomplished
  • Demonstrable–you can demonstrate that the POA&M has been satisfied
  • Properly-Scoped–absorbed at the agency level, the common control level, or the system level
  • They are SMART: Specific, Manageable, Attainable, Relevant, and within a specified Timeframe
  • They are DUMB: Doable, Understandable, Manageable, and Beneficial

Yes, I stole the last 2 bullets from the picture above, but they make really good sense, in the same way that “know thyself” is awesome advice from the Oracle at Delphi.
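Those criteria are lintable, which is sort of the point: if POA&Ms are going to behave like trouble tickets, the tooling can refuse bad ones at creation time.  A toy checker, with field names I made up (the 30-60-90 due dates come from the convention above):

```python
# Toy POA&M lint against the criteria above; all field names invented.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

DUE_DAYS = {"high": 30, "moderate": 60, "low": 90}  # the 30-60-90 convention

@dataclass
class Poam:
    weakness: str
    action: str    # Actionable: something somebody actually does
    evidence: str  # Demonstrable: proof-of-closure agreed up front
    scope: str     # Properly-scoped: "enterprise", "common", or "system"
    risk: str      # drives the due date

    def lint(self) -> List[str]:
        problems = []
        if not self.action:
            problems.append("not actionable: no corrective action stated")
        if not self.evidence:
            problems.append("closure evidence not agreed at creation")
        if self.scope not in ("enterprise", "common", "system"):
            problems.append("absorbed at the wrong level")
        return problems

    def due_date(self, opened: date) -> date:
        return opened + timedelta(days=DUE_DAYS[self.risk])

p = Poam("telnet enabled on db-01", "disable telnetd via config push",
         "rescan showing port 23 closed", "system", "high")
print(p.lint() or f"OK, due {p.due_date(date(2009, 6, 8))}")
```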



Posted in BSOFH, FISMA | No Comments »

Wanted: Some SCAP Wranglers

Posted May 18th, 2009

So I was doing my usual “Beltway Bandit Perusal of Opportunities for Filthy Lucre”, also known as diving into FedBizOpps, and I found this gem.  Basically what this means is that sometime this summer, NIST is going to put out an RFP for contractors to further develop SCAP using ARRA funds.

Keep in mind that this isn’t the official list of what NIST wants done under this contract, but it’s interesting as a look at where SCAP will go over the next couple of years:

  1. Evolution of the SCAP protocol and specifications thereof
  2. Feasibility studies, development, documenting, prototyping, and road-mapping of SCAP expansions (e.g., remediation capability) and analog protocols (e.g., Network Event Content Automation Protocol)
  3. Implementation and maintenance support for the Security Automation Content Validation Program
  4. Maintenance support for the SCAP Product Validation Program
  5. Pilot, beta, and production support for SCAP and security automation use-cases
  6. Content development, modification, and testing
  7. Infrastructure and reference implementation development in JAVA, C++, and C programming languages
  8. Data trust models and data provenance solutions.

So how do you play?  Well, the first thing is that you respond to the notice with a capabilities statement saying “yes, we have experience in doing what you want”–there is a list of specifics in the original notice.  Then sign up for FedBizOpps and follow the announcement so you can get changes and the RFP when it comes out.



Posted in NIST, Outsourcing | 5 Comments »

NIST Framework for FISMA Dates Announced

Posted April 10th, 2009

Some of my friends (and maybe myself) will be teaching the NIST Framework for FISMA in May and June with Potomac Forum.   This really is an awesome program.  Some highlights:

  • Attendance is limited to Government employees only so that you can talk openly with your peers.
  • Be part of a cohort that trains together over the course of a month.
  • The course is 5 Fridays so that you can learn something then take it back to work the next week.
  • We have a Government speaker every week, from the NIST FISMA guys to agency CISOs and CIOs.
  • No pitching, no marketing, no product placement (OK, maybe we’ll go through DoJ’s CSAM, but only as an example of what kinds of tools are out there), no BS.

See you all there!



Posted in NIST, Speaking | 1 Comment »

In Response to “Cyber Security Coming to a Boil” Comments….

Posted March 24th, 2009

Rybolov’s comment: This is Ian’s response to the comments on his post on Cybersecurity Coming to a Boil.  It was such a good dialog that he wanted to make a large comment, which, as we all know, eventually transforms itself into a blog post.  =)

You are making some excellent points; putting the leadership of the Administration’s new Cyber security initiative directly in the White House might appear to be a temporary solution or a quick fix. From my point of view, it looks more like an honest approach. By that I mean that I think the Administration is acknowledging a few things:

  • This is a significant problem
  • There is no coherent approach across the government
  • There is no clear leadership or authority to act on the issue across the government
  • Because of the perception that a large budget commitment will have to be allocated to any effective solution, many Agencies are claiming leadership or competing for leadership to scoop up those resources
  • The Administration does not know what the specific solution they are proposing is — YET

I think this last point is the most important and is driving the 60-day security assessment. I also think that assessment is much more complex than a simple review of FISMA scores for the past few years. I suspect that the 60-day review is also considering things like legal mandates and authorities for various aspects of Cyber security on a National level. If that is the case, I’m not familiar with a similar review ever having taken place.

2004 World Cyber Games photo by jurvetson.  Contrary to what the LiquidMatrix Security folks might think, the purpose of this post isn’t to jam “cyber” into every 5th word.  =)

So, where does this take us? Well, I think we will see the Cyber Security Czar propose a unified policy, a unified approach, and probably some basic enabling legislation. I suspect that this will mean that the Czar will have direct control over existing programs and resources. I think the Cyber Security Czar taking control of Cyber Security-related research programs will be one of the most visible first steps toward establishing central control.

From this we will see new organizational and reporting authorities that will span existing Agencies. I think we can also anticipate new policies and a new set of guidelines mandating a minimum level of security capabilities for all Agency networks (raising bottom-line security). This last point will probably prove to be the most trying or contentious effort within the existing Agency structure. It is not clear how existing Agencies that are clearly underfunding or under-supporting Cyber Security will be assessed. It is even less clear where remedial funding or personnel positions will come from. And the stickiest point of all is… how do you reform the leadership and policy in those Agencies to positively change their security culture? I noticed that someone used the C-word in response to my initial comments. This goes way beyond compliance. In the case of some Federal Agencies, and perhaps some industries, we may be talking about a complete sea-change with respect to the emphasis and priority given to Cyber Security.

These are all difficult issues. And I believe the Administration will address them one step at a time.

In the long term it is less clear how Cyber Security will be managed. The so-called war on drugs has been managed by central authority directly from the White House for decades. And to be sure, putting a working national system together that protects our Government and critical national infrastructure from Cyber attack will probably take a similar level of effort and perhaps require a similar long-term commitment. Let’s just hope that it is better thought-out and more effective than the so-called war on drugs.

Vlad’s point concerning the Intelligence Community taking the lead with respect to Cyber Security is an interesting one; I think the Intelligence Community will be an important player in this new initiative. To be frank, between the Defense and Intelligence Communities there is considerable technical expertise that will be sorely needed. However, for legal reasons, there are real limits on what the Intelligence and Defense Communities can do in many situations. This is a parallel problem to Cyber Security as a Law Enforcement problem. The “solution” will clearly involve a variety of players, each with their own expertise and authorities. And while I am not anticipating that Tom Clancy will be appointed the Cyber Security Czar any time soon, I do expect that a long-term approach will require the stand-up of either a new organization empowered to act across current legal boundaries (that will require new legislation), or a new coordinating organization like the Counter Terrorism Center that allows all of the current players to bring their individual strengths and authorities to bear on a situation on a case-by-case basis as needed (that may require new legislation).

If you press me, I think a joint coordinating body will be the preferred choice of the Administration. Everyone likes the idea of everyone working and playing well together. And, that option also sounds a lot less expensive. And that is important in today’s economic climate.



Posted in FISMA, Public Policy, Technical | 2 Comments »
