Federated Vulnerability Management
Posted July 14th, 2009 by rybolov

Why hello there, private sector folks. It’s no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale. Glad to meet you; I hear you’ve got the same problems, only not at the same kind of scale as the US Federal Government. Sit back, read, and learn.
You see, I work in places where everything is siloed into different environments. We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”. We’re also one of the most heavily audited sectors in IT.
And yet, the way we manage patch and vulnerability information is something out of the mid-’80s.
Current State of Confusion
Our current patch management information flow goes something like this:
- Department SOC/US-CERT/CISO’s Office releases a vulnerability alert (IAVA, ISVM, something along those lines)
- Somebody makes a spreadsheet with the following on it:
- Number of places with this vulnerability.
- How many have been fixed.
- When you’re going to have it fixed.
- A percentage of completion.
- We then manage by spreadsheets until the spreadsheets say “100%”.
- The spreadsheets are aggregated somewhere. If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
- We wonder why we get pwned (by either haxxorz or the IG).
Now for how we manage vulnerability scan information:
- We run a tool.
- The tool spits out a .csv or, worse, a .html.
- We pull up the .csv in Excel and add some columns.
- We assign dates and responsibilities to people.
- We have a weekly meeting to go over what’s been completed.
- When we finish something, we provide evidence of what we did.
- We still really don’t know how effective we were.
Problems with this approach:
- It’s too easy to game. If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
- It’s slow as hell. If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
- It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up. What this means is that we always look like liars who are trying to hide something, because our spreadsheet doesn’t match up with the facts on the ground.
- It doesn’t tie into our other management tools like Plans of Action and Milestones (POA&M). Those are usually managed in a different application than the technical parts, and this means that we need a human with a spreadsheet to act as the intermediary.
So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.
Trade Federation Battle Droid photo by Stéfan. Roger, Roger, SCAP means business.
Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?
This is what I mean, my “Plan for BSOFH Happiness”:
Really what I want is for every agency to have an “orchestrator” à la Ed Bellis’s little SCAP tool of horrors. =) Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.
In my beautiful world, every IT asset reports into a patch management system of some sort. Servers, workstations, laptops, all of it. Yes, databases too. Printers–yep. If we can get network devices reporting config info via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too. We don’t even really have to push patches using these tools–what I’m primarily concerned with at this point is the ability to pull reports.
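If I were sketching the orchestrator’s reporting core on a napkin, it might look like the Python below. To be clear, this is my own toy illustration and not anybody’s product API: the record layout is invented, and a real implementation would store and exchange SCAP documents, not a dict.

from datetime import datetime, timezone

class Orchestrator:
    """Toy aggregation point: every asset class lands in one place."""

    def __init__(self):
        self.assets = {}  # asset_id -> latest status report

    def ingest(self, asset_id, source_tool, patch_level):
        # Any feed can push a report in: patch system, NMS, scanner.
        self.assets[asset_id] = {
            "source": source_tool,
            "patch_level": patch_level,
            "last_seen": datetime.now(timezone.utc),
        }

    def report(self):
        # The pull-reports-on-demand part: who reports in, and via what.
        return {aid: rec["source"] for aid, rec in self.assets.items()}

orc = Orchestrator()
orc.ingest("db01", "patch-mgmt", "2009-07")
orc.ingest("printer-3fl", "snmp-nms", "fw-1.2")  # yes, printers too
print(orc.report())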
I group all of the IT assets in my system into a bucket of some sort in the orchestrator. That way, we know who’s responsible when something has a problem. It also fits into our “system” concept from FISMA/C&A/Project Management/etc.
We do periodic network scanning to identify everything on our network and feed the results into the orchestrator. We do regular vulnerability scans, and any findings feed into the orchestrator. The more data, the better aggregate information we can get.
Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices. Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets. Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it. Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
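The correlation itself is dead simple; getting the feeds is the hard part. A hedged sketch, assuming two CSV exports you could plausibly pull today (the file names and column headers are my inventions): one of discovered hosts from the network scanner, one of managed hosts from the patch system.

import csv

def load_ips(path, column="ip"):
    # Pull one column of a CSV export into a set for easy diffing.
    with open(path, newline="") as f:
        return {row[column] for row in csv.DictReader(f)}

discovered = load_ips("discovered.csv")  # everything the scanner saw
managed = load_ips("managed.csv")        # everything reporting to patch mgmt

# Scanned but not reporting in means an unmanaged device: open a ticket.
for ip in sorted(discovered - managed):
    print(f"TICKET: unmanaged device at {ip}, no patch agent reporting")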
The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable. My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
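A sketch of what the agency-side half could look like. The file name follows NVD’s 2.0 XML feed naming as far as I know, but the flat inventory and the naive substring match are stand-ins for real CPE handling; don’t mistake this for the actual check format that would get pushed down.

import xml.etree.ElementTree as ET

def local(tag):
    # Strip the XML namespace so we can match on local names only.
    return tag.rsplit("}", 1)[-1]

def affected(feed_path, inventory):
    # Walk the feed incrementally; yield (CVE id, CPE) for products we run.
    for _, elem in ET.iterparse(feed_path):
        if local(elem.tag) != "entry":
            continue
        cve_id = elem.get("id", "unknown")
        for child in elem.iter():
            if local(child.tag) == "product" and child.text:
                for product in inventory:
                    if product in child.text:
                        yield cve_id, child.text
        elem.clear()  # keep memory flat on big feeds

inventory = {"microsoft:ie", "adobe:acrobat_reader"}  # toy asset list
for cve, cpe in affected("nvdcve-2.0-recent.xml", inventory):
    print(f"{cve}: we run {cpe}, queue the SCAP check")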
I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.
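The “differences automagically entered into my workflow” part could be as dumb as this sketch. The XCCDF 1.1 namespace is real; the ticket-shaped workflow record is something I just made up:

import xml.etree.ElementTree as ET

XCCDF = "{http://checklists.nist.gov/xccdf/1.1}"

def failed_rules(result_path):
    # Yield the idref of every rule-result whose verdict is "fail".
    tree = ET.parse(result_path)
    for rr in tree.iter(f"{XCCDF}rule-result"):
        if rr.findtext(f"{XCCDF}result") == "fail":
            yield rr.get("idref")

def open_workflow_items(host, result_path):
    # One workflow record per failed hardening rule.
    return [{"host": host, "rule": rule, "status": "open"}
            for rule in failed_rules(result_path)]

for item in open_workflow_items("web01", "web01-audit-result.xml"):
    print("WORKFLOW:", item)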
I become a ticket monkey. Everything is in workflow. I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.
We provide a feed upstream to our Department, and the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government. Want to be bold? Let Vivek K and the Sunlight Foundation at the data feeds and have truly open and transparent “Unbreakable Government 2.1”. Who needs FISMA report cards when our vulnerability data is on display?
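The roll-up math at the Department level is not rocket surgery, something like this (the summary schema is invented for illustration; a real federation would exchange SCAP results, not my made-up JSON lines):

import json
from collections import defaultdict

def roll_up(summary_paths):
    # Aggregate per-agency summaries into one Department-wide view.
    totals = defaultdict(lambda: {"open": 0, "fixed": 0})
    for path in summary_paths:
        with open(path) as f:
            for line in f:
                rec = json.loads(line)  # {"vuln": ..., "status": "open"|"fixed"}
                totals[rec["vuln"]][rec["status"]] += 1
    return totals

for vuln, c in roll_up(["agency-a.jsonl", "agency-b.jsonl"]).items():
    done = 100 * c["fixed"] / max(1, c["open"] + c["fixed"])
    print(f"{vuln}: {done:.0f}% remediated across the Department")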
Keys to Making Federated Patch and Vulnerability Management Work
Security policy that requires SCAP-compatible vulnerability and patch management products. Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED. Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually. This is compliance I can live with, boo-yeah!
Security architecture models (FEA, anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration. OK, I get the firewall pictures and zones of trust, I understand what you’re saying; now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.
Network traffic from the edges of the hierarchy to…somewhere. OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information; this is basic data-flow stuff. US-CERT in a future incarnation could be the top-level aggregator, maybe. Right now I would be happy building aggregation up to the Department level, because that’s the level at which we’re graded.
Understanding. Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier through aggregation, correlation, status reporting, and dashboarding. These are all concepts behind good IT management, so why shouldn’t we apply them to security management also? Yes, I’ll have times when I’m behind on something or another, but guess what, I’m behind today and you just don’t know it. However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time to understanding that sometimes nothing is really perfect.
Patch and vulnerability information is all-in. Everything has to be reporting in, 100% across the board, or you don’t have anything–back to spreadsheet hell for you. And simply put, why don’t you have everything in the patch management system already? Come on, that’s not a good enough reason.
POA&Ms need to be more fluid. Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets. And yes, that’s a good thing: smaller, easily-satisfied POA&Ms are much easier to manage, provided that the administrative overhead for each of these is reduced to practically nothing… just like IT trouble tickets.
Regression testing and providing proof becomes easier because it’s all automated. Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.
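In sketch form (the queue, the scanner stub, and the record layout are all assumptions on my part), the loop is about this simple:

from collections import deque
from datetime import datetime, timezone

retest_queue = deque()

def mark_fixed(finding):
    # The admin says it's fixed; we don't take their word for it.
    finding["status"] = "fixed-pending-verification"
    retest_queue.append(finding)

def rescan(finding):
    # Stand-in for re-running the original SCAP check against the host.
    return {"host": finding["host"], "check": finding["check"],
            "result": "pass",
            "timestamp": datetime.now(timezone.utc).isoformat()}

def verify_all():
    while retest_queue:
        finding = retest_queue.popleft()
        evidence = rescan(finding)
        if evidence["result"] == "pass":
            finding["status"] = "closed"
            finding["evidence"] = evidence  # the result IS the proof
        else:
            finding["status"] = "reopened"

f = {"host": "10.1.2.3", "check": "rule-ie7-killbit", "status": "open"}
mark_fixed(f)
verify_all()
print(f["status"], f["evidence"])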
Interfaces with existing FISMA management tools. This one is tough. But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results. This class of software exists because none of the tool vendors really understand how the Government does security management, and I mean NONE of them. There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.
SCAP spreads to IT management suites. They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway. If they don’t talk SCAP, push the vendor to get it working.
Where Life Gets Surreal
Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly have become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups. It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with. =)
Somehow, somewhere, I’ve done most of what CAG was talking about and automated it. I feel so… um… dirty. Really, folks, I’m not a shill for anybody.
Posted in DISA, NIST, Rants, Technical | 12 Comments »
July 15th, 2009 at 12:12 am
This is, like, awesome! Well, awesome as long as you don’t have to actually do it on the scale of fed govt.
July 15th, 2009 at 6:38 am
But that’s the point, Anton. We do manage this at that scale, only we do it with spreadsheets and auditors.
July 15th, 2009 at 8:14 am
You have been drinking the NIST punch. SCAP is a hope of a promise of a dream of a concept. It is already mandated (see Karen Evans’ July 31, 2007 OMB memo). SCAP validation just means your results are as good as (or better than) the reference implementation of the OVAL and XCCDF interpreters, which means you can check things that are easy to check. If you have a really good scanner, all you should care about is that it outputs results in SCAP format, and even that seems silly. OVAL and XCCDF reports are fat and are not anonymized. And let us not forget the many political problems as well. NIST does not want to dilute the brand of FDCC, saying that it cannot be used for Red Hat, Solaris, and friends.
Other issues with SCAP:
- Servers are too hard.
- Mitigation is neither green nor red. If it has no color, does it really exist?
- If software is not owned/supported by a major vendor, it will not get SCAP content for many years, if ever. So if I use a firewall other than the Microsoft one, then you can’t check my firewall configuration.
I am part of the SCAP establishment and we are trying to make everything better. What are we doing tomorrow, Brain? The same thing we do every night, Pinky… trying to take over the world! Seriously, SCAP will not be nirvana. I would even say security is worse for now. What is much better is that if you have content (which no one has but NIST with FDCC), then you are not tied to a product.
SCAP validation hampers innovation and reduces the role of open source. It is done by the same labs that do FIPS validation; they add little value and invalidate many useful tools.
If you want to reach SCAP Valhalla/nirvana/etc., you need to pay people to start writing a lot of SCAP content. Tools just consume it. The only content out there right now is FDCC, which is a start but not nearly enough. Talk to your minions. To paraphrase a movie about pipe dreams: if you write the content, the tools will come. The mandate is there, but monkeys do what monkeys do.
If your IAVA, ISVM, whatever comes in SCAP, that might help too. Red Hat produces patch definitions in OVAL. Which other vendors do?
All in all, you have a great dream. I bought into it a few years ago, in fact. Just don’t rush to talk about silver bullets and other panaceas, because there is a big thunderstorm behind that silver lining you are looking at.
Love the blog!
SCAP Insider
July 15th, 2009 at 8:33 am
Is anyone listening?
July 15th, 2009 at 8:49 am
[…] Guerilla CISO blog recently posted a very interesting proposal on Federated Vulnerability Management. I think it’s a fantastic idea that should be seriously considered. If we use modern linked […]
July 15th, 2009 at 9:33 am
This isn’t a pipe dream – totally feasible, but the scale in your ‘industry’ is going to be a sinker – unless – (going out on a limb) – you use each business unit as a pod and develop them one at a time. In doing that, each pod reports up to its exec and remains autonomous for the time being – as pods come online and become viable, they can then report up to a predefined master server. Should one pod break, the others are not affected – should the master server break, the pods are not affected. Ouch – the scale – doable? Oh hell yeah.
July 15th, 2009 at 10:39 am
Hi SCAP Insider
Maybe I have had a sip of the Kool-Aid. Based on responses to this post, now the whole world thinks I’m either a genius or a complete and utter madman. =)
However, I think that some of the pieces are already in play. Even if I just get vulnerability scan results federated and they do a bulk delete/replace at the Department level, then I’ve made my job much easier.
Interesting take on the certification problem. Yep, that’s a serious issue. What I want to say is that I would settle for “I’ll take ‘works right now’ over ‘is certified next year’”, but that’s not the politically correct answer.
When it comes to SCAP content, most technical policy compliance tools already have “canned” checks based in theory on 800-53 controls, DISA STIGs, and CIS Benchmarks. That’s some of the way there.
You also have a call to action: People, write content. In my strange way, the more I write about SCAP, the more awareness the user community has, and the more that people think in the back of their heads “Hmmm, maybe I need to write that in SCAP format while I’m writing the free-text version”. Step one is to get CISO/ISSO awareness of what SCAP can do for you.
July 16th, 2009 at 11:10 am
Hi Rybolov,
I’m actually interning at MITRE, which developed 4 of the 6 components of SCAP. My job this summer is to convince software developers to come to our free course to learn how to create security automation guides (SCAP content). Check out the website benchmarkdevelopment.mitre.org. I’m interested in your thoughts on this venture.
August 9th, 2009 at 8:24 pm
[…] a plan on fixing government patch and vulnerability management through SCAP in the post, “Federated Vulnerability Management.” Here are a few of the ideas discussed in the […]
April 1st, 2010 at 10:12 pm
[…] and I’ll give you some pointers? Or rather, check out what I’ve already said about federated patch and vulnerability management then give me a […]
June 6th, 2010 at 7:14 pm
[…] Gorilla CISO has a blog post about vulnerability management that is worth reading. It sounds really familiar, though I’m dealing with it on a much much […]
January 13th, 2011 at 12:10 am
[…] often do this, but the latest post on the Guerilla CISO blog is worth a re-post. Go check it out here. I have been talking about this a lot lately. SCAP is still coming into its own but has a lot of […]