Federated Vulnerability Management
Posted July 14th, 2009 by rybolov

Why hello there, private sector folks. It's no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale. Glad to meet you. I hear you've got the same problems, only not at the same kind of scale as the US Federal Government. Sit back, read, and learn.
You see, I work in places where everything is siloed into different environments. We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as "The TIC". We're also one of the most heavily audited sectors in IT.
And yet, the way we manage patch and vulnerability information is something out of the mid-'80s.
Current State of Confusion
Our current patch management information flow goes something like this:
- Department SOC/US-CERT/CISO's office releases a vulnerability alert (IAVA, ISVM, something along those lines)
- Somebody makes a spreadsheet with the following on it:
- Number of places with this vulnerability.
- How many have been fixed.
- When you’re going to have it fixed.
- A percentage of completion.
- We then manage by spreadsheets until the spreadsheets say “100%”.
- The spreadsheets are aggregated somewhere. If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
- We wonder why we get pwned (by either haxxorz or the IG).
Now for how we manage vulnerability scan information:
- We run a tool.
- The tool spits out a .csv or worse, a .html.
- We pull up the .csv in Excel and add some columns.
- We assign dates and responsibilities to people.
- We have a weekly meeting to go over what's been completed.
- When we finish something, we provide evidence of what we did.
- We still really don’t know how effective we were.
Problems with this approach:
- It’s too easy to game. If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
- It’s slow as hell. If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
- It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up. What this means is that we always look like liars who are trying to hide something, because our spreadsheet doesn’t match up with the “facts on the ground”.
- It doesn’t tie into our other management tools, like Plans of Action and Milestones (POA&M). Those are usually managed in a different application than the technical parts, which means we need a human with a spreadsheet to act as the intermediary.
So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.
Trade Federation Battle Droid photo by Stéfan. Roger, Roger, SCAP means business.
Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?
This is what I mean, my “Plan for BSOFH Happiness”:
Really, what I want is for every agency to have an “orchestrator” à la Ed Bellis’s little SCAP tool of horrors. =) Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.
In my beautiful world, every IT asset reports into a patch management system of some sort: servers, workstations, laptops, all of it. Yes, databases too. Printers? Yep. If we can get network devices reporting config info via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too. We don’t even really have to push patches using these tools; what I’m primarily concerned with at this point is the ability to pull reports.
Every IT asset in my system gets grouped into a bucket of some sort in the orchestrator. That way, we know who’s responsible when something has a problem. It also fits into our “system” concept from FISMA/C&A/Project Management/etc.
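If you’re wondering what a bucket looks like in practice, here’s a minimal sketch, assuming a homegrown orchestrator with a dead-simple asset model (all the names and fields here are mine, not any real product’s):

```
from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    hostname: str
    ip: str
    kind: str                  # server, workstation, database, printer...
    patch_agent: bool = False  # is it actually reporting into patch management?

@dataclass
class SystemBucket:
    """One 'system' boundary a la FISMA/C&A: an accountable owner plus assets."""
    name: str
    owner: str                 # who gets the ticket when something breaks
    assets: List[Asset] = field(default_factory=list)

# Every asset lands in exactly one bucket, so responsibility is never ambiguous.
erp = SystemBucket(name="ERP-Prod", owner="erp-team@agency.example.gov")
erp.assets.append(Asset("erp-db-01", "10.1.2.3", "database", patch_agent=True))
```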
We do periodic network scanning to identify everything on our network and feed the results into the orchestrator. We do regular vulnerability scans, and any findings feed into the orchestrator too. The more data, the better the aggregate information we can get.
Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever when we have unmanaged devices. Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have of finding all my assets. Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it. Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
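The correlation itself is not rocket science. A sketch, assuming the orchestrator can get a list of scanner-discovered IPs and a list of patch-agent-reporting IPs (both feeds hypothetical):

```
def find_unmanaged(scanned_ips, managed_ips):
    """Anything the scanner sees that no patch agent reports on is an orphan."""
    return sorted(set(scanned_ips) - set(managed_ips))

def open_tickets(orphans):
    # Real code would hit the orchestrator's ticketing API; printing stands in here.
    for ip in orphans:
        print("TICKET: unmanaged device at %s; find an owner, get an agent on it" % ip)

open_tickets(find_unmanaged(
    scanned_ips={"10.1.2.3", "10.1.2.4", "10.1.2.99"},
    managed_ips={"10.1.2.3", "10.1.2.4"},
))
# -> TICKET: unmanaged device at 10.1.2.99; find an owner, get an agent on it
```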
The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable. My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
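Something like this, assuming the alert comes down with a machine-readable OVAL definition ID attached; evaluate_oval below is a stand-in for whatever scanner or agent actually runs the check on the endpoint, not a real API:

```
from dataclasses import dataclass

@dataclass
class VulnAlert:
    alert_id: str   # the IAVA/ISVM number
    cve_id: str
    oval_def: str   # ID of the OVAL definition that tests for the flaw

def evaluate_oval(host, oval_def):
    """Stand-in for whatever runs the check on the endpoint, e.g. an OVAL
    interpreter kicked off by the agent. True means the host is vulnerable."""
    raise NotImplementedError("hypothetical; wire up your scanner or agent here")

def overnight_run(alert, hosts):
    # The orchestrator fans the check out; a status report is waiting by morning.
    vulnerable = [h for h in hosts if evaluate_oval(h, alert.oval_def)]
    print("%s: %d of %d hosts affected" % (alert.alert_id, len(vulnerable), len(hosts)))
    return vulnerable
```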
I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.
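For the audit piece, pulling the failures out of scan results is the straightforward part. A rough sketch against XCCDF 1.1-style result documents; your SCAP content version and namespace may differ:

```
import xml.etree.ElementTree as ET

# Namespace for XCCDF 1.1 result documents; adjust to match your SCAP content.
NS = {"x": "http://checklists.nist.gov/xccdf/1.1"}

def failed_rules(results_xml_path):
    """Pull the rule IDs that came back 'fail' from an XCCDF benchmark run."""
    root = ET.parse(results_xml_path).getroot()
    failures = []
    for rr in root.findall(".//x:rule-result", NS):
        result = rr.find("x:result", NS)
        if result is not None and result.text == "fail":
            failures.append(rr.get("idref"))
    return failures

# Each failure becomes a workflow item instead of a row somebody retypes into Excel.
```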
I become a ticket monkey. Everything is in workflow. I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.
We provide a feed upstream to our Department, and the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government. Want to be bold? Let Vivek K and the Sunlight Foundation at the data feeds and have a truly open and transparent “Unbreakable Government 2.1”. Who needs FISMA report cards when our vulnerability data is on display?
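The upstream feed doesn’t have to be fancy, either. A sketch with made-up summary fields; each level just merges what the level below sends up:

```
from collections import Counter

def roll_up(agency_feeds):
    """Merge per-agency summary feeds into one Department-level view."""
    totals = Counter()
    for feed in agency_feeds:   # each feed: {"assets": N, "open": N, "fixed": N}
        totals.update(feed)
    return dict(totals)

print(roll_up([
    {"assets": 1200, "open": 37, "fixed": 410},
    {"assets": 800,  "open": 9,  "fixed": 222},
]))
# -> {'assets': 2000, 'open': 46, 'fixed': 632}
# Same numbers OMB used to get from spreadsheets, but queryable any hour of the day.
```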
Keys to Making Federated Patch and Vulnerability Management Work
Security policy that requires SCAP-compatible vulnerability and patch management products. Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED. Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually. This is compliance I can live with, boo-yeah!
Security architecture models (FEA, anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration. OK, I get the firewall pictures and zones of trust; I understand what you’re saying. Now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.
Network traffic from the edges of the hierarchy to…somewhere. OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information, this is basic data flow stuff. US-CERT in a future incarnation could be the top-level aggregator, maybe. Right now I would be happy building aggregation up to the Department level because that’s the level at which we’re graded.
Understanding. Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier by aggregation, correlation, status reporting, and dashboarding. These are all concepts behind good IT management, why shouldn’t we apply them to security management also? Yes, I’ll have times when I’m behind on something or another, but guess what, I’m behind today and you just don’t know it. However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time to understanding that sometimes nothing is really perfect.
Patch and vulnerability information is all-in. Everything has to be reporting in, 100% across the board, or you don’t have anything–back to spreadsheet hell for you. And simply put, why don’t you have everything in the patch management system already? Come on, whatever your excuse is, it’s not a good enough reason.
POA&Ms need to be more fluid. Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets. And yes, that’s awesome: smaller, easily-satisfied POA&Ms are much easier to manage, provided that the administrative overhead for each of them is reduced to practically nothing… just like IT trouble tickets.
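If a POA&M really is just a ticket, the orchestrator can mint them straight from findings. A sketch with hypothetical field names and made-up remediation windows:

```
import datetime

SLA_DAYS = {"high": 7, "medium": 30, "low": 90}   # made-up remediation windows

def poam_from_finding(finding):
    """Turn one scan finding into a small, self-closing POA&M record."""
    opened = datetime.date.today()
    return {
        "weakness": finding["title"],
        "asset": finding["host"],
        "status": "open",
        "scheduled_completion":
            opened + datetime.timedelta(days=SLA_DAYS[finding["severity"]]),
        "evidence": None,   # filled in automatically when the retest passes
    }
```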
Regression testing and providing proof becomes easier because it’s all automated. Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.
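The retest loop is the easy part once everything lives in workflow. A sketch; run_check is whatever re-runs the same SCAP check that opened the ticket:

```
from queue import Queue

retest_queue = Queue()

def mark_fixed(ticket):
    """The admin says it's fixed; we queue a retest instead of taking their word."""
    ticket["status"] = "pending-verification"
    retest_queue.put(ticket)

def retest_worker(run_check):
    # run_check(ticket) re-runs the original check, returns (passed, raw_result).
    while not retest_queue.empty():
        ticket = retest_queue.get()
        passed, raw_result = run_check(ticket)
        if passed:
            ticket["status"] = "closed"
            ticket["evidence"] = raw_result   # the scan output IS the proof
        else:
            ticket["status"] = "reopened"     # back to the responsible bucket owner
```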
Interfaces with existing FISMA management tools. This one is tough. But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results. This class of software exists because none of the tools vendors really understand how the Government does security management, and I mean NONE of them. There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.
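Those “unnatural acts” probably look something like this: flatten the orchestrator’s tickets into whatever the FISMA tool will import, with CSV as the lowest common denominator. Column names here are guesses; eMASS and friends each have their own schema:

```
import csv

FIELDS = ["weakness", "asset", "status", "scheduled_completion", "evidence"]

def export_for_fisma_tool(tickets, path):
    """Flatten open findings into a CSV the POA&M-management tool can import.
    Column names are illustrative; map them to your tool's real schema."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for t in tickets:
            writer.writerow({k: t.get(k, "") for k in FIELDS})
```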
SCAP spreads to IT management suites. They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway. If they don’t talk SCAP, push the vendor to get it working.
Where Life Gets Surreal
Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups. It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with. =)
Somehow, somewhere, I’ve done most of what CAG was talking about and automated it. I feel so… um… dirty. Really, folks, I’m not a shill for anybody.