OMB Wants a Direct Report

Posted August 28th, 2009

The big news in OMB’s M-09-29, FY 2009 Reporting Instructions for the Federal Information Security Management Act and Agency Privacy Management, is that instead of fiddling with document files, reporting will now be done directly through an online tool. This has been covered elsewhere, and it is the one big change since last year.  However, having less paper in the paperwork is not the only change.

Piles of Paper photo by °Florian.

So what will this tool be like? It is hard to tell at this point. Some information will be entered directly, but the system appears designed to accept uploads of some documents, such as those supporting M-07-16. Similar to the spreadsheets used for FY 2008, there will be separate questions for the Chief Information Officer, Inspector General, and Senior Agency Official for Privacy. Microagencies will still have abbreviated questions to fill out. Additional information on the automated tool, including full instructions and a beta version, will be available in August 2009.

Given that the required information has changed very little, the automated system is unlikely to significantly ease the reporting burden. This system appears primarily designed to ease the data processing requirements for OMB. With Excel spreadsheets no longer holding the data, many concerns relating to file versions, data aggregation, and analysis are greatly eased.

It is worth noting that a common outcome of systems re-engineered to become more efficient is that managers look for ways to utilize the new efficiency. What does this mean? Now that OMB can easily analyze data that previously took a great amount of effort to process, they may want to improve what is reported. A great deal has been said over the years about the inefficiencies in the current reporting regime. This may be OMB’s opportunity to start collecting an increased amount of information that better reflects agencies’ actual security posture. This is pure speculation, and other factors may moderate OMB’s next steps, such as the reporting burden on agencies, but it is worth consideration.

One pleasant outcome of the implementation of this new automated tool is that the reporting deadline has been pushed back to November 18, 2009.

Agencies are still responsible for submitting document files to satisfy M-07-16. The automated tool does not appear to allow direct input of this information. However, the document requirements are slightly different. The breach notification policy document need only be submitted if it has changed. It is no longer sufficient to simply report progress on eliminating SSNs and reducing PII; an implementation plan and a progress update must be submitted. The requirement for a policy document covering rules of behavior and consequences has been removed.

In addition to the automated tool there are other, more subtle changes to OMB’s FY 2009 reporting. Let’s step through them, point by point:

10. It is reiterated that NIST guidance is required. This point has been expanded to state that for legacy systems, agencies have one year to come into compliance with new material in NIST documents. For new systems, agencies are expected to be in compliance upon system deployment.

13 & 15. Wording indicating that disagreements on reports should be resolved prior to submission and that the agency head’s view will be authoritative has been removed. This may have been done to reduce redundancy, as M-09-29’s preface indicates agency reports must reflect the agency head’s view.

52. The requirement for a central web page with working links to agency PIAs and Federal Register-published SORNs has been removed.

A complete side-by-side comparison of changes between the two documents is available at FISMApedia.org.

All in all, the changes to OMB’s guidance this year will not change agencies’ reporting burden significantly. And that may not be a bad thing.




A Layered Model for Massively-Scaled Security Management

Posted August 24th, 2009

So we all know the OSI model by heart, right?  Well, I’m offering up my model of technology management. Really, at this stage I’m looking for feedback.

  • Layer 7: Global Layer. This layer is regulated by treaties with other nation-states or by international standards.  I fit cybercrime treaties in here, along with the RFCs that make the Internet work.  The problem is that security hasn’t really reached up to this level much, unless you want to count multinational vendors and top-level CERT coordination centers like CERT-CC.
  • Layer 6: National-Level Layer. This layer is an aggregation of Federations and industries and primarily consists of Federal law and everything lumped into a “critical infrastructure” bucket.  Most US Federal laws fit into this layer.
  • Layer 5: Federation/Community Layer. What I’m talking about with this layer is an industry federated or formed into some sort of community.  Think major verticals such as energy supply.  It’s not a coincidence that this layer lines up with DHS’s critical infrastructure and key resources breakdown, but it can also refer to self-regulated industries, such as those governed by PCI-DSS or NERC.
  • Layer 4: Enterprise Layer. Most security thought, products, and tools are focused on this layer and the layers below.  This is the realm of the CSO and CISO and roughly equates to a large corporation.
  • Layer 3: Project Layer. Collecting disparate technologies and data into a single piece, such as a LAN/WAN, a web application project, etc.  In the Government world, this is the location for the Information System Security Officer (ISSO) or the System Security Engineer (SSE).
  • Layer 2: Integration Layer. Hardware, software, and firmware combine to become products and solutions; this layer is focused primarily on engineering.
  • Layer 1: Code Layer. Down into the code that makes everything work.  This is where the application security people live.

There are tons of ways to use the model. I’m thinking each layer has a set of characteristics like the following (a rough sketch of the model as a data structure follows the list):

  • Scope
  • Level of centralization
  • Responsiveness
  • Domain expertise
  • Authority
  • Timeliness
  • Stakeholders
  • Regulatory bodies
  • Many more that I haven’t thought about yet
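
To make the model a bit more concrete, here’s a minimal sketch of it as a Python data structure. The layer numbers and names come straight from the list above; the characteristic fields are just the first few from the list, and the example values are placeholders, not settled definitions.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Layer:
        """One layer of the massively-scaled security management model."""
        number: int
        name: str
        scope: str = ""                    # e.g. "single enterprise", "industry vertical"
        centralization: str = ""           # how centralized decision-making is
        regulatory_bodies: List[str] = field(default_factory=list)

    # Layer 7 (broadest) down to Layer 1 (narrowest), as described above.
    MODEL = [
        Layer(7, "Global", scope="nation-states, international standards"),
        Layer(6, "National-Level", scope="Federal law, critical infrastructure"),
        Layer(5, "Federation/Community", scope="industry verticals, e.g. energy supply"),
        Layer(4, "Enterprise", scope="a large corporation; CSO/CISO territory"),
        Layer(3, "Project", scope="a LAN/WAN or web application; ISSO/SSE territory"),
        Layer(2, "Integration", scope="products and solutions engineering"),
        Layer(1, "Code", scope="application security"),
    ]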

Chocolate Layer Cake photo by foooooey.

My whole point for this model is that I’m going to try to use it to describe the level at which a particular problem resides and to stimulate discussion on the appropriate level at which to solve it.  For instance, take a technology and you can trace it up and down the stack. Say Security Event and Incident Monitoring:

  • Layer 7: Global Layer. Coordination between national-level CERTs in stopping malware and hacking attacks.
  • Layer 6: National-Level Layer. Attack data from Layer 5 is aggregated and correlated to respond to large incidents on the scale of Cyberwar.
  • Layer 5: Federation/Community Layer. Events are filtered from Layer 4 and only the confirmed events of interest are correlated to determine trends.
  • Layer 4: Enterprise Layer. Events are aggregated by a SIEM with events of interest flagged for response.
  • Layer 3: Project Layer. Logs are analyzed in some manner.  This is most likely the highest layer in the model that we consistently reach today.
  • Layer 2: Integration Layer. Event logs have to be written to disk and stored for a period of time.
  • Layer 1: Code Layer. Code has to be programmed to create event logs.

I do have an ulterior motive.  I created this model because most of our security thought, doctrine, tools, products, and solutions work at Layer 4 and below.  What we need is discussion on Layers 5 and above, because when we try to create massively-scaled security solutions, we start to run into a drought of information on what to do above the Enterprise.  There are other bits of doctrine that I want to bring up, like trying to solve any problem at the lowest level for which it makes sense.  In other words, we can use the model to propose changes to the way we manage security.  Say we have a problem like the lack of data on data breaches: what we’re saying when we say that we need a Federal data breach law is that, because of the scope and the amount of responsibility and competing interests at Layer 5, we need a solution at Layer 6; but in any case we should start at the bottom and work our way up the model until we find an adequate scope and scale.
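
That bottom-up search is simple enough to write down. This is only a sketch, reusing the Layer class and MODEL list from the earlier sketch; the is_adequate test is a stand-in for whatever judgment call you’d actually make about scope, authority, and competing interests.

    from typing import Callable, Optional

    def lowest_adequate_layer(model: list, is_adequate: Callable[[Layer], bool]) -> Optional[Layer]:
        """Walk the model from Layer 1 upward and return the first layer
        whose scope and authority are adequate for the problem."""
        for layer in sorted(model, key=lambda l: l.number):   # bottom (1) to top (7)
            if is_adequate(layer):
                return layer
        return None

    # Example: a data breach disclosure problem that Layer 5 can't solve
    # because of competing interests, so the search lands on Layer 6.
    answer = lowest_adequate_layer(MODEL, lambda l: l.number >= 6)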

So, this is my question to you, Internet: have I just reinvented enterprise public policy, IT architecture (Federal Enterprise Architecture) and business blueprinting, or did I create some kind of derivative view of technology, security, and public policy that I can now use?




Random Thoughts on “The FISMA Challenge” in eHealthcare

Posted August 4th, 2009

OK, so there’s this article being bounced all over the place.  Basic synopsis is that FISMA is keeping the government from doing any kind of electronic health records stuff because FISMA requirements extend to health care providers and researchers when they take data from the Government.

Read one version of the story here

So the proposed solution is that, well, we can’t extend FISMA to eHealthcare when the data is owned by the Government, because that security management stuff gets in the way.  This post is about why they’re wrong and right, but not in the places that they think they are.

Government agencies need to protect the data that they have by providing “adequate security”.  I’ve covered this in a bazillion places already. Somewhere along the line we let the definition of adequate security come to mean “you have to play by our rulebook,” which is complete and utter bunk.  The framework is an expedient and a level-setting exercise across the government.  It’s not made to be one-size-fits-all; it is instead meant to be tailored to each individual case.

The Government Information Security Trickle-Down Effect is a name I use for FISMA/NIST Framework requirements being transferred from the Government to service providers, whether they’re in healthcare or IT or making screws that can sometimes be used on B-2 bombers.  It will hit you if you take Government data, but only because you have no regulatory framework of your own with which you can demonstrate that you have “adequate security”.  In other words, if you provide a demonstrable level of data protection equal to or superior to what the Government provides, then you should reasonably be able to take the Government data; the trick is finding the right “Esperanto” to demonstrate your security foo.

If only there were a regulatory scheme already in place that we could hold the healthcare industry to.  Oh wait, there is: HIPAA.  Granted, HIPAA doesn’t really have a lot of teeth and its effects are only maybe demonstrable, but it does fill most of the legal requirement to provide “adequate security”, and that’s the important thing and, more importantly, what is required by FISMA.

And this is my problem with this whole string of articles: the power vacuum has claimed eHealthcare.  Seriously, there should be somebody who is in charge of the data who can make a decision on what kinds of protections they want for it.  In this case, there are plenty of people with different ideas on what that level of protection is, so they are asking OMB for an official ruling.  If you go to OMB asking for their guidance on applying FISMA to eHealthcare records, you get what you deserve, which is a “Yes, it’s in scope; how do you think you should do this?”

So what the eHealthcare people are really looking for is a set of firm requirements from their handlers (aka OMB) on how to hold service providers accountable for the data that they are giving them.  This isn’t a binary question of whether FISMA applies to eHealthcare data (yes, it does); it’s a question of “how much is enough?” or even “what level of compensating controls do we need?”

But then again, we’re beaten down by compliance at this point.  I know I feel it from time to time.  After you’ve been beaten badly for years, all you want is for the batterer to tell you what you need to do so the hurting will stop.

So for the eHealthcare agencies, here is a solution for you.  In your agreements/contracts to provide data to the healthcare providers, require the following:

  • Provider shall produce annual statements of HIPAA compliance
  • Provider shall be certified under a security management program such as ISO 27001, SAS-70 Type II, or even PCI-DSS
  • Provider shall report any incident resulting in a potential data breach of 500 or more records within 24 hours
  • Financial penalties for data breaches based on number of records
  • Provider shall allow the Government to perform risk assessments of their data protection controls

That should be enough compensating controls to provide “adequate security” for your eHealthcare data.  You can even put a line through some of these that are too draconian or high-cost.  Take that to OMB, tell them how you’re doing it, and ask whether they would really like to spend the taxpayers’ money doing anything other than this.

Case Files and Medical Records photo by benuski.




Federated Vulnerability Management

Posted July 14th, 2009

Why hello there, private sector folks.  It’s no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale.  Glad to meet you; I hear you’ve got the same problems, only not at the same kind of scale as the US Federal Government.  Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments.  We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”.  We’re also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-’80s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISO’s office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or worse, a .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what’s been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something, because our spreadsheet doesn’t match up with the “facts on the ground”.
  • It doesn’t tie in with our other management tools like Plans of Action and Milestones (POA&M).  Those are usually managed in a different application than the technical parts, and this means that we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really what I want is for every agency to have an “orchestrator” à la Ed Bellis’s little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort.  Servers, workstations, laptops, all of it.  Yes, databases too.  Printers–yep.  If we can get network devices to report config info via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator as well.  We don’t even really have to push patches using these tools–what I’m primarily concerned with at this point is having the ability to pull reports.

I group all of my IT assets in my system into a bucket of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.

We do periodic network scanning to identify everything on our network and feed the results into the orchestrator.  We do regular vulnerability scans, and any findings feed into the orchestrator.  The more data, the better aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices.  Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets.  Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it.  Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
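
Here’s a minimal sketch of that correlation step, assuming the orchestrator can pull two asset lists: one from network scan results and one from the patch management system. Anything that shows up on the wire but not in patch management becomes a ticket. The function and field names are illustrative, not any particular product’s API.

    def find_unmanaged_assets(scan_results, patch_inventory):
        """Correlate discovered hosts against patch-managed hosts.

        scan_results and patch_inventory are iterables of dicts with an
        'ip' key (illustrative; real results would key on something
        sturdier, like a hardware or agent ID).
        """
        discovered = {host["ip"] for host in scan_results}
        managed = {host["ip"] for host in patch_inventory}
        return sorted(discovered - managed)

    # Each unmanaged device becomes a ticket/alert for follow-up.
    for ip in find_unmanaged_assets(scan_results=[{"ip": "10.1.2.3"}, {"ip": "10.1.2.4"}],
                                    patch_inventory=[{"ip": "10.1.2.3"}]):
        print(f"TICKET: unmanaged device {ip} found by network scan")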

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department, and the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government.  Want to be bold?  Let Vivek K and the Sunlight Foundation at the data feeds and have a truly open and transparent “Unbreakable Government 2.1”.  Who needs FISMA report cards when our vulnerability data is on display?
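
And a rough sketch of what that upstream rollup could look like: each agency orchestrator exports a summary, and the Department (or whoever sits above it) aggregates the summaries into one view. The record layout here is invented for illustration; in practice this is exactly the kind of thing the SCAP result formats are supposed to standardize.

    from collections import defaultdict

    def roll_up(agency_feeds):
        """Aggregate per-agency vulnerability summaries into a
        Department-level view, keyed by vulnerability ID (e.g. CVE)."""
        totals = defaultdict(lambda: {"affected": 0, "remediated": 0})
        for feed in agency_feeds:
            for record in feed:   # e.g. {"cve": ..., "affected": ..., "remediated": ...}
                totals[record["cve"]]["affected"] += record["affected"]
                totals[record["cve"]]["remediated"] += record["remediated"]
        return dict(totals)

    department_view = roll_up([
        [{"cve": "CVE-2009-0001", "affected": 120, "remediated": 100}],   # Agency A
        [{"cve": "CVE-2009-0001", "affected": 45, "remediated": 45}],     # Agency B
    ])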

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA, anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration.  OK, I’m with you on the firewall pictures and zones of trust, I understand what you’re saying; now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere.  OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information; this is basic data flow stuff.  US-CERT in a future incarnation could be the top-level aggregator, maybe.  Right now I would be happy building aggregation up to the Department level, because that’s the level at which we’re graded.

Understanding.  Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier through aggregation, correlation, status reporting, and dashboarding.  These are all concepts behind good IT management; why shouldn’t we apply them to security management also?  Yes, I’ll have times when I’m behind on something or another, but guess what, I’m behind today and you just don’t know it.  However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time and toward understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in.  It has to be reporting in 100% across the board, or you don’t have anything–back to spreadsheet hell for you.  And simply put, why don’t you have everything in the patch management system already?  Come on, that’s not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  But yes, that’s much awesome: smaller, easily-satisfied POA&Ms are much easier to manage, provided that the administrative overhead for each of them is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof become easier because it’s all automated.  Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.
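
A sketch of that retest loop, assuming the orchestrator keeps findings in a simple workflow queue. The states and fields are invented for illustration; the point is just that “marked fixed” automatically means “queued for rescan,” and the rescan output becomes the evidence.

    import queue

    retest_queue = queue.Queue()

    def mark_fixed(finding):
        """Called when an admin closes out a finding; queues it for rescan."""
        finding["status"] = "fix reported"
        retest_queue.put(finding)

    def process_retests(rescan):
        """rescan(finding) reruns the original check and returns the raw result."""
        while not retest_queue.empty():
            finding = retest_queue.get()
            result = rescan(finding)
            finding["evidence"] = result        # the rescan output is the proof
            finding["status"] = "closed" if result["pass"] else "reopened"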

Interfaces with existing FISMA management tools.  This one is tough.  But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results.  This class of software exists because none of the tools vendors really understand how the Government does security management, and I mean NONE of them.  There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly have become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups.  It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.




Security Automation Developers Conference Slides

Posted July 2nd, 2009

Eh? What’s that mean?  Developer Days is a weeklong conference where they get down into the weeds about the various SCAP schemas and how they fit into the overall program of security automation. 

Highlights and new ideas:

Remedial Markup Language: Fledgling schema to describe how to remediate a vulnerability.  A fully automated security system would scan and then use the RML content to automagically fix the finding… say, changing a configuration setting or installing a patch.  This would be much awesome if combined with CVE/CWE so you have a vulnerability scanner that scans and fixes the problem.  It also needs to be kept in a bottle, because the operations guys will have a heart attack if we are doing this without any human intervention.

Computer Network Defense: There is a pretty good scenario slide deck on using SCAP to automate hardening, auditing, monitoring, and defense.  The key from this deck is how the information flows using automation.

Common Control Identifier:  This schema is basically a catalog of controls (800-53, 8500.2, PCI, SOX, etc.) in XML.  The awesomeness with this is that one control can contain a reference implementation for each technology and the checklist to validate it in XCCDF.  At this point, I get all misty…
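
To make the idea concrete, here’s a sketch of what one catalog entry could carry, written as a plain Python structure rather than the actual CCI XML (this is the shape of the idea, not the real schema): one control, its source framework, a reference implementation per technology, and a pointer to the XCCDF checklist that validates it.

    # Illustrative only -- not the real CCI schema, just the shape of the idea.
    control_entry = {
        "control_id": "AC-7",                 # e.g. an 800-53 control
        "source": "NIST SP 800-53",
        "title": "Unsuccessful Login Attempts",
        "reference_implementations": {
            # one concrete implementation per technology
            "windows_2008": "Account lockout threshold = 3 invalid attempts",
            "rhel_5": "pam_tally settings in /etc/pam.d/system-auth",
        },
        "validation": {
            # pointer to the automated check, expressed in XCCDF
            "format": "XCCDF",
            "benchmark_ref": "example-benchmark.xml#xccdf_rule_account_lockout",
        },
    }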

Open Checklist Interactive Language: This schema is to capture questionnaires.  Think managerial controls, operational controls, policy, and procedure captured in electronic format and fed into the regular mitigation and workflow tools that you use, so that you can view “security of the enterprise at a glance” across technical and non-technical security.

Network Event Content Automation Protocol:  This is just a concept floating around right now on using XML to describe and automate responses to attacks.  If you’re familiar with ArcSight’s Common Event Format, this would be something similar but on steroids with workflow and a pony!

Attendance at Developer Days is limited, but thanks to the “Powar of teh Intarwebs,” you can go here and read the slides!




GAO’s 5 Steps to “Fix” FISMA

Posted July 2nd, 2009

Letter from GAO on how Congress can fix FISMA.  And oh yeah, the press coverage on it.

Now supposedly this was in response to an inquiry from Congress about “Please comment on the need for improved cyber security relating to S.773, the proposed Cybersecurity Act of 2009.”  This is S.773.

GAO is mixing issues and has missed the mark on what Congress asked for.  S.773 is all about protecting critical infrastructure.  It only rarely mentions government internal IT issues.  S.773 has nothing at all to do with FISMA reform.  However, GAO doesn’t have much expertise in cybersecurity outside of the Federal Agencies (they have some, but I would never call it extensive), so they reported on what they know.

The GAO report used the often-cited metric of an increase in cybersecurity attacks against Government IT systems growing from “5,503 incidents reported in fiscal year 2006 to 16,843 incidents in fiscal year 2008” as proof that the agencies are not doing anything to fix the problem.  I’ve questioned these figures before; the growth is associated with the measurement problem and increased reporting requirements more than with an actual increase in attacks.  Truth be told, nobody knows if the attacks are increasing and, if so, at what rate.  I would guess they’re increasing, but we don’t know, so quit citing some “whacked” metric as proof.

Reform photo by shevy.

GAO’s recommendations for FISMA Reform:

Clarify requirements for testing and evaluating security controls.  In other words, the auditing shall continue until the scores improve.  Hate to tell you this, but really all you can test at the national level is whether the FISMA framework is in place; the execution of the framework (and by extension, whether an agency is secure or not) is largely untestable using any kind of framework.

Require agency heads to provide an assurance statement on the overall adequacy and effectiveness of the agency’s information security program.  This harkens back to the accounting roots of GAO.  Basically what we’re talking about here is for the agency head to attest that his agency has made the best effort that it can to protect its IT.  I like part of this, because part of what’s missing is “executive support” for IT security.  To be honest, though, most agency heads aren’t IT security dweebs; they would be signing an assurance statement based upon what their CIO/CISO put in the executive summary.

Enhance independent annual evaluations.  This has significant cost implications.  Besides, we’re getting more and more evaluations as time goes on, with an increase in audit burden.  I.e., in the Government IT security space, how much of your time is spent providing proof to auditors versus building security?  For some people, it’s their full-time job.

Strengthen annual reporting mechanisms.  More reporting.  I don’t think it needs to get strengthened; I think it needs to get “fixed”.  And by “fixed” I mean real metrics.  I’ve touched on this at least a hundred times; go check out some of it….

Strengthen OMB oversight of agency information security programs.  This one gives me brain-hurt.  OMB has exactly the amount of oversight that it needs to do its job.  Just like with more auditing, if you increase the oversight but the people doing the execution have the same headcount, the same funding, and the same types of skills, do you really expect them to perform differently?

Rybolov’s synopsis:

When the only tool you have is a hammer, every problem looks like a nail, and I think that’s what GAO is doing here.  Since performance in IT security is obviously down, they suggest that more auditing and oversight will help.  But then again, at what point does the audit burden tip to where nobody is really doing any work at all except for answering audit requests?

Going back to what Congress really asked for, we run up against a problem.  There isn’t a huge set of information about how the rest of the nation is doing with cybersecurity.  There’s the Verizon DBIR, the Data Loss DB, some surveys, and that’s about it.

So really, when you ask GAO to find out what the national cybersecurity situation is, all you’re going to get is a bunch of information about how government IT systems line up and maybe some anecdotes about critical infrastructure.

Coming to a blog near you (hopefully soon): Rybolov’s 5 steps to “fix” FISMA.





