Posted June 13th, 2007 by rybolov
After commenting on Mike Rothman’s Security Incite and Alex Hutton’s riskanalysis.is, I’m about ready to explain how C&A works and doesn’t work.
Let’s roleplay, shall we?
You’re the US government. You have an IT budget of $65 ba-ba-ba-ba-billion (stuttering added for additional effect) every year (2007 budget). If you wanted to, you might be able to make an offer to buy Microsoft based on one year’s worth of budget.
So how do you manage security risks associated with such a huge amount of cash? Same way you would manage those IT systems in the non-security world:
- Break it all down into bite-sized pieces
- Have some sort of methodology to manage the pieces effectively
- Delegate responsibility for each piece to somebody
- Use metrics to track where you are going
- Focus on risks to the business and the financial investment
- Provide oversight on all of the pieces that you delegated
- Evaluate each piece to see how well it is doing
Hmm, sounds exactly like what the government has done so far: an agency's investment (system) inventory/portfolio, the OMB budget process, and the GAO metrics.
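To make the approach concrete, here's a minimal Python sketch of that portfolio idea: an inventory of bite-sized pieces, each delegated to an owner, with a crude metric rolled up for oversight. The class, names, and numbers are all invented for illustration, not anything out of OMB:

```python
from dataclasses import dataclass

@dataclass
class SystemInvestment:
    """One bite-sized piece of the IT portfolio."""
    name: str
    owner: str           # who the piece is delegated to
    budget: float        # annual investment in dollars
    open_findings: int   # a crude oversight metric

def portfolio_report(portfolio: list[SystemInvestment]) -> None:
    """Roll the per-system metrics up for oversight, worst first."""
    for s in sorted(portfolio, key=lambda s: s.open_findings, reverse=True):
        print(f"{s.name:<12} owner={s.owner:<8} "
              f"budget=${s.budget:,.0f} findings={s.open_findings}")
    print(f"Total portfolio: ${sum(s.budget for s in portfolio):,.0f}")

portfolio = [
    SystemInvestment("Payroll", "Alice", 4_200_000, 3),
    SystemInvestment("GIS", "Bob", 1_100_000, 9),
]
portfolio_report(portfolio)
```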
Now how would you manage each bite-sized piece? This is roughly the way a systems engineer would do it:
- Define needs
- Define requirements
- Build a tentative design
- Design review
- Build the system
- Test that the requirements are met
- Flip the switch to take it live
- Support anything that breaks
Hmm, that’s suspiciously like a system development life-cycle, isn’t it? There’s a reason we use project management and SDLC–in order to get from here to there, you need to have a plan or methodology to follow, and SDLC makes sense.
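If it helps to see the plan as code, here's a tiny sketch of that life-cycle as an ordered series of gates you aren't allowed to skip. The phase names follow the list above; everything else is illustrative:

```python
# The SDLC as an ordered progression of gates; each phase must complete
# before the next one starts.
PHASES = (
    "define needs", "define requirements", "tentative design",
    "design review", "build", "test", "go live", "support",
)

def advance(current: str) -> str:
    """Move to the next phase; you don't get to skip gates."""
    i = PHASES.index(current)
    return PHASES[i + 1] if i + 1 < len(PHASES) else current

phase = "define needs"
while phase != "support":
    phase = advance(phase)
    print("entered:", phase)
```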
So then let’s do the same exercise and add in the security pieces of the puzzle.
- Define needs: Determine how much the system and the information are worth–categorization (FIPS-199 and NIST SP 800-60)
- Define requirements (FIPS-200 and NIST SP 800-53 along with a ton of tailoring)
- Build a tentative design (first security plan draft)
- Design review (security plan approval)
- Build the system
- Test that the needs and requirements are met (security test and evaluation)
- Flip the switch to take it live (accreditation decision)
- Support anything that breaks (continuous monitoring)
Guess what? That’s C&A in a nutshell. All this other junk is just that–junk. If you’re not managing security risk throughout the SDLC, what are you doing except posturing for the other security people to see and arguing about trivia?
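The categorization step at the top of that list is one of the few places where C&A is literal arithmetic: FIPS-199 boils down to a high-water-mark calculation across your information types. A minimal sketch; the information types and their ratings are made up:

```python
# FIPS-199/SP 800-60 categorization as a high-water-mark calculation.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

# (confidentiality, integrity, availability) per information type
info_types = {
    "personnel records": ("moderate", "low", "low"),
    "public web content": ("low", "moderate", "moderate"),
}

def high_water_mark(ratings):
    """Return the highest impact level in a set of ratings."""
    return max(ratings, key=lambda r: LEVELS[r])

# Per security objective, take the high water mark across all info types.
per_objective = [high_water_mark(col) for col in zip(*info_types.values())]
print("system categorization (C, I, A):", per_objective)
# The overall level drives which FIPS-200 / SP 800-53 baseline you tailor from.
print("overall impact level:", high_water_mark(per_objective))
```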
This picture (blatantly stolen from NIST SP 800-64, Security Considerations in the Information System Development Life Cycle) shows you how the core components of C&A fit in with the rest of the SDLC:
My theory is that the majority of systems have already been built and are in the O&M phase of their SDLC. That means we are trying to do C&A on these systems too late to really change anything, and it means that for the most part we will be doing C&A on systems that already exist. So, just as people confused war communism with pure communism, we confuse the emergency state of post-facto C&A with the pure state of C&A.
Now let’s look at where C&A typically falls apart:
- Confusing compliance (check the box) with risk management (are we providing “adequate security”?)
- Focusing too much on the certification statement and accreditation decision, which should be a gate instead of the entire process
- Not hiring smart people
- Failure to associate technical and system-specific risks with the business case
- Disassociation from the rest of the SDLC
- Disassociation from reality (liarware)
- Trying to certify and accredit systems in the implementation phase of the SDLC
- Risk-averse decision-making
- Using C&A as quality assurance
Keys to success at this game follow roughly what ISM-Community has proposed as its ISM Top 10. Those ISM guys, they’re pretty smart. =)
Posted in FISMA, ISM-Community, NIST, Risk Management, What Doesn't Work, What Works
Posted June 4th, 2007 by rybolov
Zombies, compliance, and auditors all at the same time?
Alex, you know you’re trolling for a link from me.
Check out the zombie auditors faux banner ad.
Postscript: Alex added in a bigger, better zombie ad and some shirts.
Posted in Risk Management, Zombies
Posted May 31st, 2007 by rybolov
There’s a nice article at the Smithsonian about the difference between riddles and mysteries. I received this via the security metrics email list.
Risks and Riddles
This reminds me of intelligence work, for obvious reasons.
There are two major types of offensive actions an army can conduct: deliberate attack and movement to contact. (Yes, those of you pedantic enough will bring up hasty attacks and a dozen other scenarios; I’m being a generalist here =) )
In a deliberate attack, you know roughly what the Bad Guys are doing–they are defending key terrain. The task for the intelligence people is to find specific Bad Guy battle positions down to the platoon level. This is a puzzle with a fairly established framework; you are interested in the details.
In a movement to contact, you have a very hazy idea that there are Bad Guys out there. You move with an eye towards retaining flexibility so that you can develop the situation based on what you learn during the mission. The task for the intelligence people is to determine the overall trend on what the Bad Guys are doing. This is a mystery, and you’re more concerned with finding out the overall direction than you are with the specifics–they’ll get lost due to “friction” anyway.
Now translate that to information security: some of what we know about the Bad Guys is static and therefore more of a puzzle–think of threats with mature technologies, like firewalls and anti-virus, to counter them. Solutions to these threats are all about products.
On the other hand, we have the mysteries: 0-day attacks, covert channels, and the ever-popular insider threat. Just like a well-established military has problems understanding the mystery that is movement to contact, information security practitioners have problems responding to threats that have not been well-defined.
So information security, viewed in the light of puzzle versus mystery, becomes the following scenario: how much time, effort, and money do we spend on the puzzles versus how much do we spend on the mysteries? The risk geek in me wants to sit down and determine probabilities, rates of occurrence, and so on in order to make the all-important cost-benefit-risk comparison. But for mysteries I can’t, by definition of what a mystery is, do that, and our model goes back to peddling voodoo to the business consumers.
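To put numbers on the puzzle side of that split: the textbook annualized loss expectancy (ALE) arithmetic works when history hands you a rate of occurrence, and degenerates when it doesn't. A minimal sketch, with invented figures:

```python
# ALE = single loss expectancy x annualized rate of occurrence.
# The figures below are made up; the point is the mystery case.
def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    return single_loss_expectancy * annual_rate

# Puzzle threat: history gives us a defensible rate to plug in.
print(f"virus outbreak ALE: ${ale(50_000, 0.8):,.0f}")

# Mystery threat (0-day, insider): no credible rate of occurrence exists,
# so the same formula degenerates into guesswork ("voodoo").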
Posted in Army, Rants, Risk Management, What Doesn't Work, What Works
Posted May 10th, 2007 by rybolov
I don’t think we should attach the word “requirement” to any controls in a framework or catalog of controls. I wish we could use the word “needs” instead.
While it’s a subtle distinction, it makes clear that some wetware needs to be involved to translate the catalog of controls into real requirements that an engineer (security or otherwise) can build to. Until we do that, we’re only frustrating the people who have to implement them.
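To illustrate that wetware step, here's a minimal sketch: one control statement from a catalog (paraphrasing 800-53's AC-7) translated into requirements an engineer can actually build and test to. The thresholds and wording of the derived requirements are invented:

```python
# A catalog control is a need; a human turns it into buildable requirements.
catalog_need = "AC-7: enforce a limit on consecutive invalid access attempts"

derived_requirements = [
    "Lock the account after 5 consecutive failed logins within 15 minutes",
    "Keep the account locked for 30 minutes or until an admin unlocks it",
    "Log every lockout event to the central audit server",
]

for i, req in enumerate(derived_requirements, 1):
    print(f"{catalog_need} -> REQ-{i}: {req}")
```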
Posted in Risk Management, What Doesn't Work, What Works
Posted May 9th, 2007 by rybolov
Bottom Line Up Front: even if a risk is acceptable on an organizational basis, on a personal basis there is no such thing as an acceptable risk.
We have these great Information Assurance frameworks. They’re scalable, modular, and they do work if you know what you are doing.
But they all fall short in one thing: acceptable risk that is not acceptable. We teach people how to determine if a risk is acceptable. There are several formulas to use; it’s part of the CISSP CBK. At its heart, it’s a cost/benefit/risk comparison. Rationally, we know how to do this as an organization.
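Here's what that organizational calculation looks like as a minimal sketch, using the standard CISSP-style rule of thumb: a control is rationally worth buying when the reduction in annualized loss expectancy exceeds its annual cost. All figures are invented:

```python
# Value of a control = (ALE before) - (ALE after) - (annual cost of control).
ale_before = 200_000    # expected annual loss with no control
ale_after = 40_000      # expected annual loss with the control in place
control_cost = 100_000  # annual cost of the control

value = ale_before - ale_after - control_cost
print(f"annual value of the control: ${value:,}")
print("buy it and accept the residual risk" if value > 0
      else "look for a cheaper control")
```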
However, on a personal level, we live in a risk-avoidance, zero-defects society. To an individual, taking a risk means that you might have personal repercussions, and that is not acceptable. The end result is that we’re back to risk avoidance, which takes us back to the neolithic era of risk management.
So we’re stuck in this dual-standard security world with no end in sight. How do we fix it? I’m not sure, but somehow in order to have effective risk management, you need to establish a culture where it’s OK to fail occasionally.
Posted in Risk Management
Posted April 19th, 2007 by rybolov
Back in my army days, most good leaders carried around a book with info on their squad. We jokingly called these our “Book of Death”.
Anyway, I aggregated all the spreadsheets I’ve used over the past year, sanitized them, genericized them, and put them up on the web. Feel free to borrow heavily, or let me know what needs to be added or expanded.
Really, I’m just testing the waters to see if there is interest in taking something like this on as a full project or if it should remain a Mike Smith skunkworks project like it has been so far.
CISO’s Book of Death V0.1
Posted in Army, ISM-Community, Risk Management, What Works