The 10 CAG-egorically Wrong Ways to Introduce Standards

Posted February 20th, 2009

The Consensus Audit Guidelines (CAG) appear, at this point, to be a reasonable set of guidelines for mitigating some human threats. I’m looking forward to seeing what CAG offers and have no doubt there will be worthwhile and actionable controls in the document. That said, there are significant reasons to approach CAG with skepticism and assess it critically.

The motivation for CAG is described in a set of slides at the Gilligan Group site. It starts with a focus on what CIOs fear most: attacks, reduced operational capability, public criticism, data loss, etc. Then it rightly questions whether FISMA is adequately addressing those problems. It isn’t, and that gap is the genesis of the CAG.

Consensus photo by Eirik Newth.

Unfortunately, CAG subsequently develops by pairing this first valid premise with a set of false premises. These propositions are drawn from slides at gilligangroupinc.com, attributed to John Gilligan or Alan Paller:

  1. All that matters are attacks. The central tenet of Bush’s Comprehensive National Cyber Initiative (CNCI) is adopted as the CAG theme: “Defense Must Be Informed by the Offense”. CAG envisions security as defense against penetration attacks. As any seasoned security practitioner knows, attacks are a limited subset of the threats to confidentiality, integrity and availability that information and information systems face.
  2. Security through obscurity. CAG seems to have taken the unspoken CNCI theme to heart too: “The most effective security is not exposed to public criticism.” Since its very public December 11th announcement, no drafts have been made publicly available for comment.
  3. False dichotomy. CAG has been promoted as an alternative to the OMB/NIST approach to FISMA. It isn’t. An alternative would target a fuller range of threats to information and information system security. CAG should be considered a complement to NIST guidance, an addendum of security controls focused on defense against penetration by hackers. NIST has even acted on this approach by incorporating some CAG controls into the 800-53 Rev. 3 catalog of controls.
  4. There is too much NIST guidance! This is the implication of one CAG slide that lists 1200 pages of guidance, 15 FIPS docs and the assorted Special Publications not related to FISMA as detriments to security. It’s like complaining that Wikipedia has too many articles for it to contribute to improved learning. Speaking as someone who scrambled to secure Federal systems before FISMA and NIST’s extensive guidance existed, I can say that having that documentation greatly improves my ability to efficiently and effectively secure systems.
  5. NIST guidance doesn’t tell me how to secure my systems! NIST’s FISMA guidance doesn’t step you through securing your SQL Server. The Chairman of the Joint Chiefs also doesn’t deliver your milk. Why not? It’s not their job. NIST’s FISMA guidance helps you to assess the risks to the system, decide how to secure it, secure it accordingly, check that a minimum of controls are in place and then accept responsibility for operating the system. NIST also provides documents, checklists, repositories, standards, working groups and validation of automated tools that help with the actual security implementation.
  6. Automated security controls negate human errors. If all threats were attacks, this would be nearly plausible. But not all security is technical, and not all threats come from the Internet. DHS, NIST, MITRE, and their partners have pursued automated mechanisms to enforce and audit security controls for years, but automation can only go so far. Human errors, glitches, unexpected conflicts and operational requirements will always factor into the implementation of security.
  7. Audit compatibility as a hallmark of good security. There is a conflict of focus at the heart of the CAG: it seeks both to improve its subset of security and to improve audit compatibility. For technical controls this is somewhat achievable using automation, something NIST has pursued for years with government and industry partners. For operational and management controls it results in audit checklists. But audits are fundamentally concerned with testing the particular and repeatable; security needs to focus on evaluating the whole to ensure the necessary results. An audit sees whether antivirus software is installed; an evaluation sees whether the antivirus software is effective.
  8. Metrics, but only these metrics over here. When selecting the current crop of CAG controls, decisions on what to include were reportedly based on metrics of the highest threats. Great idea: a quantitative approach often discovers counter-intuitive facts. Except the metrics were cherry-picked. Instead of looking at all realized threats or real threat impacts, only a count of common penetration attacks was considered.
  9. With a sample of 1. As a basis for determining what security should focus on, the whole breadth of the security profession was queried, so long as they were penetration testers. Yes, penetration testers are some very smart and talented people, but penetration testing is to security what HUMINT is to intelligence services: important players, expert practitioners, but limited in scope and best used in conjunction with other intelligence assets.
  10. Assessments rely on paper artifacts. The NIST guidance does not require paper artifacts. The first line in the NIST SP 800-53A preface is, “Security control assessments are not about checklists, simple pass-fail results, or generating paperwork to pass inspections or audits-rather, security controls assessments are the principal vehicle used to verify that the implementers and operators of information systems are meeting their stated security goals and objectives.” NIST SP 800-37 specifically and repeatedly states, “Security accreditation packages can be submitted in either paper or electronic format.”
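The audit-versus-evaluation distinction in point 7 can be illustrated with a toy sketch (every name here, including the stand-in “detector,” is hypothetical and not real tooling): a checklist audit passes as long as the control is present, while an evaluation exercises the control and can catch the case where it is present but ineffective.

```python
# Toy illustration (hypothetical names): an audit asks whether a control is
# installed; an evaluation asks whether the control achieves its result.

def audit(system):
    """Checklist-style audit: a binary presence test at a point in time."""
    return system.get("antivirus_installed", False)

def evaluate(system, test_samples):
    """Evaluation: exercise the control against known-bad samples."""
    detect = system.get("detector", lambda sample: False)
    caught = sum(1 for sample in test_samples if detect(sample))
    return caught / len(test_samples)

# A system can pass the audit while failing the evaluation: here the
# antivirus is installed, but its signatures only cover one old sample.
stale_av = {
    "antivirus_installed": True,
    "detector": lambda sample: sample == "old_known_malware",
}
samples = ["old_known_malware", "new_malware_a", "new_malware_b"]

print(audit(stale_av))                        # True: checklist satisfied
print(round(evaluate(stale_av, samples), 2))  # 0.33: control ineffective
```

The point of the sketch is that the two questions are different measurements, and only the second one tells you whether the control is doing its job.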

CAG is a missed opportunity. Addressing the myriad problems with our current FISMA regime could do a lot of good. The problems with guidance have many causes but can be addressed through cooperative development of best practices outside of NIST. The Assessment Cases for SP 800-53A are an example of how cooperative development can achieve great results and provide clear guidance. Other problems exist and can be addressed with better training and community development.

My hope is that the Consensus Audit Guidelines will move towards a more open, collaborative development environment. The first release is sure to deliver useful security controls against penetration attacks. As with all good security practices it will likely need to go through a few iterations and lots of critical assessment to mature. An open environment would help foster a more complete consensus.

Consensus photo by mugley.




Posted in BSOFH, FISMA, Rants, Technical, What Doesn't Work, What Works | 9 Comments »

Your Friendly Neighborhood C&A Podcast Panel

Posted February 17th, 2009

This weekend, Joe Faraone (Vlad the Impaler), Graydon McKee, and I teamed up to be a guest panel for Michael Santarcangelo’s Security Catalyst podcast.  We wax esoteric on the fine points of certification and accreditation and what kind of value it brings to an agency or company that does it right.

You can check it out here.




Posted in Speaking, What Works | No Comments »

The Authorizing Official and the Lolcat

Posted February 12th, 2009

Hot on the heels of our DAA presentation, the Guerilla CISO is proud to present our lolcat Authorizing Official.

Yes, it’s all about the budget. If you do not have a budget, you do not have the ability to change things. We have a word for people who have all the authority of an Authorizing Official but none of the budget: scapegoat.

[lolcat image]

And since I’m in Toronto for an extended stay thanks to the weather, today is a 2-fer:
[lolcat image]




Posted in IKANHAZFIZMA | No Comments »

The Accreditation Decision and the Authorizing Official

Posted February 10th, 2009

The accreditation decision is one of the key activities in how the US Government secures its systems. It’s also one of the most misunderstood. This slideshow aims to explain the role of the Authorizing Official and to give you some insight into why and how accreditation decisions are made.

I would like to give a big thanks to Joe Faraone and Graydon McKee who helped out.

The presentation is licensed under Creative Commons, so feel free to download it, email it, and use it in your own training.




Posted in FISMA, NIST, Risk Management, Speaking | 5 Comments »

Could the Titanic have changed course?

Posted January 6th, 2009

Rybolov really struck a chord with me (as he usually does) with his blog entry on his decision that S.3474 was a bad thing. It reminds me of a conversation I had with a friend recently. Basically, she asked me why bad things happen even after smart people put their heads together and try to deal with the problem before facing a crisis. Intrigued by her question, I asked her what specifically she was asking about. She shared that she had been thinking about the tragedy of the Titanic sinking.

Of course she was referring to the sinking of the passenger ship RMS Titanic on the evening of 14 April 1912. She made two points: first, that the experts had declared the ship “unsinkable” and she wondered how they could be so wrong; second, she wondered how the ship could be so poorly equipped with boats and safety equipment that there was such great loss of life.

The Titanic’s Disaster photo by bobster1985.

Little did she know that I have had an odd fascination with the Titanic disaster since childhood and have read much of the common public material about the event. So, I replied that no expert had ever declared her unsinkable; that was basically something made up by the press and the dark spineless things that hang around the press. However, I added that the designers and owners of the ship had made much of her advanced safety features when she was launched. A critical feature was the inclusion of water-tight bulkheads in her design, something of an advanced and novel feature at the time. What it meant was that you could poke a pretty big hole in the ship, and as long as the hole was not spread over several of these water-tight compartments, she would stay afloat. The problem was that the iceberg she hit (the Titanic, not my friend) ignored all of this and tore a big gash along about a third of the length of the ship.

So, my friend pressed again about the lack of safety equipment, especially lifeboats. I told her that the problem here was that the Titanic did indeed meet all of the safety requirements of the time, and that a big part of the problem was that those requirements were drafted in 1894, a period of rapid change in the size and design of ships of this kind. The regulations indicated that all passenger ships over 10,000 tons required 16 lifeboats, and that’s how many the Titanic had. At the time the regulations were written there were hardly any ships over 10,000 tons in size; when Titanic was launched, she was designed to be over 50,000 tons fully loaded. The fact was that even if each of those lifeboats was fully loaded, they could barely hold half of the passengers and crew. What is worse, when the ship did sink, not all of the boats were usable because of the speed and angle at which the ship began sinking.

So, the bottom line was that when the Titanic was reviewed by the safety accountants, they took out their checklist and went over the ship with a fine-toothed comb. When the day was done, the ship fully met all the safety criteria and was certified as safe.

This is where I see the parallels between the root causes of the Titanic disaster and the odd situation we find ourselves in today in terms of IT security. Security by checklist, especially out-of-date checklists, simply doesn’t work. Moreover, the entire mental framework that mixes up accounting practices with security discipline and research is an utter failure. Audits only uncover the most egregious security failures, and they uncover them at a point in time. The result is that audits can be gamed, and even ignored. On the other hand, formal reviews by experienced security professionals are rarely ignored. Sometimes not all of the resources are available to mitigate some of the vulnerabilities pointed out by the professionals. And sometimes there is debate about the validity of specific observations made by security professionals. But they are rarely ignored.

Interestingly enough, because of the mixed IT security record of many government agencies, Congress is proposing – more audits! It seems to me what they should be considering is strengthening the management of IT security and moving from security audits, often performed by unqualified individuals and teams, toward security assessments conducted by security professionals. And since professionals are conducting these proposed assessments, they should be required to comment on the seriousness of deficiencies and possible mitigation actions. The professionals should also be required to report on the adequacy of funding, staffing and higher management support. I don’t really see any point in giving a security program a failing grade if the existing program is well managed but subverted and underfunded by the department’s leadership.




Posted in FISMA, NIST, Risk Management, The Guerilla CISO | 4 Comments »

When the Feds Come Calling

Posted October 21st, 2008

I’ve seen the scenario about a dozen times in the last 2 months–contractors and service providers of all sorts responding to the Government’s security requirements in the middle of a contract.  It’s almost reached the stage where I have it programmed as a “battle drill” ala the infantryman’s Battle Drill 1A, and I’m here to share the secret of negotiating these things.

Let’s see, without naming names, let’s look at where I’ve seen this come up:

  • Non-Government Organizations that assist the Government with para-Government services to the citizens
  • Companies doing research and development funded by the Government–health care and military
  • Universities who do joint research with the Government
  • Anybody who runs something that the Government has designated as “critical infrastructure”
  • State and local governments who use Federal Government data for their social programs (unemployment system, food stamps) and homeland security-esque activities (law enforcement, disaster response)
  • Health Care Providers who service Government insurance plans

For the purposes of this blog post, I’ll refer to all of these groups as contractors or service providers.  Yes, I’m mixing analogies, making huge generalizations, and I’m not precise at all.  However, these groups should all have the same goals and the approach is the same, so bear with me while I lump them all together.

Really, guys, you need to understand both sides of the story because this is a cause for negotiation.  I’ll explain why in a minute.

On the Government side:  Well, we have some people we share data with.  It’s not a lot, and it’s sanitized so the value of it is minimal except for the Washington Post Front Page Metric.  Even so, the data is PII that we’ve taken an anonymizer to so that it’s just statistical data that doesn’t directly identify anybody.  We’ve got a pretty good handle on our own IT systems over the past 2 years, so our CISO and IG want us to focus on data that goes outside of our boundaries.  Now I don’t expect/want to “own” the contractor’s IT systems because they provide us a service, not an IT system.  My core problem is that I’m trying to take an existing contract and add security requirements retroactively to it and I’m not sure exactly how to do that.

Our Goals:

  • Accomplishing the goals of the program that we provided data to support
  • Protection of the data outside of our boundaries
  • Proving due-diligence to our 5 layers of oversight that we are doing the best we can to protect the data
  • Translating what we need into something the contractor understands
  • Being able to provide for the security of Government-owned data at little to no additional cost to the program

On the contractor/service provider side:  We took some data from the Government and now they’re coming out of the blue saying that we need to be FISMA-compliant.  Now I don’t want to sound whiney, but this FISMA thing is a huge undertaking and I’ve heard that for a small business such as ourselves, it can cripple us financially.  While I still want to help the Government add security to our project, I need to at least break even on the security support.  Our core problem is to keep security from impacting our project’s profitability.

Our Goals:

  • Accomplishing the goals of the program that we were provided data to support
  • Protection of the data given to us to keep the Government happy and continuing to fund us (the spice must flow!)
  • Giving something to the Government so that they can demonstrate due-diligence to their auditors and IG
  • Translating what we do into something the Government understands
  • Keeping the cost of security to an absolute minimum or at least funded for what we do add because it wasn’t scoped into the SOW

Hmm, looks like these goals are very much in alignment with each other.  About the only thing we need to figure out is scope and cost, which sounds very much like a negotiation.

Hardcore Negotiation Skills photo by shinosan.

Little-known facts that might help in our scenario here:

  • Section 2.4 of SP 800-53 discusses the use of compensating controls for contractor and service-provider systems.
  • One of the concepts in security and the Government is that agencies are to provide “adequate security” for their information and information systems.  Have a look at FISMA and OMB Circular A-130.
  • Repeat after me:  “The endstate is to provide a level of protection for the data equivalent or superior to what the Government would provide for that data.”
  • Appendix G in SP 800-53 has a traceability matrix through different standards that can serve as a “Rosetta Stone” for understanding each other.  Note to NIST:  let’s throw in PCI-DSS, Sarbanes-Oxley,  and change ISO 17799 to 27001.
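That Appendix G “Rosetta Stone” is essentially a traceability lookup, which can be sketched as a small table. The mappings below are illustrative placeholders I chose for the example, not the actual Appendix G matrix:

```python
# Sketch of a traceability matrix between control frameworks. The entries
# are illustrative placeholders, not the real SP 800-53 Appendix G table.
CONTROL_MAP = {
    "AC-2": {"iso27001": "A.9.2"},   # account management (assumed mapping)
    "CP-9": {"iso27001": "A.12.3"},  # system backup (assumed mapping)
}

def translate(nist_control, framework):
    """Return the rough equivalent of a NIST control in another framework,
    or None if no mapping is recorded."""
    return CONTROL_MAP.get(nist_control, {}).get(framework)

print(translate("AC-2", "iso27001"))   # A.9.2
print(translate("ZZ-99", "iso27001"))  # None: no mapping recorded
```

A table like this is exactly what makes the contractor/Government translation goals above tractable: each side can keep speaking its own control language while pointing at the same row.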

So what’s a security geek to do?  Well, this, dear readers, is Rybolov’s 5-fold path to Government/contractor nirvana:

  1. Contractor and Government have a kickoff session to meet each other and build rapport, starting from a common ground such as how you both have similar goals.  The problem really is one of managing each other’s expectations.
  2. Both Government and Contractor perform internal risk assessment to determine what kind of outcome they want to negotiate.
  3. Contractor and Government meet a week later to negotiate on security.
  4. Contractor provides documentation on what security controls they have in place.  This might be as minimal as a contract with the guard force company at their major sites, or it might be just employee background checks and
  5. Contractor and Government negotiate for a 6-month plan-of-action.  For most organizations considering ISO 27001, this is a good time to make a promise to get it done.  For smaller organizations or data , we may not even

Assumptions and dependencies:

  • The data we’re talking about is low-criticality or even moderate-criticality.
  • This isn’t an outsourced IT system that could be considered government-owned, contractor-operated (GO-CO)



Posted in FISMA, Outsourcing | 1 Comment »
