IKANHAZFIZMA Tackles the Consensus Audit Guidelines

Posted February 26th, 2009 by

CAG Fever… we haz it here at Guerilla-CISO.  So far the konsensus is that CAG works well as a “Best Practices” document but not really as an auditable standard.  We’re thinking that CAG will provide the rope with which our IGs and GAO will hang us.





Posted in IKANHAZFIZMA | 3 Comments »

Clouds of CAG Confusion

Posted February 26th, 2009 by

Did you know that the US Department of Defense published the Consensus Audit Guidelines?  Yes, it’s true!  At least according to the title of a ZDNet UK article: “US Dept of Defense lists top 20 security controls”.

There is a haze of confusion settling around the Consensus Audit Guidelines’ origins.  The text of the CAG press release (pdf) is clear that it was developed by a consortium of federal agencies and private organizations.  It further states that CAG is part of the Center for Strategic and International Studies’ work on the CSIS Commission report on Cybersecurity for the 44th Presidency.  Yet the title of the same press release says it comes from a “Consortium of US Federal Cybersecurity Experts,” which is substantively different from a consortium of federal agencies and private organizations.

The press release relates that CAG was initiated when a team discovered similarities between massive data losses by the US defense industrial base (DIB) and attacks on Federal agencies.  The project then grew as more agencies agreed to become involved.  Following the current public review of CAG, the next steps for development are listed as pilot implementations at government agencies, a CIO Council review, and an IG review.  The clear inference of this origin story and enumeration of steps is that the project has official Federal backing.

Let’s test that inference.  Click here for a Google search of the entire *.gov hierarchy for “Consensus Audit Guidelines”.  As I write this, there is exactly one entry, from oregon.gov.  A search using usa.gov (which uses live.com) yields the same result.  Looking around the various organizations listed as contributors doesn’t yield any official announcements either.

So why the confusion in the press?  Why does it appear from the news articles that this is a Federal project?  I wouldn’t speculate.

On a slightly different topic, I’ve been reading through the Consensus Audit Guidelines themselves and enjoying the guidance they provide.  I’ll write up a more complete analysis once I have finished my read-through.  My initial impression is that the CAG controls provide worthwhile recommendations but the framework for implementation needs development.

All Aboard the Astroturfmobile photo by andydr.  Perhaps an explanation is in order….




Posted in Technical, What Doesn't Work | 7 Comments »

Digital Forensics and the case for change

Posted February 24th, 2009 by

A couple of weeks ago I posted a whitepaper, “The History of Digital Forensics”. I am just delighted that Rybolov gave me the opportunity. I am also delighted with all of the comments and questions that have come in in response to the posting of the whitepaper. I want to thank each and every one of you who responded. One of the most common themes is that while I did a fine job of outlining the history of digital forensics, many security and forensics professionals find themselves in organizations that have only the most rudimentary forensics policies, procedures, or even capabilities. For those of you who offered such comments, you have my complete sympathy.

However, I should also point out that many of the organizations that have well-planned and well-supported digital forensics programs are only in that condition because they learned of their security and forensics needs the hard way. I think many IT security professionals can relate when I write that no one appreciates the need for better security and procedures more than the members of a team that have just completed an incident response without the benefit of sufficient planning and support. Many of us have been there, either as a member of an internal ad hoc incident response team or as part of a team of outside consultants called in to assist. Incident response is difficult and filled with tension. It is even more tension-filled when you are part of a team that has to invent procedures with each step it takes and also defend them in real-time, often to many successive levels of management. In the last several incident response engagements I have led, I had no opportunity to do any technical work at all. My entire time was spent trying to hammer out processes and procedures, and generally educating management and explaining the process to them. Since incident response usually cuts across every part and work-unit in an organization, each with its own way of looking at things and with its own interests and concerns, the process also involved a lot of repetition, sensitivity, and, frankly, hand-holding. I have never had a technical member of the team say they envied me in that role.

However, in each case an important part of my mission was also to document the policies, procedures, and ‘lessons learned’ and act as an advocate for incorporating this body of knowledge into standard operating procedures. In some cases I was successful; in others I think the organization was so traumatized by the incident itself that they were burnt out and incapable of taking the next step at that time. Fortunately, many of them contacted me later and we had some wonderful meetings in a pretty relaxed and yet focused atmosphere.

I guess what I’m trying to do here is make two points. The first is that even in the thick of it, you should always take a mental step or two back and take in the bigger picture. The second is that when you are acting as an advocate trying to advance a security or digital forensics program, always put a solution in front of your management, never a problem. And to make it easier for your manager to pick up the ball and support your idea at the next level, make sure that you make a business case for the plan, not a technical case.

In the post-incident world, the window of opportunity for change is small. Senior managers and business leaders must get on with their day-to-day business responsibilities. Dwelling on a security incident is counter-productive for them. However, their receptiveness to change in the form of well reasoned and prudent measures that are integrated into the business process is great. Making the case for security is perhaps the most important part of our job. We must always make the case when the opportunity for change presents itself.

US Cryptologic Museum Pueblo Incident photo by austinmills.  More information about the Pueblo Incident is here.




Posted in The Guerilla CISO | 1 Comment »

The 10 CAG-egorically Wrong Ways to Introduce Standards

Posted February 20th, 2009 by

The Consensus Audit Guidelines (CAG) appear, at this point, to be a reasonable set of guidelines for mediating some human threats. I’m looking forward to seeing what CAG offers and have no doubt there will be worthwhile and actionable controls in the document. That said, there are significant reasons to approach CAG with skepticism and assess it critically.

The motivation for CAG is described in a set of slides at the Gilligan Group site. It starts with a focus on what CIOs fear most: attacks, reduced operational capability, public criticism, data loss, etc. Then it rightly questions whether FISMA is adequately addressing those problems. It isn’t, and this is the genesis of the CAG.

Consensus photo by Eirik Newth.

Unfortunately CAG subsequently develops by pairing this first valid premise with a set of false premises.  These propositions are drawn from slides at gilligangroupinc.com, attributed to John Gilligan or Alan Paller:

  1. All that matters are attacks. The central tenet of Bush’s Comprehensive National Cyber Initiative (CNCI) is adopted as the CAG theme: “Defense Must Be Informed by the Offense”. CAG envisions security as defense against penetration attacks. As any seasoned security practitioner knows, attacks are a limited subset of the threats to confidentiality, integrity and availability that information and information systems face.
  2. Security through obscurity. CAG seems to have taken the unspoken CNCI theme to heart too: “The most effective security is not exposed to public criticism.” Since its very public December 11th announcement, no drafts have been made publicly available for comment.
  3. False dichotomy. CAG has been promoted as an alternative to the OMB/NIST approach to FISMA. It isn’t. An alternative would target a fuller range of threats to information and information system security. CAG should be considered a complement to NIST guidance, an addendum of security controls focused on defense against penetration by hackers. NIST has even acted on this approach by including some CAG controls into the 800-53 Rev. 3 catalog of controls.
  4. There is too much NIST guidance! This is the implication of one CAG slide that lists 1200 pages of guidance, 15 FIPS docs and the assorted Special Publications not related to FISMA as detriments to security. It’s like complaining that Wikipedia has too many articles to contribute to improved learning. Speaking as someone who scrambled to secure Federal systems before FISMA and NIST’s extensive guidance, having that documentation greatly improves my ability to efficiently and effectively secure systems.
  5. NIST guidance doesn’t tell me how to secure my systems! NIST’s FISMA guidance doesn’t step you through securing your SQL Server. The Chairman of the Joint Chiefs also doesn’t deliver your milk. Why not? It’s not their job. NIST’s FISMA guidance helps you to assess the risks to the system, decide how to secure it, secure it accordingly, check that a minimum of controls are in place and then accept responsibility for operating the system. NIST also provides documents, checklists, repositories, standards, working groups and validation of automated tools that help with the actual security implementation.
  6. Automated security controls negate human errors. With the premise of all threats being attacks this is nearly a plausible premise. But not all security is technical. Not all threats come from the Internet. DHS, NIST, Mitre, and their partners have pursued automated security controls to enforce and audit security controls for years but automated security controls can only go so far. Human errors, glitches, unexpected conflicts and operational requirements will always factor into the implementation of security.
  7. Audit compatibility as a hallmark of good security. There is a conflict of focus at the heart of the CAG: it seeks both to improve its subset of security and to improve audit compatibility. For technical controls this is somewhat achievable using automation, something NIST has pursued for years with government and industry partners. For operational and management controls it results in audit checklists. But audits are fundamentally concerned with testing the particular and repeatable, while security needs to evaluate the whole to ensure the necessary results. An audit sees if antivirus software is installed; an evaluation sees if the antivirus software is effective.
  8. Metrics, but only these metrics over here. When selecting the current crop of CAG controls, decisions on what to include were reportedly based on metrics of the highest threats. Great idea: a quantitative approach often discovers counter-intuitive facts. But the metrics were cherry-picked. Instead of looking at all realized threats or real threat impacts, only a count of common penetration attacks was considered.
  9. With a sample of 1. As a basis for determining what security should focus on, the whole breadth of the security profession was queried, so long as they were penetration testers. Yes, penetration testers are some very smart and talented people, but penetration testing is to security what HUMINT is to intelligence services: important players and expert practitioners, but limited in scope and best used in conjunction with other intelligence assets.
  10. Assessments rely on paper artifacts. The NIST guidance does not require paper artifacts. The first line in the NIST SP 800-53A preface is, “Security control assessments are not about checklists, simple pass-fail results, or generating paperwork to pass inspections or audits-rather, security controls assessments are the principal vehicle used to verify that the implementers and operators of information systems are meeting their stated security goals and objectives.” NIST SP 800-37 specifically and repeatedly states, “Security accreditation packages can be submitted in either paper or electronic format.”

CAG is a missed opportunity. Addressing even a few of the myriad problems with our current FISMA regime could do a lot of good. The problems with guidance have many causes but can be addressed through cooperative development of best practices outside of NIST. The Assessment Cases for SP 800-53A are an example of how cooperative development can achieve great results and provide clear guidance. Other problems exist and can be addressed with better training and community development.

My hope is that the Consensus Audit Guidelines will move towards a more open, collaborative development environment. The first release is sure to deliver useful security controls against penetration attacks. As with all good security practices it will likely need to go through a few iterations and lots of critical assessment to mature. An open environment would help foster a more complete consensus.

Consensus photo by mugley.




Posted in BSOFH, FISMA, Rants, Technical, What Doesn't Work, What Works | 9 Comments »

Beware the Cyber-Katrina!

Posted February 19th, 2009 by

Scenario: American Internet connections are attacked.  In the resulting chaos, the Government fails to respond at all, primarily because of infighting over jurisdiction issues between responders.  Mass hysteria ensues–40 years of darkness, cats sleeping with dogs kind of stuff.

Sounds similar to New Orleans after Hurricane Katrina?  Well, this now has a name: Cyber-Katrina.

At least, this is what Paul Kurtz talked about this week at Black Hat DC.  Now I understand what Kurtz is saying:  that we need to figure out the national-level response while we have time so that when it happens we won’t be frozen with bureaucratic paralysis.  Yes, it works for me, I’ve been saying it ever since I thought I was somebody important last year.  =)

But Paul…. don’t say you want to create a new Cyber-FEMA for the Internet.  That’s where the metaphor you’re using failed–if you carry it too far, what you’re saying is that you want to make a Government organization that will eventually fail when the nation needs it the most.  Saying you want a Cyber-FEMA is just an ugly thing to say after you think about it too long.

What Kurtz really meant to say is that we don’t have a national-level CERT that coordinates between the major players–DoD, DoJ, DHS, state and local governments, and the private sector–for large-scale incident response.  What Kurtz is really saying, if you read between the lines, is that US-CERT needs to be a national-level CERT and needs funding, training, people, and connections to do this mission.  In order to fulfill what the administration wants, needs, and is almost promising to the public through their management agenda, US-CERT has to get real big, real fast.

But the trick is, how do you explain this concept to somebody who doesn’t have either the security understanding or the national policy experience to understand the issue?  You resort back to Cyber-Katrina and maybe bank on a little FUD in the process.  Then the press gets all crazy on it–like breaking SSL means Cyber-Katrina Real Soon Now.

Now for those of you who will never be a candidate for Obama’s Cybersecurity Czar job, let me break this down for you big-bird stylie.  Right now there are 3 major candidates vying to get the job.  Since there is no official recommendation (and there probably won’t be until April when the 60 days to develop a strategy is over), the 3 candidates are making their move to prove that they’re the right person to pick.  Think of it as their mini-platforms, just look out for when they start talking about themselves in the 3rd person.

FEMA Disaster Relief photo by Infrogmation. Could a Cyber-FEMA coordinate incident response for a Cyber-Katrina?

And in other news, I3P (with ties to Dartmouth) has issued their National Cyber Security Research and Development Challenges document which um… hashes over the same stuff we’ve seen from the National Strategy to Secure Cyberspace, the Systems and Technology Research and Design Plan, the CSIS Recommendations, and the Obama Agenda.  Only the I3P report has all this weird psychologically-oriented mumbo-jumbo that when I read it my eyes glazed over.

Guys, I’ve said this so many times I feel like a complete cynic: talk is cheap, security isn’t.  It seems like everybody has a plan but nobody’s willing to step up and fix the problem.  Not only that, but they’re taking each other’s recommendations, throwing them in a blender, and reissuing their own.  Wake me up when somebody actually does something.

It leads me to believe that, once again, those who talk don’t know, and those who know don’t talk.

Therefore, here’s the BSOFH’s guide to protecting the nation from Cyber-Katrina:

  • Designate a Cybersecurity Czar
  • Equip the Cybersecurity Czar with a $100B/year budget
  • Nationalize Microsoft, Cisco, and one of the major all-in-one security companies (Symantec)
  • Integrate all the IT assets you now own and force them to write good software
  • Public execution of any developer who uses strcpy() because who knows what other stupid stuff they’ll do
  • Require code review and vulnerability assessments for any IT product that is sold on the market
  • Regulate all IT installations to follow Government-approved hardening guides
  • Use US-CERT to monitor the military-industrial complex
  • ?????
  • Live in a secure Cyber-World

But hey, that’s not the American way–we’re not socialists, damnit! (well, except for mortgage companies and banks and automakers and um yeah….)  So far all the plans have called for cooperation with the private sector, and that’s worked out just smashingly because of an industry-wide conflict of interest: writing junk software means that you can sell upgrades or new products later.

I think the problem is fixable, but I predict these are the conditions for it to happen:

  • Massive failure of some infrastructure component due to IT security issues
  • Massive ownage of Government IT systems that actually gets publicized
  • Deaths caused by massive IT Security fail
  • Osama Bin Laden starts writing exploit code
  • Citizen outrage to the point where my grandmother writes a letter to the President

Until then, security issues will always be second fiddle to wars, the economy, presidential impeachments, and a host of a bazillion other things.  Because of this, security conditions will get much, much worse before they get better.

And then the cynic in me can’t help but think that, deep down inside, what the nation needs is precisely an IT Security Fail along the lines of 9-11/Katrina/Pearl Harbor/Dien Bien Phu/Task Force Smith.




Posted in BSOFH, Public Policy, Rants | 6 Comments »

Evolution of Penetration Testing: Part 1

Posted October 13th, 2008 by

Penetration testing is a controversial topic with an interesting history. It is made all that much more controversial and perplexing because of a common disconnect between the service provider and the consumer.

Penetration testing started as a grey art that was often practiced and delivered in an unstructured and undisciplined manner by reformed or semi-reformed hackers. Penetration testers used their own techniques and either their own home-grown tools or tools borrowed or traded with close associates. There was little reproducibility or consistency of results or reporting. As a result, the services were hard to integrate into a security program.

As the art evolved it became more structured and disciplined, and tools, techniques, and reporting became more standardized. This evolution was driven by papers, articles, and technical notes that were both formally published and informally distributed. In the end, a standardized methodology emerged that was largely based on the disciplined approach used by the most successful hackers.

Hakker Kitteh photo by blmurch.

At about the same time open-source, government and commercial tools began to emerge that automated many of the steps of the standardized methodology. These tools had two divergent impacts on the art of penetration testing. As these tools were refined and constantly improved they reinforced the standard methodology, provided more consistent and reproducible results and improved and standardized penetration reporting. All of this made penetration testing easier for the consumer to absorb and integrate into security programs. As a result, regulations and security protocols emerged that required penetration and security assessments. Nmap and Nessus are excellent examples of the kind of tools that help shape and push this evolution. And, because of their utility they are still indispensable tools today.

However, because Nessus also helped to automate both data collection and analysis, it lowered the bar for the skills and experience needed to conduct portions of the penetration testing methodology. This lowered the cost of penetration testing and made it much more broadly available, giving rise to so-called “boutique firms.” These firms fall into two broad categories. The first consists of specialized, highly professional firms led by experienced technical security professionals who can translate automated tool output into root-cause analysis of vulnerabilities and security program flaws. The second consists of opportunist firms with just enough knowledge to run automated tools and cut and paste the tool output into client reports. The latter firms are sometimes called “tool-firms” and their employees “tool-boys.”

The latter flourish for two reasons. The first is that they can offer their services at rock-bottom prices. The second is that security organizations are often so ill-informed about the intricacies of the penetration testing process that they can’t make a meaningful distinction between the professional firms and the tool-boys except on the basis of cost.




Posted in Rants, Technical | 2 Comments »


