Ed Bellis’s Little SCAP Project

Posted March 19th, 2009 by

So way back in the halcyon days of 2008, Dan Philpott, Chris Burton, Ian Charters, and I went to the NIST SCAP Conference.  Just by a strange coincidence, Ed Bellis threw out a tweet along the lines of “wow, I wish there was a way to import and export all this vulnerability data” and I replied back with “Um, you mean like SCAP?”

Fast forward 6 months.  Ed Bellis has been busy.  He delivered this presentation at SnowFROC 2009 in Denver:

So some ideas I have about what Ed is doing:

#1 This vulnerability correlation and automation should be part of vulnerability assessment (VA) products.  In fact, most VA products include some kind of ticketing and workflow nowadays if you get the “enterprise edition”. That’s nice, but…

#2 The VA industry is a broken market when it comes to workflow compatibility.  Everybody wants to sell you *their* product to be the authoritative manager. That’s cool and all, but what I really need is the connectors to your competitors’ products so that I can have one database of vulnerabilities, one set of charts to show my auditors, and one trouble ticket system. SCAP helps here but only for static, bulk data transfers–that gets ugly really quickly.

#3 Ed’s correlation and automation software is a perfect community project because it’s a conflict of interest for any VA vendor to write it themselves. And to be honest, I wouldn’t be surprised if there are a dozen skunkworks projects that people will admit to creating just in the comments section of this post. I remember 5 years ago trying to hack together some perl to take the output from the DISA SRR Scripts and aggregate it into a .csv.
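That kind of glue code is still a community need. Here is a minimal sketch of the aggregation step, in Python rather than perl, with an invented one-finding-per-line input format (real DISA SRR output is different):

```python
# Hypothetical aggregator for scanner findings, in the spirit of the
# perl hack described above.  The "id|severity|description" input
# format is invented for illustration; real SRR output differs.
import csv
import io

def aggregate_findings(reports):
    """reports maps hostname -> raw report text; returns CSV text."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["host", "finding_id", "severity", "description"])
    for host, text in sorted(reports.items()):
        for line in text.splitlines():
            if not line.strip():
                continue  # skip blank lines in the report
            finding_id, severity, desc = line.split("|", 2)
            writer.writerow([host, finding_id, severity, desc])
    return out.getvalue()
```

The point isn’t the parsing, it’s the normalization: once every scanner’s output lands in one schema, you get the single database and single set of charts argued for above.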

#4 The web application security world needs to adopt SCAP. So far it’s just been the OS and shrinkwrapped application vendors and the whole race to detection and patching. Now the interesting part to me is that the market is all around tying vulnerabilities to specific versions of software and a patch, where when you get to the web application world, it’s more along the lines of one-off misconfigurations and coding errors. It takes a little bit of a mindshift in the vulnerability world, but that’s OK in my book.

#5 This solution is exactly what the Government needs and is exactly why SCAP was created. Imagine you’re the Federal Government with 3.5 million desktops: the only way you can manage them all is through VA automation and a tool that aggregates information from various VA products across multiple zones of trust, environments, and even organizations.

#6 Help Ed out! We need this.




Posted in Technical, What Works | 4 Comments »

FIPS and the Linux Kernel

Posted March 5th, 2009 by

Recently I was building a new kernel for my firewall and noticed an interesting new option in the Cryptographic API: “FIPS 200 compliance”.

You can imagine how very interesting and somewhat confusing this is to a stalwart FISMA practitioner. Reading through FIPS 200 it’s hard to find mention of cryptography, much less a technical specification that could be implemented in the Linux kernel. FIPS 140, FIPS 197, FIPS 186, FIPS 46 and FIPS 180 standards would be natural fits in the Cryptographic API but FIPS 200? The kernel help description didn’t clear things up:

CONFIG_CRYPTO_FIPS:

This options enables the fips boot option which is
required if you want to system to operate in a FIPS 200
certification. You should say no unless you know what
this is.

Symbol: CRYPTO_FIPS [=n]
Prompt: FIPS 200 compliance
Defined at crypto/Kconfig:24
Depends on: CRYPTO
Location:
-> Cryptographic API (CRYPTO [=y])
Selected by: CRYPTO_ANSI_CPRNG && CRYPTO

Given that examining the kernel code was a little beyond my ken and I couldn’t test to discover what it did, I turned to the third of the 800-53A assessment methods: interview. A little digging on kernel.org turned up the man behind this kernel magic, Neil Horman. He was able to shed some light on what is called the fips_enabled flag.

As it turns out the FIPS 200 compliance function wasn’t as exciting as I’d hoped but it does point to interesting future possibilities.

So what does it do? In the words of Neil Horman, it is a “flag for determining if we need to be operating in some fips_compliant mode (without regard to the specific criteria)”. This means it is sort of a placeholder for future developments so the kernel can operate in a mode that uses a FIPS 140-2 cryptographic module.
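For what it’s worth, kernels built with CONFIG_CRYPTO_FIPS expose the flag to userspace at /proc/sys/crypto/fips_enabled. A small sketch of checking it (the /proc path is the real sysctl; the helper functions themselves are mine, not any kernel interface):

```python
# Sketch of reading the kernel's fips_enabled flag from userspace.
# /proc/sys/crypto/fips_enabled holds "1" when the fips boot option
# is set on a CONFIG_CRYPTO_FIPS kernel, "0" otherwise.

def parse_fips_flag(raw):
    """Interpret the contents of /proc/sys/crypto/fips_enabled."""
    return raw.strip() == "1"

def fips_mode_enabled(path="/proc/sys/crypto/fips_enabled"):
    try:
        with open(path) as f:
            return parse_fips_flag(f.read())
    except FileNotFoundError:
        return False  # kernel built without CONFIG_CRYPTO_FIPS
```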

Did you notice the word that wasn’t included in the last paragraph? Validated. Yes, there are no validated cryptographic modules in the Linux upstream kernel. If you look at the kernel’s Cryptographic API you will find listed the “AES cipher algorithms” and “DES and Triple DES EDE cipher algorithms”. These may be compliant with FIPS standards but they are not validated.

This raises the question: why have a FIPS 200 compliance flag if you can’t meet the FIPS 140-2 requirement? This is the interesting part. Let’s say a distro decides it wants to become very FISMA friendly and get its kernel’s FIPS 140-2 cryptographic module validated. Well, if the validation of the OpenSSL VCM is an apt example, the distro’s Linux kernel will need to operate in a FIPS compliant mode to verifiably load the cryptographic module. So the inclusion of the fips_enabled flag enables future compliance.

Sadly, a single Linux distro getting its cryptographic module validated is unlikely to translate to the upstream kernel having a validated cryptographic module. If you look at the catalog of FIPS 140-2 VCMs, the modules are only validated for particular code versions and operating modes. As the upstream kernel code won’t likely see the revisions made by the downstream distro to achieve the VCM until after the VCM is issued, it doesn’t inherit the validation.

Polyester Resin Kernel photo by  Marshall Astor – Food Pornographer.

Two possible scenarios were discussed with Neil to allow for upstream Linux kernel incorporation of a VCM.

The first scenario would be that the upstream kernel gets all the revisions made by the downstream distro to gain the VCM designation, then goes through the process to gain the VCM itself. Unfortunately, as the code is under constant revision and can’t be locked, the VCM would be invalidated as soon as a revision was committed to the code base. Only a particular build of the Linux kernel could claim to be validated.

The second scenario would be a revision to the Linux kernel that allowed the downstream distro’s VCM to be loaded instead of the standard Linux Cryptographic API. When asked about this scenario, Neil had this to say:

“That said, theres no reason the crypto api couldn’t be ripped out and replaced with a different implementation, one that is maintained independently and its certification kept up. Of course, anyone so doing would need to keep up with the pace of kernel development, and that in turn brings the need for recertification, so its rather a lost effort in my opinion. I certainly wouldn’t recommend doing so, its just too much work.”

So the solution would either be short lived and costly or long lived and insecure.

Sadly this means that there is no easy way to include a FIPS 140-2 VCM within the upstream Linux kernel. But each distro can modify its Cryptographic API and validate a cryptographic module to allow for FIPS 200 compliance. With the FIPS 200 compliance flag now in the Linux kernel, it is possible for this to be verified. And that’s a happy thought for Federal Linux users.

My many thanks to Neil Horman, without whom I’d have nothing to write.




Posted in FISMA, Technical | No Comments »

Clouds of CAG Confusion

Posted February 26th, 2009 by

Did you know that the US Department of Defense published the Consensus Audit Guidelines?  Yes, it’s true!  At least according to a ZDNet UK article titled “US Dept of Defense lists top 20 security controls”.

There is a haze of confusion settling around the Consensus Audit Guidelines’ origins.  The text of the CAG press release (pdf) is clear that it was developed by a consortium of federal agencies and private organizations.  It further states that CAG is part of the Center for Strategic and International Studies’ work on the CSIS Commission report on Cybersecurity for the 44th Presidency.  The title of the CAG press release is equally clear that it is from a “Consortium of US Federal Cybersecurity Experts,” which is substantively different from a consortium of federal agencies and private organizations.

The press release relates that CAG was initiated when a team discovered similarities between massive data losses by the US defense industrial base (DIB) and attacks on Federal agencies.  The project then grew as more agencies agreed to become involved.  Following the current public review of CAG, the next steps for development are listed as pilot implementations at government agencies, a CIO Council review, and an IG review.  The clear inference of this origin story and enumeration of steps is that the project has official Federal backing.

Let’s test that inference.  Click here for a Google search of the entire *.gov hierarchy for “Consensus Audit Guidelines”.  As I write this there is exactly one entry.  From oregon.gov.  A search using usa.gov (which uses live.com) has the same results.  Looking around the various organizations listed as contributors doesn’t yield any official announcements.

So why the confusion in the press?  Why does it appear from the news articles that this is a Federal project?  I wouldn’t speculate.

On a slightly different topic, I’ve been reading through the Consensus Audit Guidelines themselves and enjoying the guidance it provides.  I’ll write up a more complete analysis of it once I have finished my read through.  My initial impression is that CAG controls provide worthwhile recommendations but the framework for implementation needs development.

All Aboard the Astroturfmobile photo by andydr.  Perhaps an explanation is in order….




Posted in Technical, What Doesn't Work | 7 Comments »

The 10 CAG-egorically Wrong Ways to Introduce Standards

Posted February 20th, 2009 by

The Consensus Audit Guidelines (CAG) appear, at this point, to be a reasonable set of guidelines for mitigating some human threats. I’m looking forward to seeing what CAG offers and have no doubt there will be worthwhile and actionable controls in the document. That said, there are significant reasons to approach CAG with skepticism and assess it critically.

The motivation for CAG is described in a set of slides at the Gilligan Group site. It starts with a focus on what CIOs fear most: attacks, reduced operational capability, public criticism, data loss, etc. Then it rightly questions whether FISMA is adequately addressing those problems. It doesn’t, and this is the genesis of the CAG.

Consensus photo by Eirik Newth.

Unfortunately CAG subsequently develops by pairing this first valid premise with a set of false premises.  These propositions are drawn from slides at gilligangroupinc.com, attributed to John Gilligan or Alan Paller:

  1. All that matters are attacks. The central tenet of Bush’s Comprehensive National Cyber Initiative (CNCI) is adopted as the CAG theme: “Defense Must Be Informed by the Offense”. CAG envisions security as defense against penetration attacks. As any seasoned security practitioner knows, attacks are a limited subset of the threats to confidentiality, integrity and availability that information and information systems face.
  2. Security through obscurity. CAG seems to have taken the unspoken CNCI theme to heart too: “The most effective security is not exposed to public criticism.” Since its very public December 11th announcement, no drafts have been made publicly available for comment.
  3. False dichotomy. CAG has been promoted as an alternative to the OMB/NIST approach to FISMA. It isn’t. An alternative would target a fuller range of threats to information and information system security. CAG should be considered a complement to NIST guidance, an addendum of security controls focused on defense against penetration by hackers. NIST has even acted on this approach by incorporating some CAG controls into the 800-53 Rev. 3 catalog of controls.
  4. There is too much NIST guidance! This is the implication of one CAG slide that lists 1200 pages of guidance, 15 FIPS docs and the assorted Special Publications not related to FISMA as detriments to security. It’s like complaining that Wikipedia has too many articles to contribute to improved learning. Speaking as someone who scrambled to secure Federal systems before FISMA and NIST’s extensive guidance, having that documentation greatly improves my ability to efficiently and effectively secure systems.
  5. NIST guidance doesn’t tell me how to secure my systems! NIST’s FISMA guidance doesn’t step you through securing your SQL Server. The Chairman of the Joint Chiefs also doesn’t deliver your milk. Why not? It’s not their job. NIST’s FISMA guidance helps you to assess the risks to the system, decide how to secure it, secure it accordingly, check that a minimum of controls are in place and then accept responsibility for operating the system. NIST also provides documents, checklists, repositories, standards, working groups and validation of automated tools that help with the actual security implementation.
  6. Automated security controls negate human errors. Given the premise that all threats are attacks, this is nearly plausible. But not all security is technical. Not all threats come from the Internet. DHS, NIST, Mitre, and their partners have pursued automation to enforce and audit security controls for years, but automated security controls can only go so far. Human errors, glitches, unexpected conflicts and operational requirements will always factor into the implementation of security.
  7. Audit compatibility as a hallmark of good security. There is a conflict of focus at the heart of the CAG: it seeks to both improve its subset of security and improve audit compatibility. For technical controls this is somewhat achievable using automation, something NIST has pursued for years with government and industry partners. For operational and management controls it results in audit checklists. But audits are fundamentally concerned with testing the particular and repeatable, while security needs to focus on evaluating the whole to ensure the necessary results. An audit sees if antivirus software is installed; an evaluation sees if the antivirus software is effective.
  8. Metrics, but only these metrics over here. When selecting the current crop of CAG controls, decisions on what to include were reportedly based on metrics of the highest threats. Great idea: a quantitative approach often discovers counter-intuitive facts. Only the metrics were cherry-picked. Instead of looking at all realized threats or real threat impacts, only a count of common penetration attacks was considered.
  9. With a sample of 1. As a basis for determining what security should focus on the whole breadth of the security profession was queried, so long as they were penetration testers. Yes, penetration testers are some very smart and talented people but penetration testing is to security what HUMINT is to intelligence services. Important players, expert practitioners but limited in scope and best used in conjunction with other intelligence assets.
  10. Assessments rely on paper artifacts. The NIST guidance does not require paper artifacts. The first line in the NIST SP 800-53A preface is, “Security control assessments are not about checklists, simple pass-fail results, or generating paperwork to pass inspections or audits-rather, security controls assessments are the principal vehicle used to verify that the implementers and operators of information systems are meeting their stated security goals and objectives.” NIST SP 800-37 specifically and repeatedly states, “Security accreditation packages can be submitted in either paper or electronic format.”

CAG is a missed opportunity. By addressing the myriad problems with our current FISMA regime, a lot of good could be achieved. The problems with guidance have many causes but can be addressed through cooperative development of best practices outside of NIST. The Assessment Cases for SP 800-53A are an example of how cooperative development can achieve great results and provide clear guidance. Other problems exist and can be addressed with better training and community development.

My hope is that the Consensus Audit Guidelines will move towards a more open, collaborative development environment. The first release is sure to deliver useful security controls against penetration attacks. As with all good security practices it will likely need to go through a few iterations and lots of critical assessment to mature. An open environment would help foster a more complete consensus.

Consensus photo by mugley.




Posted in BSOFH, FISMA, Rants, Technical, What Doesn't Work, What Works | 9 Comments »

A Perspective on the History of Digital Forensics

Posted January 27th, 2009 by

Back in 1995, junior high school students around the world were taken in by a sensationalized and carefully marketed hoax film called Alien Autopsy. Alien Autopsy was in fact a cheap film purporting to be actual footage of an autopsy of the cadaver of an extraterrestrial. The film was marketed as footage shot during the famous 1947 Roswell incident.

Alien Autopsy photo by jurvetson.

Well, back in 1995 I was in a mood for a good laugh so I popped up some popcorn, chilled a six-pack of Mountain Dew and kicked up my feet for a little silly entertainment. A couple of friends came over just in time for the show. So, I popped more popcorn, chilled more drinks and we all had a great time giggling, guffawing, and generally acting like a bunch of nitwits having some good clean fun.

Then in 2005, my wife asked if I could sit down with her to watch something called Grey’s Anatomy. Thinking that I was about to relive a guilty pleasure from ten years before, I readily agreed. Let’s face it, a show called Grey’s Anatomy could only be a sequel to the 1995 Alien Autopsy.

Well, having been fooled, I shared my mistake and agony with the guys at work the next day. To say the least, they were amused at the story but entirely at my expense. Some mistakes in life should just never be mentioned again.

I hope that is not the case with today’s comments. Today, I’d like to encourage you all to download and read my paper on the History of Digital Forensics (.pdf caveat applies). It is based on a paper I presented at NIST’s annual digital forensics conference. However, since the slides from briefings do not read well, I converted the presentation to prose. Dissect it as you think appropriate. That is to say, let me know what you think.




Posted in NIST, Technical | 2 Comments »

Database Activity Monitoring for the Government

Posted November 11th, 2008 by

I’ve always wondered why I have yet to meet anyone in the Government using Database Activity Monitoring (DAM) solutions, and yet the Government has some of the largest, most sensitive databases around.  I’m going to try to lay out why I think it’s a great idea for Government to court the DAM vendors.

Volume of PII: The Government owns huge databases that are usually authoritative sources.  While the private sector laments the leaks of Social Security Numbers, let’s stop and think for a minute.  There is A database inside the Social Security Administration that holds everybody’s number and is THE database where SSNs are assigned.  DAM can help here by flagging queries that retrieve large sets of data.

Targeted Privacy Information:  Remember the news reports about people looking at the presidential candidates’ passport information?  Because of the depth of PII that the Government holds about any one individual, it provides a phenomenal opportunity for invasion of someone’s privacy.  DAM can help here by flagging VIPs and sending an alert anytime one of them is searched for. (DHS guys, there’s an opportunity for you to host the list under LoB)

Sensitive Information: Some Government databases come from classified sources.  If you were to look at all that information in aggregate, you could determine the classified version of events.  And then there are the classified databases themselves.  Think about Robert Hanssen attacking the Automated Case System at the FBI–a proper DAM implementation would have noticed the activity.  One interesting DAM rule here: queries where the user is also the subject of the query.

Financial Data:  The Government moves huge amounts of money, well into the trillions of dollars.  We’re not just talking internal purchasing controls; it’s usually programs where the Government buys something or… I dunno… “loans” $700B to the financial industry to stay solvent.  All that data is stored in databases.

HR Data:  Being one of the largest employers in the world, the Government is sitting on one of the largest repositories of employee data anywhere.  That’s in a database; DAM can help.
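To make the rules above concrete, here is a hedged sketch of the three DAM checks suggested in this post: bulk extraction, VIP lookups, and users querying themselves. The event fields, threshold, and watch list are all invented for illustration; a real DAM product has its own policy language:

```python
# Illustrative DAM rule engine.  The event schema and the alert names
# are hypothetical, not any vendor's API.

VIP_WATCHLIST = {"candidate_a", "candidate_b"}  # hypothetical list
LARGE_RESULT_THRESHOLD = 10_000

def dam_alerts(event):
    """event: {'user': str, 'subject': str, 'rows_returned': int}."""
    alerts = []
    if event["rows_returned"] >= LARGE_RESULT_THRESHOLD:
        alerts.append("bulk-extraction")   # someone dumping the table
    if event["subject"] in VIP_WATCHLIST:
        alerts.append("vip-lookup")        # passport-file snooping
    if event["user"] == event["subject"]:
        alerts.append("self-query")        # Hanssen checking on Hanssen
    return alerts
```

Each rule on its own is trivial; the value is in running them against every query across every database, which is exactly the aggregation problem the Government is positioned to care about.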

 

Guys, DAM in the Government just makes sense.

 

Problems with the Government adopting/using DAM solutions:

DAM not in catalog of controls: I’ve mentioned this before; it’s the double-edged nature of a catalog of controls that it’s hard to justify any kind of security that isn’t explicitly stated in the catalog.

Newness of DAM:  If it’s new, I can’t justify it to my management and my auditors.  This will get fixed in time, let the hype cycle run itself out.

Historical DAM Customer Base:  It’s the “Look, I’m not a friggin’ bank” problem again.  DAM vendors don’t actively pursue/understand Government clients–they’re usually looking for customers needing help with SOX and PCI-DSS controls.

 

 

London is in Our Database photo by Roger Lancefield.




Posted in Rants, Risk Management, Technical, What Works | 2 Comments »


