DojoCon 2009 Presentation

Posted November 7th, 2009 by

For those of you who didn’t know the real purpose of DojoCon, it was to raise money and awareness for Hackers for Charity. If you like anything that is in this post, go to HFC and make a donation of time, equipment, tech support, and maybe money. If you’ve never heard of HFC because you’re not one of the “InfoSec Cool Kids”, now is your chance–go read about them.

Here's the video of my DojoCon presentation. The microphone was off for the first couple of minutes, but I look pretty animated.

And then the compliance panel that I tried not to dominate:

And finally, my slides are up on SlideShare:




Posted in FISMA, Speaking | 6 Comments »

Massively Scaled Security Solutions for Massively Scaled IT

Posted October 16th, 2009 by

My presentation slides from SecTor 2009.  This was a really fun conference; the Ontario people are really, really nice.

Presentation Abstract:

The US Federal Government is the world’s largest consumer of IT products and, by extension, one of the largest consumers of IT security products and services. This talk covers some of the problems with security on such a massive scale; how and why some technical, operational, and managerial solutions are working or not working; and how these lessons can be applied to smaller-scale security environments.




Posted in FISMA, NIST, Public Policy, Speaking, The Guerilla CISO, What Works | No Comments »

The Guerilla CISO Rants: Don’t Write a System Security Plan

Posted October 1st, 2009 by

OK, I know you’re shocked…I’m saying something controversial.  But hear me out on this one; I’ll explain.

Now, here’s my major beef with the way we write SSPs today: an SSP is almost entirely information that is already contained in other artifacts, and I have to pay people to cut-and-paste it into an SSP template.  As practiced, we have a serious problem with polyinstantiation: the same data lives in various lifecycle artifacts and gets cut-and-pasted into the SSP.  Every time you change an upstream document, you create a difference between that document and the SSP.

This is a practice I would like to change, but I can’t do it all by myself.

This is the skeleton outline of an SSP from NIST Special Publication 800-18, the guide to writing an SSP, along with where each piece of information already lives:

  1. Information System Name/Title–On the investment/FISMA inventory, the Exhibit 300/53, etc
  2. Information System Categorization–usually on a FIPS-199 memorandum
  3. Information System Owner–In an assignment memo
  4. Authorizing Official–In an assignment memo
  5. Other Designated Contacts–In an assignment memo
  6. Assignment of Security Responsibility–In assignment memos
  7. Information System Operational Status–On the investment/FISMA inventory, the Exhibit 300/53, etc
  8. Information System Type–On the investment/FISMA inventory, the Exhibit 300/53, etc
  9. General System Description/Purpose–In the design document, Exhibit 300/53
  10. System Environment–Common controls not inside the scope of our system
  11. System Interconnections/Information Sharing–from Interconnection Security Agreements
  12. Related Laws/Regulations/Policies–Should be part of the system categorization but hardly ever is on templates
  13. Minimum Security Controls–800-53 controls descriptions which can easily be done in a Requirements Traceability Matrix
  14. Information System Security Plan Completion Date–specific to each document
  15. Information System Security Plan Approval Date–specific to each document

Now some of this has changed in practice a little bit–#10 can functionally be replaced with a designation of common controls and hybrid controls.

So my line of thinking is that if we provide a 2-6-page system description with the names of the “guilty parties” and some inventory information, a controls-specific Requirements Traceability Matrix, and a System Design Document, then we have the functional equivalent of an SSP.

Why have I declared an InfoSec fatwah against SSPs as currently practiced?

Well, my philosophy for operation is based on some concepts I’ve picked up through the years:

  • Why run when you can walk, why walk when you can sit, why sit when you can lay down.  There is a time to spend effort on determining what the security controls are for a project.  You need to have them documented, but it’s not cost-effective to obsess over format, which we probably do too much of today.
  • Make it easy to do the right thing.  If we polyinstantiate security information, we have made something harder to maintain.  Easier to maintain means that it will get maintained instead of being shelfware.  I would rather have updated and accurate security information than overly verbose and well-polished documents that are inaccurate.
  • Security is not a “security guy thing”–most problems are actually management and project team problems.  My idea uses their SDLC artifacts instead of security-specific versions of artifacts.  My idea puts the project problems back in the project space, where they belong.
  • If I have a security engineer with a finite number of hours in a day, I have to choose what they spend their time on.  If it’s a choice between vulnerability mitigation, patching, etc., and correcting SSP grammar, I know what I want them to do.  Then again, I’m still an infantryman deep down inside and I realize I have biases against flowery writing.

Criticisms of not writing a dedicated SSP document:

“My auditors are used to seeing the information in the same format as someplace they worked previously.”  Believe it or not, I hear this quite a bit.  My response is along these lines: if you make what I’m suggesting your standard for a security plan, then you’ve met all of the FISMA and 800-53 requirements, plus my personal requirement to “not do stupid stuff if you can help it”.

“My auditors will grill me to death if they have to page back and forth between several documents.”  I’ve heard this one too.  There are a couple of ways to deal with it.  One is to reference the source document in your 800-53 Requirements Traceability Matrix, as sketched below.  Most auditors at this point bring up that you need to reference the official name, date of publication, and specific page/section of the reference, and I think they need to get a life because they’ve taken us back to the maintainability problem.
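
A rough sketch, in Python purely for illustration, of an RTM that points back at source artifacts instead of duplicating their content; the control designators are real 800-53 IDs, but the document and section names are made up:

```python
# Hypothetical sketch: Requirements Traceability Matrix entries that reference
# the authoritative source artifact rather than cut-and-pasting its content.
rtm = [
    {
        "control": "AU-2",  # Auditable Events
        "implementation": "Audit logging requirements",
        "source_document": "$FooSystem Design Document",
        "source_section": "4.3 Audit Logging",
    },
    {
        "control": "CA-3",  # System Interconnections
        "implementation": "Connection to $BarSystem",
        "source_document": "$FooSystem/$BarSystem Interconnection Security Agreement",
        "source_section": "Appendix A",
    },
]

# Render the matrix the way an auditor would read it.
for row in rtm:
    print(f"{row['control']}: {row['implementation']} "
          f"({row['source_document']}, {row['source_section']})")
```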

“This is all too new-school and I can’t get over it”. Then you are a dinosaur and your kind deserves extinction.  =)


This blog post is for grecs at novainfosecportal.com, who perked up instantly when I mentioned the concept months ago.  I finally got around to putting the text somewhere.

How to Plan the Perfect Dinner Party photo by kevindooley.




Posted in FISMA, NIST | 11 Comments »

OMB Wants a Direct Report

Posted August 28th, 2009 by

The big news in OMB’s M-09-29, FY 2009 Reporting Instructions for the Federal Information Security Management Act and Agency Privacy Management, is that instead of fiddling with document files, reporting will now be done directly through an online tool. This has been covered elsewhere, and it is the one big change since last year.  However, having less paper in the paperwork is not the only change.

Piles of Paper photo by °Florian.

So what will this tool be like? It is hard to tell at this point. Some information will be entered directly, but the system appears designed to accept uploads of some documents, such as those supporting M-07-16. Similar to the spreadsheets used for FY 2008, there will be separate questions for the Chief Information Officer, the Inspector General, and the Senior Agency Official for Privacy. Microagencies will still have abbreviated questions to fill out. Additional information on the automated tool, including full instructions and a beta version, will be available in August 2009.

Given that the required information has changed very little, the automated system is unlikely to significantly ease the reporting burden. It appears primarily designed to ease the data-processing requirements for OMB. With Excel spreadsheets no longer holding the data, many concerns relating to file versions, data aggregation, and analysis are greatly eased.

It is worth noting that a common outcome of systems re-engineered to become more efficient is that managers look for ways to utilize the new efficiency. What does this mean? Now that OMB can easily analyze data that previously took a great amount of effort to process, it may want to improve what is reported. A great deal has been said over the years about the inefficiencies in the current reporting regime. This may be OMB’s opportunity to start collecting more information that better reflects agencies’ actual security posture. This is pure speculation, and other factors, such as the reporting burden on agencies, may moderate OMB’s next steps, but it is worth consideration.

One pleasant outcome of the implementation of this new automated tool is that the reporting deadline has been pushed back to November 18, 2009.

Agencies are still responsible for submitting document files to satisfy M-07-16; the automated tool does not appear to allow direct input of this information. However, the document requirements are slightly different. The breach notification policy document need only be submitted if it has changed. It is no longer sufficient to simply report progress on eliminating SSNs and reducing PII; an implementation plan and a progress update must be submitted. The requirement for a policy document covering rules of behavior and consequences has been removed.

In addition to the automated tool there are other, more subtle changes to OMB’s FY 2009 reporting. Let’s step through them, point by point:

10. It is reiterated that NIST guidance is required. This point has been expanded to state that for legacy systems, agencies have one year to come into compliance with new material in NIST documents. For new systems, agencies are expected to be in compliance upon system deployment.

13 & 15. Wording indicating that disagreements on reports should be resolved prior to submission and that the agency head’s view will be authoritative have been removed. This may have been done to reduce redundancy as M-09-29’s preface indicates agency reports must reflect the agency head’s view.

52. The requirement for a central web page with working links to agency PIAs and Federal Register-published SORNs has been removed.

A complete side-by-side comparison of changes between the two documents is available at FISMApedia.org.

All in all, the changes to OMB’s guidance this year will not change agencies’ reporting burden significantly. And that may not be a bad thing.




Posted in FISMA, NIST, Public Policy | 1 Comment »

GAO’s 5 Steps to “Fix” FISMA

Posted July 2nd, 2009 by

Letter from GAO on how Congress can fix FISMA.  And oh yeah, the press coverage on it.

Now, supposedly this was in response to an inquiry from Congress along the lines of “Please comment on the need for improved cyber security relating to S.773, the proposed Cybersecurity Act of 2009.”  This is S.773.

GAO is mixing issues and has missed the mark on what Congress asked for.  S.773 is all about protecting critical infrastructure; it only rarely mentions the Government’s internal IT issues and has nothing at all to do with FISMA reform.  However, GAO doesn’t have much expertise in cybersecurity outside of the Federal agencies (they have some, but I would never call it extensive), so they reported on what they know.

The GAO report used the often-cited metric of cybersecurity attacks against Government IT systems growing from “5,503 incidents reported in fiscal year 2006 to 16,843 incidents in fiscal year 2008” as proof that the agencies are not doing anything to fix the problem.  I’ve questioned these figures before; they reflect the measurement problem and increased reporting requirements more than an actual increase in attacks.  Truth be told, nobody knows if the attacks are increasing and, if so, at what rate.  I would guess they’re increasing, but we don’t know, so quit citing some “whacked” metric as proof.

Reform photo by shevy.

GAO’s recommendations for FISMA Reform:

Clarify requirements for testing and evaluating security controls.  In other words, the auditing shall continue until the scores improve.  Hate to tell you this, but all you can really test at the national level is whether the FISMA framework is in place; the execution of the framework (and, by extension, whether an agency is secure or not) is largely untestable using any kind of framework.

Require agency heads to provide an assurance statement on the overall adequacy and effectiveness of the agency’s information security program.  This harkens back to the accounting roots of GAO.  Basically, what we’re talking about here is the agency head attesting that the agency has made the best effort it can to protect its IT.  I like part of this because part of what’s missing is “executive support” for IT security.  To be honest, though, most agency heads aren’t IT security dweebs; they would be signing an assurance statement based upon what their CIO/CISO put in the executive summary.

Enhance independent annual evaluations.  This has significant cost implications.  Besides, we’re getting more and more evaluations as time goes on, with a corresponding increase in audit burden.  In other words, in the Government IT security space, how much of your time is spent providing proof to auditors versus building security?  For some people, it’s their full-time job.

Strengthen annual reporting mechanisms.  More reporting.  I don’t think it needs to get strengthened; I think it needs to get “fixed”.  And by “fixed” I mean real metrics.  I’ve touched on this at least a hundred times, go check out some of it….

Strengthen OMB oversight of agency information security programs.  This one gives me brain-hurt.  OMB has exactly the amount of oversight that it needs to do its job.  Just like with more auditing: if you increase the oversight while the people doing the execution have the same number of people, the same amount of funding, and the same types of skills, do you really expect them to perform differently?

Rybolov’s synopsis:

When the only tool you have is a hammer, every problem looks like a nail, and I think that’s what GAO is doing here.  Since performance in IT security is obviously down, they suggest that more auditing and oversight will help.  But then again, at what point does the audit burden tip to the point where nobody is really doing any work at all except answering audit requests?

Going back to what Congress really asked for, we run up against a problem.  There isn’t a huge set of information about how the rest of the nation is doing with cybersecurity.  There’s the Verizon DBIR, the DataLossDB, some surveys, and that’s about it.

So really, when you ask GAO to find out what the national cybersecurity situation is, all you’re going to get is a bunch of information about how government IT systems line up and maybe some anecdotes about critical infrastructure.

Coming to a blog near you (hopefully soon): Rybolov’s 5 steps to “fix” FISMA.




Posted in FISMA | 2 Comments »

Your Security “Requirements” are Teh Suxxorz

Posted July 1st, 2009 by

Face it, your security requirements suck, and I’ll tell you why.  You write down controls verbatim from your catalog of controls (800-53, SOX, PCI, 27001, etc.), put them into a contract, and wonder why, when it comes time for security testing, we just aren’t talking the same language.  Even worse, you put in the cr*ptastic “Contractor shall be compliant with FISMA and all applicable NIST standards”.  Yes, this happens more often than I could ever care to count, and I’ve seen it from both sides.

The problem with quoting back the “requirements” from a catalog of controls is that they’re not really requirements, they’re control objectives–abstract representations of what you need in order to protect your data, IT system, or business.  It’s a bit like brain surgery using a hammer and chisel–yes, it might work out for you, but I don’t really feel comfortable doing it or being on the receiving end.

And this is my beef with the way we manage security controls nowadays.  They’re not requirements; functionally, they’re a high-level needs statement or even a security concept of operations.  Security controls need to be tailored into real requirements that are buildable, testable, measurable, and achievable.

Requirements photo by yummiec00kies.  There’s a social commentary in there about “Single, slim, and pleasant looking” but even I’m afraid to touch that one. =)

Did you say “Wrecks and Female Pigs”? In the contracting world, we have two vehicles that we use primarily for security controls: Statements of Work (SOW) and Engineering Requirements.

  • Statements of Work follow along the lines of activities performed by people.  For instance, “contractor shall perform monthly 100% vulnerability scanning of the $FooProject.”
  • Engineering Requirements are exactly what you want to have built.  For instance, “Prior to displaying the login screen, the application shall display the approved Generic Government Agency warning banner as shown below…”

Let’s have a quick exercise, shall we?

What 800-53 says: The information system produces audit records that contain sufficient information to, at a minimum, establish what type of event occurred, when (date and time) the event occurred, where the event occurred, the source of the event, the outcome (success or failure) of the event, and the identity of any user/subject associated with the event.

How it gets translated into a contract: Since it’s more along the lines of a security functional requirement (i.e., it’s specific functionality, not a task we want people to do), we break it out into multiple requirements:

The $BarApplication shall produce audit records with the following content:

  • Event description such as the following:
    • Access the $Baz subsystem
    • Mounting external hard drive
    • Connecting to database
    • User entered administrator mode
  • Date/time stamp in ‘YYYY-MM-DD HH:MM:SS’ format;
  • Hostname where the event occurred;
  • Process name or program that generated the event;
  • Outcome of the event as one of the following: success, warn, or fail; and
  • Username and UserID that generated the event.
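
Here’s a minimal sketch, mine rather than anything out of 800-53, of how $BarApplication could emit a record carrying every field above using Python’s standard logging module; the event strings are placeholders:

```python
# Sketch only: one audit record per call, with the fields the requirement names.
import getpass
import logging
import os
import socket
from datetime import datetime

audit_log = logging.getLogger("audit")
audit_log.addHandler(logging.StreamHandler())
audit_log.setLevel(logging.INFO)

def audit(event: str, outcome: str) -> None:
    """Write one audit record: event, date/time, hostname, process, outcome, user."""
    if outcome not in ("success", "warn", "fail"):
        raise ValueError("outcome must be success, warn, or fail")
    fields = [
        event,                                           # event description
        datetime.now().strftime("%Y-%m-%d %H:%M:%S"),    # date/time stamp
        socket.gethostname(),                            # hostname where the event occurred
        f"{os.path.basename(__file__)}[{os.getpid()}]",  # process/program that generated it
        outcome,                                         # success, warn, or fail
        getpass.getuser(),                               # username (a real system would log the user ID too)
    ]
    audit_log.info(" | ".join(fields))

audit("User entered administrator mode", "success")
```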

For a COTS product (e.g., Windows Server 2003, Cisco IOS), when it comes to logging, I get what I get, which means I don’t write a logging requirement unless I’m the one designing the engineering requirements for Windows itself.

What 800-53 says: The organization configures the information system to provide only essential capabilities and specifically prohibits and/or restricts the use of the following functions, ports, protocols, and/or services: [Assignment: organization-defined list of prohibited and/or restricted functions, ports, protocols, and/or services].

How it gets translated into a contract: Since it’s more along the lines of a security functional requirement, we break it out into multiple requirements:

The $Barsystem shall have the software firewall turned on and only the following traffic shall be allowed:

  • TCP port 443 to the command server
  • UDP port 123 to the time server at this address
  • etc…..
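
A quick sketch, assuming made-up destination host names, of that allowlist expressed as data that both the builders and the testers can work from; anything not on the list is denied:

```python
# Sketch only: the allowed-traffic requirement as data. The destination
# host names are placeholders, not from the original requirement.
ALLOWED_OUTBOUND = [
    {"proto": "tcp", "port": 443, "dest": "command.example.gov"},  # command server
    {"proto": "udp", "port": 123, "dest": "time.example.gov"},     # time server (NTP)
]

def is_allowed(proto: str, port: int, dest: str) -> bool:
    """Check a proposed connection against the documented allowlist."""
    return any(rule["proto"] == proto and rule["port"] == port and rule["dest"] == dest
               for rule in ALLOWED_OUTBOUND)

assert is_allowed("tcp", 443, "command.example.gov")
assert not is_allowed("tcp", 22, "random.example.com")  # everything else is denied
```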

If we drop the system into a pre-existing infrastructure, we don’t need firewall rules per se as part of the requirements; what we do need is a SOW along the following lines:

The system shall use our approved process for firewall change control, see a copy here…

So what’s missing, and how do we fix the sorry state of requirements?

This is the interesting part, and right now I’m not sure if we can, given the state of the industry and the infosec labor shortage:  we need security engineers who understand engineering requirements and project management in addition to vulnerability management.

Don’t abandon hope yet, let’s look at some things that can help….

Security requirements are a “best effort” proposition.  By this, I mean that we have our requirements and they don’t fit in all cases, so we throw them out there, and if you can’t meet a requirement, we waive it (live with it, hope for the best) or apply a compensating control (shield it from bad things happening).  This is unnerving because what we end up doing is arguing all the time over whether the requirements as written need to be done or not.  This drives the engineers nuts.

It’s a significant amount of work to translate control objectives into requirements.  The easiest, fastest way to fix the “controls view” of a project is to scope out things that are provided by infrastructure or by policies and procedures at the enterprise level.  Hmmm, sounds like explicitly stating what our shared/common controls are.

You can manage controls by exclusion or inclusion (a quick sketch follows the list):

  • Inclusion:  We have a “default null” for controls and we will explicitly say in the requirements what controls you do need.  This works for small projects like standing up a pair of webservers in an existing infrastructure.
  • Exclusion:  We give you the entire catalog of controls and then tell you which ones don’t apply to you.  This works best with large projects such as the outsourcing of an entire IT department.
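
A rough sketch of the two approaches as set operations; the handful of control IDs is an arbitrary subset of 800-53 chosen only for illustration:

```python
# Illustration only: managing controls by inclusion vs. exclusion.
FULL_CATALOG = {"AC-2", "AU-2", "CM-7", "CP-9", "IR-4", "PE-3", "SC-7"}

# Inclusion ("default null"): name only the controls the project must provide.
included = {"AU-2", "SC-7"}                # e.g. a pair of webservers in an existing shop

# Exclusion: hand over the whole catalog and name what does not apply.
excluded = {"PE-3"}                        # e.g. physical security is a common control
by_exclusion = FULL_CATALOG - excluded     # e.g. outsourcing an entire IT department

print(sorted(included))
print(sorted(by_exclusion))
```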

We need a reference implementation per technology.  Let’s face it, how many times have I taken the 800-53 controls and broken them down into controls relevant for a desktop OS?  At least five in the last three years.  The way you really need to do this is to have a hardening guide that serves as the authoritative set of requirements for that technology.  It makes life simple.  Not that I’m saying deviate from doctrine and skip the 800-53 controls and 800-53A test procedures, but that’s the point of having a hardening guide: it’s really just a set of tailored controls specific to a certain technology type.  The work has been done for you; quit trying to re-engineer the wheel.

Use a Joint Responsibilities Matrix.  Basically, this breaks the catalog of controls down into the following columns (a few example rows follow the list):

  • Control Designator
  • Control Title
  • Provided by the Government/Infrastructure/Common Control
  • Provided by the Contractor/Project Team/Engineer



Posted in BSOFH, Outsourcing, Technical | 3 Comments »
