“Machines Don’t Cause Risk, People Do!”

Posted May 26th, 2010 by

A few weeks back I read an article on an apparent shift in emphasis in government security: “OMB outlines shift on FISMA.” Take a moment to give it a read. I’ll wait….

That was followed by NASA’s “bold move” to change the way they manage risk.

Once again, the over-emphasis and outright demagoguery on “compliance,” “FISMA reports,” “paper exercises,” and similar concepts that occupy our security-geek thoughts have not given way to enlightenment. (At least “compliancy” wasn’t mentioned…) I was saddened by a return to the “FISMA BAD” school of thought so often espoused by the luminaries at SANS. Now NASA has leapt from the heights… At the risk of bashing Alan Paller yet again, I am often turned off by the approach of “being able to know the status of every machine at every minute” – as if machines by themselves caused bad security… It’s way too tactical (and incorrect, IMHO), and it’s too easy a claim to make.

Hence the title of this rant – Machines don’t cause risk, people do!

The “people” I’m talking about are everyone from your agency director down to the lowliest sysadmin… The problem? They may not be properly educated or may lack the necessary skills for their position – another (excellent) point brought forth in the first article. Most importantly, even the most seasoned security veteran, operating without a strategic vision within a comprehensive security program (trained people, budget, organizational will, technology, and procedures) based upon the FISMA framework, will be doomed to failure. Likewise, having all the “toys” in the world means nothing without a skilled labor force to operate them and analyze their output. (“He who dies with the most toys is still dead.”) Organizations and agency heads that do not develop and support a comprehensive security program that incorporates the NIST Risk Management Framework, as well as the other facets listed above, will FAIL. This is nothing new or revolutionary, except I don’t think we’ve really *done* FISMA yet. As I and others have said many times, it’s not about the paper, or the cost per page – it’s about the repeatable processes — and knowledgeable people — behind what the paper describes.

I also note the somewhat disingenuous mention of the risk management program at the State Department in the second article… As if that were all State was doing! What needs to be noted here is that State has approached security in the proper way, IMHO — from a strategic, or enterprise, level. They have not thrown out the figurative baby with the bath water by dumping everything else in their security program in favor of the risk scoring system or some other bright, shiny object. I know first-hand, from having worked with many elements of the diplomatic security hierarchy at State, that these folks get it. They didn’t get to the current level of goodness in the program by decrying (dare I say whining about?) “paper.” They made the organizational commitment to providing contract vehicles for system owners to use to develop their security plans and document risk in Plans of Action and Milestones (POA&Ms). Then they provided the money to get it done. Is the State program a total “paragon of virtue”? Probably not, but the bottom line is that it’s an effective program.

Mammoth Strategy, Same as Last Year image by HikingArtist.com.

Desiring to know everything about everything may seem to some to be a worthy goal, but it may be beyond many organizations’ budgets. *Everything* is a point-in-time snapshot, no matter how many snapshots you take or how frequently you take them. Continuous, repeatable security processes followed by knowledgeable, responsible practitioners are what government needs. But you cannot develop these processes without starting from a larger, enterprise view. Successful organizations follow this–dare I say it–axiom whether discussing security governance or system administration.

Government agencies need to concentrate on developing agency-wide security strategies that encompass, but do not concentrate solely on, what patch is on what machine and what firewall has which policy. Likewise, system POA&Ms need to concentrate on higher-level strategic issues that affect agencies — things like changes to identity management schemes that will make working from home more practical and less risky for a larger percentage of the workforce, or perhaps a dashboard system that provides the status of system authorization for the agency at a glance. “Burying your head in a foxhole” — becoming too tactical — is akin to burying it in the sand, or like getting lost in a bunch of trees that look like a forest. When organizations behave this way, everything becomes a threat, so they spray their resource firepower on the “threat of the day” (or hour).

An organization shouldn’t worry about patching servers if its perimeter security is non-existent. Developing the larger picture, while letting some bullets strike you, may allow you to recognize threats and prioritize them, potentially letting you expend minimal resources to solve the largest problems. This is the approach my organization is following today. It’s a crawl-first, then walk, then run approach. It has enabled management to identify, segregate, and protect critical information and resources while giving decision-makers solid information to make informed, risk-based decisions. We’ll get to the patches, but not until we’ve learned to crawl. Strangely, we don’t spend a lot of time or other organizational resources on “paper drills” — we’re actively performing security tasks, strategic and tactical, that follow documented procedures, plans, and workflows! Oh yes, there is the issue of scale. Sorry, but I think over 250 sites in every country around the world, with over 62 different government customers, tops most enterprises, government or otherwise, but then this isn’t about me or my organization’s accomplishments.

In my view, professional security education means providing at least two formal paths for security professionals. The path that SANS instantiates is excellent for administrators – i.e., folks operating at the tactical level – and I believe we have those types of security practitioners in numbers. What we currently lack are sufficient seasoned professionals – inside government – who can approach security strategically, engaging agency management with plans that act both “globally” and “locally.” Folks like these exist in government, but they are few; many live in industry or the contractor space. Not even our intelligence community has a career path for security professionals! Government as a whole lacks a means to build competence in the security discipline. Somehow government agencies need to identify security up-and-comers within government and nurture them. What I’m calling for here is a government-sponsored internal mentorship program – having recognized winners in the security game mentor peers and subordinates.

Until we security practitioners can separate the hype from the facts, and can articulate those facts in terms management can understand and support, we will never get beyond the charlatans, headline grabbers, and other “self-licking ice cream cones.” Some might even look upon this new “bold initiative” by NASA as quitting a game they see as “too hard.” I seriously doubt they tried to approach the problem with anything other than an academic, research-oriented approach. It needed to be said. Perhaps if the organization taking the “bold steps” were one that had succeeded at implementing the NIST guidance, there might be more followers.

Perhaps it’s too hard because folks are merely staring at their organization’s navel and not looking at the larger picture?

Lastly, security needs to be approached strategically as well as tactically. As Sun Tzu said, “Tactics without strategy is the noise before defeat.”




Posted in FISMA, NIST, Public Policy, Rants, Risk Management, What Doesn't Work, What Works | 14 Comments »

Categories of Security Controls in Outsourcing

Posted May 25th, 2010 by

As I’m going through a wide variety of control frameworks in a managed services/cloud environment, I’m reminded of how controls work when you’re a service provider.  Mentally, I break them down into four “buckets”:

  • Controls that I provide to all customers as part of my baseline. In other words, these are things that I do for all of my customers because it’s either part of the way that I do business or it makes sense to do it once and scale it out to everybody.  Typically these are holistic information security program things (ISO 17799/27001/27002 or similar) matched up with my service-delivery architecture.
  • Controls that I provide as an add-on service. Not all of my customers need these, but I want to offer them to customers to help them with their security programs.  Usually these are services and products supporting a regulatory framework specific to one industry: PCI-DSS, FISMA, GLBA, etc., fit in here if my market is not exclusively customers governed by those regulations.  In order to keep the base cost low for the other customers, these aren’t included in the base service but are available for a price.
  • Controls that I am planning on building. I don’t have them yet, but they’re on my roadmap.  Sometimes this is how I get into new markets: I take the regulatory framework for a new market as a specification and build products and services that match up against it.
  • Controls that I will not provide. Maybe a control doesn’t apply to my products and services (the “we don’t actually own a Windows/HP-UX/AIX server” problem).  Maybe the controls framework didn’t scope my solutions into its assumptions.  Maybe the economics didn’t work out.  Maybe I don’t provide it because it would be dishonest to both me and you, my customer, for me to say that I do – think along the lines of accepting risk on your behalf, which puts me in a conflict of interest.  This is why any vendor who says they provide 100% compliancy against FooFramework is lying.

Transparency ties it all together.  The good providers will tell you upfront which controls belong in which buckets.
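To make the buckets concrete, here is a minimal sketch in Python of how a provider might tag its control catalog and disclose bucket assignments to a customer. The control names and bucket assignments below are made up for illustration and aren’t drawn from any real offering.

```python
from enum import Enum

class ControlBucket(Enum):
    BASELINE = "part of the baseline service for all customers"
    ADD_ON = "available as a priced add-on service"
    ROADMAP = "planned, but not built yet"
    NOT_PROVIDED = "not provided (out of scope or a conflict of interest)"

# Hypothetical catalog mapping controls to buckets.
CONTROL_CATALOG = {
    "ISO 27002-based security program": ControlBucket.BASELINE,
    "Centralized log monitoring": ControlBucket.BASELINE,
    "PCI-DSS reporting package": ControlBucket.ADD_ON,
    "FISMA documentation support": ControlBucket.ADD_ON,
    "Continuous-monitoring dashboard": ControlBucket.ROADMAP,
    "Accepting risk on the customer's behalf": ControlBucket.NOT_PROVIDED,
}

def disclose(control: str) -> str:
    """Tell the customer up front which bucket a control falls into."""
    bucket = CONTROL_CATALOG.get(control)
    if bucket is None:
        return f"{control}: not in the catalog -- ask before you sign."
    return f"{control}: {bucket.value}"

if __name__ == "__main__":
    for name in CONTROL_CATALOG:
        print(disclose(name))
```

The point of the sketch is the transparency rule: every control a customer asks about gets an answer from one of the four buckets, before the contract is signed.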

Tool Bucket photo by tornatore.




Posted in Outsourcing, What Works | 2 Comments »

Metricon is Coming to DC

Posted May 17th, 2010 by

This was announced a couple of weeks ago (at least 9000 days ago in Internet time), so by now it’s “old news,” but have a look at Metricon 5.0, which will be held in DC on the 10th of August.

It’s a small group (attendance is capped at 60), but if you’re managing security in Government, I want to encourage you to do 2 things:

  • Submit a paper!
  • Attend and learn.

I’ll be there doing a bit of hero-worship of my own with the Security Metrics folks.




Posted in Public Policy, Speaking | 1 Comment »

Professor Rybolov’s Guide to InfoSec and Public Policy Analysis

Posted May 17th, 2010 by

Having just finished our mini-semester class on InfoSec and Public Policy, I want to share with my old friend, the Interweblagosphere, a small process/framework for doing some analysis.  The end product can be a paper, legislation, or even a basic guideline for developing metrics.

  • Problemspace Definition: Give a point of view on a particular subject and why it is important.  Thinking more conventionally, what exactly is your thesis statement?
  • History of Incidents: prove that the problem is worth the time to solve.  Usually this involves identifying a handful of large-scale incidents that can serve as the models for your analysis.  Looking at these incidents, what worked and what didn’t? Start to form some opinions.  You will revisit these incidents later on as models.
  • Regulatory Bodies: the beginning of stakeholder definition.  Identify the responsible Government or industry-specific organizations and their history of dealing with the problem.  What strategic plans and reports already exist that you can use to feed your analysis?
  • Private Sector Support: more stakeholders.  How much responsibility does private industry have in this issue and what is the impact on them?  They can be owners (critical infrastructure), vendors (hardware, software, firmware), maintainers, etc.
  • Other Stakeholders: Consider end users – the people who depend on the service, which in turn depends on the IT and the information therein.
  • Trends and Metrics: what do we know about the topic, given published metrics or our analysis of themes across our key incidents?  If you notice a lack of metrics on the subject, what would be your “wish list,” and what plan do you have for getting those metrics?  For information security, this is typically a huge gap – either we’re creating metrics to show where we’ve succeeded at the tactical level, or we’re generating metrics with surveys, which are notoriously flawed.
  • Options and Alternatives Analysis: pros and cons, and what evidence suggests each option might succeed.  Take your model incidents and run your options through them: would they have helped in each scenario?  Gather up more incidents and see how the options would affect them (see the sketch after this list).  As you run through each option and scenario, consider each of the following:
    • Efficacy of the Option–does it actually solve the root cause of the problem?
    • Winner Stakeholders
    • Loser Stakeholders
    • Audit Burden
    • Direct Costs
    • Indirect Costs
  • Build Strategic Plan and Recommendations: Based on your analysis of the situation (model incidents, metrics, and power dynamics), build recommendations from the high-performing options and form them into a strategic plan.  The more specific you can get, the better.
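As a study aid only, here is one way (in Python) the options-and-alternatives step could be captured so that every option is scored against every model incident using the same criteria. The incidents, options, and field values below are entirely hypothetical; nothing in the framework requires this structure.

```python
from dataclasses import dataclass, field

@dataclass
class OptionAssessment:
    """One policy option scored against a single model incident."""
    option: str
    incident: str
    solves_root_cause: bool                      # efficacy of the option
    winner_stakeholders: list = field(default_factory=list)
    loser_stakeholders: list = field(default_factory=list)
    audit_burden: str = "unknown"                # e.g. low / medium / high
    direct_costs: str = "unknown"
    indirect_costs: str = "unknown"

def summarize(assessments: list) -> None:
    """Rough pros-and-cons rollup: how often each option addresses the root cause."""
    for option in sorted({a.option for a in assessments}):
        rows = [a for a in assessments if a.option == option]
        hits = sum(a.solves_root_cause for a in rows)
        print(f"{option}: addresses the root cause in {hits} of {len(rows)} model incidents")

if __name__ == "__main__":
    summarize([
        OptionAssessment("Mandatory breach notification", "Hypothetical data-broker breach",
                         solves_root_cause=False, winner_stakeholders=["end users"],
                         loser_stakeholders=["data brokers"], audit_burden="medium"),
        OptionAssessment("Minimum security baseline for brokers", "Hypothetical data-broker breach",
                         solves_root_cause=True, winner_stakeholders=["end users"],
                         loser_stakeholders=["data brokers"], audit_burden="high"),
    ])
```

The fields mirror the criteria in the sub-list above, which keeps the per-option, per-incident comparison honest when you get to the strategic plan.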

Note that, for the most part, these steps are not exclusive to information security; they apply to public policy analysis in general.  There are a couple of parts that need emphasis because of the immature nature of infosec.

Analysis of Hound Dog Behavior graph by MShades. Our analysis is a little bit more in-depth.  =)

Then the criteria for evaluating the strategic plan and the analysis leading up to it:

  • Has an opinion
  • Backs up the opinion by using facts
  • Has models that are used to test the options
  • Lays out a well-defined plan

As usual, I stand on the shoulders of giants; in this case, my Favorite Govie provided quite a bit of input in the form of our joint syllabus.




Posted in Public Policy, What Works | 2 Comments »

Privacy Camp DC–April 17th

Posted April 7th, 2010 by

Just a quick post to shill for Privacy Camp DC 2010 which will be taking place on the 17th of April in downtown DC.  I went last year and it was much fun.  The conversation ranged from recommendations for a rewrite of

The basic rundown of Privacy Camp is that it’s run like a Barcamp, where the attendees are also the organizers and presenters.  If you’re tired of going to death-by-powerpoint sessions, this is the place for you.  And it’s not just for government types; there is wide representation from non-profits and regular old commercial companies.

Anyway, what are you waiting for?  Go sign up now.




Posted in Odds-n-Sods, Public Policy, The Guerilla CISO | 1 Comment »

Building A Modern Security Policy For Social Media and Government

Posted December 13th, 2009 by

Here’s a small presentation that Dan Philpott and I put together for Potomac Forum about getting a sane social media policy out of your security staff. I also recommend reading something I put out a couple of months ago about Social Media Threats and Web 2.0.




Posted in FISMA, NIST, Outsourcing, Risk Management, Speaking | 4 Comments »
