A Funny Thing Happened Last Week on Capitol Hill

Posted April 1st, 2010 by

Well, several funny things happened; they happen every week.  But specifically I’m talking about the hearing before the House Committee on Homeland Security on FISMA reform–Federal Information Security: Current Challenges and Future Policy Considerations.  If you’re in information security and Government, you need to go read through the prepared statements and even watch the hearing.

Also referenced is H.R. 4900, which was introduced by Representative Watson as a modification to FISMA.  I also recommend that you have a look at it.

Now for my comments and rebuttals to the testimony:

  • On the cost per sheet of FISMA compliance paper: If you buy into the State Department’s cost of $1700 per sheet, you’re absolutely daft.  The cost of a security program divided by the total number of sheets of paper is probably right.  In fact, if you do the security bits right, your cost per sheet will go up considerably because you’re doing much more security work while the volume of paperwork is reduced.
  • Allocating budget for red teams: Do we really need penetration testing to prove that we have problems?  In Mike Smith’s world, we’re just not there yet, and proving that we’re not there is just an excuse to throw the InfoSec practitioners under the bus when they’re not the people who created the situation in the first place.
  • Gus Guissanie: This guy is awesome and knows his stuff.  No, really, the guy is sharp.
  • State Department Scanning: Hey, it almost seems like NIST has this in 800-53.  Oh wait, they do, only it’s given the same precedence as everything else.  More on this later.
  • Technical Continuous Monitoring Tools: Does anybody else think that using products of FISMA (SCAP, CVE, CVSS) as evidence that FISMA is failing is a bit like dividing by zero?  We really have to be careful of this or we’ll destroy the universe.
  • Number of Detected Attacks and Incidents as a Metric: Um, this always gets a “WTF?” from me.  Is the number increasing because we’re monitoring better, because we’re counting a whole bunch of small events as attacks (i.e., IDS flagged on something), or because the number of attacks is really increasing?  I asked this almost 2 years ago and nobody has answered it yet.
  • The Limitations of GAO: GAO are just auditors.  Really, they depend on the agencies to not misrepresent facts and to give them an understanding of how their environment works.  Auditing and independent assessment is not the answer here because it’s not a fraud problem, it’s a resources and workforce development problem.
  • OMB Metrics: I hardly ever talk bad about OMB, but their metrics suck.  Can you guys give me a call and I’ll give you some pointers?  Or rather, check out what I’ve already said about federated patch and vulnerability management then give me a call.
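On the cost-per-sheet point above, a toy calculation (all figures invented) shows why dividing program cost by page count is a backwards metric: shrinking the paperwork makes a better-run program look more expensive per sheet.

```python
# Toy illustration with invented numbers: "cost per sheet" rewards
# producing more paper, not more security.
def cost_per_sheet(program_cost, sheets):
    """Divide total security program cost by pages of compliance paperwork."""
    return program_cost / sheets

# Same budget; the second program trims the paperwork and spends the
# effort on actual security work instead.
paper_heavy = cost_per_sheet(1_700_000, 1000)  # 1700.0 per sheet
paper_light = cost_per_sheet(1_700_000, 200)   # 8500.0 per sheet
```

By this metric, the program drowning in paper looks five times “cheaper,” which is exactly the point.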

So now for Rybolov’s plan to fix FISMA:

  1. You have to start with workforce management. This has been addressed numerous times and has a couple of different manifestations: DoDI 8570.10, contract clauses for levels of experience, role-based training, etc.  Until you have an adequate supply of clueful people to match the demand, you will continue to get subpar performance.
  2. More testing will not help, it’s about execution. In the current culture, we believe that the more testing we do, the more likely the people being tested will be able to execute.  This is highly wrong and I’ve commented on it before.  I think that if it were really a matter of people being lazy or fraudulent then we would have fixed it by now.  My theory is that the problem is that we have too many wonks who know the law but not the tech and not enough techs who know the law.  In order to do the job, you need both.  This is also where I deviate from the SANS/20 Critical Security Controls approach and the IGs that love it.
  3. Fix Plans of Actions and Milestones. These are supposed to be long-term/strategic problems, not the short-term/tactical application of patches–the tactical stuff should be automated.  The reasoning is that you use these plans for budget requests for the following years.
  4. Fix the budget train. Right now the people with the budget (programs) are not the people running the IT and the security of it (CIO/CISO).  I don’t know if the answer here is a larger dedicated budget for CISO’s staff or a larger “CISO Tax” on all program budgets.  I could really policy-geek out on you here, just take my word for it that the people with the money are not the people protecting information and until you account for that, you will always have a problem.

Sights Around Capitol Hill: Twice Sold Tales photo by brewbooks. Somehow seems fitting, I’ll let you figure out if there’s a connection. =)




Posted in FISMA, Public Policy, Rants, Risk Management | 7 Comments »

Opportunity Costs and the 20 Critical Security Controls

Posted January 13th, 2010 by

This is a multi-post series.  You are at post 1.  Read post 2 here.  Read post 3 here.

This post begins with me.  For the past hour or so I’ve been working on a control-by-control objective analysis of SANS 20 Critical Security Controls.  This is a blog post I’ve had sitting “in the hopper” for 9 months or so.  And to be honest, I see some good things in the 20 CSC literature.  I think that, from a holistic perspective, the 20 CSC is an attempt at creating a prioritization of this huge list of stuff that I have to do as an information security officer–something that’s really needed.  I go into 20 CSC with a very open mind.

Then I start reading the individual controls.  I’m a big believer in Bottom-Line-Up-Front, so let me get my opinion out there now: 20 Critical Security Controls is crap.  I’m sorry John G and Eric C.  Not only is 20 CSC bad from a perspective of controls, metrics, and auditing tests, but if it’s implemented across the Government, it will be the downfall of security programs.  I really believe this.

Now on to the rationale….

Opportunity Costs. I can’t get that phrase out of my head.  And I’m not talking money just yet–I’m talking time.  See, I’m an IT security guy working for a contractor supporting a Government agency–just like 75% of the people out there.  I have a whole bunch of things to do–both in the NIST guidelines and organizational policy.  If you add anything else to the stack without taking anything away, all you’ve done is dilute my efforts.  And that’s why I can’t support 20 CSC–they’re an unofficial standard that does not achieve its stated primary goal of reducing the amount of work that I have to do.  I know they wanted to create a parallel standard focusing on technical controls, but you have to have one official standard, because if it’s not official, I don’t have to do it and it’s not really a standard anymore, is it?

Scoping Problems. We really have 2 tiers inside of an agency that we need to look at: the Enterprise and the various components that depend on the Enterprise.  Let’s call them… general support systems and major applications.  Now the problem here is that when you make a catalog of controls, some controls are more applicable to one tier than the other.  With 20 CSC, you commit the classic blunder of trying to reinvent the wheel for every small system that comes along.

Threat Capabilities != Controls. And this is maybe the secret why compliance doesn’t work like we think it will.  In a nice theoretical world, it’s a threat-vulnerability-countermeasure coupling and the catalog of controls accounts for all likely threats and vulnerabilities.  Well, it doesn’t work that way:  it’s not a 1-to-1 ratio.  Typical security management frameworks start from a regulatory perspective and work their way down to technical details while what we really want to do is to build controls based on the countermeasures that we need.  So yeah, 20 CSC has the right idea here, the problem is that it’s a set of controls created by people who don’t believe in controls–the authors have the threat and vulnerability piece down and some of the countermeasures but they don’t understand how to translate that into controls to give to implementers and their auditors.  The 20 CSC guys are smart, don’t get me wrong, but I can’t help but get the feeling that they don’t understand how the “rest of the world” is getting their job done out there in the Enterprise.

The Mapping is Weak. There is a traceability matrix in the 20 CSC to map each control back to NIST controls.  It’s really bad, mostly because the context of 800-53 controls doesn’t extend into 20 CSC.  I have serious heartburn with how this is presented inside the agencies because we’re not really doing audits using the 20 CSC, we’re using the mapping of NIST controls with a weird subtext and it’s a “voluntary assessment” not an audit.

Guidelines?!?!?! This is basic stuff.  If it’s something you audit against, it *HAS* to be a standard.  Guidelines are recommendations and can add in more technique and education.  Standards are like hard requirements: they only work if they’re narrowly-scoped, achievable, and testable.  This isn’t specific to 20 CSC; the NIST Risk Management Framework (intended to be a set of guidelines also) suffers from this problem, too.  However, if your intent is to design a technical security and auditing standard, you need to write it like a standard.  While I’m up on a soapbox, for the love of $Deity, quit calling security controls “requirements”.

Auditor Limitations. Let’s face it, how do I get an auditor to add an unmanaged device to the network and know whether we’ve detected it or not?  This is a classic mistake in the controls world:  assuming that we have enough people with the correct skillsets who can conduct intrusive technical tests without the collusion of my IT staff.

And the real reason why I dislike the 20 Critical Security Controls:

Introduction of “Audit Requirements”. One of the chief criticisms of the NIST Risk Management Framework is that the controls are not specific enough.  20 CSC falls into this trap of nonspecificity (Controls 7, 8, 9, and 15, I’m talking to you) and is not official guidance–a combination that means that my auditor has just added requirements to my workload simply because of how they interpret the control.  This is very dangerous and why I believe 20 CSC will be the end of IT security as we know it.

In future posts (I had to break this into multiple segments):

  • Control-by-control analysis
  • What 20 CSC got right (Hey, some of it is good, just not for the reasons that it’s supposed to be good)

SA-2 “Guideline” photo by cliff1066™.




Posted in FISMA, NIST, Rants, Risk Management, Technical | 4 Comments »

Building A Modern Security Policy For Social Media and Government

Posted December 13th, 2009 by

A small presentation Dan Philpott and I put together for Potomac Forum about getting sane social media policy out of your security staff. I also recommend reading something I put out a couple of months ago about Social Media Threats and Web 2.0.




Posted in FISMA, NIST, Outsourcing, Risk Management, Speaking | 4 Comments »

Web 2.0 and Social Media Threats for Government

Posted September 30th, 2009 by

So most of the security world is familiar with the Web 2.0 and Social Media threats in the private sector.  Today we’re going to have an exposé on the threats specific to Government, because I don’t feel that they’ve been adequately represented in this whole push for Government 2.0 and transparency.

Threat: Evil Twin Agency Attack. A person registers on a social media site using the name of a Government entity.  They then represent that entity to the public and say whatever it is that they want that agency to say.

What’s the Big Deal: For the most part there is no way to prove the authenticity of Government entities on social media sites short of a “catch us on <social media site>” tag on their .gov homepage.  This isn’t an attack unique to Government, but the authority that people give to Government Internet presences means that the attacker gains perceived legitimacy.

Countermeasures: Monitoring by the agencies looking for their official and unofficial presences on Social Media and Web 2.0 sites.  Any new registrations on social media are vetted for authenticity through the agency’s public affairs office.  Agencies should have an official presence on social media to reserve their namespace and put these account names on their official website.


Threat: Web Hoax. A non-government person sets up their own social media or website and claims to be the Government.

What’s the Big Deal: This is similar to the evil twin attack, only maybe of a different scale.  For example, an entire social media site can be set up pretending to be a Government agency doing social networking and collecting data on citizens or asking citizens to do things on behalf of the Government.  There is also a thin line between parody and impersonation.

Countermeasures: Monitoring of URLs that claim to be Government-owned.  This is easily done with some Google advanced operators and some RSS fun.
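A minimal sketch of that approach, assuming you build an alert feed from a search query like “Department of Examples” -site:example.gov (the agency name, domain, and feed contents here are all invented): parse the RSS and flag any hit hosted off the agency’s official domain.

```python
# Sketch: scan a search-alert RSS feed for sites using an agency's name
# that are not hosted on its official domain. OFFICIAL_DOMAIN is an
# invented placeholder, not a real agency domain.
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "example.gov"

def suspicious_links(rss_text, official_domain=OFFICIAL_DOMAIN):
    """Return feed links not on the official domain (hoax-site candidates)."""
    hits = []
    for item in ET.fromstring(rss_text).iter("item"):
        link = item.findtext("link", default="")
        host = urlparse(link).hostname or ""
        # Accept the apex domain and its subdomains; flag everything else.
        if host != official_domain and not host.endswith("." + official_domain):
            hits.append(link)
    return hits
```

Anything flagged goes into the same vetting-through-public-affairs process used for new registrations.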


Threat: Privacy Violations on Forums. A Government-operated social media site collects Personally Identifiable Information about visitors when they register to participate in forums, blog comments, etc.

What’s the Big Deal: If you’re a Government agency and you’re going to be collecting PII, you need to do a Privacy Impact Assessment, which is overkill if you’re just collecting names and email addresses that could be false anyway.  However, the PIA is a lengthy process and utterly destroys the quickness of web development as we know it.

Countermeasures: It has been proposed in some circles that Government social media sites use third-party ID providers such as OpenID to authenticate simple commenters and forum posters.  This isn’t an original idea; Noel Dickover has been asking around about it for at least 9 months that I know of.


Threat: Monitoring v/s Law Enforcement v/s Intelligence Collection. The Government has to be careful about monitoring social media sites.  Depending on which agency is doing it, at some point you collect enough information from enough sources that you’re now monitoring US persons.

What’s the Big Deal: If you’re collecting information and doing traffic analysis on people, you’re most likely running up against wiretap laws and/or FISA.

Countermeasures: Government needs Rules of Engagement for creating 2-way dialog with citizens complete with standards for the following practices:

  • RSS feed aggregation for primary and secondary purposes
  • RSS feed republishing
  • Social networking monitoring for evil twin and hoax site attacks
  • Typical “Web 2.0 Marketing” tactics such as group analysis


Threat: Hacked?  Not Us! The Government does weird stuff with web sites.  My web browser always carps at the government-issued SSL certificates because they use their own certificate authority.

What’s the Big Deal: Even though I know a Government site is legitimate, I still get alert popups.  Being hacked with an XSS or other attack has much more weight than for other sites because people expect to get weird errors from Government sites and just click through.  Also, the sheer volume of traffic on Government websites means that they are a lucrative target if the attacker’s end goal is to infect desktops.

Countermeasures: The standard web server anti-XSS and other web application security stuff works here.  Another happy thing would be to get the Federal CA certificate embedded in web browsers by default like Thawte and VeriSign.
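The “standard stuff” boils down to two habits: encode untrusted output before it hits the page, and send defensive response headers. A framework-agnostic sketch (this header set is a common baseline, not an official mandate):

```python
# Sketch of baseline anti-XSS hygiene: output encoding plus defensive
# response headers. Toy code, not tied to any particular web framework.
import html

SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",  # no third-party script by default
    "X-Content-Type-Options": "nosniff",              # stop MIME-type guessing
    "X-Frame-Options": "DENY",                        # no framing/clickjacking
}

def render_comment(user_input):
    """Escape user-supplied text before embedding it in HTML."""
    return "<p>" + html.escape(user_input) + "</p>"
```

With output encoding in place, an injected `<script>` tag renders as harmless text instead of executing.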


Threat: Oh Hai I Reset Your Password For You AKA “The Sarah Palin Attack”.  The password reset functions in social media sites only work if you’re not a public figure.  Once the details of your life become scrutinized, your pet’s name, mother’s maiden name, etc, all become public knowledge.

What’s the Big Deal: It depends on what kind of data you have in the social media site.  This can range anywhere from the attacker getting access to one social media site that they get lucky with to complete pwnage of your VIP’s online accounts.

Countermeasures: Engagement with the social media site to get special considerations for Government VIPS.  Use of organizational accounts v/s personal accounts on social media sites.  Information poisoning on password reset questions for VIPs–don’t put the real data up there.  =)


Transparency in Action photo by Jeff Belmonte.




Posted in Risk Management | 2 Comments »

Risk Management and Crazy People, a Script Using Stock Characters

Posted September 10th, 2009 by

Our BSOFH meets a Crazy Homeless Guy on the street just outside the Pentagon City metro station.

Crazy Homeless Guy: (walks up to BSOFH) Can I ask you a question?

BSOFH: (Somewhat startled, nobody really talks to him unless they’re trying to sell him something) Uhhhh, sure.

Crazy Homeless Guy: You know that there are people who claim to be able to say… take that truck over there and just by moving their finger make it fly into the Washington Monument.  Don’t you think that this is a threat to national security?

BSOFH: (Realizes that Crazy Homeless Guy is crazy and homeless) Not necessarily, you see.  I would definitely classify it as a threat.  However, when you’re looking at threats from people, you have to look at motive, opportunity, and means.  Until you have all three, it’s more of an unrealized threat.

Crazy Homeless Guy: But what if these same guys could kill the President the same way, isn’t that a national threat?

BSOFH: Um, could be.  But then again, let’s look at a similar analogy:  firearm ownership.  Millions of people safely own weapons and yet there isn’t this huge upswell to shoot the President now is there?  Really, we have laws against shooting people and when somebody does that, we find them and put them in jail or *something*.  We don’t criminalize the threat, we criminalize the action.  Flicking a finger doesn’t kill people, psycho people kill people.

Crazy Homeless Guy: Or even if these same people could use the same amount of effort to kill everybody on the planet.  You know the <censored, I don’t like being sued by cults> people claim to have this ability.

BSOFH: (Jokingly, realizing that somebody has been taking 4chan too seriously) Well, I wouldn’t care too much because I would be… well, dead.  But yes, possibly.  But then again, since the dawn of the nuclear age and all through the Cold War we’ve had similar threats from people with capabilities created by technology instead of word study and the power of the human mind.  You have to look at these things from a risk standpoint.  While yes, these people have the capability to do something of high impact such as kill every human on the face of the earth, the track record of something like this happening is relatively small.  I mean, is there any historical record of a <censored, I don’t like being sued by cults> actually killing anybody through sheer force of their mind?  In other words, this is a very high impact, low probability event–something some people call a black swan event.  While yes, it is a matter of national security that these people potentially have this capability, we only have so many resources to protect things and we have our hands full dealing with risks that actually have occurred in recent history.  In other words, risk management would say that this event you’re speaking of is an acceptable risk because of more pressing risks.

Crazy Homeless Guy: (Obviously beaten into oblivion by somebody crazier than himself) Well, I’ve never thought about it that way.  I’m really scared by these people.  Hold me, BSOFH.

BSOFH: Um, how about no?  You’re a Crazy Homeless Guy after all.  I have to get back to work now.  Come hang out sometime if you want to talk some quantitative risk analysis and we’ll start attaching dollar figures to the risks of <censored, I don’t like being sued by cults> killing all of humanity.  Doesn’t that sound like fun?  If we can get you cleared to get into the building, we can have a couple of whiteboarding sessions to determine the process flow and maybe an 800-30-stylie risk assessment just to present our case to the DHS Psychic Warfare Division.

Crazy Homeless Guy: Uh, I gotta find a better corner to stand on.  Maybe over by 16th and Pennsylvania I can find somebody more sympathetic to my cause.

BSOFH: You’re crazy, man!

Crazy Homeless Guy: You’re crazy, too, man!

And the moral of the story is that no matter how crazy you think you are, somebody else will always show up to prove you wrong.  And yeah, black swan events where we all die are dumb to prepare for because we’ll all be dead–near total fatalities only matter if you’re one of the survivors.
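The BSOFH’s argument can be put in (invented) numbers with a classic annualized loss expectancy comparison: ALE is single loss expectancy times annual rate of occurrence, so a zero observed rate zeroes out even an apocalyptic impact.

```python
# Toy annualized loss expectancy (ALE = SLE x ARO). All figures are
# invented for illustration, per the BSOFH's whiteboarding offer.
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

psychic_apocalypse = ale(10**15, 0.0)  # astronomical impact, no observed occurrences
stolen_laptop = ale(50_000, 2.0)       # mundane impact, happens twice a year
```

Which is why the stolen laptops get the budget and the psychic apocalypse gets accepted as a risk.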

This story is dedicated to Alex H, David M, and some guy named Bayes.

OMG It’s a Psychic Black Swan photo by gnuckx cc0.




Posted in BSOFH, Risk Management, The Guerilla CISO | 5 Comments »

Working with Interpreters, a Risk Manager’s Guide

Posted June 3rd, 2009 by

So how does the Guerilla-CISO staff communicate with the locals on jaunts to foreign lands such as Delaware, New Jersey, and Afghanistan?  The answer is simple: we use interpreters, known in infantrese as “terps”.  Yes, you might not trust them deep down inside because they harbor loyalties so complex that you could spend the rest of your life figuring them out, but you can’t do the job without them.

But in remembering how we used our interpreters, I’m reminded of some basic concepts that might be transferable to the IT security and risk management world.  Or maybe not, at least kick back and enjoy the storytelling while it’s free. =)

Know When to Treat Them Like Mushrooms: And by that, we mean “keep them in the dark and feed them bullsh*t”.  What we really mean is to tell potentially adversarial people that you’re working with the least amount of information that they need to do their job, in order to limit the frequency and impact of them doing something nasty.  When you’re planning a patrol, the worst way to ruin your week is to tell the terps when you’re leaving and where you’re going.  That way, they can call their Taliban friends when you’re not looking and they’ll have a surprise waiting for you.  No, it won’t be a birthday cake.  The way I would get a terp is that one would be assigned to me by our battalion staff, and the night before the patrol I would tell the specific terp that we were leaving in the morning, give them a time that I would come by to check up on them, and tell them that they would need to bring enough gear for 5 days.  Before they got into my vehicles and we rolled away, I would look through their gear to make sure they didn’t have any kind of communications device (radio or telephone) to let their buddies know where we were at.

Fudge the Schedule to Minimize Project Risk: Terps–even the good ones–are notorious for being on “local time”, which for a patrol means one hour later than you told them you were leaving.  The good part about this is that it’s way better than true local time, which has a margin of error of a week and a half.  In order to keep from being late, always tell the terps when you’ll need them an hour and a half before you really do, then check up on them every half hour or so.  Out on patrol, I would cut that margin down to half an hour because they didn’t have all the typical distractions to make them late.

Talk Slowly, Avoid Complex Sentences: The first skill to learn when using terps is to say things that their understanding of English can handle.  When they’re doing their job for you, simple sentences work best.  I know I’m walking down the road of heresy, but this is where quantitative risk assessment done poorly doesn’t work for me, because now I have something that’s entirely too complex to interpret to the non-IT crowd.  In fact, it probably is worse than no risk assessment at all because it comes across as “consultantspeak” with no tangible link back to reality.

Put Your Resources Where the Greatest Risk Is: To a vehicle patrol out in the desert, most of the action happens at the front of the patrol.  That’s where you need a terp.  That way, the small stuff, such as asking a local farmer to move his goats and sheep out of the road so you can drive through, stays small–without a terp up front, a 2-minute conversation becomes 15 minutes of hassle as you first have to get the terp up to the front of the patrol then tell them what’s going on.

Pigs, Chicken, and Roadside Bombs: We all know the story about how in the eggs and bacon breakfast, the chicken is a participant but the pig is committed.  Well, when I go on a patrol with a terp, I want them to be committed.  That means riding in the front vehicle with me.  It’s my “poison pill” defense in knowing that if my terp tipped off the Taliban and they blow up the lead vehicle with me in it, at least they would also get the terp.  A little bit of risk-sharing in a venture goes a long way at getting honesty out of people.

Share Risk in a Culturally-Acceptable Way: Our terps would balk at the idea of riding in the front vehicle most of the time.  I don’t blame them; it’s the vehicle most likely to be turned into 2 tons of slag metal thanks to pressure plates hooked up to IEDs.  The typical American response is something along the lines of “It’s your country, you’re riding up front with me so if I get blown up, you do too”.  Yes, I share that ideal, but the Afghanis don’t understand country loyalties; the only thing they understand is their tribe, their village, and their family.  The Guerilla-CISO method here is to get down inside their heads by saying “Come ride with me, if we die, we die together like brothers”.  You’re saying the same thing basically, but you’re framing it in a cultural context that they can’t say no to.

Reward People Willing to Embrace Your Risks: One of the ways that I was effective in dealing with the terps was that I would check in occasionally to see if they were doing alright during down-time from missions.  They would show me some Bollywood movies dubbed into Pashto, I would give them fatty American foods (Little Debbie FTW!).  They would play their music.  I would make fun of their music and amaze them because they never figured out how I knew that the song had drums, a stringed instrument, and somebody singing (hey, all their favorite songs have that).  They would share their “foot bread” (the bread is stamped flat by people walking on it before it’s cooked; I was too scared to ask if they washed their feet first) with me.  I would teach them how to say “Barbara (their assignment scheduler back on an airbase) was a <censored> for putting them out in the middle of nowhere on this assignment” and other savory phrases.  These forays weren’t for my own enjoyment, but to build rapport with the terps so that they would understand when I would give them some risk management love, Guerilla-CISO style.

Police, Afghan Army and an Interpreter photo by ME!.  The guy in the baseball cap and glasses is one of the best terps I ever worked with.




Posted in Army, Risk Management, The Guerilla CISO, What Works | 1 Comment »
