Your Security “Requirements” are Teh Suxxorz

Posted July 1st, 2009

Face it, your security requirements suck. I’ll tell you why.  You write down controls verbatim from your catalog of controls (800-53, SOX, PCI, 27001, etc.), put them into a contract, and wonder why, when it comes time for security testing, we just aren’t talking the same language.  Even worse, you put in the cr*ptastic “Contractor shall be compliant with FISMA and all applicable NIST standards”.  Yes, this happens more often than I care to count, and I’ve seen it from both sides.

The problem with quoting back the “requirements” from a catalog of controls is that they’re not really requirements, they’re control objectives–abstract representations of what you need in order to protect your data, IT system, or business.  It’s a bit like brain surgery using a hammer and chisel–yes, it might work out for you, but I don’t really feel comfortable doing it or being on the receiving end.

And this is my beef with the way we manage security controls nowadays.  They’re not requirements; functionally, they’re a high-level needs statement or even a security concept of operations.  Security controls need to be tailored into real requirements that are buildable, testable, measurable, and achievable.

Requirements photo by yummiec00kies.  There’s a social commentary in there about “Single, slim, and pleasant looking” but even I’m afraid to touch that one. =)

Did you say “Wrecks and Female Pigs”? In the contracting world, we have two vehicles that we use primarily for security controls: Statements of Work (SOW) and Engineering Requirements.

  • Statements of Work follow along the lines of activities performed by people.  For instance, “contractor shall perform monthly 100% vulnerability scanning of the $FooProject.”
  • Engineering Requirements are exactly what you want to have built.  For instance, “Prior to displaying the login screen, the application shall display the approved Generic Government Agency warning banner as shown below…”

Let’s have a quick exercise, shall we?

What 800-53 says: The information system produces audit records that contain sufficient information to, at a minimum, establish what type of event occurred, when (date and time) the event occurred, where the event occurred, the source of the event, the outcome (success or failure) of the event, and the identity of any user/subject associated with the event.

How it gets translated into a contract: Since it’s more along the lines of a security functional requirement (i.e., it’s a specific piece of functionality, not a task we want people to do), we break it out into multiple requirements:

The $BarApplication shall produce audit records with the following content:

  • Event description such as the following:
    • Access the $Baz subsystem
    • Mounting external hard drive
    • Connecting to database
    • User entered administrator mode
  • Date/time stamp in ‘YYYY-MM-DD HH:MM:SS’ format;
  • Hostname where the event occurred;
  • Process name or program that generated the event;
  • Outcome of the event as one of the following: success, warn, or fail; and
  • Username and UserID that generated the event.
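To show how testable that is, here’s a minimal sketch in Python of a function that emits one conforming record.  The pipe-delimited layout, field order, and function name are my choices for illustration, nothing mandated by 800-53:

```python
import socket
import sys
from datetime import datetime

def audit_record(event: str, outcome: str, username: str, user_id: int) -> str:
    """Build one audit record containing every field the requirement calls out."""
    assert outcome in ("success", "warn", "fail")  # the three allowed outcomes
    return "|".join([
        datetime.now().strftime("%Y-%m-%d %H:%M:%S"),  # date/time stamp
        socket.gethostname(),                          # hostname where it occurred
        sys.argv[0],                                   # process/program name
        event,                                         # event description
        outcome,                                       # outcome of the event
        f"{username}({user_id})",                      # username and UserID
    ])

print(audit_record("User entered administrator mode", "success", "jsmith", 1001))
```

The point is that every bullet above turns into a field you can check with a grep, which is what makes it a requirement instead of a control objective.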

For a COTS product (e.g., Windows Server 2003, Cisco IOS), when it comes to logging, I get what I get, and this means I don’t have a requirement for logging unless I’m the one designing the engineering requirements for Windows itself.

What 800-53 says: The organization configures the information system to provide only essential capabilities and specifically prohibits and/or restricts the use of the following functions, ports, protocols, and/or services: [Assignment: organization-defined list of prohibited and/or restricted functions, ports, protocols, and/or services].

How it gets translated into a contract: Since it’s more along the lines of a security functional requirement, we break it out into multiple requirements:

The $Barsystem shall have the software firewall turned on and only the following traffic shall be allowed:

  • TCP port 443 to the command server
  • UDP port 123 to the time server at this address
  • etc…
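Written that way, the requirement is directly testable.  A minimal sketch of the check in Python, with placeholder destinations standing in for the real command and time servers:

```python
# Hypothetical allow-list matching the requirement above; the destinations
# are placeholders for whatever the real infrastructure actually uses.
ALLOWED = {
    ("tcp", 443, "command.example.mil"),
    ("udp", 123, "time.example.mil"),
}

def traffic_permitted(proto: str, port: int, dest: str) -> bool:
    """Default-deny: anything not on the allow-list gets dropped."""
    return (proto.lower(), port, dest) in ALLOWED

assert traffic_permitted("tcp", 443, "command.example.mil")
assert not traffic_permitted("tcp", 80, "www.example.com")  # denied by default
```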

If we drop the system into a pre-existing infrastructure, we don’t need firewall rules per se as part of the requirements; what we do need is a SOW along the following lines:

The system shall use our approved process for firewall change control, see a copy here…

So what’s missing, and how do we fix the sorry state of requirements?

This is the interesting part, and right now I’m not sure if we can, given the state of the industry and the infosec labor shortage:  we need security engineers who understand engineering requirements and project management in addition to vulnerability management.

Don’t abandon hope yet; let’s look at some things that can help…

Security requirements are a “best effort” proposition.  By this, I mean that we have our requirements and they don’t fit in all cases, so what we do is throw them out there, and if you can’t meet a requirement, we waive it (live with it, hope for the best) or apply a compensating control (shield it from bad things happening).  This is unnerving because what we end up doing is arguing all the time over whether the requirements as written need to be done or not.  This drives the engineers nuts.

It’s a significant amount of work to translate control objectives into requirements.  The easiest, fastest way to fix the “controls view” of a project is to scope out of it the things that are provided by infrastructure or by policies and procedures at the enterprise level.  Hmmm, sounds like explicitly stating what our shared/common controls are.

You can manage controls by exclusion or inclusion:

  • Inclusion:  We have a “default null” for controls and we will explicitly say in the requirements what controls you do need.  This works for small projects like standing up a pair of webservers in an existing infrastructure.
  • Exclusion:  We give you the entire catalog of controls and then tell you which ones don’t apply to you.  This works best with large projects such as the outsourcing of an entire IT department.
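In other words, the two approaches are complements of each other over the same catalog.  A trivial sketch in Python, with made-up control IDs:

```python
CATALOG = {"AC-2", "AU-3", "CM-7", "IA-2", "SC-7"}   # stand-in for the full catalog

# Inclusion: "default null" -- start from nothing and name what applies.
applicable_by_inclusion = {"AU-3", "SC-7"}           # the pair of webservers

# Exclusion: start from the whole catalog and name what doesn't apply.
not_applicable = {"CM-7"}
applicable_by_exclusion = CATALOG - not_applicable   # the outsourced IT department

print(applicable_by_inclusion, applicable_by_exclusion)
```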

We need a reference implementation per technology.  Let’s face it, how many times have I taken the 800-53 controls and broken them down into controls relevant for a desktop OS?  At least five in the last three years.  The way you really need to do this is to have a hardening guide that serves as the authoritative set of requirements for that technology.  It makes life simple.  I’m not saying deviate from doctrine and skip the 800-53 controls and 800-53A test procedures; the point of having a hardening guide is that it’s really just a set of tailored controls specific to a certain technology type.  The work has been done for you, quit trying to re-engineer the wheel.

Use a Joint Responsibilities Matrix.  Basically this breaks down the catalog of controls into the following columns:

  • Control Designator
  • Control Title
  • Provided by the Government/Infrastructure/Common Control
  • Provided by the Contractor/Project Team/Engineer
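For example, a couple of rows might look like this (the control titles are from 800-53; the allocations are invented for illustration):

  • AU-3, Content of Audit Records: provided by the Contractor/Project Team (the application has to generate the records)
  • PE-3, Physical Access Control: provided by the Government/Infrastructure (the hosting datacenter already guards the racks)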


Posted in BSOFH, Outsourcing, Technical | 3 Comments »

The Spanish Civil War and the Rise of Cyberwar

Posted June 22nd, 2009

As usual, I greatly enjoyed your blog from 17 June, A Short History of Cyberwar Lookalikes, Rybolov. Moreover, I really appreciated your historical examples. It warms my heart whenever an American uses the Russo-Japanese War of 1904-05 as a historic example of anything. Most Americans have never even heard of it. Yet it is an important event even today, if for no other reason than that it established the tradition of having the US President intercede as a peace negotiator and win the Nobel Peace Prize for his efforts. Because of this, some historians mark it as the historic point at which the US entered the world stage as a great power. By the way, the President involved was Teddy Roosevelt.

Concerning the state and nature of Cyberwar today, I’ve seen Rybolov’s models and I think they make sense. Cyberwar as an extension of electronic warfare makes some sense. The analogy does break down at some point because of the peculiarity of the medium. For example, when considering exploitation of SCADA systems, as we have seen in the Baltic States and in a less focused manner here in North America, it is hard to see a clear analogy in electronic warfare. The consequences look more like old-fashioned kinetic warfare. Likewise, there are aspects of Cyberwarfare that look like good old-fashioned human intelligence and espionage. Of course I also have reservations about the electronic warfare model based on government politics. Our friends at NSA have been suggesting for years that Cyberwarfare is an extension of signals intelligence, with the accompanying claim that they (NSA) should have the technical, legal, and of course budgetary resources that go along with it.

I’ve also seen other writers propose other models of Cyberwarfare, and they tend to be a mixed bag at best. At worst, many of the models proposed appear to be the laughable writings of individuals with no more insight into or knowledge of intelligence operations than the latest James Bond movie. My own opinion is that there are two models, or driving forces, behind international Cyberwarfare activity. The first is pure opportunism. Governments and criminal organizations alike, even authoritarian governments, have seen the Hollywood myths and the media hysteria about hacker exploits. Over time, criminal gangs have created and expanded their cyber capabilities, driven by a calculation of profits and risks much like conventional businesses. Combine an international banking environment that allows funds to be transferred across borders with little effort and less time with an international legal environment that is largely out of touch with the Internet and international telecommunications, and we have a breeding ground for Cyber criminals, one in which cross-border criminal activity is often much less risky than domestic criminal activity.

As successful Cyber criminal gangs have emerged in totalitarian regimes, it shouldn’t be a surprise that the governments involved would eventually take an interest in both their activities and their techniques. There are several reasons a totalitarian government might want to do this. Perhaps the simplest motivation is that corrupt officials would be drawn to share in the profits in exchange for protection. In addition, the intelligence arms of these nations could leverage the gangs’ services and techniques at a fraction of the cost of developing similar capabilities themselves. Using these capabilities would also provide the intelligence agencies, and even the host government, with an element of deniability if operations assigned to the criminal gangs were detected.

Monument to the International Brigade photo by Secret Pilgrim.  For more information, read the history of the International Brigade.

Perhaps the most interesting model of development and Cyberwarfare activity today is based on the pre-WW II example of the Spanish Civil War. After World War I, a period of mental and societal exhaustion followed on the part of all participating nations. This was quickly followed by a period of self-assessment and rebuilding. In the case of the defeated Germany, the reconstruction period was protracted by difficult economic conditions, in part created by the harsh conditions of surrender imposed by the victorious European governments.

It is also important to remember that these same victorious European governments undermined many of the social and moral underpinnings of German society by systematically dismantling the basis of traditional German government and governmental legitimacy without regard for what should replace it. The assessment of most historians is that these factors combined to sow the seeds of hatred against the victorious powers and created a social climate in which a return to open warfare at some time in the future was seen as unavoidable and perhaps desirable. The result was that Germany actively prepared and planned for what was commonly seen as an inevitable future war. New systems and technologies were considered and tested. However, treaty limitations also hampered some of these efforts.

In the Soviet Union, a similar set of conclusions developed during this period within the ruling elite, specifically that renewed war with Germany was inevitable in the near term. Like Germany, the Soviet Union actively prepared for this war. Likewise, they considered and studied new technologies and approaches to war. Somewhat surprisingly, they also secretly conspired with the Germans to provide them with secret proving grounds and test facilities to study some of the new technologies and approaches to war that would otherwise have been banned under provisions of the peace treaties of World War I.

So, when Civil War broke out in Spain in the summer of 1936, both Germany and the Soviet Union were positively delirious at the prospect of testing their new military equipment and theories under battlefield conditions, but without the risks of participating in a real shooting war as active belligerents. So, both governments sent every military technology possible to their proxies in Spain under the auspices of “aid”. In some cases they even sent “advisors” who were nothing less than active soldiers and pilots in the conflict. At first, this activity took place under a shroud of secrecy. But when you send military equipment and people to fight in foreign lands, it usually takes no time at all for someone to notice that “those guys aren’t from here”.

Bomber During the Spanish Civil War photo by -Merce-.  Military aviation, bombing in particular, was one of the new technologies that was tested during the Spanish Civil War.

Since the fall of the Soviet Union, I think the world has looked at the United States as the world’s sole superpower. Many view this situation with fear and suspicion. Even some of our former Cold War allies have taken this view. Certainly our primary Cold War adversaries have adopted this stance. If you look at contemporary Chinese and Russian military writing, it is clear that they have adopted a position similar to the pre-World War II notion that war between the US and Russia, or war between the US and China, is inevitable. To make matters worse, during much of the Cold War the US never seemed to pull it together militarily long enough to actually win a war. Toward the end of the Cold War we started smacking smaller allies of the Soviet Union, like Grenada, and succeeded.

We then moved on to give Iraq a real drubbing after the Cold War. The so-called “Hyperwar” in Iraq terrified the Russians and Chinese alike. The more they studied what we did in Iraq, the more terrified they became. One of the many counters they have written about is posing asymmetric threats to the US, that is to say, threatening the US in a way in which it is uniquely or unusually vulnerable. One of these areas of vulnerability is Cyberspace. All sorts of press reporting indicates that the Russians and Chinese have made significant investments in this area. The Russians and Chinese deny these reports as quickly as they emerge, so it is difficult to determine what the truth is. The fact that the Russians and Chinese are so sensitive to these claims may be a clear indication that they have active programs; the guilty men in these cases have a clear record of protesting too much when they are most guilty.

Assuming that all of this post-Cold War activity is true, I believe this puts us in much the same situation that existed in the pre-World War II Spanish Civil War era. I think the Russian and Chinese governments are just itching to test and refine their Cyberwarfare capabilities. But at the same time, I think they want to operate in a manner similar to how the Germans and the Soviets operated in that conflict. I think they want to test, and are testing, their capabilities, but in a limited way that provides them with some deniability and diplomatic cover. This is important to them because the last thing they want now is to create a Cyber-incident that will precipitate a general conflict or even a major shift in diplomatic or trade relationships.

One of the major differences between the Spanish Civil War example and our current situation, of course, is that no physical battlefield needs to exist to provide a live testing environment for Cyber weapons and techniques. However, at least in the case of Russia with respect to Georgia, they are exploiting open military conflicts to use Cyberwar techniques when those conflicts do arise. We have seen similar but much smaller efforts on the part of Iran and the Palestinian Authority as they embrace what is seen as a cheap and low-risk weapon. However, their efforts seem to be more reactionary and rudimentary. The point is, the longer this game goes on without serious consequence, the more it will escalate both vertically (in sophistication) and horizontally (by being embraced by more countries). Where all of this will lead is anyone’s guess. But I think the safe money is on betting that the concept of Cyberwar is here to stay, and that the tools, techniques, and full potential of Cyberwar will eventually be used as part of a strategy that includes more traditional weapons and techniques.



Posted in Public Policy, Rants, The Guerilla CISO | No Comments »

A Short History of Cyberwar Lookalikes

Posted June 17th, 2009

Rybolov’s Note: Hello all, I’m venturing into an open-ended series of blog posts aimed at starting conversation. Note that I’m not selling anything *yet* but ideas and maybe some points for discussion.

Let’s get this out there from the very beginning: I agree with Ranum that full-scale, nation-v/s-nation Cyberwar is not a reality.  Not yet anyway, and hopefully it never will be.  However, on a smaller scale with well-defined objectives, cyberwar is not only happening now, but it is also a natural progression over the past century.

DojoSec Monthly Briefings – March 2009 – Marcus J. Ranum from Marcus Carey on Vimeo.

Looking at where we’re coming from, the existing models and techniques for activities similar to cyberwar frame our present state very nicely:

Electronic Countermeasures. This has been happening for some time.  The first recorded use of electronic countermeasures (ECM) was in 1905, when the Russians tried to jam the radio signals of the Japanese fleet besieging Port Arthur.  If you think about ECM as DOS based on radio, sonar, etc., then it seems like cyberwar is just an extension of the same denial of communications that we’ve been doing since communication was “invented”.

Modern Tactical Collection and Jamming. This is where Ranum’s point about spies and soldiers falls apart, mostly because we don’t have clandestine operators doing electronic collection at the tactical level–they’re doing both collection and “attack”.  The typical battle flow goes something along the lines of scanning for items of interest, collecting on a specific target, then jamming once hostilities have begun.  Doctrinally, collection is called Electronic Support and jamming is called Electronic Attack.  What you can expect in a cyberwar is a period of reconnaissance and surveillance for an extended length of time followed by “direct action” during other “kinetic” hostilities.

Radio Station Jamming. This is a wonderful little world that most of you never knew existed.  The Warsaw Pact used to jam Voice of America and other sorts of fun propaganda that we would send at them.  Apparently we’ve had some interesting radio jamming since the end of the Cold War, too, with China, Cuba, North Korea, and South Korea implicated to one degree or another.

Website Denial-of-Service. Since only old people listen to radio anymore and most news is on the Internet, it makes sense to DOS news sites with an opposing viewpoint.  This happens all the time, with attacks ranging from script kiddies doing ping floods to massive DOSBots and some kind of racketeering action… “You got a nice website, it would be pretty bad if nobody could see it.”  Makes me wonder why the US hasn’t taken Al Jazeera off the Internet.  Oh, that’s right, somebody already tried it.  However, in my mind, jamming something like Al Jazeera is very comparable to jamming Voice of America.

Estonia and Gruzija DOS. These worked pretty well from a denial-of-communications standpoint, but only because of the size of the target.  And so what if it did block the Internet?  When it comes to military forces, it’s at best an annoyance; at most it will slow you down just enough.  Going back to radio jamming, blocking out a signal only works when you have more network to throw at the target than the target has network to communicate with the other end.  Believe it or not, there are calculators to determine this.
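For the radio case, the calculator is a link-budget comparison: does the jammer put more power into the receiver than the friendly transmitter does?  A minimal free-space sketch in Python; the numbers are invented, and real calculators add antenna gains, bandwidths, and propagation losses:

```python
import math

def jam_to_signal_db(erp_jammer_w: float, erp_tx_w: float,
                     r_tx_m: float, r_jam_m: float) -> float:
    """Simplified free-space jamming-to-signal ratio in dB.

    Received power falls off as 1/R^2 in free space, so the jammer wins
    (J/S > 0 dB) only when its power advantage outweighs its distance
    disadvantage relative to the transmitter it's trying to drown out.
    """
    erp_ratio_db = 10 * math.log10(erp_jammer_w / erp_tx_w)
    range_ratio_db = 20 * math.log10(r_tx_m / r_jam_m)
    return erp_ratio_db + range_ratio_db

# A 100 W jammer 10 km from the receiver vs. a 10 W link transmitter 1 km away:
# +10 dB of power advantage, -20 dB of range disadvantage = -10 dB.  Not enough.
print(jam_to_signal_db(100, 10, 1_000, 10_000))
```

The DOS analogue is the same arithmetic with bandwidth in place of radiated power.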

Given this evolution of communications denial, it’s not unthinkable that people would launch electronic attacks at each other via radar, radio, carrier pigeon, IP, or any other way they can.

However, as in the previous precedents, and more to some of the points of Ranum’s talk at DojoSec, electronic attacks by themselves achieve only limited objectives.  Typically the most likely play is to conduct a physical attack and use the electronic attack, whether it’s against radio, radar, or IT assets, to delay the enemy’s response.  This is why you have to take an electronic attack seriously if it’s being launched by a country with a military capable of attacking you physically–it might be just a jamming attack, or it might be a precursor to an invasion.

Bottom line here is this: if you use it for communication, it’s a target and has been for some time.



Posted in Technical, The Guerilla CISO, What Doesn't Work, What Works | 5 Comments »

Why We Need PCI-DSS to Survive

Posted June 9th, 2009

And by “We”, I mean the security industry as a whole.  And yes, this is your public-policy lesson for today, let me drag my soapbox over here and sit for a spell while I talk at you.

By “Survive”, I mean that we need some kind of self-regulatory framework that fulfills the niche that PCI-DSS occupies currently. Keep reading, I’ll explain.

And the “Why” is a magical phrase, everybody say it after me: self-regulatory organization.  In other words, the IT industry (and the Payment Card Industry) needs to regulate itself before it crosses the line into being considered for statutory regulation (i.e., a law) by the Federal Government.

Remember the PCI-DSS hearings with the House Committee on Homeland Security (AKA the Thompson Committee)?  All the Security Twits were abuzz about it, and it did my heart great justice to hear all the cool kids become security and public policy wonks at least for an afternoon.  Well, there is a little secret here, and that is that when Congress gets involved, they’re gathering information to determine if they need to regulate an industry.  That’s about all Congress can do: make laws that you (and the Executive Branch) have to follow, maybe divvy up some tax money, and bring people in to testify.  Other than that, it’s just positioning to gain favor with other politicians and maybe some votes in the next election.

Regulation means audits and more compliance.  They go together like TCP and IP.  Most regulatory laws have at least some designation for a party who will perform oversight.  They have to do this because, well, if you’re not audited/assessed/evaluated/whatever, then it’s really an optional law, which doesn’t make sense at all.

Yay Audits photo by joebeone.

Another magical phrase that the public policy sector can share with the information security world: audit burden.  Audit burden is how much a company or individual pays, both in direct costs (paying the auditors) and in indirect costs (babysitting the auditors, producing evidence for the auditors, taking people away from making money to talk to auditors, “audit requirements”, etc.).  I think we can all agree that low audit burden is good and high audit burden is bad.  In fact, I think one of the problems with FISMA as implemented is that it has a high audit burden with only moderately tangible results.  But I digress, this post is about PCI-DSS.

There’s even a concept mulling around in the back of my head for a metric that compares audit burden to the amount of security a framework provides and to the amount of assurance it provides against statutory regulation.  It almost sounds like the start of a balanced scorecard for security management frameworks; now if I could get @alexhutton to jump on it, his quant brain would churn out great things in short order.
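Purely as a back-of-the-napkin sketch of that idea, in Python, where every field name and number is invented:

```python
from dataclasses import dataclass

@dataclass
class FrameworkScore:
    """Hypothetical scorecard entry: benefits per unit of audit pain."""
    name: str
    audit_burden: float       # direct + indirect audit costs, normalized 0-10
    security_provided: float  # actual risk reduction delivered, 0-10
    regulatory_cover: float   # assurance against statutory regulation, 0-10

    def score(self) -> float:
        # Higher is better; a framework that costs a lot of audit and buys
        # little security or legislative cover scores poorly.
        return (self.security_provided + self.regulatory_cover) / self.audit_burden

print(FrameworkScore("PCI-DSS", audit_burden=6.0,
                     security_provided=5.0, regulatory_cover=7.0).score())
```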

But this is the lesson for today: self-regulation is preferable to legislation.

  • Self-regulation is defined by people in the industry.  Think about the State Bar Association setting the standards for who is allowed to practice law.
  • Standards ideally become codified versions of “best practices”.  OK, this is if they’re done correctly, more to follow.
  • Standards are more flexible than laws.  As hard/cumbersome as it is to change a standard, the time involved in changing a law is prohibitive most of the time unless you’re running for reelection.
  • Standards sometimes can be “tainted” to force out competition; laws are even more prone to this.

The sad fact here is that if we don’t figure out as an industry how to make PCI-DSS or other forms of self-regulation work, Congress will regulate for us.  Don’t like PCI-DSS because of the audit burden?  Wait until you have a law that requires you to implement the same controls framework.  It will be the same thing, only with bigger penalties for failure, larger audit burdens to avoid the larger penalties, and larger industries created to satisfy the market demand for audit.  Come meet the new regulatory body, same as the old only bigger and meaner. =)

However, self-regulation works if you do it right, and by right I mean this:

  • The process is transparent and not the product of a secret back-room cabal.
  • There is representation from all the stakeholders.  For PCI-DSS, that would be Visa/MasterCard, banks, processors, large merchants, small merchants, and some of the actual customers.
  • The standards committee knows how to compromise and come to a consensus.  I.e., we can’t have full hard drive encryption, a WAF, code review, and sacrificing of chickens in the server room all at once, so we’ll make one of the four mandatory.
  • The regulatory organization has a grievance process for its constituency to present valid (AKA “not just more whining”) discrepancies in the standards and processes for clarification or consideration for change.
  • The standard is “owned” by every member of the constituency.  Right now, people governed by PCI-DSS do not feel that the standard is their standard, that they have a say in what comprises it, or that they are the ones being helped by it.  Some of that is true, some of that is an image problem.  The way you combat this is by doing the things that I mentioned in the previous bullets.

Hmm, sounds like making an ISO standard, which brings its own set of politics.

While we need some form of self-regulation, right now PCI-DSS and ISO 27001 are the closest that we have in the private sector.  Yeah, it sucks, but it sucks the least, just like our form of government.



Posted in Public Policy, Rants | 11 Comments »

Some Thoughts on POA&M Abuse

Posted June 8th, 2009

Ack, Plans of Action and Milestones.  I love them and I hate them.

For those of you who “don’t habla Federali”, a POA&M is basically an IOU from the system owner to the accreditor: yes, we will fix something, but for some reason we can’t do it right now.  Usually these come from Security Test and Evaluation (ST&E) or Certification and Accreditation (C&A) findings.  In fact, some places I’ve worked won’t make new POA&Ms unless they’re traceable back to ST&E results.

Functions that a POA&M fulfills:

  • Tracks issues to resolution
  • Serves as a “risk register”
  • Justifies budget requests
  • Generates mitigation metrics
  • Supports data-mining to find common vulnerabilities across systems
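To see how one record can back all five of those functions, here’s a POA&M entry sketched as a Python data structure; the field names are my own invention, not from any official template:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PoamEntry:
    weakness: str             # what needs fixing (issue tracking)
    risk_level: str           # "high" / "moderate" / "low" (risk register)
    source: str               # traceability, e.g. an ST&E finding number
    estimated_cost: float     # feeds the budget justification
    due: date                 # feeds the mitigation metrics
    status: str = "open"
    milestones: list = field(default_factory=list)

poams = [
    PoamEntry("Audit logs are not reviewed", "moderate", "ST&E finding 14",
              estimated_cost=20_000, due=date(2009, 9, 1)),
]
# Mitigation metric: how many high-risk items are still open?
open_highs = sum(1 for p in poams if p.status == "open" and p.risk_level == "high")
print(open_highs)
```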

But today, we’re going to talk about POA&M abuse.  I’ve seen my fair share of this.

Conflicting Goals: The basic problem is that we want POA&Ms to satisfy too many conflicting functions.  I.e., if we use the number of open POA&Ms as a metric to determine whether our system owners are doing their job, but we also report those numbers at the department level or to OMB, then there is pressure to close POA&Ms as fast as possible, even if it means losing the ability to track things at the system level or to spend time on things that solve long-term security problems.  Our vulnerability/weakness/risk management process forces us into creating small, easy-to-satisfy POA&Ms instead of long-term projects.

Near-Term v/s Long-Term:  If we set up POA&Ms with due dates of 30-60-90 days (for high, moderate, and low risks respectively), we don’t really have any time to turn these POA&Ms into budget support.  If we manage the budget up to three years in advance and we have 30 days for high-risk findings, then we’ll have exactly zero input into the budget from any POA&M unless we can delay the bugger for two years or so, much too long for it to actually be fixable.

Bad POA&Ms:  Let’s face it, sometimes the one-for-one mapping of ST&E, C&A, and risk assessment findings to POA&Ms means that you get POA&Ms that are “bad”, and by that I mean they can’t be satisfied or they’re not really something that you need to fix.

Some of the bad POA&Ms I’ve seen (paraphrased from the originals):

  • The solution uses {Microsoft|Sun|Oracle} products, which have a history of vulnerabilities
  • The project team needs to tell the vendor to put IPv6 into their product roadmap
  • The project team needs to implement X, which is a common control provided at the enterprise level
  • The System Owner and DAA have accepted this risk, but we’re still turning it into a POA&M
  • This is a common control that we really should handle at the enterprise level, but we’re putting it on your POA&M list for a simple web application

Plan of Action for Refresh Philly photo by jonny goldstein.

Keys to POA&M Nirvana:  So over the years, I’ve observed some techniques for success in working with POA&Ms:

  • Agree on the evidence/proof of POA&M closure when the POA&M is created
  • Fix it before it becomes a POA&M
  • Have a waiver or exception process that requires a cost-benefit-risk analysis
  • Start with “high-level” POA&Ms and work down to more detailed POA&Ms as your security program matures
  • POA&Ms are between the System Owner and the DAA, but the System Owner can turn around and negotiate a POA&M as a CDRL (a contract deliverable) with an outsourced IT provider

And then the keys to Building Good POA&Ms:

  • Actionable–i.e., they have something that you need to do
  • Achievable–they can be accomplished
  • Demonstrable–you can demonstrate that the POA&M has been satisfied
  • Properly-Scoped–absorbed at the agency level, the common control level, or the system level
  • They are SMART: Specific, Manageable, Attainable, Relevant, and within a specified Timeframe
  • They are DUMB: Doable, Understandable, Manageable, and Beneficial

Yes, I stole the last two bullets from the picture above, but they make really good sense, in the same way that “know thyself” is awesome advice from the Oracle at Delphi.



Posted in BSOFH, FISMA | No Comments »

Working with Interpreters, a Risk Manager’s Guide

Posted June 3rd, 2009

So how does the Guerilla-CISO staff communicate with the locals on jaunts to foreign lands such as Delaware, New Jersey, and Afghanistan?  The answer is simple: we use interpreters, known in infantrese as “terps”.  Yes, you might not trust them deep down inside, because they harbor loyalties so complex that you could spend the rest of your life figuring them out, but you can’t do the job without them.

But thinking back on how we used our interpreters, I’m reminded of some basic concepts that might be transferable to the IT security and risk management world.  Or maybe not; at least kick back and enjoy the storytelling while it’s free. =)

Know When to Treat Them Like Mushrooms: And by that, we mean “keep them in the dark and feed them bullsh*t”.  What we really mean is to give potentially adversarial people the least amount of information that they need to do their job, in order to limit the frequency and impact of them doing something nasty.  When you’re planning a patrol, the worst way to ruin your week is to tell the terps when you’re leaving and where you’re going.  That way, they can call their Taliban friends when you’re not looking and there will be a surprise waiting for you.  No, it won’t be a birthday cake.  The way it worked was that a terp would be assigned to me by our battalion staff, and the night before the patrol I would tell that specific terp that we were leaving in the morning, give them a time that I would come by to check up on them, and tell them to bring enough gear for five days.  Before they got into my vehicles and we rolled away, I would look through their gear to make sure they didn’t have any kind of communications device (radio or telephone) to let their buddies know where we were.

Fudge the Schedule to Minimize Project Risk: Terps–even the good ones–are notorious for being on “local time”, which for a patrol means one hour later than you told them you were leaving.  The good part about this is that it’s way better than true local time, which has a margin of error of a week and a half.  In order to keep from being late, always tell the terps when you’ll need them an hour and a half before you really do, then check up on them every half hour or so.  Out on patrol, I would cut that margin down to half an hour because they didn’t have all the typical distractions to make them late.

Talk Slowly, Avoid Complex Sentences: The first skill to learn when using terps is to say things that their understanding of English can handle.  When they’re doing their job for you, simple sentences work best.  I know I’m walking down the road of heresy, but this is where quantitative risk assessment done poorly doesn’t work for me, because now I have something that’s entirely too complex to interpret for the non-IT crowd.  In fact, it’s probably worse than no risk assessment at all because it comes across as “consultantspeak” with no tangible link back to reality.

Put Your Resources Where the Greatest Risk Is: To a vehicle patrol out in the desert, most of the action happens at the front of the patrol.  That’s where you need a terp.  That way, the small stuff, such as asking a local farmer to move his goats and sheep out of the road so you can drive through, stays small–without a terp up front, a 2-minute conversation becomes 15 minutes of hassle as you first have to get the terp up to the front of the patrol then tell them what’s going on.

Pigs, Chicken, and Roadside Bombs: We all know the story about how in the eggs-and-bacon breakfast, the chicken is a participant but the pig is committed.  Well, when I go on a patrol with a terp, I want them to be committed.  That means riding in the front vehicle with me.  It’s my “poison pill” defense: if my terp tipped off the Taliban and they blew up the lead vehicle with me in it, at least they would also get the terp.  A little bit of risk-sharing in a venture goes a long way toward getting honesty out of people.

Share Risk in a Culturally-Acceptable Way: Our terps would balk at the idea of riding in the front vehicle most of the time.  I don’t blame them; it’s the vehicle most likely to be turned into two tons of slag metal thanks to pressure plates hooked up to IEDs.  The typical American response is something along the lines of “It’s your country, you’re riding up front with me so if I get blown up, you do too”.  Yes, I share that ideal, but the Afghans don’t understand country loyalties; the only things they understand are their tribe, their village, and their family.  The Guerilla-CISO method here is to get down inside their heads by saying “Come ride with me; if we die, we die together like brothers”.  You’re saying the same thing basically, but you’re framing it in a cultural context that they can’t say no to.

Reward People Willing to Embrace Your Risks: One of the ways that I was effective in dealing with the terps was that I would check in occasionally to see if they were doing alright during down-time from missions.  They would show me some Bollywood movies dubbed into Pashto, and I would give them fatty American foods (Little Debbie FTW!).  They would play their music.  I would make fun of their music and amaze them because they never figured out how I knew that a song had drums, a stringed instrument, and somebody singing (hey, all their favorite songs have that).  They would share their “foot bread” (the bread is stamped flat by people walking on it before it’s cooked; I was too scared to ask if they washed their feet first) with me.  I would teach them how to say “Barbara (their assignment scheduler back on an airbase) was a <censored> for putting them out in the middle of nowhere on this assignment” and other savory phrases.  These forays weren’t for my own enjoyment, but to build rapport with the terps so that they would understand when I gave them some risk management love, Guerilla-CISO style.

Police, Afghan Army and an Interpreter photo by ME!.  The guy in the baseball cap and glasses is one of the best terps I ever worked with.



Posted in Army, Risk Management, The Guerilla CISO | 1 Comment »
