Working with Interpreters: A Risk Manager’s Guide

Posted June 3rd, 2009

So how does the Guerilla-CISO staff communicate with the locals on jaunts to foreign lands such as Delaware, New Jersey, and Afghanistan?  The answer is simple: we use interpreters, known in infantrese as “terps”.  Yes, you might not trust them deep down inside because they harbor loyalties so complex that you could spend the rest of your life figuring them out, but you can’t do the job without them.

But in remembering how we used our interpreters, I’m reminded of some basic concepts that might be transferable to the IT security and risk management world.  Or maybe not; at least kick back and enjoy the storytelling while it’s free. =)

Know When to Treat Them Like Mushrooms: And by that, we mean “keep them in the dark and feed them bullsh*t”.  What we really mean is to give potentially adversarial people you’re working with only the least amount of information they need to do their job, in order to limit the frequency and impact of them doing something nasty.  When you’re planning a patrol, the worst way to ruin your week is to tell the terps when you’re leaving and where you’re going.  That way, they can call their Taliban friends when you’re not looking and they’ll have a surprise waiting for you.  No, it won’t be a birthday cake.  The way I would get a terp is that one would be assigned to me by our battalion staff, and the night before the patrol I would tell that specific terp that we were leaving in the morning, give them a time that I would come by to check up on them, and tell them to bring enough gear for 5 days.  Before they got into my vehicles and we rolled away, I would look through their gear to make sure they didn’t have any kind of communications device (radio or telephone) to let their buddies know where we were.

Fudge the Schedule to Minimize Project Risk: Terps–even the good ones–are notorious for being on “local time”, which for a patrol means one hour later than you told them you were leaving.  The good part about this is that it’s way better than true local time, which has a margin of error of a week and a half.  In order to keep from being late, always tell the terps when you’ll need them an hour and a half before you really do, then check up on them every half hour or so.  Out on patrol, I would cut that margin down to half an hour because they didn’t have all the typical distractions to make them late.

Talk Slowly, Avoid Complex Sentences: The first skill to learn when using terps is to say things that their understanding of English can handle.  When they’re doing their job for you, simple sentences work best.  I know I’m walking down the road of heresy, but this is where quantitative risk assessment done poorly doesn’t work for me, because now I have something that’s entirely too complex to interpret to the non-IT crowd.  In fact, it’s probably worse than no risk assessment at all because it comes across as “consultantspeak” with no tangible link back to reality.

Put Your Resources Where the Greatest Risk Is: To a vehicle patrol out in the desert, most of the action happens at the front of the patrol.  That’s where you need a terp.  That way, the small stuff, such as asking a local farmer to move his goats and sheep out of the road so you can drive through, stays small–without a terp up front, a 2-minute conversation becomes 15 minutes of hassle as you first have to get the terp up to the front of the patrol then tell them what’s going on.

Pigs, Chickens, and Roadside Bombs: We all know the story about how in the eggs-and-bacon breakfast, the chicken is a participant but the pig is committed.  Well, when I go on a patrol with a terp, I want them to be committed.  That means riding in the front vehicle with me.  It’s my “poison pill” defense: if my terp tips off the Taliban and they blow up the lead vehicle with me in it, at least they’ll also get the terp.  A little bit of risk-sharing in a venture goes a long way at getting honesty out of people.

Share Risk in a Culturally-Acceptable Way: Our terps would balk at the idea of riding in the front vehicle most of the time.  I don’t blame them; it’s the vehicle most likely to be turned into 2 tons of slag metal thanks to pressure plates hooked up to IEDs.  The typical American response is something along the lines of “It’s your country, you’re riding up front with me, so if I get blown up, you do too”.  Yes, I share that ideal, but the Afghans don’t understand country loyalties; the only things they understand are their tribe, their village, and their family.  The Guerilla-CISO method here is to get down inside their heads by saying “Come ride with me; if we die, we die together like brothers”.  You’re basically saying the same thing, but you’re framing it in a cultural context that they can’t say no to.

Reward People Willing to Embrace Your Risks: One of the ways that I was effective in dealing with the terps was that I would check in occasionally to see if they were doing alright during down-time between missions.  They would show me Bollywood movies dubbed into Pashto; I would give them fatty American foods (Little Debbie FTW!).  They would play their music.  I would make fun of their music and amaze them because they never figured out how I knew that a song had drums, a stringed instrument, and somebody singing (hey, all their favorite songs have that).  They would share their “foot bread” with me (the bread is stamped flat by people walking on it before it’s cooked; I was too scared to ask if they washed their feet first).  I would teach them how to say “Barbara (their assignment scheduler back on an airbase) was a <censored> for putting them out in the middle of nowhere on this assignment” and other savory phrases.  These forays weren’t for my own enjoyment, but to build rapport with the terps so that they would understand when I would give them some risk management love, Guerilla-CISO style.

Police, Afghan Army and an Interpreter photo by ME!  The guy in the baseball cap and glasses is one of the best terps I ever worked with.




LOLCATS and the 60-Day Cybersecurity Review

Posted May 28th, 2009

Ooh ooh, the review is supposed to be announced tomorrow!





When Standards Aren’t Good Enough

Posted May 22nd, 2009

One of the best things about being almost older than dirt is that I’ve seen several cycles within the security community.  Just like fashion and ladies’ hemlines, if you pay attention long enough, you’ll see history repeat itself, or something that closely resembles history.  Time for a short trip “down memory lane…”

In the early days of computer security, all eyes were fixed on Linthicum and the security labs associated with the NSA.  In the late 80’s and early 90’s, the NSA evaluation program was notoriously slow; glacial would be a word one could use.  Bottom line, the process just wasn’t responsive enough to keep up with the changes and improvements in technology.  Products would be in evaluation for years, coming out of the process with their enabling technology nearly obsolete.  It didn’t matter; it was the only game in town until NIST and the Common Criteria labs came onto the scene.  This has worked well, but the reality is that it’s not much better at vetting and moving technology from vendors to users.  The evaluation process takes time, and time means money, but it also means that the code submitted for evaluation will most likely be several revisions old by the time it emerges.  Granted, it may only be 6 months, or it might take a year; regardless, this is far better than before.

So, practically speaking: if the base version of FooOS submitted for evaluation is, say, Version 5.0.1, several revisions, each solving operational problems affecting the organization, may have been released since then.  We may find that we need to run Version 5.6.10r3 in order to pass encrypted traffic via the network.  Because we encrypt traffic, we must use code validated at FIPS 140-2 Level 2, but in the example above, the validated version of FooOS will not work in our network.  What does the CISO do?  We’ll return to this in a moment; it gets better!

In order to reach levels of FIPS-140 goodness, one vendor in particular has instituted “FIPS Mode.”  What this does is require administration of the box from a position directly in front of the equipment, or at the length of your longest console cable.  Clearly, this is not suitable for organizations with equipment deployed worldwide to locations that do not have qualified administrators or network engineers.  Further, having to fly a technician to Burundi to clear sessions on a box every time it becomes catatonic is, at best, not in accordance with the network concept of operations and, at worst, simply ridiculous.  How does the CISO propose a workable, secure solution?


Standard Hill photo by timparkinson.

Now to my point (about time, Vlad).  How does the CISO approach this situation?  Allow me to tell you the approach I’ve taken…

1. Accept the fact that once FooOS has achieved its level of FIPS-140 goodness, the modules of code within the OS that implement cryptographic functionality have most likely not been changed in follow-on versions.  This also means you have to assume the vendor has done a good job of documenting the changes to their baseline in their release notes, and that they HAVE modular code…
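If you want to make that assumption easier to check, here’s a minimal sketch, in Python, of triaging a pile of vendor release notes for crypto-relevant changes.  The directory layout and keyword list are hypothetical, and this is a first-pass filter, not a substitute for actually reading the notes or the vendor’s FIPS security policy.

```python
import re
from pathlib import Path

# Keywords that suggest a change touched the cryptographic module.
# Purely illustrative -- tune to your vendor's release-note vocabulary.
CRYPTO_KEYWORDS = re.compile(
    r"\b(crypto|cipher|encrypt|decrypt|TLS|SSL|IPsec|SSH|random|RNG|key)\b",
    re.IGNORECASE,
)

def flag_crypto_changes(notes_dir: str) -> dict[str, list[str]]:
    """Return {release-note file: [lines mentioning crypto]} for human review."""
    hits: dict[str, list[str]] = {}
    for notes in sorted(Path(notes_dir).glob("*.txt")):
        matches = [
            line.strip()
            for line in notes.read_text(errors="replace").splitlines()
            if CRYPTO_KEYWORDS.search(line)
        ]
        if matches:
            hits[notes.name] = matches
    return hits

if __name__ == "__main__":
    # "release_notes" is a placeholder directory of per-version note files.
    for fname, lines in flag_crypto_changes("release_notes").items():
        print(f"== {fname} ==")
        for line in lines:
            print("  ", line)
```

If nothing between the validated version and the one you need to run trips the filter, you at least have a documented basis for the “crypto module unchanged” assumption.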

2. Delve into the vendor documentation and FIPS-140 itself to find out exactly what “FIPS Mode” is, its benefits, and the requirements it satisfies.  Much of the written documentation in the standard deals with physical security of the cryptographic module itself (e.g., tamper-evident seals), but most helpful is Table 1.

Summary of Security Requirements from FIPS 140-2 (Table 1 of the standard, reconstructed; higher levels build on the lower ones):

  • Cryptographic Module Specification (all levels): specification of the cryptographic module, cryptographic boundary, Approved algorithms, and Approved modes of operation; description of the module, including all hardware, software, and firmware components; statement of the module security policy.
  • Cryptographic Module Ports and Interfaces: Levels 1–2 require specification of all required and optional interfaces and of all input and output data paths; Levels 3–4 add data ports for unprotected critical security parameters logically or physically separated from other data ports.
  • Roles, Services, and Authentication: Level 1 requires logical separation of required and optional roles and services; Level 2, role-based or identity-based operator authentication; Levels 3–4, identity-based operator authentication.
  • Finite State Model (all levels): specification of the finite state model, required and optional states, state transition diagram, and specification of state transitions.
  • Physical Security: Level 1, production-grade equipment; Level 2, locks or tamper evidence; Level 3, tamper detection and response for covers and doors; Level 4, tamper detection and response envelope, EFP or EFT.
  • Operational Environment: Level 1, single operator, executable code, Approved integrity technique; Level 2, referenced PPs evaluated at EAL2 with specified discretionary access control mechanisms and auditing; Level 3, referenced PPs plus trusted path, evaluated at EAL3, plus security policy modeling; Level 4, referenced PPs plus trusted path, evaluated at EAL4.
  • Cryptographic Key Management (all levels): key management mechanisms for random number and key generation, key establishment, key distribution, key entry/output, key storage, and key zeroization.  At Levels 1–2, secret and private keys established using manual methods may be entered or output in plaintext form; at Levels 3–4, they shall be entered or output encrypted or with split-knowledge procedures.
  • EMI/EMC: Levels 1–2, 47 CFR FCC Part 15, Subpart B, Class A (business use), plus applicable FCC requirements (for radio); Levels 3–4, 47 CFR FCC Part 15, Subpart B, Class B (home use).
  • Self-Tests (all levels): power-up tests (cryptographic algorithm tests, software/firmware integrity tests, critical functions tests) and conditional tests.
  • Design Assurance: Level 1, configuration management (CM), secure installation and generation, design and policy correspondence, guidance documents; Level 2, a CM system, secure distribution, and a functional specification; Level 3, high-level language implementation; Level 4, formal model, detailed explanations (informal proofs), preconditions and postconditions.
  • Mitigation of Other Attacks (all levels): specification of mitigation of attacks for which no testable requirements are currently available.

Bottom line: some “features” are indeed useful, but this particular vendor’s one-size-fits-all implementation tends to preclude using the feature at all in some operational scenarios (most notably, the one your humble author is dealing with).  BTW, changing vendors is not an option.

3. Upon analyzing the FIPS requirements against operational needs and (importantly) the environment the equipment is operating in, one has to draw the line between operating in the vendor’s “FIPS Mode” and using FIPS 140-2 validated encryption.

4. Document the decision and the rationale.
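To make step 4 concrete, here’s a minimal sketch of a machine-readable risk-acceptance record; the field names and the sample values are hypothetical, not drawn from any standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class FipsDeviationRecord:
    """One documented decision to run outside vendor 'FIPS Mode'."""
    system: str
    validated_version: str   # version on the FIPS 140-2 certificate
    deployed_version: str    # version actually needed in production
    fips_mode_enabled: bool
    rationale: str
    compensating_controls: list[str] = field(default_factory=list)
    approved_by: str = ""
    review_date: str = ""

# Illustrative entry only -- the versions echo the FooOS example above.
record = FipsDeviationRecord(
    system="WAN router fleet",
    validated_version="5.0.1",
    deployed_version="5.6.10r3",
    fips_mode_enabled=False,
    rationale=("Vendor FIPS Mode requires console-only administration, "
               "which is unworkable for remote sites; cryptographic "
               "module unchanged per vendor release notes."),
    compensating_controls=["out-of-band management VPN", "config audit"],
    approved_by="CISO",
    review_date=str(date.today()),
)

print(json.dumps(asdict(record), indent=2))
```

Whatever the format, the point is that the decision, the rationale, and the compensating controls survive staff turnover and the next audit.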

Once again, security professionals have to help managers strike a healthy balance between “enough” security and operational requirements.  You would think that using approved equipment, operating systems, and vendors vetted through the CC evaluation process would be enough.  Reading the standard, we see the official acknowledgement that Your Mileage May Indeed Vary™:

“While the security requirements specified in this standard are intended to maintain the security provided by a cryptographic module, conformance to this standard is not sufficient to ensure that a particular module is secure. The operator of a cryptographic module is responsible for ensuring that the security provided by a module is sufficient and acceptable to the owner of the information that is being protected and that any residual risk is acknowledged and accepted.”  (FIPS 140-2, Sec. 15, Qualifications)

The next paragraph constitutes validation of the approach I’ve embraced:

“Similarly, the use of a validated cryptographic module in a computer or telecommunications system does not guarantee the security of the overall system. The responsible authority in each agency shall ensure that the security of the system is sufficient and acceptable.”  (Emphasis added.)

One could say, “it depends,” but you wouldn’t think so at first glance – it’s a Standard for Pete’s sake!

Then again, nobody said this job would be easy!

Vlad




Wanted: Some SCAP Wranglers

Posted May 18th, 2009

So I was doing my usual “Beltway Bandit Perusal of Opportunities for Filthy Lucre”, also known as diving into FedBizOps, and I found this gem.  Basically what this means is that sometime this summer, NIST is going to put out an RFP for contractors to further develop SCAP using ARRA funds.

Keep in mind that this isn’t the official list of what NIST wants done under this contract, but it’s interesting as a hint of where SCAP will go over the next couple of years:

  1. Evolution of the SCAP protocol and specifications thereof
  2. Feasibility studies, development, documenting, prototyping, and road-mapping of SCAP expansions (e.g., remediation capability) and analogous protocols (e.g., Network Event Content Automation Protocol)
  3. Implementation and maintenance support for the Security Automation Content Validation Program
  4. Maintenance support for the SCAP Product Validation Program
  5. Pilot, beta, and production support for SCAP and security automation use-cases
  6. Content development, modification, and testing (see the sketch after this list)
  7. Infrastructure and reference implementation development in the Java, C++, and C programming languages
  8. Data trust models and data provenance solutions.
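
Since content development and testing (item 6 above) is where most would-be SCAP wranglers will start, here’s a minimal sketch of listing the rules in an XCCDF benchmark using only the Python standard library.  The file name is a placeholder, and the namespace shown is the XCCDF 1.1 one; adjust both for whatever content you actually download.

```python
import xml.etree.ElementTree as ET

# XCCDF 1.1 namespace -- adjust if your benchmark uses another revision.
NS = {"xccdf": "http://checklists.nist.gov/xccdf/1.1"}

def list_rules(benchmark_path: str) -> list[tuple[str, str]]:
    """Return (rule id, title) pairs from an XCCDF benchmark file."""
    tree = ET.parse(benchmark_path)
    rules = []
    for rule in tree.getroot().iter(f"{{{NS['xccdf']}}}Rule"):
        title = rule.find("xccdf:title", NS)
        rules.append((rule.get("id", ""),
                      title.text if title is not None else ""))
    return rules

if __name__ == "__main__":
    # "benchmark.xml" is a placeholder for the SCAP content you downloaded.
    for rule_id, title in list_rules("benchmark.xml"):
        print(rule_id, "-", title)
```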

So how do you play?  Well, the first thing is that you respond to the notice with a capabilities statement saying “yes, we have experience in doing what you want”–there is a list of specifics in the original notice.  Then sign up for FedBizOps and follow the announcement so you can get changes and the RFP when it comes out.




The World Asks: Is S.773 Censorship?

Posted May 15th, 2009

Here in the information assurance salt mines, we sure do loves us some conspiracies, so here’s the conspiracy of the month: S.773 gives the Government the ability to view your private data and gives the President disconnect authority over the Internet, which means he can censor it.

Let’s look at the sections and paragraphs that would seem to say this:

Section 14:

(b) FUNCTIONS- The Secretary of Commerce–

(1) shall have access to all relevant data concerning such networks without regard to any provision of law, regulation, rule, or policy restricting such access;

Section 18: The President–

(2) may declare a cybersecurity emergency and order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network;

(6) may order the disconnection of any Federal Government or United States critical infrastructure information systems or networks in the interest of national security;

Taken completely by itself, it would seem like this gives the President the authority to do all sorts of wrong stuff: all he has to do is declare something critical infrastructure and declare it compromised, or invoke the interests of national security.  And some people have:

And some movies (we all love movies):

Actually, Shelly is pretty astute and makes some good points; she just doesn’t have the background in information security.

It makes me wonder when people started considering social networking sites, or the Internet as a whole, to be “critical infrastructure”. Then the BSOFH in me thinks, “Ye gods, when did our society sink so low?”

Now, as far as going back to Section 14 of S.773: it exists because most of the critical infrastructure is privately held.  There is a bit of history to understand here, which is that critical infrastructure owners and operators are very reluctant to give information on their piece of critical infrastructure to the Government.  Don’t blame them; I had the same problem as a contractor: if you give the Government information, the next step is them telling you how to change it and how to run your business.  Since the owners/operators are somewhat non-helpful, the Government needs more teeth to get what it needs.

But as far as private data traversing the critical infrastructure?  I think it’s a stretch to say that’s part of the requirements of Section 14: it’s to collect data “about” (the language of the bill) the critical infrastructure, not data processed, stored, or forwarded on the critical infrastructure. But yeah, let’s scope this a little bit better, CapHill staffers.

On to Section 18.  Critical infrastructure is defined elsewhere in law.  Let’s see the definitions section from HSPD-7, Critical Infrastructure Identification, Prioritization, and Protection:

In this directive:

The term “critical infrastructure” has the meaning given to that term in section 1016(e) of the USA PATRIOT Act of 2001 (42 U.S.C. 5195c(e)).

The term “key resources” has the meaning given that term in section 2(9) of the Homeland Security Act of 2002 (6 U.S.C. 101(9)).

The term “the Department” means the Department of Homeland Security.

The term “Federal departments and agencies” means those executive departments enumerated in 5 U.S.C. 101, and the Department of Homeland Security; independent establishments as defined by 5 U.S.C. 104(1); Government corporations as defined by 5 U.S.C. 103(1); and the United States Postal Service.

The terms “State,” and “local government,” when used in a geographical sense, have the same meanings given to those terms in section 2 of the Homeland Security Act of 2002 (6 U.S.C. 101).

The term “the Secretary” means the Secretary of Homeland Security.

The term “Sector-Specific Agency” means a Federal department or agency responsible for infrastructure protection activities in a designated critical infrastructure sector or key resources category. Sector-Specific Agencies will conduct their activities under this directive in accordance with guidance provided by the Secretary.

The terms “protect” and “secure” mean reducing the vulnerability of critical infrastructure or key resources in order to deter, mitigate, or neutralize terrorist attacks.

And referencing the Patriot Act gives us the following definition for critical infrastructure:

In this section, the term “critical infrastructure” means systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.

Since it’s not readily evident from this definition what we really consider to be critical infrastructure, let’s look at the implementation of HSPD-7.  They’ve defined critical infrastructure sectors and key resources, each of which has a sector-specific plan on how to protect it.

  • Agriculture and Food
  • Banking and Finance
  • Chemical
  • Commercial Facilities
  • Communications
  • Critical Manufacturing
  • Dams
  • Defense Industrial Base
  • Emergency Services
  • Energy
  • Government Facilities
  • Healthcare and Public Health
  • Information Technology
  • National Monuments and Icons
  • Nuclear Reactors, Materials and Waste
  • Postal and Shipping
  • Transportation System
  • Water

And oh yeah, S.773 doesn’t mention key resources, only critical infrastructure.  Some of this critical infrastructure isn’t even networked (*cough* icons and national monuments *cough*). Also note that “Teh Interblagosphere” isn’t listed, although you could make a case that the Information Technology and Communications sectors might include it.

Yes, this is not immediately obvious, you have to stitch about half a dozen laws together, but if we didn’t do pointers to other laws, we would have the legislative version of spaghetti code.

Going back to Section 18 of S.773, what paragraph 2 does is give the President the authority to disconnect critical infrastructure or government-owned IT systems from the Internet if they have been compromised.  That’s fairly scoped, I think.  I know I’ll get some non-technical readers on this blog post, but basically one of the first steps in incident response is to disconnect the system, fix it, then restore service.

Paragraph 6 is the part that scares me, mostly because it has the same disconnect authority as paragraph 2 and the same scope (Federal Government and critical infrastructure systems), but the only justification is “in the interest of national security”. In other words: we don’t have to tell you why we disconnected your systems from the Internet, because you don’t have the clearances to understand.

So how do we fix this bill?

Section 14 needs an enumeration of the types of data that we can request from critical infrastructure owners and operators. Something like the following:

  • Architecture and topology
  • Vulnerability scan results
  • Asset inventories
  • Audit results

The bill has a definitions section–Section 23.  We need to adopt the verbiage from HSPD-7 and include it in Section 23.  That takes care of some of the scoping issues.

We need a definition for “compromise” and we need a definition for “national security”. Odds are these will be references to other laws.

Add a recourse for critical infrastructure owners who have been disconnected: At the very minimum, give them the conditions under which they can be reconnected and some method of appeal.




Sir Bruce Mentions FDCC, World Goes Nuts

Posted May 7th, 2009

Check out this blog post.  Wow, all sorts of crazies descend out of the woodwork when Bruce talks about something that’s been around for years, and suddenly everyone’s redesigning the desktop from the ground up.

Quick recap on comments:

  • 60-day password changes suck
  • You can do this at home, the GPOs are available from NIST
  • My blue-haired sheepdog can’t use the FDCC image, it’s broken for commercial use!
  • You wouldn’t have to do this in Linux
  • Linux is teh suxx0rz
  • My computer started beeping and smoke came out of it, is this FDCC?

Proving once again that you can’t talk about Windows desktop security without it evolving into a flamewar.  Might as well pull out “vi vs. emacs” while you’re at it, Bruce.  =)

Computer Setup photo by karindalziel.  Yes, one of them is a Linux box; I used this picture for that very same reason.  =)

But there is one point that people need to understand.  The magic of FDCC is not in the fact that the Government used its IT-buying muscle to get Microsoft to cooperate.  Oh no, that’s to be expected–the guys at MS are used to working with a lot of people now on requests.

The true magic of FDCC is getting the application vendors to play along.  To wit:

  • The FDCC GPOs are freely available from NIST
  • You can download images from NIST with a preconfigured FDCC setup
  • Application vendors can test their product against FDCC in their own lab
  • There is no external audit burden (yet, it might be coming) for software vendors because it’s a self-certification
  • FDCC-compatible software doesn’t require administrative privileges

In other words, if your software works with FDCC, it’s probably built to run on a security-correct operating system in the first place.  This is a good thing, and in this case the Government is using its IT budget to bring the application vendors up to some sort of minimal security for the rest of the world.
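
If you want to spot-check a box yourself before pointing an SCAP tool at it, here’s a minimal sketch that reads a security-relevant registry value on Windows.  The check shown (UAC via EnableLUA) is illustrative only; the authoritative FDCC settings are the NIST GPOs and SCAP content, not this list.

```python
import winreg  # Windows-only standard-library module

# Illustrative spot checks, NOT the official FDCC baseline --
# the real checklist lives in the NIST GPOs and SCAP content.
CHECKS = [
    # (hive, key path, value name, expected data)
    (winreg.HKEY_LOCAL_MACHINE,
     r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System",
     "EnableLUA", 1),  # UAC enabled
]

def audit() -> None:
    """Print OK/MISMATCH/MISSING for each configured registry check."""
    for hive, path, name, expected in CHECKS:
        try:
            with winreg.OpenKey(hive, path) as key:
                data, _ = winreg.QueryValueEx(key, name)
            status = "OK" if data == expected else f"MISMATCH (got {data})"
        except FileNotFoundError:
            status = "MISSING"
        print(f"{path}\\{name}: {status}")

if __name__ == "__main__":
    audit()
```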

This statement is from the FDCC FAQ; comments in parentheses are mine:

“How are vendors required to prove FDCC compliance?
There is no formal compliance process; vendors of information technology products must self-assert FDCC compliance. They are expected to ensure that their products function correctly with computers configured with the FDCC settings. The product installation process must make no changes to the FDCC settings. Applications must work with users who do not have administrative privileges, the only acceptable exception being information technology management tools. Vendors must test their products on systems configured with the FDCC settings, they must use SCAP validated tools with FDCC Scanner capability to certify their products operate correctly with FDCC configurations and do not alter FDCC settings. The OMB provided suggested language in this memo: http://www.whitehouse.gov/omb/memoranda/fy2007/m07-18.pdf, vendors are likely to encounter similar language when negotiating with agencies.”

So really what you get out of self-certification is something like this:



