The Guerilla’s Guide to Piggybacking

Posted July 18th, 2007 by

As much effort as we put into badge readers, smart cards, and access control systems, it’s a dirty little secret that they are easy to overcome if you know what you are doing, and the only way to keep people from cheating is to put a “meatgrinder” in their way.

Techniques for getting past card reader systems:

  • The Big Box: Hold a box that’s big enough and bulky enough that you need two hands to hold it. Ask a cleared employee to hold the door open for you.
  • The Mad Dash: Wait just out of sight of the door. When a cleared person goes inside, make a “mad dash” and grab the door right before it closes. With practice you don’t even have to run to get the door; you just use your sense of timing.
  • The New Employee: “Hi, I’m new here and they told me it would be a week until I got my badge. Can you let me in?”
  • The Clipboard: Hold a clipboard and act like an auditor who is dismayed that they couldn’t get into the area that they need to inspect.
  • The Visitor: Ask somebody to sign you in so you can legitimately get access to the area. After that, it’s a simple deal to shed your escort.

What all of these have in common is that you’re preying on people’s sense of either being a team player or extending common hospitality. You can teach people not to let anybody else in, but our brains just won’t let us slam the door in somebody else’s face.

Come to think of it, it’s suspiciously like trying to teach your kids not to talk to strangers.




Posted in Hack the Planet, What Doesn't Work, What Works | 3 Comments »

Response to “What is Information Assurance – The Video”

Posted June 18th, 2007 by

A movie from George Mason professor Paul Strassman on Information Assurance, digitized and shared forever thanks to Google Movies.

My response to the movie:

This presentation has many problems.

FISMA is not that large of a law.  You can get the text (16 pages long) from the NIST website at the following URL:
http://csrc.nist.gov/policies/FISMA-final.pdf

FISMA does not require SP 800-53.  It charges NIST with creating standards for information security.  FIPS 200 dictates that an agency use 800-53 as their baseline security controls.  Once again, we’re confusing the law with the implementation.

The security plan is a vehicle to get people to agree on what the security controls should be, not after-the-fact documentation of whatever controls happen to exist.

DIACAP is not the first time that systems have had to be certified.  Prior to this, there were DITSCAP, NIACAP, FIPS 102, and SP 800-37.  I also wonder how we got from SP 800-53 to DIACAP, since they are different flavors–civilian agencies v/s DoD.

In certification, you do not certify compliance.  You certify that the controls meet the needs of the business owners.  Those needs might be considerably more relaxed than you think.  For example:  completely air-gapped systems in a SCIF don’t need a sizeable chunk of the controls in 800-53.  Compliance is costly because you don’t have the ability to not do something that doesn’t make sense.

The “Hamster Wheel of Pain” that is shown as the DIACAP process is detached from other SDLC activities, which is rapidly becoming one of my pet peeves.  If you do DIACAP divorced from the SDLC, you are creating liarware.

“The DIACAP activities, the certification of the system, is a very involved, complicated, time-consuming, laborious process that nobody has as yet completed.”  It’s so wrong I can’t even begin to explain.

The DAA is not responsible for DIACAP.  The DAA is only a key decision maker.

The DAA does not sign a statement saying that you are secure, they sign a statement saying that the level of risk to the system and to the mission is of an acceptable level.

The CIO does not usually go to the DAA.  The CIO is more likely to be *the* DAA than just about anybody else.

The second half of the movie is general security information, not really IA-specific.

If this is what they teach in the universities around the beltway, no wonder we have an IA constituency who don’t “get it”.




Posted in DISA, FISMA, NIST, What Doesn't Work | 3 Comments »

Rebuilding C&A

Posted June 13th, 2007 by

After commenting on Mike Rothman’s Security Incite and Alex Hutton’s riskanalysis.is, I’m about ready to explain how C&A works and doesn’t work.

Let’s roleplay, shall we?

You’re the US government. You have an IT budget of $65 ba-ba-ba-ba-billion (stuttering added for additional effect) every year (2007 budget). If you wanted to, you might be able to make an offer to buy Microsoft based on one year’s worth of budget.

So how do you manage security risks associated with such a huge amount of cash? Same way you would manage those IT systems in the non-security world:

  • Break it all down into bite-sized pieces
  • Have some sort of methodology to manage the pieces effectively
  • Delegate responsibility for each piece to somebody
  • Use metrics to track where you are going
  • Focus on risks to the business and the financial investment
  • Provide oversight on all of the pieces that you delegated
  • Evaluate each piece to see how well it is doing

Hmm, sounds exactly like what the government has done so far. It’s exactly like an agency’s investment (system) inventory/portfolio, OMB budget process, and the GAO metrics.

Now how would you manage each bite-sized piece? This is roughly the way a systems engineer would do it:

  • Define needs
  • Define requirements
  • Build a tentative design
  • Design review
  • Build the system
  • Test that the requirements are met
  • Flip the switch to take it live
  • Support anything that breaks

Hmm, that’s suspiciously like a system development life-cycle, isn’t it? There’s a reason we use project management and SDLC–in order to get from here to there, you need to have a plan or methodology to follow, and SDLC makes sense.

So then let’s do the same exercise and add in the security pieces of the puzzle.

  • Define needs: Determine how much the system and the information is worth–categorization (FIPS-199 and NIST SP 800-60)
  • Define requirements (FIPS-200 and NIST SP 800-53, along with a ton of tailoring)
  • Build a tentative design (first security plan draft)
  • Design review (security plan approval)
  • Build the system
  • Test that the needs and requirements are met (security test and evaluation)
  • Flip the switch to take it live (accreditation decision)
  • Support anything that breaks (continuous monitoring)

Guess what? That’s C&A in a nutshell. All this other junk is just that–junk. If you’re not managing security risk throughout the SDLC, what are you doing except posturing for the other security people to see and arguing about trivia?

This picture (blatantly stolen from NIST SP 800-64, Security Considerations in the Information System Development Life Cycle) shows you how the core components of C&A fit in with the rest of the SDLC:

Security in the SDLC

My theory is that the majority of systems have already been built and are in the O&M phase of their SDLC. What that means is that we are trying to do C&A for these systems too late to really change anything. It also means that for the most part we will be trying to do C&A on systems that have already been built, so, just like how people confused war communism with pure communism, we confuse the emergency state of post-facto C&A with the pure state of C&A.

Now let’s look at where C&A typically falls apart:

Keys to success at this game follow roughly along what ISM-Community has proposed as an ISM Top 10. Those ISM guys, they’re pretty smart. =)




Posted in FISMA, ISM-Community, NIST, Risk Management, What Doesn't Work, What Works | 2 Comments »

Some Random Thoughts on C&A SOPs

Posted June 7th, 2007 by

I had a friend forward me today a C&A SOP from a small government agency. Other than taking NIST guidance and repackaging it in some weird morphed way that didn’t make any sense (they added a weird pre-certification phase), they missed the obvious piece: C&A is just a way to get security requirements and risk assessment into the SDLC. About 80% of the people playing the C&A game for the government think that the process goes something like this:

  • Build the system
  • Write a security plan
  • Notify CISO that document is ready to be tested
  • Auditor audits the document and writes a “you been bad” report that nitpicks about the grammar being in passive voice or about not using the “approved” template
  • System is given a certification statement
  • Somebody signs off on the accreditation
  • We forget about it all until it’s time to update the security plan

OK, if you do it this way, then maybe you do need a SOP.

Then again, maybe you need a new job.

I get “wigged out” when I see SOPs for C&A. A big part of why the government is failing at security and C&A is that they have divorced the 2 activities from the rest of how they do business. You shouldn’t need a SOP for C&A any more than you would need a SOP for breathing–you should have a SDLC SOP or an engineering SOP of which security is a small but important piece.

Mike’s version of how to do C&A:

  • We realize we have a need for a system
  • We categorize the data and come up with a SWAG on how much it’s worth to protect
  • We haggle over what security controls we should build based on our SWAG
  • We start writing a security plan that lists the controls we agreed on
  • We build the thing
  • We do user acceptance testing and security testing concurrently or in series
  • We fix problems and do regression testing
  • We certify that we have implemented the controls we determined we needed
  • Somebody gives the security team a vote of confidence in the form of accreditation



Posted in FISMA, NIST, What Doesn't Work | 3 Comments »

Puzzles v/s Mysteries

Posted May 31st, 2007 by

There’s a nice article at the Smithsonian about the difference between puzzles and mysteries. I received this via the security metrics email list.

Risks and Riddles

This reminds me of intelligence work, for obvious reasons.

There are 2 major types of offensive actions an army can conduct: deliberate attack and movement to contact. (Yes, those of you pedantic enough will bring up hasty attacks and a dozen other scenarios, I’m being a generalist here =) )

In a deliberate attack, you know roughly what the Bad Guys are doing–they are defending key terrain. The task for the intelligence people is to find specific Bad Guy battle positions down to the platoon level. This is a puzzle with a fairly established framework: you are interested in the details.

In a movement to contact, you have a very hazy idea that there are Bad Guys out there. You move with an eye towards retaining flexibility so that you can develop the situation based on what you learn during the mission. The task for the intelligence people is to determine the overall trend on what the Bad Guys are doing. This is a mystery, and you’re more concerned with finding out the overall direction than you are with the specifics–they’ll get lost due to “friction” anyway.

Now translated to information security: some of what we know about the Bad Guys is static and therefore more of a puzzle–think about threats that have mature technologies like firewalls, anti-virus, etc. to counter them. Solutions to these threats are all about products.

On the other hand, we have the mysteries: 0-day attacks, covert channels, and the ever-popular insider threat. Just like a well-established military has problems understanding the mystery that is movement to contact, information security practitioners have problems responding to threats that have not been well-defined.

So information security, viewed in the light of puzzle v/s mystery, becomes the following scenario: how much time, effort, and money do we spend on the puzzles versus how much do we spend on mysteries? The risk geek in me wants to sit down and determine probabilities, rates of occurrence, etc. in order to make the all-important cost-benefit-risk comparison. But for mysteries I can’t, by definition of what a mystery is, do that, and our model goes back to peddling voodoo to the business consumers.
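For the puzzle side of the house, the cost-benefit comparison the risk geek wants is the classic annualized loss expectancy calculation. A minimal sketch, with every number invented purely for illustration:

```python
# Sketch of the cost-benefit comparison for a "puzzle" threat:
# ALE = SLE * ARO (annualized loss expectancy = single loss
# expectancy times annualized rate of occurrence).
# All dollar figures and rates below are made up.

def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    """Annualized loss expectancy for one threat."""
    return single_loss_expectancy * annual_rate

# Is a $15,000/year control worth buying against a threat that
# costs $50,000 per incident and hits about twice a year?
baseline = ale(50_000, 2.0)    # expected annual loss, no control
residual = ale(50_000, 0.25)   # expected annual loss with the control
control_cost = 15_000

worth_it = (baseline - residual) > control_cost
```

This works precisely because a puzzle threat has enough history to estimate a rate of occurrence; for a mystery (0-day, insider), the ARO input is a guess, which is the whole problem.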




Posted in Army, Rants, Risk Management, What Doesn't Work, What Works | 1 Comment »

Enterprise !== Managed Service Provider

Posted May 16th, 2007 by

Message to vendors:  If you want to break into the Managed Service Provider market, there is one thing extra that you need to do.

Enterprise-class products are reasonably good at supporting a 3-tier model.  That way you can abstract everything out into whatever architectural model you want.  Need more database oomph?  Add some more power at the database tier.  Need to support a remote site?  Put a data collector out there on the management LAN and just send events back to the central collectors.  This stuff is great.

But when it comes down to MSPs, there is one thing we need above and beyond what enterprise-class products provide: the ability to flag data as belonging to a certain customer.  That way, once events have trickled up to the Single Pane of Glass (TM) that the NOC operators use, we can still tell which environment the event came from.  That requires tagging, plus the simple ability to have multiple devices on one IP address when clients have address collisions (everybody using 10.0.0.0 comes to mind).
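A minimal sketch of what that tagging looks like: every event carries a customer tag from the moment of collection, so two clients can both report from 10.0.0.5 without colliding at the top tier. The class and field names here are hypothetical, not any particular product’s API:

```python
from dataclasses import dataclass

# Hypothetical event record for an MSP collector. The customer tag
# is part of the event's identity, so identical RFC 1918 addresses
# from different clients never collide at the Single Pane of Glass.
@dataclass(frozen=True)
class TaggedEvent:
    customer: str    # which client environment this came from
    device_ip: str   # may overlap across customers (10.0.0.0 etc.)
    message: str

def device_key(event: TaggedEvent) -> tuple:
    """Unique device identity is (customer, ip), never ip alone."""
    return (event.customer, event.device_ip)

a = TaggedEvent("acme", "10.0.0.5", "link down")
b = TaggedEvent("globex", "10.0.0.5", "link down")
# Same IP, different customers: the NOC can still tell them apart.
distinct = device_key(a) != device_key(b)
```

The design choice is simply that the composite key (customer, address) replaces the bare address everywhere a device is identified, which is the one thing single-tenant enterprise products never bother to do.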




Posted in Outsourcing, Technical, What Doesn't Work, What Works | 2 Comments »


