Some Thoughts on “Malicious Compliance”

Posted February 22nd, 2007 by

Malicious compliance (thanks for the name, Steve) is the act of generating literally waist-high stacks of documentation in order to satisfy the auditors, delivered with a snide comment like “You wanted documentation, you got it.”  While it’s not exclusive to the government, I think we have a big, broad corner on the market when it comes to malicious compliance.  The BOFH in me likes malicious compliance, but it’s really no way to manage information security.

There are many problems with this approach, but I think the most important thing is that usually the documents created during malicious compliance are not grounded in reality, nor do they relate to “adequate security” as we define it.  Really all we’re doing is wasting time and money in order to “game” the way information assurance works.  Hmmm, I sound like one of the FISMA critics today.

So why does malicious compliance occur?  Well, I have seen it on several projects, but usually it comes down to two factors:  there is a gap in understanding of information assurance, and/or a personality conflict exists between auditors/evaluators and the project team.

No surprise, there is a shortage of skilled information assurance staff.  A significant number of people who come from non-security backgrounds (technical writing, journalism, etc.) are put into certification and accreditation roles.  They just don’t know how security controls work, nor do I expect them to.  Don’t get me wrong, there is a place for these people, but it involves doing final editing and document control, not developing content for security plans.

I used to have a boss who would tear up all of my writing, saying things like “This paragraph is written in passive voice,” or “You need to go through this line-by-line to make sure that all the periods have two spaces after them.”  The crux of the matter is that I’m not big on style and formatting; I focus more on the content.

When it comes to auditors, in the words of a friend, “They know what they know, and what they know are checklists.”  At some point in the public sector, you’ll find an auditor who can’t see past their checklist.  They want to do well; they want to protect the data that has been entrusted to the government.  But it’s a hard position to be in, because you come in and, in a matter of days, have to evaluate what the system owner has had months (sometimes years) to look at.

Notice a common theme here?  It’s all about personnel management in information security.  You can’t get the job done without skilled people.  Most of the time, what I’ve noticed is that instead of building real security, we’re stuck dealing with personnel management.




Posted in FISMA, Rants, What Doesn't Work | No Comments »

Do You “Do It” or Do You “Get It”?

Posted February 21st, 2007 by

In the circles I frequent, we have a saying that “Either you do it or you get it”.

The people who do it are fairly smart.  They have a stack of regulations that they are well-versed in.  They talk about matching 800-53 controls to implementation details.  They worry about SSP content.  They’re fairly competent.  They can accomplish most of the information assurance tasks out there.

But these people are only 75% of the solution.  We need more of the second type of people if we are going to succeed as a government with this security game.

There is a small subset of security people who get it.  You know who these people are within 3 minutes of talking to them.  They understand what the “rules” are, but they also understand where you have to break the rules because the rules contradict each other (have cost-effective security but implement this entire catalog of controls).

The difference between these two groups of people is that the people who get it understand one additional thing.  They know risk management.  They practice risk management on a minute-by-minute basis.  They are able to make cost/benefit/risk comparisons, which is something that you can’t really learn out of a book.
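The cost/benefit/risk comparison the “get it” crowd makes instinctively can be sketched with the classic Annualized Loss Expectancy model (ALE = SLE × ARO).  The dollar figures and occurrence rates below are hypothetical, purely for illustration:

```python
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualized Loss Expectancy = SLE * ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

def worth_implementing(ale_before, ale_after, annual_control_cost):
    """A control pays for itself when the risk it removes exceeds its cost."""
    return (ale_before - ale_after) > annual_control_cost

# Hypothetical numbers: a $200k incident expected once every 4 years,
# reduced to once every 20 years by a control costing $15k per year.
before = ale(200_000, 1 / 4)    # $50,000/year of expected loss
after = ale(200_000, 1 / 20)    # $10,000/year of expected loss
print(worth_implementing(before, after, 15_000))  # True
```

The arithmetic is trivial; the hard part, which no book teaches, is estimating the inputs honestly.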

Doctors have the Hippocratic Oath: “First, do no harm.”  Why don’t security practitioners have the Smith Oath: “Above all, do risk management”?




Posted in FISMA, NIST, Rants, What Doesn't Work, What Works | 2 Comments »

Self-Quote Time

Posted February 21st, 2007 by

“True confidentiality controls are when you have thermite grenades taped to the top of the servers.” –Michael Smith




Posted in Army, Odds-n-Sods, Risk Management, What Works | No Comments »

It’s All About Common Controls!

Posted February 21st, 2007 by

Inspired by Richard Bejtlich.

I both love and hate FISMA and C&A.  FISMA itself is pretty good:  Do security planning and tie security into the budget.  That’s great.

But why are we spending all this effort as a jobs program for security people who have no skills?  I want to see C&A people out of business.  As an infantry squad leader, I didn’t outsource planning my missions to contractors who aren’t going with me to be shot at, so why should the government outsource security planning to somebody who has never even seen the system?

I want to see the government figure out a way to do things cheaper, faster, stronger, so that they can spend money and effort on things that matter more than documentation.  C&A was supposed to do that, but it’s now impossible to do right because you have too many people with oversight of security planning who want to argue semantics.

This is my secret to C&A nirvana:  document and test the shared controls once (read: managerial and operational controls), then get on with your life.  Out of the plethora of controls in 800-53, why not create one common controls package (at the risk of sounding like a complete and utter wonk, this is the purpose of having a General Support System), and then for each system say, “same as the common controls package; this is how we built this system”?  That should limit the repetition of effort to the minimum.
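The “document the shared controls once” idea is really just inheritance.  A rough sketch, where each system’s plan inherits the common controls package and records only what is unique to it (the control IDs follow 800-53 naming, but the systems and implementation notes are made up):

```python
# Controls documented once, agency-wide, in the common controls package.
COMMON_CONTROLS = {
    "PL-1": "Security planning policy (agency-wide)",
    "AT-2": "Security awareness training (agency-wide)",
    "IR-1": "Incident response policy (agency-wide)",
}

class SystemPlan:
    """An SSP that inherits the common package instead of rewriting it."""

    def __init__(self, name, system_specific):
        self.name = name
        self.system_specific = system_specific  # controls unique to this system

    def effective_controls(self):
        # "Same as the common controls package, plus how we built this system."
        return {**COMMON_CONTROLS, **self.system_specific}

payroll = SystemPlan("Payroll", {"AC-3": "RBAC enforced in the app tier"})
print(len(payroll.effective_controls()))  # 4: three inherited, one local
```

Each additional system costs only its delta from the common package, which is the whole point.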

There is no reason that I should have to test my security policy for each system that I own.  That’s a waste of time.  If we keep reinventing the wheel and playing NIH (Not Invented Here) games, we will continue to hemorrhage cash on rewrites of security plans that do not add security value with the exception of mitigating the “auditor risks”.

The System Security Plan (SSP) for one system should be a small binder (or even *gasp* stapled), not a rehash of every NIST publication, the agency policy, and a bunch of fluff to make it look like you added value.  People who do the latter activities or insist on them need to be put out of business, and that’s why I can appreciate the anti-wonk backlash that Richard promotes.




Posted in FISMA, NIST, Rants, Risk Management, What Doesn't Work | 1 Comment »

Bad Security People

Posted February 20th, 2007 by

I always like to say that there are two types of bad security people (I don’t address security people who don’t know security): The Secret Squirrel and The Chicken Little.

We all know The Secret Squirrel.  These are the people who sit back in their infosec fortress behind the biometrics and don’t tell you anything at all because it’s “sensitive information”.  This includes security policy and procedures, which you can’t have because you might be an attacker.  In short, for The Secret Squirrel, paranoia becomes everything, to the exclusion of serving the needs of the business.

The Chicken Little behaves exactly like the children’s story.  “The sky is falling,” they say, “We’re exposed to these risks.  The end is near!”  These people are the kind who cannot prioritize between various degrees of badness.

What both of these types of people have in common is that usually they don’t really understand what they are doing.  They only know one part of the whole job, and that is what they fixate on.

Please don’t let this happen to you.




Posted in Rants, What Doesn't Work | 1 Comment »

Indicator Species

Posted February 19th, 2007 by

Trout are an indicator species.  You can tell how healthy the stream is by counting the number of trout and the size of trout in a particular section.  Trout need clean water, a certain temperature range of water, protection from predators, unsilted gravel to spawn in, and a food supply like smaller fish and invertebrates.  So absence of trout means absence of these factors, which by extension means an unhealthy stream.

There are even metrics for this: number of trout per mile, pounds of trout per mile, average size of trout.  Biologists do periodic electroshocking surveys to capture the fish, weigh and measure them, then release them back into the current.  All in the interest of gathering metrics.

By extension, a very valuable tool for an information security manager is to be able to gather metrics.  Instead of trout per mile, we are interested in total number of vulnerabilities in our information system.  Instead of pounds of trout per mile, we are interested in the aggregate risk to our enterprise.  And so on.
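The trout-stream translation can be sketched concretely: count findings like trout per mile, and weight them by severity to get something like pounds per mile.  The severity weights and the findings below are hypothetical, just to show the shape of the metric:

```python
# Hypothetical ordinal weights for finding severity.
SEVERITY_WEIGHT = {"low": 1, "moderate": 3, "high": 9}

# Hypothetical vulnerability findings across the enterprise.
findings = [
    {"system": "HR",  "severity": "high"},
    {"system": "HR",  "severity": "low"},
    {"system": "Web", "severity": "moderate"},
]

# "Trout per mile": raw count of vulnerabilities.
total_vulns = len(findings)

# "Pounds of trout per mile": severity-weighted aggregate risk.
aggregate_risk = sum(SEVERITY_WEIGHT[f["severity"]] for f in findings)

print(total_vulns, aggregate_risk)  # 3 13
```

As with the electroshock surveys, the value is in tracking these numbers over time, not in any single reading.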

Enter Certification and Accreditation.  It is not just a paperwork exercise.  There, I said it.  It is, however, risk assessment and the gathering of metrics to determine how well our security program is progressing (or not, as the case may be).

As a whole, the government is spending $FooMillions on certification and accreditation and still losing the battle.  I know one agency that is in the process of getting fleeced year after year by unscrupulous contractors selling C&A solutions.  It seems like everybody I’ve worked with previously on a project who didn’t have the skills to succeed is now being billed to this agency as a subject-matter expert.  For every 30 people the agency hires, they get 5 that are any good, and the 25 bad ones can mess things up faster than the others can fix them.

Why is C&A in such a pathetic state?

Well, this is apparently a little-known secret: C&A is an indicator, not the actual act of providing “adequate security”.  If a security program is in place and effective, then it’s relatively easy to satisfy C&A requirements but not the other way around–it is possible to have a certified and accredited system that does not provide adequate security.

With C&A getting so much press from the guardians of all things security (NIST, OMB, and GAO), what has happened down among the practitioners is that the focus has shifted to the indicators instead of the root cause.  Going back to our trout stream, we’re expecting the pounds of trout per mile to go up based solely on the fact that we keep conducting electroshock surveys.

So how do we succeed at the information security game?  One of the steps is to realize C&A for what it is (a risk assessment and metrics tool for decision-makers, a method to incorporate security into the SDLC) and what it isn’t (a solution to internal agency politics, a comprehensive security program).  The next step is to relearn how to perform risk management, which is where the real intent and purpose of C&A lies.




Posted in FISMA, NIST, Risk Management, What Doesn't Work, What Works | 4 Comments »


