Splunk Goes After the FISMA Lucre, They’re not Alone

Posted April 23rd, 2008 by

Interestingly, Splunk has been going after FISMA dollars lately.  Check out the Forbes article, the video on YouTube, and their own articles.  I guess there’s another “pig at the trough” (heh, including myself from time to time).

It’s interesting how companies decide to play in the Government market.  They seem to fall into two categories:  companies that have grown to the point where they can sustain a long-term investment with a chance of payoff in five years, and companies that are desperate and just want a spot at the trough.

To its credit, Splunk seems to be one of the former and not the latter, unlike the hordes of “Continuous Compliance” tools I’ve seen in the past year.

Which brings up the big elephant in the room that nobody will talk about:  who is actually making money on FISMA?

This is my quick rundown on where the money is at:

  • Large Security Services Firms:  Definitely.  About a quarter of that work is document-munging and other jack*ssery that is wasteful, but a good three-quarters of the services are needed and well-received.  Survival tip:  combine FISMA services with other advisory/assessment services.
  • Software and Product Vendors:  Yes and no.  It depends on how well they can make that crucial step of tracing their product to the catalog of controls, or whether they have a product so compelling that the Government can’t say no (A-V).  Survival tip:  partner with the large integrator firms.
  • Managed Security Service Providers:  Yes, for the time being, but watch their market get eaten from the top as US-CERT gets more systems monitored under Einstein and from the bottom as agencies stand up their own capabilities.  Survival tip:  get a US-CERT affiliation and watch your funding trail; when it starts to dry up, you had better be diversified.
  • System Integrators:  It’s split.  Half of them take a loss on FISMA-related issues because they get caught in a “Do What I Mean” situation by a “Contractor must comply with FISMA and all NIST Guidance” clause.  The other half know how to scope FISMA into their proposals or have good enough program management skills to protest changes in scope/cost.  Survival tip:  have a Government-specific CSO/CISO who understands shared controls and how to negotiate with their SES counterparts.
  • 8(a) and Security Boutique Firms:  Yes, depending on how well they can absorb overhead while they look for work.  Survival tip:  register as a disadvantaged/woman-owned/minority-owned/foo-owned business; the big firms have to hire you because their contracts must include a certain percentage of small businesses.
  • Security Training Providers:  Yes.  These guys always win when there’s demand, which is why SANS, ISC2, and a host of hundreds are all located around the Beltway.  Survival tip:  court government representation at training events and recruit government speakers.



Posted in FISMA, Outsourcing, What Doesn't Work, What Works | No Comments »

Selling Water to People in the Desert

Posted April 15th, 2008 by

Some things should absolutely sell themselves. In the Mojave Desert, the guy to be is the one driving the ice cream truck, because everybody is happy to see him.

When it comes to the Government, there is one thing that is its lifeblood: it makes and trades secrets. And since 2001, every building in DC has become its own semi-autonomous nation-state with X-ray machines and armed guards.

So why is it so hard to sell Data Leakage Prevention (DLP) and Database Activity Monitoring (DAM) solutions to them? I’ve talked to vendors in both solution spaces, and they’ve found that it’s a hard sell to get product in the door.

If anybody needs DAM and DLP, it’s the Gub’mint. I try not to play this game, but if you look at the PII incidents that meet the Washington Post front-page threshold, you’ll see that all of them were preventable with DAM, DLP, or both.

Image: DAM and Leakage Prevention (photo by Dru)

My thoughts on what’s up:

  • Government purchasing lags behind the private sector. Government CPIC works on a 2-year cycle. Keeping in mind that the average life expectancy for a CISO is 2 years, this doesn’t bode well. This is also why it’s so hard to get strategic projects (*cough* redundant data center *cough*) completed.
  • If it’s not in the control catalog, it’s hard to justify buying it. It’s the double-edged sword of compliance: unless I have all the controls in the catalog implemented, I can’t really justify anything outside the catalog, and once I have the whole catalog done, they yank my budget and give it to somebody who doesn’t.
  • It takes approximately two years to get a particular technology into the catalog of controls. Even if the catalog (SP 800-53) is revised every year and NIST thinks my technology/concept is a good idea, I still have to wait for the next revision.
  • So if you introduce a new technology today, the earliest I could expect to have it implemented is in four years, three if you’re lucky.
  • Selling to the government is long and slow (can we say “heavy on bizdev investment”?) but has a big payoff: remember that the overall IT budget is just shy of $80Bazillionz.

The winning strategies:

  1. Partnering up with the larger integrators who can bundle your product with an existing outsourcing contract.
  2. Matching up your product description with the catalog of controls. Make it easy for the Government to select your product (see the traceability sketch after this list).
  3. Let NIST and MITRE evaluate your product. Seriously. If you’ve got game, flaunt it.
  4. Invest in BizDev expecting 4 years before you get a return.
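
Here’s a minimal sketch of what that feature-to-control traceability could look like for a hypothetical log-management product. The feature names are invented, and the SP 800-53 control selections are illustrative examples, not an official or vendor-published mapping.

```python
# Hypothetical traceability matrix: invented product features mapped to
# illustrative SP 800-53 control IDs. Not an official or vendor mapping.
CONTROL_TRACEABILITY = {
    "centralized log collection":         ["AU-2", "AU-3", "AU-9"],
    "log review and alerting dashboards": ["AU-6", "SI-4"],
    "log retention and archiving":        ["AU-11"],
    "assessor-ready report generation":   ["AU-7", "CA-7"],
}

def print_traceability(matrix):
    """Print a feature-to-control table that can be dropped into a proposal."""
    for feature, controls in sorted(matrix.items()):
        print(f"{feature:<38} -> {', '.join(controls)}")

print_traceability(CONTROL_TRACEABILITY)
```

The point isn’t the code, it’s the artifact: hand the Government a table that says “this feature helps you satisfy that control” and you’ve done half of their justification work for them.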



Posted in FISMA, Technical, What Doesn't Work, What Works | No Comments »

Oooh, DITSCAP to DIACAP is SOOO Hard

Posted April 9th, 2008 by

There’s a very nice article in Military Information Technology Magazine (online edition, in case you couldn’t figure it out) about the DITSCAP-to-DIACAP transition.

Just looking at the concepts behind DIACAP, they’re very sound.  In some places, the article whines a bit too much.  Me, I’m glad to see DITSCAP go the way of all flesh in favor of risk registers and sharing of risk information with “business partners”.

My favorite quote this week:

“The services face a number of other challenges in implementing DIACAP, not least of which is what Lundgren called ‘significant cultural issue’ in moving from the ‘paperwork drill’ characteristic of DITSCAP, to DIACAP, ‘where you’re expected to actually go out and do the testing.'”

How can that NOT be a good thing?

Some other good quotes in the article and my random thoughts:

“Training and education of personnel is another concern faced by DoD components, according to King. ‘They must make sure they have a cadre of information assurance professionals who are in full understanding of what DIACAP is and how it differs from DITSCAP,’ he said. ‘This includes the complete realm of IA professionals, including principle certification and accreditation personnel to program managers and IA managers. There is a significant training and education tail that need to be accomplished for DIACAP to be properly implemented.'”

Well, to be very honest, I think that this was a problem with DITSCAP, is a problem with NIST 800-37, and will continue to be a problem until I work myself out of a job because everybody in the government understands risk management.

“This is going to save money and time because it allows capabilities to be put out to the field without having to be certified and accredited three or four times.”

That’s a happy thing.  Wait until DoD figures out how to do common controls, then they’ll find out how to save scads of money.

Now, want to know the secret to why DIACAP will succeed?  This is a bit of brilliance that needs to be pointed out.  DIACAP became the standard in late 2007, after DoD had watched the civilian agencies go through five years of FISMA implementation and was able to steal the best parts and ignore the bad parts.

Future state:  civilian agencies borrowing some of the DIACAP details, like scorecards and eMASS.

Future state:  merging of DIACAP, DCID 6/3, and SP 800-37.

Future state:  adoption of the “one standard to rule them all” by anybody who trades data with the Government.




Posted in Army, Risk Management, What Works | 1 Comment »

Government-Wide Monitoring? ‘Bout Time.

Posted April 8th, 2008 by

Good, I’m glad we’re finally doing this.

For those of you watching the other initiatives, this does have something to do with the Trusted Internet Connections initiative: if you can choke traffic down into 50 “sets of tubes,” it’s easier to watch them.

Expect to see more over the next year; the pieces are starting to fall into place.




Posted in Technical, What Works | 1 Comment »

Remembering Accreditation

Posted March 20th, 2008 by

Accreditation is the forgotten and abused poor relation to certification.

Part of the magic that makes C&A happen is this:  you have certification, which is a verification that all the minimum security controls are in place, and then you have accreditation, which is a formal acceptance of risk by a senior manager/executive.  You know what?  The more I think about this idea, the more I come to see the beautiful simplicity in it as a design for IT security governance.  You really are looking at two complete concepts:  compliance and risk management.

So far, we’ve been phenomenal at doing the certification part.  That’s easy; it’s driven by a catalog of controls and checklists.  Hey, it’s compliance after all--so easy an accountant, er, caveman could do it. =)

The problem we’re having is in accreditation.   Bear with me here while I illustrate the process of how accreditation works in the real world.

After certification, a list of deficiencies is turned into a Plan of Action and Milestones–basically an IOU list of how much it will cost to fix the deficiency and when you can have it done by.

Then the completed C&A package is submitted to the Authorizing Official.  It consists of the following things:

  • Security Plan
  • Security Testing Results
  • Plan of Action and Milestones

The accreditor looks at the C&A package and gives the system one of the following:

  • Denial to Operate
  • Approval to Operate
  • Interim Approval to Operate (i.e., limited approval)

And that’s how life goes.
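
For illustration, here’s a minimal sketch of the data that flows through that process.  The field names, file names, dollar figures, and enum labels are invented or paraphrased from the lists above; this is not any official package format.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class AccreditationDecision(Enum):
    """The three outcomes described above (labels paraphrased, not official)."""
    DENIAL_TO_OPERATE = "DTO"
    APPROVAL_TO_OPERATE = "ATO"
    INTERIM_APPROVAL_TO_OPERATE = "IATO"  # i.e., limited approval

@dataclass
class PoamEntry:
    """One 'IOU': a deficiency, what it costs to fix, and when it will be fixed."""
    weakness: str
    estimated_cost_dollars: int
    scheduled_completion: date

# A toy C&A package: security plan, test results, and the POA&M built from
# the deficiencies found during certification. File names are invented.
package = {
    "security_plan": "ssp-widget-system.doc",
    "security_testing_results": "ste-results-2008.xls",
    "poam": [
        PoamEntry("Contingency plan never tested", 15000, date(2008, 9, 30)),
        PoamEntry("Audit logs not reviewed weekly", 5000, date(2008, 6, 30)),
    ],
}

# The Authorizing Official weighs the package and issues one of the decisions.
decision = AccreditationDecision.INTERIM_APPROVAL_TO_OPERATE
print(decision, [entry.weakness for entry in package["poam"]])
```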

There’s a critical flaw here, one that you need to understand:  what we’re giving the Authorizing Official is, more often than not, the risks associated with compliance validation testing.  In other words, audit risks that might or might not directly translate into compromised systems or serious incidents.

More often than not, the accreditation decision is based on these criteria:

  • Do I trust the system owner and ISSO?
  • Has my assessment staff done an adequate job at finding all the risks I’m exposed to?
  • What is the extent of my political exposure?
  • How much do I need this system to be up and operational right now?
  • Is there something I need fixed right now, even though I’m OK with the other parts?

For the most part, this is risk management, but from a different angle.  We’ve unintentionally derailed what we’re trying to do with accreditation.  It’s not about total risk, it’s about audit risk.  Instead of IT security risk management, it becomes career risk management.

And the key to fixing this is to get good, valid, thorough risk assessments in parallel with compliance assessments.  That requires smart people.

Smart CISOs out there in Government understand this “flaw” in the process.  The successful ones come from technical security testing backgrounds and know how to get good, valid, comprehensive risk assessments out of their staff and contractors, and that, dear readers, is the primary difference between agencies that succeed and those that do not.

NIST is coming partly to the rescue here.  They’re working on an Accreditor’s Handbook that is designed to teach Authorizing Officials how to evaluate what it is they’re being given.  That’s a start.

However, as an industry, we need more people who can do security and risk assessments.  This is crucial to us as a whole because your assessment is only as good as the people you hire to do it.  If we don’t have a long-term plan to grow people into this role, we will continually fail; it takes at least 3-5 years to grow somebody coming from a system administrator background into the role with the skills to do a good assessment.  In other words, you need the recruiting machinery of a college basketball program in order to bring in the talent you need to meet the demand.

And this is why I have a significant case of heartburn when it comes to Alan Paller.  What SANS teaches perfectly complements the policy, standards, regulations, and compliance side of the field.  And the SANS approach--highly tactical and very technologically focused--is very much needed.  Let me say that again:  we need a SANS to train the huge volume of people in order to have valid, thorough risk assessments.

There is a huge opportunity to say “you guys take care of the policy and procedures side (*cough* the CISSP side), and we can give you the technical knowledge (the G.*C side) to augment your staff’s capabilities.”  But for some reason, Alan sees FISMA, NIST, and C&A as competitors and tries to undermine them whenever he can.

Instead of working with them, he works against them.  All the smart people in DC know this.




Posted in FISMA, NIST, Rants, Risk Management, What Doesn't Work, What Works | No Comments »

Towards Actionable Metrics

Posted March 4th, 2008 by

Ah yes, our favorite part of FISMA:  the ongoing reporting of metrics to OMB.  Last year’s guidance on what to report is in OMB Memo 07-19.  It’s worth the time to read, and you probably won’t follow the rest of this blog post if you don’t at least skim it to find out what kinds of items get reported.

Still haven’t read it?  Fer chrissakes, just look at pages 24-28; it’s a fast read.

If you look through the data that OMB wants, there are 2 recurring themes:  What is the scope/extent/size of your IT systems, and how well are you doing what we told you to do to protect them?  In other words, how effectively are you, Mr CISO, executing at the operational level?

We’re missing one crucial bit of process here: what are we actually going to do with scoping metrics and operational performance metrics at the national, strategic level?  What we are collecting and reporting are primarily operational-level metrics that any good CISO should at least know, or be able to guess at, to do their job, but they’re not really the type of metrics we need to be collecting at levels above the CISO unless our sole purpose is to watch over the CISO’s shoulder.

As our metrics gurus will point out, the following are characteristics of good metrics:

  • Easy to collect:  I think the metrics that OMB is asking for are fairly easy to collect now that people know what to expect.  Originally, they were not.
  • Objective:  Um, I’ll intentionally side-step this one.  Suffice it to say that I’ve heard from several people a story where the punch line goes something like “Your security can’t be this good; we’ve already decided that you’re getting a ‘D’.”
  • Consistent:  Our consistency is inconsistent.  Look at how many times the FISMA grading scale has changed, and we still wonder why people think it’s not rooted in any kind of reality.  And yes, I’m advocating yet another change, so I’m probably more an accomplice than not.
  • Relevant:  We do a fairly good job at this.  Scoping and performance metrics are fairly relevant.  I have some questions about whether our metrics are relevant at the appropriate level, but I’ve already mentioned that.
  • Actionable:  This is where I think we fall apart because we’re collecting metrics that we’re not really using for anything.  More on this later….

Now, as Dan Geer says in his outstanding metrics tutorial, the key to metrics is to start measuring anything you can (caveat: 6-MB PDF).  The line of thought goes that if you can collect a preliminary set of data and do some analysis on it, it will tell you where you really need to be collecting metrics.

The techie version of this: the first server install you do, you will blow away in six months, because by then you know better how you operate and what the configuration really needs to be.

Now ain’t that special?  =)

So the question I pose is this:  after 6 years, have we reached the watershed point where we’ve outgrown our initial set of metrics and are ready to tailor our metrics based on what we now know?

I think the answer is yes.  Applying our criteria for good metrics, what we need is a good set of questions to answer:

  • What national-level programs can reduce the aggregate risk to the government?
  • What additional support do the agencies need and how do we translate that into policy?
  • As an executive branch, are we spending too much or too little on security?  Yes, I know what the analysts say, but their model is for companies, not the Government.
  • What additional threats are there to government information and missions?  Yes, I’m talking about state-sponsored hacking and some of the other things specific to the government.  Is it cost-effective to blackhole IP ranges for some countries for some services?
  • Is it more cost-effective to convert all the agencies to one single NSM/SIEM/$foo a la Einstein, or is it better to do it on a per-agency basis?
  • What is the cost of implementing FDCC, and is it more cost-effective and risk-effective to do it immediately or to wait until the next tech refresh on desktops as we migrate to Vista or upgrade Vista to the next major service pack?
  • What is the cost-benefit-risk comparison for the Trusted Internet Connections initiative, and why did we come up with 50 as a number versus 10 or 100?
  • Is there a common theme in unmitigated vulnerabilities (long-term, recurring POA&Ms) across all the agencies that can be “fixed” with policy and funding at the national level?  Say, for example, the fact that many systems don’t have a decent backup site, so why not a federal-level DR “Hotel”?  (A small analysis sketch follows this list.)
  • Many more that are above and beyond my ability to generate today…
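
As a sketch of what answering that POA&M-theme question could look like, here’s a minimal example that scans a hypothetical cross-agency feed of long-term POA&M items for weaknesses that several agencies share.  The agency names, weakness categories, and thresholds are all invented for illustration; no such feed exists today, which is rather the point.

```python
# Hypothetical feed of long-term open POA&M items reported by agencies.
# Agencies, weakness categories, and ages are invented for illustration.
open_poam_items = [
    {"agency": "Agency A", "weakness": "no alternate processing site", "age_days": 720},
    {"agency": "Agency B", "weakness": "no alternate processing site", "age_days": 540},
    {"agency": "Agency C", "weakness": "unsupported operating systems", "age_days": 400},
    {"agency": "Agency D", "weakness": "no alternate processing site", "age_days": 610},
]

def recurring_themes(items, min_age_days=365, min_agencies=2):
    """Count how many agencies report the same long-term (stale) weakness."""
    agencies_per_weakness = {}
    for item in items:
        if item["age_days"] >= min_age_days:
            agencies_per_weakness.setdefault(item["weakness"], set()).add(item["agency"])
    return {w: len(a) for w, a in agencies_per_weakness.items() if len(a) >= min_agencies}

# Weaknesses carried by several agencies for over a year are candidates for a
# national-level fix (say, a shared DR "Hotel" instead of per-agency backup sites).
print(recurring_themes(open_poam_items))
```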

In other words, I want to see metrics that produce action or at least steer us to where we need to be.  I’ve said it before and I’ll say it again:  metrics without actionability mean that what we’ve ended up doing is performing information security management through public shame.  Yes, some of that is necessary to serve as a catalyst: it generates public support, which generates Congressional support, which gets the laws on the books to initiate action.  But do we still need it now that we have those pieces in place?

If I had my druthers, this is what I would like to see happen; maybe one day I’ll get somebody’s attention:

  • OMB and GAO directly engage Mr Jacquith to help them build a national-level metrics program.
  • We produce metrics that are actionable.
  • We find a way to say what our problems are without overreacting.  I don’t know if this can happen because of cultural issues.
  • We share the metrics and the corresponding results with the information security management world because we’ve just generated the largest-scale metrics program ever. 

And oh yeah, while I’m making wishes, I want a friggin’ pony for Christmas! =)




Posted in FISMA, What Doesn't Work, What Works | 3 Comments »


