Beware the Cyber-Katrina!

Posted February 19th, 2009

Scenario: American Internet connections are attacked.  In the resulting chaos, the Government fails to respond at all, primarily because of jurisdictional infighting among the responders.  Mass hysteria ensues–40 years of darkness, cats sleeping with dogs, that kind of stuff.

Sound similar to New Orleans after Hurricane Katrina?  Well, this scenario now has a name: Cyber-Katrina.

At least, that’s what Paul Kurtz talked about this week at Black Hat DC.  Now, I understand what Kurtz is saying: that we need to figure out the national-level response while we have time so that when it happens we won’t be frozen with bureaucratic paralysis.  Yes, it works for me–I’ve been saying it ever since I thought I was somebody important last year.  =)

But Paul… don’t say you want to create a new Cyber-FEMA for the Internet.  That’s where the metaphor you’re using fails–if you carry it too far, what you’re saying is that you want to make a Government organization that will eventually fail when the nation needs it the most.  Saying you want a Cyber-FEMA is just an ugly thing to say once you think about it too long.

What Kurtz really meant to say is that we don’t have a national-level CERT that coordinates between the major players–DoD, DoJ, DHS, state and local governments, and the private sector–for large-scale incident response.  What Kurtz is really saying, if you read between the lines, is that US-CERT needs to be a national-level CERT and needs funding, training, people, and connections to do this mission.  In order to fulfill what the administration wants, needs, and is almost promising to the public through its management agenda, US-CERT has to get real big, real fast.

But the trick is, how do you explain this concept to somebody who doesn’t have either the security understanding or the national policy experience to understand the issue?  You fall back on Cyber-Katrina and maybe bank on a little FUD in the process.  Then the press gets all crazy on it–like breaking SSL means Cyber-Katrina Real Soon Now.

Now for those of you who will never be a candidate for Obama’s Cybersecurity Czar job, let me break this down for you big-bird stylie.  Right now there are 3 major candidates vying for the job.  Since there is no official recommendation (and there probably won’t be until April, when the 60 days to develop a strategy are over), the 3 candidates are making their moves to prove that they’re the right person to pick.  Think of it as their mini-platforms–just look out for when they start talking about themselves in the 3rd person.

FEMA Disaster Relief photo by Infrogmation. Could a Cyber-FEMA coordinate incident response for a Cyber-Katrina?

And in other news, I3P (with ties to Dartmouth) has issued its National Cyber Security Research and Development Challenges document, which um… rehashes the same stuff we’ve seen from the National Strategy to Secure Cyberspace, the Systems and Technology Research and Design Plan, the CSIS Recommendations, and the Obama Agenda.  Only the I3P report has all this weird psychologically-oriented mumbo-jumbo that made my eyes glaze over when I read it.

Guys, I’ve said this so many times I feel like a complete cynic: talk is cheap, security isn’t.  It seems like everybody has a plan but nobody’s willing to step up and fix the problem.  Not only that, but they’re taking each other’s recommendations, throwing them in a blender, and reissuing their own.  Wake me up when somebody actually does something.

It leads me to believe that, once again, those who talk don’t know, and those who know don’t talk.

Therefore, here’s the BSOFH’s guide to protecting the nation from Cyber-Katrina:

  • Designate a Cybersecurity Czar
  • Equip the Cybersecurity Czar with a $100B/year budget
  • Nationalize Microsoft, Cisco, and one of the major all-in-one security companies (Symantec)
  • Integrate all the IT assets you now own and force them to write good software
  • Public execution of any developer who uses strcpy(), because who knows what other stupid stuff they’ll do (see the C sketch after this list)
  • Require code review and vulnerability assessments for any IT product that is sold on the market
  • Regulate all IT installations to follow Government-approved hardening guides
  • Use US-CERT to monitor the military-industrial complex
  • ?????
  • Live in a secure Cyber-World
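
For the non-C-programmers wondering why strcpy() merits the firing squad: it copies bytes until it finds a terminator, with zero regard for how big the destination buffer is, which is how a lot of classic stack-smashing exploits get started.  Here’s a minimal sketch of the pitfall and a bounded alternative–my illustration, not anybody’s official coding standard:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[8];
        const char *input = "way more than eight bytes";

        /* strcpy() copies until it hits the terminating NUL, no matter
           how small the destination is.  This line would overflow buf
           and trash adjacent memory -- the classic exploitable bug:
               strcpy(buf, input);
           A bounded copy truncates instead of smashing the stack: */
        snprintf(buf, sizeof(buf), "%s", input);
        printf("%s\n", buf);  /* prints "way mor" -- truncated, but safe */
        return 0;
    }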

But hey, that’s not the American way–we’re not socialists, dammit! (well, except for mortgage companies and banks and automakers and um, yeah….)  So far all the plans have called for cooperation with the private sector, and that’s worked out just smashingly because of an industry-wide conflict of interest–writing junk software means that you can sell upgrades or new products later.

I think the problem is fixable, but I predict these are the conditions for it to happen:

  • Massive failure of some infrastructure component due to IT security issues
  • Massive ownage of Government IT systems that actually gets publicized
  • Deaths caused by massive IT Security fail
  • Osama Bin Laden starts writing exploit code
  • Citizen outrage to the point where my grandmother writes a letter to the President

Until then, security issues will always play second fiddle to wars, the economy, presidential impeachments, and a bazillion other things.  Because of this, security conditions will get much, much worse before they get better.

And then the cynic in me can’t help but think that, deep down inside, what the nation needs is precisely an IT Security Fail along the lines of 9-11/Katrina/Pearl Harbor/Dien Bien Phu/Task Force Smith.




Posted in BSOFH, Public Policy, Rants | 6 Comments »

When the Feds Come Calling

Posted October 21st, 2008

I’ve seen the scenario about a dozen times in the last 2 months–contractors and service providers of all sorts responding to the Government’s security requirements in the middle of a contract.  It’s almost reached the stage where I have it programmed as a “battle drill” a la the infantryman’s Battle Drill 1A, and I’m here to share the secret of negotiating these things.

Without naming names, let’s look at where I’ve seen this come up:

  • Non-Government Organizations that assist the Government with para-Government services to the citizens
  • Companies doing research and development funded by the Government–health care and military
  • Universities who do joint research with the Government
  • Anybody who runs something that the Government has designated as “critical infrastructure”
  • State and local governments who use Federal Government data for their social programs (unemployment systems, food stamps, etc.) and homeland security-esque activities (law enforcement, disaster response)
  • Health Care Providers who service Government insurance plans

For the purposes of this blog post, I’ll refer to all of these groups as contractors or service providers.  Yes, I’m mixing analogies, making huge generalizations, and I’m not precise at all.  However, these groups should all have the same goals and the approach is the same, so bear with me while I lump them all together.

Really, guys, you need to understand both sides of the story, because this is a cause for negotiation.  I’ll explain why in a minute.

On the Government side:  Well, we have some people we share data with.  It’s not a lot, and it’s sanitized, so the value of it is minimal except for the Washington Post Front Page Metric.  Even so, the data is PII that we’ve run through an anonymizer so that it’s just statistical data that doesn’t directly identify anybody.  We’ve gotten a pretty good handle on our own IT systems over the past 2 years, so our CISO and IG want us to focus on data that goes outside of our boundaries.  Now I don’t expect/want to “own” the contractor’s IT systems, because they provide us a service, not an IT system.  My core problem is that I’m trying to take an existing contract and add security requirements to it retroactively, and I’m not sure exactly how to do that.

Our Goals:

  • Accomplishing the goals of the program that we provided data to support
  • Protection of the data outside of our boundaries
  • Proving due-diligence to our 5 layers of oversight that we are doing the best we can to protect the data
  • Translating what we need into something the contractor understands
  • Being able to provide for the security of Government-owned data at little to no additional cost to the program

On the contractor/service provider side:  We took some data from the Government, and now they’re coming out of the blue saying that we need to be FISMA-compliant.  Now I don’t want to sound whiny, but this FISMA thing is a huge undertaking, and I’ve heard that for a small business such as ours, it can cripple us financially.  While I still want to help the Government add security to our project, I need to at least break even on the security support.  Our core problem is to keep security from impacting our project’s profitability.

Our Goals:

  • Accomplishing the goals of the program that we were provided data to support
  • Protection of the data given to us to keep the Government happy and continuing to fund us (the spice must flow!)
  • Giving something to the Government so that they can demonstrate due-diligence to their auditors and IG
  • Translating what we do into something the Government understands
  • Keeping the cost of security to an absolute minimum, or at least getting funded for what we do add, because it wasn’t scoped into the SOW

Hmm, looks like these goals are very much in alignment with each other.  About the only thing we need to figure out is scope and cost, which sounds very much like a negotiation.

Hardcore Negotiation Skills photo by shinosan.

Little-known facts that might help in our scenario here:

  • Section 2.4 of SP 800-53 discusses the use of compensating controls for contractor and service-provider systems.
  • One of the concepts in security and the Government is that agencies are to provide “adequate security” for their information and information systems.  Have a look at FISMA and OMB Circular A-130.
  • Repeat after me:  “The endstate is to provide a level of protection for the data equivalent or superior to what the Government would provide for that data.”
  • Appendix G in SP 800-53 has a traceability matrix through different standards that can serve as a “Rosetta Stone” for understanding each other.  Note to NIST: let’s throw in PCI-DSS and Sarbanes-Oxley, and change ISO 17799 to 27001.

So what’s a security geek to do?  Well, this, dear readers, is Rybolov’s 5-fold path to Government/contractor nirvana:

  1. Contractor and Government have a kickoff session to meet each other and build rapport, starting from common ground such as how you both have similar goals.  The problem really is one of managing each other’s expectations.
  2. Both Government and Contractor perform internal risk assessment to determine what kind of outcome they want to negotiate.
  3. Contractor and Government meet a week later to negotiate on security.
  4. Contractor provides documentation on what security controls they have in place.  This might be as minimal as a contract with the guard force company at their major sites, or it might be just employee background checks.
  5. Contractor and Government negotiate a 6-month plan-of-action.  For most organizations considering ISO 27001, this is a good time to make a promise to get it done.  For smaller organizations or lower-criticality data, we may not even need to go that far.

Assumptions and dependencies:

  • The data we’re talking about is low-criticality or even moderate-criticality.
  • This isn’t an outsourced IT system that could be considered government-owned, contractor-operated (GO-CO).



Posted in FISMA, Outsourcing | 1 Comment »

Et Tu, TIC?

Posted October 7th, 2008

Let’s talk about TIC today, dear readers, for I smell a conspiracy theory brewing.

For those of you who missed the quick brief, TIC is short for “Trusted Internet Connections” and is an architecture model/mandate/$foo to take all of the Internet connections in the Government (srsly, nobody knows how many of them really exist, but it’s somewhere in the 2,000-10,000 range) and consolidate them into 50.  These connections will then be monitored by DHS’s Einstein program.

No, Not That Kind of TIC photo by m.prinke.

To bring you all up to date, you’ll need to do some homework:

Now having read all of this, some things become fairly obvious:

  • If you have the following people needing connections:
    • 24 agencies, plus
    • DoD with 2 points of presence, plus
    • Intelligence agencies with a handful of Internet connections,
  • …then basically everybody gets one Internet connection.  This is not good–it’s all single point-of-DoS.
  • Agencies have been designated as Internet providers for other agencies.  Sounds like LoB in action.
  • Given the amount of traffic going through the TIC access points, it most likely is going to take a significant amount of hardware to monitor all these connections–maybe you saved 50% of the monitoring hardware by reducing the footprint, but it’s still hardware-intensive.
  • TIC is closely tied with the Networx contract.
  • In order to share Internet connections, there needs to be a network core between all of the agencies so that an agency without a TIC access point can route through multiple TIC service provider agencies.

And this is where my conspiracy theory comes in:  TIC is more about making a grand unified Government network than it is about monitoring events–Einstein is just an intermediate goal.  If you think about it, this is where the Government is headed.

We were headed this way back in ought-two with a wonderful name: GovNet.  To be honest, the groundwork wasn’t there and the idea was way ahead of its time, so it died a horrible death, but it’s gradually starting to happen, thanks to TIC, FDCC, and Einstein.


If you want to get a reaction out of the OMB folks, mention GovNet and watch them backpedal and cringe–I think the pain factor was very high for them on GovNet.  So I think that we should, as a cadre of information security folks, start calling TIC what it really is: GovNet 2.0!  =)




Posted in Technical | 2 Comments »

Comments on SCAP 2008

Posted September 24th, 2008

I just got back from the SCAP 2008 conference at NIST HQ, and this is a collection of my thoughts in a somewhat random order:

Presentation slides are available at the NVD website.

I blogged about SCAP a year ago, and started pushing it in conversations with security managers that I came across.  Really, if you’re managing security of anything and you don’t know what SCAP is, you need to get smart on it really fast, if for no other reason than that you will be pitched it by vendors sporting new certifications.

Introduction to SCAP:  SCAP is a collection of XML schemas/standards that allow technical security information to be exchanged between tools.  It consists of the following standards:

  • Common Platform Enumeration (CPE): A standard to describe a specific hardware, OS, and software configuration (a CPE name looks something like cpe:/o:microsoft:windows_2003).  Asset information is fairly humdrum, but it makes the rest of SCAP possible–think target enumeration and you’re pretty close.
  • Common Vulnerabilities and Exposures (CVE): A definition of publicly-known vulnerabilities and weaknesses.  Should be familiar to most security researchers and patch monkeys.
  • Common Configuration Enumeration (CCE): Basically, like CVE but specific to misconfigurations.
  • Common Vulnerability Scoring System (CVSS): A standard for determining the characteristics and impact of security vulnerabilities.  Hmmm, sounds suspiciously like standardization of what is a high, medium, and low criticality vulnerability (there’s a worked scoring example right after this list).
  • Open Vulnerability and Assessment Language (OVAL):  Actually, 3 schemas that describe the inventory of a computer, the configuration on that computer, and a report of what vulnerabilities were found on that computer.
  • Extensible Configuration Checklist Description Format (XCCDF): A data set that describes checks for vulnerabilities, benchmarks, or misconfigurations.  Sounds like the updates to your favorite vulnerability scanning tool because it is.
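
Since the CVSS bullet begs the question of how a score actually gets computed, here’s a minimal sketch of the CVSS v2 base-score arithmetic in C.  The weights come from the v2 specification; the sample metric values describe a remotely exploitable, low-complexity, no-authentication, partial-impact vulnerability–think garden-variety remote exploit.  This is my own illustration, not part of any SCAP tool:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Hypothetical vulnerability: AV:N/AC:L/Au:N/C:P/I:P/A:P.
           Weights below are from the CVSS v2 specification. */
        double av = 1.0;                        /* AccessVector: Network   */
        double ac = 0.71;                       /* AccessComplexity: Low   */
        double au = 0.704;                      /* Authentication: None    */
        double c = 0.275, i = 0.275, a = 0.275; /* C/I/A impact: Partial   */

        double impact = 10.41 * (1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a));
        double exploitability = 20.0 * av * ac * au;
        double f_impact = (impact == 0.0) ? 0.0 : 1.176;
        double base = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact;

        /* Round to one decimal place, per the spec */
        printf("CVSS v2 base score: %.1f\n", round(base * 10.0) / 10.0);
        return 0;
    }

That vector works out to 7.5, which matches the base score NVD publishes for AV:N/AC:L/Au:N/C:P/I:P/A:P.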

Hall of Standards inside NIST HQ photo by ME!!!

What’s the big deal with SCAP: SCAP allows data exchanges between tools.  So, for example, you can take a technical policy compliance tool, load up the official Government hardening policy in XCCDF for, say, Windows 2003, run a compliance scan, export the data in OVAL, and load the results into a final application that can help your CISO keep track of all the vulnerabilities.  Basically, imagine that you’re DoD and have 1.5 million desktops–how do you manage all of the technical information on those without having tools that can import and export from each other?
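
To make the import/export story concrete, here’s a toy sketch in C of the kind of structured record these tools pass around–an XCCDF-style rule result.  Caveat: real SCAP tools use actual XML parsers and the full schemas, and the rule id here is made up for illustration:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* A simplified XCCDF-style rule result, as one tool might emit it */
        const char *rule_result =
            "<rule-result idref=\"example-rule-minimum-password-length\">"
            "<result>fail</result>"
            "</rule-result>";

        /* Pull out the text between <result> and </result> */
        const char *start = strstr(rule_result, "<result>");
        const char *end   = strstr(rule_result, "</result>");
        if (start && end) {
            start += strlen("<result>");
            /* %.*s prints exactly (end - start) characters */
            printf("check outcome: %.*s\n", (int)(end - start), start);
        }
        return 0;
    }

Swap a different tool onto either side of that record and nothing breaks–which is the entire point of the standard.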

And then there was the Federal Desktop Core Configuration (FDCC): OMB and Karen Evans handed SCAP its first trial-by-fire.  FDCC is a configuration standard that is to be rolled out to every Government desktop.  According to responses received by OMB from the departments in the executive branch (see, Karen, I WAS paying attention =) ), there are roughly 3.5 million desktops inside the Government.  The only way to manage these desktops is through automation, and SCAP is providing that.

He sings, he dances, that Tony Sager is a great guy: So he’s presented at Black Hat, now SCAP 2008 (.pdf caveat).  Basically, while the NSA has a great red-team (think pen-test) capability, they had a major change of heart and realized, like the rest of the security world (*cough*Ranum*cough*), that while attacking is fun, it isn’t very productive at defending your systems–there is much more work to be done for the defenders, and we need more clueful people doing that.

Vendors are jumping on the bandwagon with both feet: The amount of uptake from the vulnerability and policy compliance vendors is amazing.  I would give numbers of how many are certified, but I literally get a new announcement in my news reader every week or so.  For vendors, being certified means that you can sell your product to the Government; not being certified means that you get to sit on the bench watching everybody else have all the fun.  The GSA SAIR Smart-Buy Blanket Purchase Agreement sweetens the deal immensely by making your product easily purchasable in massive quantities by the Government.

Where are the rest of the standards: Yes, FDCC is great, but where are the rest of the hardening standards in cute importable XML files, ready to be snarfed into my SCAP-compliant tool?  Truth be told, this is one problem with SCAP right now because everybody has been focusing on FDCC and hasn’t had time yet to look at the other platforms.  Key word is “yet” because it’s happening real soon now, and it’s fairly trivial to convert the already-existing DISA STIGs or CIS Benchmarks into XCCDF.  In fact, Sun was blindsided by somebody who had made some SCAP schemas for their products and they had no idea that anybody was working on it–new content gets added practically daily because of the open-source nature of SCAP.

Changing Government role: This is going to be controversial.  With NVD/CVE, the government became the authoritative source for vulnerabilities.  So far that’s worked pretty well.  With the rest of SCAP, the Government changes roles to be a provider of content and configurations.  If NIST is smart, they’ll stay out of this because they prefer to be in the R&D business and not the operations side of things.  Look for DHS to pick up the role of being a definitions provider.  Government has to be careful here because they could in some instances be competing with companies that sell SCAP-like feed services.  Not a happy spot for either side of the fence.

More information security trickle-down effect: A repeated theme at SCAP 2008 is that the private sector is interested in what Big SCAP can do for them.  The vendors are using SCAP certification as a differentiator for the time being, but expect to see SCAP for security management standards like PCI-DSS, HIPAA, and SOX–to be honest here, though, most of the vendors in this space cut their teeth on these standards, so it’s just a matter of legwork to be able to export in SCAP schemas.  Woot, we all win thanks to the magic that is the Government flexing its IT budget dollars!

OS and Applications vendors: these guys are feeling the squeeze of standardization.  On one hand, the smart vendors (Oracle, Microsoft, Sun, Cisco) have people already working with DISA/NSA to help produce the configuration guides; they just have to sit back and let somebody turn the guides into SCAP content.  Some of the applications vendors still haven’t figured out that their software is about to be made obsolete in the Government market because they don’t have the knowledge base to self-certify with FDCC and later OS standards.  With a 3-year lead time required for some of the desktop applications before a feature request (make my junk work with FDCC) makes it into a product release, there had better be some cluebat work going on in the application vendor community.  Adobe, I’m talking to you and LiveCycle ES–if you need help, just call me.

But how about system integrators: Well, for the time being, system integrators have almost a free ride–they just have to deal with FDCC.  There are some of them that have some cool solutions built on the capabilities of SCAP, but for the most part I haven’t seen much movement except for people who do some R&D.  Unfortunately for system integrators, the Federal Acquisition Regulation now requires that anything you sell to the Government be configured IAW the NIST checklists program.  And just how do you think the NIST checklists program will be implemented?  I’ll take SCAP for $5Bazillion, Alex.  Smart system integrators will at least keep an eye on SCAP before it blindsides them 6 months from now.

Technical compliance tools are destined to be a commodity: For the longest time, the vulnerability assessment vendors made their reputations by having the best vulnerability signatures.  With standardized SCAP feeds providing true compatibility across products, the pure-play security tools have fewer ways to differentiate themselves, and they fall into a commodity market centered on the accuracy of their checks, with reduced false positives and negatives.  While it may seem like a joyride for the time being (hey, we just got our ticket to sell to the Gubmint by being SCAP-certified), that will soon turn into frustration as the business model changes and the margins get smaller.  Smart vendors will figure out ways to differentiate themselves and will survive; the others will not.

Which leads me to this: Why is it that SCAP only applies to security tools?  I mean, seriously, guys like BigFix and NetIQ have crossover from technical policy compliance to network management systems–CPE in particular.  What we need is a similar effort applied to network and data center tools.  And don’t point me at SNMP, I’m talking rich data.  =)  On a positive note, expect some of the security pure-play tools to be bought up and incorporated into enterprise suites if they aren’t already.

Side notes:

I love how the many deer (well over 9000 deer on the NIST campus) all have ear tags.  It brings up all sorts of ideas for scientific studies.  But apparently the deer are on birth control shots or something….

Former Potomac Forum students:  Whattayaknow, I met some of our former students who are probably reading this right now because I pimped out my blog a little too aggressively.  =)  Hi Shawn, Marc, and Bob!

Old friends:  Wow, I found some of them, too.  Hi Jess, Walid, Chris, and a cast of thousands.

Deer on NIST Gaithersburg Campus photo by Chucka_NC.




Posted in DISA, FISMA, NIST, Technical, What Works | 2 Comments »

Cloud Computing and the Government

Posted August 19th, 2008

Post summary: We’re not ready yet culturally.

What spurred this blog post into being is this announcement from ServerVault and Apptis about a Federal Computing Cloud.  I think it’s a pretty ballsy move, and I’ll be watching to see if it works out.

Disclaimer: at one time I managed security for something similar in the managed services world, only it was built account-by-account, with everything being a one-off.  And yeah, we didn’t start our organization the right way, so we had a ton of legacy concepts that we could never shake off–more than anything else, our commercial background and ways of doing things.


Current Theory on Cloud Computing photo by cote.

The way you make money in the managed services world is through standardization and economies of scale.  To us mere mortals, that means the following:

  • Standardized OS builds
  • Shared services where it makes sense
  • Shared services as the option of choice
  • Split your people’s time between clients
  • Up-charge for non-standard configurations
  • Refuse one-off configurations on a case-by-case basis

The last 2 were our downfall.  Always eager to please our clients, our senior leadership would agree to whatever one-offs they felt were necessary for client relationship purposes, but without regard to the increased costs and inefficiency when it came time to implement.

Now for those of you out in the non-Government world, let me introduce you to the conundrum of the managed services world: shared services only work in limited amounts.  Yes, you can manage your infrastructure better than the Government does, but they’ll still not like most of it because, culturally, they expect a custom-built solution that they own.  Yes, it’s as simple as managing the client’s expectations of ownership vs. their cost savings, and I don’t think we’re over that hurdle yet.

And this is the reason: when it comes to security and cloud computing, the problem is that you’re only as technically literate as your auditors are.  If they don’t understand what the solution is and what the controls are around it, you do not have a viable solution for the public sector.

A “long time ago” (9000 years at least), I created the 2 golden rules for shared infrastructure:

  • One customer cannot see another customer.
  • One customer cannot affect another customer’s level of service.

And the side-rules for shared infrastructure in the public sector:

  • We have a huge set of common controls that you get the documentation for.  It will have my name on it, but you don’t have to spend the money to get it done.
  • It’s to my benefit to provide you with transparency in how my cloud operates because otherwise, my solution is invalidated by the auditors.
  • Come to us to design a solution; it’s cheaper for you that way.  I know how to do it effectively and more cheaply because it’s my business to know the economics of my cloud.
  • You have to give up control in some ways in order to get cost savings.
  • There is a line beyond which you cannot change or view because of the 2 golden rules.  The only exception is that I tell you how it’s made, but you can’t see any of the data that goes into my infrastructure.
  • If I let you audit my infrastructure, you’ll want to make changes, which can’t happen because of the 2 golden rules.
  • I’ll be very careful where I put your data because if your mission data spills into my infrastructure, I put myself at extreme risk.

So, are ServerVault and Apptis able to win in their cloud computing venture?  Honestly, I don’t know.  I do think that when somebody finally figures out how to do cloud computing with the Federal Government, it will pay off nicely.

I think Apptis might be spreading themselves fairly thin at this point–rumor has it they were having some problems last year.  I think ServerVault is in their comfort space and didn’t have to do too much for this service offering.

I can’t help but think that there’s something missing in all of this, and that something is partnering with a sponsoring agency on a Line of Business.  FEA comes to mind.




Posted in What Doesn't Work, What Works | 1 Comment »

C&A Seminar in August, Instructor-to-Coolness Ratio Goes Up!

Posted July 28th, 2008

Potomac Forum is having a 2-day C&A seminar on August 6th and 7th.  It will be unusually good this time because I won’t be there to drag everybody down–I’ll be on the road for some training.  =)  Anyway, check it out and say hi to my instructors for me.




Posted in FISMA, Speaking | 1 Comment »


