DDoS Planning: Business Continuity with a Twist

Posted August 17th, 2011

So since I’ve semi-officially been granted the title of “The DDoS Kid” after some of the incident response, analysis, and talks that I’ve done, I’m starting to get asked a lot about how much the average DDoS costs the targeted organization.  I have some ideas on this, but the simplest way to estimate it is to recycle Business Continuity/Disaster Recovery (BCP/DR) figures with some small twists.

Scoping:

  • Plan on a 4-day attack.  A typical attack duration is 2-7 days.
  • Consider an attack on the “main” (www) site and anything else that makes money (shopping cart, product pages).

Direct:

  • Downtime: one day’s worth of downtime costs, calculated for both peak times (for most eCommerce sites, that’s Thanksgiving through January 5th) and low-traffic times, x (attack duration).
  • Bandwidth: For services that charge by the bit or CPU cycle, such as cloud computing or some ISP services, the direct cost of usage bursting.  The cost per bit/CPU/$foo is available from the service provider; multiply your average rate for peak times by 1,000 (small attack) or 10,000 (large attack) x (attack duration) worth of usage.  For example, a 10 Mbps average peak rate becomes roughly 10 Gbps of traffic during a small attack.  This is the only big difference in cost from BCP/DR data.
  • Mitigation Services:  Figure $5K to $10K per day for a DDoS mitigation service x (duration of attack).

Indirect:

  • Increased call center load: A percentage of users (10% as a starting guess) now call the call center x (average dollar cost per call) x (attack duration).
  • Increased physical “storefront” visits: A percentage of users (10% as a starting guess) now have to go to a physical location x (average incremental cost per visit) x (attack duration).
  • Customer churn: customer loss due to frustration.  Figure 2-4% customer loss x (attack duration).

Brand damage (these costs vary from industry to industry and attack to attack):

  • Increased marketing budget: Percentage increase in marketing budget.  Possible starting value is 5%.
  • Increased customer retention costs: Percentage increase in customer retention costs.  Possible starting value is 10%.

Note that it’s reasonably easy to create example costs for small, medium, and large attacks and do planning around a medium-sized attack.
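
If you want to turn the rules of thumb above into a quick small/medium/large estimate, here’s a minimal sketch in Python.  Every dollar figure and percentage in it is a placeholder assumption for illustration, not a benchmark; plug in your own BCP/DR numbers.

```python
# Back-of-the-envelope DDoS cost model.  All figures are placeholder assumptions.
def estimate_ddos_cost(
    attack_days=4,                 # scoping: plan on a 4-day attack (typical is 2-7)
    daily_downtime_cost=100_000,   # one day's revenue for www + revenue-generating pages
    daily_burst_cost=5_000,        # usage bursting on bit/CPU-billed services
    mitigation_per_day=7_500,      # DDoS mitigation service, $5K-$10K per day
    daily_calls=2_000, call_shift=0.10, cost_per_call=6.00,       # call center load
    daily_visits=500, visit_shift=0.10, cost_per_visit=15.00,     # storefront visits
    customers=50_000, churn_rate=0.03, value_per_customer=40.00,  # churn, 2-4%
):
    direct = attack_days * (daily_downtime_cost + daily_burst_cost + mitigation_per_day)
    callcenter = attack_days * daily_calls * call_shift * cost_per_call
    storefront = attack_days * daily_visits * visit_shift * cost_per_visit
    churn = customers * churn_rate * value_per_customer  # scale by attack_days for the per-day reading
    return direct + callcenter + storefront + churn

print(f"Medium attack, 4 days: ${estimate_ddos_cost():,.0f}")
print(f"Large attack, 7 days:  ${estimate_ddos_cost(attack_days=7, daily_burst_cost=50_000):,.0f}")
```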

However, while we can recycle BCP/DR figures for the outage itself, mitigation of the attack is different:

  • For high-volume attacks, you will need to rely on service providers for mitigation simply because of their capacity.
  • Fail-over to a secondary site means that you now have two sites that are overwhelmed.
  • Restoration of service after the attack is more like recovering from a hacking attack than resuming service at the primary datacenter.


Posted in DDoS, Risk Management, Technical

Realistic NSTIC

Posted August 10th, 2011

OK, NSTIC has been out a couple of months now, with the usual “ZOMG it’s RealID all over again” worry-mongers raising their heads.

So we’re going to go through what NSTIC is and isn’t and some “colorful” (or “off-color” depending on your opinion) use cases for how I would (hypothetically, of course) use an Identity Provider under NSTIC.

The Future Looks Oddly Like the Past

There are already identity providers out there doing part of NSTIC: Google Authenticator, Microsoft Passport, Facebook Connect, even OpenID fits into part of the ecosystem.  My first reaction after reading the NSTIC plan was that the Government was letting the pioneers in the online identity space take all the arrows, and then swooping in to save the day with a standardized plan for the providers to do what they’ve been doing all along and to give them some compatibility.  I was partially right: NSTIC is the Government looking at what already exists out in the market and helping to grow those capabilities by providing some support in the form of standardization and community management.  And that’s been the plan all along, and it makes sense: would you rather have experts build the basic system and then have the Government adopt the core pieces as the technology standard, or would you rather have the Government clean-room a standard and a certification scheme and push it out there for people to use?

Not RealID Not RealID Not RealID

Many people think that NSTIC is RealID by another name.  Aaron Titus did a pretty good job of debunking some of these hasty conclusions.  The interesting thing about NSTIC for me is that users can pick which identity or persona they use for a particular purpose.  In that sense, it actually gives the public a better set of tools for determining how they are represented online and ways to keep these personas separate.  For those of you who haven’t seen some of the organizations that were consulted on NSTIC, their numbers include the EFF and the Center for Democracy and Technology (BTW, donate some money to both of them, please).  A primary goal of NSTIC is to help website owners verify that their users are who they say they are and yet give users a set of privacy controls.

 

Stick in the Mud photo by jurvetson.

Now on to the use cases, I hope you like them:

I have a computer at home.  I go to many websites where I have my public persona, Rybolov the Hero, the Defender of all Things Good and Just.  That’s the identity that I use to log into my official Facebook account, use teh Twitters, log into LinkedIn–basically any social networking and blog stuff where I want people to think I’m a good guy.

Then I use a separate, non-publicized NSTIC identity to do all of my online banking.  That way, if somebody manages to “gank” one of my social networking accounts, they don’t get any money from me.  If I want to get really paranoid, I can use a separate NSTIC ID for each account.

At night, I go creeping around trolling on the Intertubes.  Because I don’t want my “Dudley Do-Right” persona to be sullied by my dark, emoting, impish underbelly or to get an identity “pwned” that gives access to my bank accounts, I use the “Rybolov the Troll” NSTIC ID.  Or hey, I go without using a NSTIC ID at all.  Or I use an identity from an identity provider in a region *cough Europe cough* that has stronger privacy regulations and is a couple of jurisdiction hops away but is still compatible with NSTIC-enabled sites because of standards.

Keys to Success for NSTIC:

Internet users have a choice: You pick how you present yourself to the site.

Website owners have a choice: You pick the NSTIC ID providers that you support.

Standards: NIST just formalizes and adopts the existing standards so that they’re not controlled by one party.  They use the word “ecosystem” in the NSTIC description a lot for a reason.



Posted in NIST, Technical

LOLCATS and NSTIC

Posted April 14th, 2011

Ref: NSTIC
Ref: On the Internet…

on teh internetz...



Posted in IKANHAZFIZMA, NIST, Public Policy, Technical, What Works

Some Comments on SP 800-39

Posted April 6th, 2011

You should have seen Special Publication 800-39 (PDF file, also check out more info on Fismapedia.org) out by now.  Dan Philpott and I just taught a class on understanding the document and how it affects security managers out there doing their jobs on a daily basis.  While the information is still fresh in my head, I thought I would jot down some notes that might help everybody else.

The Good:

NIST is doing some good stuff here trying to get IT Security and Information Assurance out of the “It’s the CISO’s problem, I have effectively outsourced any responsibility through the org chart” mindset and into more of what DoD calls “mission assurance”.  I.e., how do we go from point-in-time vulnerabilities (i.e., things that can be scored with CVSS or tested through Security Test and Evaluation) to briefing executives on the risk to their organization (Department, Agency, or even business) coming from IT security problems?  It lays out an organization-wide risk management process and a framework (layer cakes within layer cakes) to share information up and down the organizational stack.  This is very good, and getting the mission/business/data/program owners to recognize their responsibilities is an awesome thing.

The Bad:

SP 800-39 is good in its philosophy and its general theme of having the non-IT “business owners” take ownership of risk, but when it comes to specifics, it raises more questions than it answers.  For instance, it defines a function known as the Risk Executive.  As practiced today by people who “get stuff done”, the Risk Executive is like a board of the Business Unit owners (possibly as the Authorizing Officials), the CISO, and maybe a Chief Risk Officer or other senior executives.  But without that context, and without asking around to find out what people are doing to get executive buy-in, the Risk Executive seems like a non sequitur.  There are other things like that, but I think the best summary is “Wow, this is great, now how do I take this guidance and execute a plan based on it?”

The Ugly:

I have a pretty simple yardstick for evaluating any kind of standard or guideline: will this be something that my auditor will understand and will it help them help me?  With 800-39, I think that it is written abstractly and that most auditor-folk would have a hard time translating that into something that they could audit for.  This is both a blessing and a curse, and the huge recommendation that I have is that you brief your auditor beforehand on what 800-39 means to them and how you’re going to incorporate the guidance.



Posted in FISMA, NIST, Risk Management, What Works

Reinventing FedRAMP

Posted February 15th, 2011

“Cloud computing is about gracefully losing control while maintaining accountability even if the operational responsibility falls upon one or more third parties.”
–CSA Security Guidance for Critical Areas of Focus in Cloud Computing V2.1

Now enter FedRAMP.  FedRAMP is a way to share Assessment and Authorization information for a cloud provider with its Government tenants.  In case you’re not “in the know”, you can go check out the draft process and supporting templates at FedRAMP.gov.  So far, a good idea, and I really do support what’s going on with FedRAMP, except that somewhere along the line we went astray, because we tried to kluge doctrine that most people understand over the top of cloud computing, which most people don’t really understand.

I’ve already done my part and submitted comments officially; here I just want to put some ideas out there to keep the conversation going.  As I see it, these are/should be the goals for FedRAMP:

  • Delineation of responsibilities between cloud provider and cloud tenant.  Also knowing where there are gaps.
  • Transparency in operations.  Understanding how the cloud provider does their security parts.
  • Transparency in risk.  Know what you’re buying.
  • Build maturity in cloud providers’ security program.
  • Help cloud providers build a “Governmentized” security program.

So now for the juicy part: how would I do a “clean room” implementation of FedRAMP on Planet Rybolov, where “all the Authorizing Officials are informed, the Auditors are helpful, and every ISSO is above average”?  This is my “short list” of how to get the job done:

  • Authorization: Sorry, not going to happen on Planet Rybolov.  At least, authorization by FedRAMP, mostly because it’s a cheat for the tenant agencies–they should be making their own risk decisions based on risk, cost, and benefit.  Acceptance of risk is a tenant-specific thing based on the data types and missions being moved into the cloud, baseline security provided by the cloud provider, the security features of the products/services purchased, and the tenant’s specific configuration on all of the above.  However, FedRAMP can support that by helping the tenant agency by being a repository of information.
  • 800-53 controls: A cloud service provider manages a set of common controls across all of their customers.  Really what the tenant needs to know is what is not provided by the cloud service provider.  A simple RACI matrix works here beautifully (see the sketch after this list), as does the phrase “This control is not applicable because XXXXX is not present in the cloud infrastructure”.  This entire approach of “build one set of controls definitions for all clouds” does not really work because not all clouds and cloud service providers are the same, even if they’re the same deployment model.
  • Tenant Responsibilities: Even though it’s in the controls matrix, there needs to be an Acceptable Use Policy for the cloud environment.  A message to providers: this is needed to keep you out of trouble because it limits the potential impacts to yourself and the other cloud tenants.  A good example would be “Do not put classified data on my unclassified cloud”.
  • Use Automation: CloudAudit is the “how” for FedRAMP.  It provides a structure to query a cloud (or the FedRAMP PMO) to find out compliance and security management information.  Using a tool, you could query for a specific control or get documents, policy statements, or even SCAP assessment content.
  • Changing Responsibilities: Things change.  As a cloud provider matures, releases new products, or moves up and down the SPI stack ({Software|Platform|Infrastructure} as a Service), the balance of responsibilities changes.  There needs to be a vehicle to disseminate these changes.  Normally in the IA world we do this with a Plan of Action and Milestones, but from the viewpoint of the cloud provider, this is more along the lines of a release schedule and/or roadmap.  Not that I’m personally signing up for this, but a quarterly/semi-annual tenant agency security meeting would be a good way to get this information out.
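
To make the RACI idea above a little more concrete, here’s a minimal sketch in Python.  The control IDs are real 800-53 controls, but every responsibility assignment below is a made-up assumption for illustration; the real split comes out of the provider’s own FedRAMP package.

```python
# Hypothetical provider/tenant responsibility matrix (RACI-style) for a few
# 800-53 controls.  Assignments are illustrative assumptions only.
RESPONSIBILITIES = {
    "AC-2 Account Management":        {"provider": "R",  "tenant": "A"},
    "PE-3 Physical Access Control":   {"provider": "RA", "tenant": "I"},
    "CP-9 Information System Backup": {"provider": "R",  "tenant": "C"},
    "AT-2 Security Awareness":        {"provider": None, "tenant": "RA"},  # not provided by the cloud
}

def tenant_owned(matrix):
    """Controls the provider does not cover at all -- the gaps the tenant must fill."""
    return [control for control, row in matrix.items() if not row["provider"]]

print("Tenant must implement on their own:", tenant_owned(RESPONSIBILITIES))
```

The point isn’t the tooling; it’s that the tenant can see, per control, who is Responsible and Accountable and, more importantly, where the gaps are.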

Then there is the special interest comment:  I’ve heard some rumblings (and read some articles; shame on you, security industry press, for republishing SANS press releases) about how FedRAMP would be better accomplished by using the 20 Critical Security Controls.  Honestly, this is far from the truth: a set of controls scoped to the modern enterprise (General Support System supporting end users) or project (Major Application) does not scale to an infrastructure-and-server cloud.  While it might make sense to use the 20 CSC in other places (agency-wide controls), please do your part to squash the idea of using it for cloud computing whenever and wherever you see it.

Ramp photo by ell brown.



Posted in FISMA, Risk Management, What Works

DojoCon DDoS Video

Posted December 16th, 2010

My DDoS presentation at DojoCon on Sunday.  A big thanks to Marcus J Carey for organizing the con and Adrian Crenshaw for doing the recording.

Michael Smith, @rybolov DDoS from Adrian Crenshaw on Vimeo.



Posted in Cyberwar, Speaking, Technical, What Doesn't Work, What Works