A Little Advice From Mike and Lee

Posted April 20th, 2010

Go have a look at what Mike Murray and Lee Kushner have to say on what I endearingly refer to as “Stupid Contractor Tricks”.

Now I know Mike and Lee are supposed to be tactful, and they do a really good job at that.  This post is not about tact.  =)

You need to step back a bit and understand the business model for contractors.  Their margins are low and fixed, and that means a couple of things:

  • You chase large-volume contracts: the margin percentage is the same, but the total net profit is bigger.
  • You can’t keep a bench of people off-project because it rapidly eats into your margin.  For some companies, this means that anybody off-project for 2 weeks or more gets laid off.
  • The name of the game is to win the proposal, get the work, and then figure out how to staff it by rolling existing people onto the new project and bringing in new hires.  This is vastly inefficient.
  • New hires can also backfill contracts where you’ve moved key people off to work something new.

So on to my advice in this particular scenario that Mike and Lee discuss:  Run away as fast as you can from this offer.

There are a couple of other things that I’m thinking about here:

  • A recruiter or HR person from Company A left for Company B and took their Rolodex of candidates.  Hence the surprise offer.  Either that, or Company A is now a sub for Company B or Company A is just the “staffing firm” getting paid $500/signed offer letter and doing business in bulk.
  • The Government usually requires “Commitment Letters” from the people that have resumes submitted on a proposal.  The reason for this is that the Government realizes what kind of jackassery goes on involving staffing, and requiring a signed letter gives the candidate an opportunity to decide up front.
  • If you sign an offer like this, you’re letting down the rest of the InfoSec community who are contractors by letting the recruiters commoditize what we do.  It’s bad for us and it’s bad for the Government.

Other stupid contractor tricks:

  • Making you sign an exclusivity letter saying that they are the only people who can submit your resume on a contract.
  • Making you sign an offer letter then letting the offer linger for 6+ months while you’re unemployed and could really use the ability to move on to a different job.
  • Shopping the resumes of people you have never met and/or do not intend to extend an offer to.
  • Changing the job completely after you have accepted the offer.
  • …and you probably have more that you can put into the comments section below.  =)



A Funny Thing Happened Last Week on Capitol Hill

Posted April 1st, 2010

Well, several funny things happened; they happen every week.  But specifically I’m talking about the hearing in the House Committee on Homeland Security on FISMA reform–Federal Information Security: Current Challenges and Future Policy Considerations.  If you’re in information security and Government, you need to go read through the prepared statements and even watch the hearing.

Also referenced is H.R. 4900, which was introduced by Representative Watson as a modification to FISMA.  I recommend that you have a look at it as well.

Now for my comments and rebuttals to the testimony:

  • On the cost per sheet of FISMA compliance paper: If you buy into the State Department’s cost of $1700 per sheet, you’re absolutely daft.  The arithmetic (the total cost of the security program divided by the total number of sheets of paper) is probably right; it’s the metric that’s meaningless.  In fact, if you do the security bits right, your cost per sheet will go up considerably because you’re doing much more security work while the volume of paperwork is reduced.  See the worked example after this list.
  • Allocating budget for red teams: Do we really need penetration testing to prove that we have problems?  In Mike Smith’s world, we’re just not there yet, and proving that we’re not there is just an excuse to throw the InfoSec practitioners under the bus when they’re not the people who created the situation in the first place.
  • Gus Guissanie: This guy is awesome and knows his stuff.  No, really, the guy is sharp.
  • State Department Scanning: Hey, it almost seems like NIST has this in 800-53.  Oh wait, they do, only it’s given the same precedence as everything else.  More on this later.
  • Technical Continuous Monitoring Tools: Does anybody else think that using products of FISMA (SCAP, CVE, CVSS) as evidence that FISMA is failing is a bit like dividing by zero?  We really have to be careful of this or we’ll destroy the universe.
  • Number of Detected Attacks and Incidents as a Metric: Um, this always gets a “WTF?” from me.  Is the number increasing because we’re monitoring better, because we’re counting a whole bunch of small events as attacks (i.e., IDS flagged on something), or because the number of attacks really is increasing?  I asked this almost 2 years ago and nobody has answered it yet.
  • The Limitations of GAO: GAO are just auditors.  Really, they depend on the agencies to not misrepresent facts and to give them an understanding of how their environment works.  Auditing and independent assessment is not the answer here because it’s not a fraud problem, it’s a resources and workforce development problem.
  • OMB Metrics: I hardly ever talk bad about OMB, but their metrics suck.  Can you guys give me a call and I’ll give you some pointers?  Or rather, check out what I’ve already said about federated patch and vulnerability management then give me a call.
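
To make the cost-per-sheet arithmetic concrete, here’s a minimal sketch.  The $1700 figure is from the testimony; the program costs and page counts below are purely hypothetical:

    # Hypothetical illustration: cost-per-sheet is just program cost divided
    # by paperwork volume, so it punishes you for reducing paperwork.

    def cost_per_sheet(program_cost: float, pages: int) -> float:
        """Total security program cost divided by sheets of compliance paper."""
        return program_cost / pages

    # Paper-heavy program: modest spend, mountains of boilerplate.
    print(cost_per_sheet(10_000_000, 6_000))  # ~$1,667/sheet, near the $1700 figure

    # Doing "the security bits right": more security work, less paper.
    print(cost_per_sheet(12_000_000, 2_000))  # $6,000/sheet, higher yet more secure

A higher number just means more security work per page of paperwork, which is exactly the opposite of what the metric implies.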

So now for Rybolov’s plan to fix FISMA:

  1. You have to start with workforce management. This has been addressed numerous times and has a couple of different manifestations: DoDI 8570.10, contract clauses for levels of experience, role-based training, etc.  Until you have an adequate supply of clueful people to match the demand, you will continue to get subpar performance.
  2. More testing will not help, it’s about execution. In the current culture, we believe that the more testing we do, the more likely the people being tested will be able to execute.  This is highly wrong and I’ve commented on it before.  I think that if it were really a matter of people being lazy or fraudulent, then we would have fixed it by now.  My theory is that the problem is that we have too many wonks who know the law but not the tech and not enough techs who know the law.  In order to do the job, you need both.  This is also where I deviate from the SANS/20 Critical Security Controls approach and the IGs that love it.
  3. Fix Plans of Actions and Milestones. These are supposed to be long-term/strategic problems, not the short-term/tactical application of patches–the tactical stuff should be automated.  The reasoning is that you use these plans for budget requests for the following years.
  4. Fix the budget train. Right now the people with the budget (programs) are not the people running the IT and the security of it (CIO/CISO).  I don’t know if the answer here is a larger dedicated budget for CISO’s staff or a larger “CISO Tax” on all program budgets.  I could really policy-geek out on you here, just take my word for it that the people with the money are not the people protecting information and until you account for that, you will always have a problem.

Sights Around Capitol Hill: Twice Sold Tales photo by brewbooks. Somehow seems fitting, I’ll let you figure out if there’s a connection. =)




Observations on PCI-DSS and Circular Arguments

Posted February 26th, 2010

OK, so I lied unintentionally all those months ago when I said I wouldn’t write any more PCI-DSS posts.

My impetus for this blog post is a PCI-DSS panel at ShmooCon that several of my friends (Jack Daniel, Anton Chuvakin, Mike Dahn, and Josh Corman, in no particular order) were on.  I know I’m probably the pot calling the kettle black, but the panel (as you would expect for any PCI-DSS discussion in the near future) rapidly dissolved into chaos.  So as I’m sitting in the audience watching @Myrcurial’s head pop off, I came to the realization that this is really 4 different conversations disguised as one topic:

  1. The cost-benefit assessment of replacing the credit card # and CVV2 with something else (maybe chip-and-PIN, maybe something entirely different) and what responsibility Visa and MasterCard have toward protecting their business.  This calls for something more like an ROI approach because these are infrastructure projects; see the sketch after this list.  Maybe this CBA has already been done, but guess what: nobody has said anything about the result of that analysis.
  2. Merchants’ responsibility to protect their customers, their business, and each other.  This is the usual PCI-DSS spiel.  The public policy equivalent here is overfishing: everybody knows that if they come back with full nets and by-catch, they’re going to ruin the fishery long-term for themselves and their peers, but they can’t stop the destruction of the fishery by themselves, they need everybody in the community to do their part.  In the same way, merchants not protecting card data mess over each other in this weird shared risk pool.
  3. Processor and bank responsibility.  Typically this is the Tier-1 and Tier-2 guys.  The issue here is that these guys are most of the processing infrastructure.  What works in PCI-DSS for small merchants doesn’t scale up to match these guys, and that’s the story here: how do you make a framework that scales?  I think it’s there (i.e., the tiers and assurance levels in PCI-DSS) but it’s not communicated effectively.
  4. Since this is all a shared risk pool, at what places does it make sense to address particular risks?  I.e., what is the division of roles and responsibilities inside the “community”?  Then how do you make a community standard that is at least reasonably fair to all the parties on this spectrum, Visa and MasterCard included?
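
For conversation #1, a back-of-the-envelope CBA might look like the sketch below.  Every number is invented for illustration; the point is the shape of the analysis, not the figures:

    # Hypothetical back-of-the-envelope CBA for replacing card# + CVV2
    # (e.g., with chip-and-PIN).  All figures below are made up.

    rollout_cost = 8_000_000_000              # one-time infrastructure replacement, USD
    annual_fraud_now = 3_000_000_000          # current yearly fraud losses, USD
    fraud_reduction = 0.40                    # assumed fraction of fraud eliminated
    annual_compliance_savings = 500_000_000   # assumed reduced audit/assessment burden

    annual_benefit = annual_fraud_now * fraud_reduction + annual_compliance_savings
    payback_years = rollout_cost / annual_benefit
    print(f"Simple payback: {payback_years:.1f} years")  # ~4.7 years on these assumptions

If Visa and MasterCard have run this analysis, publishing even the rough shape of it would settle a lot of the shouting.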

PCI-DSS Tag Cloud photo by purpleslog.

There are a bunch of tangential questions, but they almost always circle back to the 4 that I’ve mentioned above:

  • Regulatory capture and service providers
  • The pitfalls of designing a framework by committee
  • Self-regulation v/s legislation and Government oversight
  • Levels of hypocrisy in managing the “community”
  • Effectiveness of specific controls

Now the problem as I see it is that each of these conversations points to a different conversation as a solution, and in doing so they become thought-terminating cliches.  What this means is that when you do a panel, you’re bound to bounce between these 4 different themes without coming to any real resolution.  Add to this a completely irrational audience in which everybody understands only their one piece of the topic, and you have complete chaos when doing a panel or debate.

Folks, I know this is hard to hear, but as an industry we need to get over being crybabies and pointing fingers when it comes to PCI-DSS.  The standard (or a future version of it, anyway) and self-regulation are here to stay, because even if we fix the core problems of payment, we’ll still have security problems: payment schemes are where the money is.  The world as I see it is that the standards process needs to be more transparent, and the people governed by the standard need a seat at the table with their rational, adult, and constructive arguments about what works and what doesn’t work to help them do their job.




20 Critical Security Controls: What They Did Right and What They Did Wrong

Posted January 21st, 2010

Part 1

Part 2

Takeaways from the 20 CSC and what they do right (hey, it’s not all bad):

You have to prioritize. On a system basis, there are maybe 50-60 800-53 controls (out of a number just shy of 200) that need to be built 100% correctly and working every single time.  The rest (I know, I’m putting on my heretic hat here) can lapse from time to time.  For example, if I don’t have good event monitoring, my incident response team doesn’t have much work because I don’t know if I’m pwned or not.  What 20 CSC does is try to reduce that set of stuff I should be concerned about into a set of controls that are technical, tactical, and vulnerability-based, and that track to classes taught by SANS.

Common controls are more important than ever. They help you scope the smaller systems.  In fact, roughly half of the 20 CSC apply to the modern Enterprise and should be absorbed there, meaning that for systems that don’t own infrastructure, there are only 10 or so controls I have to worry much about and 10 more where I just need to be aware of what my CISO provides.

Give examples. I’ll even go as far as to say this:  it should be a capital offense to release a catalog of controls without a reference implementation for both an Enterprise/GSS and a smaller IT system/Major Application inside of it.  20 CSC stops maybe one step short of that, but it’s pretty close in some controls to what I want if they were structured differently.

Security Management v/s IT Management. IT asset inventory, configuration management, change control:  these are IT management activities that somehow get pushed onto the security team because we are more serious about them than the people who should care.  I think 20 CSC does an OK job of just picking out the pieces that apply to security people instead of the “full meal deal” that ITIL and its ilk bring.

Control Key photo by .faramarz.

Now for what they did wrong:

It’s Still Not a Consensus, Dammit! That is, it’s a couple of smart people making a standard in a vacuum and detached from the folks who will have to live by the work that they do.  Seriously, ask around inside the agencies:  who admits to helping develop 20 CSC aside from “yeah, we looked at it briefly”?  And I’m not talking about the list that SANS claims, that’s stripped from the bios of the handful of people who did work on 20 CSC.  Sadly, this is the quick path to fail, it’s like building an IT system without asking the users what they need to get their job done on a daily basis.  Guys, we should know better than this.

It’s Still Not a Standard. It’s still written as guidance–more anecdote than hard requirements.  This isn’t something I can put into a contract and have my contractors execute without modifying it heavily.  It’s also not official, something I’ve already touched on before, which means that it’s not mandatory.  If you want to make this a standard, you need to turn it into ~50 controls, each written as a contracting “shall” (e.g., “The contractor shall forward all server and network device logs to the centralized log management solution”).  More to come on this in the future.

It Has Horrible Metrics. And I’m talking really horrible…it’s like the goatse of security metrics (NSFW link, even though it’s wikipedia).  Why?  Because they’re time-based for controls that are not time-based.  Metrics need to be a way to evaluate that the control works, not the indirect effects of the control.  Of course, metrics are just a number, but at the end of whatever assessment, my auditor/IG/GAO/$foo has to come up with some way to rank the work that I’ve done as a security officer.  If 20 CSC is the vehicle for the audit and the metrics are hosed, it doesn’t matter what I can do to provide real security, the perception from my management is that I don’t know what I’m doing.




20 Critical Security Controls: Control-by-Control

Posted January 20th, 2010

OK, now for the control-by-control analysis of the 20 Critical Security Controls.  This is part 2.  Look here for the first installment.  Read part 3 here.

Critical Control 1: Inventory of Authorized and Unauthorized Devices. This is good: get an automated tool to do IT asset discovery.  Actually, you can combine this with Controls 2, 3, 4, 11, and 14 with some of the data center automation software–you know the usual suspects, just ask your ops folks how you get in on their tools.  This control suffers from scope problems because it doesn’t translate down to the smaller-system scale:  if I have a dozen servers in an application server farm inside of a datacenter, I’ll usually know if anybody adds something.  The metric here (detect all new devices in 24 hours) “blows goats” because you don’t know if you’re detecting everything.  A better test is for the auditor to do their own discovery scans and compare it to the list in the permanent discovery tool–that would be validation that the existing toolset does work–with a viable metric of “percentage of devices detected on the network”.  The 24 hour metric is more like a functional requirement for an asset discovery tool.  And as far as the isolation of unmanaged assets, I think it’s a great idea and the way things should be, except for the fact that you just gave us an audit requirement to implement NAC.
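
A minimal sketch of that validation test, assuming you can export both the auditor’s independent scan results and the permanent discovery tool’s inventory as lists of addresses (the data below is hypothetical):

    # Validate Critical Control 1 the way I'm proposing: compare the auditor's
    # independent discovery scan against the permanent discovery tool's inventory.

    auditor_scan = {"10.0.0.5", "10.0.0.7", "10.0.0.9", "10.0.0.12"}  # ground truth
    tool_inventory = {"10.0.0.5", "10.0.0.7", "10.0.0.12"}            # what the tool knows

    detected = auditor_scan & tool_inventory
    missed = auditor_scan - tool_inventory

    pct_detected = 100 * len(detected) / len(auditor_scan)
    print(f"Devices detected: {pct_detected:.0f}%")  # 75%, the metric that matters
    print(f"Missed by the tool: {sorted(missed)}")   # ['10.0.0.9']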

Critical Control 2: Inventory of Authorized and Unauthorized Software. Sounds like the precursor to whitelisting.  I think this is more apropos to the Enterprise unless your system is the end-user computing environment (laptops, desktops).  Yes, this control will help with stuff in a datacenter to detect when something’s been pwned but the real value is out at the endpoints.  So yes, not happy with the scope of this control.  The metric here is as bad as for Control 1 and I’m still not happy with it.  Besides, if you allow unauthorized software to be on an IT device for up to 24 hours, odds are you just got pwned.  The goal here should be to respond to detected unauthorized software within 24 hours.

Critical Control 3: Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers. This is actually a good idea, provided that you give me a tool to apply the settings automagically because manual configuration sucks.  I think it’s about a dozen different controls all wrapped into one, it’s just trying to do too much in one little control.  The time-based metric for this control is really bad, it’s like watching a train wreck.  But hey, I’ll offer up my own: percentage of IT assets conforming to the designated configuration.  It’s hinted at in the implementation guide, make it officially the metric and this might be a control I can support.
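
Here’s a minimal sketch of that conformance metric, assuming your configuration scanner can give you per-asset pass/fail results against the designated baseline (asset names and checks below are hypothetical):

    # Proposed Critical Control 3 metric: percentage of IT assets whose
    # configuration checks all pass against the designated baseline.

    scan_results = {
        "web01": {"ssh_root_login_disabled": True, "password_max_age_set": True},
        "web02": {"ssh_root_login_disabled": True, "password_max_age_set": False},
        "db01":  {"ssh_root_login_disabled": False, "password_max_age_set": True},
    }

    conforming = [asset for asset, checks in scan_results.items() if all(checks.values())]
    pct = 100 * len(conforming) / len(scan_results)
    print(f"Assets conforming to baseline: {pct:.0f}%")  # 33%, web01 only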

Critical Control 4: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches. This is basically Control 3 for network devices.  The comments there also apply here.

Critical Control 5: Boundary Defense. This control is too much stuff crammed into one space.  As a result, it’s not concise enough to be implemented–it’s all over the map.  In fact, I’ll go as far as to say that this isn’t really one control, it’s a control theme with a ton of controls inside of it.  The “audit requirements” here are going to utterly kill me as a security manager because there is so much of a disparity between the control and the actual controls therein.

Critical Control 6: Maintenance, Monitoring, and Analysis of Audit Logs. Some of this control should be part of Controls 3 and 4 because, let’s be honest here, you’re setting up logging on devices the way that the hardening guide says you should.  The part that’s needed in this control is aggregation of logs and review of logs–get them off all the endpoints and into a centralized log management solution.  This is mentioned as the last “advanced” implementation technique but if you’re operating a modern Enterprise, I don’t see how you can get the rest of the implementation done without some kind of SIEM piece.   I just don’t get the metric here, again with the 24 hours.  How about “percentage of devices reporting into the SIEM”?  Yeah, that’s the easy money here.  The testing of this control makes me do a facepalm:  “At a minimum the following devices must be tested: two routers, two firewalls, two switches, ten servers, and ten client systems.”  OK, we’ve got a LAN/WAN with 15000 endpoints and that’s all we’re going to test?
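
A sketch of the “percentage of devices reporting into the SIEM” metric, assuming you can pull a device inventory and a last-seen timestamp per log source out of the SIEM (all names and times below are hypothetical):

    # Proposed Critical Control 6 metric: what fraction of inventoried devices
    # has sent logs to the SIEM within the last 24 hours?

    from datetime import datetime, timedelta

    inventory = {"fw01", "rtr01", "web01", "web02", "db01"}
    last_seen = {  # per-source timestamps exported from the SIEM
        "fw01": datetime(2010, 1, 20, 9, 30),
        "web01": datetime(2010, 1, 20, 8, 15),
        "db01": datetime(2010, 1, 17, 3, 0),  # stale: stopped logging days ago
    }

    now = datetime(2010, 1, 20, 12, 0)
    reporting = {dev for dev in inventory
                 if dev in last_seen and now - last_seen[dev] <= timedelta(hours=24)}

    print(f"Reporting into SIEM: {100 * len(reporting) / len(inventory):.0f}%")  # 40%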

Critical Control 7: Application Software Security. You keep using those words, I do not think they mean what you think they mean.  Application security is a whole different world and 20 CSC doesn’t even begin to scratch the surface of it.  Oh, but guess what?  It’s a tie-in to the 25 Most Dangerous Programming Errors which is about all this control is:  a pointer to a different project.  The metric here is very weak because it’s not tied back to the actual control.

Critical Control 8: Controlled Use of Administrative Privileges. This should be part of Controls 3 and 4, along with something about getting an Identity and Access Management system so that you have one ID repository.  I know this is a shocker to you, but the metric here sucks.

Critical Control 9: Controlled Access Based on Need to Know. This is a great idea, but as a control it’s too broad to achieve, which is why the 20 CSC were created in the first place.  What do we really want here?  Network share ACLs are mentioned, which is a control in itself, but the rest of this is hazy and leaves much room for interpretation.  Cue “audit requirements” and the part where Rybolov says “If it’s this hazy, it’s not really a standard, it’s a guideline that I shouldn’t be audited against.”

Critical Control 10: Continuous Vulnerability Assessment and Remediation. All-in-all, not too bad.  I would suggest “average time to resolve scan findings” as the metric here, or even something as “hokey” as the FoundScan metric just to gauge overall trends.
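
A minimal sketch of the “average time to resolve scan findings” metric, assuming your scanner exports open and close dates per finding (the dates below are invented):

    # Proposed Critical Control 10 metric: average days to resolve
    # vulnerability scan findings, computed from open/close dates.

    from datetime import date

    findings = [  # (opened, closed) pairs exported from the scanner
        (date(2010, 1, 4), date(2010, 1, 11)),
        (date(2010, 1, 6), date(2010, 1, 8)),
        (date(2010, 1, 7), date(2010, 1, 21)),
    ]

    days_open = [(closed - opened).days for opened, closed in findings]
    print(f"Average time to resolve: {sum(days_open) / len(days_open):.1f} days")  # 7.7 days

Track that number month over month and you get the trend line the control is actually after.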

Arm Control photo by Crotchsplay.

Critical Control 11: Account Monitoring and Control. Haven’t we seen this before?  Yep, this should be incorporated into Controls 8, 3, and 4.  However, periodic account reviews are awesome if you have the patience to do them.

Critical Control 12: Malware Defenses. OK, this isn’t too bad.  Once again, the metric sucks, but I do like some of the testing steps.  The way I would test this is to compare our system inventory with my total list of devices.  A simple diff later, we have a list of unmanaged devices.

Critical Control 13: Limitation and Control of Network Ports, Protocols, and Services. Host firewalls were not what I thought of here… I’m thinking more like firewalls and network segmentation where you have to get change control approval to add a firewall rule.  As far as the host setup goes, this should be part of Control 3.

Critical Control 14: Wireless Device Control. Not bad, but this should be dumped into a technical standard that you use like a hardening guide.  Metric here still sucks, but I don’t really need to say this again… oh wait, I just did.

Critical Control 15: Data Loss Prevention. Puh-lease.  I’ll be the first to admit, I’m a big believer in DLP done right, and it’s an awesome tool to solve some of the unique problems of data leakage.  But I don’t think that the market is mature enough to add it into your catalog of controls.  Also, this will fall flat on its face if your system is just a web application cluster:  DLP addresses the endpoints (desktops, laptops, mobiles) and the outbound gateways (email, web, etc).  The problem with this control is that if you don’t buy and implement a full DLP solution (cue Rich Mogull and his DLP guide), there isn’t anything else that has a similar capability.  This is one of those controls where the 800-53 mapping gets really creative (Good Ship Lollipop creative) because we’re tapdancing around the issue that DLP-type solutions aren’t specifically required in 800-53.

These controls don’t have automated ways to implement and test them:

Critical Control 16: Secure Network Engineering. This control is a steaming crater.  It’s very much a guideline instead of an auditable standard.

Critical Control 17: Penetration Tests and Red Team Exercises. Not bad.  Still too easy to shop around for the bargain-basement penetration test team.  But yeah, pretty good overall.

Critical Control 18: Incident Response Capability. Good control.  Hard to test/audit except to look at after-incident reports.

Critical Control 19: Data Recovery Capability. Not bad here.  Not real COOP/DR/ITCP but about on par with typical controls frameworks.

Critical Control 20: Security Skills Assessment and Appropriate Training to Fill Gaps. Good idea.  Hard to implement without something like 8570.10 to give you a matrix by job position.  If you want to change the world here, give your own mapping in the control.




Opportunity Costs and the 20 Critical Security Controls

Posted January 13th, 2010

This is a multi-post series.  You are at post 1.  Read post 2 here.  Read post 3 here.

This post begins with me.  For the past hour or so I’ve been working on a control-by-control objective analysis of SANS 20 Critical Security Controls.  This is a blog post I’ve had sitting “in the hopper” for 9 months or so.  And to be honest, I see some good things in the 20 CSC literature.  I think that, from a holistic perspective, the 20 CSC is an attempt at creating a prioritization of this huge list of stuff that I have to do as an information security officer–something that’s really needed.  I go into 20 CSC with a very open mind.

Then I start reading the individual controls.  I’m a big believer in Bottom-Line-Up-Front, so let me get my opinion out there now: 20 Critical Security Controls is crap.  I’m sorry John G and Eric C.  Not only is 20 CSC bad from a perspective of controls, metrics, and auditing tests, but if it’s implemented across the Government, it will be the downfall of security programs.  I really believe this.

Now on to the rationale….

Opportunity Costs. I can’t get that phrase out of my head.  And I’m not talking money just yet–I’m talking time.  See, I’m an IT security guy working for a contractor supporting a Government agency–just like 75% of the people out there.  I have a whole bunch of things to do–both in the NIST guidelines and organizational policy.  If you add anything else to the stack without taking anything away, all you’ve done is dilute my efforts.  And that’s why I can’t support 20 CSC: they’re an unofficial standard that does not achieve its stated primary goal of reducing the amount of work that I have to do.  I know they wanted to create a parallel standard focusing on technical controls, but you have to have one official standard, because if it’s not official, I don’t have to do it, and then it’s not really a standard anymore, is it?

Scoping Problems. We really have 2 tiers inside of an agency that we need to look at: the Enterprise and the various components that depend on the Enterprise.  Let’s call them… general support systems and major applications.  Now the problem here is that when you make a catalog of controls, some controls are more applicable to one tier than the other.  With 20 CSC, you run into the classic blunder of trying to reinvent the wheel for every small system that comes along.

Threat Capabilities != Controls. And this is maybe the secret why compliance doesn’t work like we think it will.  In a nice theoretical world, it’s a threat-vulnerability-countermeasure coupling and the catalog of controls accounts for all likely threats and vulnerabilities.  Well, it doesn’t work that way:  it’s not a 1-to-1 ratio.  Typical security management frameworks start from a regulatory perspective and work their way down to technical details while what we really want to do is to build controls based on the countermeasures that we need.  So yeah, 20 CSC has the right idea here, the problem is that it’s a set of controls created by people who don’t believe in controls–the authors have the threat and vulnerability piece down and some of the countermeasures but they don’t understand how to translate that into controls to give to implementers and their auditors.  The 20 CSC guys are smart, don’t get me wrong, but I can’t help but get the feeling that they don’t understand how the “rest of the world” is getting their job done out there in the Enterprise.

The Mapping is Weak. There is a traceability matrix in the 20 CSC to map each control back to NIST controls.  It’s really bad, mostly because the context of 800-53 controls doesn’t extend into 20 CSC.  I have serious heartburn with how this is presented inside the agencies because we’re not really doing audits using the 20 CSC, we’re using the mapping of NIST controls with a weird subtext and it’s a “voluntary assessment” not an audit.

Guidelines?!?!?! This is basic stuff.  If it’s something you audit against, it *HAS* to be a standard.  Guidelines are recommendations and can add in more technique and education.  Standards are like hard requirements: they only work if they’re narrowly-scoped, achievable, and testable.  This isn’t specific to 20 CSC; the NIST Risk Management Framework (intended to be a set of guidelines also) suffers from this problem, too.  However, if your intent is to design a technical security and auditing standard, you need to write it like a standard.  While I’m up on a soapbox, for the love of $Deity, quit calling security controls “requirements”.

Auditor Limitations. Let’s face it: how do I get an auditor to add an unmanaged device to the network and know whether we’ve detected it or not?  This is a classic mistake in the controls world:  assuming that we have enough people with the correct skillsets who can conduct intrusive technical tests without the collusion of my IT staff.

And the real reason why I dislike the 20 Critical Security Controls:

Introduction of “Audit Requirements”. One of the chief criticisms of the NIST Risk Management Framework is that the controls are not specific enough.  20 CSC falls into this trap of nonspecificity (Controls 7, 8, 9, and 15, I’m talking to you) and is not official guidance–a combination that means that my auditor has just added requirements to my workload simply because of how they interpret the control.  This is very dangerous and why I believe 20 CSC will be the end of IT security as we know it.

In future posts (I had to break this into multiple segments):

  • Control-by-control analysis
  • What 20 CSC got right (Hey, some of it is good, just not for the reasons that it’s supposed to be good)

SA-2 “Guideline” photo by cliff1066™.



