A Layered Model for Massively-Scaled Security Management

Posted August 24th, 2009

So we all know the OSI model by heart, right?  Well, I’m offering up my own model of technology management. Really, at this stage I’m looking for feedback.

  • Layer 7: Global Layer. This layer is regulated by treaties with other nation-states or by international standards.  I fit cybercrime treaties in here, along with the RFCs that make the Internet work.  The problem is that security hasn’t really reached up to this level much, unless you want to count multinational vendors and top-level CERT coordination centers like CERT-CC.
  • Layer 6: National-Level Layer. This layer is an aggregation of Federations and industries and primarily consists of Federal law and everything lumped into a “critical infrastructure” bucket.  Most US Federal laws fit into this layer.
  • Layer 5: Federation/Community Layer. What I’m talking about here is an industry federated or organized into some sort of community.  Think major verticals such as energy supply.  It’s not a coincidence that this layer lines up with DHS’s critical infrastructure and key resources breakdown, but it can also refer to self-regulating industries, such as those governed by PCI-DSS or NERC.
  • Layer 4: Enterprise Layer. Most security thought, products, and tools are focused on this layer and the layers below.  This is the realm of the CSO and CISO and roughly equates to a large corporation.
  • Layer 3: Project Layer. Collecting disparate technologies and data into a single piece, such as the LAN/WAN, a web application project, etc.  In the Government world, this is the location for the Information System Security Officer (ISSO) or the System Security Engineer (SSE).
  • Layer 2: Integration Layer. Hardware, software, and firmware combine to become products and solutions; this layer is focused primarily on engineering.
  • Layer 1: Code Layer. Down into the code that makes everything work.  This is where the application security people live.

There are tons of ways to use the model.  I’m thinking each layer has a set of characteristics like the following (there’s a rough sketch in code after the list):

  • Scope
  • Level of centralization
  • Responsiveness
  • Domain expertise
  • Authority
  • Timeliness
  • Stakeholders
  • Regulatory bodies
  • Many more that I haven’t thought about yet
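To make the idea concrete, here’s a minimal sketch of the model as a data structure.  The layer names come straight from the list above; the characteristic fields and example values are hypothetical placeholders, not an authoritative scoring.

```python
# Rough sketch of the layered model as data.  Layer names follow the post;
# the characteristics and example values are hypothetical placeholders.
LAYERS = {
    7: "Global",
    6: "National-Level",
    5: "Federation/Community",
    4: "Enterprise",
    3: "Project",
    2: "Integration",
    1: "Code",
}

# Example characteristics for one layer -- purely illustrative values.
example_profile = {
    "layer": 4,                        # Enterprise
    "scope": "single large corporation",
    "centralization": "high",
    "responsiveness": "days to weeks",
    "authority": "CSO/CISO",
    "stakeholders": ["CSO", "CISO", "IT operations", "business units"],
}

def describe(problem: str, lowest_layer: int, highest_layer: int) -> str:
    """Trace a problem up and down the stack, lowest workable layer first."""
    span = ", ".join(LAYERS[n] for n in range(lowest_layer, highest_layer + 1))
    return f"{problem}: spans layers {lowest_layer}-{highest_layer} ({span})"

print(describe("Security Event and Incident Monitoring", 1, 7))
```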

Chocolate Layer Cake photo by foooooey.

My whole point for this model is that I’m going to try to use it to describe the levels at which a particular problem resides and to stimulate discussion on the appropriate level at which to solve it.  For instance, take a technology and you can trace it up and down the stack. Say Security Event and Incident Monitoring:

  • Layer 7: Global Layer. Coordination between national-level CERTs in stopping malware and hacking attacks.
  • Layer 6: National-Level Layer. Attack data from Layer 5 is aggregated and correlated to respond to large incidents on the scale of Cyberwar.
  • Layer 5: Federation/Community Layer. Events are filtered up from Layer 4, and only the confirmed events of interest are correlated to determine trends.
  • Layer 4: Enterprise Layer. Events are aggregated by a SIEM with events of interest flagged for response.
  • Layer 3: Project Layer. Logs are analyzed in some manner.  This is most likely the highest layer in the model that we consistently manage to do today.
  • Layer 2: Integration Layer. Event logs have to be written to disk and stored for a period of time.
  • Layer 1: Code Layer. Code has to be programmed to create event logs.

I do have an ulterior motive.  I created this model because most of our security thought, doctrine, tools, products, and solutions work at Layer 4 and below.  What we need is discussion on Layers 5 and above, because when we try to create massively-scaled security solutions, we run into a drought of information about what to do above the Enterprise.  There are other bits of doctrine that I want to bring up, like trying to solve any problem at the lowest level at which it makes sense.  In other words, we can use the model to propose changes to the way we manage security.  Say we have a problem like the lack of data on data breaches: when we say we need a Federal data breach law, what we’re really saying is that because of the scope, the amount of responsibility, and the competing interests at Layer 5, we need a solution at Layer 6.  In any case, we should start at the bottom and work our way up the model until we find an adequate scope and scale.

So, this is my question to you, Internet: have I just reinvented enterprise public policy, IT architecture (Federal Enterprise Architecture) and business blueprinting, or did I create some kind of derivative view of technology, security, and public policy that I can now use?




Save a Kitten, Write SCAP Content

Posted August 7th, 2009

Apparently I’m the Internet’s SCAP Evangelist according to Ed Bellis, so at this point all I can do is shrug and say “OK, I’ll teach people about SCAP”.

Right now there is a “pretty OK” framework for SCAP.  I.e., we have published standards, and there are some SCAP-certified tools out there to do patch and vulnerability management.

What’s missing right now is SCAP content.  I don’t think this is going to get solved en masse; it’s more like there needs to be an awareness campaign directed at end-users, vulnerability researchers, and people who write small-ish tools.
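For anyone wondering what “SCAP content” even looks like, here’s a rough sketch that emits a bare-bones XCCDF-style benchmark with one rule pointing at an OVAL check.  Treat it as a structural illustration only: the IDs, file names, and OVAL definition name are made up, and a real benchmark needs more metadata to validate against the schema.

```python
# Rough sketch: emit a minimal XCCDF-style benchmark with one rule that
# references an OVAL check.  IDs, hrefs, and the OVAL definition name are
# hypothetical; real content needs more metadata to be schema-valid.
import xml.etree.ElementTree as ET

XCCDF = "http://checklists.nist.gov/xccdf/1.1"
OVAL = "http://oval.mitre.org/XMLSchema/oval-definitions-5"

def q(tag):
    """Qualify a tag name with the XCCDF namespace."""
    return "{%s}%s" % (XCCDF, tag)

benchmark = ET.Element(q("Benchmark"), {"id": "example-benchmark"})
ET.SubElement(benchmark, q("title")).text = "Example hardening benchmark"
ET.SubElement(benchmark, q("version")).text = "0.1"

rule = ET.SubElement(benchmark, q("Rule"),
                     {"id": "rule-disable-telnet", "severity": "high"})
ET.SubElement(rule, q("title")).text = "Telnet service is disabled"
check = ET.SubElement(rule, q("check"), {"system": OVAL})
ET.SubElement(check, q("check-content-ref"),
              {"href": "example-oval.xml", "name": "oval:example.telnet:def:1"})

ET.ElementTree(benchmark).write("example-xccdf.xml",
                                xml_declaration=True, encoding="utf-8")
```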

So I sat around at home trying to figure out how to get people to use and write more SCAP content and finally settled on “Every time you use SCAP content, a kitten runs free”.

Anyway, this is a presentation I gave at my local OWASP chapter.




The CyberArmy You Have…

Posted July 27th, 2009

In the military, there is a saying: “You go to war with the army you have, not with the army you wish you had.”  In other words, you do all your training in peace and once you go off to war, it’s too late to fix it. Not that I agree with all the Cyber Pearl Harbor doomsayers, but I think that the CyberArmy we’ve got now isn’t the right one for the job.

So, let’s talk about services firms; contractors fit into this nicely since, well, they perform services.

There are 4 types of work that services firms do (and contractors are services firms):

  • Brains: nobody else has done this before, but we hire a whole bunch of PhD people who can research how to get this done.  We charge really high prices but it’s because in the downtime, our people are doing presentations, going to symposiums, and working on things that you don’t even know exist.  Think old-school L0pht.  Think half of Mitre.  Think sharks with friggin laser beams, lasing and eating everything in sight.
  • Gray Hair: We’ve done this before and know most of the problems that we can experience, along with the battle scars to prove it.  We charge quite a bit because we’re good and it takes less of us to get it done than our competitors.  Think most good IT engineers.  Think DLP and DAM right now.  Think infantry platoon sergeants.
  • Procedural: There is a fairly sizeable market starting to grow around this service so we have to standardize quite a bit to reduce our costs to provide the service.  We use methodologies and tools so that we can take an army of trained college graduates, put them in a project, and they can execute according to plan.  Think audit staff.  Think help desk staff.  Think of an efficient DMV.
  • Commodity: There isn’t a differentiator between competitors, so companies compete on price.  The way you make money is by making your cost of production lower or selling in volume.  Think Anti-Virus software (sorry friends, it’s true).  Think security guards.  Think peanut butter.

This is also the maturity model for technology, so you can take any kind of tech, drop it in at the top, and it percolates down to the bottom.  Think Internet use: First it was the academics, then the contractors, then the technology early adopters on CompuServe, then free Internet access to all.  For most technology, it’s a 5-10 year cycle to get from the top to the bottom.  You already know this: the skills you have now will be obsolete in 5 years.

Procedural Permit Required photo by Dawn Endico.

Now looking at government contracting….

As a government contractor, you are audited financially by DCAA and they add up all your costs and let you keep a fixed margin of around 13-20%.  You can pull some Stupid Contractor Tricks ™ like paying salaries and working your people 60 hours/week (this is called uncompensated overtime), but there still is a limit to what you can do.
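A quick back-of-the-envelope sketch of why uncompensated overtime is attractive under a fixed margin.  The salary, weeks, and dollar figures here are made up for illustration, not pulled from any actual contract.

```python
# Back-of-the-envelope math on uncompensated overtime.  All numbers are
# hypothetical; the point is that a salaried employee billed for more hours
# lowers the effective hourly cost, which matters when margin is capped.
ANNUAL_SALARY = 100_000          # hypothetical fully-loaded salary, USD
WEEKS_BILLED = 48

def effective_hourly_cost(hours_per_week: float) -> float:
    return ANNUAL_SALARY / (WEEKS_BILLED * hours_per_week)

for hours in (40, 60):
    print(f"{hours} hrs/week -> effective cost ${effective_hourly_cost(hours):,.2f}/hr")
# 40 hrs/week -> ~$52/hr; 60 hrs/week -> ~$35/hr.  Same salary, more billable
# hours, so more absolute profit even at the same capped margin percentage.
```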

This fixed margin forces you into high-volume work to turn a profit.  This in turn forces you into procedural or even commodity work.

If your project is strictly time and material, you make more money off the cheaper folks but for quality of work reasons, you have to provide them with a playbook of some sort.  This pushes you directly into the procedural tier.

There are some contractors providing services at the Brains and Gray Hair stages, only they are few and far between.

Traditional types of contractor security services:

  • Security Program Management and Governance
  • Audit and Penetration Testing
  • Compliance and Certification and Accreditation Support
  • Security Operations (think Managed Security Services)

Then back around to cyberwar…

Cyberwar right now is definitely at the top of the skill hierarchy.  We don’t have an official national strategy.  We have a Cybersecurity Coordinator position that hasn’t been filled yet.  We need Brains people and their skills to figure this out.  In fact, we have a leadership drought.

And yet the existing contractor skillset is based on procedural offerings.  To be honest, I see lots of people with cybersecurity offerings, but what they really have is rebranded service offerings, because the skill sets of the workforce haven’t changed.

Some of the procedural offerings work, but only if you keep them in limited scope.  The security operations folks have quite a few transferable skills, as do the pen-testers.  However, these are all at the tactical level.  The managerial skills don’t really transfer at all unless you have people who are just well-rounded, usually with some kind of IT ops background.

But, and this is the important thing, we’re not ready to hire contractors until we do get some leadership in place. And that’s why the $25M question right now is “Who will that person be?”  Until that time, anything from the vendors and contractors is just posturing.

Once we get national leadership and direction, then it’s a matter of lining up the services being offered with the needs at the time.  What I think we’ll find out at that point is that we’re grossly overrepresented in some areas and sadly underrepresented in others, and that these areas are directly inverse to the skills that our current workforce has.  This part scares me.

We need workforce development.  There are some problems with this, mostly because it takes so long to “grow” somebody with the skills to get the job done–maybe 5-10 years with education and experience.  Sadly, about the time we build this workforce, the problem will have slid down the scale so that procedural offerings will probably work.  This frustrates me greatly.

The summary part…

Well, just like I don’t want to belong to any club that would stoop so low as to have me as a member, it could be that almost all the contractors offering services aren’t the people that you want to hire for the job.

But then again, we need to figure out the leadership part first.  Sadly, that’s where we need the most love.  It’s been how many months with a significant leadership vacuum?  9? 12? 7 years?

The most critical step in building a cyberwar/cyberdefense/cyberfoo capability is in building a workforce.  We’re still stuck with the “option” of building the airplane while it’s taxiing down the runway.




Surprise Report: Not Enough Security Staff

Posted July 22nd, 2009

Some days I feel like people are reading this blog and getting ideas that they turn around and steal.  Then I take my pills and my semi-narcissistic feelings go away.  =)

So anyway, B|A|H threw me for a loop this afternoon.  They released a report on the cybersecurity workforce.  You can check out the article on The Register or you can go get the report from here.  Surprise, we don’t have anywhere near enough security people to go around.  I’ve been saying this for years, I think B|A|H is stealing my ideas by using Van Eck phreaking on my brain while I sleep.

 Some revelations from the executive summary:

  • The pipeline of potential new talent is inadequate.  In other words, demand is growing and the number of people we’re training is not growing to meet it.
  • Fragmented governance and uncoordinated leadership hinder the ability to meet federal cybersecurity workforce needs.  Nobody has so far been able to articulate how we build an adequate supply of security folks to keep up with demand, and most of our efforts have been at the execution level.
  • Complicated processes and rules hamper recruiting and retention efforts.  It takes maybe 6 months to hire a government employee, which is entirely unsatisfactory.  I was cleared for my current project for 3 years, took a 9-month break, and then it took me 6 months to get cleared again.
  • There is a disconnect between front-line hiring managers and government’s HR specialists.  Since the HR folks don’t know what the real job description is, hiring information security people is akin to buzzword bingo.

These are all the same problems the private sector deals with, only in true Government stylie, we have it on a larger scale.

 

He’s Part of the Workforce photo by pfig.

Now for the things that no self-respecting contractor will admit (hmm, what does this say about me?  I’m not sure yet)….

If you do not have an adequate supply of workers in the industry, outsourcing cybersecurity tasks to contractors will not work.  It works something like this:

  • High Demand = High Bill Rate
  • High Bill Rate = More Contractor Interest
  • More Contractor Interest + High Bill Rate + Low Supply = High Rate of Charlatans

Contractors do not have the labor pool to tap into to satisfy their contracts.  If you want to put on your cynic hat (all the Guerilla-CISO staff have theirs permanently attached with wood screws), you could say that the B|A|H report was trying to get the Government to pump more money into workforce development so that they could then hire those people and bill them back to the Government.  It’s a twisted world, folks.

Current contractor labor pools have some of the skills necessary for cybersecurity but not all.  More info in future blog posts, but I think a simple way to summarize it is to say that our current workforce is “tooled” around IT security compliance and that we are lacking in large-scale attack and defense skills.

Not only do we need more people in the security industry, but we need more security people in Government.  There is a set of tasks called “inherently governmental functions” that cannot be delegated to contractors.  Even if you solely increase the contractor headcount, you still have to increase the government employee headcount in order to manage the contractors.




Federated Vulnerability Management

Posted July 14th, 2009

Why hello there, private sector folks.  It’s no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale.  Glad to meet you; I hear you’ve got the same problems, only not at the same kind of scale as the US Federal Government.  Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments.  We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”.  We’re also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-’80s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISO’s Office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or, worse, an .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what’s been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something, because our spreadsheet doesn’t match up with the “facts on the ground”.
  • It doesn’t tie in with our other management tools like Plans of Action and Milestones (POA&M).  Those are usually managed in a different application than the technical parts, which means we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really what I want is for every agency to have an “orchestrator” a la Ed Bellis’s little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort.  Servers, workstations, laptops, all of it.  Yes, databases too.  Printers–yep.  If we can get network devices to report their config info via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too.  We don’t even really have to push patches using these tools–what I’m primarily concerned with at this point is the ability to pull reports.
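As a rough sketch of what the orchestrator might store per asset (the field names are my own invention, not from any particular product):

```python
# Rough sketch of an orchestrator's per-asset record.  Field names are
# invented for illustration; a real tool would key assets off CPE names,
# SCAP results, and whatever the patch management system reports.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Asset:
    hostname: str
    system: str                         # the FISMA "system"/bucket it belongs to
    managed: bool                       # reports into the patch management system?
    installed_patches: set[str] = field(default_factory=set)
    open_findings: list[str] = field(default_factory=list)
    last_report: date | None = None

inventory = [
    Asset("web01", "Public Web", managed=True,
          installed_patches={"MS09-001"}, last_report=date(2009, 7, 13)),
    Asset("db07", "Public Web", managed=False),   # discovered by a scan, unmanaged
]
```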

I group all of my IT assets in my system into a bucket of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.

We do periodic network scanning to identify everything on our network and feed the results into the orchestrator.  We do regular vulnerability scans, and any findings feed into the orchestrator as well.  The more data, the better the aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices.  Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets.  Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it.  Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
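In code, the correlation step is basically a set difference: anything the scanners see that the patch management system doesn’t know about becomes a ticket.  A minimal sketch with made-up hostnames:

```python
# Correlation sketch: anything the scanners see that the patch management
# system doesn't know about becomes a ticket.  Hostnames are hypothetical.
scanned_hosts = {"web01", "db07", "printer-3f"}     # from network/vulnerability scans
managed_hosts = {"web01", "app02"}                  # reporting into patch management

unmanaged = scanned_hosts - managed_hosts
for host in sorted(unmanaged):
    print(f"ticket: {host} is on the network but not reporting patch status")
```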

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
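The “send down the check” part could look something like this.  The check format here is invented for illustration; in real life this is exactly what OVAL content is for, evaluated by an SCAP-validated scanner rather than toy logic.

```python
# Sketch of evaluating a pushed vulnerability check against per-asset patch
# reports.  The check format and identifiers are invented; real content
# would be OVAL, run by an SCAP-validated scanner.
pushed_check = {
    "id": "IAVA-2009-EXAMPLE-001",               # hypothetical identifier
    "fixed_by_patch": "MS09-001",
}

installed_patches = {                            # what each asset reported
    "web01": {"MS09-001", "MS09-002"},
    "db07": set(),
}

status = {
    host: pushed_check["fixed_by_patch"] not in patches
    for host, patches in installed_patches.items()
}
print(status)   # {'web01': False, 'db07': True} -- True means still vulnerable
```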

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department; the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government.  Want to be bold?  Let Vivek K and the Sunlight Foundation at the data feeds and have a truly open and transparent “Unbreakable Government 2.1”.  Who needs FISMA report cards when our vulnerability data is on display?
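The federation part is mostly aggregation: each level summarizes what it got from below and passes the summary upstream.  A toy version, with made-up agency names and counts:

```python
# Toy rollup: each agency summarizes open vs. fixed findings and the
# Department aggregates the summaries for its own upstream feed.
# Agency names and counts are hypothetical.
from collections import Counter

agency_feeds = {
    "Agency A": Counter(open=12, fixed=340),
    "Agency B": Counter(open=3, fixed=98),
}

department_view = Counter()
for summary in agency_feeds.values():
    department_view.update(summary)       # Counter.update() adds the counts

print(department_view)   # Counter({'fixed': 438, 'open': 15}) -- the dashboard view
```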

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration.  OK, I understand what you’re saying with the firewall pictures and zones of trust; now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere.  OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information; this is basic data flow stuff.  US-CERT in a future incarnation could be the top-level aggregator, maybe.  Right now I would be happy building aggregation up to the Department level, because that’s the level at which we’re graded.

Understanding.  Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier by aggregation, correlation, status reporting, and dashboarding.  These are all concepts behind good IT management, so why shouldn’t we apply them to security management also?  Yes, I’ll have times when I’m behind on something or another, but guess what, I’m behind today and you just don’t know it.  However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time to understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in.  It has to be reporting in 100% across the board, or you don’t have anything–back to spreadsheet hell for you.  And simply put, why don’t you have everything in the patch management system already?  Come on, that’s not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  But yes, that’s much awesome: smaller, easily-satisfied POA&Ms are much easier to manage, provided that the administrative overhead for each of them is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof becomes easier because it’s all automated.  Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.

Interfaces with existing FISMA management tools.  This one is tough.  But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results.  This class of software exists because none of the tools vendors really understand how the Government does security management, and I mean NONE of them.  There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly have become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups.  It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.




Security Automation Developers Conference Slides

Posted July 2nd, 2009

Eh? What’s that mean?  Developer Days is a weeklong conference where they get down into the weeds about the various SCAP schemas and how they fit into the overall program of security automation. 

Highlights and new ideas:

Remedial Markup Language: Fledgling schema to describe how to remediate a vulnerability.  A fully automated security system would scan and then use the RML content to automagically fix the finding… say, changing a configuration setting or installing a patch.  This would be much awesome if combined with CVE/CWE so you have a vulnerability scanner that scans and fixes the problem.  It also needs to be kept in a bottle, because the operations guys will have a heart attack if we are doing this without any human intervention.
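Just to make the concept concrete, here’s a toy version of “scan, then apply the matching remediation content.”  The finding IDs and commands are invented, and as noted above, any real implementation would want a human approval gate before acting.

```python
# Toy illustration of remediation content: map a finding to a fix action.
# IDs and commands are invented; a real system would use RML/OVAL content
# and, per the operations folks, a human approval gate before acting.
remediation_content = {
    "CCE-EXAMPLE-TELNET": ["chkconfig telnet off"],          # hypothetical
    "CVE-2009-EXAMPLE":   ["yum update -y openssl"],
}

scan_findings = ["CCE-EXAMPLE-TELNET", "CVE-2009-EXAMPLE"]

for finding in scan_findings:
    for action in remediation_content.get(finding, []):
        print(f"would run (after approval): {action}")
```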

Computer Network Defense: There is a pretty good scenario slide deck on using SCAP to automate hardening, auditing, monitoring, and defense.  The key from this deck is how the information flows using automation.

Common Control Identifier: This schema is basically a catalog of controls (800-53, 8500.2, PCI, SOX, etc.) in XML.  The awesomeness with this is that one control can contain a reference implementation for each technology and the checklist to validate it in XCCDF.  At this point, I get all misty…
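To illustrate the idea (not the actual schema–I’m making up the field names), one catalog entry could carry the control text plus per-technology reference implementations and checks:

```python
# Illustration only: a control catalog entry that maps one control to
# per-technology reference implementations and XCCDF checks.  Field names
# and rule IDs are invented and do not reflect the actual schema.
control_entry = {
    "control_id": "AC-7",                       # e.g. an 800-53 control
    "title": "Unsuccessful Login Attempts",
    "frameworks": ["NIST 800-53", "DoDI 8500.2", "PCI DSS"],
    "implementations": {
        "windows-2003": {
            "reference": "Set account lockout threshold to 5",
            "xccdf_rule": "rule-account-lockout-threshold",   # hypothetical rule id
        },
        "rhel-5": {
            "reference": "Configure pam_tally in /etc/pam.d/system-auth",
            "xccdf_rule": "rule-pam-tally-deny",
        },
    },
}

# A compliance tool could walk the catalog and pull the right check per asset:
print(control_entry["implementations"]["rhel-5"]["xccdf_rule"])
```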

Open Checklist Interactive Language: This schema is for capturing questionnaires.  Think managerial controls, operational controls, policy, and procedure captured in electronic format and fed into the regular mitigation and workflow tools that you use, so that you can view “security of the enterprise at a glance” across technical and non-technical security.

Network Event Content Automation Protocol:  This is just a concept floating around right now on using XML to describe and automate responses to attacks.  If you’re familiar with ArcSight’s Common Event Format, this would be something similar but on steroids with workflow and a pony!

Attendance at Developer Days is limited, but thanks to all the “Powar of teh Intarwebs,” you can go here and read the slides!



