Building A Modern Security Policy For Social Media and Government

Posted December 13th, 2009 by

A small presentation Dan Philpott and I put together for Potomac Forum about getting sane social media policy out of your security staff. I also recommend reading something I put out a couple of months ago about Social Media Threats and Web 2.0.




Posted in FISMA, NIST, Outsourcing, Risk Management, Speaking | 4 Comments »

I’m on the OWASP Podcast

Posted October 1st, 2009 by

I sat down with Jim Manico a month or so ago when he was in DC and recorded a podcast for the OWASP Podcast.  It’s now live; check it out.




Posted in FISMA, NIST, Public Policy, Rants, Speaking, The Guerilla CISO | No Comments »

The Guerilla CISO Rants: Don’t Write a System Security Plan

Posted October 1st, 2009 by

OK, I know you’re shocked…I’m saying something controversial.  But hear me out on this one; I’ll explain.

Now here is my major beef with the way we write SSPs today: almost everything in an SSP is information that is already contained in other artifacts, and I have to pay people to cut and paste it into an SSP template.  As practiced, we seriously have a problem with polyinstantiation of data: the same information lives in various lifecycle artifacts and gets cut and pasted into the SSP.  Every time you change the upstream document, you create a difference between that document and the SSP.

This is a practice I would like to change, but I can’t do it all by myself.

This is the skeleton outline of an SSP from Special Publication 800-18, the guide to writing an SSP:

  1. Information System Name/Title–On the investment/FISMA inventory, the Exhibit 300/53, etc
  2. Information System Categorization–usually on a FIPS-199 memorandum
  3. Information System Owner–In an assignment memo
  4. Authorizing Official–In an assignment memo
  5. Other Designated Contacts–In an assignment memo
  6. Assignment of Security Responsibility–In assignment memos
  7. Information System Operational Status–On the investment/FISMA inventory, the Exhibit 300/53, etc
  8. Information System Type–On the investment/FISMA inventory, the Exhibit 300/53, etc
  9. General System Description/Purpose–In the design document, Exhibit 300/53
  10. System Environment–Common controls not inside the scope of our system
  11. System Interconnections/Information Sharing–from Interconnection Security Agreements
  12. Related Laws/Regulations/Policies–Should be part of the system categorization but hardly ever is on templates
  13. Minimum Security Controls–800-53 controls descriptions which can easily be done in a Requirements Traceability Matrix
  14. Information System Security Plan Completion Date–specific to each document
  15. Information System Security Plan Approval Date–specific to each document

Now some of this has changed in practice a little bit: item 10 can functionally be replaced with a designation of common controls and hybrid controls.

So my line of thinking is that if we provide a 2-6-page system description with the names of the “guilty parties” and some inventory information, a controls-specific Requirements Traceability Matrix, and a System Design Document, then we have the functional equivalent of an SSP.
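
To make that concrete, here is a minimal sketch (in Python, just because it is easy to read) of the controls-specific Requirements Traceability Matrix piece as plain data.  The control selections, implementation blurbs, and artifact names are invented for illustration; a real RTM would carry the full 800-53 control set and point at your actual lifecycle documents.

    # Hypothetical sketch: generate a bare-bones 800-53 Requirements Traceability
    # Matrix as a CSV, pointing each control at the artifact that already documents it.
    # The control IDs are real 800-53 identifiers; the artifact names are made up.
    import csv

    rtm_rows = [
        # (control, implementation summary, source artifact)
        ("AC-2", "Account management handled by the agency IdM process", "Account Management SOP v2"),
        ("CM-2", "Baseline configurations kept in the CM plan",          "Configuration Management Plan"),
        ("CA-3", "Connections documented per interconnection agreement", "ISA with Agency X"),
        ("RA-5", "Monthly authenticated vulnerability scans",            "Vulnerability Scanning Procedure"),
    ]

    with open("rtm.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Control", "Implementation", "Source Artifact"])
        writer.writerows(rtm_rows)

Point the auditors at that plus the design document and the assignment memos, and the SSP content is all there without the cut-and-paste.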

Why have I declared an InfoSec fatwah against SSPs as currently practiced?

Well, my philosophy for operation is based on some concepts I’ve picked up through the years:

  • Why run when you can walk, why walk when you can sit, why sit when you can lie down.  There is a time to spend effort on determining what the security controls are for a project.  You need to have them documented, but it’s not cost-effective to worry about format, which we probably do too much of today.
  • Make it easy to do the right thing.  If we polyinstantiate security information, we have made something harder to maintain.  Easier to maintain means that it will get maintained instead of being shelfware.  I would rather have updated and accurate security information than overly verbose and well-polished documents that are inaccurate.
  • Security is not a “security guy thing”–most problems are actually a management and project team problem.  My idea uses their SDLC artifacts instead of security-specific versions of artifacts.  My idea puts the project problems back in the project space where they belong.
  • If I have a security engineer who has a finite number of hours in a day, I have to choose what they spend their time on.  If it’s a matter of vulnerability mitigation, patching, etc., or correcting SSP grammar, I know what I want them to do.  Then again, I’m still an infantryman deep down inside and I realize I have biases against flowery writing.

Criticisms of not writing a dedicated SSP document:

“My auditors are used to seeing the information in the same format as someplace they worked previously.”  Believe it or not, I hear this quite a bit.  My response is that if you make what I’m suggesting your standard for a security plan, then you’ve met all of the FISMA and 800-53 requirements, along with my personal requirement to “not do stupid stuff if you can help it.”

“My auditors will grill me to death if they have to page back and forth between several documents.”  I’ve heard this one too.  There are a couple of ways to deal with it.  One is to reference the source document in your 800-53 Requirements Traceability Matrix.  Most auditors at this point insist that you reference the official name, date of publication, and specific page/section of the reference, and I think they need to get a life because they’ve taken us right back to the maintainability problem.

“This is all too new-school and I can’t get over it”. Then you are a dinosaur and your kind deserves extinction.  =)


This blog post is for grecs at novainfosecportal.com who perked up instantly when I mentioned the concept months ago.  Finally got around to putting the text somewhere.

How to Plan the Perfect Dinner Party photo by kevindooley.




Posted in FISMA, NIST | 11 Comments »

Special Publication 800-53 Revision 3 Workshop

Posted September 1st, 2009 by

My friends at Potomac Forum are having a workshop on SP 800-53 Revision 3 on the 15th of September.  Revision 3 is an update to the Government’s catalog of security controls.

The workshop will also be about standards convergence: how ODNI, DoD, and NIST are moving towards one standard and what this means for the intelligence community and military.

Ron Ross from NIST will talk about how the NIST Risk Management Framework is changing from a static, controls-based approach to a more dynamic “real-time continuous monitoring” approach.




Posted in NIST | 2 Comments »

Save a Kitten, Write SCAP Content

Posted August 7th, 2009 by

Apparently I’m the Internet’s SCAP Evangelist according to Ed Bellis, so at this point all I can do is shrug and say “OK, I’ll teach people about SCAP”.

Right now there is a “pretty OK” framework for SCAP.  That is, we have published standards, and there are some SCAP-certified tools out there to do patch and vulnerability management.

What’s missing right now is SCAP content.  I don’t think this is going to get solved en masse; it’s more that there needs to be an awareness campaign directed at end users, vulnerability researchers, and people who write small-ish tools.
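
To make “SCAP content” a little more concrete, here is a minimal sketch of emitting a skeleton XCCDF benchmark with a single rule.  The benchmark ID, rule, and referenced OVAL definition are all invented for illustration, and the namespace shown is the XCCDF 1.1 one; real content for a real product would obviously be much richer.

    # Minimal sketch of authoring SCAP (XCCDF) content: one benchmark, one rule,
    # with a check that points at a (made-up) OVAL definition.
    import xml.etree.ElementTree as ET

    NS = "http://checklists.nist.gov/xccdf/1.1"
    ET.register_namespace("", NS)

    benchmark = ET.Element(f"{{{NS}}}Benchmark", {"id": "example-benchmark"})
    ET.SubElement(benchmark, f"{{{NS}}}title").text = "Example Hardening Benchmark"

    rule = ET.SubElement(benchmark, f"{{{NS}}}Rule",
                         {"id": "rule-disable-telnet", "severity": "high"})
    ET.SubElement(rule, f"{{{NS}}}title").text = "Telnet service must be disabled"
    check = ET.SubElement(rule, f"{{{NS}}}check",
                          {"system": "http://oval.mitre.org/XMLSchema/oval-definitions-5"})
    ET.SubElement(check, f"{{{NS}}}check-content-ref",
                  {"href": "example-oval.xml", "name": "oval:example:def:1"})

    ET.ElementTree(benchmark).write("example-xccdf.xml",
                                    xml_declaration=True, encoding="utf-8")

Nothing scary there, which is kind of the point: the barrier to entry is awareness, not difficulty.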

So I sat around at home trying to figure out how to get people to use/write more SCAP content and finally settled on “Every time you use SCAP content, a kitten runs free”.

Anyway, this is a presentation I gave at my local OWASP chapter.




Posted in NIST, Speaking, Technical | 4 Comments »

Federated Vulnerability Management

Posted July 14th, 2009 by

Why hello there, private sector folks.  It’s no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale.  Glad to meet you; I hear you’ve got the same problems, only not at the same kind of scale as the US Federal Government.  Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments.  We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”.  We’re also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-80’s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISOs Office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion.
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or worse, a .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what’s been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something because our spreadsheet doesn’t match up with the “facts on the ground”.
  • It doesn’t tie into our other management tools like Plans of Action and Milestones (POA&M).  POA&Ms are usually managed in a different application than the technical data, which means we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really, what I want is for every agency to have an “orchestrator” a la Ed Bellis’s little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort.  Servers, workstations, laptops, all of it.  Yes, databases too.  Printers–yep.  If we can get network devices to report configuration info via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too.  We don’t even really have to push patches using these tools–what I’m primarily concerned with at this point is the ability to pull reports.

I group all of my IT assets in my system into a bucket of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.
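
As a sketch of what I mean, here is the kind of normalized per-asset record the orchestrator could pull from each patch-management silo, tagged with its FISMA “system” bucket.  The schema and example values are invented for illustration and don’t come from any particular product.

    # Hypothetical normalized asset record plus the "bucket" grouping, so each
    # system owner sees only the assets they are responsible for.
    from dataclasses import dataclass, field
    from collections import defaultdict

    @dataclass
    class AssetRecord:
        hostname: str
        ip: str
        fisma_system: str          # which accreditation boundary / bucket it belongs to
        missing_patches: list = field(default_factory=list)

    def group_by_system(assets):
        """Roll assets up under their FISMA system so each owner gets their own list."""
        buckets = defaultdict(list)
        for a in assets:
            buckets[a.fisma_system].append(a)
        return buckets

    assets = [
        AssetRecord("db01", "10.1.2.3", "HR System", ["MS09-050"]),
        AssetRecord("web07", "10.1.9.9", "Public Website", []),
    ]
    for system, members in group_by_system(assets).items():
        print(system, [a.hostname for a in members])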

We do periodic network scanning to identify everything on our network and feed the results into the orchestrator.  We do regular vulnerability scans and any findings feed into the orchestrator.  The more data, the better aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices.  Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets.  Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it.  Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
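
The correlation itself is not rocket science.  A minimal sketch, assuming an invented open_ticket() helper standing in for whatever workflow tool the orchestrator feeds:

    # Compare what network discovery sees against what the patch tools claim to
    # manage, and open a ticket for anything unmanaged.
    def find_unmanaged(discovered_ips, managed_ips):
        """Return addresses seen on the wire that no patch tool reports on."""
        return set(discovered_ips) - set(managed_ips)

    def open_ticket(ip):
        # Stand-in for the real ticketing/workflow system.
        print(f"TICKET: unmanaged device at {ip}, find an owner and enroll it")

    discovered = ["10.1.2.3", "10.1.2.4", "10.1.2.250"]
    managed    = ["10.1.2.3", "10.1.2.4"]

    for ip in sorted(find_unmanaged(discovered, managed)):
        open_ticket(ip)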

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
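
Roughly, the idea looks like this sketch: the pushed alert carries a machine-readable description of what is affected, and the orchestrator tests it against local inventory.  The data shapes, CVE number, and product names here are invented; real content would be OVAL and CPE, not Python dicts.

    # The Department pushes an alert plus "the check"; the orchestrator answers
    # "which of my hosts are affected" before anyone is awake.
    pushed_alert = {
        "id": "CVE-2009-9999",                      # made-up identifier
        "affected": {("acme-httpd", "2.2"), ("acme-httpd", "2.3")},
    }

    local_inventory = {
        "web07": [("acme-httpd", "2.2"), ("openssh", "5.1")],
        "db01":  [("acme-db", "9.0")],
    }

    def hosts_affected(alert, inventory):
        """Report which hosts run a product/version named in the pushed check."""
        return [host for host, software in inventory.items()
                if any(pkg in alert["affected"] for pkg in software)]

    print(hosts_affected(pushed_alert, local_inventory))   # ['web07']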

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.
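
A sketch of that last step, assuming an XCCDF 1.1 result file named results.xml and an invented queue_finding() helper standing in for the real workflow system:

    # Pull the rule results out of an XCCDF result document and queue every failure.
    import xml.etree.ElementTree as ET

    NS = {"x": "http://checklists.nist.gov/xccdf/1.1"}

    def failed_rules(path):
        root = ET.parse(path).getroot()
        for rr in root.iter(f"{{{NS['x']}}}rule-result"):
            result = rr.findtext("x:result", namespaces=NS)
            if result == "fail":
                yield rr.get("idref")

    def queue_finding(rule_id):
        print(f"WORKFLOW: remediate {rule_id}")   # stand-in for the real ticket system

    for rule_id in failed_rules("results.xml"):
        queue_finding(rule_id)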

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department, and the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government.  Want to be bold?  Let Vivek K and the Sunlight Foundation at the data feeds and have a truly open and transparent “Unbreakable Government 2.1”.  Who needs FISMA report cards when our vulnerability data is on display?
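
The roll-up itself can be dirt simple.  A sketch with invented agency names and summary fields:

    # Each agency orchestrator exposes a summary; the Department aggregates, and
    # whoever sits above the Department can do the same thing with Department feeds.
    agency_summaries = [
        {"agency": "Agency A", "assets": 12000, "unpatched_critical": 340},
        {"agency": "Agency B", "assets": 4500,  "unpatched_critical": 12},
    ]

    def roll_up(summaries):
        """Aggregate per-agency numbers into one Department-level dashboard row."""
        return {
            "assets": sum(s["assets"] for s in summaries),
            "unpatched_critical": sum(s["unpatched_critical"] for s in summaries),
            "reporting_agencies": len(summaries),
        }

    print(roll_up(agency_summaries))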

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration.  OK, I’m fine with the firewall pictures and zones of trust; I understand what you’re saying.  Now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere.  OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information; this is basic data flow stuff.  US-CERT in a future incarnation could be the top-level aggregator, maybe.  Right now I would be happy building aggregation up to the Department level because that’s the level at which we’re graded.

Understanding.  Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier by aggregation, correlation, status reporting, and dashboarding.  These are all concepts behind good IT management; why shouldn’t we apply them to security management also?  Yes, I’ll have times when I’m behind on something or another, but guess what, I’m behind today and you just don’t know it.  However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time to understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in.  Everything has to be reporting in, 100% across the board, or you don’t have anything–back to spreadsheet hell for you.  And simply put, why don’t you have everything in the patch management system already?  Come on, that’s not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  But yes, that’s awesome: smaller, easily satisfied POA&Ms are much easier to manage, provided that the administrative overhead for each of them is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof becomes easier because it’s all automated.  Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.
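
Here is a sketch of that lifecycle, treating the POA&M like a ticket that gets re-queued for scanning and closed with the rescan result as the evidence.  The data model is invented for illustration, not pulled from any real FISMA tool.

    # POA&Ms as trouble tickets: open -> fixed -> re-queued for rescan -> verified,
    # with the rescan reference stored as the closure evidence.
    from dataclasses import dataclass

    @dataclass
    class Poam:
        finding_id: str
        status: str = "open"          # open -> fixed -> verified
        evidence: str = ""

    def mark_fixed(poam, retest_queue):
        poam.status = "fixed"
        retest_queue.append(poam)     # slide it into the queue for retesting

    def record_retest(poam, scan_passed, scan_ref):
        if scan_passed:
            poam.status = "verified"
            poam.evidence = scan_ref  # the rescan result is the evidence
        else:
            poam.status = "open"      # reopen; the fix didn't take

    queue = []
    p = Poam("RA-5-2009-0042")
    mark_fixed(p, queue)
    record_retest(queue.pop(0), scan_passed=True, scan_ref="rescan-2009-08-01.xml")
    print(p)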

Interfaces with existing FISMA management tools.  This one is tough.  But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results.  This class of software exists because none of the tool vendors really understand how the Government does security management, and I mean NONE of them.  There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups.  It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.




Posted in DISA, NIST, Rants, Technical | 12 Comments »
