Web 2.0 and Social Media Threats for Government

Posted September 30th, 2009

So most of the security world is familiar with the Web 2.0 and Social Media threats in the private sector.  Today we’re going to have an exposé on the threats specific to Government, because I don’t feel that they’ve been adequately represented in this whole push for Government 2.0 and transparency.

Threat: Evil Twin Agency Attack. A person registers on a social media site using the name of a Government entity.  They then represent that entity to the public and say whatever it is that they want that agency to say.

What’s the Big Deal: For the most part, there is no way to prove the authenticity of Government entities on social media sites short of a “catch us on <social media site>” tag on their .gov homepage.  This attack isn’t unique to Government, but the authority that people give to Government Internet presences means that the attacker gains perceived legitimacy.

Countermeasures: Agencies should monitor Social Media and Web 2.0 sites for both their official and unofficial presences.  Any new registrations on social media should be vetted for authenticity through the agency’s public affairs office.  Agencies should also establish an official presence on social media to reserve their namespace and list those account names on their official website.


Threat: Web Hoax. A non-government person sets up their own social media site or website and claims to be the Government.

What’s the Big Deal: This is similar to the evil twin attack, only maybe on a different scale.  For example, an entire social media site can be set up pretending to be a Government agency doing social networking, collecting data on citizens or asking citizens to do things on behalf of the Government.  There is also a thin line between parody and outright impersonation.

Countermeasures: Monitoring of URLs that claim to be Government-owned.  This is easily done with some Google advanced operators and some RSS fun.
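
Here is a minimal sketch of what I mean.  The agency names, domains, and query patterns below are purely illustrative; the real trick is feeding the resulting queries into a search engine or alerting service that gives you RSS output and then watching the feeds.

    # Minimal sketch: build Google advanced-operator queries that hunt for sites
    # trading on an agency's name outside of its official .gov presence.
    # The agency names and domains below are made-up examples.
    from urllib.parse import quote_plus

    AGENCIES = {
        "Generic Government Agency": "gga.gov",
        "Bureau of Example Affairs": "bea.gov",
    }

    def hoax_queries(agency, official_domain):
        """Queries that surface non-.gov sites claiming the agency's name."""
        return [
            f'intitle:"{agency}" -site:{official_domain} -site:gov',
            f'"{agency}" "official site" -site:gov',
        ]

    for agency, domain in AGENCIES.items():
        for query in hoax_queries(agency, domain):
            # Feed these into a search engine or an alerting service with RSS
            # output, then aggregate the feeds and review the hits.
            print(query, "->", "https://www.google.com/search?q=" + quote_plus(query))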


Threat: Privacy Violations on Forums. A Government-operated social media site collects Personally Identifiable Information about visitors when they register to participate in forums, blog comments, etc.

What’s the Big Deal: If you’re a Government agency and you’re going to be collecting PII, you need to do a Privacy Impact Assessment, which is overkill if all you’re collecting is names and email addresses that could be false anyway.  On top of that, the PIA is a lengthy process and utterly destroys the quickness of web development as we know it.

Countermeasures: It has been proposed in some circles that Government social media sites use third-party ID providers such as OpenID to authenticate simple commenters and forum posts.  This isn’t an original idea; Noel Dickover has been asking around about it for at least 9 months that I know of.
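
To illustrate the idea, here is a rough conceptual sketch of a comment handler that leans on a third-party identity provider instead of collecting PII.  The verify_with_provider() stub stands in for whatever a real OpenID consumer library does; nothing here is meant to be any particular library’s API.

    # Conceptual sketch: the agency site never collects or stores a name, email
    # address, or password; it only keeps the opaque identifier handed back by
    # the commenter's identity provider.
    comments = []

    def verify_with_provider(signed_response):
        # Placeholder: a real implementation checks the provider's signed
        # response (or asks the provider to verify it) and returns the verified
        # identity URL, or None on failure.
        return signed_response.get("identity_url")

    def post_comment(signed_response, text):
        identity_url = verify_with_provider(signed_response)
        if identity_url is None:
            raise PermissionError("could not verify commenter")
        # The only identifier retained is a URL the commenter chose to share.
        comments.append({"commenter": identity_url, "text": text})

    post_comment({"identity_url": "https://commenter.example/openid"}, "Great post!")
    print(comments)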


Threat: Monitoring v/s Law Enforcement v/s Intelligence Collection. The Government has to be careful about monitoring social media sites.  Depending on which agency is doing it, at some point you collect enough information from enough sources that you’re now monitoring US persons.

What’s the Big Deal: If you’re collecting information and doing traffic analysis on people, you’re most likely running up against wiretap laws and/or FISA.

Countermeasures: Government needs Rules of Engagement for creating 2-way dialog with citizens, complete with standards for the following practices:

  • RSS feed aggregation for primary and secondary purposes
  • RSS feed republishing
  • Social networking monitoring for evil twin and hoax site attacks
  • Typical “Web 2.0 Marketing” tactics such as group analysis


Threat: Hacked?  Not Us! The Government does weird stuff with web sites.  My web browser always carps at the government-issued SSL certificates because they use their own certificate authority.

What’s the Big Deal: Even though I know a Government site is legitimate, I still get certificate alert popups.  Being hacked with an XSS or other attack carries much more weight than it does for other sites because people expect to get weird errors from Government sites and just click through.  Also, the sheer volume of traffic on Government websites means that they are a lucrative target if the attacker’s end goal is to infect desktops.

Countermeasures: The standard web server anti-XSS and other web application security stuff works here.  Another happy thing would be to get the Federal CA certificate embedded in web browsers by default like Thawte and VeriSign.
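
The anti-XSS part mostly boils down to never echoing untrusted input back into a page without encoding it.  A minimal sketch of the output-encoding half, assuming the framework in use doesn’t already handle it:

    # Minimal sketch: HTML-encode anything user-supplied before it goes back out
    # in a page, so script injected into a comment or search box renders as
    # inert text instead of executing in the visitor's browser.
    import html

    def render_comment(author, body):
        safe_author = html.escape(author, quote=True)
        safe_body = html.escape(body, quote=True)
        return f"<p><strong>{safe_author}</strong>: {safe_body}</p>"

    # '<script>alert(1)</script>' comes back as harmless text, not a popup.
    print(render_comment("citizen", "<script>alert(1)</script>"))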


Threat: Oh Hai I Reset Your Password For You AKA “The Sarah Palin Attack”.  The password reset functions in social media sites work if you’re not a public figure.  Once the details of your life become scrutinized, your pet’s name, mother’s maiden name, etc, all become public knowledge.

What’s the Big Deal: It depends on what kind of data you have in the social media site.  This can range anywhere from the attacker getting access to one social media site that they get lucky with to complete pwnage of your VIP’s online accounts.

Countermeasures: Engagement with the social media site to get special considerations for Government VIPs.  Use of organizational accounts v/s personal accounts on social media sites.  Information poisoning on password reset questions for VIPs–don’t put the real data up there.  =)
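
The information poisoning piece is the easiest to put into practice: generate throwaway answers for the reset questions, store them wherever the VIP’s staff already keeps credentials, and never use real biographical data.  A quick sketch:

    # Sketch: generate random, unguessable answers for password-reset questions
    # instead of real biographical data; keep them in whatever password vault
    # the VIP's staff already uses.
    import secrets

    QUESTIONS = ["Mother's maiden name?", "First pet's name?", "High school mascot?"]

    def poisoned_answers(questions):
        # 16 bytes of randomness per answer; nobody is deriving this from Wikipedia.
        return {question: secrets.token_urlsafe(16) for question in questions}

    for question, answer in poisoned_answers(QUESTIONS).items():
        print(f"{question} -> {answer}")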


Transparency in Action photo by Jeff Belmonte.




Federal CIO Council’s Guidelines on Security and Social Media

Posted September 17th, 2009

I got an email today from the author, who said that it’s now officially on the street: Guidelines for Secure Use of Social Media by Federal Departments and Agencies, v1.0.  I’m listed as a reviewer/contributor, which means that maybe I have some good ideas from time to time or that I know some people who know people.  =)

Abstract: The use of social media for federal services and interactions is growing tremendously, supported by initiatives from the administration, directives from government leaders, and demands from the public. This situation presents both opportunity and risk. Guidelines and recommendations for using social media technologies in a manner that minimizes the risk are analyzed and presented in this document.

This document is intended as guidance for any federal agency that uses social media services to collaborate and communicate among employees, partners, other federal agencies, and the public.




Cyberlolcats Watch the Hackers at DefCon

Posted July 30th, 2009

Yeah, tell it to this guy, the Internet’s lawyer. =)





Surprise Report: Not Enough Security Staff

Posted July 22nd, 2009

Some days I feel like people are reading this blog and getting ideas that they turn around and steal.  Then I take my pills and my semi-narcissistic feelings go away.  =)

So anyway, B|A|H threw me for a loop this afternoon.  They released a report on the cybersecurity workforce.  You can check out the article on The Register or you can go get the report from here.  Surprise: we don’t have anywhere near enough security people to go around.  I’ve been saying this for years; I think B|A|H is stealing my ideas by using Van Eck phreaking on my brain while I sleep.

 Some revelations from the executive summary:

  • The pipeline of potential new talent is inadequate.  In other words, demand is growing and the number of people we’re training is not growing to meet it.
  • Fragmented governance and uncoordinated leadership hinders the ability to meet federal cybersecurity workforce needs.  Nobody’s so far been able to articulate how we build an adequate supply of security folks to keep up with demand and most of our efforts have been at the execution level.
  • Complicated processes and rules hamper recruiting and retention efforts.  It takes maybe 6 months to hire a government employee, which is entirely unsatisfactory.  On my current project, I had been cleared for 3 years, took a 9-month break, and it took me 6 months to get cleared again.
  • There is a disconnect between front-line hiring managers and government’s HR specialists.  Since the HR folks don’t know what the real job description is, hiring information security people is akin to buzzword bingo.

These are all the same problems the private sector deals with, only in true Government stylie, we have them on a larger scale.

 

He’s Part of the Workforce photo by pfig.

Now for the things that no self-respecting contractor will admit (hmm, what does this say about me?  I’m not sure yet)….

If you do not have an adequate supply of workers in the industry, outsourcing cybersecurity tasks to contractors will not work.  It works something like this:

  • High Demand = High Bill Rate.
  • High Bill Rate = More Contractor Interest
  • More Contractor Interest + High Bill Rate +  Low Supply = High Rate of Charlatans

Contractors do not have the labor pool to tap into to satisfy their contracts.  If you want to put on your cynic hat (all the Guerilla-CISO staff have theirs permanently attached with wood screws), you could say that the B|A|H report was trying to get the Government to pump more money into workforce development so that they could then hire those people and bill them back to the Government.  It’s a twisted world, folks.

Current contractor labor pools have some of the skills necessary for cybersecurity but not all.  More info in future blog posts, but I think a simple way to summarize it is to say that our current workforce is “tooled” around IT security compliance and that we are lacking in large-scale attack and defense skills.

Not only do we need more people in the security industry, but we need more security people in Government.  There is a set of tasks called “inherently governmental functions” that cannot be delegated to contractors.  Even if you only increase the contractor headcount, you still have to increase the government employee headcount in order to manage the contractors.




Federated Vulnerability Management

Posted July 14th, 2009

Why hello there, private sector folks.  It’s no big surprise: I work in the US Federal Government space, and we have some unique challenges of scale.  Glad to meet you; I hear you’ve got the same problems, only not at the same kind of scale as the US Federal Government.  Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments.  We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”.  We’re also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-’80s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISO’s Office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or worse, a .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what’s been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn’t accurate at any given moment in time, mostly because the environment changes quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something because our spreadsheet doesn’t match up with the “facts on the ground”.
  • It doesn’t tie into our other management tools like Plans of Action and Milestones (POA&M).  Those are usually managed in a different application than the technical parts, and this means that we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really what I want is for every agency to have an “orchestrator” a la Ed Bellis’s little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort.  Servers, workstations, laptops, all of it.  Yes, databases too.  Printers–yep.  If we can get network devices’ configuration info reported via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too. We don’t even really have to push patches using these tools–what I’m primarily concerned with at this point is having the ability to pull reports.

I group all of my IT assets in my system into a bucket of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.

We do periodic network scanning to identify everything on our network and feed the results into the orchestrator.  We do regular vulnerability scans and any findings feed into the orchestrator.  The more data, the better aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices.  Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets.  Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it.  Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.
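
The correlation itself is nothing exotic; it’s essentially a set difference between what the scanner saw and what the patch management system knows about.  A toy sketch, assuming both feeds have already been reduced to lists of addresses:

    # Toy sketch of the orchestrator's "unmanaged device" check: anything the
    # network scan sees that the patch management system doesn't know about
    # becomes a ticket.  Both lists are assumed to come from the SCAP-enabled
    # tools feeding the orchestrator; the addresses are placeholders.
    def find_unmanaged(scanned_hosts, managed_hosts):
        return sorted(set(scanned_hosts) - set(managed_hosts))

    scanned = ["10.1.1.5", "10.1.1.9", "10.1.2.40"]   # from the network scanner
    managed = ["10.1.1.5", "10.1.2.40"]               # from the patch console

    for host in find_unmanaged(scanned, managed):
        # In the real workflow this opens a ticket/alert in the orchestrator.
        print(f"Unmanaged device found: {host} -- open a ticket")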

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
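
In other words, the alert stops being a memo and starts being machine-readable content my tools can evaluate.  Here is a rough sketch of the reporting end, using a made-up, simplified feed format as a stand-in for whatever the Department would actually push down; the real thing would run OVAL checks rather than naive version comparisons.

    # Rough sketch: consume a simplified, made-up vulnerability feed pushed down
    # from the Department and report which assets are affected, based on the
    # software inventory the orchestrator already has.
    feed = [
        {"cve": "CVE-2009-0001", "product": "exampled", "fixed_in": "2.4.7"},
        {"cve": "CVE-2009-0002", "product": "fooserver", "fixed_in": "1.1.3"},
    ]

    inventory = {  # asset -> {product: installed version}
        "web01": {"exampled": "2.4.5"},
        "db01": {"fooserver": "1.1.3"},
    }

    def version_tuple(version):
        return tuple(int(part) for part in version.split("."))

    for asset, software in inventory.items():
        for item in feed:
            installed = software.get(item["product"])
            if installed and version_tuple(installed) < version_tuple(item["fixed_in"]):
                # The real orchestrator would report this upstream automatically.
                print(f"{asset}: vulnerable to {item['cve']} ({item['product']} {installed})")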

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.
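
If the hardening guide arrives as SCAP content, “pull an audit” is basically one command plus a results parse.  Here is a sketch assuming the OpenSCAP command-line scanner is the tool in play; the file names and profile ID are made up, and the parse deliberately ignores XML namespaces.

    # Sketch: run an SCAP benchmark against this host with the OpenSCAP CLI,
    # then pull the failed rules out of the XCCDF results so they can be
    # dropped into the workflow/ticket queue.  File names and the profile ID
    # are made up for illustration.
    import subprocess
    import xml.etree.ElementTree as ET

    subprocess.run(
        ["oscap", "xccdf", "eval",
         "--profile", "agency_baseline",        # hypothetical profile ID
         "--results", "results.xml",
         "hardening-guide-scap.xml"],           # content pushed down from above
        check=False,  # oscap exits non-zero when rules fail; that's expected
    )

    tree = ET.parse("results.xml")
    for element in tree.iter():
        if not element.tag.endswith("rule-result"):
            continue
        result = next((child.text for child in element if child.tag.endswith("result")), None)
        if result == "fail":
            # In the federated model this becomes a ticket, not a spreadsheet row.
            print("Deviation from hardening guide:", element.get("idref"))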

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department; the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government.  Want to be bold?  Let Vivek K and the Sunlight Foundation at the data feeds and have truly open and transparent, “Unbreakable Government 2.1”.  Who needs FISMA report cards when our vulnerability data is on display?

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration.  OK, I understand what you’re saying with the firewall pictures and zones of trust; now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere.  OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information; this is basic data flow stuff.  US-CERT in a future incarnation could be the top-level aggregator, maybe.  Right now I would be happy building aggregation up to the Department level because that’s the level at which we’re graded.

Understanding.  Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier through aggregation, correlation, status reporting, and dashboarding.  These are all concepts behind good IT management; why shouldn’t we apply them to security management also?  Yes, I’ll have times when I’m behind on something or another, but guess what, I’m behind today and you just don’t know it.  However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time and toward understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in.  It has to be reporting in 100% across the board, or you don’t have anything–back to spreadsheet hell for you.  And simply put, why don’t you have everything in the patch management system already?  Come on, that’s not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  But yes, that’s much awesome: smaller, easily-satisfied POA&Ms are much easier to manage provided that the administrative overhead for each of them is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof becomes easier because it’s all automated.  Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.

Interfaces with existing FISMA management tools.  This one is tough.  But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results.  This class of software exists because none of the tools vendors really understand how the Government does security management, and I mean NONE of them.  There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly have become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups.  It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.




Your Security “Requirements” are Teh Suxxorz

Posted July 1st, 2009

Face it, your security requirements suck. I’ll tell you why.  You write down controls verbatim from your catalog of controls (800-53, SOX, PCI, 27001, etc.), put them into a contract, and wonder why, when it comes time for security testing, we just aren’t talking the same language.  Even worse, you put in the cr*ptastic “Contractor shall be compliant with FISMA and all applicable NIST standards”.  Yes, this happens more often than I could ever care to count, and I’ve seen it from both sides.

The problem with quoting back the “requirements” from a catalog of controls is that they’re not really requirements, they’re control objectives–abstract representations of what you need in order to protect your data, IT system, or business.  It’s a bit like brain surgery using a hammer and chisel–yes, it might work out for you, but I don’t really feel comfortable doing it or being on the receiving end.

And this is my beef with the way we manage security controls nowadays.  They’re not requirements; functionally they’re a high-level needs statement or even a security concept of operations.  Security controls need to be tailored into real requirements that are buildable, testable, measurable, and achievable.

Requirements photo by yummiec00kies.  There’s a social commentary in there about “Single, slim, and pleasant looking” but even I’m afraid to touch that one. =)

Did you say “Wrecks and Female Pigs”? In the contracting world, we have 2 vehicles that we use primarily for security controls: Statements of Work (SOW) and Engineering Requirements.

  • Statements of Work follow along the lines of activities performed by people.  For instance, “contractor shall perform monthly 100% vulnerability scanning of the $FooProject.”
  • Engineering Requirements are exactly what you want to have built.  For instance, “Prior to displaying the login screen, the application shall display the approved Generic Government Agency warning banner as shown below…”

Let’s have a quick exercise, shall we?

What 800-53 says: The information system produces audit records that contain sufficient information to, at a minimum, establish what type of event occurred, when (date and time) the event occurred, where the event occurred, the source of the event, the outcome (success or failure) of the event, and the identity of any user/subject associated with the event.

How It gets translated into a contract: Since it’s more along the lines of a security functional requirement (i.e., it’s a specific functionality, not a task we want people to do), we break it out into multiple requirements (a minimal logging sketch follows the list):

The $BarApplication shall produce audit records with the following content:

  • Event description such as the following:
    • Access the $Baz subsystem
    • Mounting external hard drive
    • Connecting to database
    • User entered administrator mode
  • Date/time stamp in ‘YYYY-MM-DD HH:MM:SS’ format;
  • Hostname where the event occurred;
  • Process name or program that generated the event;
  • Outcome of the event as one of the following: success, warn, or fail; and
  • Username and UserID that generated the event.
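
Written that way, the requirement is also easy to prototype and test against.  Here is an illustrative sketch of a record that carries every field in the list above; the values come from whatever the local system provides, and none of the specifics are mandated by 800-53.

    # Illustrative sketch of an audit record with the fields required above:
    # event description, timestamp, hostname, generating process, outcome,
    # and the user behind the event.
    import datetime
    import getpass
    import os
    import socket

    def audit_record(event, outcome):
        assert outcome in ("success", "warn", "fail")
        return {
            "event": event,
            "timestamp": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
            "hostname": socket.gethostname(),
            "process": os.path.basename(__file__),   # program generating the event
            "outcome": outcome,
            "username": getpass.getuser(),
            "userid": os.getuid() if hasattr(os, "getuid") else None,
        }

    print(audit_record("Connecting to database", "success"))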

For a COTS product (e.g., Windows 2003 Server, Cisco IOS), when it comes to logging, I get what I get, and this means I don’t have a requirement for logging unless I’m designing the engineering requirements for Windows.

What 800-53 says: The organization configures the information system to provide only essential capabilities and specifically prohibits and/or restricts the use of the following functions, ports, protocols, and/or services: [Assignment: organization-defined list of prohibited and/or restricted functions, ports, protocols, and/or services].

How It gets translated into a contract: Since it’s more along the lines of a security functional requirement, we break it out into multiple requirements (a sketch of the resulting host firewall configuration follows the list):

The $Barsystem shall have the software firewall turned on and only the following traffic shall be allowed:

  • TCP port 443 to the command server
  • UDP port 123 to the time server at this address
  • etc…..
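
A requirement like that translates almost mechanically into configuration.  Here is a sketch that renders the allowlist into host firewall rules; iptables syntax is used purely as an example, and the addresses are placeholders.

    # Sketch: render the engineering requirement (default deny, explicit
    # allowlist) into host firewall rules.  iptables syntax is only an example;
    # the addresses and ports are placeholders from the requirement.
    ALLOWED = [
        {"proto": "tcp", "port": 443, "dest": "192.0.2.10"},   # command server
        {"proto": "udp", "port": 123, "dest": "192.0.2.20"},   # time server
    ]

    rules = ["iptables -P OUTPUT DROP"]  # default deny outbound, then allow only what's listed
    for entry in ALLOWED:
        rules.append(
            "iptables -A OUTPUT -p {proto} -d {dest} --dport {port} -j ACCEPT".format(**entry)
        )

    for rule in rules:
        print(rule)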

If we drop the system into a pre-existing infrastructure, we don’t need firewall rules per se as part of the requirements; what we do need is a SOW along the following lines:

The system shall use our approved process for firewall change control, see a copy here…

So what’s missing, and how do we fix the sorry state of requirements?

This is the interesting part, and right now I’m not sure if we can, given the state of the industry and the infosec labor shortage:  we need security engineers who understand engineering requirements and project management in addition to vulnerability management.

Don’t abandon hope yet, let’s look at some things that can help….

Security requirements are a “best effort” proposition.  By this, I mean that we have our requirements and they don’t fit in all cases, so what we do is we throw them out there and if you can’t meet the requirement, we waive it (live with it, hope for the best) or apply a compensating control (shield it from bad things happening).  This is unnerving because what we end up doing is arguing all the time over whether the requirements that were written need to be done or not.  This drives the engineers nuts.

It’s a significant amount of work to translate control objectives into requirements.  The easiest, fastest way to fix the “controls view” of a project is to scope out things that are provided by infrastructure or by policies and procedures at the enterprise level.  Hmmm, sounds like explicitly stating what our shared/common controls are.

You can manage controls by exclusion or inclusion:

  • Inclusion:  We have a “default null” for controls and we will explicitly say in the requirements what controls you do need.  This works for small projects like standing up a pair of webservers in an existing infrastructure.
  • Exclusion:  We give you the entire catalog of controls and then tell you which ones don’t apply to you.  This works best with large projects such as the outsourcing of an entire IT department.

We need a reference implementation per technology.  Let’s face it, how many times have I taken the 800-53 controls and broken them down into controls relevant for a desktop OS?  At least 5 in the last 3 years.  The way you really need to do this is that you have a hardening guide and that is the authoritative set of requirements for that technology.  It makes life simple.  Not that I’m saying deviate from doctrine and don’t do 800-53 controls and 800-53A test procedures, but that’s the point of having a hardening guide–it’s really just a set of tailored controls specific to a certain technology type.  The work has been done for you, quit trying to re-engineer the wheel.

Use a Joint Responsibilities Matrix.  Basically this breaks down the catalog of controls into the following columns (a small example follows the list):

  • Control Designator
  • Control Title
  • Provided by the Government/Infrastructure/Common Control
  • Provided by the Contractor/Project Team/Engineer
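
A couple of example rows make the intent obvious.  The control designators and titles below are real 800-53 controls, but the responsibility split shown is only an example.

    # Two illustrative rows of a Joint Responsibilities Matrix, written out as
    # CSV.  The control designators/titles come from 800-53; the responsibility
    # split is only an example.
    import csv
    import sys

    rows = [
        ("Control", "Title", "Government/Infrastructure/Common", "Contractor/Project Team"),
        ("PE-3", "Physical Access Control", "X", ""),   # badge access is inherited from the facility
        ("AU-2", "Auditable Events", "", "X"),          # application-level logging is built by the project
    ]

    csv.writer(sys.stdout).writerows(rows)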


