Federal CIO Council’s Guidelines on Security and Social Media

Posted September 17th, 2009

I got an email today from the author who said that it’s now officially on the street: Guidelines for Secure Use of Social Media by Federal Departments and Agencies, v1.0.  I’m listed as a reviewer/contributor, which means that maybe I have some good ideas from time to time or that I know some people who know people.  =)

Abstract: The use of social media for federal services and interactions is growing tremendously, supported by initiatives from the administration, directives from government leaders, and demands from the public. This situation presents both opportunity and risk. Guidelines and recommendations for using social media technologies in a manner that minimizes the risk are analyzed and presented in this document.

This document is intended as guidance for any federal agency that uses social media services to collaborate and communicate among employees, partners, other federal agencies, and the public.



Posted in Odds-n-Sods, The Guerilla CISO | No Comments »

Special Publication 800-53 Revision 3 Workshop

Posted September 1st, 2009

My friends at Potomac Forum are having a workshop on SP 800-53 R3 on the 15th of September.  This is an update to the Government’s catalog of controls.

The workshop will also be about standards convergence: how ODNI, DoD, and NIST are moving towards one standard and what this means for the intelligence community and military.

Ron Ross from NIST will talk about how the NIST Risk Management Framework is changing from a static, controls-based approach to a more dynamic “real-time continuous monitoring”.



Posted in NIST | 2 Comments »

OMB Wants a Direct Report

Posted August 28th, 2009

The big news in OMB’s M-09-29 FY 2009 Reporting Instructions for the Federal Information Security Management Act and Agency Privacy Management is that instead of fiddling with document files, reporting will now be done directly through an online tool. This has been covered elsewhere, and it is the one big change since last year.  However, having less paper in the paperwork is not the only change.

Piles of Paper photo by °Florian.

So what will this tool be like? It is hard to tell at this point. Some information will be entered directly, but the system appears designed to accept uploads of some documents, such as those supporting M-07-16. As with the spreadsheets used for FY 2008, there will be separate questions for the Chief Information Officer, the Inspector General, and the Senior Agency Official for Privacy. Microagencies will still have abbreviated questions to fill out. Additional information on the automated tool, including full instructions and a beta version, will be available in August 2009.

Given that the required information has changed very little, the automated system is unlikely to significantly ease the reporting burden. It appears primarily designed to ease the data-processing requirements for OMB. With Excel spreadsheets no longer holding the data, many concerns relating to file versions, data aggregation, and analysis are greatly eased.

It is worth noting that a common outcome of re-engineering systems for efficiency is that managers look for ways to use the new efficiency. What does this mean? Now that OMB can easily analyze data that previously took a great amount of effort to process, they may want to improve what is reported. A great deal has been said over the years about the inefficiencies of the current reporting regime. This may be OMB’s opportunity to start collecting more information that better reflects agencies’ actual security posture. This is pure speculation, and other factors, such as the reporting burden on agencies, may moderate OMB’s next steps, but it is worth considering.

One pleasant outcome of the implementation of this new automated tool is that the reporting deadline has been pushed back to November 18, 2009.

Agencies are still responsible for submitting document files to satisfy M-07-16; the automated tool does not appear to allow direct input of this information. However, the document requirements are slightly different. The breach notification policy document need only be submitted if it has changed. It is no longer sufficient to simply report progress on eliminating SSNs and reducing PII; an implementation plan and a progress update must be submitted. The requirement for a policy document covering rules of behavior and consequences has been removed.

In addition to the automated tool there are other, more subtle changes to OMB’s FY 2009 reporting. Let’s step through them, point by point:

10. It is reiterated that NIST guidance is required. This point has been expanded to state that for legacy systems, agencies have one year to come into compliance with new material in NIST documents. For new systems, agencies are expected to be in compliance upon system deployment.

13 & 15. Wording indicating that disagreements on reports should be resolved prior to submission, and that the agency head’s view will be authoritative, has been removed. This may have been done to reduce redundancy, as M-09-29’s preface indicates agency reports must reflect the agency head’s view.

52. The requirement for a central web page with working links to agency PIAs and SORNs published in the Federal Register has been removed.

A complete side-by-side comparison of changes between the two documents is available at FISMApedia.org.

All in all, the changes to OMB’s guidance this year will not change agencies’ reporting burden significantly. And that may not be a bad thing.



Posted in FISMA, NIST, Public Policy | 1 Comment »

A Layered Model for Massively-Scaled Security Management

Posted August 24th, 2009

So we all know the OSI model by heart, right?   Well, I’m offering up my model of technology management. Really, at this stage I’m looking for feedback.

  • Layer 7: Global Layer. This layer is regulated by treaties with other nation-states or international standards.  I fit cybercrime treaties in here, along with the RFCs that make the Internet work.  The problem is that security hasn’t really reached this level yet, unless you want to count multinational vendors and top-level CERT coordination centers like CERT-CC.
  • Layer 6: National-Level Layer. This layer is an aggregation of Federations and industries and primarily consists of Federal law and everything lumped into a “critical infrastructure” bucket.  Most US Federal laws fit into this layer.
  • Layer 5: Federation/Community Layer. What I’m talking about with this layer is an industry federated or formed into some sort of community.  Think major verticals such as energy supply.  It’s not a coincidence that this layer lines up with DHS’s critical infrastructure and key resources breakdown, but it can also refer to self-regulated industries, such as those governed by PCI-DSS or NERC.
  • Layer 4: Enterprise Layer. Most security thought, products, and tools are focused on this layer and the layers below.  This is the realm of the CSO and CISO and roughly equates to a large corporation.
  • Layer 3: Project Layer. Collecting disparate technologies and data into a single piece, such as the LAN/WAN, a web application project, etc.  In the Government world, this is the home of the Information System Security Officer (ISSO) or the System Security Engineer (SSE).
  • Layer 2: Integration Layer. Hardware, software, and firmware combine to become products and solutions; this layer is focused primarily on engineering.
  • Layer 1: Code Layer. Down in the code that makes everything work.  This is where the application security people live.

There are tons of ways to use the model. I’m thinking each layer has a set of characteristics like the following:

  • Scope
  • Level of centralization
  • Responsiveness
  • Domain expertise
  • Authority
  • Timeliness
  • Stakeholders
  • Regulatory bodies
  • Many more that I haven’t thought about yet
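The layers and their characteristics could be sketched in code. Here's a minimal Python rendering of the model; the attribute names and the example regulators are my own illustrative choices, not part of the original model:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    number: int
    name: str
    scope: str           # what the layer covers
    regulators: list     # example bodies with authority at this layer

# The seven layers, from the top (7) down to the bottom (1).
MODEL = [
    Layer(7, "Global", "treaties and international standards", ["CERT-CC", "IETF"]),
    Layer(6, "National", "Federal law and critical infrastructure", ["Congress", "OMB"]),
    Layer(5, "Federation/Community", "industry verticals and self-regulation", ["PCI", "NERC"]),
    Layer(4, "Enterprise", "corporate security programs", ["CSO/CISO"]),
    Layer(3, "Project", "a LAN/WAN, a web application, etc.", ["ISSO", "SSE"]),
    Layer(2, "Integration", "products and engineering solutions", ["vendors"]),
    Layer(1, "Code", "the code that makes everything work", ["developers"]),
]

for layer in MODEL:
    print(f"Layer {layer.number}: {layer.name} — {layer.scope}")
```

Each `Layer` could grow fields for the other characteristics (centralization, responsiveness, stakeholders, and so on) as the model firms up.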

Chocolate Layer Cake photo by foooooey.

My whole point for this model is that I’m going to try to use it to describe the level at which a particular problem resides and to stimulate discussion on the appropriate level at which to solve it.  For instance, take a technology and you can trace it up and down the stack. Say, Security Event and Incident Monitoring:

  • Layer 7: Global Layer. Coordination between national-level CERTs in stopping malware and hacking attacks.
  • Layer 6: National-Level Layer. Attack data from Layer 5 is aggregated and correlated to respond to large incidents on the scale of Cyberwar.
  • Layer 5: Federation/Community Layer. Events are filtered up from Layer 4, and only the confirmed events of interest are correlated to determine trends.
  • Layer 4: Enterprise Layer. Events are aggregated by a SIEM with events of interest flagged for response.
  • Layer 3: Project Layer. Logs are analyzed in some manner.  This is most likely the highest layer in the model that we consistently reach today.
  • Layer 2: Integration Layer. Event logs have to be written to disk and stored for a period of time.
  • Layer 1: Code Layer. Code has to be programmed to create event logs.

I do have an ulterior motive.  I created this model because most of our security thought, doctrine, tools, products, and solutions work at Layer 4 and below.  What we need is discussion on Layers 5 and above, because when we try to create massively-scaled security solutions, we run into a drought of information on what to do above the Enterprise.  There are other bits of doctrine that I want to bring up, like trying to solve any problem at the lowest level for which it makes sense.  In other words, we can use the model to propose changes to the way we manage security.  Say we have a problem like the lack of data on data breaches.  When we say that we need a Federal data breach law, what we’re really saying is that because of the scope, the amount of responsibility, and the competing interests at Layer 5, we need a solution at Layer 6.  In any case, we should start at the bottom and work our way up the model until we find an adequate scope and scale.
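That bottom-up doctrine can itself be sketched as a tiny Python routine. The numeric "scope" ranking is my own simplification (a layer's number stands in for the breadth of its reach), purely to illustrate the walk up the stack:

```python
# Layers keyed by number, bottom (1) to top (7). A problem is solved at
# the lowest layer whose reach is at least as broad as the problem's scope.
LAYERS = {1: "Code", 2: "Integration", 3: "Project", 4: "Enterprise",
          5: "Federation/Community", 6: "National", 7: "Global"}

def lowest_adequate_layer(problem_scope: int) -> str:
    """Start at the bottom and work up until a layer is broad enough."""
    for number in sorted(LAYERS):
        if number >= problem_scope:
            return LAYERS[number]
    raise ValueError("problem exceeds the scope of the model")

# A national data-breach-reporting gap has scope beyond any one
# federation, so the walk lands at the National layer:
print(lowest_adequate_layer(6))
```

The interesting arguments, of course, are about what a problem's real scope is; the lookup is trivial once that question is settled.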

So, this is my question to you, Internet: have I just reinvented enterprise public policy, IT architecture (Federal Enterprise Architecture) and business blueprinting, or did I create some kind of derivative view of technology, security, and public policy that I can now use?



Posted in Public Policy | 6 Comments »

Random Thoughts on “The FISMA Challenge” in eHealthcare

Posted August 4th, 2009

OK, so there’s this article being bounced all over the place.  Basic synopsis is that FISMA is keeping the government from doing any kind of electronic health records stuff because FISMA requirements extend to health care providers and researchers when they take data from the Government.

Read one version of the story here

So the whole solution is that, well, we can’t extend FISMA to eHealthcare when the data is owned by the Government because that security management stuff gets in the way.  And this post is about why they’re wrong and right, but not in the places that they think they are.

Government agencies need to protect the data that they have by providing “adequate security”.  I’ve covered this in a bazillion places already. Somewhere along the line we let the definition of adequate security come to mean “You have to play by our rulebook”, which is complete and utter bunk.  The framework is an expedient and a level-setting exercise across the government.  It’s not meant to be one-size-fits-all; it’s meant to be tailored to each individual case.

The Government Information Security Trickle-Down Effect is a name I use for FISMA/NIST Framework requirements being transferred from the Government to service providers, whether they’re in healthcare or IT or making screws that sometimes get used on B-2 bombers.  It will hit you if you take Government data, but only because you have no regulatory framework of your own with which to demonstrate that you have “adequate security”.  In other words, if you provide a demonstrable level of data protection equal to or superior to what the Government provides, then you should reasonably be able to take the Government’s data; it’s a matter of finding the right “Esperanto” in which to demonstrate your security foo.

If only there were a regulatory scheme already in place that we could hold the healthcare industry to.  Oh wait, there is: HIPAA.  Granted, HIPAA doesn’t really have a lot of teeth and its effects are only maybe demonstrable, but it does fill most of the legal requirement to provide “adequate security”, and that’s the important thing and, more importantly, what is required by FISMA.
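To make the “Esperanto” idea concrete, here’s a toy crosswalk in Python mapping a few HIPAA Security Rule safeguards to the NIST SP 800-53 control families they roughly translate to. The specific pairings are illustrative, not an authoritative mapping:

```python
# Toy crosswalk: HIPAA Security Rule safeguards -> NIST SP 800-53
# control families. Illustrative pairings only; a real crosswalk
# would be many-to-many and far longer.
CROSSWALK = {
    "164.308(a)(1) Risk analysis":      ["RA (Risk Assessment)"],
    "164.308(a)(5) Awareness training": ["AT (Awareness and Training)"],
    "164.312(a)(1) Access control":     ["AC (Access Control)"],
    "164.312(b) Audit controls":        ["AU (Audit and Accountability)"],
}

def translate(hipaa_safeguard: str) -> list:
    """Return the 800-53 families a HIPAA safeguard maps to, if known."""
    return CROSSWALK.get(hipaa_safeguard, [])

print(translate("164.312(b) Audit controls"))
```

The point is that demonstrating “adequate security” is a translation exercise, not a compliance-from-scratch exercise.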

And this is my problem with this whole string of articles: the power vacuum has claimed eHealthcare.  Seriously, there should be somebody in charge of the data who can make a decision on what kinds of protections they want for it.  In this case, there are plenty of people with different ideas on what that level of protection should be, so they are asking OMB for an official ruling.  If you go to OMB asking for guidance on applying FISMA to eHealthcare records, you get what you deserve: a “Yes, it’s in scope; how do you think you should do this?”

So what the eHealthcare people are really looking for is a set of firm requirements from their handlers (aka OMB) on how to hold service providers accountable for the data they are giving them.  This isn’t a binary question of whether FISMA applies to eHealthcare data (yes, it does); it’s a question of “how much is enough?” or even “what level of compensating controls do we need?”

But then again, we’re beaten down by compliance at this point.  I know I feel it from time to time.  After you’ve been beaten badly for years, all you want is for the batterer to tell you what you need to do so the hurting will stop.

So for the eHealthcare agencies, here is a solution for you.  In your agreements/contracts to provide data to the healthcare providers, require the following:

  • Provider shall produce annual statements of HIPAA compliance
  • Provider shall be certified under a security management program such as ISO 27001, SAS-70 Type II, or even PCI-DSS
  • Provider shall report any incident resulting in a potential data breach of 500 or more records within 24 hours
  • Provider shall pay financial penalties for data breaches, based on the number of records involved
  • Provider shall allow the Government to perform risk assessments of their data protection controls

That should be enough compensating controls to provide “adequate security” for your eHealthcare data.  You can even put a line through any of these that are too draconian or high-cost.  Take that to OMB, tell them how you’re doing it, and ask whether they would really like to spend the taxpayers’ money doing anything other than this.
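The contract clauses above amount to a checklist you can score before the agreement goes anywhere near OMB. A minimal sketch, in Python; the clause names and the threshold of how many clauses must survive the red pen are assumptions of mine, not anything from the post:

```python
# The five compensating controls from the list above, marked True if
# the clause survives negotiation. One has been struck as an example.
CLAUSES = {
    "annual_hipaa_compliance_statements": True,
    "certified_security_program":         True,   # ISO 27001, SAS-70 Type II, or PCI-DSS
    "breach_report_500_records_24h":      True,
    "per_record_breach_penalties":        False,  # struck as too draconian
    "government_risk_assessments":        True,
}

def adequate_security(clauses: dict, required: int = 4) -> bool:
    """Treat the agreement as providing 'adequate security' when at
    least `required` compensating controls remain in force."""
    return sum(clauses.values()) >= required

print(adequate_security(CLAUSES))
```

Crude, but it captures the argument: adequacy is a judgment about the surviving set of controls, not about any single clause.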

Case Files and Medical Records photo by benuski.



Posted in FISMA, Rants | 1 Comment »

Cyberlolcats Watch the Hackers at DefCon

Posted July 30th, 2009

Yeah, tell it to this guy, the Internet’s lawyer. =)




Posted in IKANHAZFIZMA | No Comments »
