Oh Hey, Secure DNS now Mandatory

Posted August 27th, 2008

OMB sneaked this one in on me:  OMB Memo 08-23 requires secure DNS (standard .pdf caveat).  Agencies need to submit a plan by September 5th explaining how they will accomplish this, and the whole switchover is supposed to be complete by December 2009.
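For the curious, the "secure DNS" in the memo means DNSSEC. As a small illustration of the machinery agencies will be deploying, here's a sketch of the RFC 4034 (Appendix B) key tag computation that DS and RRSIG records use to point at a particular DNSKEY. The sample RDATA bytes below are synthetic, not a real key:

```python
def dnskey_key_tag(rdata: bytes) -> int:
    """Compute the RFC 4034 (Appendix B) key tag over DNSKEY RDATA.

    The key tag is a 16-bit checksum carried in DS and RRSIG records
    as a hint about which DNSKEY was used to make a signature.
    """
    acc = 0
    for i, byte in enumerate(rdata):
        # Even offsets are the high octet of a 16-bit word, odd the low.
        acc += byte << 8 if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF  # fold the carry back in
    return acc & 0xFFFF


# Synthetic RDATA: flags=256, protocol=3, algorithm=8, plus a tiny fake
# key -- just enough bytes to exercise the checksum, not a usable key.
sample = bytes([0x01, 0x00, 0x03, 0x08]) + b"\x01\x02\x03\x04"
print(dnskey_key_tag(sample))  # prints 2062
```

Real deployments would of course pull the RDATA off the wire; this only shows that the key tag is a cheap checksum, not a cryptographic identifier.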

The interesting thing to me is that OMB is getting bolder in specifying technical solutions.  Part of me wants to scream because public policy people have no business dictating technical solutions–that’s what we have standards boards and RFCs for.

From what I hear, some of this is because OMB is turning into a really bad lame duck.  Think about it: what are the odds that anybody at OMB is going to be around in December 2009?  The completely unofficial word on the street is that OMB is pushing last-minute initiatives for political reasons: trying to accomplish things in time for the elections.

Also, I think that OMB is getting tired of NIST’s non-specificity in their guidance.  NIST’s guidance has to stay generic because of their role as a research organization and the producer of methodologies.

The solution to all this?  Well, the way it happens in the rational world is organic standards boards.  Yes, they have their problems (*cough* WAFs anyone? *cough*) but overall, they fill a place.  Inside Government, we don’t have much of that happening–we have the CIO council and the Enterprise Architecture folks, but nothing security-specific.

Lock Up Your Data

Lock Up Your Data photo by psd.

The photo’s description is great and worth repeating:

The road passes the temptations of Flash and AIR. Those who succumb or who are unfortunate enough to be lured by Silverlight’s Siren find themselves sold down the river of Rich User Experiences and hurled towards lock-in weir. The TiddlyWiki steps may rescue some, who can then join those who stuck to the main path of Javascript and AJAX for their interactions.

The URI scheme is based on DNS, a registry which has weaknesses, meanwhile the ICANN Fracture results from their greedily adding spurious new Top Level Domains such as .mobi, .jobs, .xxx and even .tel, which whilst generating more revenue (for them) causes mass confusion and threatens to break the opacity of URIs.




Posted in Technical | 2 Comments »

Yet More Security Controls You Won’t See in SP 800-53

Posted August 26th, 2008

PE-52 Self-Destructing RFID Implants
Control:
The organization equips all employees with integrated storage media with self-igniting RFID devices so that they can be tracked throughout any government facility and destroyed upon command.

Supplemental Guidance:
All CISOs know that the information inside their employees’ heads is the real culprit.  When they get a new job, they take that information–all learned on the taxpayers’ dime–with them.  This is a much bigger security risk than the data on a USB drive could ever be.  Instead of denying the obvious truth, why don’t we implement security controls to minimize the impact of out-of-control employees?  This control is brought to you by L Bob Rife.

Control Enhancements:
(1) The organization destroys the information inside an employee’s head when the employee leaves the organization, much like hard drives need to be degaussed before they are sent for maintenance.
Low: PE-52 Moderate: PE-52(1) High: PE-52(1)




Posted in IKANHAZFIZMA | 2 Comments »

Give Me Your Free-Form Comments

Posted August 20th, 2008

Any comment or graffiti you want to put up in the comments, go ahead.  The only stipulations are that it’s profanity-free (ack, this coming from me?) and relevant to security in the Federal Government.


Why do this?  Well, to give a voice to those who don’t say anything about what’s going on.  We need to hear more from the “silent infosec majority” who just do their jobs every day.




Posted in Odds-n-Sods, Rants | 5 Comments »

Effective Inventory Management

Posted August 20th, 2008

So what exactly is a “system”?  After all this time, it’s still probably one of the most misunderstood ways that we manage security in the Government.

The short answer is this:  a system is what you say it is.  Long answer is it depends on the following factors:

  • Maturity of your agency
  • Budget processes and Exhibit 300s
  • The extent of your common controls
  • Political boundaries between inter-agency organizations
  • Agency missions
  • Amount of highly-regulated data such as PII or financial

Yes, this all gets complicated.  But really, whatever you say is a system is a system, the designation is just for you so you can manage the enterprise in pieces.  There are 3 main techniques that I use to determine what is a system:

  • As a budget line-item: If it has an Exhibit 300, then it’s a system.  This works better for Plans of Action and Milestones (POA&Ms), but in reality there might not be a 1:1 correlation between systems and Exhibit 300s.
  • As a data type: If it has a particular type of data, then it’s a system.  This works well for special-purpose systems or where a type of data is regulated, such as PII or financial data.
  • As a project or program: if it’s the same people that built it and maintain it, then it’s a system.  This dovetails in nicely with any kind of SDLC or with any kind of outsourcing.
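To make the three techniques concrete, here's a hypothetical sketch that applies them in order of precedence. Every field name and label here is made up for illustration; your agency's inventory schema will look different:

```python
def designate_system(asset):
    """Apply the three designation techniques in order.

    Returns the reason an asset is its own system, or None if it can
    be rolled into a general support system or common controls package.
    All field names are hypothetical.
    """
    if asset.get("exhibit_300"):
        return "budget line-item"      # it has its own Exhibit 300
    if asset.get("data_types", set()) & {"PII", "financial"}:
        return "regulated data type"   # special-purpose or regulated data
    if asset.get("dedicated_team"):
        return "project/program"       # same people build and maintain it
    return None


print(designate_system({"exhibit_300": True}))        # budget line-item
print(designate_system({"data_types": {"PII"}}))      # regulated data type
print(designate_system({"dedicated_team": False}))    # None
```

The point isn't the code, it's the ordering: a budget line-item trumps everything, and anything that falls through all three tests is a candidate for consolidation.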

Inventory

Inventory photo by nutmeg.

Inventory management techniques that work:

  • Fewer systems are better.  Each system incurs overhead in effort and cost.
  • More systems work when you have no idea what is out there, but will cripple you in the long term because of the overhead.
  • Start with many systems, assess each as its own piece, then consolidate them into a general support system or common controls package.
  • Set a threshold for project size in either pieces of hardware or dollar value.  If the project exceeds that threshold, then it’s a system.
  • Determine if something will be a system when the budget request is made.  Good CISOs realize this and have a place on the investment control board or capital planning investment board.
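The threshold technique above amounts to a simple gate at budget-request time. A minimal sketch, with threshold values that are purely illustrative and not from any guidance:

```python
def is_system(hardware_count, dollar_value,
              hw_threshold=10, dollar_threshold=250_000):
    """Threshold gate for designating a system at budget-request time.

    If a project exceeds either threshold, it gets tracked as its own
    system; otherwise it rolls into a general support system.  The
    default thresholds are illustrative only.
    """
    return hardware_count >= hw_threshold or dollar_value >= dollar_threshold
```

A CISO with a seat on the investment review board could run every budget request through a gate like this before it ever becomes an inventory headache.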

Guerilla CISO war story time:

Way back when all this was new, one of the agency CISOs would have a roundtable every quarter or so.  Won’t name who, but some of my blog readers do.  Almost every meeting devolved at some point into the time-honored sticking point of “what is a system?”  Everybody wanted to know if they had “2 servers, 3 PCs, a database, a dog, and a dickfore”, was that a system.  After one too many iterations, the gray-hair in the group would put up “Exhibit 300=System” on the whiteboard before every meeting.  Then when the inevitable conversation of “what is a system?” would come up, he would just point to the board.

And another story:

Several years ago I was working an IT outsourcing contract with an inventory that was determined using the budget line-item technique.  It turned out we had all sorts of systems, some of which didn’t make sense, like the desktop client to manage the local admin account.  One of my first priorities was to consolidate as many systems as I could.  Not that I was altruistic about saving money or anything; it was that the fewer systems I had, the less paperwork needed to be generated. =)   Most of the systems I rolled up into a general support system aimed at basic user connectivity.




Posted in FISMA | No Comments »

Cloud Computing and the Government

Posted August 19th, 2008

Post summary: We’re not ready yet culturally.

What spurred this blog post into being is this announcement from ServerVault and Apptis about a Federal Computing Cloud.  I think it’s a pretty ballsy move, and I’ll be watching to see if it works out.

Disclaimer: at one time I managed security for something similar in the managed-services world, only it was built account-by-account with everything being a one-off.  And yeah, we didn’t start our organization the right way, so we had a ton of legacy concepts that we could never shake off, most of them from our commercial background and ways of doing things.

Current Theory on Cloud Computing

Current Theory on Cloud Computing photo by cote.

The way you make money in the managed services world is on standardization and economy-of-scale.  To us mere mortals, it means the following:

  • Standardized OS builds
  • Shared services where it makes sense
  • Shared services as the option of choice
  • Split your people’s time between clients
  • Up-charge for non-standard configurations
  • Refuse one-off configurations on a case-by-case basis

The last 2 were our downfall.  Always eager to please our clients, our senior leadership would agree to whatever one-offs they felt were necessary for client-relationship purposes, without regard to the increased costs and inefficiency when it came time to implement.

Now for those of you out in the non-Government world, let me bring you to the conundrum of the managed services world:  shared services only work in limited amounts.  Yes, you can manage your infrastructure better than the Government does, but they’ll still not like most of it because culturally, they expect a custom-built solution that they own.  Yes, it’s as simple as managing the client’s expectations of ownership vs. their cost savings, and I don’t think we’re over that hurdle yet.

And this is the reason: when it comes to security and cloud computing, the problem is that you’re only as technically literate as your auditors are.  If they don’t understand what the solution is and what the controls are around it, you do not have a viable solution for the public sector.

A “long time ago” (9000 years at least), I created the 2 golden rules for shared infrastructure:

  • One customer cannot see another customer.
  • One customer cannot affect another customer’s level of service.
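The first golden rule can be sketched as a tenant-scoped data access pattern. This is a toy illustration with all names hypothetical: the point is that the tenant filter lives server-side, so no query a customer can issue will ever return another customer's records.

```python
class SharedStore:
    """Toy multi-tenant store enforcing the first golden rule:
    one customer cannot see another customer."""

    def __init__(self):
        self._records = []  # list of (tenant_id, payload) tuples

    def put(self, tenant_id, payload):
        self._records.append((tenant_id, payload))

    def get_all(self, tenant_id):
        # The tenant filter is applied inside the store; callers never
        # get an unscoped view of the underlying shared storage.
        return [p for t, p in self._records if t == tenant_id]


store = SharedStore()
store.put("agency-a", "quarterly report")
store.put("agency-b", "budget memo")
print(store.get_all("agency-a"))  # only agency-a's records come back
```

The second golden rule (one customer cannot affect another's level of service) is the harder one, since it's about resource isolation rather than data isolation, and no amount of query filtering gets you there.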

And the side-rules for shared infrastructure in the public sector:

  • We have a huge set of common controls that you get the documentation for.  It will have my name on it, but you don’t have to spend the money to get it done.
  • It’s to my benefit to provide you with transparency in how my cloud operates because otherwise, my solution is invalidated by the auditors.
  • Come to us to design a solution; it’s cheaper for you that way.  I know how to do it more effectively and cheaply because it’s my business to know the economics of my cloud.
  • You have to give up control in some ways in order to get cost savings.
  • There is a line beyond which you cannot change or view because of the 2 golden rules.  The only exception is that I tell you how it’s made, but you can’t see any of the data that goes into my infrastructure.
  • If I let you audit my infrastructure, you’ll want to make changes, which can’t happen because of the 2 golden rules.
  • I’ll be very careful where I put your data because if your mission data spills into my infrastructure, I put myself at extreme risk.

So, are ServerVault and Apptis able to win in their cloud computing venture?  Honestly, I don’t know.  I do think that when somebody finally figures out how to do cloud computing with the Federal Government, it will pay off nicely.

I think Apptis might be spreading themselves fairly thin at this point; rumor has it they were having some problems last year.  I think ServerVault is in their comfort zone and didn’t have to do too much for this service offering.

I can’t help but think that there’s something missing in all of this, and that something is partnering with a sponsoring agency on a Line of Business.  FEA comes to mind.




Posted in What Doesn't Work, What Works | 1 Comment »

Draft of SP 800-37 R1 is Out for Public Review

Posted August 19th, 2008

Go check it out (caveat: .pdf file) and send your comments to sec-cert@nist.gov.

I’ve just given it a glance and here are some things that I’ve noticed:

  • Section on security in SDLC
  • Incorporates some of the concepts in SP 800-39 about enterprise-wide risk
  • Section on common controls
  • The process has remained pretty much the same but now references all the other core documents

Where I see this revision’s weaknesses:

  • Still possible to view the C&A process as happening at the end of the SDLC as a gateway activity.  This is the path to failure: you have to start at the initiation of a project.  In other words, I don’t think the SDLC thing is obvious enough for the constituency.  C&A should be all about security in the SDLC, and I think we’ve done ourselves a disservice by trying to separate the two.
  • Unity:  Yes, we have all the pieces there, but the document doesn’t flow as a whole yet.  BFD, I’ll get over it soon enough.
  • It all goes back to metrics:  If completed C&A is going to be one of the core metrics that you use (or OMB uses), then it should be THE core document with everything else being a stub off of it.  We have a start, but I don’t think it’s as fleshed-out as it needs to be.

Side-note for NIST:  C&A is the implementation of the System Security Engineering Process, and some of that SSE content has a place in 800-37.

The original announcement from NIST is this:

NIST, in cooperation with the Office of the Director of National Intelligence (ODNI), the Department of Defense (DOD), and the Committee on National Security Systems (CNSS), announces the completion of an interagency project to develop a common process to authorize federal information systems for operation. The initial public draft of NIST Special Publication 800-37, Revision 1, Guide for Security Authorization of Federal Information Systems: A Security Lifecycle Approach, is now available for a six-week public comment period. The publication contains the proposed new security authorization process for the federal government (currently commonly referred to as certification and accreditation, or C&A). The new process is consistent with the requirements of the Federal Information Security Management Act (FISMA) and the Office of Management and Budget (OMB) Circular A-130, Appendix III, promotes the concept of near real-time risk management based on continuous monitoring of federal information systems, and more closely couples information security requirements to the Federal Enterprise Architecture (FEA) and System Development Life Cycle (SDLC). The historic nature of the partnership among the Civil, Defense, and Intelligence Communities and the rapid convergence of information security standards and guidelines for the federal government will have a significant impact on the federal government’s ability to protect its information systems and networks. The convergence of security standards and guidelines is forging ahead with the development of a series of new CNSS policies and instructions that closely parallel the NIST security standards and guidelines developed in response to FISMA. 
The CNSS policies and instructions which address the specific areas of security categorization, security control specification, security control assessment, risk management, and security authorization, coupled with the current NIST publications will provide a more unified information security framework for the federal government and its contracting base. The unified approach to information security is brought together in part by the update to NIST Special Publication 800-37, Revision 1, which provides a common security authorization process and references the NIST and CNSS publications for the national security and non national security communities, respectively. The convergence activities mentioned above along with tighter integration of security requirements into the FEA and SDLC processes will promote more consistent and cost-effective information security and trusted information sharing across the federal government. Comments on the IPD of SP 800-37, Revision 1 should be provided by September 30, 2008 and forwarded to the Computer Security Division, Information Technology Laboratory at NIST or submitted via email to: sec-cert@nist.gov .




Posted in Uncategorized | 1 Comment »


