Barcode Hacking

Posted January 13th, 2010 by

A little presentation I did for NoVA Hackers.  The basic intent was for it to be more of a workshop than a formal talk, and to give everybody the tools to do their own experimentation at home.

I even inspired Jack to write a blog post.

Caveat: this has nothing to do with FISMA or Government InfoSec.  =)


Posted in Hack the Planet, Speaking, Technical | 6 Comments »

Stress-Test Apache with Intent to Tune: BSOFH Tip for the Software Masochist

Posted August 28th, 2009 by

So I’ve been having some problems with my server for a month or so–periodically the number of apache processes would skyrocket and the box would get so overloaded (a load average around 50) that I couldn’t even run simple commands on it.  I would have to get into the hardware console and give the box a hard boot (a graceful reboot wouldn’t work).

Root cause is I’m a dork, but more about that later.

Anyway, I needed a way to troubleshoot and fix it.  The biggest hurdle was that the problem was very sporadic–sometimes it would be 2 weeks between crashes, other times it would crash 3 times in one day.  This was begging for a stress-test.  Looking on the Internet, I found a couple of articles about running a load-tester on apache and information on the tuning settings, but not really much about a methodology (yeah yeah, I work for a Big 4 firm, the word still makes me shudder even though it’s the right one to use here) to actually solve the problem of apache tuning.

So the “materials” I needed:

  • One server running apache.  Mine runs Apache2 under Debian Stable.  This is a little different from the average distro in that the process is apache2 and the control command is apache2ctl, where most distros use httpd and apachectl.  If you try this at home, substitute whichever commands your distro uses.
  • An apache tuning guide or three.  Here’s the simplest, most straightforward one I’ve seen.
  • A stress-tester.  Siege is awesome for this (install one-liner below).
  • Some simple shell commands: htop (top works here too), ps, grep, and wc.
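
If you don’t have the add-on tools already, both are in the standard Debian repositories (package names are for Debian stable; adjust for your distro):

server:~# apt-get install siege htop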

Now for the method to my madness…

I ssh into my server using three different sessions.  On one I run htop.  Htop is a version of top that gives you a colored output and supports multiple processors.  The output without stress-testing looks something like this:

(Screenshot: htop output with no load)

I keep one session free to edit files and do an emergency “killall apache2” if things get out of hand (and they will, really quickly–I had to pull the plug about 20 times throughout this process).  On the third session I run a simple command to get a count of how many apache processes I have running:

rybolov@server:~$ ps aux | grep apache2 | grep start | wc -l
11

OK, so far so good.  I’ve got 11 processes running with no load and RAM usage of 190MB.  I needed the extra “grep start” because it filters out the text editor I have open on apache2.conf and anything else I might be doing in the background.

I also killed apache, waited 10 seconds, and looked at the typical RAM use.  With no apache running, I use about 80MB just for the OS and everything else I’m running.  This means that the 11 apache processes were using 110MB of RAM, or ~10MB of RAM per apache process.  Now that’s something important I can use.
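
Side note: if you’d rather not kill apache just to get a baseline, you can pull a rough per-process figure straight out of ps.  This is only a sketch–RSS double-counts memory the processes share, so treat the answer as an upper bound:

rybolov@server:~$ ps -o rss= -C apache2 | awk '{s+=$1; n++} END {printf "%d processes, %.1f MB each\n", n, s/n/1024}'
11 processes, 10.2 MB each               <- illustrative numbers, not actual output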

I took my tuning settings in apache2.conf (httpd.conf for most distros; Apache2 here uses the prefork MPM, which spawns a separate process for each connection rather than using threads–read the tuning guide for more info) and set them at the defaults listed in the tuning guide.  They became something like the following:

<IfModule prefork.c>
  StartServers            8
  MinSpareServers         5
  MaxSpareServers        20
  MaxClients            150
  MaxRequestsPerChild  1000
</IfModule>

Notice how the MaxClients is set at 150?  This will prove to be my downfall later.  Turns out that my server is RAM-poor for as much processor as it has or WordPress is a RAM hog (or both, which is the case =)  ).  I’ll eventually upgrade my server, but since it’s a cloud server from Mosso, I pay by the RAM and drive space.

After each edit of apache2.conf, you need to give apache a configuration test and restart:

server:~# apache2ctl configtest
Syntax OK                        <- If something else comes back, fix it!!
server:~# apache2ctl restart
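
If the box is serving real traffic at the time, a graceful restart does the same thing without killing requests that are already in flight:

server:~# apache2ctl graceful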

I’m now ready to stress-test using the default setup.  This is the awesome part.  First, I need to simulate a load.  I make a URL seedfile so that siege will bounce around between a handful of pages: a file called siege.urls.txt with a collection of URLs in it, something like the following:

http://www.guerilla-ciso.com/
http://www.guerilla-ciso.com/about
http://www.guerilla-ciso.com/contact
http://www.guerilla-ciso.com/papers-and-presentations
....<about 20 lines deleted here, you get the point>
http://www.guerilla-ciso.com/page/2
http://www.guerilla-ciso.com/page/3
http://www.guerilla-ciso.com/page/4

I’m sure there is an efficient and fun way to make this, like, say, a text-only sitemap or sproxy (made by the same guy who wrote siege), but since I only needed about 30 URLs, I just cut-n-pasted them off the blog homepage.
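
If you want the quick-and-dirty scrape instead, something like this pulls the homepage and keeps any absolute internal links it finds.  It’s a rough sketch–it assumes curl is installed and it will happily grab a little junk along with the good URLs:

rybolov@server:~$ curl -s http://www.guerilla-ciso.com/ | grep -o 'http://www.guerilla-ciso.com/[^"]*' | sort -u > siege.urls.txt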

I fire up siege and give my webserver a thorough drubbing, running 50 connections for 10 minutes and using my URL seedfile.  BTW, I’m running siege on the webserver itself, so there isn’t anything in the way of network latency.  <enter sinister laugh of evil as I sadistically torture my apache and the underlying OS>

server:~# siege -c 50 -t 600s -f siege.urls.txt
** SIEGE 2.66
** Preparing 50 concurrent users for battle.    <-The guy writing siege has a wicked sense of humor.
The server is now under siege...                <-Man the ramparts, Apache, they're coming for you!
HTTP/1.1 200   1.08 secs:   16416 bytes ==> /
HTTP/1.1 200   1.07 secs:   16416 bytes ==> /
....<about 2 bazillion lines deleted here, you get the idea>
HTTP/1.1 200   4.66 secs:    8748 bytes ==> /about
HTTP/1.1 200   3.92 secs:    8748 bytes ==> /about
Lifting the server siege...      done.

Transactions:                  61 hits   <-No, this isn't actual, I abbreviated the siege output
Availability:              100.00 %      <-with a ctrl-c just to get some results so I didn't
Elapsed time:                6.70 secs   <-have to scroll through all that output from the real test.
Data transferred:            0.87 MB
Response time:                3.27 secs
Transaction rate:            9.10 trans/sec
Throughput:                0.13 MB/sec
Concurrency:               29.75
Successful transactions:          61
Failed transactions:               0
Longest transaction:            5.61
Shortest transaction:            1.07

Now I watch the output of htop.  Under stress, the output looks something like this:

(Screenshot: htop output under siege load)

Hmm, looks like I have a ton of apache processes soaking up all my RAM.  What happens is that in about 30 seconds the OS starts swapping, and the swap use just keeps growing until the OS is unresponsive.  This is a very interesting cascade failure: writing to swap incurs load, which makes the OS write to swap even more.  Maybe I need to limit either the amount of RAM used per apache process or the maximum number of processes that apache spawns.  The tuning guide tells us how…
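
htop shows the swap bar nicely, but if you want hard numbers on exactly when the box tips over, watching free from the spare session works too:

rybolov@server:~$ watch -n 2 free -m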

There is one setting that matters most in tuning apache: MaxClients.  This is the maximum number of simultaneous processes (with the prefork MPM) or threads (with the worker MPM) that apache will run.  Looking at my apache tuning guide, I get a wonderful formula: ($SizeOfTotalRAM – $SizeOfRAMForOS) / $RAMUsePerProcess = MaxClients.  So in my case, (512 – 80) / 10 = 43.something.  Oops, this is a far cry from the 150 that comes as default.  I also know that the RAM-per-process number I used was measured without any load on apache, so with a load on and generating dynamic content (aka WordPress), I’ll probably use ~15MB per process.
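
Bash arithmetic makes a fine napkin here (both per-process figures are the rough estimates from above):

rybolov@server:~$ echo $(( (512 - 80) / 10 ))
43                               <- idle estimate, ~10MB per process
rybolov@server:~$ echo $(( (512 - 80) / 15 ))
28                               <- under load with WordPress, ~15MB per process

That second number turns out to line up nicely with what the box actually does under siege.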

One other trick that I can use:  Since I think that what’s killing me is the number of apache processes, I can run siege with a reduced number of simultaneous connections and watch htop.  When htop shows that I’ve just started to write to swap, I can run my ps command to find out how many apache processes I have running.

rybolov@server:~$ ps aux | grep apache2 | grep start | wc -l
28
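
Rather than re-running that by hand, letting watch refresh a count every couple of seconds works nicely while siege does its thing (pgrep counts by process name, so it skips the text editor and the greps themselves):

rybolov@server:~$ watch -n 2 'pgrep -c apache2'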

Now this is about what I expected:  With 28 processes going, I tipped over into using swap.  Reversing my tuning formula, I get (28 processes x 15MB/process) + 80MB for the OS = 500MB used.  Hmm, this makes sense to me, since the box starts swapping when RAM use hits ~480MB.

So I go back to my prefork module tuning.

<IfModule mpm_prefork_module>
  StartServers            8
  MinSpareServers         5
  MaxSpareServers        10
  MaxClients             25
  MaxRequestsPerChild  2000
</IfModule>

I set MaxClients at 25 because 28 seems to be the tipping point, so that gives me a little bit of “wiggle room” in case something else happens at the same time I’m serving under a huge load.  I also tweaked some of the other settings slightly.

Then it’s time for another siege torture session.  I run the same command as above and watch the htop output.  With the tuning settings I have now, the server dips about 120MB into swap and survives the full 10 minutes.  I’m sure the performance is degraded somewhat by going into swap, but I’m happy with it for now because the server stays alive.  It wasn’t all that smooth; I had to do a little bit of trial and error first, starting with MaxClients 25 and working my way up to 35 under a reduced siege load (-c 25 -t 60s) to see what would happen, then increasing the load from siege (-c 50 -t 600s) and ratcheting MaxClients back down to 25.

And as far as me being a dork… well, aside from the huge MaxClients setting (that’s the default, don’t blame me), I had set MaxRequestsPerChild to 100 instead of 1000, meaning that every 100 HTTP requests each apache process was being killed off and a new one spawned.  That would lead to a cascade failure under load. (duh!)




Posted in Technical, The Guerilla CISO, What Works | 5 Comments »

Save a Kitten, Write SCAP Content

Posted August 7th, 2009 by

Apparently I’m the Internet’s SCAP Evangelist according to Ed Bellis, so at this point all I can do is shrug and say “OK, I’ll teach people about SCAP”.

Right now there is a “pretty OK” framework for SCAP.  That is, we have published standards, and there are some SCAP-certified tools out there to do patch and vulnerability management.

What’s missing right now is SCAP content.  I don’t think this is going to get solved en masse; it’s more like there needs to be an awareness campaign directed at end-users, vulnerability researchers, and people who write small-ish tools.

So I sat around at home trying to figure out how to get people to use and write more SCAP content and finally settled on “Every time you use SCAP content, a kitten runs free”.

Anyway, this is a presentation I gave at my local OWASP chapter.




Posted in NIST, Speaking, Technical | 4 Comments »

Federated Vulnerability Management

Posted July 14th, 2009 by

Why hello there private sector folks.  It’s no big surprise, I work in the US Federal Government Space and we have some unique challenges of scale.  Glad to meet you, I hear you’ve got the same problems only not in the same kind of scale as the US Federal Government.  Sit back, read, and learn.

You see, I work in places where everything is siloed into different environments.  We have crazy zones for databases, client-facing DMZs, management segments, and then the federal IT architecture itself: a loose federation of semi-independent enterprises that are rapidly coming together in strange ways under the wonderful initiative known as “The TIC”.  We’re also one of the most heavily audited sectors in IT.

And yet, the way we manage patch and vulnerability information is something out of the mid-80’s.

Current State of Confusion

Our current patch management information flow goes something like this:

  • Department SOC/US-CERT/CISOs Office releases a vulnerability alert (IAVA, ISVM, something along those lines)
  • Somebody makes a spreadsheet with the following on it:
    • Number of places with this vulnerability.
    • How many have been fixed.
    • When you’re going to have it fixed.
    • A percentage of completion
  • We then manage by spreadsheets until the spreadsheets say “100%”.
  • The spreadsheets are aggregated somewhere.  If we’re lucky, we have some kind of management tool that we dump our info into like eMASS.
  • We wonder why we get pwned (by either haxxorz or the IG).

Now for how we manage vulnerability scan information:

  • We run a tool.
  • The tool spits out a .csv or worse, a .html.
  • We pull up the .csv in Excel and add some columns.
  • We assign dates and responsibilities to people.
  • We have a weekly meeting to go over what’s been completed.
  • When we finish something, we provide evidence of what we did.
  • We still really don’t know how effective we were.

Problems with this approach:

  • It’s too easy to game.  If I’m doing reporting, the only thing really keeping me reporting the truth is my sense of ethics.
  • It’s slow as hell.  If somebody updates a spreadsheet, how does the change get echoed into the upstream spreadsheets?
  • It isn’t accurate at any given moment in time, mostly because things change quicker than the process can keep up.  What this means is that we always look like liars who are trying to hide something because our spreadsheet doesn’t match up with the “facts on ground”.
  • It doesn’t tie in with our other management tools like Plans of Action and Milestones (POA&M).  These are usually managed in a different application from the technical parts, and this means that we need a human with a spreadsheet to act as the intermediary.

So this is my proposal to “fix” government patch and vulnerability management: Federated Patch and Vulnerability Management through SCAP.

Trade Federation Battle Droid photo by Stéfan.  Roger, Roger, SCAP means business.

Whatchu Talkin’ Bout With This “Federated” Stuff, Willis?

This is what I mean, my “Plan for BSOFH Happiness”:

Really what I want is every agency to have an “orchestrator” ala Ed Bellis’s little SCAP tool of horrors. =)  Then we federate them so that information can roll up to a top-level dashboard for the entire executive branch.

In my beautiful world, every IT asset reports into a patch management system of some sort.  Servers, workstations, laptops, all of it.  Yes, databases too.  Printers–yep.  If we can get network device config info reported via an SCAP-enabled NMS, let’s get that pushing content into our orchestrator too.  We don’t even really have to push patches using these tools–what I’m primarily concerned with at this point is having the ability to pull reports.

I group all of my IT assets in my system into a bucket of some sort in the orchestrator.  That way, we know who’s responsible when something has a problem.  It also fits into our “system” concept from FISMA/C&A/Project Management/etc.

We do periodic network scanning to identify everything on our network and feed the results into the orchestrator.  We do regular vulnerability scans and any findings feed into the orchestrator.  The more data, the better aggregate information we can get.

Our orchestrator correlates network scans with patch management status and gives us a ticket/alert/whatever where we have unmanaged devices.  Yes, most enterprise management tools do this today, but the more scan results I have feeding them, the better chance I have at finding all my assets.  Thanks to our crazy segmented architecture models, we have all these independent zones that break patch, vulnerability, and configuration management as the rest of the IT world performs it.  Flat is better for management, but failing that, I’ll take SCAP hierarchies of reporting.

The Department takes a National Vulnerability Database feed and pushes down to the Agencies what they used to send in an IAVA, only they also send down the check to see if your system is vulnerable.  My orchestrator automagically tests and reports back on status before I’m even awake in the morning.
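
As a sketch of what that could look like on a single Linux box (assuming an SCAP-validated scanner such as OpenSCAP is installed and the Department pushes down an OVAL definitions file along with the alert–the file paths here are made up):

server:~# oscap oval eval --results /var/log/scap/iava-results.xml /opt/scap/feeds/department-iava-checks.xml

The results file is the piece the orchestrator would slurp up and roll into the upstream feed.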

I get hardening guides pushed from the Department or Agency in SCAP form, then pull an audit on my IT assets and have the differences automagically entered into my workflow and reporting.

I become a ticket monkey.  Everything is in workflow.  I can be replaced with somebody less expensive and can now focus on finding the answer to infosec nirvana.

We provide a feed upstream to our Department, and the Department provides a feed to somebody (NCSD/US-CERT/OMB/Cybersecurity Coordinator) who now has the view across the entire Government.  Want to be bold?  Let Vivek K and the Sunlight Foundation at the data feeds and have a truly open and transparent “Unbreakable Government 2.1”.  Who needs FISMA report cards when our vulnerability data is on display?

Keys to Making Federated Patch and Vulnerability Management Work

Security policy that requires SCAP-compatible vulnerability and patch management products.  Instead of parroting back 800-53, please give me a requirement in your security policy that every patch and vulnerability management tool that we buy MUST BE SCAP-CERTIFIED.  Yes, I know we won’t get it done right now, but if we get it in policy, then it will trickle down into product choices eventually.  This is compliance I can live with, boo-yeah!

Security architecture models (FEA anyone?) that show federated patch and vulnerability management deployments as part of their standard configuration.  OK with the firewall pictures and zones of trust, I understand what you’re saying, now give me patch and vulnerability management flows across all the zones so I can do the other 85% of my job.

Network traffic from the edges of the hierarchy to…somewhere.  OK, you just need network connectivity throughout the hierarchy to aggregate and update patch and vulnerability information, this is basic data flow stuff.  US-CERT in a future incarnation could be the top-level aggregator, maybe.  Right now I would be happy building aggregation up to the Department level because that’s the level at which we’re graded.

Understanding.  Hey, I can’t fix everything all the time–what I’m doing is using automation to make the job of fixing things easier through aggregation, correlation, status reporting, and dashboarding.  These are all concepts behind good IT management, so why shouldn’t we apply them to security management also?  Yes, I’ll have times when I’m behind on something or another, but guess what, I’m behind today and you just don’t know it.  However, with near-real-time reporting, we need a culture shift away from trying to police each other up all the time and toward understanding that sometimes nothing is really perfect.

Patch and vulnerability information is all-in.  It has to be reporting in 100% across the board, or you don’t have anything–back to spreadsheet hell for you.  And simply put, why don’t you have everything in the patch management system already?  Come on, that’s not a good enough reason.

POA&Ms need to be more fluid.  Face it, with automated patch and vulnerability management, POA&Ms become more like trouble tickets.  But yes, that’s much awesome: smaller, easily-satisfied POA&Ms are much easier to manage, provided that the administrative overhead for each of these is reduced to practically nothing… just like IT trouble tickets.

Regression testing and providing proof becomes easier because it’s all automated.  Once you fix something and it’s marked in the aggregator as completed, it gets slid into the queue for retesting, and the results become the evidence.

Interfaces with existing FISMA management tools.  This one is tough.  But we have a very well-entrenched software base geared around artifact management, POA&M management, and Security Test and Evaluation results.  This class of software exists because none of the tools vendors really understand how the Government does security management, and I mean NONE of them.  There have to be some weird, unnatural data import/export acts going on here to make the orchestrator of technical data match up with the orchestrator of management data, and this is the part that scares me in a federated world.

SCAP spreads to IT management suites.  They already have a footprint out there on everything, and odds are we’re using them for patch and configuration management anyway.  If they don’t talk SCAP, push the vendor to get it working.

Where Life Gets Surreal

Then I woke up and realized that if I provide my Department CISO with near-real-time patch and vulnerability management information, I suddenly have become responsible for patch and vulnerability management instead of playing “kick it to the contractors” and hiding behind working groups.  It could be that if I get Federated Patch and Vulnerability Management off the ground, I’ve given my Department CISO the rope to hang me with.  =)

Somehow, somewhere, I’ve done most of what CAG was talking about and automated it.  I feel so… um… dirty.  Really, folks, I’m not a shill for anybody.




Posted in DISA, NIST, Rants, Technical | 12 Comments »

Security Automation Developers Conference Slides

Posted July 2nd, 2009 by

Eh? What’s that mean?  Developer Days is a weeklong conference where they get down into the weeds about the various SCAP schemas and how they fit into the overall program of security automation. 

Highlights and new ideas:

Remedial Markup Language: Fledgling schema to describe how to remediate a vulnerability.  A fully automated security system would scan and then use the RML content to automagically fix the finding… say, changing a configuration setting or installing a patch.  This would be much awesome if combined with CVE/CWE so you have a vulnerability scanner that scans and then fixes the problem.  It also needs to be kept in a bottle, because the operations guys will have a heart attack if we do this without any human intervention.

Computer Network Defense: There is a pretty good scenario slide deck on using SCAP to automate hardening, auditing, monitoring, and defense.  The key from this deck is how the information flows using automation.

Common Control Identifier:  This schema is basically a catalog of controls (800-53, 8500.2, PCI, SoX, etc) in XML.  The awesomeness with this is that one control can contain a reference implementation for each technology and the checklist to validate it in XCCDF.  At this point, I get all misty…

Open Checklist Interactive Language: This schema is for capturing questionnaires.  Think managerial controls, operational controls, policy, and procedure captured in electronic format and fed into the regular mitigation and workflow tools that you use, so that you can view “security of the enterprise at a glance” across technical and non-technical security.

Network Event Content Automation Protocol:  This is just a concept floating around right now on using XML to describe and automate responses to attacks.  If you’re familiar with ArcSight’s Common Event Format, this would be something similar but on steroids with workflow and a pony!

Attendance at Developer Days is limited, but thanks to the “Powar of teh Intarwebs”, you can go here and read the slides!




Posted in NIST, Technical | 3 Comments »

Your Security “Requirements” are Teh Suxxorz

Posted July 1st, 2009 by

Face it, your security requirements suck.  I’ll tell you why.  You write down controls verbatim from your catalog of controls (800-53, SoX, PCI, 27001, etc), put them into a contract, and then wonder why, when it comes time for security testing, we just aren’t talking the same language.  Even worse, you put in the cr*ptastic “Contractor shall be compliant with FISMA and all applicable NIST standards”.  Yes, this happens more often than I could ever care to count, and I’ve seen it from both sides.

The problem with quoting back the “requirements” from a catalog of controls is that they’re not really requirements, they’re control objectives–abstract representations of what you need in order to protect your data, IT system, or business.  It’s a bit like brain surgery using a hammer and chisel–yes, it might work out for you, but I don’t really feel comfortable doing it or being on the receiving end.

And this is my beef with the way we manage security controls nowadays.  They’re not requirements, functionally they’re a high-level needs statement or even a security concept of operations.  Security controls need to be tailored into real requirements that are buildable, testable, measurable, and achievable.

Requirements photo by yummiec00kies.  There’s a social commentary in there about “Single, slim, and pleasant looking” but even I’m afraid to touch that one. =)

Did you say “Wrecks and Female Pigs”?  In the contracting world, we have two vehicles that we use primarily for security controls: Statements of Work (SOW) and Engineering Requirements.

  • Statements of Work follow along the lines of activities performed by people.  For instance, “contractor shall perform monthly 100% vulnerability scanning of the $FooProject.”
  • Engineering Requirements are exactly what you want to have built.  For instance, “Prior to displaying the login screen, the application shall display the approved Generic Government Agency warning banner as shown below…”

Let’s have a quick exercise, shall we?

What 800-53 says: The information system produces audit records that contain sufficient information to, at a minimum, establish what type of event occurred, when (date and time) the event occurred, where the event occurred, the source of the event, the outcome (success or failure) of the event, and the identity of any user/subject associated with the event.

How it gets translated into a contract: Since it’s more along the lines of a security functional requirement (i.e., it’s a specific piece of functionality, not a task we want people to do), we break it out into multiple requirements (a sample record is sketched after the list):

The $BarApplication shall produce audit records with the following content:

  • Event description such as the following:
    • Access the $Baz subsystem
    • Mounting external hard drive
    • Connecting to database
    • User entered administrator mode
  • Date/time stamp in ‘YYYY-MM-DD HH:MM:SS’ format;
  • Hostname where the event occurred;
  • Process name or program that generated the event;
  • Outcome of the event as one of the following: success, warn, or fail; and
  • Username and UserID that generated the event.
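
A single record meeting all of those bullets might look something like this (hostname, program, user, and event are made up purely for illustration):

2009-07-01 14:32:05 barapp01 bazapp[2841]: event="User entered administrator mode" outcome=success user=jsmith uid=1042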

For a COTS product (e.g., Windows 2003 Server, Cisco IOS), when it comes to logging, I get what I get, and this means I don’t write a requirement for logging unless I’m the one designing the engineering requirements for Windows.

What 800-53 says: The organization configures the information system to provide only essential capabilities and specifically prohibits and/or restricts the use of the following functions, ports, protocols, and/or services: [Assignment: organization-defined list of prohibited and/or restricted functions, ports, protocols, and/or services].

How it gets translated into a contract: Since it’s more along the lines of a security functional requirement, we break it out into multiple requirements (a sketch of matching host firewall rules follows the list):

The $Barsystem shall have the software firewall turned on and only the following traffic shall be allowed:

  • TCP port 443 to the command server
  • UDP port 123 to the time server at this address
  • etc…..
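
On a Linux host, a minimal sketch of what the first two rules might turn into (the addresses are placeholders, and a real policy would also need loopback and established/related rules):

server:~# iptables -P OUTPUT DROP
server:~# iptables -A OUTPUT -p tcp --dport 443 -d 198.51.100.10 -j ACCEPT    <- command server
server:~# iptables -A OUTPUT -p udp --dport 123 -d 198.51.100.20 -j ACCEPT    <- time server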

If we drop the system into a pre-existing infrastructure, we don’t need firewall rules per se as part of the requirements; what we do need is a SOW along the following lines:

The system shall use our approved process for firewall change control, see a copy here…

So what’s missing, and how do we fix the sorry state of requirements?

This is the interesting part, and right now I’m not sure if we can, given the state of the industry and the infosec labor shortage:  we need security engineers who understand engineering requirements and project management in addition to vulnerability management.

Don’t abandon hope yet, let’s look at some things that can help….

Security requirements are a “best effort” proposition.  By this, I mean that we have our requirements and they don’t fit in all cases, so what we do is throw them out there, and if you can’t meet a requirement, we waive it (live with it, hope for the best) or apply a compensating control (shield it from bad things happening).  This is unnerving because what we end up doing is arguing all the time over whether the requirements that were written need to be done or not.  This drives the engineers nuts.

It’s a significant amount of work to translate control objectives into requirements.  The easiest, fastest way to fix the “controls view” of a project is to scope out things that are provided by infrastructure or by policies and procedures at the enterprise level.  Hmmm, sounds like explicitly stating what our shared/common controls are.

You can manage controls by exclusion or inclusion:

  • Inclusion:  We have a “default null” for controls and we will explicitly say in the requirements what controls you do need.  This works for small projects like standing up a pair of webservers in an existing infrastructure.
  • Exclusion:  We give you the entire catalog of controls and then tell you which ones don’t apply to you.  This works best with large projects such as the outsourcing of an entire IT department.

We need a reference implementation per technology.  Let’s face it, how many times have I taken the 800-53 controls and broken them down into controls relevant for a desktop OS?  At least 5 in the last 3 years.  The way you really need to do this is that you have a hardening guide and that is the authoritative set of requirements for that technology.  It makes life simple.  Not that I’m saying deviate from doctrine and don’t do 800-53 controls and 800-53A test procedures, but that’s the point of having a hardening guide–it’s really just a set of tailored controls specific to a certain technology type.  The work has been done for you, quit trying to re-engineer the wheel.

Use a Joint Responsibilities Matrix.  Basically this breaks down the catalog of controls into the following columns:

  • Control Designator
  • Control Title
  • Provided by the Government/Infrastructure/Common Control
  • Provided by the Contractor/Project Team/Engineer



Posted in BSOFH, Outsourcing, Technical | 3 Comments »
