Tag Archives: grid computing

NSA chief endorses the cloud for classified military cyber program: considerations

Perhaps this is old news, given that the NSA chief made the comments below in 2011 while presenting to Congress to ask for support of the projects (essentially a budget justification meeting).  What is interesting is how he frames the weaknesses of the current state against the benefits of a future state built on cloud architectures.  He is also referring to several key programs that are already deployed and seeing active participation.

How this relates to information security professionals, control safeguards, and ultimately PCI DSS is in the eye of the beholder.  A striking point is the challenge to fundamentally revisit your risk assumptions and weigh the benefits of moving to the cloud.  A key consideration here is the opportunity to redeploy, rearchitect, and, I would say, restart managing access and security anew.  Cloud provides an inflection point for businesses and governments to start fresh against current threats.

As I often note in CxO discussions, framing these technology changes provides a mechanism to reach stability and integrity in technology-supported operations (hard to find an operation that is not).  Consider the NSA chief's points below, and remember that he is speaking of highly sensitive data with human life directly at risk.  That data is of the highest sensitivity, and if it can be secured in a collaborative, cloud-based, integrated, and mobile-enabled environment – why not other data elements and industries?

This is in line with the OCR/NIST HIPAA guidance and the recent clarification (June 2012) that cloud environments are subject to business associate (BA) agreements and security requirements.  Clouds are permitted, but the expected controls must exist, along with proper risk management.

NSA Chief: “The idea is to reduce vulnerabilities inherent in the current architecture and to exploit the advantages of cloud computing and thin-client networks, moving the programs and the data that users need away from the thousands of desktops we now use — each of which has to be individually secured for just one of our three major architectures — up to a centralized configuration that will give us wider availability of applications and data combined with tighter control over accesses and vulnerabilities and more timely mitigation of the latter,” he testified before a House subcommittee in March 2011.

via NSA chief endorses the cloud for classified military cyber program – Cybersecurity – Nextgov.com.

Kind regards,

James DeLuccia IV

Security News – inspired by #RSAC

This week is the RSA Conference in San Francisco, and beyond being a huge conference with great people in attendance, there are also numerous satellite conferences happening (BSidesSF and Cloud Summit).  All that brain power is bound to generate discussion, and research reports are generally released during this PR window.  So, here are a few items (new and old) that jumped out at me, are getting much discussion, and are worth restating.  As always, I will be punching up my notes to share as meaningful things are presented.

First stop, the CIO of the U.S. Government, on DarkReading: “White House CIO Lays Out ‘Cloud First’ Strategy To Streamline Bloated Government IT”.  This is generally a repeat of his prior strategy laid out before the security community [Direct D/L] and in the Wall Street Journal.  Nonetheless, worth zipping through.

In the same stream of thought (both highlighted at Cloud Summit) is the start of an update to the “Security Guidance for Critical Areas of Focus in Cloud Computing” by the Cloud Security Alliance.  Note this is a collaborative group, and passionate, knowledgeable persons are highly sought – if you can, give your time and help.  The prior version is available here for download.

True Cost of Compliance, put forward by the Ponemon Institute and Tripwire (released January 2011), states right off the top that average non-compliance costs exceed the cost of compliance by more than $5 million.  Here is the link to the report – no registration required, very nice.  Also interested in what that cover graphic is hiding…

Plenty of great streams of information are flowing from the conference on Twitter – set search filters to #RSAC and #RSA, and of course, if you like a specific area (NIST, ISO, Cloud), hit those tags too… This week is going to produce enough reading for a few flights across the pond for us all!


James DeLuccia


End to End Resilience .. ENISA.. Cloud..

The beautiful opportunity with distributed computing, globalization, and cloud services is the ability to scale and run complex environments around the globe.  This is balanced, of course, by the need for assurance that operations are occurring as you expect, are managed properly, and are protected to secure the competitive intelligence of the business.  Especially interesting has been the movement toward consolidating a company's data centers into super data centers.

Together, these points are raised, and possibly addressed, by the ENISA (European Network and Information Security Agency) report that highlights the decisive factors of an end-to-end resilient network.  The report can be found directly at this link.

An interesting challenge, highlighted by what appears to be the Egyptian government shutting down the internet, is how these distributed cloud systems are managed if they are cut off from their administrative consoles.  A consideration for all businesses, and perhaps an appropriate addition to business continuity plans and similar risk documents, is the following:

Can the business's systems function autonomously when the primary control and administrative connections are lost?
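One way to reason about that question is as a watchdog pattern: a node that loses contact with its central administrative console falls back to a cached, last-known-good local policy rather than halting or failing open.  A minimal sketch of the idea (all class and field names here are hypothetical, for illustration only):

```python
class AutonomyWatchdog:
    """Falls back to a cached local policy when the central
    administrative console stops responding."""

    def __init__(self, max_missed_heartbeats=3):
        self.max_missed = max_missed_heartbeats
        self.missed = 0
        self.mode = "managed"  # normal, console-driven operation
        self.cached_policy = {"default_action": "deny"}  # last known-good policy

    def record_heartbeat(self, ok):
        """Call once per heartbeat interval with the console's reachability."""
        if ok:
            self.missed = 0
            self.mode = "managed"
        else:
            self.missed += 1
            if self.missed >= self.max_missed:
                # Console unreachable: operate autonomously on cached policy.
                self.mode = "autonomous"

    def effective_policy(self, live_policy=None):
        # In autonomous mode, ignore the (possibly stale or absent) live policy.
        if self.mode == "autonomous" or live_policy is None:
            return self.cached_policy
        return live_policy


# Example: three consecutive missed heartbeats trip the node into autonomous mode.
w = AutonomyWatchdog()
for ok in (True, False, False, False):
    w.record_heartbeat(ok)
print(w.mode)  # autonomous
```

The interesting design questions follow directly: how fresh must the cached policy be, how does the node re-synchronize when the console returns, and who is alerted while it runs unmanaged.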

Perhaps a lesson could be gained from the masterful administration of the botnet armies that leverage dark and shifting network clouds.

I would be interested in the implications that arise from the disconnection of a country, and the potential for other countries to follow (whether through more direct action, or as an indirect result of efforts to further contain internet traffic).

Come join me and others in San Francisco, where I will be speaking at RSA.  Stop by, let's catch up, and I am looking forward to great debates (as always).

James DeLuccia

Lessons from Microsoft’s ‘Global Criminal Compliance Handbook’

Just finished reading the Microsoft Global Criminal Compliance Handbook, and a few things jump to mind that are beneficial for every business owner, security professional, and innovator…

  • First off – the detail and type of information available is very interesting and demonstrates a deliberate and prudent effort to lock down what can be reliably provided to law enforcement.  I am certain that with a bit of effort less reliable data could be uncovered if required, but consider the intense level of technology practices and controls required to unequivocally state that these data points are available.
    • Ask yourself this question – what data points/metrics does my business rely upon, and can we currently make such absolute statements regarding the availability and integrity of that information?  A step further – what information requests does your business receive (within the context of information technology / audit / security / risk management) throughout the year, and how rapidly can this information be presented?  It appears from this document that Microsoft has worked the process into a near real-time response, and that is the new reality and requirement for organizations to be competitive and cooperative with internal and external parties.
  • Secondly – the access to the business financial accounts and the online storage accounts highlights (or simply reinforces) a concern with cloud computing systems.  Deploying and using systems that are not “yours” creates a reasonable chance that the true operator will grant access to your data for “appropriate” reasons.  While I encourage businesses to respond to legal requests as required, it is the risk manager's task to consider these situations and ensure operators have SLAs in place, along with technical assurances that provide proper safeguards.
    • SLA discrepancies between companies and third-party providers are a gap that is growing with the usage of SaaS (and similar) providers, and a new risk vector that must be considered carefully.
  • Thirdly – information versus knowledge:  the document goes beyond simply dumping data on the recipient and is designed to help the layman understand the data provided.  The effort to convey knowledge is truly exceptional and not often found within the highly technical and complex system environments of technology.  Reflection on internal documentation and the conveyance of knowledge should receive equal effort, if not more, than the actual production of data points.  While technologists are able to interpret complex interactions between multiple routing devices and ACL logs, the team lead / business manager / auditor / CEO needs the knowledge of what they mean in order to merge those facts into the greater business risk landscape.

While several articles highlight the privacy and direct implications, I hope this post has provided productive, next-step information on this Microsoft document.  The document may be downloaded directly here from WikiLeaks.  A ComputerWorld article is available and nicely breaks down the document.

Other perspectives?

James DeLuccia

Security and Compliance challenges with Web 2.0

What happens to the organization when the data that represents the heart of the business is distributed through Twitter, Facebook, torrent networks, gaming consoles, iPhones, Google phones, and similar peripherals?  Many would state that DLP is the holy grail for ensuring the data never reaches these platforms, but I would challenge that statement with the fact that, moving forward, much content will be generated from these devices.  The proliferation of platforms, interfaces, and available APIs, together with a now efficient and mature malware market, creates a new risk landscape.

Visit me next week to discuss these challenges in depth, live at RSA London 2009.  I have brought together leading thinkers in this space and interjected client engagements to make it relevant and actionable.  A brief (9-minute) podcast was published last week, and may be viewed here with an abstract, or here for a direct link to the mp3.

A new risk landscape exists – how have you adjusted?

James DeLuccia IV

IT Strategy and Governance: Avoiding the pitfalls of Perception Bias…

In a recent article for the Payment Card Industry magazine, Secure Payments, I introduced the concept of information technology governance as a bicycle wheel, with the spokes representing all of the organization's initiatives (contractual, regulated, competition-necessitated) and the rounded wheel depicting the operating strategy of the business, fully integrated and interdependent.  Check out the article here online (starting on page 24), or join the SPSP and receive complimentary copies in the mail.  I distinguish the challenges of organizations focusing on single regulations as a means of orchestrating their security and compliance programs.  The concept of creating a custom control framework is articulated and broken down in IT Compliance and Controls, which I published last year with John Wiley and Sons (for those looking for greater discussion and practical advice).
Why is that wrong?  To extend the article's points:  the information technology operations of a business are unique to every business, as unique as its culture.  While the parts that make up the information technology (routers, switches, clouds, software, etc.) are common, their combination and implementation make up the competitive advantage of the business.  So, if following one regulation is not appropriate for all businesses, is it appropriate for those within a particular industry?  Simply answered: no.
The organization, in the instance of PCI DSS, is susceptible to many different risks.  These risks relate to geography, staffing, operational decisions, and factors external to the business.  Each standard is conceived under the premise that, in a single environment, XYZ are the risks and the appropriate mitigating responses.  This premise falls apart when additional concerns, assets, and risks are introduced.
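Mechanically, the custom control framework described above is a mapping exercise: define one internal control set and map each control outward to the regulations that require it, so one piece of evidence can satisfy many spokes of the wheel at once.  A minimal sketch of that structure (the control IDs and citation strings below are illustrative, not drawn from the actual standards texts):

```python
# One internal control, many external obligations.
framework = {
    "AC-01: Unique user IDs":      ["PCI DSS 8.1", "HIPAA 164.312(a)", "ISO 27001 A.9"],
    "LM-01: Central log review":   ["PCI DSS 10.6", "SOX ITGC"],
    "BC-01: Tested recovery plan": ["ISO 27001 A.17"],
}

def coverage(regulation_prefix):
    """Return the internal controls that (at least partly) satisfy
    a given regulation, identified by citation prefix."""
    return sorted(
        control
        for control, citations in framework.items()
        if any(c.startswith(regulation_prefix) for c in citations)
    )

print(coverage("PCI DSS"))  # ['AC-01: Unique user IDs', 'LM-01: Central log review']
```

Run in reverse, the same table answers the harder governance question: which internal controls serve only one regulation, and are therefore candidates for consolidation or retirement when that obligation changes.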
IT strategy and governance must constitute a merging of business aptitude with technology capability.  This is a topic we will revisit with greater specifics and tools for achieving the objective.  Thoughts / concerns?

Kind regards,

James DeLuccia IV

Building a crash-proof internet, Off-the-Internet Processes

An interesting article in New Scientist speaks to the challenges of building a crash-proof internet.  Bennett Daviss provides accurate information regarding the challenges of the internet and how it has become a mission-critical part of our lives, personal and professional.  The internet is not guaranteed to be up, and unless conscious effort is taken to ensure that your business' packets are flowing, a random event will likely cause a disruption of at least one hour, if not many.  Rackspace's operational challenges the other day highlighted this fact.

The article has a nice breakdown of the threats and highlights a specific solution: revamping the routers.  Achieving this ‘revamp’ requires deploying new and emerging concepts onto in-production devices without causing an interruption, which has led to the need for a separate test bed.   The concept of building a separate internet for testing massive firmware upgrades and innovative new approaches is underway with GENI, and it creates a great opportunity to build in security and operational integrity.  The technology of OpenFlow, designed to slice up a router so that researchers can test ideas without requiring entirely new devices or introducing downtime, does cause me to pause and consider the possible inherent risks:

“OpenFlow program can be added to almost any router, where it acts like a remote control for the proprietary algorithms and hardware inside.”

This project is highlighted in the article and carries a degree of inherent risk – introducing such an access vector to core internet routers may initially create greater interruptions than it prevents.  Careful consideration should always be taken when adding features to systems that are inherently single-tasked (not solely because of the vulnerabilities that may be introduced, but because of the increasing complexity added as a result).
Complexity has proven time and again to be the greatest threat to technology, so any increase should be made consciously and expertly, ensuring that the entire control environment reflects the changes.
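To make the “remote control” concern concrete: OpenFlow's basic model is a flow table in which a remote controller installs prioritized match/action rules, and packets matching no rule are deferred back to the controller.  The toy model below is not the real OpenFlow API (names and fields are simplified for illustration), but it shows why whoever controls that channel effectively controls forwarding:

```python
class FlowTable:
    """Toy model of an OpenFlow-style flow table: ordered match/action
    rules installed remotely by a controller."""

    def __init__(self):
        self.rules = []  # list of (match_dict, action), in priority order

    def install(self, match, action):
        # In the real protocol this message arrives over the controller
        # channel -- the very access vector the article describes.
        self.rules.append((match, action))

    def forward(self, packet):
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"  # table miss: defer to the controller


table = FlowTable()
table.install({"dst_port": 80}, "forward:web-servers")
table.install({"src_ip": "10.0.0.66"}, "drop")

print(table.forward({"dst_port": 80, "src_ip": "1.2.3.4"}))    # forward:web-servers
print(table.forward({"dst_port": 22, "src_ip": "10.0.0.66"}))  # drop
print(table.forward({"dst_port": 443, "src_ip": "1.2.3.4"}))   # send-to-controller
```

A single `install` call can silently drop or redirect traffic, which is exactly why exposing this interface on production core routers warrants the caution argued above.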

Creating a crash-proof internet is an important effort (especially considering the impact of Michael Jackson on social networking sites, and of Twitter on the Iranian elections), but one must remember that the internet is a service provider, and as such, contingency plans must be devised.  Separate network connections, satellite links, and off-the-internet (OII) processing must exist.  Consider how your business would be affected without the internet; with the loss of half the planet; with a loss of consistency in uptime.

Preparation is great business and a necessary control safeguard advised by numerous regulations.

Best regards,

James DeLuccia IV