Tag Archives: twitter

Data sensitivity is a matter of perspective: the PS3 master key tweet

A common challenge for organizations is defining what is valuable to their business.  Once that value is defined, the associated risks can be determined.  The classic risk management process does not conclude with deploying safeguards and practices to minimize the impact of any event related to this data; it also extends to a commitment to reassess and to reconfigure or shift those safeguards and positions over time.

To be specific:  what is valuable to an organization may be the root password to its latest and greatest appliance.  Technology and business leaders would identify that protecting this secret extends their ability to remain competitive.  The release of such a key undermines that advantage, especially as in many cases one insight leads to other insights quite rapidly.  This was shown clearly when a PS3 spokesperson's account retweeted a message that included the master key for the PS3.  Here is an article explaining it very plainly.  The individual who retweeted it did not know it was sensitive or what it meant.  This highlights a critical point in this model: the whole structure relies upon the first premise, defining what is valuable.

Envision a pyramid… the top is what is valuable to an organization, and layer by layer down the pyramid you have the traditional risk management process.  (This may be ISO 27001, OCTAVE, or others…)  The point is that as the pyramid is built there are safeguards, costs, management participation, deployments, re-configurations, and so on, continuously… ALL based on the single assumption that the value is known.

Now take that pyramid and flip it over.  The challenge becomes clear: the first assumption is now holding the entire program up.  I love this analogy because it highlights to everyone the importance and critical need to properly identify what information is valuable and then to communicate and safeguard it properly.

Considerations:

  1. Valuable information should be considered at an abstract level first, and then the formats and channels of communication approached.  Meaning: first decide if X product component is valuable.  If so… in what formats does it exist: paper, electronic, physical prototype?
  2. The safeguards must consider each representation equally and evenly to ensure that the valuable items are protected.  Otherwise, as has been proven time and again, the weakest link will be exploited.
  3. Training: a common and near-constant safeguard, training should cover the various forms of the data and NOT be abstract to the participants, but instead reflect how they actually interact with that type of data.
  4. Understand the life-cycle value of the “valuable” item.  Meaning at one point the data is tremendously valuable and senior leadership is involved in protecting it (capex, etc.); as time progresses the forms of the valuable item become a commodity.  For instance, if you are protecting a device… the moment it ships, the prototype is less valuable and therefore requires fewer protections.  The safeguards and risk program must consider the ‘half-life’ of the property (a rough sketch of this decay idea follows the list).
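
To make the ‘half-life’ idea concrete, here is a minimal sketch of how an asset’s protection priority might be modeled as decaying over time.  The asset names, half-life values, and the exponential-decay model itself are illustrative assumptions for this post, not something prescribed by any particular risk framework.

```python
from datetime import date

def current_value(initial_value, created, half_life_days, today=None):
    """Estimate an asset's remaining value with a simple exponential decay.

    Assumption: after each `half_life_days` period the asset is treated
    as half as valuable (and so warrants proportionally less protection).
    """
    today = today or date.today()
    age_days = (today - created).days
    return initial_value * 0.5 ** (age_days / half_life_days)

# Hypothetical assets: a pre-release prototype loses value quickly once the
# product ships; a master signing key stays sensitive for years.
assets = [
    {"name": "device prototype",   "value": 100, "created": date(2010, 6, 1), "half_life_days": 90},
    {"name": "master signing key", "value": 100, "created": date(2010, 6, 1), "half_life_days": 1825},
]

for asset in assets:
    remaining = current_value(asset["value"], asset["created"], asset["half_life_days"])
    print(f"{asset['name']}: protection priority ~{remaining:.1f}")
```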

There are many aspects of a risk program, but the linchpin starts and ends with defining what is valuable.  The tweet referenced above is a single illustration of this fact.

Other considerations?

See you at RSA SFO 2011!!

James DeLuccia


Security and Compliance challenges with Web 2.0

What happens to the organization when the data that represents the heart of the business is distributed through Twitter, Facebook, torrent networks, gaming consoles, iPhones, Google phones, and similar peripherals?  Many would state that DLP is the holy grail for ensuring the data never reaches these platforms, but I would challenge that statement with the fact that much content moving forward will be generated from these devices.  The proliferation of platforms and interfaces, the availability of APIs, and the now efficient and mature malware market create a new risk landscape.

Join me live next week to discuss these challenges in depth at RSA London 2009.  I have brought together leading thinkers in this space and interjected client engagements to make it relevant and actionable.  A brief (9-minute) podcast was published last week and may be viewed here with an abstract, or here for the direct link to the MP3.

A new risk landscape exists – how have you adjusted?

James DeLuccia IV

Sensitive Data leaked onto P2P networks… how to safeguard assets?

An article published by SC Magazine, entitled “First lady’s safe house location leaked on P2P,” was highlighted in a LinkedIn group and spawned a discussion.  The article breaks down the concern that lawmakers and regulators have with P2P networks due to the recent release of sensitive data.
You can find the article here, and the U.S. Committee on Oversight and Reform transcriptions & webcast here.  The chairman’s closing remarks (short) are here on “predator-to-prey” networks.

I strongly advise reading through these to understand the current risks and perception of risks that exist.
The article is a good overview of a problem, but I would contend that the attack / threat / vector is not as described by the testimony or highlighted in this article.  They state the problem is the P2P technology that led to the disclosure of the sensitive data.  That is similar to blaming the highway for causing an accident.  Professionals within the business of protecting assets and managing operations must have safeguards for the data that transcend the risks of the technology.
Safeguarding data begins with a few simple efforts (a good initial start…):

  1. Identify what is worth protecting (this definition allows for PII, PHI, Top Secret, Competitive importance)
  2. Determine the flows of data (i.e., the rabbit holes… follow the data from origination to retirement; a minimal sketch of this inventory follows the list)
  3. Introduce process efficiencies (i.e., reduce the rabbit hole dead ends; add automation where possible; simplify the process to reduce the final assets requiring protection)
  4. Develop and define the necessary Safeguards to protect these assets
  5. Compare existing controls (for the remaining rabbit holes or “business processes”) and eliminate duplication
  6. Finally, define performance metrics for these controls and a timetable, and deploy
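
As a rough illustration of steps 1 and 2 (identify what is worth protecting, then follow where it flows), here is a minimal sketch of a data-flow inventory.  Everything in it, the asset names, classifications, system names, and the ‘protected’ flag, is a hypothetical example rather than a prescribed schema.

```python
# Hypothetical inventory: what is worth protecting, and where it flows.
assets = {
    "cardholder data": {"classification": "PCI", "worth_protecting": True},
    "employee SSNs":   {"classification": "PII", "worth_protecting": True},
    "press releases":  {"classification": "public", "worth_protecting": False},
}

# Each flow follows an asset from origination toward retirement.
flows = [
    {"asset": "cardholder data", "src": "web store",       "dst": "payment gateway",  "protected": True},
    {"asset": "cardholder data", "src": "payment gateway", "dst": "analytics export", "protected": False},
    {"asset": "employee SSNs",   "src": "HR system",       "dst": "payroll provider", "protected": True},
]

def unprotected_flows(assets, flows):
    """List flows of assets worth protecting that still lack a safeguard."""
    return [
        f for f in flows
        if assets[f["asset"]]["worth_protecting"] and not f["protected"]
    ]

for flow in unprotected_flows(assets, flows):
    print(f"Gap: {flow['asset']} moving {flow['src']} -> {flow['dst']} has no safeguard")
```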

It is dangerous to assume and believe that P2P is the simple problem, and it is unfortunate that the committee seems to be hunting for a culprit that can be regulated.  The real problem is the current state of security within the nation’s critical infrastructure, and this is as much an internal people problem as an internal technology compliance problem.  I do agree with eliminating software that is known to be at risk of attack, but in the client-browser attack world we live in today that would include things such as Internet Explorer!  Removing access to torrents and other P2P networks only stifles innovation and increases costs.  A more risk-aware and intelligent method needs to be devised that allows the government to gain access to valuable resources without placing sensitive information at risk.

I look forward to anyone’s take and experience on solving this challenge,

Kind Regards,

James DeLuccia IV

See me speak at RSA 2009 Europe on a new framework for addressing social media, smartphones, netbooks, and their risks

Order my book online at Amazon where I elaborate on how to develop an Enterprise Risk Management Program, based upon NIST and years of client engagements.

Building a crash-proof internet, Off-the-Internet Processes

Interesting article in New Scientist on the challenges of building a crash-proof internet.  Bennett Daviss provides accurate information regarding the challenges of the internet and how it has become a mission-critical part of our lives, personal and professional.  The Internet is not guaranteed to be up, and unless conscious effort is taken to ensure that your business’s packets keep flowing, it is likely that a random event will cause a disruption of at least one hour, if not many.  Rackspace’s operational challenges the other day highlighted this fact.

The article has a nice breakdown of the threats and highlights a specific solution: revamping the routers.  Achieving this ‘revamp’ requires deploying new and emerging concepts onto in-production devices without causing an interruption, which has led to the need for a separate test bed.   The concept of building a separate internet for testing massive firmware upgrades and innovative new approaches is underway with GENI, and it creates a great opportunity to build in security and operational integrity.  The technology of OpenFlow, designed to slice up a router so that researchers can test ideas on production devices without requiring entirely new hardware or introducing downtime, does cause me to pause and consider the possible inherent risks:

“[The] OpenFlow program can be added to almost any router, where it acts like a remote control for the proprietary algorithms and hardware inside.”

This project is highlighted in the article and carries a certain amount of inherent risk: introducing such an access vector to core internet routers may initially create greater interruptions than it prevents.  Careful consideration should always be given when adding features to systems that are inherently single-tasked (this is not solely due to the vulnerabilities that may be introduced, but to the increasing degree of complexity added as a result).
Complexity has proven time and again to be the greatest threat to technology, so any increase should be made consciously and expertly to ensure that the entire control environment reflects these changes.
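
To make the ‘slicing’ idea more concrete, here is a toy sketch of a flow table partitioned per research slice.  It is purely illustrative: the rule structure, slice names, and matching logic are assumptions for this example, not the OpenFlow specification or any real controller API.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    slice_name: str  # which research "slice" owns this rule
    dst_prefix: str  # toy match on a destination IP prefix
    action: str      # e.g., "forward:port7" or "drop"

# A single router's flow table shared by production traffic and an experiment.
flow_table = [
    FlowRule("production",   "10.0.",    "forward:port1"),
    FlowRule("experiment-a", "192.168.", "forward:port7"),
    FlowRule("experiment-a", "172.16.",  "drop"),
]

def lookup(dst_ip: str, slice_name: str) -> str:
    """Return the action for a packet, consulting only the caller's slice.

    Isolation is the point of this toy model: a researcher's rules never
    override the rules owned by another slice.
    """
    for rule in flow_table:
        if rule.slice_name == slice_name and dst_ip.startswith(rule.dst_prefix):
            return rule.action
    return "default:normal-forwarding"

print(lookup("192.168.1.5", "experiment-a"))  # forward:port7
print(lookup("192.168.1.5", "production"))    # default:normal-forwarding
```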

Creating a crash-proof internet is an important effort (especially considering the impact of Michael Jackson’s death on social networking sites and of the Iranian elections on Twitter), but one must remember the internet is a service provider, and as such contingency plans must be devised.  Separate network connections, satellite links, and off-the-internet (OII) processing must exist.  Consider how your business would be affected without the internet; with the loss of half the planet; with a loss of consistency in uptime.

Preparation is great business and a necessary control safeguard advised by numerous regulations.

Best regards,

James DeLuccia IV

Audits of the future must enrich and enforce your IT Strategy

Yesterday I presented with Prat Moghe, the founder of Tizor, on the challenges faced by businesses.  A broad topic, but we were primarily focused on database administrators and those charged with the controls in place.  While we went into great detail on the difficulties of manually evaluating controls in a checkbox manner, and I highlighted specific concerns on Twitter (#nzdc), a more basic harm and cause emerged: most organizations have been approaching audits and controls in the wrong manner.

First off, consider: what is the point of the audit?  The answer usually comes down to one of two prime responses:

  • The Federal government and our industry cohorts don’t trust how we do business, so we have to demonstrate particular safeguards and operating-integrity base points to keep our operating license.
  • Management is overseeing a massively complex organism, and only through third-party verification and evaluation can it know what in the world is right, wrong, or a complete waste.

Now both responses are right, and there is nothing wrong with leaning more toward either of these points, but there is a severe cost.  Taking a checkbox approach to an audit means that the INTENT is not being satisfied (the classic “compliance does not equal security” is a prime example), and one should not be passing such audits; but that is not the focus of this post.  Furthermore, conducting an audit in a manner where one simply responds and loosely ties the controls together for the sake of “the audit” every year translates to a complete loss of the possible savings that can be achieved from such events.
There is no doubt that audits are time-consuming and resource-intensive, much like a high-stakes test.  The difference is that when you take a high-stakes test and then take it again, you reuse the same information and have learned from the prior experience.  Too often organizations do not carry those lessons forward, because audits are treated as one-time events and are not integrated.

To be sure, auditors vary in skill, standards stretch the spectrum from prescriptive to principle-based, and management and company culture severely impact how these evaluations are viewed and addressed.  It is also true that without carrying these lessons beyond the hour the audit occurs, errors, expense, time, and resources will forever and continually be lost.

Best Practice Advice:

Consider your audit plan for the year and how the audits can fit with your IT strategy and IT governance function as part of the company governance program.  Draft a charter that reflects how these audits work toward the company’s goals, and how each audit enforces and ENRICHES the business operations.

Thoughts and contributions?

James DeLuccia IV
CIA, CISA, CISM, CISSP, CPISA, CPISM

Check out the webinar I mentioned above here; it will be archived and viewable at your leisure.

Twitter, PCI DSS posts…

In preparation for a PCI DSS training seminar I am hosting this month, I uncovered a few nuggets within the PCI DSS universe that ALWAYS draw questions and concerns.  Catch my 140-character contributions below.  If you are not using Twitter or another search aggregator to identify updates and vulnerabilities, you are working too hard (and are out of compliance with some requirements, PCI DSS Requirement 6.2, for instance).  This doesn’t mean tracking people who post personal items; find and follow those who have a propensity to discuss items of interest to you!  Start with searching for #PCI and go from there; feel free to follow me of course, and check out the SecurityTwits.
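
The same idea can be automated with a small script that polls a handful of security feeds for keywords.  The sketch below is a minimal, hypothetical example: the feed URLs and keyword list are placeholders, and it assumes the third-party feedparser library is installed.

```python
import feedparser  # third-party: pip install feedparser

# Placeholder feeds and keywords; substitute sources relevant to your environment.
FEEDS = [
    "https://example.com/security-advisories.rss",
    "https://example.com/pci-news.rss",
]
KEYWORDS = ("pci", "vulnerability", "patch")

def scan_feeds(feeds=FEEDS, keywords=KEYWORDS):
    """Yield (title, link) pairs for entries whose title mentions a keyword."""
    for url in feeds:
        for entry in feedparser.parse(url).entries:
            title = entry.get("title", "")
            if any(word in title.lower() for word in keywords):
                yield title, entry.get("link", "")

if __name__ == "__main__":
    for title, link in scan_feeds():
        print(f"{title} -> {link}")
```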

Kind regards,

James DeLuccia