A common challenge for organizations is defining what is valuable to their business. Only once value is defined can the risks to it be determined. The classic risk management process does not conclude with deploying safeguards and practices to minimize the impact of any event affecting this data; it extends to a commitment to ongoing reassessment and reconfiguration / shifting of safeguards and posture.
To be specific: what is valuable to an organization may be the root password or signing key for its latest and greatest appliance. Technology and business leaders would identify that protecting this key extends their ability to remain competitive, and that its release raises risk, especially since in many cases one insight leads rapidly into other insights. This was shown clearly when a PlayStation representative re-tweeted a post that included the master key for the PS3. Here is an article explaining it very plainly. The individual tweeting it did not know it was sensitive or understand its meaning. This highlights a critical point in this model: the whole structure relies upon the first premise – defining what is valuable.
Envision a pyramid… the top is what is valuable to an organization, and layer by layer down the pyramid you have the traditional risk management process. (This may be ISO 27001, OCTAVE, or others…) The point being that as the pyramid is built there are safeguards, costs, management participation, deployments, re-configurations, and so on, continuously… ALL based on the single assumption that the value is known.
Now take that pyramid and flip it over. The challenge becomes clear – the first assumption is now holding the entire program up. I love this analogy because it highlights to everyone the critical need to properly identify what information is valuable, and then to communicate and safeguard it properly.
- Valuable information should be considered at an abstract level first, and only then should the formats and channels of communication be approached. Meaning: first decide whether product component X is valuable. If so, in what formats does it exist: paper, electronic, physical prototype?
- The safeguards must consider each representation equally and evenly to ensure that the valuable items are protected. Otherwise, as has been proven time and again, the weakest link will be exploited.
- Training – a common and near-constant safeguard – should cover the various forms and NOT be abstract to the participants, but instead reflect their actual interaction with the type of data.
- Understand the life-cycle value of the “valuable” item. Meaning at one point the data is tremendously valuable and senior leadership is involved in protecting it (capex, etc.); as time progresses, the forms of the valuable item become a commodity. For instance, if you are protecting a device, the moment it ships the prototype is less valuable and therefore requires fewer protections. The safeguards and risk program must consider the ‘half-life’ of the property.
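The ‘half-life’ idea above can be sketched as a simple exponential decay model. This is only an illustration of the concept, not a formula from any risk framework; the $1M initial value and 90-day half-life are made-up parameters for the example.

```python
def asset_value(initial_value: float, half_life_days: float, age_days: float) -> float:
    """Estimate how protection-worthy an asset remains over time using
    an exponential 'half-life' decay (illustrative model only)."""
    return initial_value * 0.5 ** (age_days / half_life_days)

# A prototype valued at $1M with an assumed 90-day half-life
# (e.g. value drops sharply once the product ships):
print(round(asset_value(1_000_000, 90, 0)))    # 1000000
print(round(asset_value(1_000_000, 90, 90)))   # 500000
print(round(asset_value(1_000_000, 90, 180)))  # 250000
```

A risk program could use a curve like this to decide when to step safeguards down from capex-level protection toward commodity handling.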
There are many aspects of a risk program, but the linchpin starts and ends with defining what is valuable. The tweet referenced above is a single representation of this fact.
See you at RSA SFO 2011!!
What happens to the organization when the data that represents the heart of the business is distributed through Twitter, Facebook, torrent networks, gaming consoles, iPhones, Google phones, and similar peripherals? Many would state that DLP is the holy grail for ensuring the data never reaches these platforms, but I would challenge that statement with the fact that much content, moving forward, will be generated from these devices. The proliferation of platforms and interfaces, the availability of APIs, and the now efficient and mature malware market create a new risk landscape.
Join me live next week to discuss these challenges in depth at RSA London 2009. I have brought together leading thinkers in this space and interjected client engagements to make it relevant and actionable. A brief (9-minute) podcast was published last week, and may be viewed here with an abstract, or here for the direct link to the mp3.
A new risk landscape exists – how have you adjusted?
James DeLuccia IV
Posted in Compliance
Tagged botnet, Compliance, data breaches, denial of service attacks, grid computing, it compliance and controls, pci, rsa, Security, twitter, virtualization
There is an interesting article in NewScientist on the challenges of building a crash-proof internet. Bennett Daviss provides accurate information regarding the challenges facing the internet and how it has become a mission-critical part of our lives – personal and professional. The Internet is not guaranteed to be up, and unless conscious effort is taken to ensure that your business’ packets keep flowing, it is likely that a random event will cause a disruption of at least one hour, if not many hours. RackSpace’s operational challenges the other day highlighted this fact.
The article has a nice breakdown of the threats and highlights a specific solution: revamping the routers. Achieving this ‘revamp’ requires deploying new and emerging concepts onto in-production devices without causing an interruption, which has led to the need for a separate test bed. The concept of building a separate internet for testing massive firmware upgrades and innovative new approaches is underway with GENI, and it creates a great opportunity to build in security and operational integrity. The technology of OpenFlow, designed to slice up a router so researchers can test ideas without requiring entirely new devices or introducing downtime, does cause me to pause and consider the possible inherent risks:
“OpenFlow program can be added to almost any router, where it acts like a remote control for the proprietary algorithms and hardware inside.”
This project is highlighted in the article and carries a given amount of inherent risk – introducing such an access vector to core internet routers may initially create greater interruptions than it prevents. Careful consideration should always be taken when adding features to systems that are inherently single-tasked (not solely because of the vulnerabilities that may be introduced, but because of the increased complexity added as a result).
Complexity has proven time and again to be the greatest threat to technology, so any increase should be done consciously and expertly to ensure that the entire control environment reflects these changes.
Creating a crash-proof internet is an important effort (especially considering the impact of Michael Jackson’s death on social networking sites, and of the Iranian elections on Twitter), but one must remember the internet is a service provider, and as such contingency plans must be devised. Separate network connections, satellite, and off-the-internet (OII) processing must exist. Consider how your business would be affected without the internet; with the loss of half the planet; with a loss of consistent uptime.
Preparation is great business and a necessary control safeguard advised by numerous regulations.
James DeLuccia IV
Posted in Compliance
Tagged best practices, Compliance, grid computing, iran, it compliance and controls, oii, pci, regulation, Security, twitter, uptime
Yesterday I presented with Prat Moghe, the founder of Tizor, on the challenges faced by businesses. A broad topic, but we focused primarily on database administrators and those charged with the controls in place. While we went into great detail on the difficulties of manually evaluating controls in a checkbox manner, and I highlighted specific concerns on Twitter (#nzdc), a more basic harm and cause emerged – most organizations have been approaching audits and controls in the wrong manner.
- First off – consider: what is the point of the audit? The answer usually takes one of two prime forms:
- The first: the Federal government and our industry cohorts don’t trust how we do business, so we have to demonstrate particular safeguards and baseline operating integrity to keep our license to operate.
- The second: management is overseeing a massively complex organism, and only through third-party verification and evaluation shall we know what in the world is right, wrong, or a complete waste.
Now both responses are right, and there is nothing wrong with leaning toward either, but there is a severe cost. Taking a checkbox approach to an audit means the INTENT is not being satisfied (the classic “compliance does not equal security” is a prime example), and one should not be passing such audits – but that is not the focus of this post. Furthermore, conducting an audit in a manner where one simply responds and loosely ties the controls together for the sake of “the audit” every year translates to a complete loss of the possible savings that can be achieved from such events.
There is no doubt that audits are time-consuming and resource-intensive, similar to a high-stakes test. The difference is that when you take a high-stakes test and then take it again, you reuse the same information and learn from the prior experience. Too often organizations do not carry those lessons forward, because audits are treated as one-time events and not integrated.
To be sure – auditors vary in skill, standards stretch the spectrum from prescriptive to principle-based, and management and company culture severely impact how these evaluations are viewed and addressed. It is also true that without taking these lessons beyond the hour the audit occurs, errors, expense, time, and resources will forever and continually be lost.
Best Practice Advice:
Consider your audit plan for the year and how the audits fit with your IT strategy and IT governance function as part of the company governance program. Draft a charter that reflects how these audits work toward the company’s goals, and how each audit enforces and ENRICHES the business operations.
Thoughts and contributions?
James DeLuccia IV
CIA, CISA, CISM, CISSP, CPISA, CPISM
Check out the webinar I mentioned above here; it will be archived and viewable at your leisure.
Posted in Compliance
Tagged audit, best practices, Compliance, database, ffiec, it compliance and controls, IT Controls, onsite audit, pci, PCI DSS, regulatory, Security, sox, tizor, twitter
In preparation for a PCI DSS training seminar I am hosting this month, I uncovered a few nuggets within the PCI DSS universe that always draw questions and concerns. Catch my 140-character contributions below. If you are not using Twitter or another search aggregator to identify updates and vulnerabilities, you are working too hard (and possibly out of compliance with some regulations – PCI DSS Requirement 6.2, for instance). This doesn’t mean tracking people who post personal items; find and follow those who have a propensity to discuss items of interest to you! Start with searching for #PCI and go from there – feel free to follow me of course, and check out the SecurityTwits.
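The “search aggregator” habit above can be automated with a few lines of scripting. A minimal sketch, assuming you have already pulled a list of post texts from whatever feed or aggregator you use; the watchlist terms here are illustrative, not an official vulnerability feed:

```python
# Flag posts that mention topics on a personal watchlist of
# hashtags and keywords (terms chosen for illustration only).
WATCHLIST = {"#pci", "#vulnerability", "patch"}

def matches_watchlist(post: str) -> bool:
    """Return True if any watchlist term appears as a word in the post."""
    words = post.lower().split()
    return any(term in words for term in WATCHLIST)

posts = [
    "New advisory out today #PCI merchants take note",
    "Lunch was great",
    "Critical patch released for popular web framework",
]
alerts = [p for p in posts if matches_watchlist(p)]
# 'alerts' now contains the first and third posts.
```

In practice you would feed this from an RSS reader or search API and route matches to email or a ticket queue, which is one lightweight way to support the intent of PCI DSS Requirement 6.2 (identifying newly discovered vulnerabilities).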