I read a short section in Bruce Schneier’s book Liars and Outliers that tells the tale of Jamaica Ginger:
“an epidemic of paralysis occurred as a result of Jamaica Ginger… it was laced with a nerve poison… and the company was vilified” – but not until tens of thousands had become victims. That episode helped drive the creation of the modern FDA.
To date, throughout most industries there is no absolute requirement, backed by meaningful incentives, to introduce and sustain operational information technology safeguards. There are isolated regimes focused on particular threats and fraud (such as PCI for the credit card industry, CIP for the energy sector, etc.). So what will be the Jamaica Ginger of information security?
Some portend that it will take a cyber war (a real one): an event that creates societal disruption, with a negative impact sustained long enough to survive the policy-development process, and with motivation strong enough to see it through. OSHA, the FDA, and other such entities exist as a result of exactly such events.
The best course enterprises can follow is to mature and sustain operations sufficient to address their information technology concerns in the marketplace. This serves as self-preservation; as a (perhaps selfish) demonstration that there is no need for new legislation or a new body (a Federal Security Bureau, say); and, ultimately, as preparedness – should such a requirement be introduced, the changes to your business would be incremental at most.
Posted in audit, Compliance, IT Controls
Tagged 2013, best practices, china, Compliance, cybersecurity, europe, fines, fisma, it compliance and controls, IT Controls, james deluccia, jdeluccia, pci, PCI DSS, regulation, Security
An interesting discussion I had the other day raised the point:
What do we need for perfect security?
Defining perfect and security itself is difficult, but let us simply state…
- Perfect = zero events that cause competitive harm
- Security = operational integrity of environment
(Note: this is not restricted to a specific type of system, but directed towards the business's concerns as a whole.)
Over the course of the dialogue we ranged across the use of standards to establish governance and security architectures; we delved into the pizza box kitchen, and of course serious amounts of detection / prevention activity. Ultimately, though, we ended at a higher level of abstraction that is far more important… at least initially.
Perfect security is defined by what the business will permit to occur. How many breaches, of what severity (physical and in person), and by which individuals are acceptable? Understanding the risk tolerance for activity, and operating at that state, is far more crucial, as the entire security-compliance program flows from this level of acceptance.
Thus, as we enter the New Year, and the security summits / executive committees are coming together … ask:
- What is our risk tolerance?
- What is the straw that will be unacceptable to the stakeholders, stockholders, and simply the community as a whole?
- Define the feeling of the event, detail the services being discussed, and equate possible outcomes.
The idea is not to have days of risk-threat discussion, but to determine the level of acceptance and then allow the practitioners and SMEs in the business to execute. Similar to the hierarchy of documents – strategy should be defined via policy, and then the competency centers of excellence should be allowed to do what they love and are paid to do at the business.
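As a concrete (and entirely hypothetical) sketch of recording that level of acceptance: the tolerance statement agreed at the executive level can be captured as data and checked against what actually occurred. The severity names and limits below are invented for illustration, not a recommendation:

```python
# Hypothetical risk-tolerance statement: the maximum number of incidents
# per year, at each severity, that the business has agreed to accept.
TOLERANCE = {"critical": 0, "major": 2, "minor": 12}

def exceeds_tolerance(observed: dict) -> list:
    """Return the severities where observed incidents exceed the agreed tolerance."""
    return [sev for sev, limit in TOLERANCE.items()
            if observed.get(sev, 0) > limit]

# One critical breach is already beyond what the business accepted.
print(exceeds_tolerance({"critical": 1, "major": 1}))  # -> ['critical']
```

The point of the sketch is that once acceptance is written down this plainly, the practitioners can execute against it without re-litigating risk in every meeting.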
Posted in IT Controls, Risk Management
Tagged 2013, best practices, cio, ciso, Compliance, cybersecurity, it compliance and controls, IT Controls, james deluccia, jdeluccia, perfect security, security summit
Information security practices are influenced by the geography of operations, the culture of that area, and the industry in general. The trust found within a community, as highlighted by Bruce Schneier in Liars & Outliers, allows the wheels of society to move forward. Those wheels also continue myopically, as researched by Steven Pinker. Let me elaborate briefly on these three points:
- Geography of Operations – This trust is based, in part, on proximity. Individuals are more trusting of those within the same community (however you define community, it works out to the same result).
- Culture from that area – “Trust non-kin is calibrated by the society we live in. If we live in a polite society where trust is generally returned, we’re at ease trusting first. If we live in a violent society…we don’t trust easily and require further evidence…” – Pg 37
- Industry – Familiarity also engenders trust within an industry; e.g., a doctor working with another doctor automatically introduces a level of confidence and trust in the communication and mutual activities.
Ultimately, Culture is King. It is the culture that defines an organization’s DNA and differentiates it in the market space. The difference one encounters between the Culture of a Google and a Microsoft environment is palpable. Neither is right or wrong, but the Culture is different nonetheless. The challenge is that the Culture MUST change in a world where these principles are violated.
History and biology have shown that when an aggressive culture – one that, as the aggressor, does not need to extend trust – is introduced into a culture that doesn’t share that posture, the aggressor always wins. This is highlighted across numerous examples of entire societies being destroyed or absorbed in Guns, Germs, and Steel. A biological example would be the Asian carp invading the Great Lakes ecosystem and disrupting its existing biology.
Ultimately, all systems are connected – regardless of geography, culture, or industry. Therefore the concepts and methodologies for organizing go-to-market strategies, deploying new technology, and simply sustaining competitive operations require a reframing of the trust model. In essence, the culture of an organization where technology is introduced must adapt to fit the more aggressive, violent, and hostile landscapes in the world.
Strategically speaking, enterprises may operate locally, but they must be governed with a global perspective. That governance can and must account for global geopolitical risks and the global value of the intellectual property, and it must adapt to the degree of risk introduced at any given time.
Technologically, deployed systems must be reviewed to ensure the trust built into their system controls is configured aggressively. A classic example: strict firewall rules with a default ‘Deny All’ must prevail, yet in some cases I have seen this not to be true. Be mindful of the connectedness of these systems in the global community.
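To keep that ‘Deny All’ posture honest, a minimal audit sketch could flag any chain whose default policy is not deny. The rule format below is hypothetical – a real export (iptables-save output, cloud security-group JSON, etc.) would need its own parser:

```python
# Minimal sketch: audit an exported firewall rule set for default-deny.
# The {chain: default_policy} shape here is a stand-in for a real export format.
def default_deny_violations(rules: dict) -> list:
    """Flag chains whose default policy is not DROP/DENY."""
    return [chain for chain, policy in rules.items()
            if policy.upper() not in ("DROP", "DENY")]

exported = {"INPUT": "DROP", "FORWARD": "DROP", "OUTPUT": "ACCEPT"}
print(default_deny_violations(exported))  # -> ['OUTPUT']
```

Even this trivial check, run on a schedule, catches the drift toward permissive defaults that tends to creep in quietly.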
The impact of culture on an organization’s decision to survive competitively starts with trust – in the systems, the people, the process, and the market.
Posted in Compliance, Risk Management
Tagged 2012, best practices, cio, ciso, Compliance, culture, cybersecurity, it compliance and controls, IT Controls, james deluccia, jdeluccia, Security, society, strategic
A quote I came across stated that SCADA – essentially, the systems controlling physical machines – is the new attack surface. On reflection it struck me as both obvious and non-obvious. The security of these systems tends to sit with Facilities, outside the scope of concern of most CISOs and certainly of the CIO – unless the organization is structured so that such operating roles fall under the General Legal Counsel or the COO. Operational integrity, competitiveness, and ultimately compliance and security depend on organizational structures being adapted to the technology age. Too often we forget the value of organizational strategy shifts, and this is one that will be necessary and will provide valuable returns.
How can this trickle into the tactical operations of the business?
Consider this single example:
- What controls do you have on checking the version of the HVAC units (software version) powering your data center and or corporate offices?
- Is there a security control in place to have one; to be sure it can handle the load; and testing to ensure it works? I imagine yes to all three, as these are the ABCs of operations.
- What is the version of the HVAC PLC / SCADA element that is being utilized by the vendor and monitoring teams that is accessible remotely?
- When audits occur, do they check to be sure the device isn’t from Siemens or another manufacturer that was just highlighted at Defcon or in the news?
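One hedged sketch of such a control: compare the firmware versions reported in the facilities inventory against a list of versions flagged in vendor advisories or at conferences. The vendor name and version strings below are invented for illustration:

```python
# Hypothetical advisory list of (vendor, firmware) pairs known to be vulnerable.
VULNERABLE = {("AcmeHVAC", "2.1.0"), ("AcmeHVAC", "2.1.1")}

def flag_vulnerable(inventory: list) -> list:
    """Return device IDs whose (vendor, firmware) pair appears in the advisory list."""
    return [d["id"] for d in inventory
            if (d["vendor"], d["firmware"]) in VULNERABLE]

inventory = [
    {"id": "dc1-hvac-01", "vendor": "AcmeHVAC", "firmware": "2.1.0"},
    {"id": "hq-hvac-02",  "vendor": "AcmeHVAC", "firmware": "2.2.0"},
]
print(flag_vulnerable(inventory))  # -> ['dc1-hvac-01']
```

The hard part, of course, is not the comparison but getting Facilities-owned devices into an inventory the security team can see at all.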
If this is the new frontier, we need to start structuring organizations in a manner designed to care for these considerations, allowing the business to be agile and competitive.
Thoughts? (A bit of latitude on the above terminology is requested, given I am simplifying the example to avoid too much technical specification and confusion.)
Posted in audit, Compliance, Security
Tagged 2012, attack surface, best practices, chief audit executive, cio, clo, Compliance, coo, cybersecurity, data center, defcon, it compliance and controls, IT Controls, james deluccia, jdeluccia, new frontier, organizational change, scada, Security, strategic, trend
Over the holiday I have been diving into different government information security and cyber scenario studies and research. An article (pdf) on the NATO pursuit of an early detection system is interesting in and of itself. The analogy is to nuclear-launch early detection: warning sufficient to allow leaders to make responsive decisions.
I wonder, though, whether the concept is flawed. A detective control for cyber war has an extremely short (milliseconds) lead time, which does not leave much room for human response.
The NATO and military stopgap here is to monitor geopolitical activity, which provides a barometer of when strikes are likely – and unlikely.
Two critical points that every CIO and CISO must consider, and is emerging at some of my most impressive and advanced clients:
- Establish an adaptive security defense model (year over year we have been responding tactically, but there are more strategic elements that must be made transparent)
- “Warnings are not just sounding alarms of a likely or inbound (Anonymous or other) attack; the converse is equally important – having confidence to tell them that for the time being significant attacks are not likely and they should turn their attention [ / funding] to more pressing matters.”
An interesting question I would pose:
- If you KNEW you were going to be targeted, what would you do differently today?
- Would you deploy technology differently?
- Would the two years of projects get reshuffled?
- What if you had two years’ warning to make preparations – would your vector of response differ?
We are entering an interesting time where business, operational competitive security strategy, and tactical activity are all necessary to maintain a sustainable business. The executive must balance this with tact and great care. Combined with the awesome new technologies and mobile spaces, a whole new field of competitive business advantage awaits the prepared and willing.
Posted in Compliance
Tagged 2012, best practices, cio, ciso, Compliance, cybersecurity, executive, it compliance and controls, IT Controls, james deluccia, jdeluccia, mckinsey, Security, strategy
The payment card industry standard articulates very prescriptively what should be done for all system components within the payment card process. A common area of confusion is the depth of abstraction that should be applied to the “connected system” element of the standard. Specifically, the standard states the following:
The PCI DSS security requirements apply to all system components. In the context of PCI DSS, “system components” are defined as any network component, server, or application that is included in or connected to the cardholder data environment. “System components” also include any virtualization components such as virtual machines, virtual switches/routers, virtual appliances, virtual applications/desktops, and hypervisors. The cardholder data environment is comprised of people, processes and technology that store, process or transmit cardholder data or sensitive authentication data. Network components include but are not limited to firewalls, switches, routers, wireless access points, network appliances, and other security appliances. Server types include, but are not limited to the following: web, application, database, authentication, mail, proxy, network time protocol (NTP), and domain name server (DNS). Applications include all purchased and custom applications, including internal and external (for example, Internet) applications.
– PCI DSS 2.0 page 10
To simplify – there are the system components involved in the payment card process, and then there are the supporting systems (connected systems) that are also in scope of PCI DSS. An example would be the patch server from which an in-scope PCI system receives patches (and there are dozens of similar cases).
So the rule of thumb on scope most often offered in the industry is:
If you can digitally communicate with the system (over UDP, TCP, etc.), it is a connected system, and it is in scope.
A nice write-up by Jeff Lowder, referring specifically to the security system components, can be found here (written in 2010).
A Korzybski abstraction problem:
How many levels of abstraction should one undertake? That is: should that same patch server then be examined to see which systems connect to it, with those systems also included in the ‘connected system’ web?
The answer here is generally no – the abstraction is only one level deep. That doesn’t mean best-practice risk and security measures evaporate; no leaving that server unpatched on the internet, or anything of the sort.
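The one-level-deep rule can be sketched as a simple set computation: in scope is the cardholder data environment plus everything that communicates directly with it, and nothing further out. The system names below are illustrative, not from any standard:

```python
# Sketch of one-level-deep PCI scoping: the CDE plus its direct neighbors.
# Systems that merely connect to a connected system stay out of scope.
def pci_scope(cde: set, connections: set) -> set:
    """connections: set of (a, b) pairs meaning a and b can communicate."""
    connected = {b for a, b in connections if a in cde}
    connected |= {a for a, b in connections if b in cde}
    return set(cde) | connected

links = {("pos-server", "patch-server"), ("patch-server", "build-server")}
print(sorted(pci_scope({"pos-server"}, links)))
# -> ['patch-server', 'pos-server']  (build-server is out of scope)
```

Even as a whiteboard exercise, drawing the connectivity graph this way makes the one-hop boundary – and any segmentation gaps – immediately visible.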
What Requirements of PCI DSS should be applied to these ‘connected systems’?
The standard makes it clear at the outset: “The PCI DSS security requirements apply to all…” So every PCI control applies to the connected system under discussion, as identified through the abstraction of the services supporting the CHD environment itself. Limitations can be applied to the core system components that make up a “connected system”; in the virtualization space, for example, hypervisor risks and controls are applied differently from the rest of the standard. These exceptions to fully applying the PCI standard directly to a connected system must be limited and made with clear awareness. [Updated: All requirements should be considered… each QSA is different, but addressing the risk with an eye towards compliance is the best and safest bet. Shopping for someone to accept a given state of controls is a painful road.]
An “Open PCI DSS Scoping Toolkit“(pdf) published on August 24, 2012 is available that provides an excellent structure in methodically determining scope and the controls that would be applicable. While not a product of the PCI SSC or a technical group – there is good content here that should be carefully considered as part of every security and compliance strategy. Thanks to Eric Brothers for the note! [Updated 12/5/2012]
Another good write-up is offered here, where the two-factor authentication exception, the file integrity monitoring exception, and a few other practices are nicely elaborated by Andrew Plato. (Also check out the comment threads – very informative, though proceed with caution, as this is one QSA’s interpretation.) Definitely worth a review, though sadly there appear to be no further elaborations from the PCI SSC on this topic.
This write-up is an exploratory effort to seek clarity by consolidating thoughts of leading individuals in the payment card space and client environment realities. Any additional insights or perspectives you have are welcomed and requested in the comments below! I’ll update as anything is shared.
Posted in Compliance, IT Controls, PCI DSS
Tagged 2012, audit, best practices, Compliance, connected system, it compliance and controls, IT Controls, james deluccia, jdeluccia, pci, PCI DSS, remote access, Security, virtualization, visa
A security program and its controls are a hypothesis: put in place and evaluated within an organization based on a set of assumptions and expected value. Recognizing this is a critical success factor in an information security compliance program.
The concept of testing the viability of a hypothesis is not new, yet it is commonly missing from organizations’ security-compliance programs. Consider all the areas of the business where hypothesis testing exists and the results are fed back into the development process. Products may be dreamed up, prototyped, tested, iterated, and then shelved or launched. Software development (SDL) includes writing code, testing it against use cases, and continually evaluating it against performance requirements, customer acceptance criteria, security (!) requirements, and of course regulatory considerations.
Organizations are not lacking the scientific method, metrics, performance testing, or hypotheses. The opportunity lies in establishing proper use cases as they relate to information security compliance, and in rigorously challenging and tracking these policies, practices, and procedures against the real-life results of such a deployment.
A few myths to dispel:
- Organizations can define metrics and KPI based on the root cause analysis and driver for a set of security program controls
- Metrics and KPI should be tracked, challenged regularly, and brought to executive levels for acceptance of performance (an important element in driving definition of value with security programs to core business initiatives)
- Controls do not beget controls
- Technology need not beget more technology or safeguards
- Sometimes there is no solution that is guaranteed, so transparency on performance, predictability, impact, cost, and residual risk is a key factor for all involved
The takeaways here include at least the following considerations:
- Identify why such policies, practices, and controls are deployed.
- Determine the root cause these are solving.
- Define the performance expected.
- Measure that performance against the metric.
- Is the performance conforming to objectives?
- Are the metrics appropriate for reaching the conclusion sought, given the root cause and the technology information available?
- Can security compliance program elements be consolidated to address the root causes?
- Can efficiencies be gained by consolidating technology and safeguards?
- Are there architecture opportunities that can be considered?
- Are there business procedure changes that could better enable the business activity and directly improve the overall state of the business?
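A minimal sketch of the measurement loop above: each control carries a stated objective and an observed KPI, and the review compares the two so performance (not habit) justifies the control's continued existence. The control names and numbers are invented for illustration:

```python
# Hypothetical control register: each control states its objective and the
# KPI actually observed; some KPIs (e.g. phishing failure rate) are
# better when lower.
controls = [
    {"name": "patch-sla",  "objective": 0.95, "observed": 0.97},
    {"name": "phish-fail", "objective": 0.05, "observed": 0.12, "lower_is_better": True},
]

def review(controls: list) -> list:
    """Return (name, verdict) pairs comparing each observed KPI to its objective."""
    results = []
    for c in controls:
        if c.get("lower_is_better"):
            ok = c["observed"] <= c["objective"]
        else:
            ok = c["observed"] >= c["objective"]
        results.append((c["name"], "meets objective" if ok else "review root cause"))
    return results

print(review(controls))
# -> [('patch-sla', 'meets objective'), ('phish-fail', 'review root cause')]
```

A failing verdict is the trigger for the root-cause questions above – not an automatic call for more technology.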
There are numerous additional considerations, but as with all enhancements – focus on a small set of tasks and iterate. Over a few cycles, efficiencies will be gained internally, and the practices will begin to transform to reflect the culture and operating habits of the business. A word of caution, though: don’t elongate the process. Once a method is established and advantages are realized, scale rapidly to high-impact areas (the definition may be based on user impact, risk impact, dollars of revenue served, etc.).
The thoughts here are based on personal experience building and designing global security programs. Some elements described may need customization in approach and process based on your own organization’s structure.
Posted in IT Controls, Security
Tagged 2012, best practices, cio, ciso, Compliance, cybersecurity, it compliance and controls, IT Controls, james deluccia, jdeluccia, metrics, performance improvement, regulation