Tag Archives: europe

1 Billion Data Records Stolen in 2014, WSJ

A nice summation of the Gemalto report regarding the data breaches in 2014.

Identity theft was by far the largest type of attack, with 54% of the breaches involving the theft of personal data, up from 23% in 2013.

Data records are defined as personally identifiable information such as email addresses, names, passwords, banking details, health information, and social security numbers.

via 1 Billion Data Records Stolen in 2014, Says Gemalto – Digits – WSJ.

Key points:

  1. Only 4% of the data breached was encrypted – demonstrating encryption’s effectiveness and its continued lack of proper adoption
  2. 78% of breaches were from U.S. companies, followed by the U.K.

Lessons abound, and I am working on publishing a new piece on the evolution of these breaches, and how “we” have misinterpreted the utility of this data.

On a similar topic, please join me in building leading habits for everyday users to minimize the impact of these breaches at my new research project – http://www.hownottobehacked.com.



Who will be the Jamaica Ginger of Information Security?

I read a short section in Bruce Schneier’s book Liars and Outliers that tells the tale of Jamaica Ginger:

“an epidemic of paralysis occurred as a result of Jamaica Ginger… it was laced with a nerve poison… and the company was vilified” – but not until tens of thousands had become victims.  This resulted in the creation of the FDA.

To date, throughout most industries there is no absolute requirement, with meaningful incentives, to introduce and sustain operational information technology safeguards.  There are isolated elements focused on particular threats and fraud (such as PCI DSS for the credit card industry, CIP for the energy sector, etc.).  So what will be the Jamaica Ginger of information security?

Some portend a cyber-war (a real one) that creates sufficient societal disruption – a negative impact sustained long enough to survive the policy development process, with motivation driven hard enough to see it through.  OSHA, the FDA, and other such entities exist as a result of such events.

The best action enterprises can take is to mature and operate programs that sufficiently address their information technology concerns in the marketplace.  This is self-preservation: a (perhaps selfish) demonstration that there is no need for legislation or a new body (a “Federal Security Bureau”, say), and preparation so that, should such a requirement be introduced, the changes to your business would be incremental at best.

Other thoughts?

James DeLuccia

What does the SCADA water pump attack mean to your business…

The ability to attack, compromise, and cause damage has existed since the utility industry began connecting these systems to the Internet.  Examples, including the European nation attacked 24+ months ago, are easy to locate.  Yesterday an attack occurred – more proof of concept than anything it could have really been.  The current public awareness of cyber attacks, the nation-state theater risks, and the transparency of this action have raised awareness beyond the closed professional circles of information security.  There are a number of interesting writeups, and I would suggest carefully reading a few for a balanced perspective.  Two that I would recommend include:

What this means for your Utility company is that the abstract threat modeling exercise that considers these attack vectors should be conducted more thoroughly with real risk and mitigation decisions progressing up to the Board of Directors.

As for everyone else who is a customer of such utility companies, the BCP/DR plans should be updated to reflect the possibility of such a loss of services.  Business enterprise information security / risk management programs (+vendor management) should elevate utility service providers (including cellular operators).  These actions should directly impact the annual/ongoing risk assessments and establish an expectation of security assessment and assurance on a regular basis from these service providers.
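One lightweight way to act on elevating utility providers within a vendor-management program is a simple criticality tiering feeding the annual risk assessment.  A minimal sketch follows – the field names, weights, and tier thresholds here are illustrative assumptions, not drawn from any particular framework:

```python
# Minimal sketch: tier service providers (including utilities and cellular
# operators) by criticality so risk assessments cover them proportionally.
# Weights and thresholds are illustrative assumptions.

def criticality_score(vendor):
    weights = {
        "single_point_of_failure": 3,   # no ready substitute (power, water)
        "supports_revenue_ops": 2,
        "holds_sensitive_data": 2,
        "assessed_last_12_months": -1,  # recent assurance lowers residual risk
    }
    return sum(w for key, w in weights.items() if vendor.get(key))

vendors = [
    {"name": "Regional power utility", "single_point_of_failure": True,
     "supports_revenue_ops": True},
    {"name": "Cellular operator", "supports_revenue_ops": True,
     "assessed_last_12_months": True},
]

for v in sorted(vendors, key=criticality_score, reverse=True):
    score = criticality_score(v)
    tier = "high" if score >= 4 else "medium" if score >= 2 else "low"
    print(f"{v['name']}: score={score}, tier={tier}")
```

The point is not the particular weights but that utility and telecom providers score against the same criteria as any other vendor, rather than being skipped by default.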

It is an interesting quandary that cloud service providers are vetted and assessed more rigorously than utility service providers – the original cloud.

Thoughts .. challenges?

James DeLuccia IV



#RSAC Panel on Cloud – security and privacy

Security, privacy, and liability are top issues for #Cloud security and a popular topic at #RSAC (RSA SFO 2011 Conference).  The first session today was moderated by Drue Reeves (Gartner), with panelists Michelle Dennedy (iDennedy), Tanya Forsheit (InfoLawGroup LLP), Archie Reed (HP), and Eran Feigenbaum (Google).  A great discussion with lots of interesting points, and I would highlight the ideas for managing security pragmatically within organizations.  Below are my notes.  Apologies for any broken flows in logic – while trying to capture ideas put forward by the panel I sometimes got lost in the discussion.

Customers cannot rely only on provider to ensure data confidentiality and compliance, but are seeking assurances.


  • Eran Feigenbaum (Google) – Customers want more transparency in the Cloud generally.  Google is seeing smaller companies move into the cloud, and the service and type of cloud sought varies.  Clouds also vary in their ability to serve (Gmail in 2010 had uptime of 99.984%).
  • Panel – Due diligence is necessary on both sides of the customer–cloud provider model, and both sides must get a fair assessment of what is happening today.  Understanding what the customer is doing individually creates an ‘honest conversation’.  Create a performance and services assessment of internal delivery (corporate data center and software services) and then determine which Cloud providers meet the current and future target state.  Understanding what is essential to your business is critical to having reasonable expectations and a proper cost/benefit return.

Legal, procurement, internal audit, business, and technology team members must get together to determine what is important and rate these items.  This then can allow for a better data set identification and procurement of service providers.

  • The end result is that the business needs to determine its risk tolerance – what it is willing to accept.  The universe of Cloud providers allows businesses to identify those that can meet and demonstrate adherence to the criteria that matter to the business.

What matters is focusing on the dataset, and considering the period of time involved.  The dataset released to the cloud must meet your internal safeguard and risk-tolerance criteria.

  1. Set Principles first – save money, keep agility, achieve availability
  2. Check application – is it generating revenue; does it create a loss of life scenario
  3. Keeping it in-house does not eliminate the risk vs. having it in the cloud.

Must focus at the strategic level …

Shadow IT, an example:

  • Shadow IT is a problem and is still ongoing.  One example from a security survey: at a bank in Canada, the marketing department ran a survey in Salesforce.com.  The problem was that, by using the system, the data of private Canadian citizens was crossing the U.S. border – which is against the law.  This required a re-architecture effort to correct these activities.

There is a need for awareness and education on the implications of engaging cloud providers and how the flow of datasets impact the business’ legal obligations.

Consumer Technology in Business:

  • Eran – 50% of people surveyed installed applications that are not allowed by their corporations and IT.  The consumerization of technology is creating complex and intertwined technology ecosystems that must be considered by the business, risk management, legal, and security.
  • It is your responsibility to do the due diligence on what the cloud providers are doing to provide assurance, and work with those that provide such information.  The necessity is a balance between providing sufficient information security confidence and mapping out attack vectors for criminals.

Google Growth rate on Cloud:

  • 3,000 new businesses are signing up on the Google cloud every day – impossible to respond uniquely to each one individually.

Data Location

  • It is up to the customer on knowing what are the legal aspects and appropriate uses of the business data.  Understanding the transportation of sensitive data across borders is the business responsibility.
  • It is up to the business to understand and act to protect the data of the business – pushing the information onto a Cloud provider is not a transfer of risk / ownership / responsibility.

If you had the chance today to rebuild your systems, would you do it the same way?

  • Cloud does provide unique technologies beyond what you already have today.  Cloud providers have been able to rebuild their data centers around today’s data architectures and leverage new technology.

Points of reality and impossibility

  • If an organization does not have deep Identity and Access Management (IAM), it is a poor idea to try to bolt this on while transitioning to the cloud.  Reasonable expectations must be held by both the consumer and the cloud provider.

Liability and Allocation between Customers and Clouds

  • Customers with data in their own data centers are basically self-insuring their operations.  When moving to the Cloud these customers are now transferring this to a third party.  There is a financial aspect here.  How can liability be balanced between customer and service provider?
  • When the customer absorbs all liability, they are hesitant to put certain data in the Cloud.  If the Cloud provider absorbs the liability, the cost will be too high.

Data in Space

  • People are putting data on the cloud based on rash decisions without unique risk assessments on the data sets and providers.

Agreeing on Liability in the Cloud

  • Organizations have been able to negotiate liability clauses with cloud providers.  Ponemon Institute figures are used in determining the limit of liability and are a good way of arriving at a number in line with industry figures.  For example, if the Ponemon Institute says the cost of a breach is $224 per record and the business has 20,000 employee records, the limit of liability should equal the product of those two numbers.  This has proven to be a reasonable discussion with cloud providers.  Indemnification is generally a non-discussion point.
  • The world will move toward specialized applications and services.  These point organizations allow for specific legal and technology considerations appropriate to that niche.  This is seen at the contract level, in notification levels, in prioritization of RTR, and across many areas.
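The limit-of-liability arithmetic described above is simple enough to sketch, using the figures quoted in the example (a published per-record breach cost times the number of records held by the provider):

```python
# Limit-of-liability estimate: published cost-per-breached-record figure
# times the number of records entrusted to the provider.
# Figures are the ones cited in the example above.
cost_per_record = 224         # e.g., a Ponemon cost-per-breached-record figure
records_at_provider = 20_000  # employee records held by the cloud provider

limit_of_liability = cost_per_record * records_at_provider
print(f"Suggested limit of liability: ${limit_of_liability:,}")
# -> Suggested limit of liability: $4,480,000
```

The current published per-record figure should be substituted for the $224 used here when negotiating.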

Everything is negotiable for the right amount of money or love – Eran

  • Cloud providers do not like to do one-offs.  Cloud providers including Google will negotiate.

APPROACH to cleanse data with confidence

  • Best tip is to encrypt data online.  When de-provisioning systems and cleansing, consider fully rewriting databases / applications / instances with clean values.  Is this a practical method of ensuring the data is sanitized?  How long should the data be kept in this state to ensure the clean values propagate to other parallel instances?
  • Are PCI, SIG, and similar standards for financial services appropriate for the Cloud provider?  The responsibility always remains with the data owner.  Internal controls must be migrated out to the cloud as evenly as they were applied internally.  It is the business’ risk and responsibility.
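The rewrite-with-clean-values idea can be sketched as follows.  Whether one pass suffices, and how long replicas take to converge, are exactly the open questions raised above; the table and column names, and the use of sqlite3 as a stand-in datastore, are assumptions for illustration:

```python
# Sketch: before de-provisioning, overwrite sensitive columns with clean
# values so the live records (and, eventually, any replicas) no longer hold
# real data.  Table/column names are hypothetical; sqlite3 stands in for
# whatever datastore the cloud instance actually uses.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, ssn TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice', '123-45-6789')")

# Overwrite rather than delete: deleted rows may linger in free pages,
# whereas an UPDATE replaces the stored values in place.
conn.execute("UPDATE customers SET name = 'REDACTED', ssn = '000-00-0000'")
conn.commit()

print(conn.execute("SELECT name, ssn FROM customers").fetchall())
# -> [('REDACTED', '000-00-0000')]
```

On a managed cloud datastore the overwrite must also be given time to propagate to snapshots and parallel instances, which is the unresolved timing question above.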

Recommendations of the Panel

Archie Reed:  Everyone becomes a broker, and I recommend that IT teams embrace this role.  They need to understand how to source, and the chemistry and structure of the IT organization need to shift.  It will and must include working with the business to bring in such parties as legal, internal audit, and risk management.

Tanya Forsheit:  I would love to see standards developed and the customers participate in a meaningful way.  The provider side has thought through these seriously over the last few years.  The business to business relationship within the Cloud – Customer relationship is weak.  Be reasonable.

Eran: There is a paradigm shift from a server you can touch and be managed by an Admin that you hired vs. one that is acquired by a contract through a Cloud providers.  Google has over 200 security professionals.  Bank robbers go where the data is – the Cloud has the data.

How do you respond to a vulnerability, how do you respond to a hack … ARE THESE the new / right questions to seek of Cloud providers?

Michelle Dennedy: Leverage and plan for a loss with cloud providers.

Drue:  There are risks you can identify to mitigate risks on the technology side, and there are financial tools (insurance, etc…) that must be deployed.

Question and Answer:

  • Cloud providers have the opportunity to have a dashboard to track and demonstrate controls.  These are hard we know.
  • FedRamp and continuous auditing is a future component of the Cloud providers (that some) will adhere to and demonstrate.

An engaging panel and some interesting and useful points raised.  Welcome any feedback and expansions on the ideas above,

James DeLuccia

Visa allows international Merchants to not demonstrate PCI DSS compliance

On February 11, 2011, Visa announced an interesting program that promotes the fraud-deterrence strength of the Europay, MasterCard and Visa (EMV) smartcard standard.  Merchants with at least 75% of their transactions originating from smartcard-enabled terminals – terminals equipped to accept both contact-based and contactless transactions – will not have to demonstrate compliance.  This is perhaps a reflection of Visa weighing the risks and benefits of the technology from a risk-management point of view.  A win for merchants certainly, as this technology is widely adopted in many parts of the world.

As a reminder, all organizations within the Payment Card Industry must be compliant with the data security standard; the nuance of demonstration / attestation is based on the channel and transaction volume for each individual card brand.  This Visa program does not impact the other card brands, so international merchants will still need to consider them within their global compliance and security programs.

An interesting writeup on the article is available at Computerworld here.  The press release is here.

The deployment in the U.S. requires adoption at both the merchant level and by consumers.  It would be interesting to compare the costs of the EMV architecture vs. the compliance costs of organizations.  I also wonder whether the net benefit of requiring security controls to be meaningfully applied to sensitive data (in this case PCI) raises all the “boats” (read: other sensitive data types), as it becomes more likely that security safeguards are applied broadly.  Is this demonstrated by the 28% reduction in identity thefts in 2010?

See you in San Francisco at RSA 2011,

James DeLuccia

End to End Resilience .. ENISA.. Cloud..

The beautiful opportunity with distributed computing, globalization, and cloud services is the ability to scale and run complex environments around the globe.  This is balanced, of course, by assurance that operations are occurring as you expect, are managed properly, and are protected to secure the competitive intelligence of the business.  Especially interesting has been the movement toward centralizing a company’s data centers into super data centers.

Together these points are raised, and possibly met, by the ENISA (European Network and Information Security Agency) report that highlights the decisive factors of an end-to-end resilient network.  The report can be found directly at this link location.

An interesting challenge, highlighted by what appears to be Egypt’s government shutting down the internet, is how these distributed cloud systems are managed if they are cut off from their administrative consoles.  A consideration for all businesses, and perhaps an appropriate addition to business continuity and similar planning risk documents, is the following:

Can the business’ systems function autonomously when the primary controls and administrative connections are lost?
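One way to make that question testable is a watchdog that detects loss of the administrative channel and drops into a pre-approved autonomous mode.  A minimal sketch – the console address and the fallback behavior here are assumptions, not a prescription:

```python
# Sketch: can the system keep operating when its administrative console is
# unreachable?  The console host and the fallback policy are assumptions.
import socket

def admin_console_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to the admin console succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False

def operating_mode(console_host="admin.example.internal", console_port=443):
    if admin_console_reachable(console_host, console_port):
        return "managed"     # normal operation, config pushed centrally
    return "autonomous"      # last-known-good config, local decisions only

print(operating_mode())
```

The hard part, of course, is not the check but deciding in advance what “autonomous” is allowed to do – which is precisely the planning question posed above.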

Perhaps a lesson could be gained by the masterful administration of the bot-net armies that leverage dark and shifting network clouds.

I would be interested in the implications that arise as a result of this disconnection of a country, and the potential for other countries to follow (whether due to more direct action, or as an indirect result of efforts to further contain internet traffic).

Come join me and others in San Francisco, where I will be speaking at RSA.  Stop by, let’s catch up, and I am looking forward to great debates (as always).

James DeLuccia

RSA Europe Conference 2009, Day 3 Recap

I attended the session first thing in the morning on Mal-ware with Michael Thumann of ERNW GmbH.  He gave quite a bit of technical detail on how to methodically go through the detection and investigation process highlighting specific tools and tricks.  His slides are available online and provide sufficient detail to be understood out of the presentation context.  Definitely recommend downloading them!  A few nuggets I took away:

Malware best practices:

  • When you have been targeted by a malware attack unknown to anti-virus – specifically one against key persons within the organization – the code should be preserved and reverse engineered by a professional, to uncover what techniques were employed and included in the application and to understand the risk landscape.
  • Most (Windows) malware is designed to NOT run on virtualized VMware systems, because these systems are used primarily in anti-malware reverse-engineering environments.  Two points follow: first, virtualized client workstations sound like a reasonable defense for some organizations; second, be sure to defeat VMware-identification efforts when building such labs (check using tools such as ScoopyNG).
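For illustration, one of the simplest VM-detection heuristics such malware uses is checking the guest NIC against VMware’s well-known MAC address (OUI) prefixes.  A hedged sketch follows; real samples rely on lower-level tricks (such as the VMware I/O “backdoor” port) of the kind ScoopyNG exercises:

```python
# Sketch of one trivial VM-detection heuristic of the kind malware uses:
# VMware assigns guest NICs MAC addresses from a handful of known OUI
# prefixes.  Real samples also use lower-level checks (e.g., the VMware
# I/O "backdoor" port), which tools like ScoopyNG exercise.
import uuid

VMWARE_OUIS = {"00:05:69", "00:0c:29", "00:1c:14", "00:50:56"}

def looks_like_vmware(mac=None):
    if mac is None:  # default: this host's MAC, via uuid.getnode()
        raw = "{:012x}".format(uuid.getnode())
        mac = ":".join(raw[i:i + 2] for i in range(0, 12, 2))
    return mac.lower()[:8] in VMWARE_OUIS

print(looks_like_vmware("00:0C:29:12:34:56"))  # -> True
print(looks_like_vmware("3c:22:fb:aa:bb:cc"))  # -> False
```

A defender building a malware lab would aim to make checks like this one return False, e.g. by overriding the guest MAC address.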

Virtualization panel moderated by Becky Bace with Hemma Prafullchandra, Lynn Terwoerds, and John Howie;

The panel was fantastic – best of the entire conference and should have been given another hour or a keynote slot.  Becky gave great intelligence around the challenges and framed the panel perfectly.  There was an immense amount of content covered, and below are my quick notes (my apologies for broken flows in the notes):

Gartner projections:

  • 2009: 4M virtual machines
  • 2011: 660M
  • 2012: 50% of all x86 server workloads will be running on VMs

Nemertes Research:

  • 93% of organizations are deploying server virtualization
  • 78% have virtualization deployed

Morgan Stanley CIO Survey stated that server virtualization management and admin functions for 2009 include: Disaster recovery, High availability, backup, capacity planning, provisioning, live migration, and lifecycle management (greatest to smallest)
Currently, comparing advisories vs. vulnerabilities, patches are still leading actual vulnerabilities – i.e., the virtualization companies are fixing things before they become vulnerabilities, by a fair degree.

Questions to consider?

  1. How does virtualization change/impact my security strategy & programs?
  2. Are the mappings (policy, practice, guidance, controls, process) more complex?   How can we deal with it?
  3. Where are the shortfalls, landmines, and career altering opportunities?
  4. What are the unique challenges of compliance in the virtualized infrastructure?
  5. How are the varying compliance frameworks (FISMA, SAS 70, PCI, HIPAA, SOX, etc) affected by virtualization?
  6. How do you demonstrate compliance? e.g., patching or showing isolation at the machine, storage, and network level
  7. How do you deal with scale and rate of change?  (Asset/patch/update) mgmt tools?
  8. What’s different or the same with operational security?
  9. Traditionally separation was king – separation of duties, network zones, dedicated hardware & storage per purpose and such – now what?
  10. VMs are “data” – they can be moved, copied, forgotten, lost and yet when active they are the machines housing the applications and perhaps even the app data – do you now have to apply all your data security controls too?  What about VMs at rest?


  • Due to the rapidity of creation and deletion it is necessary to put in procedures and process that SLOWS down activities to include review and careful adherence to policies.  The ability to accidentally wipe out a critical machine or to trash a template are quick and final.  Backups do not exist in shared system / storage environment.
  • Better access control on WHO can make a virtual machine; notification of creation to a control group; people forget that there are licensing implications (risk 1); it is so cheap to create a VM; people are likely to launch VMs to fill the existing capacity of a system; storage must be managed; VMs are just bits and can fill space rapidly; VMs can be forgotten and must be reviewed; audit VMs on disk and determine use and implementation; VMs allow “Self-Service IT” where business units can reboot and operate systems; businesses will never delete a virtual machine; set policy that retires VMs after X period, then archives, and then deletes after Y period; create policy akin to user access policies.
  • In the virtualization world you have consolidation of 10 physical servers on 1 server .. when you give cloud-guru access you give them access to all systems and data and applications.
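The retire-then-archive-then-delete policy mentioned above maps naturally onto a scheduled sweep over the VM inventory.  A sketch, where the X and Y periods are placeholder policy values:

```python
# Sketch: VM lifecycle policy akin to a user-access review.  A VM idle past
# the retire threshold is archived; once archived past the delete threshold,
# it is removed.  The 90/365-day thresholds are placeholder policy values.
from datetime import date, timedelta

RETIRE_AFTER = timedelta(days=90)   # X: idle time before archiving
DELETE_AFTER = timedelta(days=365)  # Y: total idle time before deletion

def lifecycle_action(last_used, today=None):
    today = today or date.today()
    idle = today - last_used
    if idle >= DELETE_AFTER:
        return "delete"
    if idle >= RETIRE_AFTER:
        return "archive"
    return "keep"

# Hypothetical inventory swept on a fixed review date.
review_date = date(2011, 11, 20)
for name, last_used in [("build-server", date(2011, 10, 1)),
                        ("old-demo-vm", date(2011, 5, 1)),
                        ("forgotten-vm", date(2010, 6, 1))]:
    print(name, lifecycle_action(last_used, review_date))
```

The same sweep doubles as the audit of forgotten VMs on disk that the notes call for.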


  • People that come from highly regulated industries require you to put your toe in where you can, starting with redundant systems to give flexibility.  The lesson learned is managing at a new level of complexity, which brings forward issues that already existed within the organization and are now emphasized.
    We need some way to manage that complexity: while there are cost pressures and variables driving us toward virtualization, we need ways to manage these issues.

  • Must show that virtualization will offset the cost in compliance and lower or hold the cost bar – otherwise impossible to get approval to deploy technology.

  • The complexity you are trying to manage includes the risk and audit folks.  This means internal and external bodies.

John Howie

  • Utility computing will exacerbate the problem: instantiating an Amazon instance is preferred over using internal resources.  The challenge of getting users to not go and set up an Amazon server is at the forefront – saying no and imposing penalties are not the right pathway; we must find positive, pro-business rewards for bringing such advantages internal.
  • Finance is saying “take the cheapest route to get this done” … Capex to OpEx is easier to manage financially.
  • Is there a tool / appliance that allows you to do full lifecycle machines?  Is there a way to track usage statistics of specific virtual machines?  -> answer yes, available in the systems.

GREATEST concern of the panel:  Shadow IT systems are not tracked and do not have a lifecycle.  The deployment of systems on Amazon is a black hole – especially because employees can pay with credit cards.  Is the fix as simple as having the company set up an account for employees to use?

Classic Apply slide:  (download the slides!)

  • Do not IGNORE virtualization – it is happening:
  • Review programs and strategies as they affect virtualization / cloud
  • Existing security & compliance tools and techs will not work….

The other session I enjoyed on the conference was the Show Me the Money: Fraud Management Solutions session with Stuart Okin of Comsec Consulting and Ian Henderson of Advanced Forensics:


  • Always conduct forensic analysis
  • Fraudsters historically hide money for later use post-jail time.
  • Consider IT forensics and be aware of disk encryption – especially if it was an IT administrator and the system is part of the corporation.  Basically – be sure of the technology in place and carefully work forward.
  • Synchronize time clocks – including the CCTV and data systems
  • Be aware of the need for quality logs and court-admissible evidence
  • There are many tools that can support the activities of a fraud investigation and daily operations, but the necessity is complete and sufficient logs.  Meaning that the logs have to be captured and they have come from all the devices that matter.  Scope is key to ensuring full visibility.
  • Make sure someone is responsible for internal fraud
  • Management must institute a whistle-blowing policy with a hotline
  • Be attentive to changes in behavior
  • Obligatory vacation time
  • Ensure job rotation


  • Audit employee access activation and logging
  • Maintain and enforce strict separation of duties
  • Pilot deterrent technologies (EFM) <– as the use of these will highlight problems in regular operations and help lift the kimono momentarily allowing aggressive improvements.

Overall a great conference.  Much smaller than the San Francisco Conference, but the result is better conversations; deeper examination of topics, and superior networking with peers.

Till next year,

James DeLuccia IV