
Hardware failure can lead to 70% breakout in Cloud / virtualization setup

Google released details on how an attacker can take advantage of the physical design and setup of some memory chips in computers. The exploit works by repeatedly setting and releasing a charge on one memory block until the charge leaks over into the neighboring block (simplifying here). Stated another way – imagine cutting an onion and then using the same knife to cut a tomato… the taste of the onion would definitely transfer to the tomato, ask any toddler 😉

  • What does this mean to enterprises – well, it is early, but this type of risk to an organization should be addressed and covered by your third-party supplier / procurement security team. Leading organizations are already vetting hardware vendors and the components included in each purchase to prevent malicious firmware and snooping technology.
  • In addition, the supplier team managing all of the deployed cloud and virtualization relationships (your Cloud Relationship Manager) should begin a process of reviewing their provider evaluations.

Of course this is a new disclosure and the attack is not simple, but that doesn’t mean it won’t or couldn’t occur.

The attack identified by Google plus the virtualized environment creates a situation where an attacker “…can design a program such that a single-bit error in the process address space gives him a 70% probability of completely taking over the JVM to execute arbitrary code” – Research paper

Given the probability of success, it is definitely valuable to have this on your risk and supplier program evaluations.

Here is the full analysis by Google and the virtualized research paper.

Best,

James DeLuccia


FedRAMP on the Cloud: AWS Architecture and Security Recommendations

In December Amazon released a nice guide with architecture layouts + tips across the NIST 800-53 standard. This is an important tool for ANY business looking to accelerate their operations into a distributed system model.

I took a couple of things away from this PDF – the first is that every company moving to the cloud should read this document, as it provides an architecture layout that is critical in planning; the second is that it has numerous nuggets of awesome sprinkled throughout – an example:

 Many of the SAAS service providers do not have a FedRAMP ATO, so using their services will have to be discussed with the authorizing official at the sponsoring agency. (pg. 28) <– sounds simple, but very costly if done under hopeful assumptions of acceptance!

Regarding the need to harden a base system:

AWS has found that installing applications on hardened OS’s can be problematic. When the registry is locked down, it can be very difficult to install applications without a lot of errors. If this becomes an issue, our suggestion is to install applications on a clean version of windows, snapshot the OS and use GPOs (either locally or from the AD server) to lock down the OS. When applying the GPOs and backing off security settings, reboot constantly because many of the registry changes only take effect upon reboot.
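
The sequence AWS describes – install on a clean OS, snapshot it, then lock down with GPOs at launch – also lends itself to automation. Below is a minimal sketch, assuming boto3 credentials, a region, and a hypothetical baseline instance ID, of capturing that "snapshot the OS" step as an AMI and tagging it so the compliance package can reference the exact golden image later deployments start from:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Hypothetical instance holding the clean Windows build described above
BASELINE_INSTANCE_ID = "i-0123456789abcdef0"

# "Snapshot the OS": capture the clean build as an AMI before GPO lockdown is applied
image = ec2.create_image(
    InstanceId=BASELINE_INSTANCE_ID,
    Name="windows-baseline-pre-gpo",
    Description="Clean Windows install; GPO hardening applied at launch time",
    NoReboot=False,  # allow a reboot so the snapshot is filesystem-consistent
)

# Tag the image so the compliance / FedRAMP package can reference the exact baseline
ec2.create_tags(
    Resources=[image["ImageId"]],
    Tags=[
        {"Key": "Hardening", "Value": "gpo-at-launch"},
        {"Key": "Owner", "Value": "cloud-security"},
    ],
)
print("Golden image registered:", image["ImageId"])
```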

A bit about the White paper as described by Amazon:

Moving from traditional data centers to the AWS cloud presents a real opportunity for workload owners to select from over 200 different security features (Figure 1 – AWS Enterprise Security Reference) that AWS provides. “What do I need to implement in order to build a secure and compliant system that can attain an ATO from my DAA?” is a common question that government customers ask. In many cases, organizations do not possess a workforce with the necessary real-world experience required to make decision makers feel comfortable with their move to the AWS cloud. This can make it seem challenging for customers to quickly transition to the cloud and start realizing the cost benefits, increased scalability, and improved availability that the AWS cloud can provide.

A helpful guide, and glad to see a major Cloud provider enabling its clients to excel at information security operations – in this case, FedRAMP.

Methodology for the identification of critical connected infrastructure and services – SAAS, shared services

ENISA released a study with a methodology for identifying critical infrastructure in communication networks. While this is important and valuable as a topic, I dove into this study for a particularly selfish reason … I am SEEKING a methodology that we could leverage for identifying critical connected infrastructure (cloud providers, SAAS, shared services internal to large corporations, etc.) for the larger public/private sector. Here are my highlights – I would value any additional analysis, always:

  • Challenge to the organization: “..which are exactly those assets that can be identified as Critical Information Infrastructure and how we can make sure they are secure and resilient?”
  • Key success factors:
    • Detailed list of critical services
    • Criticality criteria for internal and external interdependencies
    • Effective collaboration between providers (internal and external)
  • Interdependency angles:
    • Interdependencies within a category of service
    • Interdependencies between categories of services
    • Interdependencies among data assets
  • Establish baseline security guidelines (due care):
    • Balanced to business risks & needs
    • Established at procurement cycle
    • Regularly verified (at least w/in 3 yr cycle)
  • Tagging/Grouping of critical categories of service (a minimal sketch of such a register follows this list)
    • Allows for clean tracking & regular security verifications
    • Enables troubleshooting
    • Threat determination and incident response
  • Methodology next steps:
    • Partner with business and product teams to identify economic entity / market value
    • Identify the dependencies listed above and mark criticality based on entity / market value
    • Develop standards needed by providers
    • Investigate how monitoring to standards can be managed and achieved (in some cases contracts can support you, others will be a monopoly and you’ll need to augment their processes to protect you)
    • Refresh and adjust annually to reflect modifications of business values
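
To make the criticality criteria, verification cycle, and tagging above concrete, here is a minimal sketch of a critical-service register; the service names, the one-million value threshold, and the criticality rule are illustrative assumptions of mine, not figures from the ENISA study:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

VERIFICATION_CYCLE = timedelta(days=3 * 365)  # "at least w/in 3 yr cycle"

@dataclass
class ConnectedService:
    name: str
    category: str               # e.g. SAAS, shared service, cloud provider
    market_value: float         # economic entity / market value from the business
    dependencies: list = field(default_factory=list)
    last_verified: date = date.min

    @property
    def critical(self) -> bool:
        # Illustrative criterion: value threshold, or criticality inherited from dependencies
        return self.market_value >= 1_000_000 or any(d.critical for d in self.dependencies)

    def verification_overdue(self, today: date) -> bool:
        return self.critical and (today - self.last_verified) > VERIFICATION_CYCLE

# Example register (hypothetical services)
payroll = ConnectedService("payroll-saas", "SAAS", 2_500_000, last_verified=date(2012, 1, 15))
sso = ConnectedService("corporate-sso", "shared service", 400_000, dependencies=[payroll])

today = date(2015, 3, 1)
for svc in (payroll, sso):
    print(svc.name, "critical:", svc.critical, "verification overdue:", svc.verification_overdue(today))
```

Once criticality and last-verified dates live in one structure, the regular security verifications become a simple query instead of a scramble.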

I hope this breakout is helpful. The ENISA document has a heavy focus on promoting government / operator ownership, but businesses cannot rely on or wait for such action and should move accordingly. The above is heavily modified and original thinking based on my experience with structuring similar business programs. A bit about ENISA’s original intent of the study:

This study aims to tackle the problem of identification of Critical Information Infrastructures in communication networks. The goal is to provide an overview of the current state of play in Europe and depict possible improvements in order to be ready for future threat landscapes and challenges. Publication date: Feb 23, 2015 via Methodologies for the identification of Critical Information Infrastructure assets and services – ENISA.

Best, James

Product development – Battlefield leadership series: WN60 – defensive positions by Germans at Omaha Beach

Leading up to the invasion of Normandy (read this book on the topic – a two-week, perspective-shifting, emotional journey), the leaders of each side had differing ideas about when an invasion should and would occur. The Allies concluded that low-to-mid-tide times were best, and the Germans believed that the Allies would prefer to invade during high tide.

The Germans built obstacles around the Omaha Beach shore. They placed mines throughout the beach that would be hidden during high tide. With gun emplacements along the cliffs, the Germans were confident this arrangement would be ideal for protecting their positions. After preparations were finished, the Germans had dozens of gun emplacements providing criss-crossing machine gun fire over the entirety of Omaha Beach. As history shows, the Allied casualty rate indicates exactly how successful these emplacements were.

In preparation for the attack, the Allies took the opposite perspective. Low tide provided easy exit pathways later at high tide. Low tide also allowed the Allies to see the obstacles, carefully avoid them, and more easily destroy them. During the battle, the removal of obstacles allowed for a continued, steady landing of forces after the initial invasion.

The Allies won; they got Omaha Beach. They were able to exploit gaps in the German defensive strategy through the application of carefully planned actions.

Business Reflections…

In a free market world, there is always someone who sees an opportunity that others do not. The advantages to each opportunity are weighed and measured. The result can be great or the complete opposite. During the invasion of Normandy, fire from the Germans required the infantry on the ground to adjust from the original plan (most Allied troops were landed in the wrong zones, without the equipment they needed, and the general leadership structure was fractured due to the loss of so many soldiers at the landing). This ability — the ability to go off course of the original plan in order to find success in the heat of battle — is crucial to businesses and their teams.

Leaders are not always on the ground and cannot be effective if the teams have to seek out answers prior to taking an initiative. The successful Allies learned from prior landings to implement the following (all applicable to businesses as well):

  1. Training, a lot of training. The troops were trained clearly, relentlessly, and aggressively. The training included hands-on challenges with similar landscape and environmental hurdles.
  2. Building culture. Teams, squads, packs, etc. of individuals were grouped together, in most cases, since enlisting. These groupings created mass cohesiveness and inspired troops to push themselves and their fellow soldiers further than they thought possible (as in the desire to ‘stand strong in front of their comrades’).
  3. Unit command – localized leadership and decision making allowed the teams to respond, re-group, and deploy without micro-managed leadership (the Germans required authority to engage and move assets, and thus were too late to be effective in resisting the invasion force).

Leaders must consider how they are embracing the above, and how they have made themselves leaders instead of micro-managers with teams executing check-sheets. 


What is Battlefield Leadership and what is this series about … 

This is the second paper in this series. As part of my pursuit to learn and grow, I sought out the excellent management training team at Battlefield Leadership. I am professionally leveraging this across multi-million dollar projects I am overseeing (currently I am the lead executive building global compliance and security programs, specifically in the online services / cloud leader space). Personally, I am bringing these lessons to bear within my pursuit to cross the chasm. Too often I see brilliant technical individuals fail to communicate with very smart business leaders and with the common person on the street. My new book – How Not to be hacked – seeks to be a first step in bringing deep information security practices beyond the technologist.

Most exciting, the Battlefield group placed this training in Normandy, France. This allowed senior executives to be trained in a setting where serious decisions were made by both sides, and each provided a lesson. This series represents my notes (those I could take down) and takeaways. I share them to continue the conversation with the great individuals I met, and with the larger community.

Kind regards,

James


How to do DevOps – with security not as a bottleneck

As with any good morning, I read a nice article written by George Hulme that got me thinking on this topic; that led to a discussion with colleagues in the Atlanta office, and resulted in me drawing crazy diagrams on my iPad trying to explain sequencing. Below I share my initial thoughts and diagrams for consumption and critique to improve the idea.

Problem Statement: Is security a bottleneck to development – and likely more so in a continuous delivery culture?

Traditional development cycles look like this …

  1. A massive amount of innovation and effort occurs by developers
  2. Once everything works to spec, it is sent to security for release to Ops
  3. In most cases security “fights” the release, and in a few cases fails it and asks developers to patch (patching itself implies a fix rather than a real solution), and then
  4. a final push through security to Ops
There are many problems here, but to tackle the first myth – security is a bottleneck here because that is how it is structurally placed in the development cycle.
On a time basis (man-days; duration; level of work), security is barely even present on the product develop-to-deploy timeline – this is akin to thinking that man has been on Earth for a long time, a mistake when taken relative to the creation of the planet… but I digress.
Solution: In a continuous development environment – iterate security cycles

As Mr. Hulme highlighted in the article, integration of information security with development through automation will certainly help scale #infosec tasks, but there is more. Integrate through rapid iterations and feedback loops (note, in the diagram, the attempt at coloring the feedback loops by infosec & ops, joined and consistent with the in-flight development areas).
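
As one way to picture "iterate security cycles," here is a minimal sketch of a per-iteration security gate; the scanners (bandit, pip-audit) and the paths are placeholders for whatever checks a given pipeline actually runs, not a prescribed toolchain:

```python
import subprocess
import sys

# Hypothetical per-iteration security checks; the commands are placeholders for
# whatever scanners your pipeline actually uses (SAST, dependency audit, etc.).
CHECKS = [
    ("static analysis", ["bandit", "-r", "src/", "-q"]),
    ("dependency audit", ["pip-audit", "-r", "requirements.txt"]),
]

def run_security_gate() -> int:
    """Run each check inside the iteration so findings surface with the in-flight work,
    not as a single 'fight' at the end of the release."""
    failures = 0
    for name, cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else "FINDINGS"
        print(f"[{name}] {status}")
        if result.returncode != 0:
            print(result.stdout)
            failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if run_security_gate() else 0)
```

Run inside every iteration, findings surface alongside the in-flight work instead of accumulating into the single end-of-cycle “fight” described above.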

While high level, I find as I work with leadership within organizations that clearly communicating and breaking out the benefits – to their security posture, their ability to hold market launch dates, and the clarity of their technology attestations – is equally as important as the code base itself. Awareness and comprehension of the heavy work being done by developers, security, Ops, and compliance audit teams allows leadership to provide appropriate funding, resources, governance, monitoring, and timelines (time is always the greatest gift).

How have I developed my viewpoint?

I have been spending an increasing amount of time these past few years working with service provider organizations and global F100 companies. The common thread I am finding is the acceleration of, and dependency on, third-party providers (Cloud, BPO, integrated operators, etc.), and in the process I have had an interesting role with continuous delivery and high-deploy partners. Specifically, my teams have audited and implemented global security compliance programs for those running these high-deploy environments, and sought to establish a level of assurance (from the public accounting audit perspective) and security (actually being secure, with better, sustainable operations).

Best,

James


Sony PSN hack of 100M+ accts executed from Amazon EC2

The PlayStation breach at Sony has gotten reasonable publicity, but little intelligence on the attack, methods, and results has been shared – certainly not enough to enable others to learn and be more resilient. A nice article on The Register details information indicating that the attackers leveraged the power of Amazon EC2 to execute the attack as paid customers.

The article can be found here http://ow.ly/1sY8TW with links to the Bloomberg article here (http://www.bloomberg.com/news/2011-05-13/sony-network-said-to-have-been-invaded-by-hackers-using-amazon-com-server.html)

While leveraging these cloud services is not new, what is intriguing and worth deeper consideration is how much further we can extend the cloud beyond what is already being applied by companies and security researchers. Supercomputer processing, rapid instant access, and global accessibility – yet still being used uncreatively to host web sites and such?!? Using the example from the article, if one can spend less than a dollar to break good encryption, could we not also leverage that for rotating keys at a similar cost-benefit model?
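
On the key-rotation thought: the managed services that arrived later make this nearly free. A minimal sketch, assuming boto3 credentials and AWS KMS (a service that post-dates this post), of letting the provider rotate key material automatically:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")  # region is an assumption

# Create a customer-managed key (or reuse an existing KeyId / alias you already have)
key = kms.create_key(Description="example key for rotation demo")
key_id = key["KeyMetadata"]["KeyId"]

# Let the provider rotate the backing key material automatically
kms.enable_key_rotation(KeyId=key_id)

status = kms.get_key_rotation_status(KeyId=key_id)
print("Rotation enabled:", status["KeyRotationEnabled"])
```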

I digress; the consideration of clouds being weaponized harks back to the days of defending by blocking entire countries’ IP address blocks. Perhaps naive in its simplicity, but when customers become robots (like Amazon’s Mechanical Turk) then these cloud IP addresses need to be reconsidered. Looking forward to a greater discussion here…
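
On the IP-address angle, AWS now publishes its public ranges, which at least makes "reconsidering these cloud IP addresses" mechanical. A minimal sketch using that published feed (how you act on the list is a risk-appetite decision, not something this snippet decides):

```python
import json
import urllib.request

# AWS publishes its public IP ranges; one input for deciding how to treat cloud-sourced traffic
AWS_IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(AWS_IP_RANGES_URL) as resp:
    data = json.load(resp)

# Collect the EC2 prefixes – the ranges an attacker renting instances would come from
ec2_prefixes = sorted({p["ip_prefix"] for p in data["prefixes"] if p["service"] == "EC2"})

print(f"{len(ec2_prefixes)} EC2 CIDR blocks as of {data['createDate']}")
for cidr in ec2_prefixes[:5]:
    print(cidr)
```

Blindly blocking these ranges also blocks legitimate customers, so feeding them into rate limiting, alerting, or step-up authentication is usually the saner use.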

Best,

James DeLuccia

(produced on iPad)

RSA Europe Conference 2009, Day 3 Recap

I attended the session first thing in the morning on malware with Michael Thumann of ERNW GmbH.  He gave quite a bit of technical detail on how to methodically go through the detection and investigation process, highlighting specific tools and tricks.  His slides are available online and provide sufficient detail to be understood out of the presentation context.  Definitely recommend downloading them!  A few nuggets I took away:

Malware best practices:

  • When you have been targeted by a malware attack that is unknown (to anti-virus), and specifically if it is against key persons within the organization, the code should be preserved and reverse engineered by a professional to uncover what details were employed and included in the application, to understand the risk landscape.
  • Most malware (Windows) is designed NOT to run on virtualized VMware systems.  The reason is that these systems are used primarily in anti-malware reverse engineering environments.  So – 2 points: first, virtualized client workstations sound like a reasonable defense for some organizations, and second, be sure to follow de-VMware identification efforts when building such labs (check using tools such as ScoopyNG; a quick sketch of this kind of check follows this list).
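
As a rough illustration of the artifacts that give a VMware-based lab away, here is a minimal sketch of a few well-known indicator checks; the file paths and MAC prefixes are common examples I am assuming for illustration, and dedicated tools such as ScoopyNG go much deeper (SIDT/CPU-level checks):

```python
import os
import uuid

# Well-known VMware indicators (illustrative, not exhaustive)
VMWARE_MAC_PREFIXES = ("00:05:69", "00:0c:29", "00:1c:14", "00:50:56")
VMWARE_GUEST_FILES = (
    r"C:\Windows\System32\drivers\vmmouse.sys",   # Windows guest-tools drivers
    r"C:\Windows\System32\drivers\vmhgfs.sys",
)
LINUX_DMI_PRODUCT = "/sys/class/dmi/id/product_name"  # often "VMware Virtual Platform"

def vmware_indicators() -> list:
    hits = []

    # 1. MAC address OUI registered to VMware
    mac_int = uuid.getnode()
    mac = ":".join(f"{(mac_int >> shift) & 0xff:02x}" for shift in range(40, -1, -8))
    if mac.startswith(VMWARE_MAC_PREFIXES):
        hits.append(f"MAC prefix {mac[:8]}")

    # 2. Guest-tools driver files on Windows
    hits.extend(p for p in VMWARE_GUEST_FILES if os.path.exists(p))

    # 3. DMI product string on Linux
    if os.path.exists(LINUX_DMI_PRODUCT):
        with open(LINUX_DMI_PRODUCT) as fh:
            if "vmware" in fh.read().lower():
                hits.append("DMI product name mentions VMware")

    return hits

if __name__ == "__main__":
    found = vmware_indicators()
    print("VMware indicators found:" if found else "No obvious VMware indicators.", found or "")
```

The defensive takeaway from the session is the inverse: when building an analysis lab, these are exactly the signals to mask.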

Virtualization panel moderated by Becky Bace with Hemma Prafullchandra, Lynn Terwoerds, and John Howie;

The panel was fantastic – best of the entire conference and should have been given another hour or a keynote slot.  Becky gave great intelligence around the challenges and framed the panel perfectly.  There was an immense amount of content covered, and below are my quick notes (my apologies for broken flows in the notes):

Gartner says by 2009: 4M virtual machines

  • 2011: 660M
  • 2012: 50% of all x86 server workload will be running VMs

Nemertes Research:

  • 93% of organizations are deploying server virtualization
  • 78% have virtualization deployed

Morgan Stanley CIO Survey stated that server virtualization management and admin functions for 2009 include: Disaster recovery, High availability, backup, capacity planning, provisioning, live migration, and lifecycle management (greatest to smallest)
Currently, advisories vs. vulnerabilities still show patches leading actual vulnerabilities – i.e., the virtualization companies are fixing things before they become vulnerabilities, by a fair degree.

Questions to consider:

  1. How does virtualization change/impact my security strategy & programs?
  2. Are the mappings (policy, practice, guidance, controls, process) more complex?   How can we deal with it?
  3. Where are the shortfalls, landmines, and career altering opportunities?
  4. What are the unique challenges of compliance in the virtualized infrastructure?
  5. How are the varying compliance frameworks (FISMA, SAS 70, PCI, HIPAA, SOX, etc) affected by virtualization?
  6. How do you demonstrate compliance? e.g., patching or showing isolation at the machine, storage, and network level
  7. How do you deal with the scale and rate of change?  (Asset/patch/update) mgmt tools?
  8. What’s different or the same with operational security?
  9. Traditionally separation was king – separation of duties, network zones, dedicated hardware & storage per purpose and such – now what?
  10. VMs are “data” – they can be moved, copied, forgotten, lost and yet when active they are the machines housing the applications and perhaps even the app data – do you now have to apply all your data security controls too?  What about VMs at rest?

Hemma

  • Due to the rapidity of creation and deletion, it is necessary to put in procedures and processes that SLOW down activities to include review and careful adherence to policies.  The ability to accidentally wipe out a critical machine or to trash a template is quick and final.  Backups do not exist in a shared system / storage environment.
  • Better access control on WHO can make a virtual machine; notification of creation to a control group; people forget that there are licensing implications (risk 1); it is so cheap to create a VM; people are likely to launch VMs to fill the existing capacity of a system; storage management must be managed; VMs are just bits and can fill space rapidly; VMs can be forgotten and must be reviewed; audit VMs on disk and determine use and implementation; VMs allow “Self-Service IT” where business units can reboot and operate systems; businesses will never delete a virtual machine; set policy that retires VMs after X period, then archives, and then deletes after Y period; create policy akin to user access policies (a sketch of such a retirement policy follows these notes).
  • In the virtualization world you have consolidation of 10 physical servers onto 1 server… when you give a cloud guru access, you give them access to all systems, data, and applications.
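
A minimal sketch of the retire-archive-delete policy described in the notes above; the inventory format, tag fields, and the 180-day / 365-day windows are illustrative assumptions, not values from the panel:

```python
from datetime import datetime, timedelta

# Illustrative retention windows (the "X period" and "Y period" from the notes)
ARCHIVE_AFTER = timedelta(days=180)   # retire/archive VMs untouched for 6 months
DELETE_AFTER = timedelta(days=365)    # delete archived VMs after a further year

def lifecycle_action(vm: dict, now: datetime) -> str:
    """Decide what to do with a VM record from a hypothetical inventory entry, e.g.
    {"name": "dev-web-01", "owner": "jsmith", "last_used": datetime(...), "state": "active"}."""
    idle = now - vm["last_used"]
    if vm["state"] == "archived":
        return "delete" if idle > DELETE_AFTER else "keep-archived"
    if idle > ARCHIVE_AFTER:
        return "archive-and-notify-owner"   # notification mirrors "creation to a control group"
    return "keep"

inventory = [
    {"name": "dev-web-01", "owner": "jsmith", "last_used": datetime(2009, 1, 10), "state": "active"},
    {"name": "old-build-vm", "owner": "rlee", "last_used": datetime(2008, 2, 1), "state": "archived"},
]
now = datetime(2009, 10, 22)
for vm in inventory:
    print(vm["name"], "->", lifecycle_action(vm, now))
```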

Lynn

  • People that come from highly regulated industries require you to put your toe in where you can … starting with redundant systems to give flexibility… the lesson learned is managing at a new level of complexity, which brings forward issues that already existed within the organization and are now more emphasized.
    We need some way to manage that complexity.  While there are cost pressures and variables driving us towards virtualization, we need ways to manage these issues.

  • Must show that virtualization will offset the cost in compliance and lower or hold the cost bar – otherwise impossible to get approval to deploy technology.

  • The complexity you are trying to manage includes the risk and audit folks.  This means internal and external bodies.

John Howie

  • Utility computing will exacerbate the problem where instantiating an Amazon instance is preferred over using internal resources.  The challenge of getting users to not go and set up an Amazon server is at the forefront – saying no and imposing penalties are not the right pathway… we must find positive, pro-business rewards for bringing such advantages internal.
  • Finance is saying “take the cheapest route to get this done” … Capex to OpEx is easier to manage financially.
  • Is there a tool / appliance that allows you to do full lifecycle management of machines?  Is there a way to track usage statistics of specific virtual machines?  -> answer: yes, available in the systems.

GREATEST concern of the panel: shadow IT systems are not tracked and do not follow the lifecycle.  The deployment of systems on Amazon is a black hole – especially due to the use of personal credit cards… Is the fix as simple as having the company set up an account for employees to use?

Classic Apply slide:  (download the slides!)

  • Do not IGNORE virtualization – it is happening:
  • Review programs and strategies as they affect virtualization / cloud
  • Existing security & compliance tools and techs will not work….

The other session I enjoyed at the conference was the Show Me the Money: Fraud Management Solutions session with Stuart Okin of Comsec Consulting and Ian Henderson of Advanced Forensics:

Tips:

  • Always conduct forensic analysis
  • Fraudsters historically hide money for later use post-jail time.
  • Consider IT forensics and be aware of disk encryption – especially if it was an IT administrator and the system is part of the corporation.  Basically – be sure of the technology in place and carefully work forward.
  • Synchronize time clocks – including the CCTV and data systems
  • Be aware of the need for quality logs and court-submittable evidence
  • There are many tools that can support the activities of a fraud investigation and daily operations, but the necessity is complete and sufficient logs – meaning the logs have to be captured and they have to come from all the devices that matter.  Scope is key to ensuring full visibility.
  • Make someone responsible for internal fraud
  • Management must institute a whistle-blowing policy w/hotline
  • Be attentive to changes in behavior
  • Obligatory vacation time
  • Ensure job rotation

IT:

  • Audit employee access activation and logging
  • Maintain and enforce strict separation of duties
  • Pilot deterrent technologies (EFM) <– as the use of these will highlight problems in regular operations and help lift the kimono momentarily allowing aggressive improvements.

Overall a great conference.  Much smaller than the San Francisco conference, but the result is better conversations, deeper examination of topics, and superior networking with peers.

Till next year,

James DeLuccia IV