Amazon Cloud Attacked – Lessons Learned

As mentioned in prior posts, Cloud security and addressing the risks that exist (the new risks and the new tools to address them) is fundamental to ensuring successful and beneficial use of Cloud provider environments.  The RSA London conference highlighted several strong documents to help frame best practices for cloud security.  The two most commonly referenced were:

A nice article (October 2009, “Amazon EC2 attack prompts customer support changes“) posted on TechTarget highlights a Denial of Service attack against a website hosted on AWS EC2.  Check out the article here.  Overall the results from this attack were very promising for instilling confidence in Amazon AWS, but they also highlight the duties and next steps in evolving beyond simply “starting instances” on the Cloud.  A few of the key points that jumped from the screen, and should be carefully considered, include:

  • “The problem was that no one could see the complete picture…”  AWS took 18 hours to respond to the attack, primarily because the backend AWS environment (internal IP traffic) was just fine while the outside public-facing IP was bogged down.
  • AWS responded immediately to fix the issue – demonstrating their dedication to ensuring a great operating environment
  • The target organization acknowledged that “they weren’t taking full advantage of AWS’s unique characteristics” to reduce the impact of this type of attack.  Indeed, it is the availability of new enterprise environments and access to a broad set of resources that makes the Cloud such a rich platform.
  • There are ways and means of improving the operational integrity of solutions leveraging the Cloud, but it requires effort.  Peter DeSantis, VP of AWS EC2, states that “customers take proactive measures, such as distributing instances for redundancy and safety. He said that there were distinct advantages in a cloud computing environment that many weren’t aware of or haven’t learned about…We are underplaying tools that are at people’s disposal…”
  • A great set of lessons are further elaborated in the article.  An additional observation – no other customer operating environments were reportedly impacted, which speaks very positively for Amazon’s architecture and current deployment.

Other thoughts and concerns?

Best,

James DeLuccia IV

RSA Europe Conference 2009, Day 3 Recap

I attended the first session of the morning on malware with Michael Thumann of ERNW GmbH.  He gave quite a bit of technical detail on how to methodically go through the detection and investigation process, highlighting specific tools and tricks.  His slides are available online and provide sufficient detail to be understood outside the presentation context.  Definitely recommend downloading them!  A few nuggets I took away:

Malware best practices:

  • When you have been targeted by malware unknown to anti-virus vendors, and specifically if it is aimed at key persons within the organization, the code should be preserved and reverse engineered by a professional to uncover what techniques were employed in the application and to understand the risk landscape.
  • Most (Windows) malware is designed to NOT run on virtualized VMware systems, because these systems are used primarily in anti-malware reverse-engineering environments.  So, 2 points: first, virtualized client workstations sound like a reasonable defense for some organizations; second, be sure to hide VMware-identifying artifacts when building such labs (check using tools such as ScoopyNG).
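To make the VMware-detection point concrete, here is a minimal sketch (in Python, with illustrative helper names of my own) of the kind of artifact checks malware performs to decide it is running in a VMware guest; a lab builder needs to hide exactly these indicators:

```python
# Sketch of common VMware-guest indicators. The MAC prefixes and DMI
# strings below are well-known VMware artifacts; the helper itself is
# illustrative, not taken from any specific tool.

# MAC address OUI prefixes registered to VMware.
VMWARE_MAC_PREFIXES = ("00:05:69", "00:0c:29", "00:1c:14", "00:50:56")

# Strings commonly found in DMI/BIOS data of VMware guests.
VMWARE_DMI_MARKERS = ("vmware", "virtual platform")


def looks_like_vmware(dmi_product: str, mac_address: str) -> bool:
    """Return True if the DMI product string or the NIC's MAC address
    suggests the machine is a VMware guest."""
    if any(marker in dmi_product.lower() for marker in VMWARE_DMI_MARKERS):
        return True
    return mac_address.lower().startswith(VMWARE_MAC_PREFIXES)


# Example: a typical VMware guest vs. physical hardware.
print(looks_like_vmware("VMware Virtual Platform", "00:0c:29:ab:cd:ef"))  # True
print(looks_like_vmware("ThinkPad T400", "00:1f:16:12:34:56"))            # False
```

Real malware reads these values from the OS (DMI tables, NIC configuration) rather than taking them as parameters; the pure-function form here just keeps the checks visible.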

Virtualization panel moderated by Becky Bace with Hemma Prafullchandra, Lynn Terwoerds, and John Howie:

The panel was fantastic – best of the entire conference and should have been given another hour or a keynote slot.  Becky gave great intelligence around the challenges and framed the panel perfectly.  There was an immense amount of content covered, and below are my quick notes (my apologies for broken flows in the notes):

Gartner projections:

  • 2009: 4M virtual machines
  • 2011: 660M
  • 2012: 50% of all x86 server workloads will be running in VMs

Nemertes Research:

  • 93% of organizations are deploying server virtualization
  • 78% have virtualization deployed

Morgan Stanley CIO Survey stated that server virtualization management and admin functions for 2009 include (greatest to smallest): disaster recovery, high availability, backup, capacity planning, provisioning, live migration, and lifecycle management.
Comparing advisories against vulnerabilities, patches are still leading, i.e., the virtualization companies are fixing issues before they become actual vulnerabilities by a fair degree.

Questions to consider:

  1. How does virtualization change/impact my security strategy & programs?
  2. Are the mappings (policy, practice, guidance, controls, process) more complex?   How can we deal with it?
  3. Where are the shortfalls, landmines, and career altering opportunities?
  4. What are the unique challenges of compliance in the virtualized infrastructure?
  5. How are the varying compliance frameworks (FISMA, SAS 70, PCI, HIPAA, SOX, etc) affected by virtualization?
  6. How do you demonstrate compliance? e.g., patching or showing isolation at the machine, storage, and network level
  7. How do you deal with the scale and rate of change?  (Asset/patch/update) mgmt tools?
  8. What’s different or the same with operational security?
  9. Traditionally separation was king – separation of duties, network zones, dedicated hardware & storage per purpose and such – now what?
  10. VMs are “data” – they can be moved, copied, forgotten, lost and yet when active they are the machines housing the applications and perhaps even the app data – do you now have to apply all your data security controls too?  What about VMs at rest?

Hemma

  • Due to the rapidity of creation and deletion, it is necessary to put in place procedures and processes that SLOW down activities to include review and careful adherence to policies.  Accidentally wiping out a critical machine or trashing a template is quick and final, and backups may not exist in a shared system / storage environment.
  • Better access control on WHO can make a virtual machine; notification of creation to a control group; people forget that there are licensing implications (risk 1); it is so cheap to create a VM; people are likely to launch VMs to fill the existing capacity of a system; storage must be managed, as VMs are just bits and can fill space rapidly; VMs can be forgotten and must be reviewed; audit VMs on disk and determine use and implementation; VMs allow “Self-Service IT” where business units can reboot and operate systems; businesses will never delete a virtual machine; create policy that retires VMs after X period, then archives, then deletes after Y period; create policy akin to user access policies.
  • In the virtualized world you have consolidation of 10 physical servers onto 1 server… when you give a cloud guru access, you give them access to all systems, data, and applications.
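The retire-after-X / delete-after-Y policy Hemma describes can be sketched as a simple classifier over each VM's idle time. The thresholds and function names below are illustrative assumptions; a real policy would come from the organization's retention standards:

```python
from datetime import date, timedelta

# Illustrative policy thresholds (the "X" and "Y" from the panel notes).
RETIRE_AFTER = timedelta(days=90)   # X: archive VMs idle this long
DELETE_AFTER = timedelta(days=365)  # Y: delete archived VMs this old


def vm_lifecycle_state(last_used: date, today: date) -> str:
    """Classify a VM as active, archive, or delete based on idle time."""
    idle = today - last_used
    if idle >= DELETE_AFTER:
        return "delete"
    if idle >= RETIRE_AFTER:
        return "archive"
    return "active"


today = date(2009, 10, 1)
print(vm_lifecycle_state(date(2009, 9, 15), today))  # active
print(vm_lifecycle_state(date(2009, 5, 1), today))   # archive
print(vm_lifecycle_state(date(2008, 6, 1), today))   # delete
```

Running such a classifier against an inventory of VM files on disk is one way to operationalize the "audit VMs on disk and determine use" point above.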

Lynn

  • People that come from highly regulated industries require you to put your toe in where you can, starting with redundant systems to give flexibility.  The lesson learned is managing at a new level of complexity: virtualization brings forward issues that already existed within the organization and emphasizes them.  We need some way to manage that complexity; while there are cost pressures and variables driving us towards virtualization, we need ways to manage these issues.

  • Must show that virtualization will offset the cost in compliance and lower or hold the cost bar – otherwise impossible to get approval to deploy technology.

  • The complexity you are trying to manage also includes the risk and audit folks.  This means internal and external bodies.

John Howie

  • Utility computing will exacerbate the problem: instantiating an Amazon instance is preferred over using internal resources.  The challenge of getting users not to go and set up an Amazon server on their own is at the forefront; saying no and imposing penalties are not the right pathway.  We must find positive, pro-business rewards for bringing such advantages internal.
  • Finance is saying “take the cheapest route to get this done” … Capex to OpEx is easier to manage financially.
  • Is there a tool / appliance that allows you to do full lifecycle management of machines?  Is there a way to track usage statistics of specific virtual machines?  -> Answer: yes, available in the systems.

GREATEST concern of the panel: shadow IT systems are not tracked and do not follow a lifecycle.  Deployments on Amazon are a black hole, especially because employees can pay with personal credit cards…  Is the fix as simple as having the company set up an official account for employees to use?

Classic Apply slide:  (download the slides!)

  • Do not IGNORE virtualization – it is happening
  • Review programs and strategies as they affect virtualization / cloud
  • Existing security & compliance tools and technologies will not work as-is…

The other session I enjoyed at the conference was Show Me the Money: Fraud Management Solutions, with Stuart Okin of Comsec Consulting and Ian Henderson of Advanced Forensics:

Tips:

  • Always conduct forensic analysis
  • Fraudsters historically hide money for later use post-jail time.
  • Consider IT forensics and be aware of disk encryption – especially if it was an IT administrator and the system is part of the corporation.  Basically – be sure of the technology in place and carefully work forward.
  • Synchronize time clocks – including the CCTV and data systems
  • Be aware of the need for quality logs and court-admissible evidence
  • There are many tools that can support the activities of a fraud investigation and daily operations, but the necessity is complete and sufficient logs.  Meaning that the logs have to be captured and they have come from all the devices that matter.  Scope is key to ensuring full visibility.
  • Make someone responsible for internal fraud
  • Management must instate a whistle-blowing policy w/hotline
  • Be attentive to changes in behavior
  • Obligatory vacation time
  • Ensure job rotation
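The "complete and sufficient logs" tip above is auditable in code: given the set of in-scope devices and the devices that actually delivered logs in a collection window, report the gaps. This is a hypothetical sketch with illustrative device names, not a tool from the session:

```python
# A fraud investigation needs logs from every device that matters;
# any in-scope device that sent nothing is a blind spot.

def log_coverage_gaps(in_scope: set[str], reported: set[str]) -> set[str]:
    """Return the devices that are in scope but delivered no logs."""
    return in_scope - reported


scope = {"fw-edge", "cctv-dvr", "erp-app", "domain-controller"}
seen = {"fw-edge", "erp-app"}
print(sorted(log_coverage_gaps(scope, seen)))  # ['cctv-dvr', 'domain-controller']
```

Running a check like this daily, rather than discovering gaps mid-investigation, is the practical meaning of "scope is key to ensuring full visibility."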

IT:

  • Audit employee access activation and logging
  • Maintain and enforce strict separation of duties
  • Pilot deterrent technologies (EFM) <– the use of these will highlight problems in regular operations and help lift the kimono momentarily, allowing aggressive improvements.

Overall a great conference.  Much smaller than the San Francisco conference, but the result is better conversations, deeper examination of topics, and superior networking with peers.

Till next year,

James DeLuccia IV

RSA London 2009, Day 2

Day 2 started off early and I jumped right into any advanced technical topics I could find.  Overall the sessions were very good, with a single disappointment.  My apologies for odd English and incomplete notes… pretty tough recalling all the points that jumped up during each session.  The great news is that these presentations will be available online!

Nimrod Vax of CA spoke on “Reversing of the Roles in IT – IAM Aspects of Cloud Computing”:

  • Key issue in virtualization is that audit and security requirements must consider virtual “images” as they are files and can be moved, copied, and such very easily.  In addition, a balance must be established to ensure that the inherent flexibility of virtual machines of being transportable is not handicapped (as this is a key value point).
  • Copying a virtual machine (file / directory) is equal to stealing a server from the server room.
  • Virtual machine storage can be mounted while they are operating and data can be modified / accessed during operation

Example:

A developer took a virtual machine that ran the check transactions for an organization, which was having errors, into a QA environment for testing, and placed it onto a clean QA server.  He then ran tests against the system.

What happened is that the system didn’t “know” it was in the QA environment, and began processing the transactions that were queued in the system.  The result was that the system completed its open tasks, and these were duplicates of the ones being run by the other ‘original’ image running the checks.
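One mitigation for the scenario above is an environment guard: the service refuses to drain its transaction queue unless the environment marker it reads at runtime matches the environment it was certified for. A minimal sketch, with hypothetical names (`DEPLOY_ENV` is an assumed marker set by the provisioning system, not anything from the presentation):

```python
import os


def may_process_transactions(certified_env: str) -> bool:
    """Only process queued work when the runtime environment marker
    matches the environment this workload was certified for."""
    runtime_env = os.environ.get("DEPLOY_ENV", "unknown")
    return runtime_env == certified_env


# A production VM image cloned into QA fails the guard and stays idle.
os.environ["DEPLOY_ENV"] = "qa"
print(may_process_transactions("production"))  # False
print(may_process_transactions("qa"))          # True
```

The key design point is that the marker lives outside the VM image (host metadata, injected config), so copying the image cannot copy its authorization to process live transactions.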

CLOUD Concerns:

Reverting back from snapshots can have the following effects:

  • Audit events are LOST
  • Security Configurations are reverted
  • Security Policies are reverted
  • **Snapshots are brilliant, but their use has massive impacts**

Users and privileged users exist, but virtual environments introduce a new level of problems through a new privilege layer.  This new layer resides in the hypervisor / underlying system (depending on the virtualization model).

One host running multiple virtual machines = Critical infrastructure

Critical infrastructure = Business impact

Business impact = Compliance requirements

  • Cloud can be considered a new IT service that is commoditized and cheap, resulting in a lower barrier to entry, lower switching costs, and lower visibility and control.
  • How do you know the cloud environment provider is able to satisfy the concerns that exist within the Cloud?
  • Less than 20% state they are interested in deploying cloud technology, and the majority state they are going to put it on internally managed systems.
  • Primary concern is security and control.
  • Assurance related to performance, trust, and reliability are primary drivers away from greater adoption.
  • Barriers to Enterprise adoption include – Security & control and Protecting sensitive information.

IT teams are becoming the Auditors of the Cloud providers – a role that carries tremendous risks for both the enterprise and the Cloud experiment.

Challenges with Auditing:

  • Regulatory compliance is the major driver for Identity Management
  • Auditors are not virtualization savvy
  • virtualization audit issues have not been flagged
  • What will drive priorities? – Education; compliance & regulatory pressures; public exposure.
  • Will require adjusted controls for separate management, and control

Cloud – Due Diligence:

  • Attain assurance from the Cloud provider
  • Establish visibility and accountability through performance and service agreements (note this does not transfer the duty)
  • Demand full privileged user management
  • Restrict privileged access (provide process for allowing elevated and policy based activities)
  • Ensure accountability
  • Restrict access to logs
  • Restrict access to virtual environment (resources)

Central Access Policy Management:

  • Enforce the policy across all VMs and Virtualization platform
  • Centralization – do not rely on local policies
  • Enforce policy change control
  • Manage deviations from the policy
  • Report on Policy Compliance

Complete and Secure Auditing:

  • Monitor administrative activity (including impersonation)
  • Monitor all access to virtualization resources
  • Centralize audit logs
  • Notify on significant events
  • Integrate with central SIEM systems (including triggers to provide automation and actions seamlessly)
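The "notify on significant events" and SIEM-integration points can be sketched as a simple escalation filter: privileged-user actions and a defined set of high-impact virtualization events go to the central SIEM, while routine events stay in the local audit log. The event fields and action names here are illustrative assumptions:

```python
# High-impact virtualization actions worth escalating (illustrative set;
# note snapshot reverts and VM copies, per the risks discussed above).
SIGNIFICANT_ACTIONS = {"snapshot_revert", "vm_copy", "priv_escalation"}


def escalate_to_siem(event: dict) -> bool:
    """Escalate any privileged-user action and any action in the
    significant set; everything else stays in the local audit log."""
    return event.get("privileged", False) or event["action"] in SIGNIFICANT_ACTIONS


print(escalate_to_siem({"action": "vm_copy", "privileged": False}))      # True
print(escalate_to_siem({"action": "console_login", "privileged": True})) # True
print(escalate_to_siem({"action": "heartbeat", "privileged": False}))    # False
```

In a real deployment this filter would sit in front of the log shipper, so the SIEM receives a curated significant-event stream rather than raw volume.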

Conclusions:

  • Security and Controls are the Number 1 inhibitors for cloud adoption
  • Cloud providers need to reassure their enterprise customers
  • Automation is imperative
  • Security will become a cloud differentiation
  • The IT roles are reversing

*Nimrod Vax gave a nice presentation.  He started off very basic, focusing on definitions, but the middle and end focused nicely on specific risks related to Cloud and Virtualization.  I felt the session sometimes conflated the two environments – Cloud vs. Virtualization – and subsequently the risks and controls.  Overall very good and well worth it.  Well done.

Hemma Prafullchandra of HyTrust led a great discussion on an Assurance Framework for Internal and External Clouds.

A lively discussion with 8 people was held in the main conference area on Cloud computing.  Specifically, we discussed the risks and levels of assurance that can be achieved in the market as it is today.  In addition to myself, there were Becky Bace (renowned technology expert), Tim Minster (Cloud aficionado), Lynn (an executive chewing on this Cloud thing in ‘real life’), and several others whom I did not recognize.

Key points that I captured:

  • Assumptions must be challenged regarding Cloud systems
  • Premises of security
  • Premises of what is prudent in the environment
  • The intent of using the Cloud – i.e., is this a pet project business case or the payroll systems?
  • Clouds today should not house PII, PHI, Regulated, or Confidential information.
  • When putting Cloud technology to work the parties must understand what they are receiving from each specific Cloud provider – each one provides different levels of granularity and assurance through service level agreements and access to 3rd party audit reports.
  • Consider Cloud as a Utility – such as power.  If you want to protect your assets in your home you buy a security strip and UPS.  Same should be considered on low cost Cloud-Utilities, in that deployment of end-level encryption, firewalls, IDS, and proper system level security are required.  These additions, which would exist regardless of the physical locality (meaning Data Center or Cloud, they are needed regardless for certain levels of assurance), provide degrees of assurance based on the intent of the organization.
  • Translating the Cloud architecture to the ‘Old Data Center’ model is key to providing the clean mapping of Common Controls and regulations to these systems.
  • These Cloud Systems should be managed with the understanding of the new risks as a result of being multi-tenant and hosted.
  • Data management and classification are still required in a Cloud environment, and must be considered for such things as cross-border data transfers and long-term management.
  • Action items – support Cloud initiatives; understand the risks to such operations; put sufficient controls in place; and communicate these to the auditors and management accurately.

Ben Rothke (Sr. Security Consultant with BT) spoke on Establishing a SOC.

Great session on SOCs – best practices; setting one up; auditing 3rd parties; and ensuring long-term success.  I didn’t see the entire session, as I was in another session that I left early, but I found Ben’s very helpful.  I would recommend downloading his presentation for auditors or 3rd parties vetting a SOC, as he gives tremendous detail on how these facilities should be run, how to measure them, and how to make the business case for setting them up.

Hugely valuable session.

The evening discussions were with 13 unique individuals from Visa Europe, KOBIL, People Security, and several other organizations.  The discussions covered an immense range of topics and were hugely valuable.  Thanks to everyone for joining and sharing their great experiences!

Day 2 was fantastic.  Mid-point opinion of RSA Europe = Above Average (RSA SFO being average), but the sessions need to be increased in complexity… Consensus is that the advanced sessions could be more advanced, with fewer basic definitions and high-level concepts.  Otherwise brilliant.

Best,

James DeLuccia IV

RSA Europe 2009, London – Day 1

Below are my notes and takeaways from the first day of RSA London.  The day started off with plenty of cinematics and a nature-themed opening session by the heads of RSA.  The location is fantastic – the conference hotel is very close to good trains in London and there are plenty of things around.  While the hotel is a bit of a maze, I am told it is a fair trade-off from prior years.

Arthur Coviello opened the conference and gave a great deal of specifics regarding the industry.  He actually gave a fantastic introduction to my session materials, and that certainly helped the delivery of my own session.  (Speaking of my session – I will be composing my session into a paper and posting it online to gather opinion and input).

Arthur’s nuggets:

  • 15 billion devices communicating – according to John Gantz, “The Embedded Internet: Methodology and Findings”
  • By 2011, 75% of the US workforce will be mobile (BNET)
  • Facebook will exceed 300M users by end of 2009.
  • More information has been created in this decade than “all time”?!
  • All content is created digitally at inception or within 3 months.
  • Physical control of IT assets are loosening
  • Growth and adoption of IT Cloud services will reach 35 billion Euros, capturing 25% of ALL IT spending
  • Today “high value” is in taming the complexities of information and this is the challenge of IT organizations.
  • The challenge of the security industry is to enable these ubiquitous technologies and enable them in a boundaryless IT environment.

Hugh Thompson of People Security:

Hugh gave a great session that was both animated and well structured.  His presentation reflected a past project where I created algorithms and technology to conduct social cyber forensics using similar mechanisms.  He gave a good number of points regarding Data Gateways, and I hope that his presentation will be available to help disseminate the definitions and risks.

He highlighted the (somewhat obvious) fact that people are posting data indiscriminately online through social sites.  There will be a fallout from the information posted online.  There should be some kind of education for people on “what to post” online.

A takeaway from the Meetup dialogs: if the social norm is to post all this information, then the minority is to not.  Therefore, is it right to educate the masses, or to secure them on their behalf?  Interesting implications for security authentication and authorization systems follow.

Guy Bunker’s session:

Highlighted the Jericho Forum 2009 Cube and provided very interesting and impactful questions that every organization should know about – need to see if I can find a copy of that slide.

In an internal infrastructure situation, if the system-to-system controls fail it is generally OK – an audit deficiency or a weekend of work is in order to rectify the error.  If such errors or security failures occur at the Cloud administrator, the exposure has an impact; the severity depends upon the breach, the information, and the audience.

“Compliance is the dark side” … hmmm

  • Moving to the cloud doesn’t remove the duty of complying with regulations
  • Moving to short-term contracts with Cloud providers (i.e., spinning up an environment for XYZ project over a measured time frame) introduces requirements to demonstrate that these environments are appropriately secure and compliant.  What do the onboarding, management, measurement, and offboarding processes look like in these environments?
  • A caution (especially within the EU) is the exporting of data to these providers: do they keep possession in properly approved regions and systems?
  • Data Migration:
    • Moving data back and forth between internal and external clouds can be difficult
    • Moving the data if the systems are proprietary requires API and conversion efforts
    • Restoration of data from backups based on prior / old applications creates a challenge when the system has been wholly updated and older versions are no longer supported (think JP Morgan being required to restore email backups from internally managed systems that are no longer available from the vendor).
  • “Questions to Ask Part 1 and Part 2” by Guy Bunker are a nice breakdown of sanity checks when using cloud providers
  • A trend in new threats is people taking entire Virtual Machine images, not just data.  This captures not only the information but, more importantly, the HOW and MEANS of delivering the exact same service.

The end of the evening included the vendor booths, which were significantly smaller than at the April conference, but better due to the quieter and more productive conversations I had and observed.  The Blogger Meetup was also Tuesday, and it was great.  Plenty of very smart individuals, and the conversations were all geek, security, compliance, and extensions of sessions conducted during the day.  I actually hope that other such evening events will come together to take advantage of the presence of so many experts in their fields.

Day 1 was certainly a success.  The tremendous focus on Cloud, Social Media Risks, Identity Fraud, and audit was rich and worthwhile.  A personal takeaway I have is how applicable my hands-on work around securing auditable Cloud environments is for businesses.

Other sources for information:

More to come…

Kind regards,

James DeLuccia IV

Wireless Insecurity … a near constant state: RFID, Bluetooth, 802.11

The securing of information assets is core to ensuring operational integrity for every business, and is supported by security and compliance safeguards.  The near-constant stream of innovation over the past 10 years has provided near-ubiquitous wire(less) connectivity to an abundant number of devices.  Matched equally to this innovation and connectivity is the transportability of data.  Of course the data must be transported and portable; however, it must be done in a manner that supports the organization’s entire strategic objectives.

The reality of wireless technology has reached a crescendo with regard to WiFi / 802.11 within the payment card industry, where encryption and two-factor authentication were required to leverage these technologies.  Due (presumably) to a number of data breaches, specific wireless technology is being banned from the payment card network.  Guidance on the wireless guidelines may be found here.

These lessons – that wireless technology can be eavesdropped on; that the data can float literally anywhere (for confirmation, turn on your wireless network card on an airplane and fire up a DHCP gateway application); that the only way to secure it is through strong crypto and TWO-factor authentication – all seem clear, but the last one should be elaborated on to understand the risks of Bluetooth and RFID.

Two-factor authentication, beyond ensuring the identity of the individual, provides a far more important safeguard: confirmation that the user intended to make a connection and went through the handshake process.  This does not exist in these other technologies, and that creates a great deal of risk for the users of these systems.

To provide specific context on why Bluetooth and RFID are risky business without proper safeguards, consider the following:

  • Bruce Schneier’s post on how passport RFID is dangerous and susceptible to attacks.  Here is a Wired article with more details.
  • At DefCon, radio scanners “read” and “recorded” the information off of security badges from the attendees.  This is the most security-conscious / paranoid group that you can assemble, and the scanner still caught unsecured badges.
  • When attending, it is near unanimous that all wireless radios should be disabled
  • The data on these RFID type devices contains things as simple as identifiers to full names and departments.

Disable all wireless radios (the iPhone was the focus of the post, but this applies to all such capable devices) prior to getting on a plane TO Black Hat / DefCon.  The reason is simple: it is near certain that someone is running a scanner.

In the end these technologies do provide essential functions, but they should be cautiously deployed where security can be ensured and is tested properly.  Care should be given to the information applied to these transmitting devices.

NIST has a nice document here (SP 800-98).

Other recommendations?

James DeLuccia