Tag Archives: botnet

If I were Evil Series: Creating a malware pandemic through USB charging stations

I would infect the USB charging stations at airports and in first-class lounges with malware to take over laptops and smart devices: iPads, iPhones, and the latest Samsung devices.  I would do this device by device – much like spreading a virus, as demonstrated with pacemakers (Jonathan Brossard did a proof of concept of infecting pacemakers simply through proximity to one another).  The goal would be simply to infiltrate these systems and devices for exfiltration and espionage.

Of course, one could also do this at the hardware level by poisoning the chipsets coming out of China, as was done with the missile guidance chips…

If I were Evil that is …

Too many compromises …

There has been a large volume of public disclosures over the past several months involving sophisticated (and some not-so-sophisticated) attacks in which mature defenses were breached.  The most visible ones are not at small startup companies that were lean on defense; instead they are being perpetrated against organizations with the resources and skills to muster a good defense.

The lesson of these public disclosures (note: these are the public ones – more are likely never publicly released) is the need to reflect and adjust.  I made a comment in February regarding the need to continue building and evolving security through risk assessments.  Information protection is about conditioning, not about the latest fad.  Certainly the latest fad requires attention, but so do historic attack methods.  So, what is interesting about the recent compromises?

First off, sharing and depth of defense.  Barracuda Labs had a breach, and they published very specifically what happened.  What is positive about this breakdown is that they articulate precisely what was taken, and how they are CERTAIN that was it.  Their write-up and analysis contain no legal or marketing language anywhere, and therefore are actionable.

Amazon AWS…  this affected many more people, and for many days the impact was unknown.  This was more an operational breakdown for businesses than a failure of an information protection process.  There is a decent timeline of the web services’ updates here, and from it one can tell that Amazon took great effort to address the challenges.  Unfortunately, for businesses trying to take action there was no clear point at which clients using Amazon should have activated their disaster recovery plans.  Given that services were coming online incrementally, I understand that decision is challenging.  In addition, the plethora of terminology probably made every system administrator and CIO review their deployment maps more than once.  The takeaways here: understand precisely what services are being provided and where (as much as you are able), and have your own plan that is based on your own customers’ expectations.  This is leading practice, but it can get lost in these serious moments.

Admittedly the Amazon example is not a breach – though think of the potential if it had been an attack on the infrastructure of Amazon’s cloud environment, and how extremely valuable that landfall would be to criminals and nation-states.

Then there is Sony’s network, which was attacked and required a complete shutdown of the service to extricate the attackers and stabilize the environment.  There are three separate posts by Sony on their blog here, here, and here.  This attack highlighted that, regardless of the business model, sensitive data is sensitive data (PCI).  In addition, an idea I have been discussing with colleagues is the potential for bot armies.  Consider that PlayStation consoles have very strong computing platforms built in and are used by graphics companies and the military for high-end computing (1,716 linked together make one supercomputer!).  Also consider that at least 77 million devices connect at some point to the PlayStation Network, receive updates, transmit data at high speeds, and are likely not disconnected after a gaming session.  The point being: a single hardware platform running essentially identical firmware, connected over high-speed broadband… sounds like a perfect bot army to me.

There has been much written on the events above, and I encourage all to deeply understand how they were executed.  My intention here, as always, is to provide a different context for these events, their meaning, and perhaps some actions appropriate for you.

Best,

James DeLuccia

Security and Privacy on mobile devices are not always equal

I spend a great deal of time on global security programs where the focus is beyond the bits and bytes (finally) and includes the people and process side of the equation surrounding information security.  One may argue this has always existed, judging by the regulations and standards we have built our compliance programs around.  I would politely highlight that this is not always the case, nor done to a sufficient level.

A common challenge in the security world is that a lot of bad can and does happen online.  The only difference in what scares people from one day to the next lies in what is being focused on.  Exposed emails are nothing new; neither are financials leaked on torrents, nor the acronym APT – none are as new as they appear.  What is substantially new is the emerging device universe and the consumerization of tools beyond and into the enterprise.

These devices not only introduce entirely new platforms with their own application risks, but the manner of handling the traffic and the data itself is also different.  An example to clarify:

In the late 1990s, web browsers and websites allowed fields that went unchecked by the server – why?  Well, there was no reason anyone would send a bazillion letter ‘A’s, or would type in SQL statements that might interfere with the database backend, right?
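
Those unchecked assumptions are exactly what buffer overflows and SQL injection exploited.  A minimal illustration of the SQL case, using Python’s built-in sqlite3 module (the table and field are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # an attacker-supplied "name" field

# Vulnerable: splices the field straight into the SQL string.
query = "SELECT role FROM users WHERE name = '%s'" % user_input
print(conn.execute(query).fetchall())   # returns every row in the table

# Safer: a parameterized query treats the input as data, never as SQL.
print(conn.execute("SELECT role FROM users WHERE name = ?",
                   (user_input,)).fetchall())   # returns nothing
```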

Switch to 2011, and with smartphone devices we have new platforms and a model where assumptions about what users will do are being built into the applications and interfaces.  Granted, we are wiser today on these points, but with each release of new code and applications the level of complexity increases rapidly on every device and technology ecosystem.  The consequences of these individual applications interacting on the same device have yet to be realized.

Another point of view: what happens with the data being handled by the service provider?  As organizations switched to mobile sites, third-party systems were used, but those are being pushed aside by custom-built and iOS-style applications.  As was highlighted in a nice little post by Dan Wallach – not all communication settings are adhered to for every device and every channel (his example: Android and Facebook).
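
The client-side defense is to refuse to fall back to cleartext and to verify the server certificate on every channel.  A minimal sketch using the Python requests library (the endpoint URL is hypothetical):

```python
import requests

API = "https://api.example.com/messages"  # hypothetical endpoint

def fetch_messages(token: str) -> dict:
    # Never silently construct or accept a plain-http URL.
    if not API.startswith("https://"):
        raise RuntimeError("refusing non-TLS endpoint")
    # verify=True (the default) validates the server certificate chain;
    # do not disable it to "fix" connection errors.
    resp = requests.get(API, headers={"Authorization": "Bearer " + token},
                        timeout=10, verify=True)
    resp.raise_for_status()
    return resp.json()
```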

There is an immense opportunity to reduce current and future difficulties by reflecting on the past and putting the correct safeguards in place today – completely.  Coverage is key – without it, we are just plugging some holes and hoping attackers don’t look at the rest.

A bit broad, but I look forward to challenges and alternate perspectives,

James DeLuccia IV

RSA SFO 2011 is done

This week has been a blitz of sessions, one-on-one deep discussions, and random swarms of passionate people descending on any table to discuss all things information security.  The sessions were good, the products somewhat interesting, and the networking was fantastic.  I did my best to tweet as much as I could from sessions throughout the conference, but there is a theme I saw and wanted to share for debate and consumption.

The risks are severe, and quite frankly the offensive capability of attackers (individuals, attack teams like Anonymous, and nation-state sponsored groups) is excellent.  Organizations are suffering exfiltration of data at an alarming scale, and their management of these threats is immature and ad hoc.

From a single vendor this would come across as F.U.D., but it was expressed by the Director of the NSA, and at nearly every session and keynote.

So what does this mean?  Well, much like at RSA, there is a need to translate and form an opinion – what is lovingly called the ‘Apply Slide’.  Below are the points that resonated with me, in no particular order:

  • There is a need for a more meaningful appreciation of what is valuable to every organization.  This discussion needs to happen with management, legal, risk management, internal audit, and technology leadership.  A primary purpose of bringing these individuals together is to ascertain what is valuable and in what forms it may exist throughout the business.
  • A sophisticated incident handling process is needed.  This is a topic highlighted by the likes of Google and signals intelligence experts.  The point, though, was I feel lost on the majority of attendees.  The need is not simply to have trained team members with tools to be activated in the case of a breach.  That is needed, but there is a much deeper need:
    • The maturing and sustaining of a firmwide, global effort to respond to every infection / malware instance / behavioral anomaly.  Here is the thesis: today most of these are addressed through a help desk function that follows a decade-old process of risk identification and remediation.  The common response is to apply patches and have the behavior cease (removal of the symptom is considered a “fix”).  It is widely accepted that attackers and infection tools are highly sophisticated, and removal is neither a linear path nor a guarantee of a “clean” system.  The statistics reinforce this fact when we look at the effectiveness of anti-virus tools, the amount of malware that is unique and unknown, and the percentage of exfiltration events that result from this code.  Finally, there is a stigma in many organizations to ‘activating an incident response team’.  Together these create an atmosphere where keyloggers / botnets / Stuxnet-class tools / and similar malware can infect, avoid destruction, deepen infiltration, and intelligently exfiltrate desired data.  (A minimal triage sketch follows this list.)
  • Cloud was a very popular topic all week, and despite my professional annoyance at the media focusing on a single aspect of information technology, one simple fact remains true: these sessions were packed.  The information provided was not clear, and visibility remains beyond immediate grasp.  So my response here is… these sessions were packed and the term is everywhere because we have not yet reached a state of understanding.  I foresee this will be a long and great area to continue developing.
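
To make the help-desk-versus-incident-response contrast concrete, here is a minimal triage sketch in the spirit of that thesis: rather than closing a ticket when the symptom stops, the suspicious file is checked against known-bad indicators and escalated on any hit.  The indicator set is a hypothetical placeholder for a real threat-intelligence feed:

```python
import hashlib
from pathlib import Path

# Placeholder indicators -- in practice these come from a
# threat-intelligence feed, not a hard-coded set.
KNOWN_BAD_SHA256 = {"0" * 64}

def triage(path: Path) -> str:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in KNOWN_BAD_SHA256:
        # Do NOT just delete the file and close the ticket: removal is
        # neither linear nor a guarantee of a "clean" system.
        return "escalate-to-incident-response"
    return "continue-monitoring"
```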

Thanks to everyone, and I hope to see you again – soon!

James DeLuccia

End to End Resilience .. ENISA.. Cloud..

The beautiful opportunity with distributed computing, globalization, and cloud services is the ability to scale and run complex environments around the globe.  This is balanced, of course, by the need for assurance that operations are occurring as you expect, are managed properly, and are protected to secure the competitive intelligence of the business.  Especially interesting has been the movement to centralize a company’s data centers into super data centers.

Together these points raise, and are possibly met by, the ENISA (European Network and Information Security Agency) report that highlights the decisive factors of an end-to-end resilient network.  The report can be found directly at this link.

An interesting challenge, highlighted by what appears to be Egypt’s government shutting down the internet, is how these distributed cloud systems are managed if they are cut off from their administrative consoles.  A consideration for all businesses, and perhaps an appropriate addition to business continuity and similar risk planning documents, is the following:

Can the business’ systems function autonomously when the primary controls and administrative connections are lost?
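
One way to make that question concrete is to design systems that cache a last-known-good configuration locally and keep operating when the control plane is unreachable.  A minimal sketch, assuming a hypothetical admin-console URL and cache path:

```python
import json
import urllib.request
from pathlib import Path

CONTROL_PLANE = "https://admin.example.com/policy"   # hypothetical console
CACHE = Path("/var/lib/app/last_known_good.json")    # hypothetical path

def load_policy() -> dict:
    """Fetch policy from the admin console; fall back to the local cache."""
    try:
        with urllib.request.urlopen(CONTROL_PLANE, timeout=5) as resp:
            policy = json.load(resp)
        CACHE.write_text(json.dumps(policy))  # refresh last-known-good
        return policy
    except OSError:
        # Console unreachable (network partition, country-level cutoff):
        # run autonomously on the cached policy instead of failing.
        return json.loads(CACHE.read_text())
```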

Perhaps a lesson could be learned from the masterful administration of the botnet armies that leverage dark and shifting network clouds.

I would be interested in the implications that arise as a result of this disconnection of a country, and the potential for other countries to follow (whether due to more direct action, or as an indirect effort to further contain internet traffic).

Come join me and others in San Francisco, where I will be speaking at RSA.  Stop by… let’s catch up… and I am looking forward to great debates (as always).

James DeLuccia

RSA London 2009, Day 2

Day 2 started off early, and I jumped right into any advanced technical topics I could find.  Overall the sessions were very good, with a single disappointment.  My apologies for odd English and incomplete notes… it is pretty tough recalling all the points that came up during each session.  The great news is that these presentations will be available online!

Nimrod Vax of CA spoke on “Reversing of the Roles in IT – IAM Aspects of Cloud Computing”:

  • A key issue in virtualization is that audit and security requirements must consider virtual “images” as files, since they can be moved, copied, and so on very easily.  In addition, a balance must be established to ensure that the inherent flexibility of virtual machines being transportable is not handicapped (as this is a key value point).
  • Copying a virtual machine (file / directory) is equal to stealing a server from the server room.
  • Virtual machine storage can be mounted while the machines are operating, and data can be modified / accessed during operation.

Example:

A developer took a virtual machine that ran the check transactions for an organization – one that was having errors – into a QA environment for testing, and placed it onto a clean QA server.  He then ran tests against the system.

What happened is that the system didn’t “know” it was in the QA environment and began processing the transactions that were queued in the system.  The result was that the system completed its open tasks – and these were duplicates of the ones being run by the ‘original’ image running the checks.
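
The lesson generalizes: workloads with side effects should verify which environment they are in before acting.  A minimal sketch of such a guard (the environment variable and settlement call are hypothetical):

```python
import os

def assert_production() -> None:
    """Refuse to process live work unless this image is explicitly
    marked production -- a VM cloned into QA stops right here."""
    env = os.environ.get("DEPLOY_ENV", "unknown")
    if env != "production":
        raise RuntimeError("refusing to process live queue in '%s'" % env)

def process_queue(transactions: list) -> None:
    assert_production()
    for txn in transactions:
        print("settling", txn)  # stand-in for the real settlement call
```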

Cloud Concerns:

Reverting from snapshots can have the following effects:

  • Audit events are LOST
  • Security Configurations are reverted
  • Security Policies are reverted
  • **Snapshots are brilliant, but their use has massive impacts** (a logging sketch follows this list)
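
One practical mitigation for lost audit events is to ship audit records off the VM the moment they occur, so a snapshot revert cannot erase them.  A minimal sketch using Python’s standard syslog handler (the collector address is hypothetical):

```python
import logging
import logging.handlers

# Forward audit events to a central collector as they happen; a
# snapshot revert on this VM then cannot rewrite the audit history.
audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.handlers.SysLogHandler(
    address=("logs.example.com", 514)))  # hypothetical collector

audit.info("user=jdoe action=snapshot-revert vm=payments-01")
```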

Users and privileged users exist in any environment, but virtual environments introduce a new level of problems through a new privilege layer.  This new layer resides in the hypervisor / underlying system (depending on the virtualization model).

One host running multiple virtual machines = Critical infrastructure

Critical infrastructure = Business impact

Business impact = Compliance requirements

  • Cloud can be considered a new IT service that is commoditized and cheap, resulting in a lower barrier to entry, lower switching costs, and lower visibility and control.
  • How do you know the cloud environment provider is able to satisfy the concerns that exist within the Cloud?
  • Less than 20% state they are interested in deploying cloud technology, and the majority state they are going to put it on internally managed systems.
  • Primary concern is security and control.
  • Assurance related to performance, trust, and reliability are primary drivers away from greater adoption.
  • Barriers to Enterprise adoption include – Security & control and Protecting sensitive information.

IT teams are becoming the auditors of the Cloud providers – a role that carries tremendous risks for both the enterprise and the Cloud experiment.

Challenges with Auditing:

  • Regulatory compliance is the major driver for Identity Management
  • Auditors are not virtualization savvy
  • Virtualization audit issues have not been flagged
  • What will drive priorities?  Education; compliance & regulatory pressures; public exposure.
  • Adjusted controls will be required for separation of management and control

Cloud – Due Diligence:

  • Attain assurance from the Cloud provider
  • Establish visibility and accountability through performance and service agreements (note this does not transfer the duty)
  • Demand full privileged user management
  • Restrict privileged access (provide process for allowing elevated and policy based activities)
  • Ensure accountability
  • Restrict access to logs
  • Restrict access to virtual environment (resources)

Central Access Policy Management:

  • Enforce the policy across all VMs and Virtualization platform
  • Centralization – do not rely on local policies
  • Enforce policy change control
  • Manage deviations from the policy
  • Report on Policy Compliance

Complete and Secure Auditing:

  • Monitor administrative activity (including impersonation)
  • Monitor all access to virtualization resources
  • Centralize audit logs
  • Notify on significant events
  • Integrate with central SIEM systems, including triggers to provide automation and actions seamlessly (a minimal forwarding sketch follows this list)
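
To illustrate the “notify on significant events” and SIEM-integration points, here is a minimal sketch that filters events by type and forwards the significant ones to a collector; the endpoint URL and event shape are hypothetical:

```python
import json
import urllib.request

SIEM_ENDPOINT = "https://siem.example.com/ingest"  # hypothetical collector
SIGNIFICANT = {"impersonation", "privilege-escalation", "policy-change"}

def handle_event(event: dict) -> None:
    """Forward significant events; triggers / automation run at the SIEM."""
    if event.get("type") not in SIGNIFICANT:
        return
    req = urllib.request.Request(
        SIEM_ENDPOINT,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

handle_event({"type": "impersonation", "user": "root", "vm": "db-01"})
```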

Conclusions:

  • Security and Controls are the Number 1 inhibitors for cloud adoption
  • Cloud providers need to reassure their enterprise customers
  • Automation is imperative
  • Security will become a cloud differentiator
  • The IT roles are reversing

*Nimrod Vax gave a nice presentation.  He started off very basic, focusing on definitions, but the middle and end focused nicely on specific risks related to Cloud and Virtualization.  I felt the session sometimes conflated the two environments – Cloud vs. Virtualization – and consequently their risks and controls.  Overall very good and well worth it.  Well done.

Hemma Prafullchandra of HyTrust led a great discussion on an Assurance Framework for Internal and External Clouds.

A lively discussion with 8 people was held in the main conference area on Cloud computing.  Specifically, we discussed the risks and the levels of assurance that can be achieved in the market as it is today.  In addition to myself, there were Becky Bace (renowned technology expert), Tim Minster (Cloud aficionado), Lynn (an executive chewing on this Cloud thing in ‘real life’), and several others whom I did not recognize.

Key points that I captured:

  • Assumptions must be challenged regarding Cloud systems
  • Premises of security
  • Premises of what is prudent in the environment
  • The intent of using the Cloud – i.e., is this a pet project business case or the payroll systems?
  • Clouds today should not house PII, PHI, Regulated, or Confidential information.
  • When putting Cloud technology to work, the parties must understand what they are receiving from each specific Cloud provider – each one provides different levels of granularity and assurance through service level agreements and access to 3rd party audit reports.
  • Consider the Cloud as a utility – such as power.  If you want to protect your assets in your home, you buy a surge strip and a UPS.  The same should be considered for low-cost Cloud utilities: deployment of end-level encryption, firewalls, IDS, and proper system-level security is required.  These additions, which would be needed regardless of physical locality (data center or Cloud) for certain levels of assurance, provide degrees of assurance based on the intent of the organization.  (A client-side encryption sketch follows this list.)
  • Translating the Cloud architecture to the ‘Old Data Center’ model is key to providing the clean mapping of Common Controls and regulations to these systems.
  • These Cloud Systems should be managed with the understanding of the new risks as a result of being multi-tenant and hosted.
  • Data management and classification are still required in a Cloud environment, and must be considered for such things as cross-border data transfers and long-term management.
  • Action items – support Cloud initiatives; understand the risks to such operations; put sufficient controls in place; and communicate these to the auditors and management accurately.
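
Following the utility analogy above, one safeguard you bring yourself is encrypting data before it ever reaches the provider.  A minimal sketch using the third-party cryptography package’s Fernet recipe (the file names are hypothetical, and key management is deliberately out of scope):

```python
from cryptography.fernet import Fernet

# Keep this key in your own KMS/HSM -- never stored alongside the
# ciphertext at the cloud provider.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = open("payroll.csv", "rb").read()      # hypothetical file
ciphertext = f.encrypt(plaintext)

# Only the ciphertext is handed to the cloud utility.
open("payroll.csv.enc", "wb").write(ciphertext)

# Decryption happens on your side, not the provider's.
assert f.decrypt(ciphertext) == plaintext
```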

Ben Rothke (Sr. Security Consultant with BT) spoke on Establishing a SOC.

Great session on SOCs – best practices; setting one up; auditing a 3rd party’s; and ensuring success in the long term.  I didn’t see the entire session – I was in another one and left it early – but found Ben’s very helpful.  I would recommend downloading his presentation to any auditors or 3rd parties vetting a SOC, as he gives tremendous detail on how these facilities should be run, how to measure them, and how to make the business case for setting them up.

Hugely valuable session.

The evening discussions were with 13 individuals from Visa Europe, KOBIL, People Security, and several other organizations.  The discussions ranged through an immense number of topics and were hugely valuable.  Thanks to everyone for joining and sharing their great experiences!

Day 2 was fantastic.  Mid-point opinion of RSA Europe = Above Average (RSA SFO being average), but the sessions need to be increased in complexity… The consensus is that sessions billed as advanced could be more advanced, with fewer basic definitions and high-level concepts.  Otherwise brilliant.

Best,

James DeLuccia IV

RSA Europe 2009, London – Day 1

Below are my notes and takeaways from the first day of RSA London.  The day started off with plenty of cinematics and a nature-themed opening session by the heads of RSA.  The location is fantastic – the conference hotel is very close to good trains in London and there are plenty of things around.  While the hotel is a bit of a maze, I am told it is a fair trade-off from prior years.

Arthur Coviello opened the conference and gave a great number of specifics regarding the industry.  He actually gave a fantastic introduction to my session materials, and that certainly helped the delivery of my own session.  (Speaking of my session – I will be composing it into a paper and posting it online to gather opinion and input.)

Arthur’s nuggets:

  • 15 billion devices communicating – according to John Ganz, “The Embedded Internet: Methodology and Findings”
  • By 2011, 75% of the US workforce will be mobile (BNET)
  • Facebook will exceed 300M users by end of 2009.
  • More information has been created in this decade than in “all time”?!
  • All content is created digitally at inception or within 3 months.
  • Physical control of IT assets are loosening
  • Adoption of IT Cloud services will grow to 35 billion Euros, capturing 25% of ALL IT spending
  • Today “high value” is in taming the complexities of information and this is the challenge of IT organizations.
  • The challenge of the security industry is to enable these ubiquitous technologies and enable them in a boundaryless IT environment.

Hugh Thompson of People Security:

Hugh gave a great session that was both animated and well structured.  His presentation echoed a past project in which I created algorithms and technology to conduct social cyber forensics using similar mechanisms.  He made a good number of points regarding Data Gateways, and I hope his presentation will be made available to help disseminate the definitions and risks.

He highlighted the (somewhat obvious) fact that people are indiscriminately posting data online through social sites.  There will be a fallout from the information posted online.  There should be some kind of education for people on “what to post” online.

A takeaway from the Meetup dialogs: if the social norm is to post all this information, then it is the minority who do not.  Is it right, therefore, to educate the masses, or to secure them on their behalf?  There are interesting implications for security authentication and authorization systems as a result.

Guy Bunker’s session:

He highlighted the Jericho Forum 2009 Cube and posed very interesting and impactful questions that every organization should know about – I need to see if I can find a copy of that slide.

In an internal infrastructure situation, if the system-to-system controls fail it is generally OK – an audit deficiency or a weekend of work is in order to rectify the error.  If such errors or security failures occur at the Cloud Administrator level, the exposure has a broader impact – the severity depends upon the breach, the information, and the audience.

“Compliance is the dark side” … hmmm

  • Moving to the cloud does not remove the duty of complying with regulations
  • Moving to short-term contracts with Cloud providers (i.e., spinning up an environment for an XYZ project over a measured time frame) introduces requirements to demonstrate that these environments are appropriately secure and compliant.  How do the onboarding, management, measurement, and offboarding processes occur in these environments?
  • A caution (especially within the EU) is the exporting of data to these providers – do they keep it in properly approved regions and systems?
  • Data Migration:
    • Moving data back and forth between internal and external clouds can be difficult
    • Moving the data if the systems are proprietary requires API and conversion efforts
    • Restoration of data from backups based on prior / old applications creates a challenge when the system has been wholly updated and the older versions are no longer supported (think of JP Morgan being required to produce restored email backups from internally managed systems no longer supported by the vendor).
  • Questions to Ask Part 1 and Part 2 are a nice breakdown of sanity checks when using cloud providers by Guy Bunker
  • A trend in new threats is people taking entire Virtual Machine images, not just data.  This captures not only the information but, more importantly, the HOW and the MEANS of delivering the exact same service.

The end of the evening included the vendor booths, which were significantly smaller than at the April conference, but better due to the quieter and more productive conversations I had and observed.  The Blogger Meetup was also Tuesday, and it was great: plenty of very smart individuals, and the conversations were all geek, security, compliance, and extensions of sessions conducted during the day.  I actually hope other such evening events come together to take advantage of the presence of so many experts in their fields.

Day 1 was certainly a success.  The tremendous focus on Cloud, Social Media Risks, Identity Fraud, and audit was rich and worthwhile.  A personal takeaway is how applicable my hands-on work around securing auditable Cloud environments is for businesses.

Other sources for information:

More to come…

Kind regards,

James DeLuccia IV