Tag Archives: 2011

Court ruling in favor of bank… setting precedent?

Below is my analysis and breakdown of the summary judgment awarding the case to the bank that provided security safeguards to a client who was the victim of a malware attack and lost hundreds of thousands of dollars as a result.  You can read in-depth articles on the events here at Wired and here.  My analysis below shows the page number, a quote from that page, and then my reflection on it.

Court order thoughts:

Page 21 “…did not communicate any wishes separate and apart from the security procedures included in the agreements governing eBanking…never express its dissatisfaction with these security procedures to Ocean Bank personnel.” <– silence is acceptance, hence the value of risk assessments of partners / vendors / customers where query, analysis, and resolution are methodical.

Page 24 ” …Rule was set at $100,000 to ensure customers were not inconvenienced by frequent prompts for challenge questions…intentionally lowered the Dollar Amount Rule from $100,000 to $1…nor any other outside security professional recommended that…threshold”  <– The very safeguard designed to prevent fraud was pushed beyond its effective range, and then lowered without any analysis of the risks or impact.  This modification is warned against by the very organization that developed the fraud technology.
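The mechanics are simple enough to sketch. A minimal, hypothetical version of a dollar-amount challenge rule (names and values are my illustration, not the bank's actual system) shows why a $1 threshold defeats the purpose: every transfer triggers the challenge, so the 'secret' answers are exposed to any keylogger on every session.

```python
# Hypothetical sketch of a dollar-amount challenge rule.
# The vendor default challenges only unusually large transfers, so the
# answers are rarely typed; a $1 threshold makes them typed every time.
DEFAULT_THRESHOLD = 100_000  # illustrative vendor default

def requires_challenge(amount: float, threshold: float = DEFAULT_THRESHOLD) -> bool:
    """Return True when a transfer should prompt challenge questions."""
    return amount >= threshold
```

With the default, a $5,000 transfer passes silently; with the threshold lowered to $1, every routine transfer prompts the questions, and malware on the endpoint harvests the answers.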

Page 24 “…from time to time and purposefully changed the challenge question amount triggers…to thwart frauders’ software that was designed to identify the online system’s default…thresholds” <– an old, primitive, and ineffective control…but common enough.

Page 27 “…’something the user has’: device identification information specific to the client’s personal computer and its use of the Bank’s application…” <– an interesting twist on multi-factor authentication, where the computing device itself is treated as an authentication factor.  Given how simple and common malware infections of these systems are, using the device this way greatly undermines the intent of multi-factor authentication.

Page 28 “…FFIEC Guidance explains that a multifactor system must include at least two of the three basic factors, but says nothing about how banks must respond when one of these factors detects an anomaly” <– A legal perspective on “intent guidance” vs. “prescriptive guidance”.

The use of “other controls” as permitted by FFIEC is interesting in that there is a distinction, and perhaps an allowance, between having controls ‘available’ and actually ‘employing’ them in an environment.  The onus appears to be on users of systems to apply and leverage these actively, with awareness of the independent risks that result (page 29).

Page 36 “The risk-scoring engine generated a risk score of 790…’a very high-risk transaction’…legitimate transactions generally produced risk scores in the range of 10 to 214…”  <– A risk measurement system was deployed and produced a score more than three times higher than any seen before, but with no monitoring there was no response and no control.  A challenge of these environments is the necessity of having both the controls and the attention to manage and mature them so they can respond.
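A monitoring hook for such a scoring engine need not be elaborate. The sketch below is entirely my own illustration (not the bank's system): it flags any score that dwarfs the historical baseline, the kind of check that would have surfaced the 790.

```python
# Illustrative anomaly check on a risk-scoring engine's output: flag any
# score that exceeds the historical maximum by a chosen factor.
def is_anomalous(score: float, history: list[float], factor: float = 2.0) -> bool:
    """Return True when `score` is at least `factor` times the prior maximum."""
    baseline = max(history) if history else 0.0
    return score >= baseline * factor

# Legitimate transactions had scored between 10 and 214 (per the order);
# a 790 is roughly 3.7x the prior maximum and should trip an alert.
legitimate_history = [10, 42, 150, 214]
```

The point is not the arithmetic but the wiring: a score nobody reads is not a control.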

Page 36-38 The fraud damage could have been worse, as the attackers had the ability to change the out-of-band contact / call-back information online.  In addition, the attackers used some account numbers that were incorrect (meaning they sent the stolen money to a bad bank account), so they lost some of the potential take.

Page 39/40 “…architecture houses the authentication information as encrypted data only and stores parts of it in multiple locations (including…on the system of RSA…This design is such that it would be virtually impossible for someone to garner all of the authentication information without obtaining it from the end-user.  It has been Jack Henry’s experience that virtually all of the fraud incidents involving banks…relate back to a compromise of the end-user’s machine.”  <– This is an interesting set of statements, as the defense is that split keys make it possible to breach the credentials only via the consumer.  This is untrue in cases where the key systems are compromised or misconfigured (or stolen, as in the case of RSA tokens).  It is also interesting to claim that compromises ‘all originate from the consumer’, as the intent of using these multi-factor systems is to mitigate a portion of that very risk.

Page 42 “…because the configuration file [of Zeus/Zbot] cannot be decrypted and analyzed…it is impossible to say with any certainty that Zeus or another form of malware or something else altogether (e.g., Patco sharing its credentials with a third party) was responsible for the alleged fraudulent withdrawals.”  <– This highlights the importance of forensic responsibility, and that failure to exercise it created a possibility of negligence.  Given how much is known about Zeus, its publicly accessible source code, and the purpose of its design, it is difficult to imagine another source.  This could also have been determined locally by analyzing traffic through the local firewalls and logs.

Page 43 “…security system asks the user the precise challenge question, such as ‘What is your spouse’s middle name?'” <– These types of challenge questions continue to lose effectiveness as more of this data is breached.  The effect is cumulative: the more breaches occur (published and not), the more likely these answers are no longer ‘secret’.  Beyond breached data, the increase in intentional public disclosure of private details (on Facebook and other social networking sites) has the same effect.  Commercial systems need to move beyond these questions and consider such things as rotating key cards (similar to those utilized by the IRS Federal sites).

Page 52 “The agreed-to obligation…,a commercial banking customer, to monitor its accounts daily in turn fits the definition of a ‘security procedure’: ‘a procedure established by agreement of a customer and a receiving bank for the purpose of . . . detecting error in the transmission or the content of the payment order or communication.'” <– An important stipulation whereby each party, independently and in cooperation, forms the information security controls and security procedures.  In this case, such agreement was established through a modified agreement displayed for acceptance online to the user of the system.  Given the diversity, immensity, and complexity of such agreements, it will be interesting to see if this principle holds in future cases.  In the meantime, it is prudent to have service-provider agreements reviewed by the legal department and information security team to identify and properly manage these control environments.  This is an area often not included in standard information security programs.

Page 53 “The concept of what is commercially reasonable in a given case is flexible….It is reasonable to require large money center banks to make available state-of-the-art security procedures.  On the other hand, the same requirement may not ‘be reasonable for a small country bank’.” <– This section attempts to state that because some organizations handle higher-value transactions, they are more at risk and should institute sophisticated security safeguards.  This is true when transactions are handled manually and uniquely, but not so with electronic systems.  The presence of electronic systems itself constitutes a need for adequate security: in this case, a more mature and appropriate program that reflects the risks within the industry.  It is always prudent to employ risk assessments, but clearly (as both parties are litigating and seeking to recover sums of money here) both organizations would have identified these transactions as high risk and worthy of the reasonable safeguards necessary in online connected systems.

The conclusion of this summary judgment is in favor of the bank, page 69.  An interesting gap in this entire case is the absence of any examination of the bank’s information security program.  There is no examination of the method or means taken in establishing the security program.  In addition, there seems to be an allowance for controls to be installed but not correctly deployed, representing the classic posture of compliant with FFIEC, but not secure.

The cost of this fraud was hundreds of thousands of dollars for the bank, and more still for the actual victim company.  Another global perspective is the lack of communication to businesses (of any size) on the importance of, and responsibility that comes with, using these computing systems.  Translating the need to adhere to leading practice is a challenge, and one not clearly done well.

This case brings forward many questions, such as:

  • What techniques exist to cross the security/risk divide to business leadership?
  • Who must be the banking fraud expert?  Who is tasked with handling these complex systems?  In this case, it seems, the user / customer.  Where do these roles and responsibilities land?

Here is a link to the direct summary judgment (PDF).

I welcome any thoughts or further analysis.  I am especially interested in other cases that are helping build the case law around this topic.  An article referencing a bit more history and another case may be found here – thanks for sharing!


James DeLuccia

VOIP and Skype are not secure

A recent research effort demonstrated, through what the authors call “Phonotactic Reconstruction”, the ability to “decipher” the encapsulation/encryption process employed by a substantial number of VOIP providers – including Skype.  The research is pretty clear cut, and highlights that:  1.  It is within means to eavesdrop on these conversations, and 2.  Security through obscurity still does not hold water.

The second point has become more important as I work with global organizations and grow to understand the complexities that exist within the co-mingling of corporate-consumer system environments (more on that in the future).  VOIP is complex … it is a classic example of security balanced against a technology application (enough security, but not so much that it creates voice distortions).  Unfortunately, if the calls can be eavesdropped, then the security is insufficient.  This is significant given the enormous usage of this technology within most global corporations (Skype alone has about 124 million active monthly users out of 560 million registered).  Beyond eavesdropping, there also exists the ability to leverage these “voip environments” to break into other parts of the business.
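The underlying leak is easy to demonstrate. Many VOIP stacks pair a variable-bit-rate codec with length-preserving encryption, so packet sizes survive encryption intact, and it is that size pattern the researchers mine for speech. A toy sketch (entirely my own, using a stand-in XOR stream cipher, not any provider's actual scheme):

```python
import os

def xor_stream_encrypt(packet: bytes, keystream: bytes) -> bytes:
    """Length-preserving encryption: ciphertext length equals plaintext length."""
    return bytes(p ^ k for p, k in zip(packet, keystream))

# A variable-bit-rate codec emits differently sized packets per sound unit;
# the sizes here are arbitrary stand-ins for that variation.
packets = [b"aaa", b"bbbbbbbb", b"cc"]
ciphertexts = [xor_stream_encrypt(p, os.urandom(len(p))) for p in packets]

# The eavesdropper never sees plaintext, but the size pattern survives:
sizes = [len(c) for c in ciphertexts]
```

The contents are perfectly encrypted, yet `sizes` mirrors the plaintext exactly, which is all a phonotactic model needs to start reconstructing words.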

This raises the question: how do organizations remain agile and aggressive in leveraging these beneficial technologies, safely?  The simple answer is that a risk assessment is absolutely necessary as a technology is acquired, but as the technology evolves, future risk assessments are paramount.  The evolution is what is critical here – Skype, iPhone/app stores, Blackberry devices, etc. … these were all introduced with unknown trajectories and without obvious benefits or risks.  It is THIS fact of unknowing, and the shifting of what is known, that creates the need to mature how businesses embrace, continue to embrace, and manage (and yes, by manage I include secure) all these technologies.  The need here goes beyond simple technical specifications to a balance of “risk” and “security”.  Meaning the following should, at least, be considered:

  1. How dependent is the business on the technology, and what are our backup plans?
  2. What type of information will traverse this technology, and who will depend upon it?  These stakeholders are the inputs for discerning what is critical, which usually leads to assigning owners to assets (only then can the risk be accepted, let alone thought through properly).
  3. As the owner and information alignment exercise will show, this is a point-in-time exercise … future visits are needed to learn if the owner sees additional risk, or sees the risk universe shift completely.
  4. What are the security safeguards: are they proven or newfangled?  (New is always more risky and should initiate a broader consideration of risk.)
  5. What encryption is being used, and is it being used completely and in the right places?  (If home music streaming devices can employ AES-256, enterprise products can too.)
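As a minimal sketch, the five questions above can be captured as a reviewable record, so the point-in-time assessment in item 3 has something concrete to revisit. All field names and values here are my own illustration, not drawn from any standard or real assessment:

```python
from dataclasses import dataclass, field

# Illustrative record of a technology risk assessment, one field per
# question in the list above; the schema is hypothetical.
@dataclass
class TechnologyRiskAssessment:
    technology: str
    business_dependence: str        # 1. dependence and backup plans
    data_classes: list[str]         # 2. information traversing it
    owner: str                      # 2. accountable stakeholder
    next_review: str                # 3. point-in-time; schedule revisits
    safeguards_proven: bool         # 4. proven vs. newfangled
    encryption_in_use: str          # 5. algorithm and coverage
    notes: list[str] = field(default_factory=list)

# Hypothetical example entry for a VOIP deployment:
voip_entry = TechnologyRiskAssessment(
    technology="Skype",
    business_dependence="high; fallback is a PSTN conference bridge",
    data_classes=["internal discussions", "client calls"],
    owner="IT operations",
    next_review="2012-Q1",
    safeguards_proven=False,
    encryption_in_use="proprietary; coverage unverified",
)
```

The value is less the data structure than the discipline: an owner, a review date, and an honest answer on whether the safeguards are proven.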

Much more time can be spent on risk assessments (I would suggest investing time to look at NIST 800-30 or OCTAVE as a starting point… ISO 2700X is good too, but not free).  The key takeaway is to challenge ‘complexity’ as a security and assurance control – it is neither.  In the world of PCI, the VOIP guidance, the call center guidance, and PCI DSS 2.0 provide insight; simplifying the language: yes, encryption of the activity is required.

Article on “Phonotactic Reconstruction” can be found here and here (pdf).

Other thoughts?

James DeLuccia

Sony PSN hack of 100M+ accts executed from Amazon EC2

The PlayStation breach at Sony has gotten reasonable publicity, but little intelligence on the attack, methods, and results has been shared – certainly not enough to enable others to learn and become more resilient. A nice article on The Register details information indicating that the attackers leveraged the power of Amazon EC2 to execute the attack as paying customers.

The article can be found here http://ow.ly/1sY8TW with links to the Bloomberg article here (http://www.bloomberg.com/news/2011-05-13/sony-network-said-to-have-been-invaded-by-hackers-using-amazon-com-server.html)

While leveraging these cloud services is not new, what is intriguing and worth deeper consideration is how much further we can extend cloud beyond what is already being applied by companies and security researchers. Supercomputer processing, rapid instant access, and global accessibility, yet still being used uncreatively to host web sites and such?!? Using the example from the article, if one can spend less than a dollar to break good encryption, could we not also leverage that for rotating keys at a similar cost-benefit model?

I digress; the consideration of clouds being weaponized harks back to the days of defense by blocking entire country IP address blocks. Perhaps naive in its simplicity, but when customers become robots (like Amazon’s Mechanical Turk) then these cloud IP addresses need to be reconsidered. Looking forward to a greater discussion here…


James DeLuccia

(produced on iPad)

Too many compromises …

There has been a large volume of public disclosures over the past several months involving sophisticated attackers (and some not so sophisticated) breaching mature defenses.  The most visible ones are not against small startup companies that were lean on their defense; instead they are being perpetrated against organizations with the resources and skills to muster a good defense.

The lesson of these public disclosures (note: the public ones, as more are likely not being publicly released) is the need to reflect and adjust.  This echoes a comment I made in February regarding the need to continue building and evolving security through risk assessments.  Information protection is about conditioning, not about the latest fad.  Certainly the latest fad requires attention, but so do historic attack methods.  So, what is interesting about the recent compromises?

First off, sharing and depth of defense.  Barracuda Labs had a breach, and they published very specifically what happened.  What is positive about this breakdown is that they articulate precisely what was taken, and how they are CERTAIN that was it.  Their analysis contains no legal or marketing language anywhere, and is therefore actionable.

Amazon AWS…  this affected many more people, and for many days the impact was unknown.  This was more an operational breakdown for businesses than an information protection failure.  There is a decent timeline of the web services’ updates here, and from it one can tell that great effort was taken by Amazon to address the challenges.  Unfortunately, for businesses trying to take action there was no clear point at which clients using Amazon should have activated their disaster recovery plans.  Given that services were coming online incrementally, I understand that decision was challenging.  In addition, the plethora of terminology probably made all system administrators and CIOs review their deployment maps more than once.  The takeaways here are: understand precisely what services are being provided and where (as much as you are able), and secondly, have your own plan that depends on your own customers’ expectations.  This is leading practice, but can get lost in these serious moments.

Admittedly the Amazon example is not a breach – though think of the potential if it had been an attack on the infrastructure of Amazon’s cloud environment, and how extremely valuable that windfall would be to criminals and nation-states.

Then there is Sony’s network, which was attacked and required a complete shutdown of the service to extricate the attackers and stabilize the environment.  There are three separate posts by Sony on their blog here, here, and here.  This attack highlighted that, regardless of the business model, sensitive data is sensitive data (PCI).  In addition, an idea I have been discussing with colleagues is the potential for bot armies.  Consider that PlayStation consoles have very strong computing platforms built in and are used by graphics companies and the military for high-end computing (1,716 linked consoles make one supercomputer!).  Also consider that at least 77 million devices connect at some point to the PlayStation Network, receive updates, transmit data at high speeds, and are likely never disconnected after a gaming session.  The point being: a single hardware platform running basically identical firmware, connected on high-speed broadband … sounds like a perfect bot army to me.

Much has been written on the events above, and I encourage all to deeply understand how they were executed.  My intention here, as always, is to provide a different context for these events, their meaning, and perhaps some actions appropriate for you.


James DeLuccia

Malware Response guide (By Microsoft)

Those who followed the latest RSA SFO 2011 conference via tweet or live likely attended one of the “Ahh, I have been breached, now what?” themed sessions.  These were very popular and provided great insight.  The firms participating in the discussion represented some of the largest and most sophisticated in the world.  The greatest takeaway from these discussions was the necessity to shift our approach to infections.

I wrote briefly on the subject, but after receiving the February 17, 2011 updated Malware Response guide from Microsoft (thanks to Roger Halbheer for the quick post), I can provide a bit more substance.

Today most malware that hits businesses infects on average 15 machines (a figure pulled from a presentation at RSA).  The intent is to infect not a lot of machines, but enough – and then change.  This makes simple AV and help desk responses minimally effective.  In fact, based on several statistics, AV as a control is less than 30% effective.  The additional challenge is that malware today is very sophisticated and designed to exfiltrate data from the business.  Approaching these infections without the full attention and talent of the organization is a mistake.

Specifically, I would propose that ANY infection be treated as an INCIDENT unless proven otherwise.  This means the incident response functions must be more effective, must scale, and must be able to rapidly diagnose the situation on a per-machine basis.  Microsoft’s Infrastructure Planning and Design Guide for Malware Response is a great basis to build from.

Here is the link to Roger’s short post highlighting the graphic model.  Pointed attention to the 3 major options:

  1. Attempt to clean the system
  2. Attempt to restore system state
  3. Rebuild the system

Key phrase: Attempt.  And this comes from the provider of the operating system and a substantial amount of enterprise infrastructure safeguard tools.
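The escalation implied by those three options can be sketched in a few lines. The triggers below are my own illustration, not Microsoft's criteria; the point is that "attempt" means each step needs a defined failure path to the next.

```python
# Illustrative escalation path mirroring the guide's three options:
# clean -> restore system state -> rebuild. The decision inputs are
# hypothetical simplifications.
def choose_response(clean_succeeded: bool, restore_point_trusted: bool) -> str:
    """Pick the least disruptive response that still yields a trustworthy system."""
    if clean_succeeded:
        return "clean"
    if restore_point_trusted:
        return "restore system state"
    return "rebuild"
```

In practice the hard part is the second input: deciding whether any restore point predates the infection, which is exactly why treating infections as incidents, with real forensics, matters.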

Bottom line: the threats have shifted, and therefore our responses within corporations must match them play for play.  I see this shift in the PCI DSS standard in how assessments and audits are being conducted.  I am also beginning to see a shift beyond the mere presence of controls to their coverage and effectiveness.  The AICPA has introduced SOC 1, SOC 2, and SOC 3 as a broader effort to bring greater clarity to our technology world.

Other thoughts?  Challenges?

James DeLuccia IV

Security and Privacy on mobile devices are not always equal

I spend a great deal of time on global security programs where the focus is (finally) beyond the bits and bytes and includes the people and process side of the information security equation.  One may argue this has always existed, looking at the regulations and standards we have built our compliance programs around.  I would politely highlight that this is not always the case, and not to a sufficient level.

A common challenge in the security world is that a lot of bad can and does happen online.  The only difference between what scares people one day and the next lies in what is being focused on.  Exposed emails are nothing new; financials leaked on torrents, or simply the acronym APT, are not as new as they appear.  What is substantially new is the emerging device universe and the consumerization of tools beyond and into the enterprise.

These devices not only introduce entirely new platforms with application risks, but the manner of handling the traffic and the data itself is also different.  A bit of an example to clarify:

In the late 1990s, web browsers and websites allowed fields that went unchecked by the server – why?  Well, there was no reason anyone would send a bazillion letter ‘A’s, or would type in SQL statements that might interfere with the database backend, right?
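For those who did not live through it, the classic failure is worth a concrete sketch. Using Python's built-in sqlite3 as a stand-in backend (my own example, not from any breach discussed here), the same attacker input behaves very differently depending on whether the server treats it as code or as data:

```python
import sqlite3

# Stand-in backend with one user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

evil = "' OR '1'='1"  # a classic injection string typed into a form field

# Unchecked input concatenated into the query is interpreted as SQL,
# so the WHERE clause becomes always-true and every row comes back:
vulnerable_rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{evil}'").fetchall()

# A parameterized query treats the same input purely as data, and the
# literal string matches no name in the table:
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
```

The smartphone-era point is that each new platform re-runs this lesson: every assumption baked into an interface about what users "would never send" is a field left unchecked.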

Switch to 2011, and with smartphone devices we have new platforms and a model where assumptions are being built into the applications and interfaces on what the users will do.  It is a given that we are wiser today on these points, but with the release of new code and applications the level of complexity increases rapidly on each device and technology ecosystem.  The consequences of these individual applications interacting on the same device have yet to be realized.

Another point of view is: what happens with the data being handled by the service provider?  As organizations switched to mobile sites, third-party systems were used, but those are being pushed aside by custom-built and iOS-style applications.  As was highlighted in a nice little post by Dan Wallach, not all communication settings are adhered to for every device and every channel (his example: Android and Facebook).

There is an immense opportunity to reduce current and future difficulties by reflecting on the past and applying the correct safeguards today – completely.  Coverage is key – without it, we are just plugging some holes and hoping attackers don’t look at the rest.

A bit broad, but look forward to challenge and alternate perspectives,

James DeLuccia IV

RSA SFO 2011 is done

This week has been a blitz of sessions, one-on-one deep discussions, and random swarms of passionate people descending on any table to discuss all things information security.  The sessions were good, the products somewhat interesting, and the networking was fantastic.  I did my best to tweet as much as I could from sessions throughout the conference, but there is a theme I saw and wanted to share for debate and consumption.

The risks are severe, and quite frankly the offensive capability of attackers (individuals, attack teams like Anonymous, and nation-state sponsored groups) is excellent.  Organizations are suffering exfiltration of data at an alarming scale, and the management of these threats remains ad hoc and immature.

From a single vendor this would come across as F.U.D., but it was expressed by the Director of the NSA, and at nearly every session and keynote.

So what does this mean?  Well, much like at RSA, there is a need to translate and form an opinion, or what is lovingly called the ‘Apply Slide’.  Below are the points that resonated for me, in no particular priority order:

  • There is a need for a more meaningful appreciation of what is valuable to every organization.  This discussion needs to happen with the management, legal, risk management, internal audit, and technology leadership.  A primary effort of bringing these individuals together is to ascertain what is valuable and what forms may it exist throughout the business.
  • A sophisticated incident handling process is needed.  This is a topic highlighted by the likes of Google and signals intelligence experts.  The point, though, was I feel lost on the majority of attendees.  The need is not simply to have trained team members with tools to be activated in the case of a breach.  That is needed, but there is a much deeper need:
    • The maturing and sustaining of a firmwide global effort to respond to every infection / malware instance / behavioral anomaly.  Here is the thesis:  Today most of these are addressed through a help desk function that follows a decade-old process of risk identification and remediation.  The common response is to update patches and make the behavior cease (removal of the symptom is considered a “fix”).  It is widely accepted that attackers and infection tools are highly sophisticated, and removal is not a linear path nor a guarantee of a “clean” system.  The statistics reinforce this fact when we look at the effectiveness of anti-virus tools, the amount of malware that is unique and unknown, and the percentage of exfiltration events that result from this code.  Finally, there is a stigma to ‘activating an incident response team’ in many organizations.  Together these create an atmosphere where keyloggers / botnets / Stuxnet / and similar malware toolsets can infect, avoid destruction, increase infiltration, and intelligently exfiltrate desired data.
  • Cloud was a very popular topic all week, and despite my professional annoyance at the media focusing on a single aspect of information technology, one simple fact remains true: these sessions were packed.  The information provided was not clear, and visibility remains beyond immediate grasp.  So my response is … these sessions were packed and the term is everywhere because we have not yet reached a state of understanding.  I foresee this will be a long and great area to continue developing.

Thanks to everyone, and I hope to see you again – soon!

James DeLuccia