Below is my analysis and breakdown of the summary judgment awarding the case to the bank that provided security safeguards to a client who was the victim of a malware attack and, as a result, lost hundreds of thousands of dollars. You can read in-depth articles on the events here at Wired and here. My analysis below gives the page number, a quote from that page, and then my reflection on it.
Court order thoughts:
Page 21 "…did not communicate any wishes separate and apart from the security procedures included in the agreements governing eBanking…never express its dissatisfaction with these security procedures to Ocean Bank personnel." <– Silence is acceptance; hence the value of risk assessments of partners/vendors/customers in which query, analysis, and resolution are methodical.
Page 24 "…Rule was set at $100,000 to ensure customers were not inconvenienced by frequent prompts for challenge questions…intentionally lowered the Dollar Amount Rule from $100,000 to $1…nor any other outside security professional recommended that…threshold" <– The very safeguard designed to prevent fraud was pushed beyond its effective range, and then lowered without any analysis of the risks or impact. This kind of modification is warned against by the vendor that developed the fraud-detection technology.
Page 24 "…from time to time and purposefully changed the challenge question amount triggers…to thwart fraudsters' software that was designed to identify the online system's default…thresholds" <– An old, primitive, and largely ineffective control, but a common one.
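The threshold logic described in these two entries can be sketched in a few lines. This is a minimal illustration of the mechanism, not the vendor's actual implementation: a dollar-amount trigger for challenge questions, with optional randomization around the base amount so that malware probing for a fixed default cutoff cannot learn it exactly. The function names and jitter parameter are my own assumptions.

```python
import random

DEFAULT_THRESHOLD = 100_000  # vendor default cited in the order; lowered to $1 by the bank


def make_challenge_threshold(base: float, jitter_pct: float = 0.2) -> float:
    """Randomize the challenge trigger around a base amount so that
    malware probing for a fixed default cannot learn the exact cutoff."""
    return random.uniform(base * (1 - jitter_pct), base * (1 + jitter_pct))


def requires_challenge(amount: float, threshold: float) -> bool:
    """A transaction at or above the threshold prompts challenge questions."""
    return amount >= threshold
```

Note what a $1 threshold does here: `requires_challenge` fires on essentially every transaction, which trains users to answer challenge questions constantly and hands the answers to any keystroke-logging malware on the machine.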
Page 27 "…'something the user has': device identification information specific to the client's personal computer and its use of the Bank's application…" <– An interesting twist on multi-factor authentication, in which the computing device itself is treated as the authentication factor. Given how simple and common malware infection of end-user systems is, using the device this way greatly undermines the intent of multi-factor authentication.
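To make the weakness concrete, here is a minimal sketch of device fingerprinting as a "something you have" factor. The attribute names are illustrative, not the vendor's actual schema: a stable identifier is derived from client-reported machine attributes.

```python
import hashlib


def device_fingerprint(attrs: dict) -> str:
    """Derive a stable identifier by hashing client-reported device
    attributes in a canonical (sorted-key) order. Attribute names
    are illustrative, not any vendor's actual schema."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

The flaw noted above falls out of the design: malware running on the same machine presents the same attributes and therefore the same fingerprint, so this "factor" travels with the compromised host instead of being independent of it.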
Page 28 "…FFIEC Guidance explains that a multifactor system must include at least two of the three basic factors, but says nothing about how banks must respond when one of these factors detects an anomaly" <– A legal perspective on "intent guidance" vs. "prescriptive guidance."
The use of "other controls" as permitted by FFIEC is interesting in that there is a distinction, and perhaps an allowance, between having controls 'available' and actually 'employing' them in an environment. The onus appears to be on the users of these systems to apply and leverage them actively, with awareness of the independent risks that result (page 29).
Page 36 "The risk-scoring engine generated a risk score of 790…'a very high-risk transaction'…legitimate transactions generally produced risk scores in the range of 10 to 214…" <– A risk-measurement system was deployed and produced a score more than three times higher than any prior legitimate transaction, yet there was no monitoring and therefore no response or control. The challenge in these environments is not just having controls, but also the attention needed to manage and mature them so they actually drive a response.
Pages 36-38: The fraud damage could have been worse, as the attackers also had the ability to change the out-of-band contact/call-back information online. In addition, some of the account numbers the attackers used were invalid (meaning they sent the stolen money to bad bank accounts), so they lost some of the potential take.
Page 39/40 "…architecture houses the authentication information as encrypted data only and stores parts of it in multiple locations (including…on the system of RSA…This design is such that it would be virtually impossible for someone to garner all of the authentication information without obtaining it from the end-user. It has been Jack Henry's experience that virtually all of the fraud incidents involving banks…relate back to a compromise of the end-user's machine." <– This is an interesting set of statements, as the defense is that split keys make it possible to breach the credentials only via the consumer. This is untrue in cases where the key systems themselves are compromised or misconfigured (or stolen, as in the case of the RSA SecurID tokens). It is also odd to state that compromises 'all originate from the consumer,' as mitigating a portion of exactly that risk is the stated intent of using these multi-factor systems.
Page 42 "…because the configuration file [of Zeus/Zbot] cannot be decrypted and analyzed…it is impossible to say with any certainty that Zeus or another form of malware or something else altogether (e.g., Patco sharing its credentials with a third party) was responsible for the alleged fraudulent withdrawals." <– This highlights the importance of forensic responsibility, and that the failure to exercise it created a possibility of negligence. Given how much is known about Zeus (its source code is publicly accessible and its purpose well understood), it is difficult to imagine another source. The question could also have been examined locally, using host firewalls and logs to analyze the traffic.
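The kind of local traffic analysis mentioned above need not be sophisticated. As a hedged sketch (the log format, field names, and allowlist are my own illustrative assumptions, not any real firewall's export format), one could compare outbound destinations in firewall logs against the small set of endpoints a banking workstation is expected to reach:

```python
# Illustrative key=value log format and allowlist; real firewall exports differ.
KNOWN_GOOD = {"192.0.2.10", "192.0.2.20"}  # e.g., the bank's published endpoints


def suspicious_destinations(log_lines: list[str]) -> set[str]:
    """Return destination IPs seen in outbound log entries that are not
    on the allowlist -- candidate command-and-control traffic worth
    preserving and investigating after a suspected compromise."""
    hits = set()
    for line in log_lines:
        fields = dict(f.split("=", 1) for f in line.split() if "=" in f)
        dst = fields.get("dst")
        if dst and dst not in KNOWN_GOOD:
            hits.add(dst)
    return hits
```

Even this crude pass would have produced evidence bearing on whether the credentials left the machine over the network, rather than leaving the question entirely open as it was at trial.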
Page 43 "…security system asks the user the precise challenge question, such as 'What is your spouse's middle name?'" <– These types of challenge questions continue to lose effectiveness as more of this data is breached. The effect is cumulative: the more breaches occur (published and unpublished), the less likely these answers are to remain 'secret.' Intentional public disclosure through Facebook and other social-networking sites has the same effect. Commercial systems need to move beyond static questions and consider mechanisms such as rotating key cards (similar to those utilized by the IRS federal sites).
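The "rotating" mechanisms alluded to above are typically time-based one-time passwords in the style of RFC 6238 (TOTP). As a sketch, the entire mechanism fits in a short function using only the standard library; unlike a spouse's middle name, the "answer" changes every 30 seconds, so a value captured in a breach or by malware is useless minutes later:

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password in the style of RFC 6238 (HMAC-SHA1
    with dynamic truncation). `at` is a Unix timestamp; defaults to now."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

The RFC 6238 test vectors confirm the shape of the scheme: with the ASCII secret `12345678901234567890`, the 8-digit code at time 59 is `94287082`.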
Page 52 "The agreed-to obligation…,a commercial banking customer, to monitor its accounts daily in turn fits the definition of a 'security procedure': 'a procedure established by agreement of a customer and a receiving bank for the purpose of . . . detecting error in the transmission or the content of the payment order or communication.'" <– An important stipulation whereby each party, independently and in cooperation, forms the information-security controls and security procedures. In this case, the agreement was established through a modified contract displayed for online acceptance by the user of the system. Given the diversity, immensity, and complexity of such agreements, it will be interesting to see whether this principle holds in future cases. In the meantime, it is prudent to have service-provider agreements reviewed by both the legal department and the information security team to identify and properly manage these control environments. This is an area often missing from standard information security programs.
Page 53 "The concept of what is commercially reasonable in a given case is flexible….It is reasonable to require large money center banks to make available state-of-the-art security procedures. On the other hand, the same requirement may not 'be reasonable for a small country bank'." <– This section argues that organizations handling higher-value transactions carry more risk and should therefore institute more sophisticated safeguards. That holds when transactions are handled manually and individually, but not for electronic systems. The presence of an electronic system itself constitutes the need for adequate security, in this case a more mature program reflecting the risks of the industry. Risk assessments are always prudent, but clearly (as both parties are litigating to recover substantial sums) both organizations would have identified these transactions as high risk and worthy of reasonable safeguards appropriate to online connected systems.
The conclusion of this summary judgment is in favor of the Bank (page 69). An interesting gap in the entire case is the lack of examination of the bank's information security program: there is no scrutiny of the method or means by which the security program was established. In addition, there appears to be an allowance for controls to be installed but not correctly deployed, representing the classic state of being compliant with FFIEC, but not secure.
The cost of this fraud ran to hundreds of thousands of dollars for the bank, with substantial further losses to the actual victim company. A broader observation is the lack of communication to businesses of any size about the importance of, and responsibility that comes with, using these computing systems. Translating the need to adhere to leading practice is a challenge, and one not done well today.
This case brings forward many questions, such as:
- What techniques exist to bridge the security/risk divide with business leadership?
- Who must be the banking-fraud expert, and who is tasked with handling these complex systems? In this case, it seems to be the user/customer. Where do these roles and responsibilities properly land?
I welcome any thoughts or further analysis. I am especially interested in other cases that are helping build the case law around this topic. An article referencing a bit more history and another case may be found here – thanks for sharing!