Tag Archives: fraud

Mobile ad fraud costs advertisers $1 billion a year, study says

Mobile devices are easy targets, and as they rely more heavily on wifi, fraud becomes easier to execute without detection. It would also be fairly easy to execute such advertising fraud, as described in the article, by installing similar tech onto all of the unsecured, unpatched Internet of Things devices on the internet. Imagine this fraud running across all of the consumer internet routers!

Details from the Fortune article:

The firm said that it tracked down more than 5,000 apps that were exhibiting suspicious behavior. It found the apps by using the real-time tracking data that it gets from the various mobile ad networks that it is integrated with, which allowed it to look for the kind of rapid ad-loading and background functions that most malicious apps exhibit…

Forensiq said its research showed that more than 13% of total mobile app inventory was at risk, and 14% of all mobile apps on iOS, Android and Windows Mobile platforms.

Over a period of 10 days, Forensiq says it observed more than 12 million unique devices with installed apps that exhibited fraudulent behavior: about 1% of all devices it observed in the U.S. and between 2% and 3% of those in Europe & Asia.

Mobile ad fraud costs advertisers $1 billion a year, study says.

… My comments on this report (not posted on Fortune due to its requirement to link a social media account):

It’d be valuable to know how those Apps identified for fraud were ranked in the ‘App stores’. This way we could identify the popularity and likely spread of these apps. The 12 million figure is large, but out of a possible 1.3 billion devices it is hard to understand the sampling effect.
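A quick back-of-envelope check on the sampling question raised above, using the figures quoted in the article and the approximate 1.3 billion device count mentioned:

```python
# Figures quoted above; the 1.3 billion device count is a rough global estimate.
observed_fraudulent_devices = 12_000_000
estimated_global_devices = 1_300_000_000

fraction = observed_fraudulent_devices / estimated_global_devices
print(f"{fraction:.2%} of the estimated device population")  # → 0.92%
```

That lines up with the ~1% of observed U.S. devices Forensiq reported, but says nothing about how representative the observed sample is of the global population.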

I’d love more intelligence on the ‘what’, so that regular readers of the article and users of the devices could clean these apps off their devices.

Gotta love Blackhat and DefCon week! All the research docs are released.

James


Analysis of McAfee’s Operation Shady RAT Report and highlights

’Tis Blackhat & Defcon week, so following are my thoughts …
McAfee released their Operation Shady RAT paper yesterday.  It focuses on data captured from a command and control server whose logs span a 6 year period.  They go into nice detail breaking down the attacks and timeframes, and allude to the motivations of the (single) attacker.  What does this mean for organizations and safeguarding information?  I think this paragraph articulates the value crisply:

“What we have witnessed over the past five to six years has been nothing short of a historically unprecedented transfer of wealth—closely guarded national secrets (including from classified government networks), source code, bug databases, email archives, negotiation plans and exploration details for new oil and gas field auctions, document stores, legal contracts, SCADA configurations, design schematics and much more has “fallen off the truck” of numerous, mostly Western companies and disappeared in the ever-growing electronic archives of dogged adversaries.”

Interesting details:

“…key to these intrusions is that the adversary is motivated by a massive hunger for secrets and intellectual property; this is different from the immediate financial gratification…”

<– As attackers have progressed from freelance and curious to professional, the motivation has changed, but so has the economic model.  These attackers were concerned with the long term and therefore were financed for the long haul too.  This is a key assumption of the threat landscape that must change from prior models.  The fun days of watching attack patterns shift with the annual summer school break and DefCon are over.  Businesses and models must change accordingly.

Interesting … the 14 geographic regions listed are missing one particular nation…

The description of the organizations that were breached and captured in these logs certainly spans the board.  Given the author’s insight that virtually all organizations have been breached, it is hard to look at the list hoping not to be on it – everyone is.  What is interesting to me is the continued deep penetration at what I term ‘Infrastructure Level Attacks’: systemic attacks designed to bypass the base assumptions and safeguards – such as encryption certificates; tokens of the 2-factor authentication; the cellphone and voicemail systems; and (as highlighted here) communications technology companies, international trade organizations that are privy to competitive information, satellite operators, and defense contractors (perhaps creating the opportunity for the recent influx of malicious control chips shipped out of China).

There have been a rich number of papers produced over the past few years that present and provide greater information on this threat.  I would encourage reading these intelligence reports as time permits.  A good site that continually has actionable information is here.

A short note on the flurry of posts and messages:

It’s Blackhat and Defcon week which means copious amounts of reports, presentations, and sometimes seismic events within the information security and intelligence space.  As interesting bits come to my attention I am posting them via twitter, and will try and post any excerpts that catch my eye.  I strongly encourage reading the full presentations and research papers.  Massive efforts went into these works, and it is now our opportunity to apply that knowledge appropriately.  I do look forward to others sharing their opinion, research, and links.

Other thoughts?

James

Court ruling in favor of bank… setting precedent?

Below is my analysis and breakdown of the summary judgement awarding a case to the bank that provided security safeguards to a client who was the victim of a malware attack and lost hundreds of thousands of dollars as a result.  You can read in-depth articles on the events here at Wired and here.  I broke down my analysis showing the page number, a quote from that page, and then my reflection on it.

Court order thoughts:

Page 21 “…did not communicate any wishes separate and apart from the security procedures included in the agreements governing eBanking…never express its dissatisfaction with these security procedures to Ocean Bank personnel.” <– silence is acceptance, hence the value of risk assessments of partners / vendors / customers where query, analysis, and resolution are methodical.

Page 24 “…Rule was set at $100,000 to ensure customers were not inconvenienced by frequent prompts for challenge questions…intentionally lowered the Dollar Amount Rule from $100,000 to $1…nor any other outside security professional recommended that…threshold”  <– The very safeguard designed to prevent fraud was pushed beyond its effective range, and then lowered without analysis of the risks or impact.  This modification is warned against by the organization that developed the fraud technology.
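To make the effect concrete, here is a hypothetical sketch of such a "Dollar Amount Rule" – function and parameter names are illustrative, not taken from the system described in the order:

```python
# Hypothetical sketch: transactions at or above a dollar threshold prompt a
# challenge question.  Names are illustrative, not from the actual system.
def requires_challenge(amount_usd: float, threshold_usd: float) -> bool:
    """True when a transaction should trigger a challenge question."""
    return amount_usd >= threshold_usd

# At the original $100,000 threshold, routine transfers pass without a prompt.
assert not requires_challenge(45_000, threshold_usd=100_000)

# Lowered to $1, virtually every transaction prompts a challenge, so challenge
# answers flow constantly and an anomalous prompt no longer stands out.
assert requires_challenge(45_000, threshold_usd=1)
```

The lowered threshold means the "rare, suspicious" signal the control was built around fires on every login, which is precisely why the vendor warns against the modification.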

Page 24 “…from time to time and purposefully changed the challenge question amount triggers…to thwart frauders’ software that was designed to identify the online system’s default…thresholds” <– an old, primitive, and ineffective control…but common enough.

Page 27 “…’something the user has’: device identification information specific to the client’s personal computer and its use of the Bank’s application…” <– an interesting twist on multi-factor authentication where the computing device itself is attributed as the authentication system.  Given the simplicity and commonality of malware infected systems the use of these devices as such greatly limits the intent of multi-factor authentication.

Page 28 “…FFIEC Guidance explains that a multifactor system must include at least two of the three basic factors, but says nothing about how banks must respond when one of these factors detects an anomaly” <– Legal perspective on “intent guidance” vs. “prescriptive guidance”

The use of “other controls” as permitted by FFIEC is interesting in that there is a distinction, and perhaps an allowance, in having controls ‘available’ vs actually ’employed’ in an environment.  The onus appears to be on users of systems to apply and leverage these actively and with awareness of the independent risks that result (page 29).

Page 36 “The risk-scoring engine generated a risk score of 790…’a very high-risk transaction’…legitimate transactions generally produced risk scores in the range of 10 to 214…”  <– A risk measurement system was deployed and generated a score more than 3x higher than any legitimate transaction ever had, but with no monitoring there was no response / control.  A challenge of these environments is the necessity to have controls plus the attention to manage and mature these systems to respond.
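An illustrative sketch (not the actual scoring engine) of the kind of out-of-range check that should have turned a 790 score into an alert, given a 10–214 legitimate baseline:

```python
# Flag a score that is far outside the historical range of legitimate
# transactions -- the alert a 790 against a 10-214 baseline should produce.
def is_anomalous(score: float, legitimate_scores: list, factor: float = 2.0) -> bool:
    return score > factor * max(legitimate_scores)

legitimate = [10, 55, 120, 214]          # range cited in the order
assert is_anomalous(790, legitimate)     # 790 >> 2 x 214 = 428
assert not is_anomalous(180, legitimate)
```

The point is that the check is trivial; what was missing in the case was anyone (or anything) watching for it and empowered to hold the transaction.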

Page 36-38 The fraud damage could have been worse…as the attackers had the ability to change the out-of-band contact / call-back information online.  In addition, the attackers used some account numbers that were incorrect (meaning they sent the stolen money to a bad bank account), so they lost some of the possible cash.

Page 39/40 “…architecture houses the authentication information as encrypted data only and stores parts of it in multiple locations (including…on the system of RSA…This design is such that it would be virtually impossible for someone to garner all of the authentication information without obtaining it from the end-user.  It has been Jack Henry’s experience that virtually all of the fraud incidents involving banks…relate back to a compromise of the end-user’s machine.”  <– This is an interesting set of statements, as the defense is that split keys make it only possible to breach the credentials via the consumer.  This is untrue in cases where the key systems are compromised or misconfigured (or stolen, as in the case of RSA tokens).  It is also interesting to state that compromises ‘all originate from the consumer’, as it is the intent of these multi-factor systems to mitigate a portion of that very risk.

Page 42 “…because the configuration file [of Zeus/Zbot] cannot be decrypted and analyzed…it is impossible to say with any certainty that Zeus or another form of malware or something else altogether (e.g., Patco sharing its credentials with a third party) was responsible for the alleged fraudulent withdrawals.”  <– Highlighting the importance of forensic responsibility, and that failure to do so created a possibility of negligence.  Given how much is known about Zeus, its source code being publicly accessible, and the purpose of its design, it is difficult to imagine another source.  This could also have been determined locally, using the local firewalls and logs to analyze traffic.

Page 43 “…security system asks the user the precise challenge question, such as ‘What is your spouse’s middle name?'” <– These types of challenge questions continue to lose effectiveness as more of this data is breached.  The effect is cumulative: the more breaches occur (published and non-published), the more likely these answers are not ‘secret’.  Beyond the breached data, intentional public disclosures (on Facebook and other social networking sites) have the same effect.  Commercial systems need to move beyond this and consider such things as rotating key cards (similar to those utilized by the IRS Federal sites).

Page 52 “The agreed-to obligation…,a commercial banking customer, to monitor its accounts daily in turn fits the definition of a ‘security procedure’: ‘a procedure established by agreement of a customer and a receiving bank for the purpose of . . . detecting error in the transmission or the content of the payment order or communication.'” <– Important stipulation whereby each party, independently and in cooperation, forms the information security controls and security procedures.  In this case, such agreement was established through a modified agreement displayed for acceptance online to the user of the system.  Given the diversity, immensity, and complexity of such agreements, it will be interesting to see if this principle holds in future cases.  In the meantime, it is prudent to have service-provider agreements reviewed by the legal department and information security team to identify and properly manage these control environments.  This is an area often not included in standard information security programs.

Page 53 “The concept of what is commercially reasonable in a given case is flexible….It is reasonable to require large money center banks to make available state-of-the-art security procedures.  On the other hand, the same requirement may not ‘be reasonable for a small country bank’.” <– This section attempts to state that as some organizations handle higher value transactions they are more at risk and should institute sophisticated security safeguards.  This is true when transactions are handled manually and uniquely, but not so with electronic systems.  The presence of electronic systems constitutes a need for adequate security, in this case instituting a more mature and appropriate program that reflects the risks within the industry.  It is always prudent to employ risk assessments, but clearly (as both parties are litigating and seeking to recover sums of money here) both organizations would have identified these transactions as high risk and worthy of reasonable safeguards as necessary in online connected systems.

The conclusion of this summary judgement is in favor of the Bank, page 69.  An interesting gap in this entire case is the examination of the bank’s information security program.  There is no examination of the method or means taken in establishing the security program.  In addition, there seems to be an allowance for controls to be installed but not correctly deployed – the classic ‘compliant with FFIEC, but not secure’.

The cost of this fraud was hundreds of thousands of dollars for the bank, and more still for the actual victim company.  Another global perspective is the lack of communication to businesses (of any size) on the importance of, and responsibility that comes with, using these computing systems.  Translating the need to adhere to leading practice is a challenge, and one not done well today.

This case brings forward many questions, such as:

  • What techniques exist to cross the security/risk divide to business leadership?
  • Who must be the banking fraud expert… Who is tasked with handling these complex systems… it seems, in this case, the user / customer.  Where do these roles and responsibilities land?

Here is a link to the direct summary judgement (PDF).

I welcome any thoughts or further analysis.  Especially interested in other cases that are helping build the case law around this topic.  An article referencing a bit more history and another case may be found here – thanks for sharing!

Best,

James DeLuccia

RSA Europe Conference 2009, Day 3 Recap

I attended the session first thing in the morning on malware with Michael Thumann of ERNW GmbH.  He gave quite a bit of technical detail on how to methodically go through the detection and investigation process, highlighting specific tools and tricks.  His slides are available online and provide sufficient detail to be understood outside of the presentation context.  Definitely recommend downloading them!  A few nuggets I took away:

Malware best practices:

  • When you have been targeted by a malware attack unknown to anti-virus – specifically one against key persons within the organization – the code should be preserved and reverse engineered by a professional to uncover what details were employed and included in the application, in order to understand the risk landscape.
  • Most malware (Windows) is designed to NOT run on virtualized VMware systems.  The reason is that these systems are used primarily in anti-malware reverse engineering environments.  So – 2 points:  first, virtualized client workstations sound like a reasonable defense for some organizations, and second, be sure to strip VMware identification artifacts when building such labs (check using tools such as ScoopyNG).
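One common VM-detection heuristic this kind of malware uses can be sketched simply – checking whether the network adapter's MAC address falls in a VMware-assigned OUI range.  Real malware layers many such checks (CPUID results, registry keys, device names), which is what the detection tools mentioned above enumerate:

```python
# Well-known VMware OUI (MAC address prefix) ranges.
VMWARE_OUIS = {"00:05:69", "00:0c:29", "00:1c:14", "00:50:56"}

def looks_like_vmware(mac_address: str) -> bool:
    # Compare the first three octets of the MAC against VMware's prefixes.
    return mac_address.lower()[:8] in VMWARE_OUIS

assert looks_like_vmware("00:0C:29:AB:CD:EF")
assert not looks_like_vmware("3C:22:FB:11:22:33")
```

This is why "de-VMware-ing" a lab matters: if any such artifact remains visible, the sample simply refuses to run and the analysis stalls.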

Virtualization panel moderated by Becky Bace with Hemma Prafullchandra, Lynn Terwoerds, and John Howie:

The panel was fantastic – best of the entire conference and should have been given another hour or a keynote slot.  Becky gave great intelligence around the challenges and framed the panel perfectly.  There was an immense amount of content covered, and below are my quick notes (my apologies for broken flows in the notes):

Gartner says by 2009: 4M virtual machines

  • 2011: 660M
  • 2012: 50% of all x86 server workload will be running VMs

Nemertes Research:

  • 93% of organizations are deploying server virtualization
  • 78% have virtualization deployed

Morgan Stanley CIO Survey stated that server virtualization management and admin functions for 2009 include: Disaster recovery, High availability, backup, capacity planning, provisioning, live migration, and lifecycle management (greatest to smallest)
Currently, advisories vs. vulnerabilities still show patches leading actual vulnerabilities… i.e., the virtualization companies are fixing things before they become exploitable vulnerabilities, by a fair degree.

Questions to consider:

  1. How does virtualization change/impact my security strategy & programs?
  2. Are the mappings (policy, practice, guidance, controls, process) more complex?   How can we deal with it?
  3. Where are the shortfalls, landmines, and career altering opportunities?
  4. What are the unique challenges of compliance in the virtualized infrastructure?
  5. How are the varying compliance frameworks (FISMA, SAS 70, PCI, HIPAA, SOX, etc) affected by virtualization?
  6. How do you demonstrate compliance? e.g., patching or showing isolation at the machine, storage, and network level
  7. How do you deal with scale rate of change?  (Asset/patch/Update) mgmt tools?
  8. What’s different or the same with operational security?
  9. Traditionally separation was king – separation of duties, network zones, dedicated hardware & storage per purpose and such – now what?
  10. VMs are “data” – they can be moved, copied, forgotten, lost and yet when active they are the machines housing the applications and perhaps even the app data – do you now have to apply all your data security controls too?  What about VMs at rest?

Hemma

  • Due to the rapidity of creation and deletion it is necessary to put in procedures and processes that SLOW down activities to include review and careful adherence to policies.  Accidentally wiping out a critical machine or trashing a template is quick and final.  Backups do not exist in a shared system / storage environment.
  • Better access control on WHO can make a virtual machine; notification of creation to a control group; people forget that there are licensing implications (risk 1); it is so cheap to create a VM; people are likely to launch VMs to fill the existing capacity of a system; storage must be managed – VMs are just bits and can fill space rapidly; VMs can be forgotten and must be reviewed; audit VMs on disk and determine use and implementation; VMs allow “Self-Service IT” where business units can reboot and operate systems; businesses will never delete a virtual machine; create a policy that retires VMs after X period, archives them, and deletes them after Y period – akin to user access policies.
  • In the virtualization world you have consolidation of 10 physical servers on 1 server .. when you give cloud-guru access you give them access to all systems and data and applications.
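The retire/archive/delete policy suggested above can be sketched as follows; the 90- and 365-day thresholds stand in for the "X" and "Y" periods and are illustrative only:

```python
from datetime import datetime, timedelta

RETIRE_AFTER = timedelta(days=90)    # "X" -- archive after this much idle time
DELETE_AFTER = timedelta(days=365)   # "Y" -- delete after this much idle time

def vm_lifecycle_state(last_used: datetime, now: datetime) -> str:
    # Classify a VM by how long it has sat idle, mirroring user-access reviews.
    idle = now - last_used
    if idle >= DELETE_AFTER:
        return "delete"
    if idle >= RETIRE_AFTER:
        return "archive"
    return "active"

now = datetime(2009, 10, 23)
assert vm_lifecycle_state(datetime(2009, 10, 1), now) == "active"
assert vm_lifecycle_state(datetime(2009, 6, 1), now) == "archive"
assert vm_lifecycle_state(datetime(2008, 6, 1), now) == "delete"
```

Running such a sweep against the VM inventory on disk is the audit step the panel called for: it forces forgotten VMs back into view instead of letting them accumulate as unmanaged bits.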

Lynn

  • People that come from highly regulated industries require you to put your toe in where you can … starting with redundant systems to give flexibility…  The lesson learned is managing at a new level of complexity, which brings forward issues that already existed within the organization and are now more emphasized.
    We need some way to manage that complexity.  While there are cost pressures and variables driving us towards virtualization, we need ways to manage these issues.

  • Must show that virtualization will offset the cost in compliance and lower or hold the cost bar – otherwise impossible to get approval to deploy technology.

  • Part of the complexity you are trying to manage is the risk and audit folks.  This means internal and external bodies.

John Howie

  • Utility computing will exacerbate the problem: instantiating an Amazon instance is preferred over using internal resources.  The challenge of getting users not to go set up an Amazon server is at the forefront – saying no and imposing penalties are not the right pathway…we must find positive, pro-business rewards for bringing such advantages internal.
  • Finance is saying “take the cheapest route to get this done” … Capex to OpEx is easier to manage financially.
  • Is there a tool / appliance that allows you to do full lifecycle machines?  Is there a way to track usage statistics of specific virtual machines?  -> answer yes, available in the systems.

GREATEST concern of the panel:  Shadow IT systems are not tracked and do not follow the lifecycle.  Systems deployed on Amazon are a black hole – especially when paid for on personal credit cards…. Is the fix as simple as having the company set up an account for employees to use?

Classic Apply slide:  (download the slides!)

  • Do not IGNORE virtualization – it is happening:
  • Review programs and strategies as they affect virtualization / cloud
  • Existing security & compliance tools and techs will not work….

The other session I enjoyed at the conference was the Show Me the Money: Fraud Management Solutions session with Stuart Okin of Comsec Consulting and Ian Henderson of Advanced Forensics:

Tips:

  • Always conduct forensic analysis
  • Fraudsters historically hide money for later use post-jail time.
  • Consider IT forensics and be aware of disk encryption – especially if it was an IT administrator and the system is part of the corporation.  Basically – be sure of the technology in place and carefully work forward.
  • Synchronize time clocks – including the CCTV and data systems
  • Be aware of the need for quality logs and court-admissible evidence
  • There are many tools that can support the activities of a fraud investigation and daily operations, but the necessity is complete and sufficient logs.  Meaning that the logs have to be captured and have to come from all the devices that matter.  Scope is key to ensuring full visibility.
  • Make someone responsible for internal fraud
  • Management must instate a whistle-blowing policy w/hotline
  • Be attentive to changes in behavior
  • Obligatory vacation time
  • Ensure job rotation
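A minimal sketch of the log-completeness point above: before relying on logs in an investigation, confirm that every in-scope device actually reported into the log store.  Device names here are illustrative:

```python
# Devices that fall within the investigation's scope.
in_scope = {"fw-edge", "db-core", "cctv-dvr", "ad-dc1"}

# Devices that actually appear in the collected logs.
seen_in_logs = {"fw-edge", "db-core", "ad-dc1"}

# Any device in scope but absent from the logs is a coverage gap.
missing = in_scope - seen_in_logs
assert missing == {"cctv-dvr"}   # a gap that undermines full visibility
```

A single missing device (here, the CCTV recorder) can leave exactly the blind spot a fraudster exploited, which is why scope verification comes before any analysis of log content.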

IT:

  • Audit employee access activation and logging
  • Maintain and enforce strict separation of duties
  • Pilot deterrent technologies (EFM) <– as the use of these will highlight problems in regular operations and help lift the kimono momentarily allowing aggressive improvements.

Overall a great conference.  Much smaller than the San Francisco conference, but the result is better conversations, deeper examination of topics, and superior networking with peers.

Till next year,

James DeLuccia IV

Federal Court fines Payment Processor for poor Business Practices

Proper business practices are a necessity in business, and when dealing with other people’s money it is paramount.  The FTC, again, has levied a fine against a business for not doing proper due diligence on new accounts within its operations.  ChoicePoint, now owned wholly by Lexis-Nexis, was previously found guilty of such practices in its infamous “breach”, where fraudulent accounts were set up and used to pilfer hundreds of thousands of account records.

The latest fine is against a payment provider who did not properly follow its own guidelines for onboarding new merchants.  The result was fraudulent charges against consumers of more than $2.38 million.  The business has been ordered by Federal Court to pay $1,779,000 in consumer redress and end the illegal practices.

…the payment processor did not follow its own guidelines for new merchants and did not check addresses, phone numbers, or references the bogus merchant provided. The FTC alleged that the defendants anticipated that the scam would generate high return rates, that they did not request or obtain proof that consumers had authorized debits to their accounts, and that they continued to process charges even after receiving complaints from consumers and banks and unacceptable explanations about unauthorized debits from the merchant. The complaint alleged that more than 70 percent of the merchant’s transactions were returned or refused by the consumers’ banks

What is interesting is – what type of risk management practices existed in the business to let this occur for so long, and what audit efforts were conducted that did not catch these deficiencies in existing controls?
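One audit control the complaint implies was absent can be sketched simply: flag merchants whose return/refusal rate is far above normal.  The 10% review threshold below is an illustrative placeholder, not a figure from the FTC complaint; card networks publish their own chargeback thresholds:

```python
# Flag merchants whose return rate is far above an acceptable threshold.
def return_rate(returned: int, total: int) -> float:
    return returned / total if total else 0.0

def should_review(returned: int, total: int, threshold: float = 0.10) -> bool:
    return return_rate(returned, total) > threshold

# The complaint alleged more than 70 percent of transactions came back.
assert should_review(returned=700, total=1000)
assert not should_review(returned=30, total=1000)
```

A 70 percent return rate is so far beyond any normal threshold that even the crudest periodic check would have caught it, which sharpens the question of what monitoring, if any, was in place.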

Guidelines and proper business practices are NOT check boxes for the sole purpose of checking them; they are to be adhered to in a manner that ensures the operational integrity of the business and the fidelity of operations.

A great article on the power of “check lists” is available here at the New Yorker.

Best regards,

James DeLuccia IV

How do Fraud and PCI go together?

An interesting phenomenon has occurred in the world of privacy data breaches, and specifically PCI DSS cardholder data breaches: fraud (acts committed intentionally by insiders, or thefts suspected of being fraud) has almost completely been forgotten. Not to say that organizations do not consider fraud in their basic risk registers, but a level of perception bias may have taken hold. This bias is a classic complacency effect in most risk managers’ minds, reinforced by the overwhelming number of successful hack attacks on organizations. For business, this is an important risk that must be addressed prudently throughout the organization.

An excellent set of resources is available through the Association of Certified Fraud Examiners (ACFE), where there are numerous articles and guides addressing many kinds of threats to an organization. I raise this issue because I recently conducted a research effort that evaluated the threats to organizations, retailers specifically, and how the control environment should be appropriately tuned. A thorough analysis (using in part the excellent Privacy Rights Clearinghouse data breach data) highlighted that although online attacks are more fruitful to attackers, there are nearly three times as many incidents under the fraud umbrella. The implications of this data are different for each organization, but it must be considered in each risk management effort. As part of a fraud strategy, organizations should give serious consideration to SAS 99. Below is a table from the research:

[Table: PCI breach data]
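A hedged sketch of the kind of tally behind figures like these – counting breach incidents by cause from a dataset such as the Privacy Rights Clearinghouse records.  The records and category labels here are illustrative, not the actual study data:

```python
from collections import Counter

# Illustrative incident records, not the actual study data.
incidents = [
    {"org": "retailer-a", "cause": "insider-fraud"},
    {"org": "retailer-b", "cause": "hack"},
    {"org": "retailer-c", "cause": "insider-fraud"},
    {"org": "retailer-d", "cause": "lost-media"},
    {"org": "retailer-e", "cause": "insider-fraud"},
]

# Tally incidents by cause to compare fraud against online attacks.
by_cause = Counter(rec["cause"] for rec in incidents)
assert by_cause["insider-fraud"] == 3   # fraud incidents outnumber hacks here
```

Even with such a simple count, the ratio of fraud incidents to hacking incidents can reshape where a control environment invests, which was the point of the research.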

PCI DSS specifically requires controls that align with ACFE and AICPA fraud prevention practices. The PCI DSS controls of access authorization, separation of duties, and clear job responsibilities all support the prevention of fraud in an organization.

Over time I will expand this article, as I find more data and expand on what core controls of PCI are beneficial for preventing Fraud. There is also a richer breakdown on SAS 99 at IT Compliance and Controls for those interested.

I would be interested to hear examples where Fraud played a role in a data breach, and what areas of the PCI DSS standard were critical in the detection or mitigation of the fraud.

Best,

James DeLuccia IV