Category Archives: information security

Methodology for the identification of critical connected infrastructure and services — SaaS, shared services, and more

ENISA released a study with a methodology for identifying critical infrastructure in communication networks. While this is an important and valuable topic, I dove into this study for a particularly selfish reason … I am SEEKING a methodology that we could leverage for identifying critical connected infrastructure (cloud providers, SaaS, shared services internal to large corporations, etc.) for the larger public/private sector. Here are my highlights; as always, I would value any additional analysis:

  • Challenge to the organization: “..which are exactly those assets that can be identified as Critical Information Infrastructure and how we can make sure they are secure and resilient?”
  • Key success factors:
    • Detailed list of critical services
    • Criticality criteria for internal and external interdependencies
    • Effective collaboration between providers (internal and external)
  • Interdependency angles:
    • Interdependencies within a category of service
    • Interdependencies between categories of services
    • Interdependencies among data assets
  • Establish baseline security guidelines (due care):
    • Balanced to business risks & needs
    • Established at procurement cycle
    • Regularly verified (at least w/in 3 yr cycle)
  • Tagging/Grouping of critical categories of service
    • Allows for clean tracking & regular security verifications
    • Enables troubleshooting
    • Threat determination and incident response
  • Methodology next steps:
    • Partner with business and product teams to identify economic entity / market value
    • Identify the dependencies listed above and mark criticality based on entity / market value
    • Develop standards needed by providers
    • Investigate how monitoring to standards can be managed and achieved (in some cases contracts can support you, others will be a monopoly and you’ll need to augment their processes to protect you)
    • Refresh and adjust annually to reflect modifications of business values
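
The tagging, grouping, and criticality-marking steps above can be sketched as a small service-inventory model. This is my own illustration of the approach, not something from the ENISA study; the service fields, the value threshold, and the criticality rule are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    category: str        # e.g. "SaaS", "cloud provider", "internal shared service"
    market_value: float  # economic entity / market value from business and product teams
    depends_on: list = field(default_factory=list)

def critical_services(services, threshold):
    """Mark services at or above the value threshold as critical, then
    propagate criticality to everything they depend on, so that
    interdependencies inherit the criticality of what they support."""
    by_name = {s.name: s for s in services}
    critical = {s.name for s in services if s.market_value >= threshold}
    frontier = list(critical)
    while frontier:
        for dep in by_name[frontier.pop()].depends_on:
            if dep not in critical:
                critical.add(dep)
                frontier.append(dep)
    return critical
```

Refreshing annually then amounts to re-running the marking with updated market values and dependency lists.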

I hope this breakout is helpful. The ENISA document has a heavy focus on promoting government/operator ownership, but businesses cannot rely on or wait for such action and should move accordingly. The above is heavily modified and original thinking based on my experience structuring similar business programs. A bit about ENISA’s original intent of the study:

This study aims to tackle the problem of identification of Critical Information Infrastructures in communication networks. The goal is to provide an overview of the current state of play in Europe and depict possible improvements in order to be ready for future threat landscapes and challenges. Publication date: Feb 23, 2015 via Methodologies for the identification of Critical Information Infrastructure assets and services — ENISA.

Best, James

The “appearance of trustability” on foo.github.io

GitHub is an awesome and very popular repository system. Basically, if you want to work on something (code, a book, electronic files) and then allow others to freely make suggested modifications (think Track Changes in a Microsoft Word doc), GitHub is the new way of life. I have used it to publish a book, write code, and take a Python course online, and others are using it at scale to produce some of the fantastic tools you see online.

I recently saw a post (included below) that clarified how their encryption was set up. Basically, encryption allows you to confidentially send data to another party without the fear of others intercepting, stealing, or modifying it. It appears, though, that foo.github.io presents the appearance of encryption but does not in fact provide it end to end, meaning the actual files are sent in the clear for part of their journey.
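
One quick way to investigate a claim like this from the client side is to look at the certificate a host actually presents: when the issuer belongs to a CDN rather than the site operator, TLS is terminating at the CDN edge. A minimal sketch using Python’s standard library; the `issued_by` helper and whatever host you point it at are my own illustration, not part of the GitHub post:

```python
import socket
import ssl

def fetch_cert(host, port=443, timeout=5):
    """Open a TLS connection to `host` and return the peer certificate as a dict."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

def issued_by(cert):
    """Pull the organizationName out of the certificate's issuer field.

    getpeercert() returns the issuer as a tuple of RDN tuples of
    (key, value) pairs, so we flatten it into a dict first."""
    issuer = dict(rdn[0] for rdn in cert.get("issuer", ()))
    return issuer.get("organizationName", "")

# Usage (requires network access):
#   cert = fetch_cert("foo.github.io")
#   print(issued_by(cert))
```

This only tells you who terminates the first TLS hop; it cannot show what happens behind the CDN, which is exactly the gap the support response below describes.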

This is a problem in our structure of security and compliance. Today we have regulations and industry standards designed to prescribe specific security safeguards and levels to ensure a baseline amount of security. If organizations fail to meet the true intent of the regulations, doing only enough to pass inspection while creating an environment susceptible to basic attacks, then the users (you and me) are the ones who suffer.

While it is disappointing for an organization to set up something that clearly creates false trust and checks a box, it is more a call to action for those who operate these systems to take pride in the services they are delivering. Much as Steve Jobs desired the insides and outsides of a system to be done correctly, the security of an organization should not just look right but be right.

We must do better as owners, operators, and security professionals. Trust depends on indicators and expectations being met, and violating that begs the question: what else is being done in the same manner?

“cben” comment below on github.com issues post:

Turns out there is no end-to-end security even with foo.github.io domain. Got this response from GH support (emphasis mine):

[…opening commentary removed…]

While HTTPS requests may appear to work, our CDN provider is adding and removing the encryption at their end, and then the request is transmitted over the open internet from our CDN provider to our GitHub Pages infrastructure, creating the appearance of trustability.

This is why we do not yet officially support HTTPS for GitHub Pages. We definitely appreciate the feedback and I’ll add a +1 to this item on our internal Feature Request List.

via Add HTTPS support to Github Pages · Issue #156 · isaacs/github · GitHub.

Best,

James

State of compliance vs. compliant, re: New PCI Compliance Study | PCI Guru

A new study was released by Branden Williams and the Merchants Acquirer Committee (MAC), and it is worth a read. One aspect that jumped out at me is the compliance-versus-compliant rates shared in the study. The difference here is between those who have represented themselves as PCI compliant through Attestations of Compliance (AOC) and those who have had their programs pressure-tested by the criminals of the world and been found wanting.

Here is the snippet from PCI Guru that highlights this state of discrepancy:

The biggest finding of the study and what most people are pointing to is the low compliance percentages across the MAC members’ merchants.  Level 1, 2 and 3 merchants are only compliant around 67% to 69% of the time during their assessments.  However, most troubling is that Level 4 merchants are only 39% compliant.

Depending on the merchant level, these figures are not even close to what Visa last reported back in 2011.  Back then, Visa was stating that 98% of Level 1 merchants were reported as compliant.  Level 2 merchants were reported to be at 91% compliance.  Level 3 merchants were reported at 57% compliance.  As is Visa’s practice, it only reported that Level 4 merchants were at a “moderate” level of compliance.

via New PCI Compliance Study | PCI Guru.

Here is the link to the report from Branden & MAC

The Board of Directors, the CISO, and legal should all care deeply that PCI security (and certainly that of other contractual agreements) is achieved honestly. Too often, organizations view this like registering a car with the government. It is far too complex and impactful to people within and outside a given business for that. The cyber-economic connections between proper, efficient, and effective security all lend themselves to better products in the market and more focus on what the business is driving toward.

Is your program honestly secure, and does it fully address these leading-practice principles?

Best,

James

Bank Hackers Steal Millions ($100M+) via Malware & long campaign – NYTimes.com

A good article was released in the NYT today highlighting an extended attack on up to 100 banks, in which the attackers learned the banks’ methods and then exploited them. What is interesting here is that the attackers studied the banks’ own processes and then customized their behaviors accordingly.

It would be difficult to imagine these campaigns succeeding for such a long period if the malware had been detected (which is possible with periodic security process reviews), or if the banks’ processes had been re-examined by risk officers for activity within the dollar-range thresholds. It is typical for data to be slowly “dripped” out of networks to stay below detection ranges (one reason signatures are essentially worthless as a preventive/detective tool), and similar below-threshold behavior should be expected at the human/software process level.
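
The threshold point can be made concrete. A rule that inspects each transfer in isolation misses amounts deliberately kept just under the limit, while a simple running aggregate per account catches the “drip.” A hypothetical sketch; the limits and the shape of the data are invented for illustration and are not from the NYT article or the Kaspersky report:

```python
from collections import defaultdict

def flag_accounts(transfers, per_txn_limit, aggregate_limit):
    """Flag accounts two ways: individual transfers at/above per_txn_limit
    (the naive per-transaction rule), and running totals that creep past
    aggregate_limit even though every single transfer stayed under the
    naive rule -- the slow-drip pattern."""
    totals = defaultdict(float)
    flagged = set()
    for account, amount in transfers:
        if amount >= per_txn_limit:
            flagged.add(account)   # caught by the per-transaction rule
            continue
        totals[account] += amount
        if totals[account] >= aggregate_limit:
            flagged.add(account)   # caught only by aggregation over time
    return flagged
```

In practice the aggregate would be kept over a rolling time window per account, but the asymmetry is the point: the second check is the one a per-transaction signature cannot replace.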

I look forward to the report, so I can analyze the campaign and share any possible learnings beyond this surface article. Two highlights of the NYT article jump out to me:

Kaspersky Lab says it has seen evidence of $300 million in theft from clients, and believes the total could be triple that. But that projection is impossible to verify because the thefts were limited to $10 million a transaction, though some banks were hit several times. In many cases the hauls were more modest, presumably to avoid setting off alarms.

The hackers’ success rate was impressive. One Kaspersky client lost $7.3 million through A.T.M. withdrawals alone, the firm says in its report. Another lost $10 million from the exploitation of its accounting system. In some cases, transfers were run through the system operated by the Society for Worldwide Interbank Financial Telecommunication, or Swift, which banks use to transfer funds across borders. It has long been a target for hackers — and long been monitored by intelligence agencies.

via Bank Hackers Steal Millions via Malware – NYTimes.com.

The report is planned for release on Feb 16, and I hope there are substantial facts on the campaign.

Thanks to Kaspersky for continuing to lead research and provide solutions.

Best,

James


1 Billion Data Records Stolen in 2014, WSJ

A nice summation of the Gemalto report regarding the data breaches in 2014.

Identity theft was by far the largest type of attack, with 54% of the breaches involving the theft of personal data, up from 23% in 2013.

Data records are defined as personally identifiable information such as email addresses, names, passwords, banking details, health information, and social security numbers.

via 1 Billion Data Records Stolen in 2014, Says Gemalto – Digits – WSJ.

Key points:

  1. Only 4% of the data breached was encrypted – demonstrating both encryption’s effectiveness and its continued lack of proper adoption
  2. 78% of breaches were from U.S. companies, followed by the U.K.

Lessons abound, and I am working on publishing a new piece on the evolution of these breaches and how “we” have misinterpreted the utility of this data.

On a similar topic, please join me in building leading habits for everyday users to minimize the impact of these breaches at http://www.hownottobehacked.com, my new research project.

Best,

James

Attribution & Intent challenges: Comparing Regin module 50251 and “Qwerty” keylogger

Kaspersky Lab (a pretty wicked good set of researchers) published an analysis of the Snowden-shared source code and found it identical in part to a piece of malware known as Regin. Regin has been in the digital space for nearly 10 years and has been tied to a number of infected systems globally.

I would encourage everyone to read and understand the analysis, as it is quite thorough and interesting .. go ahead, I’ll wait .. Comparing the Regin module 50251 and the “Qwerty” keylogger – Securelist.

While I cannot speak to the source and reasoning behind this tool, beyond the obvious conjectures, I would stress two critical points: attribution and intent.

Attribution is hard and of little value

As we find with other digital attacks, attribution is very difficult, and I often tell clients not to focus on it as a basis for sanity and response. This shows in the difficulty of attributing such attacks, but also in the problems of incorrectly making such assertions. For example, JP Morgan’s “Russian attack on the bank due to their activities” during the Ukraine incident was in fact a breach due to simple human error in configuring a server.

Intent

We as the observers do not know the intent of the operatives behind the malware. In this case with the NSA, we have identified malware in various locations, but as we all know … malware code spreads pretty freely without much direction. The likelihood that a given system was infected unintentionally, or without purpose on the operators’ part, is pretty high.

This comes to the forefront in our own internal analysis of attacks and breaches in our corporate environments. We must seek out all of the possible vectors, and not allow our bias or the evidence on hand to sway us incorrectly.

Spiegel.de article on Kaspersky report and other thoughts

Thoughts?

James

Information Security executives … is responsibility being abdicated?

Is the “it is your decision, not ours” statement and philosophy a cop-out within the information security sphere?

This is a common refrain and frustration I hear across the world of information security and information technology.  Is this true?  Is it the result of personality types that are attracted to these roles?  Is it operational and reporting structure?

In Audit, independence is required and visibility is granted. Do not the business (the CIO) and the subject-matter expert (the CISO), who are not granted that visibility, possess a requirement of due care to MAKE it work?

The perfect analogy is the legal department: they NEVER give in and walk away with a mumble; they present their case until all the facts are known and a mutual understanding is reached. Balance happens, but it happens with understanding.

This point is so important to me that it warranted a specific sharing of the thought. I hope we can reframe our approach and, following a presentation off TED, focus on the WHY. (need to find link…sorry) The individuals in these roles provide the backbone and customer-facing layer of EVERY business.

These thoughts and realizations come from stumbling around our community, and from today’s RSA presentations and their underlying tones.

Always seek,

James DeLuccia