Tag Archives: google

Hardware flaws can lead to a 70% breakout probability in Cloud / virtualization setups

Google released details on how an attacker can take advantage of the physical design and layout of some memory chips in computers. The exploit is based on repeatedly setting and releasing a charge on one row of memory until the charge leaks over into the neighboring row and flips bits there (simplifying here). Stated another way – imagine cutting an onion and then using the same knife to cut a tomato… the taste of the onion would definitely transfer to the tomato, ask any toddler 😉
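
To make the mechanism concrete, below is a minimal sketch (in C) of the kind of repeated-access pattern involved. It is illustrative only and not a working exploit: it assumes an x86 CPU (for the clflush intrinsic) and that the two addresses happen to map to different rows of the same DRAM bank, something a real attack would have to arrange deliberately.

    /* Illustrative "row hammer" access loop -- NOT a working exploit.
     * addr_a and addr_b are assumed to map to different rows of the
     * same DRAM bank; repeatedly activating those rows is what can
     * disturb (flip bits in) neighboring "victim" rows on vulnerable
     * memory modules. */
    #include <emmintrin.h>   /* _mm_clflush */
    #include <stdint.h>

    static void hammer(volatile uint8_t *addr_a, volatile uint8_t *addr_b,
                       long iterations)
    {
        for (long i = 0; i < iterations; i++) {
            (void)*addr_a;                      /* read aggressor row A */
            (void)*addr_b;                      /* read aggressor row B */
            _mm_clflush((const void *)addr_a);  /* flush so the next    */
            _mm_clflush((const void *)addr_b);  /* read hits DRAM again */
        }
    }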

  • What does this mean for enterprises? It is early, but this type of risk should be addressed and covered by your third-party supplier / procurement security team. Leading organizations are already vetting hardware vendors and the components included in each purchase to prevent malicious firmware and snooping technology.
  • In addition, the supplier team managing all of the deployed cloud and virtualization relationships (your Cloud Relationship Manager) should begin reviewing their provider evaluations.

Of course this is a new disclosure and the attack is not simple, but that does not mean it will not or could not occur.

The attack identified by Google, combined with a virtualized environment, creates a situation where an attacker “…can design a program such that a single-bit error in the process address space gives him a 70% probability of completely taking over the JVM to execute arbitrary code” – Research paper

Given the probability of success, it is definitely valuable to have this on your risk and supplier program evaluations.

Here is the full analysis by Google and the virtualization research paper.

Best,

James DeLuccia

Big Data is in its early maturity stages, and could learn greatly from Infosec, re: the Google Flu Trends failure

The concept of analysing large data sets, crossing them, and seeking the emergence of new insights and better clarity is the constant pursuit of Big Data. Given the volume of data being produced by people and computing systems, stored, and now available for analysis, there are many possible applications that have not yet been designed.

The challenge with any new 'science' is that the path from concept to application is not always a straight line, or a line that ends where you were hoping. The implications for businesses using this technology, as with the use of Information Security, require an understanding of its possibilities and weaknesses. False positives and exaggerations were a problem in information security's past, and now the problem seems almost understated.

An article from the Harvard Business Review details how the Google Flu Trends project failed in 100 out of 108 comparable periods. The article is worth a read, but I wanted to highlight two sections below as they relate to business leadership.

The quote below picks up where the author is speaking about the problems with the model:

“The first sign of trouble emerged in 2009, shortly after GFT launched, when it completely missed the swine flu pandemic… it’s been wrong since August 2011. The Science article further points out that a simplistic forecasting model—a model as basic as one that predicts the temperature by looking at recent-past temperatures—would have forecasted flu better than GFT.”
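
To illustrate what the quote means by a “simplistic forecasting model”, here is a minimal sketch in C of a lagged-average baseline: predict this week's value from the average of the last few weeks. The weekly numbers and window size are placeholders for illustration, not real flu data.

    #include <stdio.h>

    #define WINDOW 3   /* how many past weeks feed the forecast */

    /* Forecast for week t = average of the WINDOW weeks before it. */
    static double lagged_average(const double *series, int t)
    {
        double sum = 0.0;
        for (int i = t - WINDOW; i < t; i++)
            sum += series[i];
        return sum / WINDOW;
    }

    int main(void)
    {
        /* Hypothetical weekly flu-activity index (illustrative only). */
        double weeks[] = { 2.1, 2.4, 2.6, 3.0, 3.4, 3.1, 2.8 };
        int n = (int)(sizeof weeks / sizeof weeks[0]);

        for (int t = WINDOW; t < n; t++)
            printf("week %d: forecast %.2f, actual %.2f\n",
                   t, lagged_average(weeks, t), weeks[t]);
        return 0;
    }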

So in this analysis both the model and the Big Data source were inaccurate. There are many cases where such events occur; if you have ever followed the financial markets and their predictions, you see they are more often wrong than right. In fact, it is a psychological flaw (a habit) that we as humans zero in not on the predictions that were wrong, but on those that were right. This is a risky proposition in anything, and it is important for us in business to focus on the causes of such weaknesses and not be distracted by false positives or convenient answers.

The article follows up the above conclusion with this statement relating to the result:

“In fact, GFT’s poor track record is hardly a secret to big data and GFT followers like me, and it points to a little bit of a big problem in the big data business that many of us have been discussing: Data validity is being consistently overstated. As the Harvard researchers warn: “The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.”

The quality of the data is held up here as the fault, and I would challenge that…

The analogy is from information security, where false positives and similar noise were awful in the beginning and have become much better over time. The key data inputs and the analysis within information security come from sources that are commonly uncontrolled and certainly not the most reliable for scientific analysis. We live in a (data) dirty world, where systems behave uniquely for each person interfacing with them.

We must continue to develop tolerances in our big data analysis and in the systems we use to seek benefit from it. This must be balanced with criticism to ensure that the sources and results are true, and not an anomaly.

Of course, the counter argument could be: if the recommendation is to learn from information security, as it has had to live in a dirty-data world, should information security instead be focusing on creating “instruments designed to produce valid and reliable data amenable for scientific analysis”? Has this already occurred? At every system component?

A grand adventure,

James

 

Convergence Risk: Google Chrome and Extensions, at BlackHat 2011

Interesting quotes from the researchers who demonstrated attack vectors in Google’s Chrome during BlackHat 2011:

“The software security model we’ve been dealing with for decades now has been reframed,” Johansen said.  “It’s moved into the cloud and if you’re logged into bank, social network and email accounts, why do I care what’s stored in your hard drive?”

  • An important illumination regarding the shifting risk landscape. How the user interfaces with data and systems has changed, and it challenges the current technology controls relied upon to safeguard intellectual property.
  • What is the effectiveness of end-point security (malware / phishing agents, anti-virus) in this new use case?
  • What is being deployed and effective – policy, procedure, technology, a hybrid?

“While the Chrome browser has a sandboxing security feature to prevent an attack from accessing critical system processes, Chrome extensions are an exception to the rule. They can communicate among each other, making it fairly easy for an attacker to jump from a flawed extension to steal data from a secure extension.”

  • Speaks to the issue of the convergence of apps that is emerging on iPhones, Androids, their respective tablets, TVs, browsers, operating systems, etc…  Similar to the fragmentation attacks of the past, where packets would be innocent separately, but once all were received they would reassemble into something capable of malicious activity.

An interesting extension of the risk here is that the platform and / or devices may be trusted and accepted by enterprises, but it is these Apps / Widgets / Extensions that are creating the security scenarios.  This requires a policy and process for understanding the state of these platforms (platforms here including all mobile devices, browsers, and similar app-loadable environments) beyond the gold configuration build.

Another article on the Google Chrome extension risk described above.

Thoughts?

James DeLuccia

Infrastructure Security Response, Google excludes 11M+ domains

Google officially removed from its search results a “freehost” provider – a Korean company offering the .co.cc domain (link to The Register article).  This was done on the basis of a large percentage of spammy or low-quality sites.  According to the Anti-Phishing Working Group (report), this top-level domain accounted for a large amount of malware, phishing, and spam traffic.

This defensive move by Google nicely frames a counter-move to what I have termed ‘infrastructure-level attacks’.  These types of attacks are executed through planned, global programs designed to bypass the fundamental security safeguards organizations deploy.  The popular examples are RSA SecurID tokens and Comodo certificates.

The challenge has been how to respond in kind to such attacks, and here we are seeing an exploration of that response.  The U.S. Government is exploring filters and preventive tools at the ISP level, and here we have a provider of search results eliminating the possibility of users connecting to such domains – regardless of any non-malicious sites among them.
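
As a rough illustration of this kind of infrastructure-level filtering, here is a minimal sketch in C of a suffix blocklist that refuses anything under a blocked free-host domain. The list and matching logic are assumptions for illustration, not how Google or any ISP actually implements it.

    #include <stdio.h>
    #include <string.h>

    /* Example blocked free-host suffixes (illustrative only). */
    static const char *blocked_suffixes[] = { ".co.cc" };

    /* Returns 1 if hostname ends with any blocked suffix, else 0. */
    static int is_blocked(const char *hostname)
    {
        size_t hlen = strlen(hostname);
        size_t count = sizeof blocked_suffixes / sizeof blocked_suffixes[0];
        for (size_t i = 0; i < count; i++) {
            size_t slen = strlen(blocked_suffixes[i]);
            if (hlen >= slen &&
                strcmp(hostname + hlen - slen, blocked_suffixes[i]) == 0)
                return 1;
        }
        return 0;
    }

    int main(void)
    {
        printf("%d\n", is_blocked("free-site.co.cc"));  /* 1: filtered out */
        printf("%d\n", is_blocked("example.com"));      /* 0: allowed      */
        return 0;
    }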

This situation highlights the need to examine the information security program of your organization and of its core providers.  This examination must consider both known risks and ‘far-fetched’ ideas (such as a domain being blocked at the ISP level) that may impact your business.  Such continuous programs of risk assessment are key, but just as critical is the examination and pivoting of the program itself.  (Yes… a risk assessment of the risk assessment program.)

Counter thoughts?

James DeLuccia