Tag Archives: cio

Russians used non-public exploits to hack governments; Debunking: skill vs. budget


Organizations being hacked is not always the result of a superior adversary; more often than not (I think the figure is closer to 85% defender mistakes vs. 15% “very skilled” attackers) it is the result of poor defenses. The recent Russian hacking against the White House (note that GAO rated MOST Federal agencies as failing with regard to their information security postures) was described as skilled because the attackers used not-yet-known vulnerabilities. That is a generous leap in conclusion.

Their sophistication is not the deciding factor here; their budget is. Such vulnerabilities are readily available for purchase on the open market, and a successful attack could be orchestrated for less than $10k. According to public sources, even the very expensive vulnerabilities cost around $100k, easily within the reach of any financed attack group.

As we enter RSA week, and the slew of vulnerability disclosures likely to be released alongside it, let's be pragmatic about their impact and the defender's role.

They’ve determined that APT28, a politically-motivated Russian hacking group, used unpatched exploits in Flash Player and Windows in a series of assaults against a “specific foreign government organization” on April 13th. Patches for both flaws are either ready or on the way, but the vulnerabilities reinforce beliefs that APT28 is very skilled — less experienced groups would use off-the-shelf code.

via Russians are using undiscovered exploits to hack governments.

See you at RSA!

James @jdeluccia

1,600 Security Badges Missing From ATL Airport in 2 yr period – NBC News

This is not the complicated or strategic kind of topic I would normally highlight, but this bit of news concerns my home airport and is personally meaningful.

Basically, the report shows that 1,600 badges were lost or stolen over a 2-year period. That seems like a big number (roughly 2.6% of badges issued), but badge possession is a control that should be backed (and this was not highlighted in the broadcast) by secondary supportive controls, such as:

  • Key card access logs reviewed to prevent duplicate entries (i.e., the same badge cannot be used to enter twice without exiting)
  • Analytics comparing badge entries against the work shifts of the person assigned
  • Alerts on access to areas not zoned for that worker
  • Termination of employees who do not report a lost or missing badge within 12 hours
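The first control above, often called anti-passback, is simple enough to sketch. Here is a minimal illustration in Python; the badge IDs, event format, and function name are all hypothetical, not from any real access-control system:

```python
from collections import defaultdict

def find_passback_violations(events):
    """events: list of (badge_id, action) tuples in time order,
    where action is 'in' or 'out'. Returns badge IDs that badge
    'in' twice without badging 'out' in between -- a possible
    sign of a cloned or shared badge."""
    state = defaultdict(lambda: "out")  # every badge starts outside
    violations = []
    for badge_id, action in events:
        if action == "in" and state[badge_id] == "in":
            violations.append(badge_id)  # second entry without an exit
        state[badge_id] = action
    return violations

log = [("B100", "in"), ("B100", "in"),   # duplicate entry
       ("B200", "in"), ("B200", "out")]  # normal in/out pair
print(find_passback_violations(log))     # ['B100']
```

A real deployment would of course correlate this with door zones and shift schedules, per the other bullets, but even this toy check catches the duplicated badge.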

The broadcast did highlight some safeguards that are good in principle but so easily defeated as to offer little value:

  • PIN (easily observed, since keypads emit tones and are not shielded)
  • Picture (every movie ever made shows how easily this is faked)
  • The badge itself (an old badge could be re-programmed as a duplicate of one with higher-ranking access or a different security zone)

The bottom line is that organizations, especially those tasked with the safety of human life, must have both primary and secondary controls in place. Hopefully the officials' remarks about this being a minor risk are based on security assessments that include the considerations above (and perhaps more).

Article:
Hundreds of ID badges that let airport workers roam the nation’s busiest hub have been stolen or lost in the last two years, an NBC News investigation has found.

While experts say the missing tags are a source of concern because they could fall into the wrong hands, officials at Hartsfield-Jackson Atlanta International Airport insist they don’t pose “a significant security threat.”

via Hundreds of Security Badges Missing From Atlanta Airport – NBC News.com.

Also, thanks to the new news aggregator (a competitor to AllTops), Inside on Security, for the clean new interface.

Best,

James


Moving forward: Who cared about encrypted phone calls to begin with…The Great SIM Heist


TOP-SECRET GCHQ documents reveal that the intelligence agencies accessed the email and Facebook accounts of engineers and other employees of major telecom corporations and SIM card manufacturers in an effort to secretly obtain information that could give them access to millions of encryption keys.

-The Great SIM Heist: How Spies Stole the Keys to the Encryption Castle.

This news made a number of people upset, but after studying it for several weeks and considering the macro effects on regular end users and corporations, I have reached a contrarian conclusion in my analysis.

Who cared?  Nobody (enough)

Sure, the implications are published and known, but who ever considered their cell phone an encrypted and secure mobile device? I don't think any consumer ever had that feeling, and most professionals who WANT security in their communications use special precautions, such as the Black Phone.

So, if nobody expected it, nobody demanded it, and the feature primarily existed to support billing, then what SHOULD happen moving forward?

  • The primary lesson here is that our assumptions must be revisited, challenged, valued, and addressed at the base level of the service providers
  • Second, businesses that depend on such safeguards (if, for instance, they ever relied on encrypted mobile communications) must pay for them

I would be interested in others' points of view on the lessons going forward. I have spent a good deal of time coordinating with leaders in this space and believe we can make a difference if we drop the assumptions and hopes, and focus on actually effective activities.

Helpful links on the Black Phone by SGP:

Mapping the Startup Maturity Framework to flexible information security fundamentals

After more than a decade of working with startups and private equity, and the last 5 years of deep Big 4 client service acting in different executive roles (CISO, CIO advisor, Board of Directors support), I am certain there is both a need for, and a lack of implementation of, adapted information security that reflects the size, maturity, and capabilities of the business. This applies independently to the product and to the enterprise as a whole. To that end, I have begun building models of activities to match each level of maturity, to try to bring clarity or at least a set of guidelines.

As I share with my clients … in some cases a founder is deciding between EATING and NOT. So every function and feature, including security habits, must contribute to the current needs!

I have begun working with several partners and venture capital firms on this model, but wanted to share a nice post that highlights some very informative ‘Patterns in Hyper-growth Organizations‘ and what needs to be considered (employee type, tools, etc.). Please check it out; I look forward to working with the community on these models.

A snippet on her approach and great details:

We’re going to look at the framework for growth. The goal is to innovate on that growth. In terms of methods, the companies I’ve explored are high-growth, technology-driven and venture-backed organizations. They experience growth and hyper-growth (doubling in size in under 9 months) frequently due to network effects, taking on investment capital, and tapping into a global customer base.

Every company hits organizational break-points. I’ve seen these happening at the following organizational sizes:

via Mapping the Startup Maturity Framework | Likes & Launch.

Big Data is in early maturity stages, and could learn greatly from Infosec :re: Google Flu Trend failure

The concept of analysing large data sets, crossing data sets, and seeking the emergence of new insights and better clarity is the constant pursuit of Big Data. Given the volume of data being produced by people and computing systems, stored, and now available for analysis, there are many possible applications that have not yet been designed.

The challenge with any new 'science' is that the path from concept to application is not always a straight line, or a line that ends where you were hoping. For businesses using this technology, as with information security, it requires an understanding of its possibilities and weaknesses. False positives and exaggerations were a problem in information security's past, and now the problem seems almost understated.

An article from Harvard Business Review details how the Google Flu Trends project was wrong in 100 out of 108 comparable periods. The article is worth a read, but I wanted to highlight two sections below as they relate to business leadership.

The quote picks up where the author is speaking about the problem of the model:

“The first sign of trouble emerged in 2009, shortly after GFT launched, when it completely missed the swine flu pandemic… it’s been wrong since August 2011. The Science article further points out that a simplistic forecasting model—a model as basic as one that predicts the temperature by looking at recent-past temperatures—would have forecasted flu better than GFT.

So in this analysis both the model and the Big Data source were inaccurate. There are many cases where such events occur; if you have ever followed the financial markets and their predictions, you have seen that they are more often wrong than right. In fact, it is a psychological flaw that we as humans zero in not on the predictions that were wrong, but on those that were right. This is a risky proposition in anything, and it is important for us in business to focus on the causes of such weakness and not be distracted by false positives or convenient answers.
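The "simplistic forecasting model" the Science article refers to is what forecasters call a persistence baseline: predict the next value as simply the last observed value. A minimal sketch, using made-up flu numbers (the real GFT and CDC series are not reproduced here):

```python
# Persistence baseline: forecast each point as the previous observation,
# then score with mean absolute error (MAE). Data values are illustrative.
def persistence_forecast(series):
    """Prediction for series[1:] is just series[:-1]."""
    return series[:-1]

def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

flu_visits = [2.1, 2.4, 2.3, 3.0, 2.8, 2.9]  # hypothetical % of doctor visits
baseline = persistence_forecast(flu_visits)
print(round(mean_absolute_error(flu_visits[1:], baseline), 3))  # 0.28
```

The point of the Science critique is that any candidate model, GFT included, should at minimum beat this trivial baseline on held-out periods before its predictions are trusted.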

The article follows up the above conclusion with this statement relating to the result:

“In fact, GFT’s poor track record is hardly a secret to big data and GFT followers like me, and it points to a little bit of a big problem in the big data business that many of us have been discussing: Data validity is being consistently overstated. As the Harvard researchers warn: “The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.”

The quality of the data is held to be at fault here, and I would challenge that…

The analogy is to information security, where false positives and similar problems were awful in the beginning and have become much better over time. The key data inputs and analyses within information security come from sources that are commonly uncontrolled and certainly not the most reliable for scientific analysis. We live in a dirty (data) world, where systems behave uniquely depending on the person interfacing with them.
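Why do false positives dominate so badly in that dirty-data world? The base-rate effect: when real attacks are rare, even a seemingly accurate detector produces mostly false alerts. A quick back-of-the-envelope sketch, with illustrative rates of my own choosing:

```python
# Bayes' rule applied to alerting: P(actual attack | alert).
# All rates below are illustrative assumptions, not measured figures.
def alert_precision(attack_rate, tpr, fpr):
    """attack_rate: fraction of events that are real attacks;
    tpr: true-positive rate; fpr: false-positive rate."""
    true_alerts = attack_rate * tpr          # attacks that fire an alert
    false_alerts = (1 - attack_rate) * fpr   # benign events that fire anyway
    return true_alerts / (true_alerts + false_alerts)

# A detector that catches 99% of attacks and false-alarms on only 1%
# of benign traffic, when 1 event in 10,000 is a real attack:
print(round(alert_precision(0.0001, 0.99, 0.01), 4))  # 0.0098
```

Under these assumptions, barely 1% of alerts correspond to real attacks, which is exactly the tolerance-building problem the paragraph above describes.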

We must continue to develop tolerances in our big data analyses and in the systems we use to seek benefit from them. This must be balanced with criticism, to ensure that the sources and results are true and not an anomaly.

Of course, the counter-argument could be: if the recommendation is to learn from information security because it has had to live in a dirty-data world, should information security instead be focusing on creating “instruments designed to produce valid and reliable data amenable for scientific analysis”? Has this already occurred? At every system component?

A grand adventure,

James