Tag Archives: ciso

Russians used non-public exploits to hack governments; Debunking: skill vs. budget

[Image: blind men and the elephant]

Organizations being hacked is not always the result of a superior adversary; more often than not (I think the figure is closer to 85% defender mistakes vs. 15% “very skilled” attackers) it is the result of poor defenses. The recent Russian hacking headlines against the White House website (note that GAO rated MOST Federal agencies as failing with regard to their information security postures) described the attackers as skilled because they used not-yet-disclosed vulnerabilities. That is a generous leap in conclusion.

Their sophistication is not the deciding factor here; their budget to buy such vulnerabilities on the open market is. These are readily available, and a successful attack could be orchestrated for less than $10k. According to public sources, even the very expensive vulnerabilities cost around $100k – easily within the reach of any financed attack group.

As we enter the week of RSA, and the slew of discoveries likely to be released this week, let’s be pragmatic about their impact and the defender’s role.

They’ve determined that APT28, a politically-motivated Russian hacking group, used unpatched exploits in Flash Player and Windows in a series of assaults against a “specific foreign government organization” on April 13th. Patches for both flaws are either ready or on the way, but the vulnerabilities reinforce beliefs that APT28 is very skilled — less experienced groups would use off-the-shelf code.

via Russians are using undiscovered exploits to hack governments.

See you at RSA!

James @jdeluccia

1,600 Security Badges Missing From ATL Airport in 2 yr period – NBC News

While not the complicated or strategic type of topic I would normally highlight, this bit of news is from my home airport and is personally meaningful.

Basically, the report shows that 1,600 badges were lost or stolen over a 2-year period. This seems like a big number (2.6%), but badging is a control that should be backed by secondary supportive controls (not highlighted in the broadcast), such as the following (a rough analytics sketch follows the list):

  • Key card access review logs to prevent duplicate entries (i.e., same person cannot badge in 2x)
  • Analytics on badge entries against the work shifts of the person assigned
  • Access to areas not zoned for that worker
  • Termination of employees who do not report a lost or missing badge within 12 hours
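
As a rough illustration of the second and third bullets above, here is a minimal analytics sketch. The event format, badge IDs, and zone assignments are hypothetical, purely for illustration:

```python
# A minimal sketch of badge-log analytics, assuming a hypothetical event format:
# (badge_id, zone, timestamp, direction). It flags two of the anomalies listed
# above: the same badge entering twice without exiting, and entry to a zone the
# badge holder is not assigned to.
from collections import defaultdict

badge_zones = {"B-1001": {"ramp", "concourse-A"}}        # hypothetical assignments
events = [
    ("B-1001", "ramp", "2015-03-01T06:55", "in"),
    ("B-1001", "ramp", "2015-03-01T07:05", "in"),        # double entry, no exit
    ("B-1001", "fuel-farm", "2015-03-01T07:40", "in"),   # zone not assigned
]

inside = defaultdict(bool)
for badge, zone, ts, direction in events:
    if direction == "in":
        if inside[badge]:
            print(f"ALERT duplicate entry: {badge} at {zone} {ts}")
        if zone not in badge_zones.get(badge, set()):
            print(f"ALERT unassigned zone: {badge} at {zone} {ts}")
        inside[badge] = True
    else:
        inside[badge] = False
```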

There are safeguards highlighted in the broadcast that are good, but easily defeated to the point of having little value, including:

  • PIN (easily observed because of the keypad tones and lack of covering)
  • Picture (every movie ever made shows how easily this is defeated)
  • An old badge could be re-programmed as a duplicate of a badge with higher-ranking or alternate security-zone access

Bottom line: organizations, especially those tasked with the safety of human life, must have both the primary and secondary controls in place. Hopefully the officials’ characterization of this as a minor risk is based on security assessments that include the considerations above (and perhaps more).

Article:
Hundreds of ID badges that let airport workers roam the nation’s busiest hub have been stolen or lost in the last two years, an NBC News investigation has found.

While experts say the missing tags are a source of concern because they could fall into the wrong hands, officials at Hartsfield-Jackson Atlanta International Airport insist they don’t pose “a significant security threat.”

via Hundreds of Security Badges Missing From Atlanta Airport – NBC News.com.

Also, thanks to the new news aggregator (a competitor to AllTop) Inside Security for the clean new interface.

Best,

James


Moving forward: Who cared about encrypted phone calls to begin with…The Great SIM Heist

[Image: key slide]

TOP-SECRET GCHQ documents reveal that the intelligence agencies accessed the email and Facebook accounts of engineers and other employees of major telecom corporations and SIM card manufacturers in an effort to secretly obtain information that could give them access to millions of encryption keys.

-The Great SIM Heist: How Spies Stole the Keys to the Encryption Castle.

This news made a number of people upset, but after studying it for several weeks and considering the macro effects on regular end users and corporations, I have reached a contrarian point in my analysis.

Who cared?  Nobody (enough)

Sure, the implications are published and known, but who ever considered their cell phone an encrypted and secure mobile device? I don’t think any consumer ever had that feeling, and most professionals who WANT security in their communications take special precautions – such as the Black Phone.

So, if nobody expected it or demanded it, and the feature was primarily used to support billing, then what SHOULD happen moving forward?

  • The primary lesson here is that our assumptions must be revisited, challenged, valued, and addressed at the base level of service providers
  • Second, businesses that depend on such safeguards (if they ever did, for instance, on encrypted mobile communications) must pay for them

I would be interested in others’ points of view on the lessons going forward. I have spent a good deal of time coordinating with leaders in this space and believe we can make a difference if we drop the assumptions and hopes, and focus on actually effective activities.

Helpful links on the Black Phone by SGP:

Mapping the Startup Maturity Framework to flexible information security fundamentals

[Image: Mappings]

After over a decade of working with startups and private equity, and over the last 5 years of deep Big 4 client services acting in different executive roles (CISO, CIO advisor, Board of Directors support), I am certain there is both a need for, and a lack of implementation of, adapted information security that reflects the size, maturity, and capabilities of the business. This applies independently to the product and to the enterprise as a whole. To that end, I have begun building models of activities to match each level of maturity, to try and bring clarity or at least a set of guidelines.

As I share with my clients … in some cases a founder is deciding between EATING and NOT. So every function and feature, including security habits, must contribute to the current needs!

I have begun working with several partners and venture capital firms on this model, but wanted to share a nice post that highlights some very informative ‘Patterns in Hyper-growth Organizations‘ and what needs to be considered (employee type, tools, etc.). Please check it out; I look forward to working with the community on these models.

A snippet on her approach and great details:

We’re going to look at the framework for growth. The goal is to innovate on that growth. In terms of methods, the companies I’ve explored are high-growth, technology-driven and venture-backed organizations. They experience growth and hyper-growth (doubling in size in under 9 months) frequently due to network effects, taking on investment capital, and tapping into a global customer base.

Every company hits organizational break-points. I’ve seen these happening at the following organizational sizes:

via Mapping the Startup Maturity Framework | Likes & Launch.

Big Data is in its early maturity stages, and could learn greatly from Infosec re: the Google Flu Trends failure

The concept of analysing large data sets, crossing data sets, and seeking the emergence of new insights and better clarity is the constant pursuit of Big Data. Given the volume of data being produced by people and computing systems, stored, and ultimately now available for analysis, there are many possible applications that have not yet been designed.

The challenge with any new 'science' is that the concept-to-application process is not always a straight line, or a line that ends where you were hoping. The implications for businesses using this technology, as with the use of Information Security, require an understanding of its possibilities and weaknesses. False positives and exaggerations were a problem in information security's past, and now the problem seems almost understated.

An article from Harvard Business Review details how the Google Flu Trends (GFT) project failed in 100 out of 108 comparable periods. The article is worth a read, but I wanted to highlight two sections below as they relate to business leadership.

The quote picks up where the author is speaking about the problem with the model:

“The first sign of trouble emerged in 2009, shortly after GFT launched, when it completely missed the swine flu pandemic… it’s been wrong since August 2011. The Science article further points out that a simplistic forecasting model—a model as basic as one that predicts the temperature by looking at recent-past temperatures—would have forecasted flu better than GFT.

So in this analysis, the model and the Big Data source were inaccurate. There are many cases where such events occur; if you have ever followed the financial markets and their predictions, you see they are more often wrong than right. In fact, it is a psychological flaw (habit) that we as humans do not zero in on the times that were predicted wrong, but on those that were right. This is a risky proposition in anything, but it is important for us in business to focus on the causes of such weakness and not be distracted by false positives or convenient answers.
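
To make concrete what the Science article means by a simplistic “recent-past” baseline, here is a minimal sketch of a persistence forecast compared against an over-shooting model. The numbers are invented for illustration; they are not CDC or GFT data:

```python
# A naive "persistence" baseline: predict this period's flu activity as last
# period's observed value, then compare error against a hypothetical model that
# systematically over-estimates (as GFT reportedly did).
def persistence_forecast(history):
    """Forecast each period as the previous period's observed value."""
    return history[:-1]                     # predictions for periods 1..n-1

def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)

observed = [2.1, 2.4, 3.0, 3.8, 4.1, 3.6, 2.9, 2.2]    # hypothetical ILI %
model_estimates = [2.5, 3.1, 4.2, 5.6, 6.0, 5.1, 4.0]  # hypothetical over-estimates

baseline = persistence_forecast(observed)
print("baseline MAE:", round(mean_absolute_error(observed[1:], baseline), 2))
print("model MAE:   ", round(mean_absolute_error(observed[1:], model_estimates), 2))
```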

The article follows up the above conclusion with this statement relating to the result:

“In fact, GFT’s poor track record is hardly a secret to big data and GFT followers like me, and it points to a little bit of a big problem in the big data business that many of us have been discussing: Data validity is being consistently overstated. As the Harvard researchers warn: “The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.”

The quality of the data is held up here as the fault, and I would challenge that ..

The analogy is from information security, where false positives and similar problems were awful in the beginning and have become much better over time. The key data inputs and the analysis within information security come from sources that are commonly uncontrolled and certainly not the most reliable for scientific analysis. We live in a (data) dirty world, where systems behave differently for each person interfacing with them.

We must continue to develop tolerances in our analysis within big data and in the systems we use to seek benefit from it. This must be balanced with criticism, to ensure that the sources and results are true and not an anomaly.

Of course, the counter argument .. could be: if the recommendation is to learn from information security as it has had to live in a dirty data world, should information security instead be focusing on creating “instruments designed to produce valid and reliable data amenable for scientific analysis”? Has this already occurred? At every system component?

A grand adventure,

James


How to determine how much money to spend on security…

A question that many organizations struggle with is how much is appropriate to spend, per user per year, on information security. While balancing security, privacy, usability, profitability, compliance, and sustainability is an art, organizations have a new data point to consider.

Balancing – information security and compliance operations

The ideal approach that businesses take must always be based on internal and external factors that are weighed against the risks to their assets (assets in this case being generally inclusive of customers, staff, technology, data, and the physical environment). An annual review identifying and quantifying the importance of these assets is a key regular exercise with product leadership, and then an analysis of the factors that influence those assets can be completed.

Internal and external factors span a number of possibilities, but the key ones that rise to importance for a business typically include:

  1. Contractual commitments to customers, partners, vendors, and operating-region governments (regulation)
  2. Market demands (activities necessary to match the market expectations to be competitive)

Based upon the quantitative analysis above, at both the aggregate and the distributed level, safeguards and practices may be deployed, adjusted, and removed. Understanding the economic impact of the assets and of the tributary assets and business functions that enable the business to deliver services and products to market allows for a deeper analysis. I find the rate of these adjustments depends on the industry, the product cycle, and operating events. At the most relaxed cadence, these would happen over a three-year cycle with a minor analysis conducted annually across the business.
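
As a rough illustration of the quantitative side of that analysis (not a prescribed methodology; the customer count, breach cost, likelihood, and budget below are hypothetical), a simple annualized-loss-expectancy style check can anchor the per-user spend discussion:

```python
# A minimal sketch of a quantitative spend check using hypothetical figures.
# ALE = single-loss expectancy (SLE) x annualized rate of occurrence (ARO).
users = 250_000                        # hypothetical customer count
breach_cost_per_user = 150             # within the $90-$190 survey range cited below
aro = 0.05                             # hypothetical: one breach expected every ~20 years

sle = users * breach_cost_per_user     # single-loss expectancy for one breach
ale = sle * aro                        # annualized loss expectancy

annual_security_budget = 1_200_000     # hypothetical current spend
print(f"ALE: ${ale:,.0f} vs. budget: ${annual_security_budget:,.0f}")
print(f"Budget per user per year: ${annual_security_budget / users:.2f}")
```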

Mature organizations would continue a cycle of improvement (note: improvement does not mean more money or more security/regulation, but improvement based on the internal and external factors, and I certainly see it ebbing and flowing).

Court settlement that impacts the analysis and balance for information security & compliance:

Organizations historically had to rely on surveys and tea-leaf readings of financial reports in which the costs of data breaches and FTC penalties were detailed. These collections of figures put the cost of a data breach anywhere between $90 and $190 per user. Depending on the need, other organizations would baseline cost figures against peers (i.e., do we all have the same number of security staff; what percentage of revenue is spent, etc.).

As a result of a recent court case, I envision the figures below being joined to the above analysis. It is important to consider a few factors here:

  1. The data was considered sensitive (which could be easily argued across general Personally Identifiable Information or PII)
  2. There was a commitment to secure the data by the provider (a common statement in many businesses today)
  3. The customers paid a fee to be with the service provider (premiums, annual credit card fees, etc. all seem very similar to this case)
  4. Those that had damages and those that did not were included within the settlement

The details of the court case:

The parties' dispute dates back to December 2010, when Curry and Moore sued AvMed in the wake of the 2009 theft of two unencrypted laptops containing the names, health information and Social Security numbers of as many as 1.2 million AvMed members.

The plaintiffs alleged the company's failure to implement and follow “basic security procedures” led to plaintiffs' sensitive information falling “in the hands of thieves.” – Law360

A settlement at the end of 2013 provides a fresh new input:

“Class members who bought health insurance from AvMed can make claims from the settlement fund for $10 for each year they bought insurance, up to a $30 cap, according to the motion. Those who suffered identity theft will be able to make claims to recover their losses.”

For businesses conducting their regular analysis, this settlement is important because of the math applied here:

$10 x (# of years as a client) x (# of clients) = damages .. PLUS all of the upgrades required and the actual damages impacting the customers.
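
A quick worked example of that formula (the member counts and tenure mix below are hypothetical, not AvMed's actual figures):

```python
# Hypothetical illustration of the settlement math above; not AvMed's numbers.
PER_YEAR = 10        # $10 per year of purchased coverage
CAP = 30             # capped at $30 per class member

# Hypothetical tenure distribution: (number of members, years as a client)
members_by_tenure = [(400_000, 1), (300_000, 2), (500_000, 4)]

exposure = sum(count * min(PER_YEAR * years, CAP)
               for count, years in members_by_tenure)
print(f"Base settlement exposure: ${exposure:,}")
# Identity-theft claims and required security upgrades come on top of this figure.
```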

Finally

Businesses should update their financial analysis with the figures and situational factors of this court case. In some cases this will reduce budgets; in others, where service providers have similar models and data, better security will be needed.

As always, the key is regular analysis against the internal and external factors, to be nimble and adaptive in an ever-changing environment. While balancing these external factors, extra vigilance is needed to ensure the internal asset needs are being satisfied and remain correct (as businesses shift to cloud service providers and through partnering, the asset assumptions change .. frequently .. and without any TPS memo).

Best,

James


How to do DevOps – with security not as a bottleneck

As on any good morning, I read a nice article written by George Hulme that got me thinking on this topic; that led to a discussion with colleagues in the Atlanta office, and resulted in me drawing crazy diagrams on my iPad trying to explain sequencing. Below I share my initial thoughts and diagrams for consumption and critique, to improve the idea.

Problem Statement: Is security a bottleneck to development, and likely more so in a continuous delivery culture?

Traditional development cycles look like this …

  1. A massive amount of innovation and effort occurs by developers
  2. Once everything works to spec, it is sent to security for release to Ops
  3. In most cases security “fights” the release, and in a few cases fails it, asking developers to patch (patching itself implies a fix rather than a real solution), and then
  4. a final push through security to Ops

There are many problems here, but to tackle the first myth: security is a bottleneck here because that is how it is structurally placed in the development cycle.

On a time basis (man-days; duration; level of work), security is barely even present on the product develop-to-deploy timeline – this is akin to thinking that man has been on Earth for a long time, a mistake when taken relative to the age of the planet .. but I digress.

Solution: In a continuous development environment – iterate security cycles

As Mr. Hulme highlighted in the article, integrating information security with development through automation will certainly help scale #infosec tasks, but there is more: integrate through rapid iterations and a feedback loop (note the attempt at coloring the feedback from infosec & ops in the diagram, joined and consistent with the in-flight development areas).
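
As a rough sketch of what “security as an iterated step rather than a final gate” can look like, here is a minimal example. The stage names and check functions are hypothetical placeholders, not any specific vendor's pipeline or toolchain:

```python
# A minimal sketch of security checks run inside each delivery iteration rather
# than once at the end. The checks are hypothetical stand-ins for whatever static
# analysis, dependency audit, or secrets scanning a team actually uses.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SecurityCheck:
    name: str
    run: Callable[[], bool]       # returns True when the check passes
    blocking: bool = False        # only a small set of checks should block a deploy

def run_iteration_checks(checks: List[SecurityCheck]) -> bool:
    """Run every check each iteration; feed failures back, block only on blockers."""
    deploy_ok = True
    for check in checks:
        passed = check.run()
        print(f"{check.name}: {'pass' if passed else 'FAIL'}")
        if not passed and check.blocking:
            deploy_ok = False     # feedback reaches in-flight dev work either way
    return deploy_ok

# Example wiring with placeholder checks (always-pass stubs for illustration).
checks = [
    SecurityCheck("static-analysis", lambda: True),
    SecurityCheck("dependency-audit", lambda: True, blocking=True),
    SecurityCheck("secrets-scan", lambda: True, blocking=True),
]
print("deploy allowed:", run_iteration_checks(checks))
```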

While high level, I find as I work with leadership within organizations that clearly communicating and breaking out the benefits to their security posture, their ability to hold market launch dates, and the clarity gained for technology attestations is equally as important as the code base itself. Awareness and comprehension of the heavy work being done by developers, security, Ops, and compliance audit teams allows leadership to provide appropriate funding, resources, governance, monitoring, and timelines (time is always the greatest gift).

How have I developed my viewpoint?

I have been spending an increasing amount of time these past few years working with service provider organizations and the global F100. The common thread I am finding is the accelerating dependency on third-party providers (Cloud, BPO, integrated operators, etc.), and in the process I have had an interesting role with continuous delivery and high-deploy partners. Specifically, my teams have audited and implemented global security compliance programs for those running these high-deploy environments, and have sought to establish a level of assurance (from the public accounting audit perspective) and security (actually being secure, with better sustainable operations).

Best,

James