Product Development – Battlefield Leadership Series: WN60 – German Defensive Positions at Omaha Beach

Leading up to the invasion of Normandy (reading a book on the topic is a two-week, perspective-shifting emotional journey), the leaders of each side had differing ideas about when an invasion should and would occur. The Allies concluded that low to mid tide was best, while the Germans believed the Allies would prefer to invade at high tide.

The Germans built obstacles along the Omaha Beach shore and placed mines throughout the beach that would be hidden at high tide. With gun emplacements along the cliffs, the Germans were confident this arrangement would be ideal for protecting their positions. After preparations were finished, the Germans had dozens of gun emplacements providing criss-crossing machine-gun fire over the entirety of Omaha Beach. As history shows, the Allied casualty rate indicates exactly how effective these emplacements were.

In preparing for the attack, the Allies took the opposite perspective. Landing at low tide meant the rising tide would later provide easy exit pathways. Low tide also allowed the Allies to see the obstacles, carefully avoid them, and destroy them. During the battle, the removal of obstacles allowed a continued, steady landing of forces after the initial invasion.

The Allies won; they took Omaha Beach. They were able to exploit gaps in the German defensive strategy through carefully planned action.

Business Reflections…

In a free-market world, there is always someone who sees an opportunity that others do not. The advantages of each opportunity are weighed and measured, and the result can be great or disastrous. During the invasion of Normandy, German fire required the infantry on the ground to adjust from the original plan (most Allied troops landed in the wrong zones, without the equipment they needed, and the leadership structure was fractured by the loss of so many soldiers at the landing). This ability to go off the original plan in order to find success in the heat of battle is crucial to businesses and their teams.

Leaders are not always on the ground and cannot be effective if their teams have to seek out answers before taking initiative. The successful Allies learned from prior landings to implement the following (all applicable to businesses as well):

  1. Training, a lot of training. The troops were trained clearly, relentlessly, and aggressively. The training included hands-on challenges with similar landscape and environmental hurdles.
  2. Building culture. Teams, squads, packs, etc. were made up of individuals who had, in most cases, been grouped together since enlisting. These groupings created deep cohesiveness and inspired troops to push themselves and their fellow soldiers further than they thought possible (the desire to ‘stand strong in front of their comrades’).
  3. Unit command – localized leadership and decision making allowed the teams to respond, re-group, and deploy without micro-managed leadership (the Germans required higher authority to engage and move assets, and were thus too late to resist the invasion force effectively).

Leaders must consider how they are embracing the above, and whether they have made themselves leaders rather than micro-managers with teams executing checklists.


 

What is Battlefield Leadership and what is this series about … 

This is the second paper in this series. As part of my pursuit to learn and grow, I sought out the excellent management training team at Battlefield Leadership. I am professionally leveraging this across multi-million dollar projects I am overseeing (currently I am the lead executive building global compliance and security programs, specifically in the online services / cloud leader space). Personally, I am bringing these lessons to bear in my pursuit of crossing the chasm. Too often I see brilliant technical individuals fail to communicate to very smart business leaders and to the common person on the street. My new book, How Not to Be Hacked, seeks to be a first step in bringing deep information security practices beyond the technologist.

Most exciting, the Battlefield group placed this training in Normandy, France. This allowed senior executives to be trained in a setting where serious decisions were made by both sides, and each provided a lesson. This series represents my notes (those I could take down) and takeaways. I share them to continue the conversation with the great individuals I met, and with the larger community.

Kind regards,

James


Innovating and Penetrating the Market – Battlefield Leadership Series – Lessons and Thoughts

Longues-sur-Mer

At this location on the coast of Normandy you can see the immense naval guns set up to attack oncoming ships in World War II. The Germans expended great resources and relied heavily upon these guns in their defensive strategy. Unfortunately for the Germans, their treatment of the workers and locals, their sheer lack of human intelligence, and the exposure that came with building such vast emplacements proved their downfall.

The Allies often received intelligence on the exact positions of German construction, provided by those building and living in the area. In one case, a local farm boy who was blind counted each step precisely as he passed the works and supplied exact locations through the French Resistance to Allied intelligence networks.

The result was a gap in the German defensive strategy, a waste of resources, and ultimately, a failure to defend the coast.

Business Reflections: Innovating and Penetrating the market…

  • How are you establishing a product development strategy and running your business as a whole?
  • Are there defensible attributes that you deem critical, and how could they be routed or bypassed?

Practical example: in the information security and intellectual property sector, there are very real threats, and running a secure business requires constant new methods of defense. How have you reevaluated these based on internal shifts in your business and the known threats in the market itself? How does this analysis compare to prior years, and how has the effectiveness of your defenses been proven?

From a product innovation perspective: are you developing features from both the highest and the lowest levels? What high-impact, low-effort development work is underway, and what could be added? Product innovation requires views on the long and the short run. Too often we create complexity because we are able to handle complexity, when sometimes the user really only needs something less complex.
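One way to make the high-impact, low-effort question concrete is to score the backlog and rank it by impact-to-effort ratio. A minimal sketch, with entirely hypothetical feature names and scores:

```python
# Hypothetical backlog items scored for impact and development effort
# (names and scores are illustrative only).
features = [
    {"name": "bulk export",       "impact": 8, "effort": 2},
    {"name": "custom dashboards", "impact": 6, "effort": 9},
    {"name": "saved searches",    "impact": 5, "effort": 1},
]

# High-impact, low-effort work floats to the top of the ranking.
for f in sorted(features, key=lambda f: f["impact"] / f["effort"], reverse=True):
    print(f'{f["name"]}: impact/effort = {f["impact"] / f["effort"]:.1f}')
```

The scoring model can be as simple or as rich as the team needs; the point is to make the ranking explicit rather than implicit.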

Leadership requires action:

Simply acknowledging the risks and accepting the situation does not prevent disastrous outcomes.


 

What is Battlefield Leadership and what is this series about … 

As part of my pursuit to learn and grow, I sought out the excellent management training team at Battlefield Leadership. I am professionally leveraging this across multi-million dollar projects I am overseeing (currently I am the lead executive building global compliance and security programs, specifically in the online services / cloud leader space). Personally, I am bringing these lessons to bear in my pursuit of crossing the chasm. Too often I see brilliant technical individuals fail to communicate to very smart business leaders and to the common person on the street. My new book, How Not to Be Hacked, seeks to be a first step in bringing deep information security practices beyond the technologist.

Most exciting, the Battlefield group placed this training in Normandy, France. This allowed senior executives to be trained in a setting where serious decisions were made by both sides, and each provided a lesson. This series represents my notes (those I could take down) and takeaways. I share them to continue the conversation with the great individuals I met, and with the larger community.

Kind regards,

James

 

Review – Fmr. CIA Dir. Jim Woolsey warns of existential EMP threat to America

I have been studying First World worst-case scenarios where cyber and life intertwine, and was recommended to review this session. It is a panel discussion that included the former CIA Director discussing the threat of an EMP attack on U.S. national infrastructure.

Mr. Woolsey takes roughly the first 10 minutes to set the stage, and it is worth listening to in order to anchor why the NERC/FERC CIP standards, the Executive Order, and the betterment initiatives led by private industry are so important.

It is a bit extreme, and not something many ‘concern themselves’ with, but it is important to start translating what information security and cyber mean in a tangible fashion. Too often we deal only in probabilities and numbers and forget all else.

Fmr. CIA Dir. Jim Woolsey warns of existential EMP threat to America – YouTube.

Change all your passwords, now … it is that simple

There are many reasons to change passwords, and in most business settings passwords must be changed every 90 days. This usually applies to end users and rarely to system-to-system accounts. A recent vulnerability creates the possibility that any account used to access a system on the internet (specifically one using HTTPS with OpenSSL, but let's not complicate the clarion call here) is exposed and known by someone other than the owner.

By that very condition the password should be changed, and now.

So if you are a person reading this …

  1. Pull up your accounts and begin methodically changing each password to a fresh new one (this presumes the site you are updating has already fixed the vulnerability and has internally followed good practices, but let's presume the best scenario here)
  2. Add a note to your calendar 3–4 months from now to change the passwords again

If you run a technology environment that had a vulnerable OpenSSL installed, grab a cup of coffee and a sandwich, then…

  1. Begin the methodical (perimeter first, working your way in through the layers) and careful task of updating all certificates, credentials, and service accounts
  2. Write clear, compelling explanations of the need, value, and importance of this process for your users
  3. Force a password reset for every user account that accesses your services (a minimal sketch of steps 3 and 4 follows this list)
  4. Log out (invalidate) all app and browser cookie sessions (revoke tokens, etc.)
  5. Reissue your private keys and SSL certificates
  6. Review and examine your API and third-party connections to confirm these are updated, reset, and secured
  7. Add a bit of extra monitoring on the logs for a while
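As promised, here is a minimal sketch of steps 3 and 4, assuming a hypothetical user database and session backend (UserStore and SessionStore are stand-ins, not a real library API):

```python
# Force password resets and invalidate sessions after a suspected
# key/credential exposure such as Heartbleed.

class UserStore:
    def __init__(self, users):
        self.users = users  # {username: {"must_reset": bool}}

    def force_reset_all(self):
        for record in self.users.values():
            record["must_reset"] = True  # checked at next login

class SessionStore:
    def __init__(self, sessions):
        self.sessions = sessions  # {session_id: username}

    def revoke_all(self):
        self.sessions.clear()  # every app/cookie session becomes invalid

users = UserStore({"alice": {"must_reset": False}, "bob": {"must_reset": False}})
sessions = SessionStore({"s1": "alice", "s2": "bob"})

users.force_reset_all()  # step 3: force a password reset for every account
sessions.revoke_all()    # step 4: log everyone out

print(users.users)        # every account flagged for reset
print(sessions.sessions)  # {} – no live sessions remain
```

The same pattern applies whatever your real backend is: flag every credential as untrusted, then kill every live session so the flag is actually enforced.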

This is all the result of the Heartbleed.com disclosure, but let's not get technical here … these are good practices regardless, and now that the probability of exposure is above 'unlikely', it is a timely habit to re-embrace.

 

Stay safe,

 

James

Big Data is in its early maturity stages and could learn greatly from infosec, re: the Google Flu Trends failure

The concept of analyzing large data sets, crossing data sets, and seeking the emergence of new insights and better clarity is the constant pursuit of Big Data. Given the volume of data being produced by people and computing systems, stored, and now available for analysis, there are many possible applications that have not yet been designed.

The challenge with any new 'science' is that the path from concept to application is not always a straight line, or a line that ends where you were hoping. For businesses using this technology, as with information security, success requires an understanding of its possibilities and weaknesses. False positives and exaggerations were a problem in information security's past, and now the problem seems almost understated.

An article from Harvard Business Review details how the Google Flu Trends (GFT) project was wrong in 100 out of 108 comparable periods. The article is worth a read, but I want to highlight two sections below as they relate to business leadership.

The quote picks up where the author is describing the problem with the model:

“The first sign of trouble emerged in 2009, shortly after GFT launched, when it completely missed the swine flu pandemic… it’s been wrong since August 2011. The Science article further points out that a simplistic forecasting model—a model as basic as one that predicts the temperature by looking at recent-past temperatures—would have forecasted flu better than GFT.”

So in this analysis, the model and the Big Data source were inaccurate. There are many cases where such events occur; if you have ever followed the financial markets and their predictions, you will have seen them wrong more often than right. In fact, it is a psychological flaw that we as humans zero in not on the predictions that proved wrong, but on those that proved right. This is a risky habit in anything, and it is important for us in business to focus on the causes of such weakness and not be distracted by false positives or convenient answers.
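The article's point about a simplistic baseline is easy to demonstrate in code. Below is a toy sketch (all numbers are synthetic, not real flu or GFT data) comparing a persistence forecast, which predicts this period's value as last period's observed value, against a hypothetical model that systematically overshoots:

```python
# Toy comparison of a persistence baseline against an overshooting model.
observed = [10, 12, 11, 13, 15, 14, 16, 15]  # synthetic "true" weekly values
model    = [14, 18, 16, 20, 22, 21, 25, 23]  # hypothetical model output

actual        = observed[1:]    # weeks 2..n, what we try to predict
baseline_pred = observed[:-1]   # persistence: last week's observed value
model_pred    = model[1:]

def mae(predictions, truth):
    """Mean absolute error between predictions and true values."""
    return sum(abs(p - t) for p, t in zip(predictions, truth)) / len(truth)

print("persistence baseline MAE:", round(mae(baseline_pred, actual), 2))  # ~1.57
print("overshooting model MAE:  ", round(mae(model_pred, actual), 2))     # 7.0
```

If the sophisticated model cannot beat "tomorrow looks like today", the model, the data source, or both deserve scrutiny before business decisions rest on them.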

The article follows up the above conclusion with this statement relating to the result:

“In fact, GFT’s poor track record is hardly a secret to big data and GFT followers like me, and it points to a little bit of a big problem in the big data business that many of us have been discussing: Data validity is being consistently overstated. As the Harvard researchers warn: “The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.”

The quality of the data is blamed here as the fault, and I would challenge that …

The analogy comes from information security, where false positives and similar noisy trends were awful in the beginning and have become much better over time. The key data inputs and analysis within information security come from sources that are commonly uncontrolled and certainly not the most reliable for scientific analysis. We live in a dirty (data) world, where systems behave uniquely depending on the person interfacing with them.

We must continue to develop tolerances in our Big Data analysis and in the systems we use to seek benefit from it. This clearly must be balanced with criticism, to ensure that the sources and results are true and not an anomaly.

Of course, the counterargument could be: if the recommendation is to learn from information security because it has had to live in a dirty-data world, should information security instead be focusing on creating “instruments designed to produce valid and reliable data amenable for scientific analysis”? Has this already occurred? At every system component?

A grand adventure,

James

 

How to determine how much money to spend on security…

A question that many organizations struggle with is how much money is appropriate to spend on information security per user, per year. While balancing security, privacy, usability, profitability, compliance, and sustainability is an art, organizations now have a new data point to consider.

Balancing – information security and compliance operations

The ideal approach is always based on internal and external factors weighted against the risks to the business's assets (assets here generally include customers, staff, technology, data, and physical environment). An annual review identifying and quantifying the importance of these assets is a key exercise to run with product leadership; an analysis of the factors that influence those assets can then be completed.

Internal and external factors span a number of possibilities, but the key ones that rise to importance for a business typically include:

  1. Contractual commitments to customers, partners, vendors, and the governments of operating regions (regulation)
  2. Market demands (activities necessary to match the market expectations to be competitive)

In aggregate, and distributed based on the quantitative analysis above, safeguards and practices may be deployed, adjusted, and removed. Understanding the economic impact of the assets, and of the tributary assets and business functions that enable the business to deliver services and products to market, allows for a deeper analysis. I find the rate of these adjustments depends on the industry and product cycle, and is influenced by operating events. At the most relaxed cadence, they happen over a three-year cycle, with a minor analysis conducted annually across the business.
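One standard way to quantify this weighting (a common infosec technique, not something this post prescribes) is annualized loss expectancy: asset value × exposure factor × annualized rate of occurrence. A minimal sketch with hypothetical assets and estimates:

```python
# Hypothetical assets with an estimated value, the fraction of value lost
# in one incident (exposure factor), and expected incidents per year
# (annualized rate of occurrence, ARO). All figures are illustrative.
assets = [
    {"name": "customer data",  "value": 2_000_000, "exposure": 0.4, "aro": 0.1},
    {"name": "build pipeline", "value":   500_000, "exposure": 0.7, "aro": 0.3},
]

for a in assets:
    sle = a["value"] * a["exposure"]  # single loss expectancy
    ale = sle * a["aro"]              # annualized loss expectancy
    print(f'{a["name"]}: ALE = ${ale:,.0f} per year')
```

An ALE figure per asset gives the annual review a defensible starting point for where safeguard spending should concentrate, ebb, or flow.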

Mature organizations continue a cycle of improvement (note: improvement does not mean more money or more security and regulation; it is improvement relative to the internal and external factors, and I certainly see it ebbing and flowing).

A court settlement that impacts the analysis and balance for information security and compliance:

Organizations historically had to rely on surveys and tea-leaf readings of financial reports in which the costs of data breaches and FTC penalties were detailed. These collected figures put the cost of a data breach anywhere between $90 and $190 per user. Depending on the need, other organizations would baseline their cost figures against peers (i.e., do we all have the same number of security staff; what percentage of revenue is spent; etc.).

As a result of a recent court case, I expect the figures below to be joined into the above analysis. It is important to consider a few factors here:

  1. The data was considered sensitive (an argument that could easily extend to general Personally Identifiable Information, or PII)
  2. There was a commitment by the provider to secure the data (a common statement in many businesses today)
  3. The customers paid a fee to be with the service provider (premiums, annual credit card fees, etc. all seem very similar to this case)
  4. Both those who suffered damages and those who did not were included within the settlement

The details of the court case:

The parties' dispute dates back to December 2010, when Curry and Moore sued AvMed in the wake of the 2009 theft of two unencrypted laptops containing the names, health information and Social Security numbers of as many as 1.2 million AvMed members.

The plaintiffs alleged the company's failure to implement and follow “basic security procedures” led to plaintiffs' sensitive information falling “in the hands of thieves.” – Law360

A settlement at the end of 2013 provides a fresh new input:

“Class members who bought health insurance from AvMed can make claims from the settlement fund for $10 for each year they bought insurance, up to a $30 cap, according to the motion. Those who suffered identity theft will be able to make claims to recover their losses.”

For businesses conducting their regular analysis, this settlement is important for the math applied here:

$10 × (number of years as a client) × (number of clients) = damages … PLUS all of the required upgrades and the actual damages impacting the customers.
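As a rough model of the exposure under the settlement terms quoted above ($10 per year as a client, capped at $30), here is a minimal sketch; the client tenure data is entirely hypothetical:

```python
# Sketch of the settlement math: $10 per year as a client, capped at $30
# (three years), summed across the customer base.
RATE_PER_YEAR = 10
CAP = 30

client_tenure_years = [1, 2, 5, 3, 7]  # years each hypothetical client was enrolled

base_exposure = sum(min(years * RATE_PER_YEAR, CAP) for years in client_tenure_years)
print(f"baseline settlement exposure: ${base_exposure}")  # $120 for this toy base
# ...PLUS required security upgrades and actual damages to affected customers
```

Scaled to a membership the size of AvMed's, even the capped per-client figure becomes a material input to the security budget analysis.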

Finally

Businesses should update their financial analysis with the figures and situational factors of this court case. In some cases this will reduce budgets; in others, where service providers have similar models and data, it will justify the need for better security.

As always, the key is regular analysis against the internal and external factors, staying nimble and adaptive in an ever-changing environment. While balancing these external factors, extra vigilance is needed to ensure the internal asset needs are being satisfied and remain correct (as businesses shift to cloud service providers and partnering, the asset assumptions change … frequently … and without any TPS memo).

Best,

James

 

How to do DevOps – with security not as a bottleneck

As on any good morning, I read a nice article by George Hulme that got me thinking on this topic; that led to a discussion with colleagues in the Atlanta office, and resulted in me drawing crazy diagrams on my iPad trying to explain sequencing. Below I share my initial thoughts and diagrams for consumption and critique, to improve the idea.

Problem statement: Is security a bottleneck to development, and likely more so in a continuous delivery culture?

Traditional development cycles look like this …

  1. A massive amount of innovation and effort occurs among developers
  2. Once everything works to spec, it is sent to security for release to Ops
  3. In most cases security “fights” the release, and in a few cases fails it and asks developers to patch (patching itself implies a fix, not a real solution), and then
  4. a final push through security to Ops

There are many problems here, but to tackle the first myth: security is a bottleneck here because that is how it is structurally placed in the development cycle.

On a time basis (man-days; duration; level of work), security is barely even present on the product develop-to-deploy timeline – this is akin to thinking that man has been on Earth for a long time, a mistake when taken relative to the age of the planet … but I digress.
Solution – in a continuous delivery environment, iterate security cycles

As Mr. Hulme highlighted in the article, integrating information security with development through automation will certainly help scale #infosec tasks, but there is more: integrate through rapid iterations and feedback loops (note my attempt in the diagrams at coloring the feedback from infosec and ops so it joins and stays consistent with the in-flight development areas).
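A minimal sketch of the iteration idea, using hypothetical stand-in functions rather than any specific CI product: security checks run inside every delivery iteration and feed back to developers immediately, instead of acting as a single gate at the end.

```python
# Sketch: security checks run inside every delivery iteration instead of
# as a final gate. build/unit_tests/security_checks are hypothetical
# stand-ins for real automated pipeline stages.

def build(change):
    return f"artifact:{change}"

def unit_tests(artifact):
    return True  # stand-in for the functional test suite

def security_checks(artifact):
    # stand-in for static analysis, dependency audit, secret scanning, etc.
    return True

def iterate(changes):
    for change in changes:
        artifact = build(change)
        if unit_tests(artifact) and security_checks(artifact):
            print(f"{change}: deployed")
        else:
            # feedback goes straight to the in-flight development work
            print(f"{change}: returned to the developer this iteration")

iterate(["feature-a", "bugfix-b"])
```

Because security runs on every small change, a failed check costs one iteration rather than stalling an entire release at the end of the cycle.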

While high level, I find as I work with leadership within organizations that clearly communicating and breaking out the benefits to their security posture, their ability to hold market launch dates, and the clarity gained for technology attestations is equally as important as the code base itself. Awareness and comprehension of the heavy work being done by developers, security, Ops, and compliance/audit teams allows leadership to provide appropriate funding, resources, governance, monitoring, and timelines (time is always the greatest gift).

How have I developed my viewpoint?

I have been spending an increasing amount of time these past few years working with service provider organizations and global Fortune 100 companies. The common thread I am finding is the accelerating dependency on third-party providers (cloud, BPO, integrated operators, etc.), and in the process I have had an interesting role in continuous delivery and high-deploy-rate partnerships. Specifically, my teams have audited and implemented global security compliance programs for those running these high-deploy environments, seeking to establish both a level of assurance (from the public accounting audit perspective) and security (actually being secure, with better, sustainable operations).

Best,

James