
Continuous Improvement, Audit, and the Agile 2014 Conference .. My lessons


Agile 2014 Conference session

Every moment we are learning something new. The greatest challenge is to take advantage of this new information and do something substantial – something real – with it.

As an adventurer in the DevOps / Audit space, I have the privilege of evaluating the opportunities, risks, and future directions for many enterprises. The sophistication of these enterprises spans far and wide – from Fortune 20 companies and 700-person agile teams to small startups with teams of five. These companies have one thing in common: a desire to create a business partnership that accomplishes secure, privacy-minded, and compliant operations. To put it simply, these companies have the passion and rigor to withstand a Big 4 audit.

On Wednesday I spoke at the Agile 2014 conference with esteemed author and innovator Gene Kim. Our session was titled "Keeping the Auditor Away: DevOps Compliance Case Study." Attendees benefited from a 90-person open collaboration and sharing of ideas. A few points resonated with me.

On Leadership:
To lead a product development team requires skill beyond balancing the needs and output of the teams; it requires the talent of connecting development activities to the governance of the business at the highest control level. The ability to serve the customer is only half of the job description. The other half consists of considering internal business stakeholders (internal audit, marketing, information security, and compliance).

On Execution:

  • As soon as an efficient and effective process is identified, automate as much of it as possible
  • Set automated gates throughout the testing process against configurable standards
  • Leverage application gates with configurable standards to conduct repeatable, verifiable, and scalable operational testing
  • Operational testing must be complete and inline
  • Centrally manage versioning of configurations and deployments
  • The testing executed should reflect internal and external requirements: general security, information security, compliance, development, and audit-designed safeguards
  • The output of this testing and the automated gates should produce hard evidence that can be easily presented during audits (i.e., logs)
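The gate-and-evidence pattern in the bullets above can be sketched in miniature. This is a toy illustration, not any specific tool's API: the standard names, thresholds, and log format are assumptions invented for the example.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

# Configurable standards the gate enforces (illustrative thresholds).
STANDARDS = {
    "min_test_coverage": 0.80,
    "max_critical_vulns": 0,
    "config_version_pinned": True,
}

def evaluate_gate(results: dict) -> bool:
    """Compare a pipeline run against the standards and log hard evidence."""
    passed = (
        results["test_coverage"] >= STANDARDS["min_test_coverage"]
        and results["critical_vulns"] <= STANDARDS["max_critical_vulns"]
        and results["config_pinned"] == STANDARDS["config_version_pinned"]
    )
    # Structured log entry: the audit evidence the last bullet calls for.
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "standards": STANDARDS,
        "results": results,
        "gate_passed": passed,
    }))
    return passed

# Example: a build that meets the standards passes the gate.
ok = evaluate_gate({"test_coverage": 0.85, "critical_vulns": 0, "config_pinned": True})
```

Because the standards live in one configurable structure and every decision is logged with its inputs, the same gate is repeatable, verifiable, and produces evidence an auditor can inspect.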

Startups and enterprises alike have the opportunity to be more secure, deploy better product, and achieve a balance of controls and audit safeguards beyond those of traditional brick-and-mortar development shops. The basic attributes of success are highlighted above. Add top talent in development and integration with security, compliance, and marketing, and success is easily obtainable!

Thank you to everyone who attended and contributed. It was a truly outstanding experience and I look forward to continuing the collaboration. The slides from our presentation are available here.

To see the shared toolkit that is being developed for DevOps and Auditors – visit our shared public work in progress at http://bit.ly/DevOpsAudit.

A special thank you to Gene Kim and all those in the space who welcome everyone with a passion and desire to be a part of something great.


James DeLuccia



How do you decide what is Critical vs. Important – Battlefield Leadership series

The Difference Between Critical and Important

The understanding of self and team dynamics is paramount to success in the business world. The definition of success is 'the achievement of the general objective.' All too often individuals, teams, and companies lose focus and become distracted in the midst of action. Knowing what is important, being able to recognize a distraction, and refocusing resources on what is most critical are the best steps to success under fire.

Hillman Battery

Even today, a walk through Hillman Battery shows the defensive position of the Germans in the immediate path of the British infantry. The Allies' most critical task was to liberate Caen after the invasion, but the Allied (British) unit became distracted destroying a defensive obstacle and stalled for an entire day. Ultimately, the Allies were forced to repel German counterattacks along their flanks, which delayed the liberation of Caen until July.

If you are unaware of this part of D-Day, you can check out Stephen Ambrose’s book D-Day, which provides some rich details.

Business Reflections…

In business the correlation of 'team' and 'self' is critical. Oftentimes, important resources are lost when the team is disjointed. For example, time (our most valuable resource!) is wasted when you lose sight of the bigger picture. Breaking down the big picture and defining what is important to you and your team allows for the clear establishment and allocation of resources.

How does one avoid distractions? How can these be identified, measured, managed, and pushed off? Is the philosophy of saying ‘NO’ to everything but that which is the ultimate goal valuable? How does one position teams to understand the big picture and their critical objectives? Is the communication chain with choke points necessary, or can these be empowered within the teams?

  • Myself: The 'big picture' is being a parent, directly and in the presence of my daughter. My secondary task is racing, training, and writing to better myself and others.
  • At Ernst & Young: Our big picture is realizing Vision 2020, the creation of a Better Working World. My teams are constantly seeking to create the best security and compliance programs based on global standards, realized through the eyes of practitioners.
  • What are yours?


What is Battlefield Leadership and what is this series about … 

This is the fourth paper in this series. As part of my pursuit to learn and grow, I sought out the excellent management training team at Battlefield Leadership. I am professionally leveraging this across multi-million dollar projects I am overseeing (currently I am the lead executive building global compliance and security programs, specifically in the online services / cloud leader space). Personally, I am bringing these lessons to bear in my pursuits to cross the chasm. Too often I see brilliant technical individuals fail to communicate with very smart business leaders and with the common person on the street. My new book, How Not to Be Hacked, seeks to be a first step in bringing deep information security practices beyond the technologist.

Most exciting, the Battlefield group placed this training in Normandy, France. This allowed senior executives to be trained in a setting where serious decisions were made by both sides, each providing a lesson. This series represents my notes (those I could take down) and takeaways. I share them to continue the conversation with the great individuals I met, and with the larger community.

Kind regards,


Review – Fmr. CIA Dir. Jim Woolsey warns of existential EMP threat to America

I have been studying First World worst-case scenarios where cyber and life intertwine, and was recommended to review this session. It is a panel discussion that included the former CIA Director on the threat of an EMP to U.S. national infrastructure.

Mr. Woolsey takes roughly the first 10 minutes to set the stage, and it is worth listening to in order to anchor why the NERC/FERC CIP standards, the Executive Order, and the betterment initiatives led by private industry are so important.

A bit extreme, and not something many 'concern themselves' with, but it is important to start translating what information security and cyber mean in a tangible fashion. Too often we deal only in probabilities and numbers and forget all else.

Fmr. CIA Dir. Jim Woolsey warns of existential EMP threat to America – YouTube.

Change all your passwords, now.. it is that simple

There are many reasons to change passwords, and in most business settings passwords must be changed every 90 days. This usually applies to end users and rarely to system-to-system accounts. A recent vulnerability creates the possibility that any account that accesses a system on the internet (specifically one using HTTPS with OpenSSL, but let's not complicate the clarion call here) is exposed and known by someone other than the owner.

By that very condition the password should be changed, and now.

So if you are a person reading this …

  1. Pull up your accounts and begin methodically changing each password to a fresh new one (there is a condition here that the site you are updating has already fixed the vulnerability and internally followed good practices, but let's presume the best scenario)
  2. Add a note to your calendar 3-4 months from now to change the passwords again

If you run a technology environment that had OpenSSL installed and was vulnerable, grab a cup of coffee and a sandwich, then…

  1. Begin the methodical (perimeter first, working your way in through the layers) and careful task of updating all certificates, credentials, and end-user accounts
  2. Write amazing, clear explanations of the need, value, and importance of this process for your users
  3. Force a password reset for all users that have accounts accessing your services
  4. Log out (invalidate) all app and online cookie sessions (revoke tokens, etc.)
  5. Reissue your private key and SSL certificate
  6. Review and examine your API and third-party connections to confirm these are updated, reset, and secured
  7. Add a bit of extra monitoring on the logs for a while
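Steps 3 and 4 above can be sketched in miniature. `User`, `SessionStore`, and their fields are hypothetical stand-ins for whatever your user store and session layer actually provide; this shows the shape of the operation, not a real product's API.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    email: str
    must_reset_password: bool = False  # checked at next login

@dataclass
class SessionStore:
    active: dict = field(default_factory=dict)  # session_id -> email

    def revoke_all(self) -> int:
        """Invalidate every app and cookie session; return how many."""
        count = len(self.active)
        self.active.clear()
        return count

def force_resets(users: list) -> None:
    """Flag every account so the next login requires a fresh password."""
    for user in users:
        user.must_reset_password = True

users = [User("a@example.com"), User("b@example.com")]
sessions = SessionStore(active={"s1": "a@example.com", "s2": "b@example.com"})

force_resets(users)
revoked = sessions.revoke_all()
```

The ordering matters: revoke sessions and force resets only after the vulnerable library and the keys/certificates are replaced, or the fresh credentials are exposed through the same hole.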

This is all the result of the Heartbleed.com disclosure, but let's not get technical here .. these are good practices, and now, with the probability above 'unlikely', it is a timely habit to re-embrace.


Stay safe,



Big Data is in its early maturity stages, and could learn greatly from infosec – re: the Google Flu Trends failure

The concept of analysing large data sets, crossing data sets, and seeking the emergence of new insights and better clarity is the constant pursuit of Big Data. Given the volume of data being produced by people and computing systems, stored, and now available for analysis, there are many possible applications that have not yet been designed.

The challenge with any new 'science' is that the path from concept to application cannot always be a straight line, or a line that ends where you were hoping. The implications for businesses using this technology, as with information security, require an understanding of its possibilities and weaknesses. False positives and exaggerations were a problem in information security's past, and now the problem seems almost understated.

An article from Harvard Business Review details how the Google Flu Trends project failed in 100 out of 108 comparable periods. The article is worth a read, but I wanted to highlight two sections below as they relate to business leadership.

The quote picks up where the author is speaking about the problem of the model:

“The first sign of trouble emerged in 2009, shortly after GFT launched, when it completely missed the swine flu pandemic… it’s been wrong since August 2011. The Science article further points out that a simplistic forecasting model—a model as basic as one that predicts the temperature by looking at recent-past temperatures—would have forecasted flu better than GFT.”

So in this analysis the model and the Big Data source were inaccurate. There are many cases where such events occur; if you have ever followed the financial markets and their predictions, you have seen they are more often wrong than right. In fact, it is a psychological flaw that we as humans zero in not on the predictions that were wrong, but on those that were right. This is a risky proposition in anything, and it is important for us in business to focus on the causes of such weakness and not be distracted by false positives or convenient answers.

The article follows up the above conclusion with this statement relating to the result:

“In fact, GFT’s poor track record is hardly a secret to big data and GFT followers like me, and it points to a little bit of a big problem in the big data business that many of us have been discussing: Data validity is being consistently overstated. As the Harvard researchers warn: “The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.”

The quality of the data is blamed here, and I would challenge that…

The analogy is from information security, where false positives and similar trends were awful in the beginning and have become much better over time. The key data inputs and analysis within information security come from sources that are commonly uncontrolled and certainly not the most reliable for scientific analysis. We live in a (data) dirty world, where systems behave uniquely to the person interfacing with them.

We must continue to develop tolerances in our big data analysis and in the systems we use to seek benefit from it. This must be balanced with criticism to ensure that the source and results are true, and not an anomaly.

Of course, the counter-argument could be: if the recommendation is to learn from information security because it has had to live in a dirty-data world, should information security instead be focusing on creating “instruments designed to produce valid and reliable data amenable for scientific analysis”? Has this already occurred? At every system component?

A grand adventure,



How to determine how much money to spend on security…

A question that many organizations struggle with is how much money is appropriate to spend on information security, per user, per year. While balancing security, privacy, usability, profitability, compliance, and sustainability is an art, organizations have a new data point to consider.

Balancing – information security and compliance operations

The ideal approach businesses take must always be based on internal and external factors weighted against the risks to their assets (assets here generally include customers, staff, technology, data, and the physical environment). An annual review identifying and quantifying the importance of these assets is a key regular exercise with product leadership; an analysis of the factors that influence those assets can then be completed.

Internal and external factors include a number of possibilities, but key ones that rise to importance for business typically include:

  1. Contractual commitments to customers, partners, vendors, and operating-region governments (regulation)
  2. Market demands (activities necessary to match the market expectations to be competitive)

In aggregate, and distributed based upon the quantitative analysis above, safeguards and practices may be deployed, adjusted, and removed. Understanding the economic impact of the assets, and of the tributary assets and business functions that enable the business to deliver services and products to market, allows for a deeper analysis. I find the rate of these adjustments depends on the industry and product cycle, and is influenced by operating events. At the most relaxed cadence, these would happen over a three-year cycle, with a minor analysis conducted annually across the business.
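The weighting exercise described above can be illustrated with a toy allocation. The asset categories, weights, and budget figure below are invented for illustration; they are not benchmarks, and a real review would derive them from the annual exercise with product leadership.

```python
# Illustrative importance scores from the annual asset review (sum need not be 1;
# the allocation normalizes them).
asset_weights = {
    "customer_data": 0.40,
    "staff_and_knowledge": 0.20,
    "technology_platform": 0.25,
    "physical_environment": 0.15,
}

def allocate(budget: float, weights: dict) -> dict:
    """Distribute a security budget in proportion to asset importance."""
    total = sum(weights.values())
    return {asset: round(budget * w / total, 2) for asset, w in weights.items()}

allocation = allocate(1_000_000, asset_weights)
```

Re-running the same calculation after each annual review (or after an operating event shifts the weights) is what lets the spend ebb and flow with the internal and external factors rather than ratcheting upward by default.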

Mature organizations continue a cycle of improvement (note – improvement does not mean more $$ or more security / regulation; it is improvement based on the internal and external factors, and I certainly see it ebbing and flowing).

Court settlement that impacts the analysis and balance for information security & compliance:

Organizations historically had to rely on surveys and tea-leaf readings of financial reports where the costs of data breaches and FTC penalties were detailed. These collections of figures put the cost of a data breach anywhere between $90 and $190 per user. Depending on the need, other organizations would baseline costing figures against peers (i.e., do we all have the same number of security staff; what percentage of revenue is spent, etc.).

As a result of a recent court case, I envision the below figures to be joined in the above analysis. It is important to consider a few factors here:

  1. The data was considered sensitive (which could be easily argued across general Personally Identifiable Information or PII)
  2. There was a commitment to secure the data by the provider (a common statement in many businesses today)
  3. The customers paid a fee to be with service provider (premiums, annual credit card fees, etc.. all seem very similar to this case)
  4. Those that had damages and those that did not were included within the settlement

The details of the court case:

The parties' dispute dates back to December 2010, when Curry and Moore sued AvMed in the wake of the 2009 theft of two unencrypted laptops containing the names, health information and Social Security numbers of as many as 1.2 million AvMed members.

The plaintiffs alleged the company's failure to implement and follow “basic security procedures” led to plaintiffs' sensitive information falling “in the hands of thieves.” – Law360

A settlement at the end of 2013, a new fresh input:

“Class members who bought health insurance from AvMed can make claims from the settlement fund for $10 for each year they bought insurance, up to a $30 cap, according to the motion. Those who suffered identity theft will be able to make claims to recover their losses.”

For businesses conducting their regular analysis, this settlement is important, as the math applied here is:

$10 × (number of years as a client) × (number of clients) = damages … PLUS all of the required upgrades and the actual damages impacting the customers.
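That math, including the $30 cap from the settlement terms quoted above, can be expressed directly. The client-base figures in the example are purely illustrative.

```python
def settlement_exposure(clients_by_years: dict,
                        rate: float = 10.0,
                        cap: float = 30.0) -> float:
    """Sum min(rate * years, cap) across the client base.

    clients_by_years maps {years as a client: number of clients}.
    """
    return sum(
        min(rate * years, cap) * count
        for years, count in clients_by_years.items()
    )

# Illustrative client base: 500k one-year, 300k two-year, 200k five-year clients.
exposure = settlement_exposure({1: 500_000, 2: 300_000, 5: 200_000})
# 500k*$10 + 300k*$20 + 200k*$30 (capped) = $17,000,000
```

Plugging a provider's own tenure distribution into a model like this turns the settlement from a headline into a concrete input for the budget analysis above.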


Businesses should update their financial analysis with the figures and situational factors of this court case. In some cases this will reduce budgets; in others, where service providers have similar models and data, better security will be needed.

As always, the key is regular analysis against the internal and external factors, to be nimble and adaptive in an ever-changing environment. While balancing these external factors, extra vigilance is needed to ensure the internal asset needs are being satisfied and remain correct (as businesses shift to cloud service providers and partnering, the asset assumptions change .. frequently .. and without any TPS memo).




How to do DevOps – with security not as a bottleneck

As on any good morning, I read a nice article written by George Hulme that got me thinking on this topic; that led to a discussion with colleagues in the Atlanta office, and resulted in me drawing crazy diagrams on my iPad trying to explain sequencing. Below I share my initial thoughts and diagrams for consumption and critique, to improve the idea.

Problem Statement: Is security a bottleneck to development, and likely more so in a continuous delivery culture?

Traditional development cycles look like this …

  1. A massive amount of innovation and effort occurs by developers
  2. Once everything works to spec, it is sent to security for release to Ops
  3. In most cases security “fights” the release, and in a few cases fails it and asks developers to patch (patching itself implies a fix, not a real solution), and then
  4. a final push through security to Ops
There are many problems here, but to tackle the first myth – security here is a bottleneck because that is how it is structurally placed in the development cycle.
On a time basis (man-days; duration; level of work), security is barely even present on the product develop-to-deploy timeline – akin to thinking that man has been on Earth a long time, a mistake when taken relative to the age of the planet .. but I digress.
Solution: In a continuous delivery environment, iterate security cycles

As Mr. Hulme highlighted in the article, integrating information security with development through automation will certainly help scale #infosec tasks, but there is more. Integrate through rapid iterations and feedback loops (note the attempt at coloring the feedback loops by infosec & ops in the diagram, joined and consistent with the in-flight development areas).
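The contrast between the two sequencings can be sketched as a toy timeline: security as one gate at the end versus security checks inside every iteration. The check names are illustrative, not a prescribed toolchain.

```python
SECURITY_CHECKS = ["static_analysis", "dependency_scan", "config_review"]

def end_of_cycle(iterations: int) -> list:
    """Traditional model: all security work lands after development ends."""
    timeline = [f"dev_{i}" for i in range(iterations)]
    timeline += SECURITY_CHECKS  # one big bottleneck at the end
    return timeline

def iterated(iterations: int) -> list:
    """Continuous model: each iteration carries the security checks with it."""
    timeline = []
    for i in range(iterations):
        timeline.append(f"dev_{i}")
        timeline.extend(SECURITY_CHECKS)  # feedback inside the iteration
    return timeline
```

The total security work is the same in both models; what changes is where it sits. Spread across iterations, each check reviews a small delta and feeds back immediately, so security stops being the structural choke point at release.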

While high level, I find as I work with leadership within organizations that clearly communicating and breaking out the benefits to their security posture, the ability to hold market launch dates, and the clarity for technology attestations is equally as important as the code base itself. Awareness and comprehension of the heavy work being done by developers, security, Ops, and compliance/audit teams allows leadership to provide appropriate funding, resources, governance, monitoring, and timelines (time is always the greatest gift).

How have I developed my viewpoint?

I have been spending an increasing amount of time these past few years working with service provider organizations and global F100 companies. The common thread I am finding is the acceleration of, and dependency on, third-party providers (Cloud, BPO, integrated operators, etc.), and in the process I have had an interesting role with continuous delivery and high-deploy partners. Specifically, my teams have audited and implemented global security compliance programs for those running these high-deploy environments, and sought to establish a level of assurance (from the audit / public accounting perspective) and security (actually being secure, with better sustainable operations).