
FedRAMP on the Cloud: AWS Architecture and Security Recommendations

In December Amazon released a nice guide with architecture layouts + tips across the NIST 800-53 standard. This is an important tool for ANY business looking to accelerate their operations into a distributed system model.

I took a few things away from this PDF – chief among them that every company moving to the cloud should read it. It not only provides an architecture layout that is critical in planning, but it also has numerous nuggets of awesome sprinkled throughout – an example:

 Many of the SaaS service providers do not have a FedRAMP ATO, so using their services will have to be discussed with the authorizing official at the sponsoring agency. Pg 28 <– sounds simple, but very costly if done under hopeful assumptions of acceptance!

Regarding the need to harden a base system:

AWS has found that installing applications on hardened OS’s can be problematic. When the registry is locked down, it can be very difficult to install applications without a lot of errors. If this becomes an issue, our suggestion is to install applications on a clean version of windows, snapshot the OS and use GPOs (either locally or from the AD server) to lock down the OS. When applying the GPOs and backing off security settings, reboot constantly because many of the registry changes only take effect upon reboot.

A bit about the White paper as described by Amazon:

Moving from traditional data centers to the AWS cloud presents a real opportunity for workload owners to select from over 200 different security features (Figure 1 – AWS Enterprise Security Reference ) that AWS provides. “What do I need to implement in order to build a secure and compliant system that can attain an ATO from my DAA?” is a common question that government customers ask. In many cases, organizations do not possess a workforce with the necessary real-world experience required to make decision makers feel comfortable with their move to the AWS cloud. This can make it seem challenging for customers to quickly transition to the cloud and start realizing the cost benefits, increased scalability, and improved availability that the AWS cloud can provide.

A helpful guide, and glad to see a major cloud provider enabling its clients to excel at information security operations – in this case, FedRAMP.

Methodology for the identification of critical connected infrastructure and services — SaaS, shared services…

ENISA released a study with a methodology identifying critical infrastructure in communication networks. While this is important and valuable as a topic, I dove into this study for a particularly selfish reason … I am SEEKING a methodology that we could leverage for identifying critical connected infrastructure (cloud providers, SaaS, shared services internal to large corporations, etc.) for the larger public/private sector. Here are my highlights – I would value any additional analysis, always:

  • Challenge to the organization: “..which are exactly those assets that can be identified as Critical Information Infrastructure and how we can make sure they are secure and resilient?”
  • Key success factors:
    • Detailed list of critical services
    • Criticality criteria for internal and external interdependencies
    • Effective collaboration between providers (internal and external)
  • Interdependency angles:
    • Interdependencies within a category of service
    • Interdependencies between categories of services
    • Interdependencies among data assets
  • Establish baseline security guidelines (due care):
    • Balanced to business risks & needs
    • Established at procurement cycle
    • Regularly verified (at least w/in 3 yr cycle)
  • Tagging/Grouping of critical categories of service
    • Allows for clean tracking & regular security verifications
    • Enables troubleshooting
    • Threat determination and incident response
  • Methodology next steps:
    • Partner with business and product teams to identify economic entity / market value
    • Identify the dependencies listed above and mark criticality based on entity / market value
    • Develop standards needed by providers
    • Investigate how monitoring to standards can be managed and achieved (in some cases contracts can support you, others will be a monopoly and you’ll need to augment their processes to protect you)
    • Refresh and adjust annually to reflect modifications of business values
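The next steps above can be sketched as a simple scoring model – partner-identified market values feed criticality scores for shared dependencies. A minimal sketch; every service name, value, and weight below is a hypothetical illustration, not part of the ENISA methodology:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    market_value: float  # economic value identified with business/product teams
    dependencies: list = field(default_factory=list)  # services this one relies on

def criticality(services: dict) -> dict:
    """Score each service by its own value plus the value of its dependents.

    One-hop propagation only, for simplicity: a shared dependency inherits
    the market value of everything that sits directly on top of it.
    """
    scores = {name: svc.market_value for name, svc in services.items()}
    for svc in services.values():
        for dep in svc.dependencies:
            scores[dep] += svc.market_value
    return scores

# Hypothetical inventory: a SaaS billing provider both products depend on.
inventory = {
    "billing-saas": Service("billing-saas", 1.0),
    "product-a": Service("product-a", 5.0, ["billing-saas"]),
    "product-b": Service("product-b", 3.0, ["billing-saas"]),
}

scores = criticality(inventory)
# billing-saas scores 1.0 + 5.0 + 3.0 = 9.0 – a low-value service in isolation
# surfaces as critical connected infrastructure once dependencies are counted.
```

The annual refresh is then just re-running the scoring with updated market values, which keeps the criticality tags aligned with shifting business value.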

I hope this breakout is helpful. The ENISA document has a heavy focus on promoting government / operator ownership, but businesses cannot rely on or wait for such action and should move accordingly. The above is heavily modified and original thinking based on my experience with structuring similar business programs. A bit about ENISA’s original intent of the study:

This study aims to tackle the problem of identification of Critical Information Infrastructures in communication networks. The goal is to provide an overview of the current state of play in Europe and depict possible improvements in order to be ready for future threat landscapes and challenges. Publication date: Feb 23, 2015 via Methodologies for the identification of Critical Information Infrastructure assets and services — ENISA.

Best, James

State of compliance vs. compliant, re: New PCI Compliance Study | PCI Guru

A new study was released by Branden Williams and the Merchants Acquirer Committee (MAC), and it is worth a read. One aspect that jumped out at me is the compliance vs. compliant rates shared in the study. The difference here is between those who have represented being PCI compliant through Attestations of Compliance (AOC) and those who have had their programs pressure tested by the criminals of the world, and been found wanting.

Here is the snippet from PCI GURU that highlights this state of discrepancy:

The biggest finding of the study and what most people are pointing to is the low compliance percentages across the MAC members’ merchants.  Level 1, 2 and 3 merchants are only compliant around 67% to 69% of the time during their assessments.  However, most troubling is that Level 4 merchants are only 39% compliant.

Depending on the merchant level, these figures are not even close to what Visa last reported back in 2011.  Back then, Visa was stating that 98% of Level 1 merchants were reported as compliant.  Level 2 merchants were reported to be at 91% compliance.  Level 3 merchants were reported at 57% compliance.  As is Visa’s practice, it only reported that Level 4 merchants were at a “moderate” level of compliance.

via New PCI Compliance Study | PCI Guru.

Here is the link to the report from Branden & MAC

Board of Directors, CISO, and legal should all care deeply that PCI security (and certainly that of other contractual agreements) is achieved honestly. Too often organizations view this like registering a car with the government. It is far too complex and impactful to people within and outside a given business for that. The cyber economic connections between proper, efficient, and effective security all lend to better products in the market and more focus on what the business is driving towards.

Is your program honestly secure and fully addressing these best-practice principles?

Best,

James

Continuous Improvement, Audit, and the Agile 2014 Conference .. My lessons


Agile 2014 Conference session


Every moment we are learning something new. The greatest challenge is to take advantage of this new information and do something substantial – something real – with it.


As an adventureman in the DevOps / Audit space, I have the privilege of evaluating the opportunities, risks, and future directions for many enterprises. The sophistication of these enterprises spans far and wide – from Fortune 20 companies and 700-person agile teams to small startups and even smaller teams of five. These companies have one thing in common: a desire to create a business partnership that will accomplish secure, privacy-minded, and compliant operations. To put it simply, these companies have the passion and rigor to overcome a Big 4 audit.

On Wednesday I spoke at the Agile 2014 conference with esteemed author and innovator, Gene Kim. Our session title was “Keeping the Auditor Away: DevOps Compliance Case Study.” Attendees at this lecture benefited from a 90-person open collaboration and sharing of ideas. A few points resonated with me.

On Leadership:
To lead a product development team requires skill beyond balancing the needs and output of the teams; it requires the talent of connecting the development activities to the governance of the business at the highest control level. The ability to serve the customer is only half of the job description. The other half consists of considering internal business stakeholders (internal auditing, marketing, information security, and compliance procedures).

On Execution:

  • As soon as a process that is efficient and effective is identified, automate as many things as possible
  • Automatically set gates throughout the testing process, against the configurable standards
  • Leverage the application gates with configurable standards to conduct repeatable, verifiable, and scalable operational testing
  • Operational testing must include complete and inline testing
  • Centrally manage versioning of the configurations and deployments
  • The testing executed should reflect internal and external requirements, general security, information security, compliance, development, and audit designed safeguards
  • The output of this testing and the automated gates should result in hard evidence that can be easily presented during audits (ie: logs)
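The execution points above can be sketched as a minimal automated gate: configurable standards, a pass/fail decision, and a log line that doubles as audit evidence. The control names and thresholds here are invented for illustration:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("compliance-gate")

# Centrally versioned, configurable standards (hypothetical thresholds).
STANDARDS = {"max_critical_vulns": 0, "min_test_coverage": 0.80}

def gate(build_results: dict, standards: dict = STANDARDS) -> bool:
    """Evaluate a build against the standards; log hard evidence for auditors."""
    passed = (
        build_results["critical_vulns"] <= standards["max_critical_vulns"]
        and build_results["test_coverage"] >= standards["min_test_coverage"]
    )
    # The structured log line itself is the audit evidence:
    # inputs, the standard applied, and the verdict, all in one record.
    log.info(json.dumps({"results": build_results,
                         "standards": standards,
                         "gate_passed": passed}))
    return passed

assert gate({"critical_vulns": 0, "test_coverage": 0.92}) is True
assert gate({"critical_vulns": 2, "test_coverage": 0.92}) is False
```

Because the standards are data rather than code, they can be versioned and deployed centrally, which is what makes the testing repeatable, verifiable, and scalable.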

Startups and enterprises alike have the opportunity to be more secure, deploy better product, and achieve audit safeguards beyond those of traditional brick-and-mortar development shops. The basic attributes of success are highlighted above. Add some extreme talent in development, integration with security, compliance, and marketing, and success is easily obtainable!

Thank you to everyone who attended and contributed. It was a truly outstanding experience and I look forward to continuing the collaboration. The slides from our presentation are available here.


To see the shared toolkit that is being developed for DevOps and Auditors – visit our shared public work in progress at http://bit.ly/DevOpsAudit.


A special thank you to Gene Kim and all those in the space who welcome everyone with a passion and desire to be a part of something great.

Best,

James DeLuccia


How to determine how much money to spend on security…

A question that many organizations struggle with is how much money is appropriate to spend, per user per year, on information security. While balancing security, privacy, usability, profitability, compliance, and sustainability is an art, organizations have a new data point to consider.

Balancing – information security and compliance operations

The ideal approach that businesses take must always be based on internal and external factors that are weighted against the risks to their assets (assets here generally include customers, staff, technology, data, and the physical environment). An annual review identifying and quantifying the importance of these assets is a key regular exercise with product leadership; then an analysis of the factors that influence those assets can be completed.

Internal and external factors include a number of possibilities, but key ones that rise to importance for business typically include:

  1. Contractual commitments to customers, partners, vendors, and operating region governments (regulation)
  2. Market demands (activities necessary to match the market expectations to be competitive)

In aggregate, and distributed based upon the quantitative analysis above, safeguards and practices may be deployed, adjusted, and removed. Understanding the economic impact of the assets and the tributary assets/business functions that enable the business to deliver services and products to market allows for a deeper analysis. I find the rate of these adjustments depends on the business industry and product cycle, and is influenced by operating events. At the most relaxed cadence, these would happen over a three-year cycle with minor analysis conducted annually across the business.

Mature organizations would continue a cycle of improvement (note – improvement does not mean more $$ or more security/regulation; it is improvement based on the internal and external factors, and I certainly see it ebbing and flowing).

Court settlement that impacts the analysis and balance for information security & compliance:

Organizations historically had to rely on surveys and tea-leaf readings of financial reports where costs of data breaches and FTC penalties were detailed. These collections of figures put the cost of a data breach anywhere between $90 and $190 per user. Depending on the need, other organizations would baseline costing figures against peers (i.e., do we all have the same number of security staff; what percentage of revenue is spent, etc.).

As a result of a recent court case, I envision the below figures to be joined in the above analysis. It is important to consider a few factors here:

  1. The data was considered sensitive (which could be easily argued across general Personally Identifiable Information or PII)
  2. There was a commitment to secure the data by the provider (a common statement in many businesses today)
  3. The customers paid a fee to be with service provider (premiums, annual credit card fees, etc.. all seem very similar to this case)
  4. Those that had damages and those that did not were included within the settlement

The details of the court case:

The parties' dispute dates back to December 2010, when Curry and Moore sued AvMed in the wake of the 2009 theft of two unencrypted laptops containing the names, health information and Social Security numbers of as many as 1.2 million AvMed members.

The plaintiffs alleged the company's failure to implement and follow “basic security procedures” led to plaintiffs' sensitive information falling “in the hands of thieves.” – Law360

A settlement at the end of 2013, a new fresh input:

“Class members who bought health insurance from AvMed can make claims from the settlement fund for $10 for each year they bought insurance, up to a $30 cap, according to the motion. Those who suffered identify theft will be able to make claims to recover their losses.”

For businesses conducting their regular analysis this settlement is important as the math applied here:

$10 x (# of years a client) x (# of clients) = damages .. PLUS all of the upgrades required and the actual damages impacting the customers.
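That math is straightforward to fold into a budget model. A minimal sketch using the settlement terms quoted above ($10 per year of coverage, capped at $30, per class member); the membership counts are hypothetical:

```python
def settlement_exposure(clients: int, avg_years: float,
                        per_year: float = 10.0, cap: float = 30.0) -> float:
    """Settlement-style exposure: per-year payment, capped, per class member."""
    return clients * min(avg_years * per_year, cap)

# Hypothetical: 1.2M affected members averaging 4 years of coverage each.
# 4 years x $10 hits the $30 cap, so exposure = 1,200,000 x $30 = $36M.
exposure = settlement_exposure(1_200_000, 4)

# Actual damages claims and required security upgrades come on top of this figure.
```

Even before upgrade costs, a figure like this gives the annual asset review a concrete floor to weigh against the per-user security spend.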

Finally

Businesses should update their financial analysis with the figures and situational factors of this court case. In some cases this will reduce budgets; in others – where service providers have similar models and data – better security will be needed.

As always, the key is regular analysis against the internal & external factors to be nimble and adaptive to the ever-changing environment. While balancing these external factors, extra vigilance is needed to ensure the internal asset needs are being satisfied and remain correct (as businesses shift to cloud service providers and through partnering, the asset assumptions change .. frequently .. and without any TPS memo).

Best,

James


RSA 2014 – 2 themes from Tuesday

A fresh post in a long while ..

So, after writing for clients and my research being all-consuming this past year, I am re-focusing time in my day to share observations and thoughts. Why? Quite simply, I learn more when I write, share, and get feedback than when I live in an echo chamber. How will this benefit the world/you? Simple: you will share in the knowledge I gain from sweat and toil and learn through the same iteration cycle as I do. I will also begin focusing my posts on my dedicated portal for such topics and (attempt) to limit my writings here to on-topic. I hope you will continue to join me on the new(er) site and the other media platforms.

Also, I am trying to aim for a high-iteration format instead of the long form of old. Meaning, shorter (I hope) posts that are succinct on ideas without the typical pre/post writings that are common in most write-ups. My ask: please share, challenge, and seek to understand my perspective – as I will do for you.

Onward then …

Today is RSA day, and two themes were evident and of most importance, based on several large client discussions, analyst discussions, and a few researchers I had the privilege of speaking with today:

  1. Communicating the WHY is of paramount importance today (WHY are we spending security budgets on X assets? WHY are our practices for managing enablement between development, operations, and security out of sync? Etc..)
  2. Passive Resistance (my phrase, but after a day of hearing about the NSA, RSA, and crypto architects disowning responsibility for operational deployment and for “enabling” privacy and security, this is where I landed) is the idea of persons and organizations being asked to respond to these threats in a manner that impinges their capabilities. There are many problems with this stated position, but I shall leave that for another day and your own pondering

Businesses must address #1 and be extremely cautious with #2; the latter will be a heavy discussion during my RSA session on Thursday for all who are present. If you are unable to attend, I will as usual post my work and research in note form online. Looking forward to learning and expanding my thinking with you.

Best,


James


What do major developments in big data, cloud, mobile, and social media mean? A CISO perspective..


Tuesday afternoon the CISO-T18 session – Mega-Trends in Information Risk Management for 2013 and Beyond: CISO Views – focused on the results of a survey sponsored by RSA (link below). It provided a backdrop for some good conversation, but more so it gave me a nice environment to elaborate on some personal observations and ideas. The first tweet I sent hammered the main slide:

“Major developments with Big Data, Cloud, Mobile, and Social media” – the context and reality here is cavernous…

My analysis and near-random break down of this tweet are as follows with quotes pulled from the panel.

First off – be aware that these key phrases / buzzwords mean different things to different departments and at each level (strategic executives through tactical teams). Big Data analytics may not be a backend operational pursuit, but a revenue-generating front-end activity (such as executed by Walmart). These different instantiations are likely happening at different levels with varied visibility across the organization.

“Owning” the IT infrastructure is not a control to prevent the different groups from launching into these other ‘Major developments’.

The cost-effectiveness of the platforms designed to serve businesses (i.e., Heroku, Puppet Labs, AWS, etc.) is what is defining the new cost structure. CIO and CISO must adapt to it.

>The cloud is not cheaper if it does not have any controls. This creates a risk of the data being lost due to “no controls” – highlighted by Melanie from the panel.  <– I don’t believe this statement is generally true; it is generally FUD.

Specifically – there is a service-level expectation that cloud service providers compensate for the lack of auditability of those “controls”. There are motions to provide a level of assurance to these cloud providers beyond the ancient method established through ‘right to audit‘.

A method of approaching these challenging trends, specifically Big Data, is below, as highlighted by one of the CISOs (apologies, I missed his name), with my additions:

  • Data flow mapping is key to providing efficient and positive ‘build it’ product development. It helps understand what matters (to support and keep operational), but also shows if anything is breaking as a result.
  • Breaking = violating a contract, breaking a compliance requirement, or negatively affecting other systems and user requirements.
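Data flow mapping can start very simply – a directed graph of flows between systems, each annotated with the obligations that flow carries. A minimal sketch; the system names, data classifications, and obligation tags below are all hypothetical:

```python
# Each flow: (source, destination, data classification, obligations on that flow).
flows = [
    ("web-app", "payments-api", "cardholder", {"PCI"}),
    ("payments-api", "analytics", "tokenized", set()),
    ("web-app", "marketing-saas", "PII", {"privacy-notice"}),
]

def flows_violating(obligations_met: dict) -> list:
    """Flag flows whose obligations are not satisfied by the destination system."""
    return [(src, dst) for src, dst, _, needs in flows
            if not needs <= obligations_met.get(dst, set())]

# Hypothetical posture: payments-api is PCI-assessed; marketing-saas has no attestation.
broken = flows_violating({"payments-api": {"PCI"}})
# -> [("web-app", "marketing-saas")] – that flow is "breaking" a compliance requirement.
```

Even a toy map like this makes the “is anything breaking?” question mechanical: add a system or a flow, re-run the check, and the contract or compliance gaps surface immediately.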

Getting things Done – the CISO 

Two observations impacting the CISO and information technology organization include:

  1. The Board is starting to become aware and seeking to see how information security is woven within ERM
  2. Budgets are not getting bigger, and likely shrinking due to expectations of productivity gains / efficiency / cloud / etc…

Rationalization of direction, controls, and security responses must be fast for making decisions and executing…

Your ability to get things done has little to do with YOU doing things, but with getting others to do things. Enabling, partnering, and teaming is what makes the business move. CIO and CISO must create positive build-it inertia.

Support and partner with the “middle management” – the API of the business, if you will.

  • We too often focus on “getting to the board” and deploying / securing the “end points” .. Those end points are the USERS, and between them and the Board is your API to achieving your personal objectives.

Vendor Management vs procurement of yester-year

Acquiring technology and services must be done through a renewed and redeveloped vendor management program. The current procurement team’s competencies are inadequate, lacking the toolsets to ensure these providers are meeting the existing threats. To be a risk-adaptive organization you must tackle these vendors with renewed rigor. Buying the cheapest parts and services today does not mean what it meant 10 years ago. Today the copied, reverse-engineered Cisco router alternative carries an impressive number of problems immediately after acquisition. Buying is easy – it is the operational continuance that is difficult. This is highlighted by the 10,000+ vulnerabilities in networked devices that will never be updated within corporations, whose risks must be mitigated at a very high and constant cost.

Panel referenced the following report:
http://www.emc.com/microsites/rsa/security-for-business-innovation-council.htm

Thank you to the panel for helping create a space to think and seek answers, or at least more questions!

James DeLuccia IV