Tag Archives: cloud practices

Innovating and penetrating the market – Battlefield Leadership Series – lessons and thoughts

Longues-sur-Mer

At this location on the coast of Normandy you can see the immense naval guns set up to engage approaching ships in World War II. The Germans expended great resources and relied heavily upon these guns in their defensive strategy. Unfortunately for the Germans, their treatment of the workers and locals, the resulting leaks of intelligence, and the exposure inherent in building such vast emplacements proved their downfall.

The Allies often received intelligence on the exact positions of German construction, provided by those building and living in the area. One such source was a blind local farm boy who counted each of his steps precisely and supplied the locations through the French Resistance and Allied intelligence networks.

The result was a gap in the German defensive strategy, a waste of resources, and ultimately, a failure to defend the coast.

Business Reflections: Innovating and Penetrating the market…

  • How are you establishing a product development strategy and running your business as a whole?
  • Are there defensive attributes that you deem critical, and how might an adversary route around them?

Practical example: in the information security and intellectual property sector, there are very real threats, and running a secure business requires constantly developing new methods of defense. How have you reevaluated these defenses based on shifts internal to your business and the known threats in the market itself? How does this analysis compare to prior years, and how effective have your defenses proven?

From a product innovation perspective – are you developing features from both the highest and lowest levels? What high-impact, low-development-effort work is underway, and what could be added? Product innovation requires views on both the long and the short run – too often we build complexity because we are able to handle complexity, when sometimes the user really only needs something less complex.

Leadership requires action:

Simply acknowledging the risks and accepting the situation does not prevent disastrous outcomes.


 

What is Battlefield Leadership and what is this series about … 

As part of my pursuit to learn and grow, I sought out the excellent management training team at Battlefield Leadership. I am professionally leveraging this across the multi-million dollar projects I oversee (currently I am the lead executive building global compliance and security programs, specifically in the online services / cloud leader space). Personally, I am bringing these lessons to bear in my pursuit to cross the chasm. Too often I see brilliant technical individuals fail to communicate with very smart business leaders and with the common person on the street. My new book, How Not To Be Hacked, seeks to be a first step in bringing deep information security practices beyond the technologist.

Most exciting, Battlefield Leadership placed this training in Normandy, France. This allowed senior executives to be trained in a setting where serious decisions were made by both sides, each providing a lesson. This series represents my notes (those I could take down) and takeaways. I share them to continue the conversation with the great individuals I met, and with the larger community.

Kind regards,

James

 

Passwords are Dead, Part II: 2nd False Premise – a collaborative research effort, presented at RSA 2013

The era of user-created, user-managed passwords as the sole means of authentication is coming to an end. The utility of such passwords was defined in an era built on assumptions about brute-force capability, system computing power, and proactive security teams. After much debate and analysis… here is the thesis:

[Thesis image]

This is Part II of the topic being explored and discussed at my Wednesday session at the RSA Conference in San Francisco (2013). For the first thesis and False Premise 1, please see the original post. Jumping right in – looking forward to more feedback (thanks for the generous emails, but don’t be shy at the comment field below)!

————————————————————————

FALSE PREMISE TWO: Password strength should transcend devices – mobile, tablets (iPad, Surface) [Updated 2/12/2013]

MOBILE devices:
What is the intent of the password? To stop high-CPU encryption-cracking systems… or to prevent inadvertent strangers from accessing the data? Today we wrap mobile (BYOD, if that suits you) systems into the corporate password-requirement sphere, and in some cases we are being more creative there than on other platforms.

For instance, a popular Apple iOS device site recommends using “accent characters for creating a super strong password.” Agreed, these are more difficult to guess, but is that the threat we are seeking to mitigate? In the space of X character spaces, how creative must we get? (See the sketch below.)
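As a rough way to answer "how creative must we get," here is a small entropy sketch; the character-set sizes are illustrative assumptions, not iOS specifics:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Bits of entropy for a uniformly random password of a given length."""
    return length * math.log2(charset_size)

print(entropy_bits(10, 4))   # 4-digit PIN: ~13.3 bits
print(entropy_bits(40, 4))   # digits plus accented characters: ~21.3 bits
```

The richer character set adds only a handful of bits at passcode lengths: meaningful against offline cracking, nearly irrelevant against an attacker limited to a few guesses before the device locks.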

What are the risks to these mobile devices:

  • Theft
  • Data leakage violating regulatory, contractual, or privacy expectations of customers

If we consider the two threats: theft is not mitigated by the password, as the device will simply be wiped.

[Updated 2/09/13] Data leakage is only possible if the device is on and the password is guessed before the device locks itself permanently. This lockout feature is readily available and easily enabled by the end user, and is even more robust with corporate management technologies.

  • So in this case, the password only needs to not be one of the top 10 most common phone passwords; at that point the device locks and can self-wipe (see the sketch after this list).
  • Another scenario is that the password was gleaned through recording, shoulder surfing, or the device simply being left unlocked. In each case, password strength was not the issue. Other situations?
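To make the lockout point concrete, here is a minimal sketch. The passcode list is a hypothetical stand-in for the "top 10"; the ten-attempt wipe models the iOS-style "erase data" setting:

```python
# Hypothetical "top 10" passcode list, for illustration only.
COMMON_PASSCODES = ["1234", "0000", "2580", "1111", "5555",
                    "5683", "0852", "2222", "1212", "1998"]

def wiped_before_guessed(passcode: str, wipe_after: int = 10) -> bool:
    """Model an attacker trying the most common passcodes first against a
    device that erases itself after `wipe_after` failed attempts."""
    for guess in COMMON_PASSCODES[:wipe_after]:
        if guess == passcode:
            return False  # guessed before the wipe triggered
    return True           # device wiped first; the data leak is cut off

print(wiped_before_guessed("1234"))  # False: a top-10 code falls immediately
print(wiped_before_guessed("7391"))  # True: anything off the list survives
```

Any passcode outside the attacker's short list wins the race, which is why complexity rules matter far less here than the lock-and-wipe behavior itself.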

As we move into an ever-mobile, data-everywhere, always-connected scenario, an interesting ecosystem of access and authentication appears, one that requires continued, serious challenges to the assumptions of our security and assurance programs.

Diving in …

Data is mobile – what role does a single password play in accessing sensitive data? Data stored on the device (cloud storage is addressed at the integration point below) is at risk from a number of threats:

  • The device can be attacked directly and wirelessly (like any other computing device, with IP addresses and ports), but this typically requires physical proximity (the simplest route), which limits it to either opportunistic or very targeted attackers.
  • The device can be stolen, and if there is no OS password, the data itself is attacked and accessed directly. An unlocked device forces risk-mitigation techniques that are harder to apply, so the OS password is the EASIEST control. A password on data within an application is worthless without some form of self-destruct functionality similar to the OS-level safeguards.

>> Why are passwords WORTHLESS at the application level in this situation?

>>> If the attacker is ON the device (physically or remotely) and our use case is an encrypted database, the attacker can copy that encrypted database to their own system for offline attack (easy, with zero user awareness), or they can brute-force the database locally until they get in. (A sketch of the offline attack follows below.)

Without some form of self-destruct, and a tremendous level of assurance in the encryption of the data(base) itself, the data is at risk regardless.
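Here is a minimal sketch of that offline attack, assuming (purely for illustration) an application that protects its local database with a PBKDF2-derived key; the wordlist, salt, and iteration count are invented:

```python
import hashlib

def derive_key(password: str, salt: bytes, iterations: int = 10_000) -> bytes:
    """PBKDF2 key derivation, as an app might use for a local database key."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

# Attacker's side: with a copied database (the salt travels with it), there
# is no lockout and no wipe; only the key-derivation cost slows the loop.
salt = b"example-salt"
target_key = derive_key("letmein1", salt)  # the key protecting the stolen copy

WORDLIST = ["password", "123456", "qwerty", "letmein1", "iloveyou"]
for candidate in WORDLIST:
    if derive_key(candidate, salt) == target_key:
        print(f"recovered password: {candidate}")
        break
```

Nothing on the device can intervene once the copy is made, which is the heart of the "worthless without self-destruct" argument.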

  • Other thoughts here?
  • What is missing?

Passwords play a significant role at certain tollgates upon the data (when stored on the device), and less of one the more “access” the attacker gains to the underlying system. A common refrain of attackers is: with “physical” access I can break into anything. Today, when the data is mobile, we must treat ALL ACCESS as PHYSICAL.

Plethora of devices – Today data is accessed from many devices: some owned by corporations, some by end users, and some by nobody (kiosks). A single password entered into a system that allows single-thread authentication, with NO assurance of the underlying system and no situational awareness of the user seeking authentication, results in failed security.

  • The reuse of passwords across devices threatens the confidentiality of the password itself (as much as that matters).
  • The multitude of devices increases the need to redefine what “access” is and the functions of authorization (I used “functions” instead of “rules” intentionally, to draw attention to the necessity of a broader approach to solving this constraint; a sketch of such a function follows this list).
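To illustrate "functions" of authorization, here is a sketch of a context-aware authorization decision; the signals and policy tiers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    device_managed: bool   # is the device under corporate management?
    device_known: bool     # have we seen this device before?
    user_present: bool     # e.g., a recent interactive or biometric check
    network_trusted: bool  # corporate network vs. kiosk / unknown egress

def authorize(ctx: AccessContext, password_ok: bool) -> str:
    """Authorization as a function of context, not a single password rule."""
    if not password_ok:
        return "deny"
    score = sum([ctx.device_managed, ctx.device_known,
                 ctx.user_present, ctx.network_trusted])
    if score >= 3:
        return "full-access"
    if score == 2:
        return "read-only"
    return "step-up-authentication"  # demand a second factor

# A correct password from a kiosk yields limited access, not full access.
print(authorize(AccessContext(False, False, True, False), password_ok=True))
```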

Integration with third party service providers – [to be expanded…]

—————————-

Conclusion – a preview:

  1. “Stationarity is defined as a quality of a process in which the statistical parameters (mean and standard deviation) of the process do not change with time.” – Challis and Kitney, November 1991 (see the formulation after this list)
  2. Offline data-level authentication – offline in an ‘always connected’ world
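In symbols, this (weak) stationarity condition is commonly written as follows; this is the standard textbook formulation rather than a quotation from Challis and Kitney:

```latex
\mathbb{E}[X_t] = \mu
\qquad \text{and} \qquad
\operatorname{Var}(X_t) = \sigma^2
\qquad \text{for all } t .
```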

[Disclaimer: First off, this is my research and not anyone else’s. Second, the examples above are meant to illustrate technical realities in a reasonably understandable presentation. Let’s focus on the problem: identify weaknesses in the argument, and introduce the mitigations so greatly required in our online world.

I share and seek these answers for the preservation and enhancement of our way of life… as simple as that, and I appreciate you being a part of my journey.]

Always seek, everything…

James DeLuccia

Twitter: @jdeluccia

Download iCloud backups without the device: risk considerations

The ability to save data, sync information, and manipulate information online is extremely convenient and productive. The risks that come with this activity are fresh and emerging. Unlike physical-world risks (kinetic attacks, weather impacts, and human events), where the current state remains stable (i.e., the building doesn’t move addresses overnight), the online cloud-mobile device ecosystem has no such stability.

This lack of fixed state is a feature that both serves and cuts the enterprise. The ability to deliver updates to the user without patches is a key feature for cloud-mobile providers, and something end users now expect. The virtual elimination of patching is upon us.

To highlight but a few alternate challenges that must be considered:

  • Configuration management – the onboarding of new applications and systems into an enterprise is done in a manner that ensures the system components comply with enterprise policy and industry operating standards. This is challenged, and new triggers and actions are required, for cloud-mobile systems, as these may gain new features (e.g., file transfer) that become automatically available to all users overnight.
  • Access and authorization – as the title implies (though such attacks are not limited to Apple), the technology deployed may permit alternative data-access methods. In the cloud-mobile ecosystem the user presumes a physical link between the device (iPhone / tablet) and the data, and that possession implies control and protection. This is a fallacy of cloud-mobile: possession of the device is not the key; the credentials associated with the stored repository of data are. Therefore safeguards and information governance programs must focus on these assets (the credentials).
  • Geo-location – knowing where the data is processed and stored is critical, but not to the severity of knowing a zip code. Sharing precise locations introduces a security risk in general and does not serve the risk-mitigation process. Instead, confirmation of the regions of operation (e.g., so that weather can be considered in BCP) and of the number of parallel instances that will be running are the important details to establish about these facilities.

So, to the “attack / forensic method” described in the title… with the authentication credentials of the user, one can connect to iCloud and download the user’s entire backup. This may include contacts, documents, etc. The tool is described here. (A sketch of the credential-only pattern follows.)
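A minimal sketch of the credential-centric pattern: the endpoint, payload, and flow below are hypothetical (this is not Apple's API or the referenced tool); the point is that the account credentials alone, with no physical device, suffice:

```python
import requests  # third-party HTTP library

def download_backup(username: str, password: str, out_path: str) -> None:
    """Pull a full cloud backup using nothing but account credentials.

    The host and paths below are HYPOTHETICAL, for illustration only.
    """
    session = requests.Session()
    # 1. Authenticate with the user's credentials; no device involved.
    session.post("https://backup.example.com/login",
                 data={"user": username, "pass": password})
    # 2. Retrieve the complete backup archive (contacts, documents, etc.).
    resp = session.get("https://backup.example.com/backup/latest")
    with open(out_path, "wb") as f:
        f.write(resp.content)
```

This is why the governance focus belongs on the credentials, not on the device.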

Again, the point here is to begin considering operational and information security risk at the governance and sustainment level with respect to these mobile-cloud ecosystems, not to focus on a single symptom of operating in this environment (having data downloaded from iCloud).

Other thoughts?

James DeLuccia

/cp ITCC

When you play whack-a-mole with file transfer tools, you chase the mole! Dropbox blocked; Pipe wins

In the enterprise, businesses seek to block channels for transferring files, and in many cases the need to manage these channels is valid and vital to specific business operations. Some of these efforts are based on fear of unknown unknowns; others on identified disclosure risks to sensitive data and in-flight research and development materials.

A more common discussion is to block file-sharing services directly – such as Dropbox. The challenge here is how… some choose policies, and others push technical blocks within the enterprise. The more aggressive will even run regular endpoint policy blocks to disable the applications on the workstation, plus policies blocking URL / IP browsing to the service provider’s addresses. This is one of the only ways to really block the services, and even it does not prevent a sneakernet from bypassing all of these mitigation efforts. (A toy sketch of the blocking gap follows.)
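A toy illustration of why domain blocking chases the mole: a blocklist tuned to known file-sharing domains will pass traffic riding on an already-permitted platform. The domains and URLs are illustrative assumptions:

```python
from urllib.parse import urlparse

# Illustrative egress blocklist: known file-sharing domains only.
BLOCKED_DOMAINS = {"dropbox.com"}

def egress_allowed(url: str) -> bool:
    """Allow any destination whose host is not on the blocklist."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(egress_allowed("https://www.dropbox.com/upload"))          # False: blocked
# A transfer service riding an already-permitted platform sails through:
print(egress_allowed("https://apps.facebook.com/some-pipe-app")) # True
```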

If, though, the business has achieved a high block success rate and shifted user behavior to some other approved file-passing process, then all is good. In some cases “good” may mean back to the “old way,” or some newly designed implementation of an excellent corporate tool.

The risk is failing to fill the end users’ underlying need: the need remains, and the market and users connect through alternate solutions. This, of course, sustains the very risks and threats that motivated blocking the first “mole,” now shifted into a perhaps less preferred channel.

I recently came across such an occurrence with the introduction of Pipe, which uses Facebook Connect to allow point-to-point transfer of files up to 1 GB. The ability to transfer files without a new user account, leveraging the existing user base and capitalizing on Facebook already being permitted internally (on corporate devices, iPads, tablets, iPhones, etc.), is brilliant from a market-entry and ease-of-use standpoint. From a business standpoint, this escalates the need to develop a social media and mobile device strategy – not tactical solutions (because the market is shifting) and not policy alone (because words will not stop the traffic from flowing).

Other considerations:

  • How are you assessing the risks of these emerging platforms and technologies?
  • Do you understand your business processes and where such tools and needs exist within the user base?
  • What monitoring and metrics exist to keep aware of these activities and to improve technology services to meet business demands?
  • How is data management securing the sensitive and important data within the organization?
  • How does your security program / audit group (PCI QSA too) view the presence of these applications within the research, financial reporting, and card data environments?

Here is a nice article at The Verge elaborating on Pipe and its offering.

Other thoughts?

Best,

James DeLuccia IV

Mind the Gap: When third party services are not enough to achieve security or compliance to PCI DSS

MasterCard published a very brief document outlining the very popular use case where a merchant leverages a third-party e-commerce system for processing transactions by redirecting to a separately hosted site. The attraction is the obvious shift of the payment card environment to the hosted-page provider. This does help in reducing PCI DSS scope but, as highlighted within the paper, “…does not remove the need for a robust information security program.”

The brief highlights that there is a risk to merchants (“Based on the current compromise and attack trends”) where attackers may compromise the merchant’s web environment to redirect traffic from the approved hosted-page vendor to a malicious site. This can be executed with a fake page where nothing but an error occurs, or the attackers can proxy (pass through) the traffic to the true hosted-page vendor. The second approach allows the transaction to occur without the user noticing the attack.

The attack mitigations presented (follow best practices) are expected. The paper does not say to solely or specifically follow PCI DSS specifications, but instead to follow best practices appropriate for the web environment itself.

An additional attack mitigation stated is to establish SSL tunnels to fixed addresses and certificates. This is definitely effective for securing the point-to-point connection, but would generally be ineffective against the attack described (as an attacker could simply operate from the compromised merchant host itself). A pinning sketch follows.
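For the point-to-point control itself, here is a minimal certificate-pinning sketch; the pin value is a placeholder, and a real pin would be captured during vendor onboarding:

```python
import hashlib
import socket
import ssl

# Placeholder pin: SHA-256 of the vendor certificate's DER bytes.
EXPECTED_PIN = "0000000000000000000000000000000000000000000000000000000000000000"

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection and refuse it if the peer cert does not match."""
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der_cert = sock.getpeercert(binary_form=True)
    if hashlib.sha256(der_cert).hexdigest() != EXPECTED_PIN:
        sock.close()
        raise ssl.SSLError("certificate pin mismatch: possible redirection")
    return sock
```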

An alternative mitigation approach to consider would be expanding monitoring and response capabilities. As an example, if traffic is being redirected and the merchant host server is compromised, then the next best technique would be (among many) to have automatic triggers at the IPS, FW, and ACL points when these hosts transmit to unapproved targets (a toy sketch follows). This highlights an important need when procuring services involving valuable data: a deep process for onboarding the service provider, one that brings these technical details to light and establishes operational response capabilities jointly with the vendor.
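A toy sketch of such a trigger: compare observed egress connections from the payment-page hosts against an approved-target list and alert on anything else. The allowlist and log format are invented for illustration:

```python
# Approved egress targets for the merchant's payment redirect hosts.
APPROVED_TARGETS = {"payments.hostedpage-vendor.example"}

connection_log = [
    {"src": "web-01", "dst": "payments.hostedpage-vendor.example"},
    {"src": "web-01", "dst": "checkout.attacker.example"},
]

for event in connection_log:
    if event["dst"] not in APPROVED_TARGETS:
        # In production this would fire an IPS/FW/ACL trigger and page the SOC.
        print(f"ALERT: {event['src']} -> {event['dst']} is not an approved target")
```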

The article is short and worth a read. A key question that rang throughout: does the issuance of this guidance make it clear that, if the described attack happens, merchant Y is deemed out of PCI DSS compliance? The closing paragraph provides some light. Would love others’ thoughts here too!

“While a merchant may be able to reduce or remove the scope of its environment’s applicability to comply with PCI DSS requirements by using hosted payment pages, it does not remove the merchant’s risk of being involved in, or even the source of, an account data compromise event.

Merchants still have a duty to employ security controls based on industry best practices to their web based environment to protect payment card data.”

Link directly to the guidance.

Best,

James DeLuccia

Infrastructure Security Response, Google excludes 11M+ domains

Google officially removed a “freehost” provider, a Korean company operating the .co.cc domain, from its results (link to The Register article). This was done on the basis of a large percentage of spammy or low-quality sites. According to the Anti-Phishing Working Group (report), this top-level domain accounted for a large amount of malware, phishing, and spam traffic.

This defensive move by Google nicely frames a countermove to what I have termed ‘infrastructure-level attacks’. These attacks are executed through planned, global programs designed to bypass the fundamental security safeguards organizations deploy. The popular examples are the RSA SecurID token and Comodo certificate compromises.

The challenge has been how to respond in kind to such attacks, and here we are seeing an exploration of such a response. The U.S. Government is exploring filters and preventive tools at the ISP level, and here we have a purveyor of search results eliminating the possibility of users connecting to these domains, regardless of any non-malicious sites among them.

This highlights the need to examine the information security program of your organization and its core providers. This examination must consider risks that are known alongside ‘far-fetched ideas’ (such as a domain being blocked at the ISP level) that may impact your business. Such continuous programs of risk assessment are key, but just as critical is the examination and pivoting of the program itself (yes… a risk assessment of the risk assessment program).

Counter thoughts?

James DeLuccia

#RSAC Panel on Cloud – security and privacy

Security, privacy, and liability are top issues for #Cloud security and a popular topic at #RSAC (RSA SFO 2011 Conference). The first session today was moderated by Drue Reeves (Gartner), with panelists Michelle Dennedy (iDennedy), Tanya Forsheit (InfoLawGroup LLP), Archie Reed (HP), and Eran Feigenbaum (Google). A great discussion with lots of interesting points; I would highlight the ideas for managing security pragmatically. Below are my notes. Apologies for any breaks in the flow of logic: while trying to capture the ideas put forward by the panel, I sometimes got lost in the discussion.

Customers cannot rely on the provider alone to ensure data confidentiality and compliance, but they are seeking assurances.

Cloud-Risks

  • Eran Feigenbaum (Google) – Customers want more transparency in the cloud generally. Google is seeing smaller companies move into the cloud, and the service and type of cloud sought varies. Clouds also vary in their ability to serve (Gmail in 2010 had an uptime of 99.984%).
  • Panel – Due diligence is necessary on both sides of the customer–cloud provider model. As such, both sides must get a fair assessment of what is happening today. Understanding what the customer is doing individually creates an ‘honest conversation’. Create a performance and services assessment of internal delivery (corporate data center and software services), then determine which cloud providers meet the current and future-state targets. Understanding what is essential to your business is critical to having reasonable expectations and a proper cost/benefit return.

Legal, procurement, internal audit, business, and technology team members must get together to determine what is important and rate these items. This then allows for better dataset identification and better procurement of service providers.

  • The end result is that the business needs to determine its risk tolerance: what it is willing to accept. The universe of cloud providers allows businesses to identify those that can meet, and demonstrate adherence to, the criteria that matter to the business.

What matters is focusing on the dataset, and considering the period of time involved. The dataset released to the cloud must meet your internal safeguard and risk-tolerance criteria.

  1. Set Principles first – save money, keep agility, achieve availability
  2. Check the application – is it generating revenue? Could it create a loss-of-life scenario?
  3. Keeping it in-house does not eliminate the risk vs. having it in the cloud.

Must focus at the strategic level …

Shadow IT, an example:

  • Shadow IT is a problem and is still ongoing. One example from a security survey: at a bank in Canada, the marketing department ran a survey in Salesforce.com. The problem was that, in using the system, the data of private Canadian citizens was crossing the U.S. border, which is against the law. A re-architecture effort was required to correct these activities.

There is a need for awareness and education on the implications of engaging cloud providers and how the flow of datasets impact the business’ legal obligations.

Consumer Technology in Business:

  • Eran – 50% of people surveyed installed applications that are not allowed by their corporations and IT.  The consumerization of technology is creating complex and intertwined technology ecosystems that must be considered by the business, risk management, legal, and security.
  • It is your responsibility to do the due diligence on what cloud providers are doing to provide assurance, and to work with those that provide such information. The necessity is a balance between providing sufficient information-security confidence and not mapping out attack vectors for criminals.

Google Growth rate on Cloud:

  • 3,000 new businesses are signing up on the Google cloud every day – impossible to respond uniquely to each one individually.

Data Location

  • It is up to the customer to know the legal aspects and appropriate uses of the business data. Understanding the transportation of sensitive data across borders is the business’s responsibility.
  • It is up to the business to understand and act to protect the data of the business – pushing the information onto a Cloud provider is not a transfer of risk / ownership / responsibility.

If you had the chance today to rebuild your systems, would you do it the same way?

  • The cloud does provide unique technologies beyond what you already have today. Cloud providers have been able to rebuild their data centers around today’s data architectures and to leverage new technology.

Points of reality and impossibility

  • If an organization does not have deep identity and access management (IAM), it is a poor idea to try to bolt it on while transitioning to the cloud. Reasonable expectations must be held by both the consumer and the cloud provider.

Liability and Allocation between Customers and Clouds

  • Customers with data in their own data centers are basically self-insuring their operations. When moving to the cloud, these customers are transferring this to a third party. There is a financial aspect here. How can liability be balanced between customer and service provider?
  • When the customer absorbs all liability, they are hesitant to put X data in the cloud. If the cloud provider absorbs the liability, the cost will be too high.

Data in Space

  • People are putting data on the cloud based on rash decisions without unique risk assessments on the data sets and providers.

Agreeing on Liability in the Cloud

  • Organizations have been able to negotiate liability clauses with cloud providers. Ponemon Institute figures are used in determining the limit of liability and are a good way of arriving at a proper number in line with industry figures. I.e., if the Ponemon Institute says the cost of a breach is $224 per record and the business has 20,000 employee records, the limit of liability should equal the product of the two: $224 × 20,000 = $4,480,000. This has proven to be a reasonable discussion with cloud providers. Indemnification is generally a non-discussion point.
  • The world will move into specialized applications and services. These point organizations allow for specific legal and technology considerations appropriate for that niche. This is seen at the contract level, in notification terms, in prioritization of RTR, and across many areas.

Everything is negotiable for the right amount of money or love – Eran

  • Cloud providers do not like to do one-offs, but cloud providers, including Google, will negotiate.

APPROACH to cleanse data with confidence

  • The best tip is to encrypt data online… When de-provisioning systems and cleansing, consider fully rewriting databases / applications / instances with clean values. Is this a practical method of ensuring the data is sanitized? How long should the data remain in this state to ensure the clean values are pushed to other parallel instances?
  • Are PCI, SIGs, and similar financial services standards appropriate for the cloud provider? The responsibility always rests with the data owner. Internal controls must be migrated out to the cloud as evenly as they are applied internally. It is the business’s risk and responsibility.

Recommendations of the Panel

Archie Reed: Everyone becomes a broker, and he recommends that IT teams embrace this role. They need to understand how to source, and the chemistry and structure of the IT organization need to shift. It will, and must, include working with the business and such parties as legal, internal audit, and risk management.

Tanya Forsheit: I would love to see standards developed with customers participating in a meaningful way. The provider side has thought these through seriously over the last few years. The business-to-business side of the cloud–customer relationship is weak. Be reasonable.

Eran: There is a paradigm shift from a server you can touch, managed by an admin you hired, to one acquired by contract through a cloud provider. Google has over 200 security professionals. Bank robbers go where the data is – the cloud has the data.

How do you respond to a vulnerability? How do you respond to a hack? Are these the new, right questions to ask of cloud providers?

Michelle Dennedy: Leverage and plan for a loss with cloud providers.

Drue: There are risks you can identify and mitigate on the technology side, and there are financial tools (insurance, etc.) that must be deployed.

Question and Answer:

  • Cloud providers have the opportunity to provide a dashboard to track and demonstrate controls. These are hard to build, we know.
  • FedRAMP and continuous auditing are a future component of the cloud that (some) providers will adhere to and demonstrate.

An engaging panel with some interesting and useful points raised. I welcome any feedback and expansion on the ideas above,

James DeLuccia