Tag Archives: james deluccia

Bored w/ Security warnings? MRIs show our brains shutting down when we see security prompts

Ever find yourself just click-click-clicking through every message box that pops up? Most people click through a warning (which in the land of web browsers usually means STOP, DON'T GO THERE!!) in less than 2 seconds. The cause appears to be habituation – basically, you are used to clicking, and now we have the brain scans to prove it!

What does this mean for you? You won't be able to re-wire your brain, but you can turn up your web browser's security settings so it refuses to connect to a site it would otherwise only warn you about. Simple – let the browser deal with it and take away one nuisance.

From the study:

The MRI images show a “precipitous drop” in visual processing after even one repeated exposure to a standard security warning and a “large overall drop” after 13 of them. Previously, such warning fatigue has been observed only indirectly, such as one study finding that only 14 percent of participants recognized content changes to confirmation dialog boxes or another that recorded users clicking through one-half of all SSL warnings in less than two seconds.

via MRIs show our brains shutting down when we see security prompts | Ars Technica. (photo credit Anderson, et al)

Don’t forget to check out – www.facebook.com/hntbh if you are looking for quick reminders. The book is coming along and chapter releases are (finally) coming in April!

1,600 Security Badges Missing From ATL Airport in 2 yr period – NBC News

While not a complicated or strategic topic that I would normally highlight, this one bit of news is from my home airport and personally meaningful.

Basically the report shows that 1,600 badges were lost or stolen in a 2 year period. This seems like a big number (2.6%), but this is a control that should have (and this was not highlighted in the broadcast) secondary supportive controls, such as:

  • Key card access review logs to prevent duplicate entries (i.e., same person cannot badge in 2x)
  • Analytics on badge entries against the work shifts of the person assigned
  • Access to areas not zoned for that worker
  • Termination of employees who don't report a lost/missing badge within 12 hours
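The first two controls above can be sketched in code. This is a hypothetical illustration of "anti-passback" (the same badge cannot enter twice without an intervening exit) and a shift-window check; the event format, badge IDs, and shift times are my own assumptions, not the airport's actual system.

```python
from datetime import datetime, time, timedelta

def anti_passback_violations(events):
    """events: list of (badge_id, direction) tuples in time order, where
    direction is 'in' or 'out'. Returns badge IDs that badged in twice
    without badging out in between."""
    inside = set()
    violations = []
    for badge_id, direction in events:
        if direction == "in":
            if badge_id in inside:
                # Same badge entering twice: possible cloned/shared badge
                violations.append(badge_id)
            inside.add(badge_id)
        else:
            inside.discard(badge_id)
    return violations

def outside_shift(entry, shift_start, shift_end, grace_minutes=30):
    """Flag a badge entry that falls outside the worker's assigned shift
    window, allowing a grace period on either side."""
    start = datetime.combine(entry.date(), shift_start) - timedelta(minutes=grace_minutes)
    end = datetime.combine(entry.date(), shift_end) + timedelta(minutes=grace_minutes)
    return not (start <= entry <= end)
```

For example, `anti_passback_violations([("B100", "in"), ("B100", "in")])` flags badge `B100`, and an entry at 3:00 AM against a 9-to-5 shift would be flagged by `outside_shift`. Real deployments layer these checks on the badge system's audit log rather than running them inline.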

There are safeguards highlighted in the broadcast that are good, but easily degraded to the point of having no value, including:

  • PIN (can be easily observed due to tones and no covering)
  • Picture (every movie ever shows how easily this is defeated)
  • An old badge could be re-programmed as a duplicate of another badge with a higher rank or an alternate security zone

Bottom line: organizations, especially those tasked with the safety of human life, must have both primary and secondary controls in place. Hopefully the remarks dismissing this as a minor risk are based on security assessments that include the considerations above (and perhaps more).

Article:
Hundreds of ID badges that let airport workers roam the nation’s busiest hub have been stolen or lost in the last two years, an NBC News investigation has found.

While experts say the missing tags are a source of concern because they could fall into the wrong hands, officials at Hartsfield-Jackson Atlanta International Airport insist they don’t pose “a significant security threat.”

via Hundreds of Security Badges Missing From Atlanta Airport – NBC News.com.

Also thanks to the new news aggregator (competitor to AllTops) Inside on Security for the clean new interface.

Best,

James


Moving forward: Who cared about encrypted phone calls to begin with…The Great SIM Heist


TOP-SECRET GCHQ documents reveal that the intelligence agencies accessed the email and Facebook accounts of engineers and other employees of major telecom corporations and SIM card manufacturers in an effort to secretly obtain information that could give them access to millions of encryption keys.

-The Great SIM Heist: How Spies Stole the Keys to the Encryption Castle.

This news made a number of people upset, but after studying it for several weeks and trying to consider the macro effects on regular end users and corporations, I have reached a contrarian point in my analysis.

Who cared?  Nobody (enough)

Sure, the implications are published and known, but who ever considered their cell phone an encrypted and secure mobile device? I don't think any consumer ever had that feeling, and most professionals who WANT security in their communications take special precautions – such as using the Black Phone.

So, if nobody expected it, nobody demanded it, and the feature was primarily used to support billing, then what SHOULD happen moving forward?

  • The primary lesson here is that our assumptions must be revisited, challenged, valued, and addressed at the base level of service providers
  • Second, businesses that depend on such safeguards (if they ever did, for instance for encrypted mobile communications) must pay for them

I would be interested in others' points of view on the lessons forward. I have spent a good deal of time coordinating with leaders in this space and believe we can make a difference if we drop the assumptions and hopes, and focus on actual effective activities.

Helpful links on the Black Phone by SGP:

FedRamp on the Cloud: AWS Architecture and Security Recommendations

In December Amazon released a nice guide with architecture layouts + tips across the NIST 800-53 standard. This is an important tool for ANY business looking to accelerate their operations into a distributed system model.

I took a few things away from this PDF – chief among them that every company moving to the cloud should read it. It not only provides an architecture layout that is critical in planning, but it also has numerous nuggets of awesome sprinkled throughout – an example:

 Many of the SAAS service providers do not have a FedRAMP ATO, so using their services will have to be discussed with the authorizing official at the sponsoring agency. Pg 28 <– sounds simple, but very costly if done under hopeful assumptions of acceptance!

Regarding the need to harden a base system:

AWS has found that installing applications on hardened OS’s can be problematic. When the registry is locked down, it can be very difficult to install applications without a lot of errors. If this becomes an issue, our suggestion is to install applications on a clean version of windows, snapshot the OS and use GPOs (either locally or from the AD server) to lock down the OS. When applying the GPOs and backing off security settings, reboot constantly because many of the registry changes only take effect upon reboot.

A bit about the White paper as described by Amazon:

Moving from traditional data centers to the AWS cloud presents a real opportunity for workload owners to select from over 200 different security features (Figure 1 – AWS Enterprise Security Reference ) that AWS provides. “What do I need to implement in order to build a secure and compliant system that can attain an ATO from my DAA?” is a common question that government customers ask. In many cases, organizations do not possess a workforce with the necessary real-world experience required to make decision makers feel comfortable with their move to the AWS cloud. This can make it seem challenging for customers to quickly transition to the cloud and start realizing the cost benefits, increased scalability, and improved availability that the AWS cloud can provide.

A helpful guide, and glad to see a major cloud provider enabling its clients to excel at information security operations – in this case, FedRAMP.

Mapping the Startup Maturity Framework to flexible information security fundamentals

After over a decade of working with startups and private equity, and the last 5 years of deep Big 4 client services in different executive roles (CISO, CIO advisor, Board of Directors support), I am certain there is a need for – and a lack of – adapted information security that reflects the size, maturity, and capabilities of the business. This applies independently to the product and to the enterprise as a whole. To that end, I have begun building models of activities to match each level of maturity, to try to bring clarity or at least a set of guidelines.

As I share with my clients … in some cases a founder is deciding between EATING and NOT. So every function and feature, including security habits, must contribute to the current needs!

I have begun working with several partners and venture capital firms on this model, but wanted to share a nice post that highlights some very informative ‘Patterns in Hyper-growth Organizations’ and what needs to be considered (employee type, tools, etc.). Please check it out – I look forward to working with the community on these models.

A snippet on her approach and great details:

We’re going to look at the framework for growth. The goal is to innovate on that growth. In terms of methods, the companies I’ve explored are high-growth, technology-driven and venture-backed organizations. They experience growth and hyper-growth (doubling in size in under 9 months) frequently due to network effects, taking on investment capital, and tapping into a global customer base.

Every company hits organizational break-points. I’ve seen these happening at the following organizational sizes:

via Mapping the Startup Maturity Framework | Likes & Launch.

Methodology for the identification of critical connected infrastructure and services — SAAS, shared services..

ENISA released a study with a methodology for identifying critical infrastructure in communication networks. While this is an important and valuable topic, I dove into this study for a particularly selfish reason … I am SEEKING a methodology we could leverage for identifying critical connected infrastructure (cloud providers, SAAS, shared services within large corporations, etc.) across the larger public/private sector. Here are my highlights – I would value any additional analysis, always:

  • Challenge to the organization: “..which are exactly those assets that can be identified as Critical Information Infrastructure and how we can make sure they are secure and resilient?”
  • Key success factors:
    • Detailed list of critical services
    • Criticality criteria for internal and external interdependencies
    • Effective collaboration between providers (internal and external)
  • Interdependency angles:
    • Interdependencies within a category of service
    • Interdependencies between categories of services
    • Interdependencies among data assets
  • Establish baseline security guidelines (due care):
    • Balanced to business risks & needs
    • Established at procurement cycle
    • Regularly verified (at least w/in 3 yr cycle)
  • Tagging/Grouping of critical categories of service
    • Allows for clean tracking & regular security verifications
    • Enables troubleshooting
    • Threat determination and incident response
  • Methodology next steps:
    • Partner with business and product teams to identify economic entity / market value
    • Identify the dependencies listed above and mark criticality based on entity / market value
    • Develop standards needed by providers
    • Investigate how monitoring to standards can be managed and achieved (in some cases contracts can support you, others will be a monopoly and you’ll need to augment their processes to protect you)
    • Refresh and adjust annually to reflect modifications of business values
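The tagging and criticality-marking steps above can be sketched as a simple scoring model. This is my own illustrative framing, not ENISA's: the field names, weights, and threshold are assumptions you would replace with your business's entity/market values.

```python
from dataclasses import dataclass, field

@dataclass
class ConnectedService:
    name: str
    category: str                  # e.g. "SAAS", "shared-internal", "IaaS"
    business_value: int            # 1 (low) .. 5 (critical to revenue/safety)
    dependents: list = field(default_factory=list)  # services relying on this one
    tags: set = field(default_factory=set)

def criticality(svc, weight_value=2, weight_deps=1):
    """Score = weighted business value + weighted count of dependent services,
    capturing both market value and interdependency angles."""
    return weight_value * svc.business_value + weight_deps * len(svc.dependents)

def tag_critical(services, threshold=8):
    """Tag services at or above the threshold so they fall under the baseline
    security guidelines and regular verification cycle described above."""
    for svc in services:
        if criticality(svc) >= threshold:
            svc.tags.add("critical")
    return [s.name for s in services if "critical" in s.tags]
```

For example, a shared identity provider that billing, CRM, and HR all depend on scores far above a standalone team wiki, which is exactly the kind of clean tracking and grouping the methodology calls for.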

I hope this breakout is helpful. The ENISA document has a heavy focus on promoting government/operator ownership, but businesses cannot rely on or wait for such action and should move accordingly. The above is heavily modified and original thinking based on my experience structuring similar business programs. A bit about ENISA's original intent for the study:

This study aims to tackle the problem of identification of Critical Information Infrastructures in communication networks. The goal is to provide an overview of the current state of play in Europe and depict possible improvements in order to be ready for future threat landscapes and challenges. Publication date: Feb 23, 2015 via Methodologies for the identification of Critical Information Infrastructure assets and services — ENISA.

Best, James

Big Data is in early maturity stages, and could learn greatly from Infosec :re: Google Flu Trend failure

The concept of analysing large data sets, crossing data sets, and seeking the emergence of new insights and better clarity is the constant pursuit of Big Data. Given the volume of data being produced by people and computing systems, stored, and now available for analysis, there are many possible applications that have not yet been designed.

The challenge with any new 'science' is that the path from concept to application is not always a straight line, or a line that ends where you were hoping. For businesses using this technology, as with information security, it requires an understanding of its possibilities and weaknesses. False positives and exaggerations were a problem in information security's past, and now the problem seems almost understated.

An article from Harvard Business Review details how the Google Flu Trends project failed in 100 out of 108 comparable periods. The article is worth a read, but I wanted to highlight two sections below as they relate to business leadership.

The quote picks up where the author is speaking about the problem of the model:

“The first sign of trouble emerged in 2009, shortly after GFT launched, when it completely missed the swine flu pandemic… it’s been wrong since August 2011. The Science article further points out that a simplistic forecasting model—a model as basic as one that predicts the temperature by looking at recent-past temperatures—would have forecasted flu better than GFT.

So in this analysis both the model and the Big Data source were inaccurate. There are many cases where such events occur; if you have ever followed the financial markets and their predictions, you know they are more often wrong than right. In fact, it is a psychological habit (flaw) that we humans zero in not on the predictions that were wrong, but on those that were right. This is a risky proposition in anything, and it is important for us in business to focus on the causes of such weakness and not be distracted by false positives or convenient answers.
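The "simplistic forecasting model" the Science article describes is what forecasters call a persistence or lag baseline: predict the next value from the recent past. A minimal sketch, on made-up data (the point being that any fancier model must beat this baseline before its predictions deserve trust):

```python
def baseline_forecast(series, window=3):
    """Predict each value as the mean of the previous `window` values,
    the same style of naive model the article says outperformed GFT."""
    preds = []
    for i in range(window, len(series)):
        preds.append(sum(series[i - window:i]) / window)
    return preds

def mean_abs_error(series, preds, window=3):
    """Average absolute gap between predictions and what actually happened."""
    actual = series[window:]
    return sum(abs(a - p) for a, p in zip(actual, preds)) / len(actual)
```

Run against stable data the baseline tracks closely; run against a sudden spike (a swine-flu-style surprise) and its error jumps, which is precisely the failure mode neither GFT nor the naive model could escape, and why the error comparison, not the model's sophistication, is the honest yardstick.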

The article follows up the above conclusion with this statement relating to the result:

“In fact, GFT’s poor track record is hardly a secret to big data and GFT followers like me, and it points to a little bit of a big problem in the big data business that many of us have been discussing: Data validity is being consistently overstated. As the Harvard researchers warn: “The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.”

The quality of the data is blamed here, and I would challenge that ..

The analogy is to information security, where false positives and similar problems were awful in the beginning and have become much better over time. The key data inputs and analysis within information security come from sources that are commonly uncontrolled and certainly not the most reliable for scientific analysis. We live in a (data) dirty world, where systems behave uniquely to the person interfacing with them.

We must continue to develop tolerances in our Big Data analysis and in the systems we use to seek benefit from it. This must be balanced with criticism, to ensure that the source and results are true and not an anomaly.

Of course, the counter argument .. could be: if the recommendation is to learn from information security as it has had to live in a dirty data world, should information security instead be focusing on creating “instruments designed to produce valid and reliable data amenable for scientific analysis”? Has this already occurred? At every system component?

A grand adventure,

James