How dangerous can IoT cyber weaknesses be? … Expanding on the assault against Krebs on Security

There is every indication that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things” (IoT) devices — mainly routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords.

Krebs' site was brought down with a denial-of-service attack roughly twice the size of any previously recorded. As highlighted above, the majority of it was executed by leveraging IoT devices that were sold and/or set up insecurely. I am not pointing fingers here, but this, if anything, must be a clear call to action for all of us in the consumer business to address these massive cybersecurity concerns at their inception: within the product development cycle and its core components (Raspberry Pi, I am looking at you; open source libraries, you too). Note that most of these devices sit at the consumer end of the “who is responsible for updating, patching, securing, and cleaning up” ownership spectrum.
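To make the “weak or hard-coded passwords” point concrete, here is a minimal sketch of the kind of default-credential check a vendor or auditor could run during provisioning. The credential list is a small illustrative sample of pairs famously abused by IoT botnets, not a complete dictionary.

```python
# A minimal sketch: flag devices still using factory-default credentials.
# The list below is illustrative; a real audit would use a much larger
# dictionary of known vendor defaults.

DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "password"),
    ("root", "12345"),
    ("admin", "1234"),
}

def is_factory_default(username: str, password: str) -> bool:
    """Return True if the username/password pair matches a known default."""
    return (username, password) in DEFAULT_CREDENTIALS

# Example: a DVR shipped with admin/admin and never re-provisioned
print(is_factory_default("admin", "admin"))     # True: an easy botnet recruit
print(is_factory_default("admin", "x9#Lq2!v"))  # False
```

A check like this belongs in the provisioning flow itself, so a device refuses to go online until the default is changed.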

We are developing and deploying code at scale. There are currently 10 BILLION Internet-connected (IoT) devices, and that number is only going to radically increase. Now is our chance to protect the stability of our connected world, ensure the safety of our families, and maintain the integrity of our life-dependent services.

Curious about the IoT scale problem? Check out this fun infographic from Verizon, chock full of stats across industries. This is not F.U.D., but a challenge for us to ensure that the platforms we are creating can be sustained, and that the associated freedoms — such as Krebs' excellent work — can persevere.

Source: The Democratization of Censorship — Krebs on Security

Expanding on Failing to scale for DevOps … Inspired by Mirco Hering

What you otherwise do once a week manually you might now do 100 times a day with automation

Imagine your organization wants to embrace higher-velocity deployment and ultimately achieve a full-stack engineering model reflective of DevOps patterns and practices. A common oversight I see in organizations is captured in the quote above from Mirco Hering. What your organization does today must scale, but what does that mean?

Here is what it means from a security operator's perspective:

  1. Verification engines must assess code quality, verify compliance with business requirements, and run dynamic and static scans; all of these integrity checks along the road must now occur not 1x per build (monthly?), but perhaps daily (30x a month)
  2. All of the systems need to be built every time, queued every time, and bits & bytes consumed to achieve the activities of #1
  3. Findings signed off, tickets created, and workflows activated to loop findings back to engineers — all in less than 24 hours (< 12 hours to hit a “B” score)
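To put item 1 in rough numbers, here is a back-of-the-envelope sketch of how the verification load multiplies when builds go from monthly to daily. The app count, scan duration, and build frequency are illustrative assumptions, not benchmarks.

```python
# Back-of-the-envelope sketch: verification load when moving from one
# monthly build to daily builds. All numbers are illustrative assumptions.

def monthly_scan_hours(apps: int, builds_per_month: int, hours_per_scan: int) -> int:
    """Total scanner-hours needed per month across all applications."""
    return apps * builds_per_month * hours_per_scan

before = monthly_scan_hours(apps=1000, builds_per_month=1, hours_per_scan=2)
after = monthly_scan_hours(apps=1000, builds_per_month=30, hours_per_scan=2)

print(before)           # 2000 scanner-hours/month at 1x per build
print(after)            # 60000 scanner-hours/month at 30x per build
print(after // before)  # 30: the capacity (and license) multiplier
```

The multiplier is the point: every license, queue, and human sign-off in the pipeline has to absorb that same 30x.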

Now the twist, as Mirco highlights:

  • Can the infrastructure handle the increased volume?
  • Do we have the licenses to run that many simultaneous activities?
  • Where are the human choke points (does a human provision the tool, sort the data, create the tickets, verify the tickets, etc.)?
  • Is the quality of these checks under automation sufficient to identify issues and efficiently lead to code improvement?

Most organizations have more than one group developing, so as team size grows the climb becomes even steeper — 1,000 apps? 2,000? 3,000?

How are you handling these growth curves internally to enable the growth, instead of becoming the department of NO or the team that “didn’t have time to…”?

Source: Not A Factory Anymore | A blog about software delivery, Agile & DevOps by Mirco Hering

DevOps Instrumentation: Metrics, Monitors, and Logs | DevOps Automation

As an author and practitioner in the DevOps space, I spend a good deal of my quality time with brilliant engineers and visionary product managers who have amazing capabilities. An area where I enhance their solutions is isolating, simplifying, and eliminating cyber security threats across the development lifecycle into production.

This is a challenge: as we move to greater abstraction, there is a distinct need for a hacker mindset at each stage. I am pulling together research on this topic and look forward to plenty of hysterics and feedback 😉

For now … I wanted to share a nice article on logging and monitoring — a core area of trouble that should become less opaque by applying these principles. Looking forward to collaborating this year.

Should services log? To what detail? Where should those logs go? To a syslog daemon, to files on disk, to a message queue? When is it appropriate to start instrumenting your code? Once you’ve started, what sort of information should you be recording? And what metrics system makes sense? Here are a few recommendations:

  • Understand that logging is expensive. I’ve seen entire teams of absolutely brilliant engineers spend years building, managing, and evolving logging infrastructure.
  • Services should only log actionable information.
  • Serious, panic-level logs may have to be consumed by end users.
  • Structured data needs to be consumed by machine, to take necessary actions.
  • Logging has limited applicability.
  • Avoid multiple production levels (warn, info, error) and runtime level configuration. In production you should be logging information that needs to be seen. An exception is debugging which is useful during development and problem diagnosis. Therefore, make the application log level a dynamic configuration variable.
  • It’s the responsibility of the OS or the agent to route the stdout/stderr to the desired destination.
  • Make sure that your logging agent is lightweight and does not consume too many system resources.

Monitoring: In God we trust, the rest we monitor.
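Two of the logging recommendations above — structured, machine-consumable records written to stdout, and a dynamic log level — can be sketched with Python's standard logging module. The `LOG_LEVEL` variable name and the "payments" logger are my own illustrative choices, not from the article.

```python
# Sketch: structured (machine-consumable) logs to stdout, with the level
# read from a dynamic configuration variable rather than hard-coded.

import json
import logging
import os
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # Emit each record as a single JSON line for machine consumption
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)  # the OS/agent routes stdout onward
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.propagate = False                     # avoid duplicate output via root
logger.setLevel(os.environ.get("LOG_LEVEL", "INFO"))  # dynamic, not hard-coded

logger.debug("cache miss")                # suppressed unless LOG_LEVEL=DEBUG
logger.error("charge failed, order=123")  # actionable: emitted as one JSON line
```

An operator can then flip `LOG_LEVEL=DEBUG` for problem diagnosis without touching the code, which is exactly the dynamic-configuration point above.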

Source: DevOps Instrumentation: Metrics, Monitors, and Logs | DevOps Automation

My Labor Day Reading continues: How the NSA’s Firmware Hacking Works and Why It’s So Unsettling | WIRED

Sometimes you are just so much in the Flow with life that you don’t get to dive in as much as you would like on all topics. Recently I have been training CrossFit heavily in preparation for some competitions coming in October, and I have been very busy with some new projects. I also have a few personal pursuits underway that have pretty much made the evenings more a part of my workday than anything else… so this Labor Day weekend I worked, studied, did research, tinkered, hacked, and honestly just focused on consuming, not listening to ANY media outlets. I wanted to cut the noise and just get into the particulars. Now, those particulars are not always great for the regular reader, so I have included below a simple and concise write-up from Wired on this topic. I’ll try to share all the research soon, but for now … happy Labor Day


The Trojanized firmware lets attackers stay on the system even through software updates. If a victim, thinking his or her computer is infected, wipes the computer’s operating system and reinstalls it to eliminate any malicious code, the malicious firmware code remains untouched. It can then reach out to the command server to restore all of the other malicious components that got wiped from the system.

Even if the firmware itself is updated with a new vendor release, the malicious firmware code may still persist because some firmware updates replace only parts of the firmware, meaning the malicious portions may not get overwritten with the update. The only solution for victims is to trash their hard drive and start over with a new one.

The attack works because firmware was never designed with security in mind. Hard disk makers don’t cryptographically sign the firmware they install on drives the way software vendors do. Nor do hard drive disk designs have authentication built in to check for signed firmware. This makes it possible for someone to change the firmware. And firmware is the perfect place to conceal malware because antivirus scanners don’t examine it. There’s also no easy way for users to read the firmware and manually check if it’s been altered.
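The missing check the article describes — a device refusing firmware it cannot verify — can be sketched as follows. For brevity this uses an HMAC with a shared key; a real vendor scheme would use an asymmetric signature (e.g., Ed25519) so devices hold only a verification key. The key and image bytes are purely illustrative.

```python
# Sketch of the verification step the article says drives lack: refuse to
# flash firmware whose tag doesn't check out. HMAC with a shared secret is
# used here only for illustration; real signing would be asymmetric.

import hashlib
import hmac

VENDOR_KEY = b"illustrative-vendor-key"  # assumption; not a real key

def sign_firmware(image: bytes) -> bytes:
    """Vendor side: produce a tag shipped alongside the firmware image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def verify_before_flash(image: bytes, tag: bytes) -> bool:
    """Device side: refuse to flash unless the tag checks out."""
    return hmac.compare_digest(sign_firmware(image), tag)

firmware = b"\x7fELF...legitimate update..."
tag = sign_firmware(firmware)

print(verify_before_flash(firmware, tag))                # True: flash it
print(verify_before_flash(firmware + b"backdoor", tag))  # False: reject
```

Because drives ship with neither the tag nor the check, any modified image is accepted — which is the whole opening the article describes.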

Source: How the NSA’s Firmware Hacking Works and Why It’s So Unsettling | WIRED

How much did the power outage at Delta cost? | Reuters

In the world of technology, it is often very hard to put a $ figure on an event. A power outage is particularly hard, but when that outage impacts the physical world (people flying) and your customers (those people flying), there is an opportunity to do some math.

Now I don’t know the full math, but the idea of a Skymile credit seems like a good start – and THANK YOU Delta for the credit. It wasn’t expected and is an awesome surprise.

So “As a valued Medallion® member, you expect more, which is why we’ve added 20,000 bonus miles to your SkyMiles account.” And we know that roughly 451 flights were canceled. Now, I was on a flight canceled on WEDNESDAY — pretty far removed from the Monday event — but we’ll keep the numbers conservative and state that total flight disruptions from the power outage were 451.

451 planes × 110 passengers (average guesstimate) × 20% (share with enough status to get a credit) × 20,000 miles = 198,440,000 miles

…1 skymile is roughly equal to 1.3 cents

So … 198,440,000 × 1.3 cents = $2,579,720

… Of course, if we assume EVERYONE got the miles, that would cost Delta $12,898,600 in SkyMiles credits alone.
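The arithmetic above, reproduced as a quick script. The passenger count and status share are the post's own guesstimates; the 451 canceled flights figure is from the Reuters piece.

```python
# Reproducing the post's rough math with integer arithmetic.

flights_canceled = 451
avg_passengers = 110       # average passengers per plane (guesstimate)
status_pct = 20            # percent with enough status to get the credit
miles_per_credit = 20_000

miles_granted = flights_canceled * avg_passengers * status_pct * miles_per_credit // 100
cost_dollars = miles_granted * 13 // 1000  # 1.3 cents per SkyMile

# Upper bound: every passenger on every canceled flight gets the credit
everyone_dollars = flights_canceled * avg_passengers * miles_per_credit * 13 // 1000

print(miles_granted)     # 198440000
print(cost_dollars)      # 2579720
print(everyone_dollars)  # 12898600
```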

Article on the power outage:

Atlanta-based Delta, the second-largest U.S. airline by passenger traffic, said it had canceled 451 flights after a power outage that began around 2:30 a.m. EDT (0630 GMT) in Atlanta. Flights gradually resumed about six hours later.

Source: Power outage at Delta causes flight cancellations, delays | Reuters

CNBC shows how not to handle a security screwup

Sometimes the best lessons happen in public and are based on our mistakes. Take a look at the series of errors made by CNBC related to collecting passwords from its online readers. The commentary is a bit wild, but I think the passion shows the level of expectation placed on such a reputable business.

When someone entered a password into the text box and hit the button, a lot more was going on than a test. The password was being sent over the site’s http (unencrypted) connection to CNBC’s third-party partners, such as ScorecardResearch and SecurePubAds (DoubleClick).

After posting the findings on Twitter, a researcher who works on Let’s Encrypt (free, easy https for websites) joined the dogpile. He added that — inexplicably — CNBC was also saving the passwords to a Google Docs spreadsheet when the user hit “submit.”

Source: CNBC shows how not to handle a security screwup

Two more healthcare networks caught up in outbreak of hospital ransomware through very old vulnerability | Ars Technica

I have been developing a cybersecurity exploitation and threat lifecycle model, and this article caught my attention because it highlights the evolution of ransomware deployment. Initially spread through phishing, ransomware is now being delivered as the payload of automated network attacks. Interesting.

This also creates an interesting base cost of not safeguarding a network environment. Consider that the attacks are becoming automated (automatic identification of a server running a known vulnerability, then automatic installation of malware, which then automatically takes over the network for ransom): the attacks scale easily, and compromise approaches near certainty. More thoughts, developed out with hard data, to come on this topic.
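As a placeholder for that hard data, here is a toy expected-cost sketch: once exploitation is automated, the probability that an exposed vulnerable server gets found approaches certainty, so the expected loss approaches the full incident cost. Every number here is an illustrative assumption, not data.

```python
# Toy model: expected loss from running an exposed, vulnerable server.
# All figures are illustrative assumptions.

def expected_loss(p_compromise: float, ransom: float, downtime_cost: float) -> float:
    """Expected cost = probability of compromise times total incident cost."""
    return p_compromise * (ransom + downtime_cost)

# Manual, targeted attacks: low odds any one server is picked
manual = expected_loss(p_compromise=0.05, ransom=17_000, downtime_cost=100_000)

# Automated scanning for a known, very public hole: near-certain discovery
automated = expected_loss(p_compromise=0.95, ransom=17_000, downtime_cost=100_000)

print(manual < automated)  # True: automation sharply shifts the economics
```

The model is trivial on purpose: the interesting work is pinning down the probability and cost terms with real incident data.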

“This is really one of the first times we’ve seen ransomware spread by a network vulnerability,” Craig Wilson of Talos Research told Ars, …The malware, called “Samsam” by Talos, uses old, very public exploits right out of JexBoss—an open source vulnerability testing tool for JBoss. Once the malware has a foothold on the server, it spreads to Windows machines on the same network. “I wouldn’t be surprised if this [malware approach] was extended toward WordPress and other content management systems,” Wilson said. “This is really just the natural progression of ransomware.”

Source: Two more healthcare networks caught up in outbreak of hospital ransomware | Ars Technica