Ever since the day I set up VMware and its equivalents in a lab, I have been amazed at the potential of sandboxed systems for testing and everyday use. In these environments I built honeynets, tested software exploits, analyzed viruses and other malware, and deployed hundreds of enterprise applications (using legitimate keys, mind you) to test and evaluate unique architecture deployments. Over time these virtual sandboxes became a critical tool that I ran in my home lab and on my laptops as I traveled around the world. A few years ago I helped a major international company design a secure architecture for the virtualization of its entire North American data center – consolidating nearly 600 physical servers down to fewer than 200!
The end result: the back room full of one-off sandboxed devices is the past, and our future is virtualized applications and devices (soon "devices" will drop out of our vocabulary as we virtualize them into blocks of application virtual memory – think of Check Point's virtual firewall implementations, where dozens of firewalls run on a single device). The challenge that we in the audit, security, and business space face is ensuring these deployments are done with more diligence and forethought than the wave of wireless implementations that sprouted all over the business.
The parallel between virtualization and wireless adoption is unfortunately apt, and only proactive effort by all parties can help us avoid another train wreck in our IT control environments. Wireless implementations mostly happened in grassroots fashion (much like users plugging dial-up modems into the company PBX to reach AOL, or installing pcAnywhere over a modem because IT had not yet provided the remote access the user wanted): users attached Linksys routers to the corporate network, or turned their laptops into access points instead of clients.
The lesson from the past (dial-up, remote access, wireless access) is that architecture must change, and a structure must be put in place to adapt to new technologies. Refusing to adjust and develop new solutions only magnifies the threats to the business. Security controls must be adapted to these virtual environments and may need to embrace contextual security – the idea that security is applied based on the data being processed rather than the physical location (which is key when the physical is removed from the equation).
Andreas Antonopoulos of Network World had a nice description of the conundrum for this “physical problem” – “In a virtualized environment, some of the old concepts have to go: IP addresses do not identify servers because servers can be redeployed on the fly to a different subnet. So your “IP A.A.A.A can send packets to IP B.B.B.B” access control design is no longer relevant or helpful. What was at IP A.A.A.A has moved to a different subnet/data center/continent.”
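To make that conundrum concrete, here is a minimal sketch (in Python, with entirely hypothetical rule tables and labels, not any vendor's API) of why an "IP A.A.A.A can send to IP B.B.B.B" rule breaks when a VM is redeployed, while a contextual policy keyed on what the workload *is* survives the move:

```python
# Hypothetical IP-based ACL: "source IP may send packets to destination IP".
IP_ACL = {("10.1.1.10", "10.1.2.20")}

def ip_allowed(src_ip, dst_ip):
    return (src_ip, dst_ip) in IP_ACL

# Hypothetical contextual policy: keyed on workload role, not location.
LABEL_POLICY = {("web-tier", "db-tier")}

class Workload:
    def __init__(self, label, ip):
        self.label = label  # what the workload is (travels with the VM)
        self.ip = ip        # where it currently sits (can change on the fly)

def label_allowed(src, dst):
    return (src.label, dst.label) in LABEL_POLICY

web = Workload("web-tier", "10.1.1.10")
db = Workload("db-tier", "10.1.2.20")

assert ip_allowed(web.ip, db.ip)       # both checks pass initially
assert label_allowed(web, db)

# The database VM is redeployed to a different subnet/data center:
db.ip = "172.16.5.99"

assert not ip_allowed(web.ip, db.ip)   # the IP rule silently breaks
assert label_allowed(web, db)          # the contextual rule still holds
```

The point of the sketch is only the shape of the problem: any control anchored to an address must be rewritten on every migration, while a control anchored to the workload's identity or data classification moves with it.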
Richard Bejtlich has a great article summarizing his and Ofir Arkin's thoughts on NAC and on why virtualized security has its weaknesses. It is important to understand these risks, as they carry over into the virtualized application space too. His post is under ShmooCon 2007 Wrap-Up.
I would appreciate any comments on additional resources or approaches to securing this aspect of the growing data center – especially as the industry moves to more shared and remote hosting solutions, where virtualization will proceed unchecked simply because it is so inexpensive to implement.