The latest report shows significant changes in the scale and type of attacks being executed, as recorded by one of the largest internet infrastructure companies (drawing on additional data sources). Akamai published their quarterly report today (January 23, 2013) and I am nearly through it … a few striking details shift how I will recommend clients identify, consider, and mitigate risks. The top two items that are significant (one obvious) and important include:
- China held its spot as the #1 source of observed attack traffic at 33%, with the United States at #2 at 13% (Not a huge surprise but an affirmation for many)
- The attack traffic seen during the activist (Operation Ababil) DDoS attacks was ~60x larger than the greatest volume Akamai had previously seen for similar activist-related attacks (The volume, intensity, and strategy of the attacks matter, as most do not consider a SIXTY TIMES factor in risk mitigation calculations)
About the Akamai State of the Internet report
Each quarter, Akamai publishes a “State of the Internet” report. This report includes data gathered from across the Akamai Intelligent Platform about attack traffic, broadband adoption, mobile connectivity and other relevant topics concerning the Internet and its usage, as well as trends seen in this data over time. Please visit www.akamai.com/stateoftheinternet
You can request access to the report (registration required) here, and the individual images from the report are available here. There is also a great set of write-ups coming out here and here.
Senior leadership (board of directors, audit committee members, CIO, COO) must ensure these realities are absorbed into the organization’s business processes. The leadership and strategy shifts required to tackle these evolutions remain an executive responsibility.
James DeLuccia IV
*See me speak at RSA 2013 in February on The Death of Passwords
Posted in Compliance, IT Controls, Security
Tagged 2013, akamai, best practices, cybersecurity, ddos, denial of service attacks, ffiec, infosec, it compliance and controls, IT Controls, james deluccia, jdeluccia, sec disclosure, Security, statistics
What best practices can we derive from FINRA based on the attack and subsequent response by Davidson & Co.?
A common statement by practitioners is that regulations speak to intent and reasonable security safeguards, but do not stipulate precisely what is required to satisfy a regulation. It is understood by most that security and managing risk is a fluid process, so (thankfully) most regulations allow for time as a factor in meeting the needs of the consumers of such systems and technologies. This breach provides excellent quantitative factors to consider for any security program, regardless of industry.
Davidson & Co. was breached using SQL Injection – a nasty and highly successful type of attack. Records were stolen and FINRA fined the business $375,000 based on a number of factors to include:
- No known use of stolen customer data (the fine is based on the lack of proof that the data was used maliciously, despite the fact there was a blackmail attempt by the perpetrators.)
- Davidson & Co. was cooperative with law enforcement
Safeguards FINRA highlighted as necessary to prevent the breach:
- Sensitive data must be encrypted
- Vendor passwords should be changed from default settings
- Network logs should be actively managed and reviewed sufficiently to identify network intrusions
- Firewalls and application services should be configured to minimize direct connections to the public internet (including databases)
- Deploying an active detection solution, such as Network Intrusion Detection
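The root cause of this breach deserves a concrete illustration. As a minimal sketch (the table, column, and data below are hypothetical, not from Davidson & Co.'s systems), the parameterized-query pattern is the standard defense against SQL injection:

```python
import sqlite3

# Hypothetical customer table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'alice')")

def find_customer(conn, name):
    # The vulnerable pattern builds the query via string concatenation,
    # so input like "' OR '1'='1" rewrites the SQL itself.
    # The parameterized form below makes the driver treat the value
    # strictly as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM customers WHERE name = ?", (name,)
    ).fetchall()

print(find_customer(conn, "alice"))        # [(1, 'alice')]
print(find_customer(conn, "' OR '1'='1"))  # [] - the injection attempt fails
```

Parameterization alone is not a full program, but it directly removes the attack vector used here.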
Finally, an interesting point – Davidson & Co. stated that a 3rd party auditor had conducted a penetration test and failed to breach the security. This is important, as it speaks to the necessity of conducting such tests in balance with a full information security program. Such a program must at least include an internal evaluation of firewalls, network configurations, server management, change control, people-process, and the essential IT Controls required to ensure a satisfactory level of operational integrity (secure; compliant; happy customers).
Through this judgment FINRA clearly states what is expected – encrypted sensitive data (as is encouraged by more than 50 state and federal laws); current security safeguards; and serious attention wherever required. The FTC, SEC, and UK counterparts have provided exceptional detail over the past few years and should be considered through regular updates to each company's GRC programs.
Link to the ComputerWorld article is here.
Link to another great write up is here at Wired’s Threat Level (more details of perpetrators).
Thoughts / Insights?
Posted in Compliance
Tagged 2010, best practices, Compliance, data breaches, denial of service attacks, fines, finra, it compliance and controls, IT Controls, regulation, Security
As mentioned in prior posts, Cloud security and addressing the risks that exist (the new risks and the new tools to address them) is fundamental to ensuring a successful and beneficial use of Cloud provider environments. The RSA London conference highlighted several strong documents to help approach the best practices for cloud security. The two most commonly referenced were:
A nice article (October 2009 “Amazon EC2 attack prompts customer support changes“) posted on TechTarget highlights the Denial of Service Attack against a hosted website on AWS EC2. Check out the article here. Overall the results from this attack were very promising for instilling confidence in Amazon AWS, but also highlights the duties and next steps in evolving beyond simply “starting instances” on the Cloud. A few of the key points that jumped from the screen, and should be carefully considered include:
- “The problem was that no one could see the complete picture…” AWS took 18 hours to respond to the attack – primarily because the backend AWS environment (internal IP traffic) was just fine, while the outside public-facing IP was bogged down.
- AWS responded immediately to fix the issue – demonstrating their dedication to ensuring a great operating environment
- The target organization acknowledged that “they weren’t taking full advantage of AWS’s unique characteristics” to reduce the impact of this type of attack. Indeed it is the availability of new enterprising environments and access to a broad set of resources that makes the Cloud such a rich platform.
- There are ways and means of improving the operational integrity of solutions leveraging the Cloud, but it requires effort. Peter DeSantis, VP of AWS EC2, states that “customers take proactive measures, such as distributing instances for redundancy and safety. He said that there were distinct advantages in a cloud computing environment that many weren’t aware of or haven’t learned about…We are underplaying tools that are at people’s disposal…”
- A great set of lessons are further elaborated in the article. An additional observation – no other customer operating environments were reportedly impacted, which speaks very positively for Amazon’s architecture and current deployment.
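DeSantis's "distribute instances for redundancy" advice can be sketched in a few lines. This is a hypothetical illustration of the placement logic only (the zone names are illustrative, and no real AWS call is made), not Amazon's implementation:

```python
from itertools import cycle

# Illustrative availability-zone names; distributing instances across zones
# means a single-zone or single-endpoint outage does not take everything down.
zones = ["us-east-1a", "us-east-1b", "us-east-1c"]

def place_instances(count, zones):
    # Round-robin: each new instance lands in the next zone in rotation.
    rotation = cycle(zones)
    return [next(rotation) for _ in range(count)]

placement = place_instances(5, zones)
print(placement)
# ['us-east-1a', 'us-east-1b', 'us-east-1c', 'us-east-1a', 'us-east-1b']
```

In practice the same idea extends to spreading load balancer endpoints and DNS entries, so an attack saturating one public IP does not stall the whole service.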
Other thoughts and concerns?
James DeLuccia IV
Posted in Compliance
Tagged 2009, best practices, cloud computing, Compliance, denial of service attacks, it compliance and controls, IT Controls, PCI DSS, regulation, rsa, Security, virtualization
What happens to the organization when the data that represents the heart of the business is distributed through Twitter, Facebook, torrent networks, gaming consoles, iPhones, Google phones, and similar peripherals? Many would state that DLP is the holy grail for ensuring the data never reaches these platforms, but I would challenge that statement with the fact that much content moving forward will be generated from these devices. The proliferation of platforms, interfaces, and available APIs, together with the now efficient and mature malware market, creates a new risk landscape.
Visit me next week live to discuss these challenges in depth at RSA London 2009. I have brought together leading thinkers in this space and interjected client engagements to make it relevant and actionable. A brief (9 minute) podcast was published last week, and may be found here with the abstract, or here for the direct link to the mp3.
A new risk landscape exists – how have you adjusted?
James DeLuccia IV
Posted in Compliance
Tagged botnet, Compliance, data breaches, denial of service attacks, grid computing, it compliance and controls, pci, rsa, Security, twitter, virtualization
There is a great deal of misinformation regarding the Denial of Service Attack that has been ongoing. While many of the facts are not fully available the misinformation is plainly visible.
- First off, a denial of service attack (DoS or DDoS) can be launched from anywhere in the world.
- Secondly, such an attack is typically carried out using computers that have been infected by malware – unbeknownst to the user / owner.
- Thirdly, such attacks can be coordinated through multiple locations – the end result is no absolutely clear view as to the originator of the crime.
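Because the attack arrives from thousands of infected machines, a defender's first view is usually aggregate traffic counts rather than any single culprit. A minimal sketch (the log lines and threshold are made up for illustration) of spotting the abnormal per-source volume a DDoS produces:

```python
from collections import Counter

# Hypothetical access-log lines: "source-ip method path".
log_lines = [
    "203.0.113.5 GET /",
    "203.0.113.5 GET /",
    "203.0.113.5 GET /",
    "198.51.100.7 GET /about",
]

def top_talkers(lines, threshold=3):
    # Tally requests per source IP and flag any source at or above threshold.
    counts = Counter(line.split()[0] for line in lines)
    return [ip for ip, n in counts.items() if n >= threshold]

print(top_talkers(log_lines))  # ['203.0.113.5']
```

Note that even a flagged IP is only an infected intermediary, not the originator – which is exactly why attribution is so murky.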
The Wall Street Journal article, New Web Attacks Hit Some South Korean Sites, today blended two stories together: that of the present cyberattack, and loose ties to how N. Korea is having leadership changes and is more aggressive militarily (a weak correlation to be sure). Another news story at The Hankyoreh paper (link is in English and available in Korean) states that 26,000 computers in South Korea were executing the DDoS attack. They provide an interesting perspective on how this attack differs from others. It is inaccurate, however, for them to be physically examining a computer (as shown in the picture included in the article) and its chips to determine the cause of the attack – it is malware (MyDoom, Conficker, etc…)
Additional Articles with information on this denial of services attack:
The security industry has been stating the danger of allowing such malware to infect systems, and the result is now evident. This attack is being orchestrated with only 26,000 computers. The University of California researchers had control of over 182,914 hosts – nearly 7 TIMES more systems – and this ongoing attack is from one particular geographic location.
A note of caution: attacks such as this create a lot of noise. Such noise can be used to conceal illicit activities of criminals. In the security and audit world we expect, and have in place, technology to trigger alerts and initiate security protocols when such events occur. If the number of events exhausts those resources, however, then prioritization begins to play a part. Businesses, and governments, must consider these conditions and risks when responding to such situations.
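The prioritization point is worth making concrete. A minimal sketch (severity values and alert names are illustrative) of triaging alerts by severity, so high-volume DDoS noise does not bury the one intrusion it may be concealing:

```python
import heapq

# Incoming alerts as (severity, message); 1 = most severe.
alerts = [
    (3, "DDoS traffic spike"),
    (1, "admin login from unknown host"),
    (3, "DDoS traffic spike"),
    (2, "malware signature match"),
]

# A min-heap keeps the most severe alert on top regardless of arrival order.
queue = []
for severity, message in alerts:
    heapq.heappush(queue, (severity, message))

# Analysts pull the most severe alert first.
print(heapq.heappop(queue))  # (1, 'admin login from unknown host')
```

When event volume exceeds analyst capacity, a scheme like this is what decides which alerts actually get eyes on them.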
Situations such as these should evoke thought and action, but not necessarily motion – as Benjamin Franklin stated quite eloquently, “Never confuse motion with action”. It would be ill advised for governments to erect vast regulatory bodies / Czars / Committee reviews of this situation – the cause and solution are known; only precise action and response are required.
Contrary Thoughts / Insights into the actual originators?
James DeLuccia IV
My profile on LinkedIn
I will be speaking at RSA 2009 Europe, please register and join the discussion on the future of data security and privacy (links coming soon)
These past few days have seen numerous packet attacks against some very prominent institutions. While most of these targets are simply PR and marketing front-ends, and not truly the operating environments, the attacks are annoying and introduce a few specific threats and concerns that should be considered today in your environment and for the future of the internet.
More packets are not the answer – The typical response to an attack is to attack back, or add encryption, or create greater integrity checks on the data. Adding to the pile of data pushing through a pipe (by increasing payload size for cryptography and MD5 hashes) only clogs a system that is already clogged. Careful consideration should be taken before rolling out additional solutions without weighing the material effect such solutions and technologies will have on the environment and attack threat.
Separate is not always separate – It is common and best practice to operate core business services in secure environments that are resilient to such DDoS attacks and other common public internet attack vectors. Unfortunately the technical architectures sometimes overlap and cross as a result of cost management and a simple lack of policies and procedures. These public attacks should highlight the need to carefully review:
- Your current redundant and resilient environments
- Careful review and continued adherence to your change control and approval program.
Attacks may be closer than they appear – These attacks originate from someplace, but not the place one thinks. The attackers have employed trojaned computers from around the world and are orchestrating this through a command and control server. This is a very common practice. Investigators, businesses, and governments should be cautious in pointing fingers as to the source, given the ability to take over systems from one country or from the whole world.
Regulating bandwidth – Today most organizations throttle bandwidth for different types of traffic and based on source-destination ip addresses. It is quite conceivable we could live in an online world where DoS attacks are ongoing and continuous. The next step in the arms race would be a land grab on routers and other devices to secure virtual private channels. Conceivably one could see Google locking a specific set of traffic for every network device.
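The throttling described above is commonly implemented as a token bucket: each source gets a sustained rate plus a burst allowance, and traffic beyond it is dropped or queued. A minimal sketch (the rates and timings are illustrative, not any vendor's implementation):

```python
class TokenBucket:
    """Per-source rate limiter: sustained rate plus a burst allowance."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.last = 0.0           # time of last check, in seconds

    def allow(self, now, cost=1):
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True   # packet passes
        return False      # packet dropped / queued

bucket = TokenBucket(rate=1, capacity=2)           # 1 req/s sustained, burst of 2
results = [bucket.allow(now=0) for _ in range(3)]  # burst of 3 at t=0
print(results)             # [True, True, False] - third packet exceeds the burst
print(bucket.allow(now=1)) # True - one second later, one token has refilled
```

Keyed per source IP (or per traffic class), this is the building block behind the kind of virtual private channel carve-outs speculated about above.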
More thoughts spring to mind, but this is a reminder to address technology problems through a well-thought-out strategy, not through one-off shots.
James DeLuccia IV
Posted in Compliance
Tagged audit, availability, bandwidth, best practices, cnbc, Compliance, cyberattacks, denial of service attacks, nyse, privacy, regulation, Security