I read a NY Times article today on the recent attacks on Google and couldn't help commenting on it :), seeing how much it touches on the problems associated with modern web application development.
The short version is that in December 2009 Google was hit by Chinese hackers. The event was highly publicized in January and even caused a strain in relations between the US and China.
The new piece of the puzzle, from an undisclosed source, is that the attack was targeted at a development team working on the Google single sign-on system, GAIA. My understanding is that GAIA is responsible for all authentication to Google, probably being the same system servicing apps like Gmail, AdSense and even YouTube.
The attackers found out who the GAIA developers were by accessing an internal Google corporate directory called Moma. Then, probably using IM information from this system, they sent a link via MSN Messenger to one of the developers. The link in question took the victim to a site containing malware. The attackers then took over the Google employee's computer, accessed the Google code repository, and stole the code.
So what is the moral of this story? Well, there are several important conclusions that I want to discuss:
- Web applications are the main target of modern security threats
- Security testing early in the development lifecycle is critical
- Security education for employees is of utmost importance
- Developers of business critical applications could be the main target of attacks so they need to watch out
Web applications are the main target of modern security threats
We all know that the Internet has taken over a big portion of our day to day activities. We work, socialize and relax on the web. Web phenomena like social networking sites are changing human interaction rules and opening new communication avenues. In a way we live on the web. Web sites are our meeting places, pubs, museums, theatres.
What we don't realize is that our places on the web are much less protected than our places in the real world and much easier to attack. So it is perfectly understandable how security threats have shifted towards the web application layer and how nowadays the Russian mob has decided to invest in a full complement of software developers that work night and day on writing viruses, spyware and malicious sites.
The underground world has invested a lot in finding ways to attack the Web; however, companies and governments don't seem to put comparable effort into policing it.
What would law enforcement surveillance mean in the context of web application security? Certainly nothing like an actual policeman monitoring the videos on YouTube or your friends' posts on Facebook and blowing the whistle when some infraction has occurred... I'm mainly referring to compliance standards and policy enforcement through audits conducted with automated security scanners.
The magnitude of the Google attack is immense. Imagine that the hackers did manage to control the Google login mechanism without Google finding out. How many people do you think use their Gmail password for other things, like their bank accounts? The hackers could get control over the lives of thousands of users, sell their passwords, sell their personal information, use their e-mail accounts for spam, and trick them into installing malware...
Google, Facebook and Twitter become mission critical through their popularity. It's no wonder every one of these sites has been targeted in the past year. In fact, Google apps have been one of the favourite targets of our very own AppScan Security Research team (for ethical hacking, of course). Their latest disclosure was in March of this year.
In conclusion, while web application attacks keep increasing exponentially, organizations need to keep pace and strengthen their security measures, which in the case of web applications means writing secure code and properly testing it. That takes me to my second conclusion.
Security testing early in the development lifecycle is critical
You might wonder why I am talking about security testing of web applications when in this case it was clearly the user that was attacked through social engineering. Well, let's not forget the goal of the attack: capture the Google authentication code in order to identify security vulnerabilities and exploit them. While they were at it, maybe they even planted a backdoor in the existing code repository.
Many hacker attacks are never uncovered. How do you protect your code?
You test it!
The testing should occur as early as possible in the development lifecycle, and the type of testing employed is very important.
Security testing cannot protect on its own. Developers need to be empowered with security knowledge, the company must adopt a Secure Development Process, and every player in the development lifecycle needs to take an active part in security testing, use secure coding practices and be aware of security exploits (like the good old drive-by-download technique used in this case). That takes me to bullet #3.
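To make "test your code early" concrete, here is a minimal sketch of the kind of security regression test a developer could run on every build. The function and payload names are my own invention for illustration; the point is simply that classic XSS payloads go in, and the test asserts no unescaped markup comes out.

```python
import html

def render_greeting(username: str) -> str:
    """Build an HTML fragment from user input, escaping it first."""
    return "<p>Hello, " + html.escape(username) + "!</p>"

# Classic XSS payloads that must never survive unescaped.
XSS_PAYLOADS = [
    "<script>alert(1)</script>",
    '"><img src=x onerror=alert(1)>',
]

def test_output_is_escaped():
    for payload in XSS_PAYLOADS:
        page = render_greeting(payload)
        # The raw tags must not appear in the rendered output.
        assert "<script>" not in page
        assert "<img" not in page

if __name__ == "__main__":
    test_output_is_escaped()
    print("all XSS regression tests passed")
```

A test like this costs minutes to write; finding the same hole after a breach costs considerably more.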
Security education for employees is of utmost importance
No matter what network filters you use or what antivirus software you install, there's no better protection than knowing not to click on links from strangers. If that Google developer had followed this good web practice, nothing like this would have happened.
We often help organizations integrate security practices into their development process, and I often realize that most developers have no idea of the security threats out there. They are focused on product features and have almost no interest in security aspects. Educating your employees on security can go a long way, and you especially want to target those who work on business-critical applications like the login. This takes me to my final point.
Developers of business critical applications could be the main target of attacks so they need to watch out
In the Google case the hackers knew exactly where to hit and maybe this could have been prevented.
Should business-critical developers be isolated from the rest of the organization like monks? Why not? I would say that any developer who works on a security feature needs to be highly skilled in security aspects and definitely should not have their MSN address listed together with the rest of the employees'. Maybe these developers should also be required to have security clearance, to ensure they are not working for the bad guys.
Also, let's not forget the Source Control system. It should support access control and use the least-privilege approach. An actual developer of the code should have different privileges than someone who works on translating the code or entering security defects. You want to put enough security around your code that it can't simply be copied from some FTP server and uploaded somewhere else. I think we can safely assume that a proper configuration of the Source Control system could have saved a lot of pain in this case.
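As a rough illustration of least privilege in a source control system (the roles and actions below are invented for this example, not any particular product's model), the idea boils down to an explicit permission matrix where anything not granted is denied:

```python
# Hypothetical least-privilege matrix for a source control system.
# Role names and actions are invented for illustration.
PERMISSIONS = {
    "core-dev":   {"read", "commit"},
    "translator": {"read"},  # can read strings, cannot change code
    "qa":         {"read"},  # files defects, never commits
}

def is_allowed(role: str, action: str) -> bool:
    """Grant an action only if the role explicitly holds it; deny by default."""
    return action in PERMISSIONS.get(role, set())

if __name__ == "__main__":
    assert is_allowed("core-dev", "commit")
    assert not is_allowed("translator", "commit")   # least privilege at work
    assert not is_allowed("unknown-role", "read")   # unknown roles get nothing
    print("access checks behave as expected")
```

The deny-by-default rule is the important design choice: a translator or tester who falls for a phishing link then exposes read access at most, not the ability to plant a backdoor.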
The Google case is just one of many. We can only hope that organizations will learn from their mistakes and do more about protecting their web applications. Web technologies continue to evolve and many new challenges continue to arise every day.
Are you doing anything about it?