Ajax Security opens up a whole new can of worms
*Update on the TS/SCI Security Blog*
First of all, I would like to announce that I will be retiring the long, diluted threads that have recently appeared on the TS/SCI Security Blog. This is the last of the "longer" threads I've been saving up for our readers before I embark on a new journey.
What should you expect? Well, I've decided to start a blogging theme called "A Day-In-The-Life of an Enterprise Security Team". This theme will consist of 20 short posts, all pertaining to what I see as the future of Enterprise-class Information Security Management teams. The posts will demonstrate how such teams should conduct their strategic, operational, and tactical security measures. The primary focus will be defense (hardening, protections) and security testing (penetration-testing, inspections, audits).
These new posts will be available over the course of the next few weeks, possibly bleeding into the new year. Be sure to tune in as we uncover the latest in defense/testing trends! I know that Marcin is working on a few upcoming posts as well, which will hopefully be staggered throughout the "Day-In-The-Life" theme.
For this post, I'm excited to talk about the Ajax Security book by Billy Hoffman and Bryan Sullivan. The last blog post, Collaborative Systems and Ajax/RIA Security, uncovered some of the highlights of the book, but here are my last words and review before moving on to bigger and better things.
*Ajax Security*: What was good
Here are some of the "new" concepts that I enjoyed most:
- Hijacking Ajax apps
- Attacking Offline Ajax apps
Normally, the worry with Ajax is that client-side code can be changed (e.g. through function clobbering) to hijack the application, as described above. The solution is to reduce trust in client-side problem domain logic before its results are brought back to the server-side. If input is taken, it should always be processed through a whitelist filter (with an additional blacklist to prevent specific attacks) before being executed server-side or passed on to another internal service. If client-side problem domain logic is expected to follow a certain path of execution, the end result of that flow should match expectations, and the server-side should provide the input validation to enforce that expectation-matching. Client-side validation can be easily bypassed by changing the control flow of execution (that's what an in-browser debugger such as Firebug can do when the code is readily available for viewing and editing).
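To make that concrete, here is a minimal sketch of server-side whitelist validation (with a small blacklist as a second layer). The Express-style handler, route, and field name are my own assumptions for illustration, not code from the book.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Whitelist: only the characters we explicitly expect in a username.
const USERNAME_WHITELIST = /^[A-Za-z0-9_\-]{1,32}$/;
// Optional blacklist as a second layer against known-bad patterns.
const SCRIPT_BLACKLIST = /<\s*script/i;

app.post("/api/profile", (req, res) => {
  const username: unknown = req.body?.username;

  // Never trust values computed or validated only in the browser.
  if (
    typeof username !== "string" ||
    !USERNAME_WHITELIST.test(username) ||
    SCRIPT_BLACKLIST.test(username)
  ) {
    return res.status(400).json({ error: "invalid username" });
  }

  // Only after these server-side checks is the value safe to execute
  // against or hand off to another internal service.
  return res.json({ ok: true });
});
```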
Offline applications flip this around: the data and the problem domain logic live on the client, so in order to prevent hostile access to (and subsequent querying of) local objects, the client-side Ajax code itself must validate its input. So, basically, the reverse is now true for offline applications! Instead of relying on server-side input validation, client-side validation is necessary to keep locally stored data safe.
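As a rough sketch of that offline case, consider a note-taking app that keeps its records on the client; the localStorage-backed store, key layout, and whitelist below are purely hypothetical.

```typescript
// Whitelist for a locally stored record id (assumption for illustration).
const NOTE_ID = /^[0-9]{1,10}$/;

function loadOfflineNote(rawId: string): string | null {
  // Offline apps must validate on the client: there is no server
  // round-trip left to catch malicious input before the local store
  // (and the local problem domain logic) is touched.
  if (!NOTE_ID.test(rawId)) {
    throw new Error("rejected note id");
  }
  // Use the validated id as an exact key lookup rather than feeding it
  // into dynamically built queries or eval'd client-side code.
  return window.localStorage.getItem(`note:${rawId}`);
}
```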
- Ajax proxy exposure of third-party XML/JSON data
Ajax relies on XHRs (XMLHttpRequests) to send data back and forth between client and server, and the same pattern extends into server-to-server transactions. This is very useful when information is not purposefully published via a Web Service such as SOAP or REST. The server-to-server traffic can be used to exchange XML or JSON data with third parties that expose their APIs to the public (or semi-privately).
Ajax proxies therefore allow the creation of "mashups". The third party provides access to its data/API, which another site re-hosts. Mashup access for clients is then served by the first-party web application, which relays requests to the third-party site and acts as a "go-between". Often, the third party also acts as a first party to other sites -- and sometimes to regular clients as well (though possibly restricted by transactions per second or API calls per day).
A hostile client can attack this Ajax proxy infrastructure in a variety of ways -- choosing to go direct to the third party to attack the sites that re-host its data/content/API access, or attacking the third party through the first-party site. Worst of all, "Aggregate Sites" will access all kinds of APIs and content (think: RSS) and bring them to one central location, usually a home portal such as iGoogle. Hostile clients can then attack any of the aggregated content before it reaches the home portal. The web becomes a series of interconnected transitive trust relationships that can be exploited.
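A bare-bones sketch of such a first-party "go-between" proxy might look like the following; the route, the third-party URL, and the validation rule are all assumptions for illustration, not anything from the book.

```typescript
import express from "express";

const app = express();

// First-party "go-between": the browser calls /proxy/weather, and the
// server relays the request to a third-party API on the client's behalf.
app.get("/proxy/weather", async (req, res) => {
  const city = String(req.query.city ?? "");

  // Treat the client as hostile: validate before relaying upstream,
  // otherwise the proxy becomes a generic attack relay against the
  // third party (or against other consumers of its data).
  if (!/^[A-Za-z ]{1,40}$/.test(city)) {
    return res.status(400).send("invalid city");
  }

  // Hypothetical third-party endpoint; the response is passed back
  // as-is, which is exactly the transitive-trust link described above.
  const upstream = await fetch(
    `https://api.example-weather.test/v1?city=${encodeURIComponent(city)}`
  );
  res.type("application/json").send(await upstream.text());
});
```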
Best parts about the book
I really enjoyed the suggested defenses against "mashup" attacks as well as JSON API Hijacking. Without going into detail (I don't want to ruin the book and the authors' hard work), I can say that the explanations are not only better than mine -- but that imagination and creativity in finding optimal solutions were clearly first and foremost in the authors' minds. This is really where their effort shone.
The authors also did a great job in the first few chapters, as well as in two others (the "Request Origin" and "Worm" chapters, specifically), exposing all of the intricacies of Ajax, HTTP, HTML, and XHR abuse issues. They showed that with great power comes great responsibility. The level of attack capability that HTTP/XHR can muster is scary indeed. Whether it's a simple HTML IFrame or a complex XHR -- the authors show both the good and evil sides of the equation.
It's quite possible that many Star Wars-minded Ajax security fans will be calling Billy Hoffman the great "Obi-Wan", and pdp "Lord Vader", representing the "light" and "dark" sides of The Force behind the power wielded by Ajax. Case in point is the IFrame, which is used to "jail" the aforementioned "mashup" Ajax technology.
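As a rough illustration of that "jail", one common approach is to load the mashup widget from a separate origin inside an IFrame so the same-origin policy fences it off from the host page. The widget URL below is hypothetical, and the `sandbox` attribute is a later HTML5 hardening option, not necessarily the book's technique.

```typescript
// Sketch of an IFrame "jail" for mashup content: the widget is served
// from a separate origin, so the same-origin policy keeps its script
// away from the host page's DOM and cookies.
function mountJailedWidget(container: HTMLElement): void {
  const frame = document.createElement("iframe");
  frame.src = "https://widgets.example-partner.test/feed"; // hypothetical
  frame.setAttribute("sandbox", "allow-scripts"); // no same-origin access
  frame.width = "300";
  frame.height = "200";
  container.appendChild(frame);
}
```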
What was missing [and we'll hopefully see in a second edition!]
There was a lot missing, and I'm sure I can't remember all of it right now. Let's start with what I gathered from my last blog post, which covered Zimbra in rough detail.
Zimbra, GMail/GDocs, and many others have this "Design Mode" feature in common, where the site allows you to edit HTML (usually mail) content inside the browser. Web spiders and crawlers have a difficult time replaying this data, so it is currently impossible for web application security scanners to test "Design Mode" portions of websites.
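For readers who haven't seen it, a "Design Mode" editor is typically wired up roughly like this (element ids hypothetical), which is exactly why a crawler has nothing obvious to replay:

```typescript
// Minimal sketch of a "Design Mode" (rich HTML) editing surface: the
// editor lives inside an iframe whose document has designMode enabled.
const frame = document.getElementById("composer") as HTMLIFrameElement;
const editorDoc = frame.contentDocument;

if (editorDoc) {
  editorDoc.designMode = "on"; // the whole iframe document becomes editable
  editorDoc.body.innerHTML = "<p>Draft your mail here…</p>";
}

// Because edits happen purely in the DOM (no form fields and no fixed
// request to record), a spider sees nothing it can resubmit or fuzz.
```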
Web application security testing tools have these and other issues that are being worked out as the tools mature. The authors didn't provide solutions or even discuss these sorts of problems, although they did cover a lot of related "Presentation Layer" information. You definitely don't want to miss what they have to say about attacking that layer! There hasn't been a lot of research in this area that I've seen, and some of the attacks seem incredibly daunting.
For example, content cached locally for browsers to access could be subverted (say, on a kiosk). The authors recommend "no-cache", but I looked to the standards and there's a bit more to it than they cover in the book. I'm working on a document that describes ways of handling this particular issue, but clearly more research will need to come about in this area. Hopefully more sites will be adding SSL and automatic HTTP redirects to SSL (which is another fix for the "Aggregate Sites" problem mentioned above).
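As a hedged sketch of what "a bit more to it" looks like in practice, a sensitive response usually needs the fuller set of anti-caching headers rather than a bare "no-cache"; the route and framework here are assumptions.

```typescript
import express from "express";

const app = express();

// "no-cache" alone is not enough: sensitive responses should also forbid
// storage and mark themselves as already expired for legacy HTTP/1.0
// caches (think of a shared kiosk browser).
app.get("/mail/inbox", (_req, res) => {
  res.set({
    "Cache-Control": "no-store, no-cache, must-revalidate",
    "Pragma": "no-cache", // HTTP/1.0 caches
    "Expires": "0",       // already expired
  });
  res.send("…inbox markup…");
});
```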
Here comes the best part! I know that a lot of you are curious whether the book covers Samy. Of course it does! The book also covers the less exciting but discussion-relevant Yamanner worm. I was very excited to read this chapter, but also afraid of some of the "dark side" prescriptions it gave.
XSS is the root cause of a few advanced attack vectors, and Ajax makes them turbo-speed. I discovered this on my own after viewing a disturbing video about XSS worms, and spoke up about it on the ha.ckers.org blog. It's possible that I subconsciously picked up the basics of this new attack vector from Wade Alcorn, who introduced a less complete concept a few months earlier. I also led a short discussion (that same week) about these techniques at the OWASP Chicago chapter meeting.
Clearly, Wade put two and two together to come up with this meta-level concept. Two years before announcing his Inter-protocol Exploitation and Communication papers, Wade had announced a vulnerability in the Asterisk server. Combine BeEF with Metasploit, and you have yourself a nice hole through any firewall. Even before RSnake, hackathology, and dk of BlogSecurity/MichaelDaw picked up on the Inter-protocol topic, there was already early discussion on RSnake's blog related to JS Spam and IMAP+XSS. Even earlier work had been done under the labels HTML Form Protocol Attack and its Extended version (much of that ground is now called OS Command Injection, MX Injection, SMTP Injection, or Mail Command Injection -- all really the same thing).
Two weeks after I envisioned the "new Web worm vector", Wade made The Register in an article on "Worms 2.0!". I wouldn't be surprised if he was the author behind the Find, Exploit, & Cure XSS w0rms video on milw0rm that inspired my delayed vision. Earlier that same month, two exploits were released for Yahoo! Messenger: one for the Webcam Viewer and the other for its Uploader. The XSS w0rms video demonstrated an attack against Meebo, where all Meebo users could have their YIM (or AIM, MSN, etc.) buddies enumerated and attacked via an XSS worm. Combine these concepts and the attack effectively crosses the fourth wall from an XSS-based web application worm to a shellcode-based "fat application" worm.