tssci security

Day 13: ITSM Vulnerability Assessment techniques

Lesson 13: Just this week, in lessons 12 and 13, we've covered -- at least partially -- how to significantly reduce risk and vulnerability in system and network infrastructure. We touched on protecting applications, but we weren't able to go into specific detail about how to handle the execution paths that attacks take -- only the levels of defense available through functionality.

At the system or OS level, there are functional defenses such as hardware stack protection and ASLR. Not every operating system includes these sorts of protections, and not all do them correctly. Sometimes they can even be subverted. I hear comments all the time about people trying to subvert GRSecurity or Vista ASLR. We know it's possible, it's just a matter of time and resources.

Today, we're going to cover how to decrease the footprint that malware or shellcode/exploits can access on our systems, and therefore on the underlying applications.

Part I: Information assurance vulnerability assessment — Protecting system access

Back in another place, another time -- firewalls did not exist and everyone logged into machines using Berkeley r-services. This led to complications such as the AIX "rlogin -froot" fiasco (which, just over a year ago now, resurfaced in the latest Solaris telnet, but I digress) -- add to that the fact that nearly everyone was on a public IP.

Everyone on a public IP with every service running -- all unencrypted. We have come a long way, and maybe we should even pat ourselves on the backs. However, success in security is measured in the number and criticality of breaches -- and if you compare where we are now to where we were back then: we're failing!

In the mid-1990s, Cisco Systems purchased a company claiming to have the first Network Address Translation (NAT) device -- the PIX (or Private Internet Exchange). That company's name was Network Translation, Inc., and you can read about it all in the Wikipedia article about the Cisco PIX. Thus began the era in which Cisco would market RFC 1918 as a "security feature".

Unfortunately, Cisco was too late in my mind. SSH was also released that year, and any security professional in the know had built their own NAT solution where the only external services were SSH and maybe FTP and HTTP. It's funny that over the years, the only application/network service I've been owned through was SSH -- and it's doubly ironic that I used OPIE (one-time passwords) to access my account through SSH, as well as to su to root.

However, the reason I was owned, and the reason that many people get owned, is not a technical vulnerability (although at some point there had to be at least one software weakness exposed), but instead the simple concept of trust. While I was using SSH and OPIE to access a machine that I shared with another trusted administrator (who also used OPIE to access his account and su to root), it turns out that this other administrator had made a special exception for a certain female friend of his. That person logged in from a terminal located at a certain commune that was built and occupied by a reputable hacking group. The ssh binary she used contained a backdoor, and the rest was history. Or, well, our machines were history.

In the late-90's (when this sort of thing went down regularly), the attacks were focused on system access to the server, and the exposure was mostly to attack the server. In the case above, it was the client that was backdoored -- but this wasn't the primary focus.

Today, client-side exploits are the primary focus, especially along with backdoors and botnets. The trust models become worse as we keep adding file and protocol handlers to our browsers and IM clients. AV software adds more file and protocol handlers to detect exploits in file and protocol handlers, making it more vulnerable to attack as well. The attack surface and trust relationships built into modern software are at a peak of risk and exposure.

Recommendation: As an introduction to the below recommendations, you might want to check out my older post on Client-side attacks: protecting the most vulnerable.

First of all, protection at the local area network (LAN) level must come in the form of an SSL VPN (with both client- and server-side certificates correctly configured). You get your DHCP and your DNS locally. The DNS server should only have entries for the DNS server, the DHCP server, and the SSL VPN server. I prefer open-source software that is code-reviewed before deployment. Again, Pyramid Linux or Vyatta on embedded hardware are logical starting points. For an SSL VPN server, I recommend checking out SSL-Explorer.
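The locked-down resolver policy described above can be sketched in a few lines. This is a hedged illustration only -- the hostnames and addresses are hypothetical placeholders, and a real deployment would enforce this in the DNS server's zone configuration rather than in application code:

```python
from typing import Optional

# Outer-LAN DNS policy: answer only for the DNS, DHCP, and SSL VPN
# servers; everything else gets no answer (NXDOMAIN in a real server).
# Names and addresses below are placeholders, not real infrastructure.
ALLOWED_RECORDS = {
    "dns.example.lan":  "10.0.0.2",
    "dhcp.example.lan": "10.0.0.3",
    "vpn.example.lan":  "10.0.0.4",
}

def resolve(name: str) -> Optional[str]:
    """Return an address only for the three whitelisted servers."""
    return ALLOWED_RECORDS.get(name.lower().rstrip("."))
```

Anything outside the whitelist simply fails to resolve, which forces clients onto the SSL VPN before they can reach anything useful.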

Once connected via SSL VPN, clients should be on another LAN that has its own DHCP and DNS server, plus two proxy servers (preferably something open-source such as Squid). None of these DNS servers has full Internet resolution -- only the forward and reverse entries for the local servers. Of the two proxy servers, one can connect to the Intranet, or local services -- while the other can connect to external sources such as partners and the Internet.

Each Squid server can have access to full Intranet or Internet DNS. This way the clients must set their web browser proxy (or pick it up automatically). Yes, this requires setting your proxy differently depending on whether you plan on accessing Intranet or Internet websites. I consider this a bonus.
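The split-proxy decision can be automated the same way a browser proxy auto-config (PAC) file would make it. Here is a minimal sketch, assuming hypothetical proxy hostnames and internal domain suffixes (none of these names come from the setup above):

```python
# Split-proxy policy sketch: Intranet hosts go through one Squid
# instance, everything else through the other. All names below are
# illustrative placeholders.
INTRANET_PROXY = "intranet-proxy.example.lan:3128"
INTERNET_PROXY = "internet-proxy.example.lan:3128"
INTRANET_SUFFIXES = (".example.lan", ".corp.example.com")

def choose_proxy(host: str) -> str:
    """Mimic the decision a browser PAC file would make per request."""
    if host.endswith(INTRANET_SUFFIXES):
        return INTRANET_PROXY
    return INTERNET_PROXY
```

In practice you would express the same rule as a `FindProxyForURL` function in a PAC file served to clients, so they "pick it up automatically" as mentioned above.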

Jay Beale gave talks on ClientVA at Toorcon and Shmoocon that involved setting up RSnake's Mr. T (Master Recon Tool) to check browsers for versions and plugin versions. Additionally, the Squid proxies can have every URL (or even POST operation) whitelisted using an open-source package such as Whitetrash.

Mr. T and the Squid proxy should be able to verify versions of QuickTime, Windows Media Player, Flash, and even Adobe Acrobat Reader (or any other PDF viewer with a file handler / browser plugin). They won't find out which versions of MS Office or OpenOffice are installed, however. This is why some people recommend running MOICE, the Microsoft Office Isolated Conversion Environment, instead of the full Office version. Before I open files in MOICE and convert them into Office 2007 XML format, I also usually scan them with OfficeCat from the Snort project.
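The core of a Mr. T-style version check is just comparing what the browser reports against the minimum patched versions you track. A minimal sketch, assuming made-up minimum version numbers (these are placeholders, not real advisory data):

```python
# Minimum patched versions we require per plugin. The numbers here are
# illustrative placeholders, not taken from any real advisory.
MINIMUM_VERSIONS = {
    "QuickTime": (7, 4, 1),
    "Flash":     (9, 0, 115),
    "AcroRead":  (8, 1, 2),
}

def parse_version(version: str) -> tuple:
    """Turn '9.0.47' into (9, 0, 47) so tuples compare numerically."""
    return tuple(int(part) for part in version.split("."))

def outdated_plugins(reported: dict) -> list:
    """Return the plugins whose reported version is below our minimum."""
    return [name for name, version in reported.items()
            if name in MINIMUM_VERSIONS
            and parse_version(version) < MINIMUM_VERSIONS[name]]
```

A recon page like Mr. T gathers the `reported` dictionary from the browser; the comparison logic is the easy part.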

Additionally, users on WiFi will benefit from the SSL VPN immediately, but they could still be at risk if their drivers are vulnerable to a kernel-based attack. Assessment software such as WiFiDEnum will check local drivers for these types of known vulnerabilities.
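Conceptually, a WiFiDEnum-style check enumerates installed wireless driver versions and flags any that appear in a known-vulnerable list. A rough sketch, with entirely hypothetical driver names and version strings (not a real advisory feed):

```python
# Known-vulnerable (driver file, version) pairs. These entries are
# illustrative placeholders, not real vulnerability data.
KNOWN_VULNERABLE = {
    ("bcmwl5.sys", "4.100.15.5"),
    ("w29n51.sys", "8.0.1.59"),
}

def vulnerable_drivers(installed: dict) -> list:
    """installed maps driver file name -> version string; return the
    names of any exact matches against the known-vulnerable list."""
    return sorted(name for name, version in installed.items()
                  if (name, version) in KNOWN_VULNERABLE)
```

The real tool pulls the installed driver inventory from the Windows registry and matches against a maintained vulnerability database; the matching step is what is shown here.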

Part II: Software assurance vulnerability assessment — File inclusion attacks

Best {Remote|Local} file inclusion attack tools: dorkscan.py, FIS, WebSpidah, Wapiti, Pitbullbot, RFIScan.py, Syhunt Sandcat

Best {Remote|Local} file inclusion attack helper tools: AppPrint, AppCodeScan, Inspekt, RATS, PHP-SAT, PHPSecAudit, PSA3, PFF, SWAAT, ASP-Auditor, LAPSE, Orizon, Milk

File inclusion primarily affects PHP, but the concepts can easily be extended to remote script execution in JSP or ASP pages, both HTML-based as well as driven by their scripting languages. There is quite a bit of information about file inclusion and remote scripting attacks available in the Web Application Hacker's Handbook, from Portswigger.
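In the spirit of the static-analysis helper tools listed above (RATS, PHPSecAudit, and friends), here is a minimal sketch that greps PHP source for include/require statements whose argument is built from raw request input -- the classic remote file inclusion pattern. A real scanner does proper taint analysis; this is only a pattern match:

```python
import re

# Flag include/require statements that reference raw request input
# ($_GET, $_POST, etc.) before the statement ends -- the textbook
# remote/local file inclusion sink.
RFI_PATTERN = re.compile(
    r"\b(include|include_once|require|require_once)\s*\(?\s*"
    r"[^;]*\$_(GET|POST|REQUEST|COOKIE)\b")

def find_file_inclusion(php_source: str) -> list:
    """Return the line numbers of suspicious include/require statements."""
    return [lineno for lineno, line in enumerate(php_source.splitlines(), 1)
            if RFI_PATTERN.search(line)]
```

Pattern matching like this produces false positives and misses indirect taint (input passed through a variable first), which is exactly why the heavier tools in the list above exist.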

Portswigger also recently posted about CSRF and threat ratings on his blog, and he'll be teaching (from his book?) at BlackHat Europe next week.

Posted by dre on Thursday, March 20, 2008 in Defense, Hacking, Itsm and Security.
