tssci security

Day 7: ITSM Vulnerability Assessment techniques

Lesson 7: Today I wanted to bring the real meaning behind these techniques into the spotlight. Learning about how IT groups do real security is only part of this.

I'm also talking about what I've seen that IT security shops don't do. What penetration-testers or auditors don't recommend. What everyone misses. This is my way of shaping the industry. I feel it's extremely important.

You may laugh about some of the ideas or think they are trivial. That's fine -- let me know what you do and don't like! You may also be lost in a sea of acronyms and content. This is why I make sure to recommend additional books you can read that explain the topics well.

I see all of this as a step-up in instructional capital around vulnerability assessments. I want to give everyone a chance to improve their skillsets, techniques, and toolchains.

Part 1: Information assurance vulnerability assessment — Network Access Control

This post is not about NAC, the common technology you may all be familiar with or be studying/researching. The Cisco version of NAC is a poor concept. The endpoint security work in the IETF by Juniper Networks, the Microsoft work, and similar efforts from StillSecure and everybody else are also a complete waste of time. That industry and those products are dead or dying. So let's just skip Cisco NAC and concentrate on the real topic: access controls in networks.

Is anyone here familiar with VLANs? How about VACLs? What about Cisco's new access control layer in the network? This is more along the lines of what I'm talking about. However, I don't want to get into protecting the network infrastructure itself (that's next week).

Recommendation: There is a concept, which is really more of a dream that I have. In the spirit of Martin Luther King Jr. (who was born on this day, 79 years ago) -- I am going to get my favorite industry to believe in this dream. To believe in a network that has no traffic except SSL, GPG, and SSH (or any of the other secure channel technologies talked about on Day 5). Every packet that flies over the WiFi, over the Ethernet, and between servers is wrapped in happy encryption. All ports are closed unless they speak SSL.

But wait -- what about DNS? Surely, DNS must be used for the local Intranet -- to get to local servers and printers, and to get to an SSL VPN or automatically-configured SSL proxy (so that packets can go to/from the global Internet). But Internet-aware DNS can be restricted internally, so that only the SSL proxy has access to the Internet. This keeps people from tunneling SSH in DNS or other strange network affairs.
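As a compensating control, you can also watch for tunneling attempts in the DNS logs themselves. Here's a minimal sketch (the log format, resolver address, and thresholds are my own assumptions, not anything standard) that flags queries with unusually long or random-looking labels:

    import math
    from collections import Counter

    ALLOWED_RESOLVER = "10.0.0.53"  # hypothetical internal resolver / SSL proxy

    def shannon_entropy(s):
        # Bits per character; tunneled payloads tend toward random-looking labels.
        counts = Counter(s)
        total = len(s)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def suspicious(query_name, max_len=60, max_entropy=4.0):
        # Naive heuristic: very long names or high-entropy labels suggest
        # data being smuggled inside DNS (e.g., SSH tunneled over DNS).
        if len(query_name) > max_len:
            return True
        labels = query_name.rstrip(".").split(".")
        return any(len(l) > 20 and shannon_entropy(l) > max_entropy for l in labels)

    # Assumed log format: "client_ip query_name", one query per line.
    with open("dns-queries.log") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 2:
                continue
            client, name = parts[:2]
            if client != ALLOWED_RESOLVER and suspicious(name):
                print(f"possible DNS tunnel: {client} -> {name}")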

All other services besides some Intranet DNS, a little printer port access, and SSL can be completely shut down and shut off. We don't need them anymore. This is 2008, not 1998. Certainly, there will be exceptions at some companies or organizations -- but these can be monitored with compensating controls. Even Intranet servers and printers should technically be behind an "Intranet proxy" which works in a similar way to the Internet-facing SSL proxy. All traffic can thus be wrapped in SSL, so that printers can be accessed via HTTP (although it would be nice if they supported SSL natively). The Intranet proxy could then whitelist scripts coming from the Internet to prevent cross-site printing or other unwanted script execution.
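To verify that a network actually honors this policy, a simple audit script can try the common ports on each host and complain about anything outside the whitelist. A rough sketch follows -- the addresses and allowed port sets here are purely illustrative:

    import socket

    # Assumed policy: each host may only expose these ports. The addresses
    # and port sets are placeholders, not from any real deployment.
    POLICY = {
        "10.0.0.53": {53},            # internal DNS
        "10.0.1.20": {9100, 631},     # network printer behind the Intranet proxy
        "10.0.2.5":  {443},           # SSL proxy
    }
    COMMON_PORTS = [21, 22, 23, 25, 53, 80, 139, 443, 445, 631, 3389, 9100]

    def open_ports(host, ports, timeout=0.5):
        found = []
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            try:
                if s.connect_ex((host, port)) == 0:
                    found.append(port)
            finally:
                s.close()
        return found

    for host, allowed in POLICY.items():
        extras = set(open_ports(host, COMMON_PORTS)) - allowed
        if extras:
            print(f"{host}: policy violation, unexpected open ports {sorted(extras)}")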

Better yet -- the SSL proxies can be configured with concepts from ClientVA and Whitetrash. All client-side applications that access the Internet are not only passively verified for their version numbers (unlike NAC scanning, which actively verifies by probing), but are additionally only allowed to reach certain URIs via a whitelist.
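To make that concrete, here's a toy sketch of both checks in one place -- a hypothetical URL whitelist and minimum-version table, with the client version verified passively from the User-Agent header. None of the names or numbers come from Whitetrash or ClientVA themselves:

    import re

    # Hypothetical whitelist and minimum client versions -- placeholders only.
    URL_WHITELIST = {"update.example.com", "www.example.org"}
    MIN_VERSIONS = {"Firefox": (2, 0, 0, 11)}   # passive check, a la ClientVA

    def version_ok(user_agent):
        # Passively verify the client version from the User-Agent header
        # (no active probing, unlike NAC-style posture checks).
        m = re.search(r"Firefox/([\d.]+)", user_agent)
        if not m:
            return True   # unknown client: punt to other controls
        version = tuple(int(p) for p in m.group(1).split("."))
        return version >= MIN_VERSIONS["Firefox"]

    def allow_request(host, user_agent):
        return host in URL_WHITELIST and version_ok(user_agent)

    print(allow_request("update.example.com", "Mozilla/5.0 ... Firefox/2.0.0.11"))  # True
    print(allow_request("evil.example.net", "Mozilla/5.0 ... Firefox/2.0.0.11"))    # False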

I'm not really big on Windows or Apple file sharing for distributed clients. iSCSI might be ok on a server backend network, but these sorts of protocols don't belong on a safe network regardless of how they are encrypted. I'm mostly referring to SMB, which is safe to turn off in 2008. We have Intranet wiki software that allows files to be uploaded instead.

Intranet web servers that do allow file uploads/downloads should be checked for the presence of web shells or malicious JavaScript. Some network-IPS devices block some web shells or malicious JavaScript content, but this is hit or miss. It's best to run a utility such as SpyBye locally, or to utilize a commercial tool such as Nessus or PVS to detect malicious JavaScript. While on the server, documents can also be scanned for vulnerable versions using Metagoofil or OfficeCat.
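For a sense of what local detection looks like, here's a naive signature scan over an upload directory. The patterns and path are illustrative only -- a real tool such as SpyBye or PVS does far more than substring matching:

    import os
    import re

    # Naive signature list -- common web shell and obfuscated-JS idioms.
    SIGNATURES = [
        re.compile(rb"eval\s*\(\s*base64_decode", re.I),        # PHP web shell idiom
        re.compile(rb"document\.write\s*\(\s*unescape", re.I),  # obfuscated JS loader
        re.compile(rb"passthru\s*\(|shell_exec\s*\(", re.I),    # command execution
    ]

    def scan_uploads(root):
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    data = f.read()
                for sig in SIGNATURES:
                    if sig.search(data):
                        print(f"suspect file: {path} matched {sig.pattern!r}")
                        break

    scan_uploads("/var/www/uploads")   # hypothetical upload directory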

Also, ideally all user clients would be thin clients -- but this is possibly a dream for a different day.

Part 2: Software assurance vulnerability assessment — Stored XSS

Best Stored-XSS attack tools

w3af, HTMangLe, Hackvertor (or anything Gareth Heyes writes), PostInterceptor, Tamper Data, Burp Suite, Paros, OWASP WebScarab, XSSscan.py, Acunetix XSS Scanner, Syhunt Sandcat Free, Wapiti, OWASP CAL9000, Wfuzz, SPIKEproxy, Gamja, screamingCSS

Best XSS attack helper tools

Web Developer, RSnake's Security Bookmarklet methodToggle, .mario's Encoding tools, Dean Edwards' Packer, JSMin, RSnake's XSS Cheat Sheet, OWASP SWFIntruder, Hackvertor (and everything else Gareth Heyes writes), HTMangLe, ExtendedScanner, Hunting Security Bugs' Reference for ASP.NET Control Encoding and Web Text Converter, WhiteAcid's XSS Assistant, Net-Force Tools' NF-Tools, Hunting Security Bugs' Overlong UTF, RefControl, User Agent Switcher, ProxMon, OWASP Pantera, AppPrint, AppCodeScan, FindBugs, FxCop, Pixy (for PHP), Milk (for Java), SWAAT (for ASP, JSP, PHP), ASP-Auditor (for ASP.NET), XSSDetect (for .NET), Mod-Security, PHP-IDS, CORE GRASP, Inspekt, RATS, PSA3, PFF, PHP-SAT, PHPSecAudit, Orizon, LAPSE

Unlike black-box testing for Reflected-XSS, it is highly recommended that every time you change a parameter or POST a form, each test string be unique. Know which tool you used and where in the application each vector was initially sent. Stored-XSS can pop up anywhere throughout the web application, much in the same way that a fuzzer can crash a fat-client application by sending a sequence of strings. This is why humans are needed and manual testing must be done to expose all flaws.
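A minimal way to keep that bookkeeping straight is to tag every payload with a unique identifier and record where it was planted. Here's a sketch -- the tracker and its names are hypothetical, not from any of the tools above:

    import itertools

    # Every payload carries a unique tag so that when it renders somewhere
    # else in the application later, you can trace it back to the tool,
    # page, and parameter that planted it.
    counter = itertools.count(1)
    sent = {}   # tag -> (tool, page, parameter)

    def make_payload(tool, page, parameter):
        tag = f"xss{next(counter):04d}"
        sent[tag] = (tool, page, parameter)
        return f'<script>alert("{tag}")</script>'

    p = make_payload("manual", "/profile/edit", "nickname")
    print(p)                  # <script>alert("xss0001")</script>
    # Later, when xss0001 fires on some unrelated page:
    print(sent["xss0001"])    # ('manual', '/profile/edit', 'nickname')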

However, I am highly impressed with the accuracy of the methods used when doing manual code review to find XSS. There is plenty of SQL injection and XSS that can only be found by looking through the source code. Often, obtaining the source code through information disclosure or other means (identifying an open-source application or component is the easiest and most obvious way) will allow finding the most obscure of stored-XSS vulnerabilities.
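As a toy illustration of the idea -- nothing close to a real static analyzer like Pixy or RATS, which track data flow -- here's a line-at-a-time grep for PHP echo statements that output request data without an obvious encoding call:

    import re
    import sys

    # Flag lines that echo request data without an encoding function nearby.
    # Pattern-matching single lines like this misses plenty; it only shows
    # the shape of the check.
    TAINT = re.compile(r"(echo|print)\s+.*\$_(GET|POST|REQUEST|COOKIE)", re.I)
    SANITIZED = re.compile(r"htmlspecialchars|htmlentities", re.I)

    def review(path):
        with open(path, encoding="utf-8", errors="replace") as src:
            for lineno, line in enumerate(src, 1):
                if TAINT.search(line) and not SANITIZED.search(line):
                    print(f"{path}:{lineno}: possible XSS sink: {line.strip()}")

    for path in sys.argv[1:]:
        review(path)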

Posted by Dre on Tuesday, January 15, 2008 in Defense, Hacking, Itsm and Security.
