Mark Jenkins

Information Technologist


Passphrase Cracking

A fellow member of Skullspace forgot a valuable encryption passphrase.
I cracked it with a custom program I wrote in Python. Here’s a sample run:

$ python sample.key 
37 words, 33 punctuation choices, 2 constants
up to 196791012 passphrases will be tried
found #tigeran7ysalmon after 17884626 tries
L3E9iJuT298YBoT3m3asSws4Hy9GTTM193w9U4ogtdXGU4zpDifF 2013-08-03T10:51:55Z

Before coding, I conducted a forensic password interview, and he remembered:

  • One of four words, “tiger”, “bear”, “salmon”, and “elephant”
  • Word is capitalized or lower case
  • The use of a four-letter constant, “an7y”, right after that
  • The use of a punctuation mark such as #, possibly as a prefix, suffix, or middle separator, with the word-plus-constant combination possibly used twice

So he was thinking something like #bearan7y#tigeran7y.

Starting with these four words, I built a larger word list: I added each of the original words with the first or last letter missing, then added a capitalized and fully upper-case version of every word in this larger list. I also added the blank word ('') to cover the case of no word being used at all.

WORDS_ORIGINAL = WORDS = ('tiger', 'bear', 'salmon', 'elephant')

# with last letter missing
WORDS = WORDS + tuple( word[:-1] for word in WORDS_ORIGINAL )
# with first letter missing
WORDS = WORDS + tuple( word[1:] for word in WORDS_ORIGINAL )

WORDS_LOWER = WORDS # all the lower case variations so far
WORDS = WORDS + tuple( word.capitalize() for word in WORDS_LOWER )
WORDS = WORDS + tuple( word.upper() for word in WORDS_LOWER )
WORDS = WORDS + ('',) # word missing

Which gives us

('tiger', 'bear', 'salmon', 'elephant',
 'tige', 'bea', 'salmo', 'elephan',
 'iger', 'ear', 'almon', 'lephant',
 'Tiger', 'Bear', 'Salmon', 'Elephant',
 'Tige', 'Bea', 'Salmo', 'Elephan',
 'Iger', 'Ear', 'Almon', 'Lephant',
 'TIGER', 'BEAR', 'SALMON', 'ELEPHANT',
 'TIGE', 'BEA', 'SALMO', 'ELEPHAN',
 'IGER', 'EAR', 'ALMON', 'LEPHANT', '')

I used the built-in punctuation set in Python, plus blank ('') to include the possibility of not using any punctuation:

from string import punctuation
PUNCS = tuple(punctuation) + ('',)

It looks like

('!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/',
 ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|',
 '}', '~', '')

A similar treatment for the four letter constant
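The code for this isn’t shown above, but based on the interview details and the “2 constants” counted in the sample run, a minimal sketch would be:

```python
# The remembered constant, plus '' for the case where no constant is used
# at all -- giving the "2 constants" counted in the sample run.
FOUR_LETTER_CONSTANTS = ('an7y', '')
```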


In order to try all combinations of punctuation + word + constant + punctuation + word + constant + punctuation, I took advantage of the built-in (cross) product function from itertools to try all possibilities:

from itertools import product
for punc_1, punc_2, punc_3, word_1, word_2, four_let_1, four_let_2 in \
        product(PUNCS, PUNCS, PUNCS, WORDS, WORDS,
                FOUR_LETTER_CONSTANTS, FOUR_LETTER_CONSTANTS):
    passphrase = (
        punc_1 +
        word_1 +
        four_let_1 +
        punc_2 +
        word_2 +
        four_let_2 +
        punc_3
        )
    result, plaintext = try_decrypt(ciphertext, salt, passphrase)
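As a sanity check, the candidate-space size reported in the sample run can be recomputed from the sizes of the three lists:

```python
# 3 punctuation slots, 2 word slots, 2 constant slots:
n_puncs = 33    # string.punctuation (32 characters) plus ''
n_words = 37    # the word list built above
n_consts = 2    # 'an7y' plus ''
total = n_puncs ** 3 * n_words ** 2 * n_consts ** 2
print(total)    # 196791012, matching the sample run's figure
```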

This is much cleaner than nesting seven loops:

for punc_1 in PUNCS:
    for punc_2 in PUNCS:
        for punc_3 in PUNCS:
            for word_1 in WORDS:
                for word_2 in WORDS:
                    for four_let_1 in FOUR_LETTER_CONSTANTS:
                        for four_let_2 in FOUR_LETTER_CONSTANTS:
                            passphrase = (punc_1 +
                                          word_1 +
                                          four_let_1 +
                                          punc_2 +
                                          word_2 +
                                          four_let_2 +
                                          punc_3)
                            result, plaintext = try_decrypt(
                                ciphertext, salt, passphrase)
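The try_decrypt function itself isn’t shown in these excerpts. As a stand-in sketch only — the real .key format and key-derivation function aren’t specified here, so the PBKDF2 derivation and equality check below are assumptions for illustration:

```python
from hashlib import pbkdf2_hmac

def try_decrypt(ciphertext, salt, passphrase):
    # Hypothetical stand-in: derive a key from the passphrase and compare it
    # against a known-good derivation. A real implementation would attempt
    # decryption and test whether the resulting plaintext is valid.
    key = pbkdf2_hmac('sha256', passphrase.encode(), salt, 10000)
    if key == ciphertext:
        return True, passphrase
    return False, None

salt = b'\x00' * 8
target = pbkdf2_hmac('sha256', b'#tigeran7ysalmon', salt, 10000)
result, plaintext = try_decrypt(target, salt, '#tigeran7ysalmon')
print(result)  # True
```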

Here’s a random sample of the passwords this ends up trying


You can find my full code in a GitHub Gist that also includes a sample .key file this can crack.


Are your backups safe from your backups?

People keep backups so they can recover data if it is lost or altered. They imagine how relieved they will feel if a disaster wipes out their original data but the information is safely stowed somewhere else.

But, how safe are your backups?

Often we don’t think we’ll be unlucky enough to lose both our original and backed up data at the same time. Or we think that the disaster is so unlikely that we are willing to accept the potential consequences, especially if the data isn’t that important. For example, a fire in your home could wipe out a music collection in your den and the backup in the basement. However, if that happens, you’ve got bigger problems than some missing mp3s, so you’re willing to take the risk.

In that scenario, both copies are lost because you put them in the same building. They had a common link — location — that gave them a common fate. It was obvious that the disaster was possible all along.

Often, however, we compromise our backups by linking them to our originals in ways that aren’t obvious.

Consider the popular storage service Dropbox, which is said to have 175 million users. They provide a folder on your computer that is automatically synchronized with an online copy managed by Dropbox (or you can access it through the web).

A malicious person who has access to your computer can delete or deface your documents, and these changes will be backed up to Dropbox as well, erasing the original. This could be someone breaking into your premises, your disgruntled employee-nephew, or malware. The problem here is that the very location where the original data is stored also has access to the backups.

With systems like Dropbox, you can discover just how easily your files can be compromised, because by directly interacting with the software you realize that your own changes and deletions show up in both the original and backup. It is clear that someone else sitting at your computer could do the same. (You also realize that Dropbox doesn’t protect you from your own mistakes.)

Other backup schemes have the same problem but are less transparent to the user. Many popular schemes available today are automatic, run overnight, and quietly copy your data to an offsite location.

What you may not realize is that the qualities that make such a scheme convenient also make it more dangerous.

To run automatically, your computer has to retain login credentials for whatever online storage service is being used.  Those credentials can be used for their intended purpose, or for a malicious purpose – to log into the backup service and delete your backups while also deleting the original on your computer.

There’s a naive solution to this problem: configure the backup system to only transmit file changes, never orders to delete.

No problem. As your malicious babysitter Rupert, I will just change the 200-page thesis that you have both on your computer and on your backup service to the following one-line poem: “There once was a man named Foobar”. Then I’ll trigger a re-run of the backup routine (or just wait for it to kick in automatically at 2am if I’m lazy). Now we have an original and backup copy of my work and no copies (original or backup) left of your thesis. I’m sorry.

This could still be a problem if you do manual, local backups with something like a USB drive. If your data is damaged and you manually back it up on top of your backup copy, then you’ve got two copies of garbage and no copy of a good version.

Keeping multiple versions of whatever you work on, all on the same portable drive, might not save you from this either. Let’s make this Hollywood and consider a high-stakes industrial sabotage scenario. You’re an engineer at Small Lab Inc. and are designing Titanic II. Every time you start a major revision, you copy and rename your design file to reflect your new version, e.g. “titanic_II_v35.svg”. As you finish for the day, you pull out your USB drive and add this new revision to your set of backups.

Too bad that Larry, the polite intern with keys and overnight access, installed a malware program that triggers when your USB drive is connected. His malicious program goes after all of them, titanic_II_v1.svg, titanic_II_v2.svg… titanic_II_v35.svg, both on your main drive and backup drive. He replaces all of them with Nyan Cats. All of them.

If you’re very lucky, he at least retains his own copy of the originals for extortion purposes, or says, “Happy April Fool’s Day! Here’s your file back.”

There are two things to take away from all of this:

First, you must have multiple backup copies of your files.

Second, you must isolate all your other backups from the process by which your latest backup takes place.

If you’d like to avoid ending up with all Nyan Cats, I can help. I can evaluate your existing schemes and implement new ones.


Plone CVE-2011-0720 details

This was originally posted on the Full Disclosure mailing list, April 17, 2011.

A replacement for the broken skullspace blog link is the Skullspace article “Hackathon 4 was a huge success!”.

This is in regards to CVE-2011-0720, a Plone vulnerability announced in early February.

As noted in the public advisory:
“An attacker can exploit this issue using a browser.”

To fill in a few more details:

Plone is implemented with Zope, an object-oriented web application framework. Many Zope objects can be referenced by URL through a filesystem-like hierarchy formed by object names. Methods of such objects are thus addressable as /path_to_parent_object/path_to_object/name_of_method . Arguments as listed in these function definitions correspond to field names as per standard URL encoding.
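For illustration, the path-and-query mapping can be demonstrated with a hypothetical object path and method name (neither is a real Plone endpoint):

```python
from urllib.parse import urlencode

# Hypothetical object path and method name; the argument names in the query
# string map to the method's parameter names per standard URL encoding.
object_path = "/some_site/acl_users"
method_name = "someMethod"
arguments = {"login": "alice"}
url = "%s/%s?%s" % (object_path, method_name, urlencode(arguments))
print(url)  # /some_site/acl_users/someMethod?login=alice
```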

Object paths consist of object names and are not necessarily related by type. To search by object type, use the find feature in the Zope Management Interface.

I studied the released hotfix and documented the corresponding patches in the Subversion repositories that were slated to go into Plone 4.0.4 (easier than reading the hotfix).

I used the Zope Management Interface find feature in my own test deployment of Plone 4.0.3 to find objects of the affected types.

Searching for the type “Pluggable Auth Service” (PAS), as patched, was most fruitful. On default Plone installations a PAS can be found in /acl_users/ for each installed site.

The exposed getUsers and userSetPassword methods are a fairly dangerous combination that can be exploited by anonymous attackers. Other functions are of more limited value or require stronger permissions.

These methods are also listed in the log checker
but with the /acl_users/ part absent.

— End Details —

On the matter of disclosure gap and necessary capabilities:

I spent around 16 waking hours and 26 clock hours to go from seeing the original vulnerability announcement to exploiting it. This is, in my estimation, a high upper bound for the capabilities required to go from “vuln” to “sploit”.

I had only user-level prior familiarity with Plone and no prior familiarity with Zope.

To test whether someone else could reasonably translate these public vulnerability details into an exploit, I presented the basics of Zope URL-based invocation, how I found /acl_users/, and the relevant patch noted above over the course of 2 hours at a
competition/talk on March 19th. Another individual was able to identify the appropriate function name and arguments within an additional hour, escalate to an administrator account, and vandalize a test site running for the occasion.

I regret that a recording was not made despite best efforts, and that my slides are of such limited detail as not to warrant publication.
(This email has far more useful information.)

Though both I and the other individual have programming backgrounds, I suspect that a moderately determined individual without such a background could also close the disclosure gap.

The crucial step of finding /acl_users/ with the find feature in ZMI is an interactive, “play and use”, kind of step. Finding the relevant function name is a matter of reading. The direct relationship between the method names and argument names with the URLs is spelled out in multiple Zope tutorials.

Correct me if I’m wrong, but I believe this post is the first public comment to go beyond the patches, hotfix, and logchecker released by the Plone foundation.

Mark Jenkins


In the end, not quite:
“you’ll have 30 minutes before the exploit worms start knocking on
doors, I say.”

But probably not
“I have doubts if there will be an exploit script ever”


Reducing redundancy in bind zone files

I assume fairly advanced knowledge of BIND and DNS here.

I’m feeling redundant

If you’ve ever looked at the zone files in a typical BIND DNS setup, you’ll find quite a bit of redundancy between them. Every single zone will have a separate file, each with SOA, NS, A, and possibly MX and CNAME records. Often these files are almost identical to each other, typically the result of the file being copied from another. I’ve encountered this most often on systems that provide web and email service for several domain names.

Each time a new domain name is added, the admin typically copies a previous zone file. If they’ve been using absolute (not relative) record names in the zone files, they have to go through the new file and change this everywhere. If they’ve been using the $ORIGIN directive they have to change that too.

Change is hard

Sure enough, a day comes when the admin decides to make a change common to all of these zones, such as an IP address (A record) change, a new secondary name server (NS), etc. If there are many zone files, the change will require some serious scripting. It won’t just be a matter of replacing one IP address everywhere with another; serious DNS admins will lower the TTL, wait out the old TTL, change the record, and restore the old TTL when making some kinds of changes.

Now, double all of this trouble and opportunity for screwup when you start using split DNS. (This means your nameserver gives different answers to queries depending on where they originate from; a common use case is to provide different service to a LAN than to the world.)

To avoid what I call “zone file redundancy hell”, you should take advantage of the $INCLUDE directive in your zone files to move redundant information into one place.

Common Configuration

I’m going to take you through an example setup that provides authoritative, master nameservice for cool.tld. and super.tld., and avoids redundancy. Note that my configuration files are derived from the bind9 package in Debian and Ubuntu, which puts configuration in /etc/bind where they belong. You can adapt the ideas here to a more typical BIND configuration.

// this is named.conf, it implements split DNS
include "/etc/bind/named.conf.options";

view "local_network" {
	match-clients { localhost; };
	recursion yes;

	// prime the server with knowledge of the root servers
	zone "." {
		type hint;
		file "/etc/bind/db.root";
	};

	// Consider adding the 1918 zones here, if they are not used in your
	// organization
	include "/etc/bind/zones.rfc1918";

	// be authoritative for the localhost forward and reverse zones, and for
	// broadcast zones as per RFC 1912

	zone "localhost" {
		type master;
		file "/etc/bind/db.local";
	};

	zone "127.in-addr.arpa" {
		type master;
		file "/etc/bind/db.127";
	};

	zone "0.in-addr.arpa" {
		type master;
		file "/etc/bind/db.0";
	};

	zone "255.in-addr.arpa" {
		type master;
		file "/etc/bind/db.255";
	};

	include "/etc/bind/internal_zone_list.zones";
};

view "external_network" {
	match-clients { !localhost; any; };
	recursion no;

	// prime the server with knowledge of the root servers
	zone "." {
		type hint;
		file "/etc/bind/db.root";
	};

	include "/etc/bind/zone_list.zones";
};

/etc/bind/zone_list.zones and /etc/bind/internal_zone_list.zones
are our zone list files. Both of them contain zone entries for cool.tld. and super.tld. One specifies the zone files for the external side of the split DNS, /etc/bind/cool.tld.db and /etc/bind/super.tld.db, and the other for the internal side, /etc/bind/internal_cool.tld.db and /etc/bind/internal_super.tld.db . To avoid redundancy, we don’t want to have to manually edit both of these files when a new zone is added, so we maintain a common file, zone_list_file, with one zone name per line:

cool.tld
super.tld

and we use a Makefile and a python script (make_zone_list) to autogenerate zone_list.zones and internal_zone_list.zones from zone_list_file.

# Makefile

ZONE_LIST = zone_list_file
ZONE_FILE_SUFFIX = ".db"

all: zone_list.zones internal_zone_list.zones

zone_list.zones: $(ZONE_LIST) Makefile
	./make_zone_list --prefix "/etc/bind/" \
	--suffix $(ZONE_FILE_SUFFIX) $^ > $@

internal_zone_list.zones: $(ZONE_LIST) Makefile
	./make_zone_list --prefix "/etc/bind/internal_" \
	--suffix $(ZONE_FILE_SUFFIX) $^ > $@


#!/usr/bin/env python

from optparse import OptionParser
from sys import stdout

option_parser = OptionParser()
option_parser.add_option("-p", "--prefix", default="")
option_parser.add_option("-s", "--suffix", default="")
(options, args) = option_parser.parse_args()

def iterjoin(join_str, iterable):
    first = True
    for value in iterable:
        if not first:
            yield join_str
        first = False
        yield value

if len(args) > 0:
    input_file = open(args[0])
    stdout.writelines( iterjoin( "\n",
        ("""zone "%(zone_name)s" {
\tfile "%(file_prefix)s%(zone_name)s%(file_suffix)s";
\ttype master;
};
""" % {'zone_name': line.strip(),
       'file_prefix': options.prefix,
       'file_suffix': options.suffix, }
         for line in input_file
         if len(line.strip()) > 0 ) ) )

You end up with autogenerated zone entries like:

zone "cool.tld" {
	file "/etc/bind/cool.tld.db";
	type master;
};
We’ve thought it through, and we already know that all of our zones are going to have a common set of SOA, NS, CNAME, and MX records, and a common TTL for all records, so we create common_TTL_SOA_NS_CNAME_MX_for_cool_zones.db:

; default TTL
$TTL 3h

; common SOA
@       IN      SOA     cool.tld. hostmaster.cool.tld. (
	2008081401 ; serial, today's date + today's serial
	3H         ; slave refresh frequency
	15M        ; slave retry rate when refresh fails
	4W         ; expire time until slaves give up on refresh
	2D )       ; minimum-TTL if one isn't specified

; common NS
@	NS	cool.tld.

; common CNAME
www	CNAME	@

; common MX
@	MX	10 cool.tld.

Different view

Now we start getting into the differences between zones. We want different A records for the internal view of our split DNS than for the external view. So, we define common_TTL_SOA_NS_CNAME_MX_A_for_cool_zones.db:

$INCLUDE "/etc/bind/common_TTL_SOA_NS_CNAME_MX_for_cool_zones.db";
@	A	192.0.2.1 ; external address (example)

and common_TTL_SOA_NS_CNAME_MX_A_for_internal_cool_zones.db

$INCLUDE "/etc/bind/common_TTL_SOA_NS_CNAME_MX_for_cool_zones.db";
@	A	10.0.0.1 ; internal address (example)

With those files in place, we don’t even need real zone files for cool.tld. and super.tld.; we could simply create symlinks from common_TTL_SOA_NS_CNAME_MX_A_for_cool_zones.db to cool.tld.db and super.tld.db, and from common_TTL_SOA_NS_CNAME_MX_A_for_internal_cool_zones.db to internal_cool.tld.db and internal_super.tld.db . Now we can query the system for (cool.tld., SOA), (super.tld., SOA), (cool.tld., NS), (super.tld., NS), (www.cool.tld., CNAME), (www.super.tld., CNAME), (cool.tld., MX), and (super.tld., MX).

More zones!

If we want to add another domain name (new.tld.), it takes a few simple steps:

  1. Add it to zone_list_file
  2. Run make to regenerate zone_list.zones and internal_zone_list.zones
  3. Add symlinks from common_TTL_SOA_NS_CNAME_MX_A_for_cool_zones.db to new.tld.db and from common_TTL_SOA_NS_CNAME_MX_A_for_internal_cool_zones.db to internal_new.tld.db
  4. Reload the nameserver

More subdomains!

Time for another change: we want to add more subdomains to cool.tld, but not have them apply to super.tld. The cool.tld.db and internal_cool.tld.db zone files are now different; they can no longer be symlinks, so we make real files. The shared extra subdomains go in cool_extra_sub_domains.db:

pics	CNAME	@
chat	CNAME	@

cool.tld.db becomes:

$INCLUDE "/etc/bind/common_TTL_SOA_NS_CNAME_MX_A_for_cool_zones.db";
$INCLUDE "/etc/bind/cool_extra_sub_domains.db";

and internal_cool.tld.db becomes:

$INCLUDE "/etc/bind/common_TTL_SOA_NS_CNAME_MX_A_for_internal_cool_zones.db";
$INCLUDE "/etc/bind/cool_extra_sub_domains.db";

As a result, we now have (pics.cool.tld., CNAME) and (chat.cool.tld., CNAME), and we have them in both the internal and external view of the cool.tld. zone. How much work is it if we want one more subdomain? Just add it to cool_extra_sub_domains.db and, again, both views will have it.

All files are available in my GitHub Gist