An (Important) Disproof Of The One-Time Pad

In a recent TechCrunch article I was quoted calling the one-time pad a “unicorn”. Inevitably, I was roundly criticized. But what none of the commenters (or I) realized was just how apt the term “unicorn” was. The perfect security of the one-time pad is, like a unicorn, imaginary.

Claude Shannon’s seminal proof of the perfect secrecy of the one-time pad fails because he uses two different definitions of “random” but treats them as equivalent.

G. S. Vernam’s original paper, describing the unbreakable stream cipher he invented in 1917, introduces the concept with this sentence: “If, now, instead of using English words or sentences, we employ a key composed of letters selected absolutely at random, a cipher system is produced which is absolutely unbreakable.”

This implies that letter sequences forming valid English and letter sequences chosen completely at random are two entirely separate sets. That is obviously untrue: the valid English sequences are a subset of all possible letter sequences, not a disjoint set. If one chooses three letters completely at random, roughly 6% of the resulting sequences will be English words; for two-letter sequences the figure is roughly 15%.
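If you want to check figures like these yourself, the short Python sketch below does the counting. It assumes a system word list at /usr/share/dict/words (common on Unix-like systems, but not universal), and the exact percentages depend on which dictionary you use.

```python
import itertools
import string

# Hypothetical word-list location; adjust for your system. The percentages
# printed below vary with the dictionary used.
WORDLIST = "/usr/share/dict/words"

with open(WORDLIST) as f:
    words = {line.strip().lower() for line in f if line.strip().isalpha()}

for n in (2, 3):
    total = 26 ** n  # every possible n-letter sequence
    hits = sum(
        "".join(seq) in words
        for seq in itertools.product(string.ascii_lowercase, repeat=n)
    )
    print(f"{n}-letter sequences that are English words: {hits / total:.1%}")
```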

But Vernam wasn’t thinking of random sequences in the probabilistic sense; he was thinking of sequences with high entropy (in the information-theoretic sense). Bruce Schneier (in Applied Cryptography) describes a high-entropy sequence as follows: “It looks random. This means that it passes all the statistical tests of randomness we can find… It is unpredictable… [it] should not be compressible”. A high-entropy sequence never looks like “ABABABABABABABAB…”, for example. But precisely because such regular sequences are excluded, high-entropy keys are not random in the traditional sense.
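Schneier’s “should not be compressible” criterion is easy to see in practice. The sketch below is my own illustration (not from his book): a perfectly regular string collapses under compression, while a string of uniformly random letters barely shrinks.

```python
import secrets
import string
import zlib

regular = "AB" * 500  # "ABABAB..." -- 1,000 characters of pure pattern
random_ = "".join(secrets.choice(string.ascii_uppercase) for _ in range(1000))

# The regular string collapses to a few dozen bytes; the random one stays
# large (zlib saves a little because only 26 symbols appear, but not much).
print(len(zlib.compress(regular.encode())))
print(len(zlib.compress(random_.encode())))
```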

This concept of randomness exists because such sequences are very useful for stream ciphers, where they prevent cryptanalysis by frequency analysis. The distribution of letters (or bits) in a high-entropy sequence is uniform even over relatively short stretches (it is “locally random”), whereas in an English-language passphrase the letter “e” appears far more often than, say, the letter “q”. Computer random number generators produce high-entropy sequences. However, these kinds of sequences are not appropriate for block ciphers (e.g. AES), which require truly random numbers.
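To make the “locally random” point concrete, here is a small sketch of my own that counts letter frequencies in an English-language key (the Vernam sentence quoted earlier) and in a key of the same length drawn uniformly at random:

```python
import collections
import secrets
import string

english_key = (
    "IF NOW INSTEAD OF USING ENGLISH WORDS OR SENTENCES WE EMPLOY A KEY "
    "COMPOSED OF LETTERS SELECTED ABSOLUTELY AT RANDOM A CIPHER SYSTEM IS "
    "PRODUCED WHICH IS ABSOLUTELY UNBREAKABLE"
)
letters_only = [c for c in english_key if c in string.ascii_uppercase]
random_key = [secrets.choice(string.ascii_uppercase) for _ in range(len(letters_only))]

# The English key is heavily skewed (E, S, O and the like dominate, while Q
# and J never appear); the random key's counts are much flatter.
print(collections.Counter(letters_only).most_common(5))
print(collections.Counter(random_key).most_common(5))
```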

Shannon understood that a one-time pad would require high entropy keys to be secure. He explicitly differentiates the Vernam cipher from earlier ciphers: “A running key cipher is a Vernam type system where, in place of a random sequence of letters, the key is a meaningful text. Now it is known that running key ciphers can usually be solved uniquely.”

However, when Shannon attempts to prove that the Vernam cipher is perfect, rather than simply secure, he treats the key as truly random in the probabilistic sense. In his words: “Perfect systems in which the number of cryptograms, the number of messages, and the number of keys are all equal are characterized by the properties that (1) each M is connected to each E by exactly one line, (2) all keys are equally likely.” All keys are equally likely – even keys that don’t “look random”.
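Shannon’s condition (1) is easy to illustrate with a toy example of my own (it is not from his paper): for single-letter messages and single-letter keys added modulo 26, every message-ciphertext pair is connected by exactly one key.

```python
import string

LETTERS = string.ascii_uppercase

for m in LETTERS:
    for e in LETTERS:
        keys = [
            k for k in LETTERS
            if (LETTERS.index(m) + LETTERS.index(k)) % 26 == LETTERS.index(e)
        ]
        assert len(keys) == 1  # exactly one line connects each M to each E

print("each message-ciphertext pair is connected by exactly one key")
```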

A simple example will demonstrate why the one-time pad cannot be secure when using truly random keys.

We’ll use Schneier’s one-time pad example from Applied Cryptography: first with his choice of a high-entropy key, and then with a different key that becomes possible once the key is chosen truly at random.

Schneier chooses the plaintext ONETIMEPAD and encrypts it with the key TBFRGFARFM, producing the ciphertext IPKLPSFHGQ. But if “all keys are equally likely”, as quoted above, the one-time pad must be secure for every key.
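Here is a short Python sketch (mine, not from Applied Cryptography) that reproduces this example. The printed arithmetic behaves as if the letters are numbered A = 1 through Z = 26, so the helper takes that offset as a parameter; base=0 gives the more common A = 0 convention.

```python
def otp_encrypt(plaintext: str, key: str, base: int = 1) -> str:
    """Add key letters to plaintext letters modulo 26.

    `base` is the number assigned to the letter A: base=1 matches the
    arithmetic behind Schneier's printed example, base=0 is the usual
    textbook convention.
    """
    assert len(plaintext) == len(key)
    out = []
    for p, k in zip(plaintext, key):
        p_num = ord(p) - ord("A")            # 0..25
        k_num = ord(k) - ord("A")
        c_num = (p_num + k_num + base) % 26  # shift by the numbering offset
        out.append(chr(ord("A") + c_num))
    return "".join(out)

print(otp_encrypt("ONETIMEPAD", "TBFRGFARFM", base=1))  # IPKLPSFHGQ
```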

Let’s choose the very first key (in alphabetical order): AAAAAAAAAA. With the letters numbered A = 0 through Z = 25 and the key added modulo 26, the letter A adds nothing, so when we encrypt our plaintext of ONETIMEPAD with this key we end up with a ciphertext of… ONETIMEPAD. Oops. (Under the A = 1 numbering implied by Schneier’s example the identity key would be ZZZZZZZZZZ instead, but the argument is identical.)
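A standalone check, using the A = 0 numbering described above:

```python
# With letters numbered A = 0 .. Z = 25, the key AAAAAAAAAA adds zero to
# every letter, so encryption returns the plaintext unchanged.
plaintext = "ONETIMEPAD"
key = "AAAAAAAAAA"
ciphertext = "".join(
    chr(ord("A") + (ord(p) - ord("A") + ord(k) - ord("A")) % 26)
    for p, k in zip(plaintext, key)
)
print(ciphertext)  # ONETIMEPAD
```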

While in theory it’s possible that an adversary (knowing we are using a one-time pad) could be fooled, this would only be possible if we live in Mos Eisley (“this is not the plaintext you are looking for”). A less weak-minded adversary would rationally assume that ONETIMEPAD was the plaintext, and that we had sent our message unencrypted. A cipher must have a formal security proof – it can’t be a state of mind.

But how likely is a key of AAAAAAAAAA really? That’s the thing about true randomness – the key AAAAAAAAAA is just as likely as a “random looking” key like TBFRGFARFM.
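The arithmetic is easy to check: when each of the ten letters is drawn uniformly and independently, every specific ten-letter key comes up with exactly the same probability.

```python
# Probability of any one particular 10-letter key, AAAAAAAAAA and
# TBFRGFARFM alike, when letters are drawn uniformly at random.
p = (1 / 26) ** 10
print(p)  # roughly 7.1e-15
```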

Of course, this particular key is only one of the problems with trying to achieve perfect secrecy using a truly random key. The keys BBBBBBBBBB, CCCCCCCCCC, DDDDDDDDDD, and so on turn our “perfect” one-time pad into a Caesar cipher, which is easily broken. Many randomly produced keys would be English-language words or phrases, turning our one-time pad into a running-key or Vigenère cipher.
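For example, under the A = 0 numbering used above the key BBBBBBBBBB shifts every letter by one position, and the resulting message can be broken by trying all 26 shifts, exactly as with any Caesar cipher. A minimal sketch of that attack:

```python
def shift(text: str, n: int) -> str:
    """Shift every letter of an A-Z string by n positions (mod 26)."""
    return "".join(chr(ord("A") + (ord(c) - ord("A") + n) % 26) for c in text)

ciphertext = shift("ONETIMEPAD", 1)  # encrypting with key BBBBBBBBBB (A = 0 numbering)
print(ciphertext)                    # POFUJNFQBE

# Brute force: only 26 candidate plaintexts, one of which reads as English.
for n in range(26):
    print(n, shift(ciphertext, -n))
```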

We can’t fix Shannon’s proof by restricting our keys to only the high-entropy ones. By definition, perfect secrecy requires at least as many keys as possible messages, so with a key the same length as the message, every key (not just the high-entropy ones) must be possible. A one-time pad cannot have “weak keys” the way DES does.

In conclusion, the Vernam (one-time pad) cipher cannot be perfectly secure, because any proof of perfect secrecy would require two incompatible definitions of randomness. In fact, in some scenarios a well-implemented one-time pad is the least secure of all ciphers.

Jack Deneut is the founder of Zendo, an encrypted messaging service.