
3.7.2 Entropy in cryptography

So, why is entropy so fundamental to cryptography? If the source used to generate secrets (or unique values used in cryptographic protocols) has low entropy, the number of values that can possibly be drawn from that source is limited, and some values will be (much) more likely than others.
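To make this concrete, here is a small Python sketch (my own illustration, not taken from the text) that compares the Shannon entropy of a uniform source with that of a heavily biased source over the same eight outcomes; the biased distribution is an assumed example.

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair 8-outcome source: every value equally likely -> 3 bits of entropy.
uniform = [1 / 8] * 8
# An assumed biased source with the same 8 outcomes, where one value dominates.
biased = [0.65] + [0.05] * 7

print(f"uniform source: {shannon_entropy(uniform):.2f} bits")  # 3.00
print(f"biased source:  {shannon_entropy(biased):.2f} bits")   # ~1.92
```

The biased source still has eight possible outputs, but an attacker who guesses the dominant value first is right almost two thirds of the time.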

That, in turn, makes it much easier for Eve to mount a brute-force attack by exhaustively searching the key space: there are far fewer values to test, and Eve can concentrate on the most common ones. This is exactly what happens in password-guessing attacks: Eve knows which passwords Alice is most likely to choose and starts her attack with a list of the most common ones.
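As a toy illustration of such a dictionary attack, the Python sketch below tries the most common passwords first; the password list and the use of a plain SHA-256 hash are simplifying assumptions for demonstration only.

```python
import hashlib

# Hypothetical list of passwords, ordered from most to least common.
common_passwords = ["123456", "password", "qwerty", "letmein", "dragon"]

def crack(target_hash: str):
    """Return the password whose SHA-256 hash matches, trying common ones first."""
    for candidate in common_passwords:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

# Alice picked a popular password, so Eve finds it after only a few guesses.
alices_hash = hashlib.sha256(b"qwerty").hexdigest()
print(crack(alices_hash))  # qwerty
```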

As another example, consider the key space of the AES-256 encryption algorithm. Its size is 2^256. If the AES-256 key were selected using a truly random source, Eve would have to try on average 2^255 candidate keys before hitting the correct one. If, on the other hand, Alice generates the key by first choosing a 32-bit random number and then turning it into a 256-bit key using some complicated but deterministic expansion algorithm E, then Eve needs to try only 2^31 candidate keys on average (obtained by running every possible 32-bit random number through E).
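The following Python sketch illustrates this pitfall; using SHA-256 as a stand-in for the expansion algorithm E, as well as the function and variable names, are assumptions made purely for the example.

```python
import hashlib
import secrets

def expand(seed32: int) -> bytes:
    """Deterministic stand-in for E: expand a 32-bit seed into 256 bits."""
    return hashlib.sha256(seed32.to_bytes(4, "big")).digest()

# Alice's flawed key generation: the key looks like 256 random bits,
# but there are only 2^32 possible outcomes.
weak_key = expand(secrets.randbelow(2**32))

# Eve's attack (conceptually): run every 32-bit seed through E and test the
# resulting key. On average she succeeds after about 2^31 trials.
# for seed in range(2**32):
#     if expand(seed) == weak_key:
#         break
```

No matter how elaborate E is, it cannot add entropy: the key inherits the 32 bits of its seed, nothing more.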

To prevent this, Alice and Bob must generate the keys or, in the case of key agreement, the keying material, using sources that have enough entropy. In other words, they need to generate them randomly.

Therefore, cryptography employs random or pseudo-random number generators for generating keys. According to [117], "a Random Number Generator (RNG) is a device or algorithm that outputs a sequence of statistically independent and unbiased binary digits". We now turn to the difference between true randomness and pseudo-randomness.
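For comparison with the flawed scheme above, here is a minimal sketch of drawing a 256-bit key directly from an operating-system randomness source, assuming Python's standard secrets module; the variable name is illustrative.

```python
import secrets

# 32 bytes = 256 bits of key material, drawn from the OS's CSPRNG.
aes_256_key = secrets.token_bytes(32)
print(aes_256_key.hex())
```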
