The abuse, and the fix that depends on game theory

For a long time, Polkadot contained a really interesting vulnerability related to the zero address, though it never affected the overall network. The vulnerability has a simple yet really tricky fix, whose success depends on game theory. This post documents the interesting aspects of this vulnerability.

Polkadot’s addresses are arrays of 32 bytes, encoded in SS58 format. The zero address is the one where all 32 bytes are zero. On Polkadot, the zero address encodes to 111111111111111111111111111111111HC1.
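That encoding can be reproduced with a minimal SS58 sketch. This is an illustrative, stdlib-only implementation assuming the standard SS58 scheme (a one-byte network prefix, 0 for Polkadot, then the 32-byte public key, then a two-byte blake2b-512 checksum over b"SS58PRE" plus the payload, base58-encoded with the Bitcoin alphabet); the function names here are made up for the sketch:

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    # Each leading zero byte encodes as a literal '1'; the zero
    # address gets 33 of them (prefix byte + 32 key bytes).
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def ss58_encode(pubkey: bytes, prefix: int = 0) -> str:
    # prefix 0 is Polkadot's network prefix.
    payload = bytes([prefix]) + pubkey
    checksum = hashlib.blake2b(b"SS58PRE" + payload, digest_size=64).digest()[:2]
    return base58_encode(payload + checksum)

print(ss58_encode(b"\x00" * 32))  # the zero address
```

Because base58 maps each leading zero byte to '1', the zero address is much shorter than a typical 47–48 character Polkadot address: 33 ones followed by the encoded checksum.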

In Ethereum, the zero address exists as well, at 0x0000000000000000000000000000000000000000. Because the private key behind that address is unknown (and presumed impossible to recover), the address is considered a “burn address”. In Polkadot, some people inherited the idea and attempted to burn money by sending it to Polkadot’s zero account.

Unfortunately, it turned out that the private key of Polkadot’s zero account is well known.

On an elliptic curve, a private key is simply a scalar, and the corresponding public key is that scalar multiplied by the curve’s basepoint. Multiplying the basepoint by zero yields the group’s identity element, which in the Ristretto encoding used by sr25519 serializes to 32 zero bytes. The private key of the zero account therefore turns out to be simply the all-zero scalar.
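As a toy illustration (using a small textbook curve over GF(17), not Polkadot’s actual Ristretto group; everything here is made up for the sketch), double-and-add scalar multiplication sends the zero scalar to the identity element:

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17), basepoint G = (5, 1).
# Illustration only: Polkadot's sr25519 uses the Ristretto group
# over Curve25519, but the algebra is the same.
P_MOD = 17
A = 2
G = (5, 1)

def point_add(p, q):
    if p is None:  # None represents the identity ("point at infinity")
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # p + (-p) = identity
    if p == q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    # Double-and-add; a zero scalar never adds anything in,
    # so the result stays at the identity.
    result = None
    while k > 0:
        if k & 1:
            result = point_add(result, p)
        p = point_add(p, p)
        k >>= 1
    return result

print(scalar_mult(2, G))  # → (6, 3): an ordinary "public key"
print(scalar_mult(0, G))  # → None: the identity element
```

On this toy curve the identity has no coordinate representation at all; Ristretto’s contribution is precisely that the identity does have a canonical 32-byte encoding, and it happens to be all zeros.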

Using a fork of schnorrkel (the crypto library of Polkadot), we can generate valid signatures for the zero address. (A fork is needed because the stock library does not expose the secret scalar for direct manipulation.)

```rust
const SIGNING_CTX: &[u8] = b"substrate";

// The zero account’s private key is the all-zero scalar.
let privkey_raw = [0x00u8; 32];
let privkey_scalar = schnorrkel::Scalar::from_bytes_mod_order(privkey_raw);

// Generate a throwaway key, then overwrite its scalar with zero
// (the fork exposes the otherwise-private `key` field).
let mut privkey = schnorrkel::keys::SecretKey::generate();
privkey.key = privkey_scalar;

// Sign with the zero key; its public key is the identity element.
let pubkey = privkey.to_public();
privkey.sign_simple(SIGNING_CTX, b"a test message", &pubkey)
```

You now get a valid signature for the zero address and can use this method to send transactions on its behalf.

In Ethereum, the above method does not work, not only because the zero scalar is not a valid secp256k1 private key, but also because an Ethereum address is a hash of the public key (rather than, as in Polkadot’s case, the raw public key). Allowing a zero private key, on the other hand, simplifies certain cryptographic protocols such as secret sharing, but brings unintended side effects like the one above.

In the wild

Hackers discovered Polkadot’s zero address long ago and have been abusing it in the wild.

In the past, several wallets sent roughly 2,000 DOT in total to the zero address, either because they treated it as a burn address or because of software errors. That money was drained away.

In addition, the hacker had been sending various extrinsics from the zero address to probe for additional vulnerabilities. Fortunately, no such vulnerabilities exist. At some point, someone probably got bored and proposed a treasury tip. None of it mattered, and everything was later simply drained away.

The fix that depends on game theory

After the issue was discovered, a fix was proposed. The fix looks simple at first: disable the zero address from sending transactions.

A hacker can easily defeat the fix by utilizing Polkadot’s proxy feature: while the zero address can still send transactions, a proxy can be set on it. Because the fix always takes time to apply (there is an activation period for a runtime upgrade), a hacker can attempt to set the proxy at the last minute. After the fix, the zero address would act the same as an anonymous proxy and could continue to be used as a normal account. This is valuable either for the signature value of the zero address itself, or for the prospect that people will continue to mistakenly use it as a burn address. If the hacker succeeds, the fix is in vain.

However, the hacker cannot assume that he is the only one abusing the zero address. There may be others. If there are, they can abuse the hacker’s abuse and drain his proxy deposit by repeatedly sending batch([remove_proxies, transfer_all]) extrinsics: removing the proxies refunds the deposit to the zero address itself, and transfer_all then sweeps it out, since anyone holding the well-known key can sign for the account.

For hackers trying to defeat the fix, there is a fixed cost: the proxy deposit. For hackers trying to drain other hackers’ deposits, there is no associated cost. Both groups have some chance of success. The best strategy for both is to wait until the very last block before the fix’s runtime upgrade activates, and compete to have the last extrinsic included in that block. An additional factor is that the first group requires two extrinsics (one to send the deposit, another to set the proxy), while the second group requires only one. The second group therefore has a much higher chance of success. Still, there remains a reasonable chance that someone could defeat the fix.

In reality, the above did not happen. The hacker was probably aware of the strategy to defeat the fix, but decided it was too much of a risk, and gave up. He never tried again.