Re: Proof-of-work difficulty increasing

Participants: Laszlo Hanyecz

Maybe someone with a little background in this statistics/math stuff can shed some light on this..

The way this thing works is it takes a (basically random) block of data and alters a 32-bit field inside it, starting at 1 and incrementing. The block of data also contains a timestamp, and that's incremented occasionally just to keep mixing things up (but the incrementing field isn't restarted when the timestamp is updated). If you get a new block from the network you sort of end up having to start over with the incrementing field at 1 again.. however all the other data changed too, so it's not the same thing you're hashing anyway.
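To make the loop concrete, here's a rough sketch of that search in Python. It's not the actual client code; the header layout and the `mine` helper are made up for illustration, but the shape (append a 32-bit counter, double-SHA-256, compare against a target) matches what's described above:

```python
import hashlib
import struct

def mine(block_header_base: bytes, target: int):
    """Search the 32-bit nonce field for a hash below the target.

    block_header_base: the rest of the block data, minus the 4-byte
    nonce (a hypothetical layout, just for illustration).
    Returns the winning nonce, or None if the 32-bit space is exhausted.
    """
    for nonce in range(2**32):
        # Bitcoin hashes the header twice with SHA-256.
        candidate = block_header_base + struct.pack("<I", nonce)
        digest = hashlib.sha256(hashlib.sha256(candidate).digest()).digest()
        # Treat the digest as an integer and compare to the target:
        # a smaller target means a harder search.
        if int.from_bytes(digest, "little") < target:
            return nonce
    # Exhausted all 2**32 nonces: bump the timestamp and start over.
    return None
```

With an easy target this finds a nonce almost immediately; the real network target is small enough that exhausting the whole 32-bit field without a hit is routine, which is why the timestamp gets nudged.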

The way I understand it, since the data being hashed is pretty much random, and because the hashing algorithm exhibits the 'avalanche effect', it probably doesn't matter whether you keep starting at 1 and incrementing or use pseudo-random values instead, but I was wondering if anyone could support this or disprove it.
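One way to poke at this empirically (a quick experiment, not a proof): hash a run of sequential nonces and a run of random nonces, and measure how many output bits differ between consecutive hashes. If the avalanche effect holds, both strategies should flip about half of the 256 output bits on average, i.e. incrementing by 1 scrambles the output just as thoroughly as jumping to a random value. The `base` prefix here is arbitrary stand-in block data:

```python
import hashlib
import random
import struct

def avg_bit_flips(nonces, base=b"block data"):
    """Average Hamming distance between SHA-256 digests of consecutive nonces."""
    digests = [
        int.from_bytes(hashlib.sha256(base + struct.pack("<I", n)).digest(), "big")
        for n in nonces
    ]
    flips = [bin(a ^ b).count("1") for a, b in zip(digests, digests[1:])]
    return sum(flips) / len(flips)

sequential = list(range(1000))
randomized = [random.getrandbits(32) for _ in range(1000)]
# Both averages land near 128 flipped bits out of 256.
print(avg_bit_flips(sequential), avg_bit_flips(randomized))
```

That both numbers come out the same is consistent with the claim, though of course it doesn't rule out some subtler structure in the hash.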

Can you increase your likelihood of finding a low numerical value hash by doing something other than just sequentially incrementing that piece of data in the input? Or is this equivalent to trying to increase your chances of rolling a 6 (with dice) by using your other hand?