Block-header-only, faster startup client
Bitcoin newbies have to endure an hour or two (or more) while bitcoin downloads and indexes all transactions and blocks.
Satoshi has mostly implemented code that downloads just the block headers; as long as you’re not generating blocks, you don’t need all the old transactions.
See the blockheaders feature branch here for initial work on this. Notes from Satoshi:
CBlockIndex contains all the information of the block header, so to operate with headers only, I just maintain the CBlockIndex structure as usual. The nFile/nBlockPos are null, since the full block is not recorded on disk.
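A minimal sketch of what that implies, with hypothetical simplified names (the real `CBlockIndex` carries every header field plus chain bookkeeping; here zero is assumed as the "null" sentinel for the disk-position fields):

```cpp
#include <cstdint>

// Simplified, illustrative mirror of the fields described above. In
// client (headers-only) mode, nFile/nBlockPos stay null because the
// full block was never written to a blk*.dat file.
struct BlockIndexEntry {
    uint32_t nFile;      // which blk*.dat file holds the full block (0 = none)
    uint32_t nBlockPos;  // byte offset inside that file (0 = none)
    int32_t  nHeight;    // position in the chain
    uint32_t nTime;      // one of the header fields kept as usual

    // False for every entry created in headers-only mode.
    bool HaveFullBlock() const { return nFile != 0 || nBlockPos != 0; }
};
```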
The code to gracefully switch between client-mode on/off without deleting blk*.dat in between is not implemented yet. It would mostly be a matter of having non-client LoadBlockIndex ignore block index entries with null block pos. That would make it re-download those as full blocks. Switching back to client-mode is no problem, it doesn’t mind if the full blocks are there.
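The "graceful switch-off" could look roughly like this: a non-client `LoadBlockIndex` scans the index, skips entries with a null block position, and schedules those blocks for re-download. This is an illustrative sketch with hypothetical names, not the actual Bitcoin code:

```cpp
#include <cstdint>
#include <vector>

// One entry per block in the on-disk index. nFile/nBlockPos of zero
// means the entry was written in client mode and holds only the header.
struct IndexEntry {
    int nHeight;
    uint32_t nFile;
    uint32_t nBlockPos;
};

// When switching client mode off, collect the heights whose entries
// have a null block position; these must be fetched again as full
// blocks before the node can generate.
std::vector<int> HeightsToRedownload(const std::vector<IndexEntry>& index) {
    std::vector<int> missing;
    for (const IndexEntry& e : index)
        if (e.nFile == 0 && e.nBlockPos == 0)
            missing.push_back(e.nHeight);
    return missing;
}
```

Switching back to client mode needs no such pass, since (as noted above) client mode doesn't mind if the full blocks are present.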
If the initial block download becomes too long, we’ll want client mode as an option so new users can get running quickly. With graceful switch-off of client mode, they can later turn off client mode and have it download the full blocks if they want to start generating.
@gavinandresen, we need to add another service category (NODE_SPV??) for SPV clients, so that they can operate without getting queried for blocks by other peers.
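Service categories are advertised as a bitmask in the `version` message; `NODE_NETWORK` is the existing bit meaning "this peer can serve full blocks". A sketch of how peers might use a new bit (the `NODE_SPV` name and bit position below are assumptions for illustration, not deployed values):

```cpp
#include <cstdint>

// NODE_NETWORK is the real service bit from the Bitcoin protocol.
// NODE_SPV is the hypothetical bit proposed above: the peer serves
// headers, but should not be asked for full blocks or transactions.
static const uint64_t NODE_NETWORK = 1 << 0;
static const uint64_t NODE_SPV     = 1 << 10; // hypothetical position

// A peer deciding whether it may send getdata for blocks to a peer
// that advertised the given services.
bool MayRequestBlocks(uint64_t services) {
    return (services & NODE_NETWORK) != 0;
}
```

With such a bit, a peer advertising only `NODE_SPV` would never receive block requests it has to ignore.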
Why, how is this any different from what SPV clients do? See #3884 for another step towards this.
@laanwj, my understanding is that most SPV clients at the moment either get headers from a centralised source (Electrum) or use their own p2p network. I don’t know; some may use the main Bitcoin network, but wouldn’t they then be queried for blocks by other peers and have to ignore those requests, which could confuse the other peer (maybe?). I think it would be neater if other peers knew that you could be asked for headers, but not for blocks/txs. Anyway, sorry to bring up an old issue; I just thought someone should start working through the massive backlog, so I started from the back.