Fixpoint

2020-01-16

Draft gbw-node frontend, part 2

Filed under: Bitcoin, Software — Jacob Welsh @ 18:52

Continued from schema and part 1. Source: gbw-node.py.

What follows is a kluge and probably the sorest point of the thing in my mind. I will attempt to retrace the decision tree that led to it.

Ultimately what we're after is a way to examine all confirmed transactions so as to filter for relevant ones and record these in the database ("scanning"). Since on disk they're already grouped into blocks of constrained size, it makes sense from an I/O standpoint to load them blockwise. But how? We could read directly from the blk####.dat files, but this would go against the "loose coupling" design. Concretely, this manifests as several messes waiting to happen: do we know for sure that only validated blocks get written to these files? How will we detect and handle blocks that were once valid but abandoned when a better chain was found? Might we read a corrupt block if the node is concurrently writing? Much more sensible would be to work at the RPC layer, which abstracts over internal storage details and provides the necessary locking.

Unfortunately there is presently no getblock method in TRB, only dumpblock, which strikes me as more of a debugging hack than anything else, returning its result by side effect through the filesystem. I worried this could impose a substantial cost in performance and SSD wear, once multiplied across the many blocks and possible repeated scans, especially if run on a traditional filesystem with more synchronous operations than the likes of ext4. It occurred to me to use a named pipe to allow the transfer to happen in memory without requiring changes to the TRB code. I took this route after some discussion with management confirmed that minimizing TRB changes was preferable. I hadn't really spelled out the thinking on files versus pipes, or anticipated what now seems a fundamental problem with the pipe approach: if the reading process aborts for any reason, the write operation in bitcoind can't complete; in fact, due to coarse locking, the whole node ends up frozen. (This can be cleaned up manually, e.g. by cat ~/.gbw/blockpipe >/dev/null.)
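
To make the hazard concrete, here's a minimal standalone demonstration of the FIFO semantics involved; this is not code from gbw-node or TRB, and the path is made up:

from os import mkfifo, open as os_open, write, close, O_WRONLY
from os.path import exists

path = '/tmp/demo_blockpipe'  # hypothetical demo path
if not exists(path):
	mkfifo(path)
# Opening a FIFO for writing blocks until some process opens the other
# end for reading; with no reader around -- say, a scan that just
# crashed -- a writer standing in for bitcoind's dumpblock hangs right
# here, holding whatever locks it holds.
fd = os_open(path, O_WRONLY)
write(fd, 'block bytes...')
close(fd)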

Given this decision, the next problem was the ordering of reads and writes, as reading the block data through the pipe must complete before the RPC method will return. Thus some sort of threading is needed: either decomposing the RPC client into separate send and receive parts so as to read the pipe in between, or using a more general facility. Hoping to preserve the abstraction of the RPC code, I went with the latter in the form of Python's thread support. I tend to think this was a mistake; besides all the unseen complexity it invokes under the hood, it turned out the threading library provides no reliable way to cancel a thread, which I seemed to need for the case where the RPC call returns an error without sending data. I worked around this by making the reader a long-lived "daemon" thread, which ensures it terminates with the program, and using an Event object to synchronize handoff of the result through a global variable.

# Thread and Event come from Python's threading module; os_open (os.open),
# O_RDONLY and close come from the os module; read_all, require_fifo, rpc
# and gbw_home were defined in part 1.
getblock_thread = None
getblock_done = Event()
getblock_result = None
def getblock_reader(pipe):
	# Long-lived daemon thread: repeatedly opens the pipe, drains one
	# dumped block into a global, and signals the main thread.
	global getblock_result
	while True:
		fd = os_open(pipe, O_RDONLY)
		getblock_result = read_all(fd)
		getblock_done.set()
		close(fd)

def getblock(height):
	# Returns the raw bytes of the block at the given height, fetched
	# via dumpblock through the named pipe; starts the reader thread
	# on first use.
	global getblock_thread
	pipe = gbw_home + '/blockpipe'
	if getblock_thread is None:
		require_fifo(pipe)
		getblock_thread = Thread(target=getblock_reader, args=(pipe,))
		getblock_thread.daemon = True
		getblock_thread.start()
	if not rpc('dumpblock', height, pipe):
		raise ValueError('dumpblock returned false')
	getblock_done.wait()
	getblock_done.clear()
	return getblock_result
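
For illustration, the scanning side would then drive it something like this; scan_height, chain_height and parse_block are stand-in names, not functions from gbw-node.py, and the real scan code is left for a later part:

# Hypothetical driver: walk the chain by height and hand each raw block
# to a parser; every name here except getblock is a placeholder.
for height in xrange(scan_height, chain_height + 1):
	parse_block(getblock(height))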

The astute reader may notice a final and rather severe "rake" waiting to be stepped on: a fixed filesystem path is used for the named pipe, so concurrent scan processes will interfere with each other, with corrupted blocks and all manner of subsequent failures as the likely result. While I'm not sure there's any reason to run such concurrent scans, at least the possibility should be excluded. I'm thinking the process ID would make a sufficiently unique identifier to add to the filename, but then the pipe will need to be removed afterward, and what if the process dies first? Yet more code to clean up possibly-defunct pipes? Or one could keep the fixed name and use filesystem locking, as sketched below. At any rate I hope this all makes it clear why I'm not too enamored of dumpblock.
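
For what it's worth, here's a minimal sketch of that last variant, assuming a POSIX system; the lock file name and the helper are my own invention, not code from gbw-node.py. The appeal of an advisory flock is that the kernel releases it whenever the holding process exits, however it dies, so there's no stale state to clean up:

from fcntl import flock, LOCK_EX, LOCK_NB

def exclusive_scan_lock():
	# Take a non-blocking exclusive lock on a sidecar file next to the
	# pipe, so a second scan process fails fast rather than corrupting
	# the first one's reads; gbw_home is from the surrounding program.
	f = open(gbw_home + '/blockpipe.lock', 'w')
	try:
		flock(f, LOCK_EX | LOCK_NB)
	except IOError:
		raise ValueError('another scan appears to be running')
	# The caller must keep the returned file object alive for the
	# duration of the scan; the lock dies with it (or with the process).
	return f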

To be continued.

10 Comments »

  1. do we know for sure that only validated blocks get written to these files?

    FTR, blocks that got "orphaned" in reorg in fact remain permanently in blk* files. TRB has no provision for their removal. This is why very rarely two installations have bitwise-identical blk* data.

    Comment by Stanislav Datskovskiy — 2020-01-16 @ 19:11

  2. @Stanislav Datskovskiy: indeed, and I had observed this behaviorally. I'm curious though: do you see this use of named pipes as an abuse of the spirit of "dumpblock"? Any particular reason you used the filesystem there rather than sending through the JSON-RPC response, a la PRB "getblock"?

    Comment by Jacob Welsh — 2020-01-17 @ 00:16

  3. ...do you see this use of named pipes as an abuse of the spirit...

    Can't see why it would be "an abuse".

    ...Any particular reason you used the filesystem there rather than sending through the JSON-RPC response...

    Wanted to avoid either shitting binariola to stdout (no other TRB knob does this, and it is generally unwanted behaviour) or, alternatively, the need for 7bit-clean encodings (e.g. base64) (avoided, to keep the patch small and eliminate req for post-processing.)

    The "dumpblock" knob was orig. made for fast export of raw blocks, for archival, quick analysis, and (later) fast-sync of newly-built boxen. For this, the "dump to fs" mechanism sufficed. For your application it would IMHO make sense to patch TRB to cleanly get at the items you want.

    Comment by Stanislav Datskovskiy — 2020-01-17 @ 17:31

  4. Alright, thanks.

    Comment by Jacob Welsh — 2020-01-17 @ 17:49

  5. [...] 1, 2, 3 [...]

    Pingback by Draft gbw-node frontend, part 4 « Fixpoint — 2020-01-19 @ 16:38

  6. [...] 1, 2, 3, 4 [...]

    Pingback by Draft gbw-node frontend, part 5 « Fixpoint — 2020-01-19 @ 19:03

  7. [...] F. Welsh (WoT : jfw) which he documented on Fixpoint through his work with JWRD Computing in a 1 2 3 4 5 6, count'em, 6 article [...]

    Pingback by GBW-NODE : Gales Bitcoin Wallet Node verified acquisition, build, install and run in 21ish short, simple steps. « Dorion Mode — 2020-07-18 @ 14:56

  8. [...] gbw-node as a small component, trusting the data it receives from a local Bitcoin daemon, and I used dumpblock for lack of alternatives, it can thus import bad transaction data into its SQLite database and incorporate it in the derived [...]

    Pingback by Alert: dumpblock flaw in bitcoind can cause gbw-node to incorrectly report transactions « Fixpoint — 2021-06-20 @ 05:21

  9. [...] presentation of Python code for the node extension: 1, 2, 3, 4, 5, [...]

    Pingback by Gales Bitcoin Wallet (re)release « Fixpoint — 2021-12-03 @ 08:59

  10. [...] a dramatic reduction in scan time overhead per block, and finally a path toward full removal of the sorest point, the notorious dumpblock [...]

    Pingback by Smartening up gbw-node « Fixpoint — 2023-04-05 @ 01:38
