- Replace CLSAGs with a single fcmp_pp
- fcmp_pp is an opaque vector of bytes. The length of the vector
is calculated from the number of inputs at serialization time (i.e.
the length itself is not serialized, only the raw bytes are). See
the serialization sketch below.
- Includes tests for binary serialization happy path and errors
- validate output and commitment in tuple conversion function
- function to get_unlock_height from height in chain + unlock_time
(see the unlock-height sketch below)
- tx_outs_to_leaf_tuples function
- cleaned up trim impl (reduced num params in instructions and
conditional complexity)
- renamed locked_outputs table to locked_leaves (clearer tie to
merkle tree)
- size_t -> uint64_t for db compatibility across 32-bit and 64-bit
machines
- added hash_grow tests
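
A minimal sketch of the length-derived serialization described in
the fcmp_pp bullet above. The helper name fcmp_pp_proof_len and the
byte math are placeholders, not the actual monero API:

  #include <cstddef>
  #include <cstdint>
  #include <stdexcept>
  #include <vector>

  // Placeholder formula: the real code derives the exact proof size
  // from the number of inputs and the fcmp++ proof parameters.
  std::size_t fcmp_pp_proof_len(std::size_t n_inputs)
  {
    const std::size_t bytes_per_input = 32; // stand-in constant only
    return n_inputs * bytes_per_input;
  }

  // Serialization writes only the raw proof bytes, no length prefix.
  void write_fcmp_pp(std::vector<std::uint8_t> &out,
                     const std::vector<std::uint8_t> &proof)
  {
    out.insert(out.end(), proof.begin(), proof.end());
  }

  // Deserialization re-derives the expected length from the input
  // count and errors out if the buffer doesn't hold that many bytes.
  std::vector<std::uint8_t> read_fcmp_pp(const std::uint8_t *buf,
                                         std::size_t buf_len,
                                         std::size_t n_inputs)
  {
    const std::size_t expected = fcmp_pp_proof_len(n_inputs);
    if (buf_len < expected)
      throw std::runtime_error("fcmp_pp: too few bytes for proof");
    return {buf, buf + expected};
  }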
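
And a hedged sketch of the get_unlock_height idea. The constants
mirror the usual cryptonote values, and the height-vs-timestamp
handling here is an assumption, not the committed code:

  #include <algorithm>
  #include <cstdint>
  #include <stdexcept>

  // Assumed convention: unlock_time values below this threshold are
  // block heights, values at or above it are unix timestamps.
  constexpr std::uint64_t MAX_BLOCK_NUMBER      = 500000000;
  constexpr std::uint64_t DEFAULT_SPENDABLE_AGE = 10;

  // Sketch: first height at which an output can be spent, given the
  // height it was included in the chain and its unlock_time.
  std::uint64_t get_unlock_height(std::uint64_t included_height,
                                  std::uint64_t unlock_time)
  {
    // Every output is locked for at least the default spendable age.
    const std::uint64_t default_unlock =
        included_height + DEFAULT_SPENDABLE_AGE;

    if (unlock_time == 0)
      return default_unlock;

    if (unlock_time < MAX_BLOCK_NUMBER)
      return std::max(default_unlock, unlock_time); // height-based lock

    // Timestamp-based locks need a timestamp -> height estimate,
    // which this sketch deliberately leaves out.
    throw std::runtime_error("timestamp locks not handled in sketch");
  }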
The initial impl didn't capture the following edge case:
- Tree has 3 (or more) layers + 1 leaf layer
- Leaf layer last chunk IS full
- Layer 0 last chunk is NOT full
- Layer 1 last chunk is NOT full
- Layer 2 last chunk is NOT full
In this case, when updating layer 1, we need to use layer 0's old
last hash to update layer 1's old last hash. The same applies to
layer 2.
The solution is to use logic that checks the *prev* layer when
updating a layer to determine if the old last hash from the prev
layer is needed (see the sketch below).
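
A rough sketch of that prev-layer check; the struct and the exact
predicate are illustrative reconstructions, not the code itself:

  #include <cstdint>

  // Illustrative view of the *prev* layer while planning this
  // layer's update.
  struct PrevLayerState
  {
    std::uint64_t old_total_children; // prev layer size before update
    std::uint64_t new_total_children; // prev layer size after update
    bool last_hash_will_change;       // prev layer's old last hash
                                      // gets a new value
  };

  // The prev layer's old last hash is needed when this layer's last
  // parent must be updated in place: either new prev-layer children
  // grow into a last chunk that was not full, or the prev layer's
  // existing last hash itself changes (which cascades up from the
  // layers below it).
  bool need_prev_layer_old_last_hash(const PrevLayerState &prev,
                                     std::uint64_t parent_chunk_width)
  {
    const bool had_children    = prev.old_total_children > 0;
    const bool last_chunk_full = had_children
        && (prev.old_total_children % parent_chunk_width) == 0;
    const bool grew = prev.new_total_children > prev.old_total_children;
    return prev.last_hash_will_change
        || (had_children && grew && !last_chunk_full);
  }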
This commit restructures the grow_tree impl to account for this
and simplifies the approach as follows:
1. Read the tree to get num leaf tuples + last hashes in each layer
2. Get the tree extension using the above values + new leaf tuples
2a. Prior to updating the leaf layer, call the function
get_update_leaf_layer_metadata. This function uses existing totals
in the leaf layer, the new total of leaf tuples, and tree params
to calculate how the layer after the leaf layer should be updated.
2b. For each subsequent layer, call the function
get_update_layer_metadata. This function uses the existing totals
in the *prev* layer, the new total of children in the *prev* layer,
and tree params to calculate how the layer should be updated.
3. Grow the tree using the tree extension.
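
As a hedged, self-contained walk of steps 2a/2b on the edge case
above: a uniform chunk width of 3, growing the leaf layer from 12 to
15 tuples. The struct fields and the get_update_layer_metadata
signature are approximations of what's described, not the real code:

  #include <cstdint>
  #include <iostream>

  // Illustrative result of planning one layer from the prev layer's
  // totals (approximate shape, not the real struct).
  struct UpdateLayerMetadata
  {
    std::uint64_t old_total_parents;
    std::uint64_t new_total_parents;
    bool need_old_last_parent; // this layer's old last hash is
                               // updated in place
    bool need_old_last_child;  // prev layer's old last hash required
    bool last_parent_changes;  // feeds the next layer's planning
  };

  static std::uint64_t n_parents(std::uint64_t n_children,
                                 std::uint64_t width)
  {
    return (n_children + width - 1) / width; // ceiling division
  }

  UpdateLayerMetadata get_update_layer_metadata(
      std::uint64_t prev_old_total, std::uint64_t prev_new_total,
      bool prev_last_hash_changes, std::uint64_t chunk_width)
  {
    UpdateLayerMetadata m{};
    m.old_total_parents = n_parents(prev_old_total, chunk_width);
    m.new_total_parents = n_parents(prev_new_total, chunk_width);

    const bool last_chunk_full = prev_old_total > 0
        && (prev_old_total % chunk_width) == 0;
    const bool grew = prev_new_total > prev_old_total;

    m.need_old_last_child = prev_last_hash_changes
        || (prev_old_total > 0 && grew && !last_chunk_full);
    m.need_old_last_parent = m.need_old_last_child;
    m.last_parent_changes  = m.need_old_last_parent;
    return m;
  }

  int main()
  {
    const std::uint64_t width = 3;
    std::uint64_t old_total = 12, new_total = 15; // leaf tuples (2a)
    bool last_changes = false; // leaves are only appended, never
                               // rewritten

    // Step 2b: plan each layer from the prev layer's totals.
    for (int layer = 0; old_total > 1; ++layer)
    {
      const auto m = get_update_layer_metadata(
          old_total, new_total, last_changes, width);
      std::cout << "layer " << layer << ": "
                << m.old_total_parents << " -> " << m.new_total_parents
                << " hashes, needs prev layer's old last hash: "
                << (m.need_old_last_child ? "yes" : "no") << "\n";
      old_total    = m.old_total_parents;
      new_total    = m.new_total_parents;
      last_changes = m.last_parent_changes;
    }
    return 0;
  }

Layers 1 and 2 report that they need the prev layer's old last hash,
matching the edge case above, while layer 0 does not because the
leaf layer's last chunk was already full.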
This approach isolates the update logic and the actual hashing into
neatly structured functions, rather than mixing the two. This makes
the code easier to follow without needing to keep as much in your
head at once.