- Removed call to hash_init_point in constructor
- Replaced global static CURVE_TREES_V1 with a smart pointer
- Don't need to link Rust static lib when including curve_trees.h
- leaves table doesn't need dupsort flags, all leaves should be
unique by key
- rename fcmp -> fcmp_pp
- return when 0 leaves passed into trim_tree
- Can derive {O.x,I.x,C.x} from {O,C}
- Note: this slows down tests since they do the derivation both
on insertion into the tree, and when auditing the tree
- At the hard fork, we don't need to store {O,C} in the
output_amounts table anymore since that table will no longer be
useful
- validate output and commitment in tuple conversion function
- function to get_unlock_height from height in chain + unlock_time (see the sketch after this list)
- tx_outs_to_leaf_tuples function
- cleaned up trim impl (reduced num params in instructions and
conditional complexity)
- renamed locked_outputs table to locked_leaves (clearer tie to
merkle tree)
- size_t -> uint64_t for db compatibility across 32-bit and 64-bit
machines
- added hash_grow tests
The initial impl didn't capture the following edge case:
- Tree has 3 (or more) layers + 1 leaf layer
- Leaf layer last chunk IS full
- Layer 0 last chunk is NOT full
- Layer 1 last chunk is NOT full
- Layer 2 last chunk is NOT full
In this case, when updating layer 1, we need to use layer 0's old
last hash to update layer 1's old last hash. The same applies to
layer 2.
The solution is to use logic that checks the *prev* layer when
updating a layer to determine if the old last hash from the prev
layer is needed.
This commit restructures the grow_tree impl to account for this
and simplifies the approach as follows:
1. Read the tree to get num leaf tuples + last hashes in each layer
2. Get the tree extension using the above values + new leaf tuples
2a. Prior to updating the leaf layer, call the function
get_update_leaf_layer_metadata. This function uses existing totals
in the leaf layer, the new total of leaf tuples, and tree params
to calculate how the layer after the leaf layer should be updated.
2b. For each subsequent layer, call the function
get_update_layer_metadata. This function uses the existing totals
in the *prev* layer, the new total of children in the *prev* layer,
and tree params to calculate how the layer should be updated.
3. Grow the tree using the tree extension.
This approach isolates update logic and actual hashing into neatly
structured functions, rather than mixing the two. This makes the
code easier to follow without needing to keep as much in your head
at once.
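A rough sketch of the per-layer metadata logic in 2b, as a hedged illustration: only `get_update_layer_metadata` itself is named above, so the struct, field names, and parameters below are assumptions, not the actual interface. The prev-layer check from the edge case shows up as the `prev_last_hash_updated` flag:

```cpp
#include <cstdint>

// Hypothetical return type: field names are assumptions for illustration.
struct update_layer_metadata_t
{
    uint64_t new_total_parents;    // hashes in this layer after the update
    bool     need_old_last_parent; // this layer's old last hash gets re-hashed
    bool     need_old_last_child;  // prev layer's old last hash feeds that re-hash
    uint64_t start_offset;         // position in the last chunk to start hashing at
};

// Everything is derived from the *prev* layer's old and new totals plus the
// tree params (chunk width), as described in 2b.
update_layer_metadata_t get_update_layer_metadata(
    const uint64_t old_total_children,     // prev layer total before the update
    const uint64_t new_total_children,     // prev layer total after the update
    const bool     prev_last_hash_updated, // did the prev layer re-hash its old last hash in place?
    const uint64_t chunk_width)            // tree param: children per parent hash
{
    // Ceiling division: number of parent hashes covering the prev layer.
    const uint64_t new_total_parents =
        (new_total_children + chunk_width - 1) / chunk_width;

    // Offset of the first new child inside the old last parent's chunk.
    const uint64_t start_offset = old_total_children % chunk_width;

    // The old last parent must be re-hashed if new children land in its
    // non-full chunk, or if its existing last child changed underneath it
    // (the edge case above).
    const bool need_old_last_parent =
        old_total_children > 0 && (start_offset > 0 || prev_last_hash_updated);

    // Swapping a changed child into the hash requires the prev layer's *old*
    // last hash, so it can be replaced by the new one.
    const bool need_old_last_child = prev_last_hash_updated;

    return { new_total_parents, need_old_last_parent,
             need_old_last_child, start_offset };
}
```

get_update_leaf_layer_metadata would follow the same pattern per 2a, with leaf tuples as the children.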
- When retrieving last chunks, set next_start_child_chunk_index
so the caller knows the correct start index without needing to
modify the offset
- Other smaller cleanup
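For the get_unlock_height bullet above, a minimal sketch using the standard unlock_time convention (values below CRYPTONOTE_MAX_BLOCK_NUMBER are block heights, larger values are unix timestamps); the `block_timestamp` parameter and the timestamp branch are assumptions for illustration, and the real function may differ:

```cpp
#include <algorithm>
#include <cstdint>

constexpr uint64_t CRYPTONOTE_MAX_BLOCK_NUMBER         = 500000000;
constexpr uint64_t CRYPTONOTE_DEFAULT_TX_SPENDABLE_AGE = 10;
constexpr uint64_t DIFFICULTY_TARGET_V2                = 120; // seconds per block

uint64_t get_unlock_height(uint64_t unlock_time,
                           uint64_t block_included_in_chain,
                           uint64_t block_timestamp /* assumed available */)
{
    // Every output is locked for at least the default spendable age.
    const uint64_t default_unlock =
        block_included_in_chain + CRYPTONOTE_DEFAULT_TX_SPENDABLE_AGE;

    if (unlock_time == 0)
        return default_unlock;

    // Small unlock_time values are themselves block heights.
    if (unlock_time < CRYPTONOTE_MAX_BLOCK_NUMBER)
        return std::max(default_unlock, unlock_time);

    // Timestamp-based lock: convert remaining seconds to blocks
    // (rough estimate, rounding up to whole blocks).
    if (unlock_time <= block_timestamp)
        return default_unlock;
    const uint64_t seconds_locked = unlock_time - block_timestamp;
    const uint64_t blocks_locked  =
        (seconds_locked + DIFFICULTY_TARGET_V2 - 1) / DIFFICULTY_TARGET_V2;
    return std::max(default_unlock, block_included_in_chain + blocks_locked);
}
```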
Read more about k-anonymity [here](https://en.wikipedia.org/wiki/K-anonymity). We implement this feature in the Monero daemon for transactions
by providing a "Txid Template", which is simply a txid with all but `num_matching_bits` bits zeroed out, together with the number `num_matching_bits`.
We add an operation to `BlockchainLMDB` called `get_txids_loose` which takes a txid template and returns all txids in the database (chain and
mempool) that satisfy that template. A client can thus ask a daemon about a specific transaction without revealing exactly which transaction it is
inquiring about. The client controls the statistical chance that other txids (besides the one in question) match the template, in powers of 2:
with `num_matching_bits` = 5, any other txid has a 1/(2^5) chance of matching; with `num_matching_bits` = 10, a 1/(2^10) chance; and so on.
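As a concrete illustration, a minimal sketch of template construction and matching, assuming the matching bits are taken from the leading bytes of the txid (the actual bit/byte order used by `get_txids_loose` may differ):

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <cstring>

using txid_t = std::array<std::uint8_t, 32>;

// Build a txid template: keep the first num_matching_bits bits of the txid,
// zero out everything else.
txid_t make_txid_template(const txid_t &txid, unsigned num_matching_bits)
{
    assert(num_matching_bits <= 256);
    txid_t tmpl{}; // all zero
    const unsigned full_bytes = num_matching_bits / 8;
    const unsigned rem_bits   = num_matching_bits % 8;
    std::memcpy(tmpl.data(), txid.data(), full_bytes);
    if (rem_bits)
        tmpl[full_bytes] =
            txid[full_bytes] & static_cast<std::uint8_t>(0xFF << (8 - rem_bits));
    return tmpl;
}

// A txid satisfies a template iff its first num_matching_bits bits match.
bool matches_template(const txid_t &txid, const txid_t &tmpl, unsigned num_matching_bits)
{
    return make_txid_template(txid, num_matching_bits) == tmpl;
}
```

The daemon-side lookup then returns every chain and mempool txid for which the match predicate holds, so smaller values of `num_matching_bits` buy more anonymity at the cost of a larger response.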
Co-authored-by: ACK-J <60232273+ACK-J@users.noreply.github.com>
On startup, the daemon checks stored difficulties against the difficulty checkpoints, and if any mismatch is found, recalculates all the blocks with wrong difficulties. Additionally, once a week it recalculates the difficulties of blocks after the last difficulty checkpoint.
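An illustrative-only sketch of that startup check; `recalculate_difficulties_from` is a hypothetical stand-in for the commit's actual recalculation entry point, while `height()` and `get_block_cumulative_difficulty()` are existing `BlockchainDB` accessors:

```cpp
#include <cstdint>
#include <map>

#include "blockchain_db/blockchain_db.h"  // cryptonote::BlockchainDB (existing)
#include "cryptonote_basic/difficulty.h"  // cryptonote::difficulty_type (existing)

// Hypothetical helper standing in for the real recalculation routine.
void recalculate_difficulties_from(cryptonote::BlockchainDB &db, uint64_t height);

void check_difficulty_checkpoints(
    cryptonote::BlockchainDB &db,
    const std::map<uint64_t, cryptonote::difficulty_type> &checkpoints)
{
    for (const auto &cp : checkpoints)
    {
        const uint64_t height = cp.first;
        if (height >= db.height())
            break;
        if (db.get_block_cumulative_difficulty(height) != cp.second)
        {
            // The first mismatch invalidates everything after it:
            // recalculate all blocks with wrong difficulties from here on.
            recalculate_difficulties_from(db, height);
            break;
        }
    }
}
```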
Ending the db txn in add_block caused the entire overarching
batch txn to stop.
Also add a new guard class so a db txn can be stopped in the
face of exceptions.
Also use a read-only db txn in init when the db itself is
read-only, and do not save the max tx size in that case.
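A minimal sketch of the RAII idea behind such a guard, using the existing `BlockchainDB::block_txn_*` calls; the actual guard class added by the commit may be named and structured differently:

```cpp
#include "blockchain_db/blockchain_db.h"  // BlockchainDB::block_txn_* (existing)

// Illustrative-only guard: stop the db txn explicitly on the normal path,
// abort it from the destructor when an exception unwinds the stack.
class db_txn_guard
{
public:
    db_txn_guard(cryptonote::BlockchainDB &db, bool readonly) : m_db(db)
    {
        m_db.block_txn_start(readonly);
    }
    ~db_txn_guard()
    {
        // Exception path: stop() was never reached, so roll back the txn
        // instead of leaving it dangling.
        if (m_active)
            m_db.block_txn_abort();
    }
    void stop()
    {
        m_db.block_txn_stop(); // normal path
        m_active = false;
    }
private:
    cryptonote::BlockchainDB &m_db;
    bool m_active = true;
};
```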
This curbs runaway growth while still allowing substantial
spikes in block weight
Original specification from ArticMine:
here is the scaling proposal
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
Note: the long term block weight is stored in the database, but not in the actual block itself,
since it requires recalculating anyway for verification.
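For reference, a direct transcription of the spec's formulas into integer arithmetic, as a sketch: the median inputs are assumed to come from the existing median helpers, and per note 2 the exact integer rounding of the 1.4x cap is the consensus-sensitive detail (truncation shown here for simplicity).

```cpp
#include <algorithm>
#include <cstdint>

// LongTermBlockWeight = min(BlockWeight, 1.4 * LongTermEffectiveMedianBlockWeight)
uint64_t long_term_block_weight(uint64_t block_weight,
                                uint64_t long_term_effective_median_weight)
{
    const uint64_t cap = long_term_effective_median_weight * 14 / 10;
    return std::min(block_weight, cap);
}

// LongTermEffectiveMedianBlockWeight =
//   max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
uint64_t long_term_effective_median_weight(uint64_t median_long_term_weight_100000)
{
    return std::max<uint64_t>(300000, median_long_term_weight_100000);
}

// EffectiveMedianBlockWeight =
//   min(max(300000, MedianOverPrevious100Blocks(BlockWeight)),
//       50 * LongTermEffectiveMedianBlockWeight)
uint64_t effective_median_weight(uint64_t median_weight_100,
                                 uint64_t long_term_effective_median_weight)
{
    return std::min(std::max<uint64_t>(300000, median_weight_100),
                    50 * long_term_effective_median_weight);
}
```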