On Mac, size_t is a distinct type from uint64_t, and some
types (in the wallet cache as well as cold/hot wallet transfer
data) use pairs/containers with size_t fields. Mac would save
those at full width, while other platforms would save them as
varints. This might apply to other platforms where the types
are distinct.
There's a nasty hack for backward compatibility, which can
go after a couple of forks.
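A minimal sketch of the pitfall and the portable fix, not
Monero's actual serializer; the writer functions here are
illustrative:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical varint writer: 7 bits per byte, high bit set
    // while more bytes follow.
    inline void write_varint(std::vector<std::uint8_t> &out, std::uint64_t v)
    {
      while (v >= 0x80)
      {
        out.push_back(static_cast<std::uint8_t>(v | 0x80));
        v >>= 7;
      }
      out.push_back(static_cast<std::uint8_t>(v));
    }

    // On Linux, size_t is typically a typedef of uint64_t, so a
    // size_t field matches the varint overload. On Mac, size_t is
    // a distinct type, so overload resolution can fall back to a
    // full-width (fixed 8-byte) writer, producing incompatible
    // data. Casting explicitly yields the same bytes everywhere:
    inline void write_size(std::vector<std::uint8_t> &out, std::size_t n)
    {
      write_varint(out, static_cast<std::uint64_t>(n));
    }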
There are quite a few variables in the code that are no longer
(or perhaps never were) in use. These were found by enabling
compiler warnings for unused variables, and have now been
cleaned up. In most cases where an unused variable held the
result of a function call, the call was kept and only the
assignment removed, unless it was obvious that the call was a
simple getter with no side effects.
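A hypothetical example of the pattern (the names are
illustrative, not from the actual commits):

    // Before: 'count' is never read again, so -Wunused-variable
    // fires:
    //   size_t count = m_blockchain.get_transaction_count();
    // After: the call might lock or log, so it is kept and only
    // the assignment is dropped:
    //   m_blockchain.get_transaction_count();
    // An obvious side-effect-free getter is removed entirely:
    //   size_t n = v.size();   // whole statement deleted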
A 20% fluff probability increases the precision of a spy connected to
every node by 10% on average, compared to a network using 0% fluff
probability. The current value (10% fluff) should increase precision by
~5% compared to baseline.
This decreases the expected stem length from 10 to 5. The embargo
timeout was therefore lowered to 39s; the fifth node in a stem is
expected to have a 90% chance of being the first to time out, which is
the same probability we currently have with an expected stem length of
10 nodes.
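For reference, the stem-length figures follow from assuming
each stem hop independently switches to fluff with probability
p, making the stem length geometric:

    E[L] = \sum_{k=1}^{\infty} k \, p \, (1-p)^{k-1} = 1/p

so p = 0.1 gives an expected stem length of 10 hops and
p = 0.2 gives 5.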
A miner with MLSAG txes it had already verified included a
couple of them in that block, but the consensus rules had
changed in the meantime, so that block is technically invalid,
and any node which did not already have those two txes in its
txpool could not sync. Grandfather them in, since this has no
effect in practice.
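A minimal sketch of one way to implement such grandfathering,
assuming a hardcoded allowlist of tx hashes consulted during
verification; the hashes below are zeroed placeholders and the
names are illustrative:

    #include <array>
    #include <cstdint>
    #include <cstring>

    struct hash32 { std::uint8_t bytes[32]; };

    // Placeholders only; the real grandfathered tx hashes would
    // go here.
    static constexpr std::array<hash32, 2> GRANDFATHERED_TXES{{
      {{0}},
      {{0}},
    }};

    inline bool is_grandfathered(const hash32 &txid)
    {
      for (const auto &h : GRANDFATHERED_TXES)
        if (std::memcmp(h.bytes, txid.bytes, sizeof(h.bytes)) == 0)
          return true;
      return false;
    }

    // At verification time, accept the tx despite the rule
    // change iff its hash is on the list:
    //   if (!passes_current_rules(tx) && !is_grandfathered(tx_hash(tx)))
    //     reject();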
d20ff4f64 functional_tests: add a large (many randomx epochs) p2p reorg test (moneromooo-monero)
6a0b3b1f8 functional_tests: add randomx tests (moneromooo-monero)
9d42649d5 core: fix mining from a block that's not the current top (moneromooo-monero)
The cache is discarded when a block is popped, but then gets
rebuilt when the difficulty for the next block is requested.
While this is all properly locked, it does not take into account
the delay caused by a database transaction only being committed
(and thus its effects made visible to other threads) later on.
Another thread could therefore request difficulty between the
pop and the commit, rebuilding the cache from a stale database
view; that cache would not be invalidated again when the
transaction gets committed, so it would no longer match the new
database data.
To fix this, we now keep track of when the cache is invalidated,
so we can invalidate it again upon database transaction commit,
ensuring it gets recalculated with fresh data the next time it
is needed.
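A minimal sketch of the idea, assuming a mutex-guarded cache
plus a hook invoked when the database transaction commits; the
class and member names are illustrative, not Monero's actual
ones:

    #include <cstdint>
    #include <mutex>
    #include <optional>

    class difficulty_cache
    {
      std::mutex m;
      std::optional<std::uint64_t> cached;
      bool invalidated_during_txn = false; // pending commit?

    public:
      void invalidate(bool txn_open)
      {
        std::lock_guard<std::mutex> lock(m);
        cached.reset();
        if (txn_open)
          invalidated_during_txn = true; // commit still pending
      }

      // Called from the DB commit hook: any cache rebuilt between
      // the pop and the commit used a stale view, so discard it
      // once more.
      void on_txn_commit()
      {
        std::lock_guard<std::mutex> lock(m);
        if (invalidated_during_txn)
        {
          cached.reset();
          invalidated_during_txn = false;
        }
      }

      // Rebuilds from the database when the cache is empty.
      template <typename F>
      std::uint64_t get(F &&recompute)
      {
        std::lock_guard<std::mutex> lock(m);
        if (!cached)
          cached = recompute();
        return *cached;
      }
    };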