hash: add prehashed version cn_slow_hash_prehashed
slow-hash: let cn_slow_hash take 4th parameter for deciding prehashed or not
slow-hash: add support for prehashed version for the other 3 platforms
When #3303 was merged, a cyclic dependency chain was generated:
libdevice <- libcncrypto <- libringct <- libdevice
This was because libdevice needs access to a set of basic crypto operations
implemented in libringct such as scalarmultBase(), while libringct also needs
access to abstracted crypto operations implemented in libdevice such as
ecdhEncode(). To untangle this cyclic dependency chain, this patch splits libringct
into libringct_basic and libringct, where the basic crypto ops previously in
libringct are moved into libringct_basic. The cyclic dependency is now resolved
thanks to this separation:
libcncrypto <- libringct_basic <- libdevice <- libcryptonote_basic <- libringct
This eliminates the need for crypto_device.cpp and rctOps_device.cpp.
Also, many abstracted interfaces of hw::device such as encrypt_payment_id() and
get_subaddress_secret_key() were previously implemented in libcryptonote_basic
(cryptonote_format_utils.cpp) and were then called from hw::core::device_default,
which is odd because libdevice is supposed to be independent of libcryptonote_basic.
Therefore, those functions were moved to device_default.cpp.
0e7ad2e2 Wallet API: generalize 'bool testnet' to 'NetworkType nettype' (stoffu)
af773211 Stagenet (stoffu)
cc9a0bee command_line: allow args to depend on more than one args (stoffu)
55f8d917 command_line::get_arg: remove 'required' for dependent args as they're always optional (stoffu)
450306a0 command line: allow has_arg to handle arg_descriptor<bool,false,true> #3318 (stoffu)
9f9e095a Use `genesis_tx` parameter in `generate_genesis_block`. #3261 (Jean Pierre Dudey)
The basic approach is to delegate all sensitive data (master key, secret
ephemeral key, key derivation, ...) and related operations to the device.
As the device has little memory, it does not keep these values itself
(except for the view/spend keys): once computed, they are encrypted (with AES
or equivalent) and returned to monero-wallet-cli. When they need to be
manipulated by the device, they are decrypted on receipt.
Moreover, having the client store these values in encrypted form limits
the modifications to the client code. Those values are transferred from one
C structure to another as before.
The code changes were made with the intent of staying open to any
other hardware wallet. To achieve that, a C++ class hw::device has been
introduced. Two initial implementations are provided: the "default", which
remaps all calls to the original Monero code, and the "Ledger", which delegates
all calls to the Ledger device.
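As a rough illustration of that abstraction (the class layout, names and signatures below are assumptions made for this sketch, not the actual hw::device header):

#include <iostream>
// Illustrative sketch only: assumed shape, not the real Monero header.
namespace hw {
// Abstract device: sensitive key operations go through this interface.
class device {
public:
  virtual ~device() = default;
  // One representative abstracted crypto operation named above; the real
  // method takes ecdh tuples and keys rather than nothing.
  virtual void ecdhEncode() = 0;
};
namespace core {
// "default" implementation: remaps the call to the existing software code path.
class device_default : public device {
public:
  void ecdhEncode() override { std::cout << "software ecdhEncode\n"; }
};
}
namespace ledger {
// "Ledger" implementation: forwards the same call to the Ledger hardware.
class device_ledger : public device {
public:
  void ecdhEncode() override { std::cout << "delegate ecdhEncode to Ledger\n"; }
};
}
} // namespace hw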
e4646379 keccak: fix mdlen bounds sanity checking (moneromooo-monero)
2e3e90ac pass large parameters by const ref, not value (moneromooo-monero)
61defd89 blockchain: sanity check number of precomputed hash of hash blocks (moneromooo-monero)
9af6b2d1 ringct: fix infinite loop in unused h2b function (moneromooo-monero)
8cea8d0c simplewallet: double check a new multisig wallet is multisig (moneromooo-monero)
9b98a6ac threadpool: catch exceptions in dtor, to avoid terminate (moneromooo-monero)
24803ed9 blockchain_export: fix buffer overflow in exporter (moneromooo-monero)
f3f7da62 perf_timer: rewrite to make it clear there is no division by zero (moneromooo-monero)
c6ea3df0 performance_tests: remove add_arg call stray extra param (moneromooo-monero)
fa6b4566 fuzz_tests: fix an uninitialized var in setup (moneromooo-monero)
03887f11 keccak: fix sanity check bounds test (moneromooo-monero)
ad11db91 blockchain_db: initialize m_open in base class ctor (moneromooo-monero)
bece67f9 miner: restore std::cout precision after modification (moneromooo-monero)
1aabd14c db_lmdb: check hard fork info drop succeeded (moneromooo-monero)
6d8b29ef fix some link errors in debug mode for macos (stoffu)
fdd4c5e5 move memwipe to epee to avoid common<->crypto circular dependencies (moneromooo-monero)
40ab12a7 epee: remove dependency on common (moneromooo-monero)
bd5cce07 network_throttle: fix ineffective locking (moneromooo-monero)
e0a61299 network_throttle: remove unused xxx static member (moneromooo-monero)
24f584d9 cryptonote_core: remove unused functions with off by one bugs (moneromooo-monero)
b1634aa3 blockchain: don't leave dangling pointers in this (moneromooo-monero)
8e60b81c cryptonote_core: fix db leak on error (moneromooo-monero)
213e326c abstract_tcp_server2: log init_server errors as fatal (moneromooo-monero)
b51dc566 use const refs in for loops for non tiny types (moneromooo-monero)
f0568ca6 net_parse_helpers: fix regex error checking (moneromooo-monero)
b49ddc76 check accessing an element past the end of a container (moneromooo-monero)
2305bf26 check return value for generate_key_derivation and derive_public_key (moneromooo-monero)
a4240d9f catch const exceptions (moneromooo-monero)
45a1c4c0 add empty container sanity checks when using front() and back() (moneromooo-monero)
56fa6ce1 tests: fix a buffer overread in a unit test (moneromooo-monero)
b4524892 rpc: guard against json parsing a non object (moneromooo-monero)
c2ed8618 easylogging++: avoid buffer underflow (moneromooo-monero)
187a6ab2 epee: trap failure to parse URI from request (moneromooo-monero)
061789b5 checkpoints: trap failure to load JSON checkpoints (moneromooo-monero)
ba2fefb9 checkpoints: pass std::string by const ref, not const value (moneromooo-monero)
38c8f4e0 mlog: terminate a string at last char, just in case (moneromooo-monero)
d753d716 fix a few leaks by throwing objects, not newed pointers to objects (moneromooo-monero)
fe568db8 p2p: use size_t for arbitrary counters instead of uint8_t (moneromooo-monero)
46d6fa35 cryptonote_protocol: sanity check chain hashes from peer (moneromooo-monero)
25584f86 cryptonote_protocol: print peer versions when unexpected (moneromooo-monero)
490a5d41 rpc: do not try to use an invalid txid in relay_tx (moneromooo-monero)
Scheme by luigi1111:
Multisig for RingCT on Monero
2 of 2
User A (coordinator):
Spendkey b,B
Viewkey a,A (shared)
User B:
Spendkey c,C
Viewkey a,A (shared)
Public Address: C+B, A
Both have their own watch only wallet via C+B, a
A will coordinate spending process (though B could easily as well, coordinator is more needed for more participants)
A and B watch for incoming outputs
B creates "half" key images for discovered output D:
I2_D = (Hs(aR)+c) * Hp(D)
B also creates 1.5 random keypairs (one scalar and 2 pubkeys; one on base G and one on base Hp(D)) for each output, storing the scalar(k) (linked to D),
and sending the pubkeys with I2_D.
A also creates "half" key images:
I1_D = (Hs(aR)+b) * Hp(D)
Then I_D = I1_D + I2_D
Having I_D allows A to check spent status of course, but more importantly allows A to actually build a transaction prefix (and thus transaction).
A builds the transaction until most of the way through MLSAG_Gen, adding the 2 pubkeys (per input) provided with I2_D
to his own generated ones where they are needed (secret row L, R).
At this point, A has a mostly completed transaction (but with an invalid/incomplete signature). A sends over the tx and includes r,
which allows B (with the recipient's address) to verify the destination and amount (by reconstructing the stealth address and decoding ecdhInfo).
B then finishes the signature by computing ss[secret_index][0] = ss[secret_index][0] + k - cc[secret_index]*c (secret indices need to be passed as well).
B can then broadcast the tx, or send it back to A for broadcasting. Once B has completed the signing (and verified the tx to be valid), he can add the full I_D
to his cache, allowing him to verify spent status as well.
NOTE:
A and B *must* present keys A and B to each other with a valid signature proving they know a and b respectively.
Otherwise, trickery like the following becomes possible:
A creates viewkey a,A, spendkey b,B, and sends a,A,B to B.
B creates a fake key C = zG - B. B sends C back to A.
The combined spendkey C+B then equals zG, allowing B to spend funds at any time!
The signature fixes this, because B does not know a c corresponding to C (and thus can't produce a signature).
2 of 3
User A (coordinator)
Shared viewkey a,A
"spendkey" j,J
User B
"spendkey" k,K
User C
"spendkey" m,M
A collects K and M from B and C
B collects J and M from A and C
C collects J and K from A and B
A computes N = nG, n = Hs(jK)
A computes O = oG, o = Hs(jM)
B and C compute P = pG, p = Hs(kM) || Hs(mK)
B and C can also compute N and O respectively if they wish to be able to coordinate
Address: N+O+P, A
The rest follows as above. The coordinator possesses 2 of 3 needed keys; he can get the other
needed part of the signature/key images from either of the other two.
Alternatively, if secure communication exists between parties:
A gives j to B
B gives k to C
C gives m to A
Address: J+K+M, A
3 of 3
Identical to 2 of 2, except the coordinator must collect the key images from both of the others.
The transaction must also be passed an additional hop: A -> B -> C (or A -> C -> B), who can then broadcast it
or send it back to A.
N-1 of N
Generally the same as 2 of 3, except participants need to be arranged in a ring to pass their keys around
(using either the secure or insecure method).
For example (ignoring viewkey so letters line up):
[4 of 5]
User: spendkey
A: a
B: b
C: c
D: d
E: e
a -> B, b -> C, c -> D, d -> E, e -> A
Order of signing does not matter, it just must reach n-1 users. A "remaining keys" list must be passed around with
the transaction so the signers know if they should use 1 or both keys.
Collecting key image parts becomes a little messy, but basically every wallet sends over both of their parts with a tag for each.
This way the coordinating wallet can keep track of which images have been added and which wallet they come from. Reasoning:
1. The key images must be added only once (the coordinator will get key images for key a from both A and B; he must add only one to get the proper actual key image)
2. The coordinator must keep track of which helper pubkeys came from which wallet (discussed in 2 of 2 section). The coordinator
must choose only one set to use, then include his choice in the "remaining keys" list so the other wallets know which of their keys to use.
You can generalize it further to N-2 of N or even M of N, but I'm not sure there's legitimate demand to justify the complexity. It might
also be straightforward enough to support with minimal changes from N-1 format.
You basically just give each user additional keys for each additional "-1" you desire. N-2 would be 3 keys per user, N-3 4 keys, etc.
The process is somewhat cumbersome:
To create a N/N multisig wallet:
- each participant creates a normal wallet
- each participant runs "prepare_multisig", and sends the resulting string to every other participant
- each participant runs "make_multisig N A B C D...", with N being the threshold and A B C D... being the strings received from other participants (the threshold must currently equal N)
As txes are received, participants' wallets will need to synchronize so that those new outputs may be spent:
- each participant runs "export_multisig FILENAME", and sends the FILENAME file to every other participant
- each participant runs "import_multisig A B C D...", with A B C D... being the filenames received from other participants
Then, a transaction may be initiated:
- one of the participants runs "transfer ADDRESS AMOUNT"
- this partly signed transaction will be written to the "multisig_monero_tx" file
- the initiator sends this file to another participant
- that other participant runs "sign_multisig multisig_monero_tx"
- the resulting transaction is written to the "multisig_monero_tx" file again
- if the threshold was not reached, the file must be sent to another participant, until enough have signed
- the last participant to sign runs "submit_multisig multisig_monero_tx" to relay the transaction to the Monero network
43f5269f Wallets now do not depend on the daemon rpc lib (moneromooo-monero)
bb89ae8b move connection_basic and network_throttle from src/p2p to epee (moneromooo-monero)
4abf25f3 cryptonote_core does not depend on p2p anymore (moneromooo-monero)
Partially implements #74.
Securely erases keys from memory after they are no longer needed. Might have a
performance impact, which I haven't measured (perf measurements aren't
generally reliable on laptops).
Thanks to @stoffu for the suggestion to specialize the pod_to_hex/hex_to_pod
functions. Using overloads + SFINAE instead generalizes it so other types can
be marked as scrubbed without adding more boilerplate.
3dffe71b new wipeable_string class to replace std::string passphrases (moneromooo-monero)
7a2a5741 utils: initialize easylogging++ in on_startup (moneromooo-monero)
54950829 use memwipe in a few relevant places (moneromooo-monero)
000666ff add a memwipe function (moneromooo-monero)
0d9c0db9 Do not build against epee_readline if it was not built (Howard Chu)
178014c9 split off readline code into epee_readline (moneromooo-monero)
a9e14a19 link against readline only for monerod and wallet-wallet-{rpc,cli} (moneromooo-monero)
437421ce wallet: move some scoped_message_writer calls from the libs (moneromooo-monero)
e89994e9 wallet: rejig to avoid prompting in wallet2 (moneromooo-monero)
ec5135e5 move input_line from command_line to simplewallet (moneromooo-monero)
082db75f move cryptonote command line options to cryptonote_core (moneromooo-monero)
This patch allows filtering out sensitive information from queries that rely on the pool state, when running in restricted mode.
This filtering is only applied to data sent back to RPC queries. Results of inline commands typed locally in the daemon are not affected.
In practice, when running with `--restricted-rpc`:
* get_transaction_pool will list relayed transactions with the fields "last relayed time" and "received time" set to zero.
* get_transaction_pool will not list transactions that have do_not_relay set to true, and will not list key images that are used only for such transactions
* get_transaction_pool_hashes.bin will not list such transactions
* get_transaction_pool_stats will not count such transactions in any of the aggregated values that are computed
The implementation does not make filtering the default, so developers should be mindful of this if they add new RPC functionality.
Fixes #2590.
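As a minimal sketch of the opt-in filtering this implies (types and names below are hypothetical, not the actual daemon code):

#include <cstdint>
#include <vector>
// Hypothetical stand-ins for pool metadata and the RPC response entry.
struct pool_entry { bool do_not_relay; uint64_t receive_time, last_relayed_time; };
struct tx_info    { uint64_t receive_time, last_relayed_time; };
// Build the RPC view of the pool; in restricted mode, drop do_not_relay txes
// and zero out the timing fields, as described above.
std::vector<tx_info> pool_view(const std::vector<pool_entry> &pool, bool restricted)
{
  std::vector<tx_info> out;
  for (const auto &e : pool)
  {
    if (restricted && e.do_not_relay)
      continue;                                        // never expose unrelayed txes
    tx_info info{e.receive_time, e.last_relayed_time};
    if (restricted)
      info.receive_time = info.last_relayed_time = 0;  // scrub timing metadata
    out.push_back(info);
  }
  return out;
}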
ff7745bb Edited test readme for accuracy and depth (Cole Lightfighter)
c300ae56 Added test documentation & Keccak unit test (Cole Lightfighter)
f6119a8e Added test documentation & Keccak unit test (Cole Lightfighter)
Tests for checking proper error throwing for out-of-bounds subaddress
indexes, and proper addition of subaddresses.
Signed-off-by: Cole Lightfighter <cole@onicsla.bz>
The commands handler must not be destroyed before the config
object, or we'll be accessing freed memory.
An earlier attempt at using boost::shared_ptr to control object
lifetime turned out to be very invasive, though would be a
better solution in theory.
- internal nullptr checks
- prevent modifications to network_address (shallow copy issues)
- automagically works with any type containing interface functions
- removed fnv1a hashing
- ipv4_network_address now flattened with no base class
6137a0b9 blockchain: reject unsorted ins and outs from v7 (moneromooo-monero)
16afab90 core: sort ins and outs by key image and public key, respectively (moneromooo-monero)
0c36b9f9 common: add apply_permutation file and function (moneromooo-monero)
It was always returning true, and could not be foreseen to
usefully return errors in the future. This silences CID 162652
as well as saves some checking code in a few places.
Also optimize import startup:
Remember the start_height position during the initial count_blocks pass
to avoid having to reread the entire file to arrive at start_height
If monerod is started with default sync mode, set it to SAFE after
synchronization completes. Set it back to FAST if synchronization
restarts (e.g. because another peer has a longer blockchain).
If monerod is started with an explicit sync mode, none of this
automation takes effect.
Add get_fork_version and add_ideal_fork_version to core so
cryptonote_protocol does not have to need the Blockchain
class directly, as it's not in its dependencies, and add
those to the fake core classes in tests too.
A block queue is now placed between block download and
block processing. Blocks are now requested only from one
peer (unless starved).
Includes a new sync_info command.
Existing tests: block, transaction, signature, cold outputs,
cold transaction.
Data for these is in tests/data/fuzz.
A convenience shell script is in contrib/fuzz_testing/fuzz.sh, eg:
contrib/fuzz_testing/fuzz.sh signature
The fuzzer will run indefinitely, ^C to stop.
Fuzzing is currently supported for GCC only. I can't get CLANG
to build Monero here as it dies on some system headers, so if
someone wants to make it work on both, that'd be great.
In particular, the __AFL_LOOP construct should be made to work
so that a given run can fuzz multiple inputs, as the C++ load
time is substantial.
- Performance improvements
- Added `span` for zero-copy pointer+length arguments (see the sketch after this list)
- Added `std::ostream` overload for direct writing to output buffers
- Removal of unused `string_tools::buff_to_hex`
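A minimal illustration of the pointer+length idea behind such a span type (not the actual epee implementation):

#include <cstddef>
// A non-owning view over a contiguous buffer, so callers can pass
// pointer+length without copying the data.
template <typename T>
class span
{
public:
  span(T *ptr, std::size_t len) : ptr_(ptr), len_(len) {}
  T *data() const { return ptr_; }
  std::size_t size() const { return len_; }
  T *begin() const { return ptr_; }
  T *end() const { return ptr_ + len_; }
private:
  T *ptr_;
  std::size_t len_;
};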
Minimum mixin 4 and enforced ringct is moved from v5 to v6.
v5 is now used for an increased minimum block size (from 60000
to 300000) to cater for larger typical/minimum transaction size.
The fee algorithm is also changed to decrease the base per kB
fee, and add a cheap tier for those transactions which we do
not care if they get delayed (or even included in a block).
a427235e core: add a missing newline on a string to be logged (moneromooo-monero)
b6a2230e unit_tests: fix minor blockchain_db regression (moneromooo-monero)
c488eca5 hardfork: tone down some logs (moneromooo-monero)
5adcb5a4 tx_pool: add a debug message when adding a tx to the pool (moneromooo-monero)
9faef1f8 cryptonote_protocol: misc fluffy block fixes (moneromooo-monero)
7403e56f performance_tests: report small time per call in microseconds (moneromooo-monero)
cadada2d performance_tests: add tests for sc_reduce32 and cn_fast_hash (moneromooo-monero)
962c72b6 performance_tests: initialize logging at startup (moneromooo-monero)
- fix wrong block being used when a new block is received between
a node relaying a fluffy block and sending a new fluffy block
with txes a peer did not have
- fix a neverending ping pong requesting the same missing txids
when a new block is received in the meantime, causing the top
block to not be the one we need
- send the original fluffy block message block height when sending
a new fluffy block, not the current top height, which might
have been updated since
- avoid sending back the whole block blob when asking for txes,
send only the hash instead
- plus misc cleanup and additional debugging logs
In simple terms, add_subdirectory() is replaced with ExternalProject_Add().
This change is inspired by https://crascit.com/2015/07/25/cmake-gtest/
with one difference, no download, using the source we already have.
Before this change, make debug-test must be preceded by make clean.
Otherwise, a subsequent build would be polluted by cmake options made
by tests/gtest/.
Also removed the changed compiler flags. My test build did not have
the affected warnings.
3ae79a59 core: set missing verifivation_failed flag when rejecting a tx (moneromooo-monero)
ea6549e9 core_tests: decrease trace level from trace to debug (moneromooo-monero)
- http_simple_client now uses std::chrono for timeouts
- http_simple_client accepts timeouts per connect / invoke call
- shortened names of epee http invoke functions
- invoke command functions only take relative path, connection
is not automatically performed
This replaces the epee and data_loggers logging systems with
a single one, and also adds filename:line and explicit severity
levels. Categories may be defined, and logging severity set
by category (or set of categories). epee style 0-4 log level
maps to a sensible severity configuration. Log files now also
rotate when reaching 100 MB.
To select which logs to output, use the MONERO_LOGS environment
variable, with a comma separated list of categories (globs are
supported), with their requested severity level after a colon.
If a log matches more than one such setting, the last one in
the configuration string applies. A few examples:
This one is (mostly) silent, only outputting fatal errors:
MONERO_LOGS=*:FATAL
This one is very verbose:
MONERO_LOGS=*:TRACE
This one is totally silent (logwise):
MONERO_LOGS=""
This one outputs all errors and warnings, except for the
"verify" category, which prints just fatal errors (the verify
category is used for logs about incoming transactions and
blocks, and it is expected that some/many will fail to verify,
hence we don't want the spam):
MONERO_LOGS=*:WARNING,verify:FATAL
Log levels are, in decreasing order of priority:
FATAL, ERROR, WARNING, INFO, DEBUG, TRACE
Subcategories may be added using prefixes and globs. This
example will output net.p2p logs at the TRACE level, but all
other net* logs only at INFO:
MONERO_LOGS=*:ERROR,net*:INFO,net.p2p:TRACE
Logs which are intended for the user (which Monero was using
a lot through epee, but really isn't a nice way to do things)
should use the "global" category. There are a few helper macros
for using this category, eg: MGINFO("this shows up by default")
or MGINFO_RED("this is red"), to try to keep a similar look
and feel for now.
Existing epee log macros still exist, and map to the new log
levels, but since they're used as a "user facing" UI element
as much as a logging system, they often don't map well to log
severities (ie, a log level 0 log may be an error, or may be
something we want the user to see, such as an important piece of info).
In those cases, I tried to use the new macros. In other cases,
I left the existing macros in. When modifying logs, it is
probably best to switch to the new macros with explicit levels.
The --log-level options and set_log commands now also accept
category settings, in addition to the epee style log levels.
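For example, combining the user-facing helpers named above with a category-specific macro (the MCINFO(category, message) form is an assumption for this sketch; only MGINFO/MGINFO_RED are quoted from the text above):

#include "misc_log_ex.h"   // assumed include for the Monero logging macros
void logging_example()
{
  MGINFO("this shows up by default");        // user-facing "global" category
  MGINFO_RED("this is red");                 // colored variant, same category
  MCINFO("net.p2p", "peer handshake done");  // explicit category and INFO severity (assumed macro)
}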
3ff54bdd Check for correct thread before ending batch transaction (Howard Chu)
eaf8470b Must wait for previous batch to finish before starting new one (Howard Chu)
c903c554 Don't cache block height, always get from DB (Howard Chu)
eb1fb601 Tweak default db-sync-mode to fast:async:1 (Howard Chu)
0693cff9 Use batch transactions when syncing (Howard Chu)
If a checksum word is present, language detection would use
just the word prefixes. However, a set of word prefixes may
be found in more than one language, and so the wrong language
may be found first, which could then fail the checksum, since
the check may be done with a different unique prefix length
from the one it was created from.
We now make a checksum test when we detect a language from
prefixes only, to make sure we have the correct one.
5783dd8c tests: add unit tests for uri parsing (moneromooo-monero)
82ba2108 wallet: add API and RPC to create/parse monero: URIs (moneromooo-monero)
d9001b43 epee: add functions to convert from URL format (ie, %XX values) (moneromooo-monero)
48b57d8 monero.supp: valgrind suppressions file (moneromooo-monero)
ffd8c41 ringct: check the size of amount_keys is the same as destinations (moneromooo-monero)
836669d ringct: always shutdown the boost io service (moneromooo-monero)
The fee will vary based on the base reward and the current
block size limit:
fee = (R/R0) * (M0/M) * F0
R: base reward
R0: reference base reward (10 monero)
M: block size limit
M0: minimum block size limit (60000)
F0: 0.002 monero
Starts applying at v4
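A hedged sketch of that formula (integer scaling and rounding are simplified here; consensus code would use exact integer arithmetic):

#include <cstdint>
// fee = (R/R0) * (M0/M) * F0, with the constants quoted above.
// Amounts are in atomic units (1 monero = 1e12); long double keeps the sketch short.
uint64_t dynamic_fee_per_kb(uint64_t base_reward /*R*/, uint64_t block_size_limit /*M*/)
{
  const long double R0 = 10e12L;      // reference base reward: 10 monero
  const long double M0 = 60000.0L;    // minimum block size limit
  const long double F0 = 0.002e12L;   // 0.002 monero
  const long double fee = (base_reward / R0) * (M0 / block_size_limit) * F0;
  return static_cast<uint64_t>(fee);
}
// With R = 10 monero and M = M0, this yields exactly F0 = 0.002 monero per kB.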
8b20cbf libwallet_api: do not use fast-refresh on recovery (Ilya Kitaev)
10fe626 libwallet_api: fast-refresh in case of opening non-synced wallet (Ilya Kitaev)
0019e31 libwallet_api: fix unhandled exception on address check (Ilya Kitaev)
1f73f80 libwallet_api: fast-refresh for new wallet (Ilya Kitaev)
4789347 libwallet_api: test for create/init wallet on mainnet (Ilya Kitaev)
bba6af9 wallet: cold wallet transaction signing (moneromooo-monero)
9872dcb wallet: fix log confusion between bytes and kilobytes (moneromooo-monero)
d9b0bf9 cryptonote_core: make extra field removal more generic (moneromooo-monero)
98f19d4 serialization: add support for serializing std::pair and std::list (moneromooo-monero)
This change adds the ability to create a new unsigned transaction
from a watch only wallet, and save it to a file. This file can
then be moved to another computer/VM where a cold wallet may load
it, sign it, and save it. That cold wallet does not need to have
a blockchain nor daemon. The signed transaction file can then be
moved back to the watch only wallet, which can load it and send
it to the daemon.
Two new simplewallet commands to use it:
sign_transfer (on the cold wallet)
submit_transfer (on the watch only wallet)
The transfer command used on a watch only wallet now writes an
unsigned transaction set in a file called 'unsigned_monero_tx'
instead of submitting the tx to the daemon as a normal wallet does.
The signed tx file is called 'signed_monero_tx'.
Keep the immediate direct deps at the library that depends on them,
declare deps as PUBLIC so that targets that link against that library
get the library's deps as transitive deps.
Break dep cycle between blockchain_db <-> cryptonote_core.
No code refactoring, just hide cycle from cmake so that
it doesn't complain (cycles are allowed only between
static libs, not shared libs).
This is in preparation for supporting the BUILD_SHARED_LIBS cmake
built-in option for building internal libs as shared.
Since this queries block heights for blocks that may or may not
exist, queries for non existing blocks would throw an exception,
and that would slow down the loop a lot. 7 seconds to go through
a 30 hash list.
Fix this by adding an optional return block height to block_exists
and using this instead. Actual errors will still throw an
exception.
This also cuts down on log exception spam.
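A self-contained sketch of the pattern (container and type names are hypothetical, for illustration only):

#include <cstdint>
#include <map>
// Hypothetical 256-bit hash stand-in.
struct hash256 { uint64_t v; bool operator<(const hash256 &o) const { return v < o.v; } };
class block_index
{
  std::map<hash256, uint64_t> heights_;   // hash -> height
public:
  void add(const hash256 &h, uint64_t height) { heights_[h] = height; }
  // Existence check that can also return the height, so callers never need
  // a second, throwing height lookup for blocks that may not exist.
  bool block_exists(const hash256 &h, uint64_t *height = nullptr) const
  {
    const auto it = heights_.find(h);
    if (it == heights_.end())
      return false;                       // missing block: no exception, just false
    if (height)
      *height = it->second;
    return true;
  }
};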
When RingCT is enabled, outputs from coinbase transactions
are created as a single output, and stored as RingCT output,
with a fake mask. Their amount is not hidden on the blockchain
itself, but they are then able to be used as fake inputs in
a RingCT ring. Since the output amounts are hidden, their
"dustiness" is not an obstacle anymore to mixing, and this
makes the coinbase transactions a lot smaller, as well as
helping the TXO set to grow more slowly.
Also add a new "Null" type of rct signature, which decreases
the size required when no signatures are to be stored, as
in a coinbase tx.
This allows the key to differ for two outputs sent to
the same address (eg, if you pay yourself, and also get change
back). Also remove the key amounts lists and return parameters
since we don't actually generate random ones, so we don't need
to save them as we can recalculate them when needed if we have
the correct keys.
The whole rct data apart from the MLSAGs is now included in
the signed message, to avoid malleability issues.
Instead of passing the data that's not serialized as extra
parameters to the verification API, the transaction is modified
to fill all that information. This means the transaction can
not be const anymore, but it is cleaner in other ways.
Since these are needed at the same time as the output pubkeys,
this is a whole lot faster, and takes less space. Only outputs
of 0 amount store the commitment. When reading other outputs,
a fake commitment is regenerated on the fly. This avoids having
to rewrite the database to add space for fake commitments for
existing outputs.
This code relies on two things:
- LMDB must support fixed size records per key, rather than
per database (ie, all records on key 0 are the same size, all
records for non 0 keys are same size, but records from key 0
and non 0 keys do have different sizes).
- the commitment must be directly after the rest of the data
in outkey and output_data_t.
The mixRing (output keys and commitments) and II fields (key images)
can be reconstructed from vin data.
This saves some modest amount of space in the tx.
It may be suboptimal, but it's a pain to have to rebuild everything
when some of this changes.
Also, no clue why there seem to be two different code paths for
serializing a tx...
This plugs a privacy leak from the wallet to the daemon,
as the daemon could previously see what input is included
as a transaction input, which the daemon hadn't previously
supplied. Now, the wallet requests a particular set of
outputs, including the real one.
This can result in transactions that can't be accepted if
the wallet happens to select too many outputs with non standard
unlock times. The daemon could know this and select another
output, but the wallet is blind to it. It's currently very
unlikely since I don't think anything uses non default
unlock times. The wallet requests more outputs than necessary
so it can use spares if any of the returned outputs are still
locked. If there are not enough spares to reach the desired
mixin, the transaction will fail.
Compilation of bitmonero on Arch with gcc 6.1 results in the following
error:
/home/mwo/bitmonero/tests/unit_tests/hardfork.cpp: In member function ‘virtual void TestDB::set_hard_fork_version(uint64_t, uint8_t)’:
/home/mwo/bitmonero/tests/unit_tests/hardfork.cpp:132:5: error: this ‘if’ clause does not guard... [-Werror=misleading-indentation]
if (versions.size() <= height) versions.resize(height+1); versions[height] = version;
This can be fixed by simply unfolding this line into three lines.
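The unfolded form then reads:

if (versions.size() <= height)
  versions.resize(height + 1);
versions[height] = version;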
The tests for rejection of unmixable outputs in v2 are commented out,
as there are no unmixable outputs created anymore. This should be
restored at some point.
This is a list of existing output amounts along with the number
of outputs of that amount in the blockchain.
The daemon command takes:
- no parameters: all outputs with at least 3 instances
- one parameter: all outputs with at least that many instances
- two parameters: all outputs with an instance count within that range
The default starts at 3 to avoid massive spamming of all dust
outputs in the blockchain, and is the current minimum mixin
requirement.
An optional vector of amounts may be passed, to request
histogram only for those outputs.
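A minimal sketch of the histogram computation itself (illustrative only, not the actual daemon code):

#include <cstdint>
#include <iterator>
#include <map>
#include <vector>
// Count how many outputs exist per amount, then keep only amounts whose
// instance count is at least min_count (default 3, as described above).
std::map<uint64_t, uint64_t> output_histogram(const std::vector<uint64_t> &output_amounts,
                                              uint64_t min_count = 3)
{
  std::map<uint64_t, uint64_t> histogram;
  for (uint64_t amount : output_amounts)
    ++histogram[amount];
  for (auto it = histogram.begin(); it != histogram.end(); )
    it = (it->second >= min_count) ? std::next(it) : histogram.erase(it);
  return histogram;
}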
We also replace the --fakechain option with an optional structure
containing details about configuration for the core/blockchain,
for test purposes. This seems more future friendly.
7658ac0 blockchain: revert handle_get_objects adding block id on tx not found (moneromooo-monero)
3a0f4d8 berkeleydb: fix delete/free mismatch (moneromooo-monero)
1642be2 minor bugfixes and refactoring (Thomas Winget)
098dcf2 unit_tests: fix mnemonics unit test testing invalid seeds (moneromooo-monero)
119eb10 unit_tests: fix hard fork unit tests and add a test for major too (moneromooo-monero)
64a2aa3 hardfork: allow passing chain height in get(height) for convenience (moneromooo-monero)
Some word triplets, such as "mugged names nail", are not valid
results from any 32 bit value. If used to decode a 32 bit value,
the result will therefore encode to a different word triplet.
Fix this by using random words converted from an actual random
bitstring, ensuring we always get valid triplets.
Some tests assume the first output in a transaction goes to the recipient.
However, it can be the change. When it is, the recipient's keys will not
recognize this output. To fix this, we send all we have, to ensure there
is no change, and the first output goes to the recipient.
I'm not sure why this worked with Cryptonote. The tests sent 17 coins,
which seems way smaller than the first Bytecoin block reward, so there
would have been change too. Maybe outputs were not shuffled originally.
Block reward may now be less than the full amount allowed.
This was breaking the bitflipping test.
We now keep track of whether a block which was accepted by the core
has a lower than allowed block reward, and allow this in the test.
They were trying to send too much monero, and thus failing.
The parameters were set in such a way that the (simple) output
gathering code could fulfill them for 4 block rewards for the
original Bytecoin emission, but that does not work with monero
so we need to use smaller values.
The current monero consensus uses 0.01 per kB fees, so use enough
for 2 kB transactions for now. It'll probably have to be either
bumped further or changed to calculate the proper fee.
The core tests use the blockchain, and reset it to be able
to add test data to it. This does not play nice with the
databases, since those will save that data without an explicit
save call.
We add a fakechain flag that the tests will set, which tells
the core and blockchain code to use a separate database, as
well as skip a few things like checkpoints and fixup, which
only make sense for real data.
Also add some more tests, and rename some instances of
"version" and "add" for clarity.
NOTE: the starting height values are sometimes wrong.
I suspect this is due to the hard fork reorg code being
buggy, since they're good when syncing after the fact.
However, they're not actually used by the consensus code,
so I'm ignoring this for now, but this needs debugging.
The last relayed time of a transaction is maintained, and
transactions will be relayed again if they are still in the
pool after a certain amount of time, which increases with
the transaction's age. All such transactions are resent,
whether or not they originated on the local node.
baf101e More changes for 2-min blocks Use the correct block time for realtime fuzz on locktime Use the correct block time to calculate next_difficulty on alt chains (will not work as-is with voting) Lock unit tests to original block time for now (Javier Smooth)
4fea1a5 Adjust difficulty target (2 min) and full reward zone (60 kbytes) for block version 2 (Javier Smooth)
Use the correct block time for realtime fuzz on locktime
Use the correct block time to calculate next_difficulty on alt chains (will not work as-is with voting)
Lock unit tests to original block time for now
Using major version would cause older daemons to reject those
blocks as they fail to deserialize blocks with a major version
which is not 1. There is no such restriction on the minor
version, so switching allows older daemons to coexist with
newer ones till the actual fork date, when most will hopefully
have updated already.
Also, for the same reason, we consider a vote for 0 to be a
vote for 1, since older daemons set minor version to 0.
This allows knowing the hard fork a block must obey in order to be
added to the blockchain. The previous semantics would use that new
block's version vote to determine this hard fork, which made it
impossible to use the rules to validate transactions entering the
tx pool (and made it impossible to validate a block before adding
it to the blockchain).
This ensures one can't instantiate a DNSResolver object by
mistake, but uses the singleton. A separate create static
function is added for cases where a new object is explicitly
needed.
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block is needed when a new block arrives (instead of 1470 reads).
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often however, most reorgs seem to take a few seconds but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
At the moment, you need a full resync to use this optimized version.
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD+(with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with memory can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.
Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437)
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from estimated 3 months(???) into 2.5 days (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
Based on tewinget's update.
Make OpenAlias address format independent of existing DNS functions.
Add tests.
Test:
make debug-test
cd build/debug/tests/unit_tests
# test that regular DNS functions work, including IPv4 lookups.
# also test function that converts OpenAlias address format
make && ./unit_tests --gtest_filter=DNSResolver*
# test that OpenAlias addresses like donate@getmonero.org work from
# wallet tools
make && ./unit_tests --gtest_filter=AddressFromURL.Success
DNSSEC is now implemented with the hardcoded key from unbound.
This will need to be not hardcoded in the future, but is okay for now.
Unit tests updated for DNSSEC (as well as for the fact that, contrary to
previous assumption, example.com does not have a static IP address).
51e3579 Fixed bug in static linking boost on MINGW (Thomas Winget)
f78bb00 Hopefully fixes build on Windows for real this time (Thomas Winget)
2b0583b Hopefully fixes build on Windows (Thomas Winget)
9dab105 DNS checkpoint loading for testnet should now be correct (Thomas Winget)
52f9629 sending commands to forked daemon works on testnet now (Thomas Winget)
76289d0 Fix tests building -- function signatures changed (Thomas Winget)
db53e19 revert stop_daemon method to use correct exit (Thomas Winget)
96cbecf RPC calls for background daemon added in (Thomas Winget)
9193d6f Daemonize changes pulled in -- daemon builds (Thomas Winget)
Everything except actually *using* BlockchainBDB is wired up, but the db
itself is not yet working. Some error about user mem not large enough.
I think I know what this error means, but I can't determine the cause.
Notes: BerkeleyDB does not allow 0-indexing in its recno type databases,
so block numbers *in the database* will be 1-indexed. Modifications
to indexing have been made as needed.
+toc -doc -drmonero
Fixed the Windows path, improved logging and data (for graph) logging,
fixed some locks, and added more checks.
There is still a locking error,
not added by my patches, but present in master version
(locking of map/list of peers).
commands and options for network limiting
works very well e.g. for 50 KiB/sec up and down
ToS (QoS) flag
peer number limit
TODO: some spikes in ingress/download
TODO: problems with other up and down limit values
added "otshell utils" - simple logging (with colors, text file channels)