** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)

Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return the result from the cache instead of re-checking whenever possible.
3. Optim: Preload output keys when encountering groups of blocks. Sort by amount and global index before bulk-querying the database, and multi-thread when possible.
4. Optim: Disable the double-spend check during block verification; double spends are already detected when adding blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs??? Reason: TBD).
7. Optim: Removed the looped full-tx hash computation when retrieving transactions from the pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 DB reads are needed when a new block arrives (instead of 1470 reads); see the sketch after this list.
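
The following is a minimal sketch of the idea behind item 8, assuming a simple deque-based rolling window; the class and member names are illustrative and not the actual implementation:

#include <cstdint>
#include <deque>
#include <utility>

// Hypothetical helper: keep the last 735 (timestamp, cumulative difficulty)
// pairs in memory so the next-difficulty window never has to be re-read from
// the database; only the newly added block costs 2 DB reads.
class difficulty_cache
{
public:
    static const size_t WINDOW = 735;

    // call once per accepted block with the two values just written to the DB
    void push(uint64_t timestamp, uint64_t cumulative_difficulty)
    {
        m_window.emplace_back(timestamp, cumulative_difficulty);
        if (m_window.size() > WINDOW)
            m_window.pop_front();           // discard the oldest entry
    }

    // drop the newest entries again when blocks are popped on reorganization
    // (a real cache would refill from the DB if the window became too short)
    void pop(size_t count = 1)
    {
        while (count-- && !m_window.empty())
            m_window.pop_back();
    }

    const std::deque<std::pair<uint64_t, uint64_t>>& window() const { return m_window; }

private:
    std::deque<std::pair<uint64_t, uint64_t>> m_window;
};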

Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failures to send blocks to peers, etc.
2. Fix: Inability to pop blocks on reorganization due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: "Insufficient locks" error when running a full sync.
5. Patch: Incorrect DB stats when returning from an immediate exit of the "pop block" operation.
6. Optim: Added bulk queries to get output global indices.
7. Optim: Modified the output_keys table to store public_key+unlock_time+height so a single lookup is needed (vs 3).
8. Optim: Used the output_keys table to retrieve public keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for the nosync/write_nosync options for improved performance (*see the --db-sync-mode option for details and the sketch after this list).
11. Mod: Added a checkpoint thread and an auto-remove-logs option.
12. *Now usable on 32-bit systems like the RPI2.
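
A minimal sketch of how items 10-11 map onto standard Berkeley DB environment calls; this is illustrative only and not the exact code in db_bdb.cpp (the helper names and the checkpoint interval are assumptions):

#include <db_cxx.h>
#include <atomic>
#include <chrono>
#include <thread>

// Map the --db-sync-mode choice onto Berkeley DB's transaction-durability flags.
void apply_sync_mode(DbEnv& env, bool fastest, bool fast)
{
    if (fastest)
        env.set_flags(DB_TXN_NOSYNC, 1);        // defer both data and metadata flushes
    else if (fast)
        env.set_flags(DB_TXN_WRITE_NOSYNC, 1);  // write metadata, defer the data flush
    // "safe" keeps the default synchronous commit behaviour

    env.log_set_config(DB_LOG_AUTO_REMOVE, 1);  // --db-auto-remove-logs=1
}

// Body of a checkpoint thread: with deferred-sync modes, periodically flush
// dirty pages so the amount of unflushed data cannot grow without bound.
void checkpoint_worker(DbEnv& env, std::atomic<bool>& running)
{
    while (running)
    {
        env.txn_checkpoint(0 /* kbytes */, 0 /* minutes */, 0 /* flags */);
        std::this_thread::sleep_for(std::chrono::seconds(60));  // interval is an assumption
    }
}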

LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up; actual effect TBD).
2. Optim: Modified the output_keys table to store public_key+unlock_time+height so a single lookup is needed (vs 3); see the sketch after this list.
3. Optim: Used the output_keys table to retrieve public keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for the sync/writemap options for improved performance (*see the --db-sync-mode option for details).
5. Mod: Auto-resize by +1GB instead of a 1.5x multiplier.
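
A sketch of the single-lookup change (items 2-3), using the packed output_data_t record added in the diff below; the helper function itself is illustrative and not the actual BlockchainLMDB::get_output_key:

#include <lmdb.h>
#include <cstdint>
#include <cstring>
#include <stdexcept>

#pragma pack(push, 1)
struct output_data_t            // same layout as the struct added to blockchain_db.h
{
    char     pubkey[32];        // crypto::public_key in the real code
    uint64_t unlock_time;
    uint64_t height;
};
#pragma pack(pop)

// One mdb_get on the global output index now returns pubkey, unlock_time and height
// together, instead of chaining output_amounts -> output_txs/output_indices -> txs.
output_data_t get_output_data(MDB_env* env, MDB_dbi output_keys, uint64_t global_index)
{
    MDB_txn* txn = nullptr;
    if (mdb_txn_begin(env, nullptr, MDB_RDONLY, &txn))
        throw std::runtime_error("failed to begin read txn");

    MDB_val k{sizeof(global_index), &global_index};
    MDB_val v;
    if (mdb_get(txn, output_keys, &k, &v) != 0)
    {
        mdb_txn_abort(txn);
        throw std::runtime_error("output not found");
    }

    output_data_t out;
    std::memcpy(&out, v.mv_data, sizeof(out));
    mdb_txn_abort(txn);         // read-only transaction, nothing to commit
    return out;
}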

ETC:
1. Minor optimizations to slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing the next difficulty on large blocks (the general class of bug is illustrated below).
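
For illustration only (this is not the actual code or fix): the failure mode behind item 2 is an intermediate value that no longer fits in 32 bits getting clamped or wrapped, which skews the next-difficulty result.

#include <cstdint>

// Buggy pattern: the cast to 32 bits silently drops the high bits of large cumulative values.
uint64_t scaled_work_bad(uint64_t work, uint32_t target, uint32_t timespan)
{
    uint32_t truncated = static_cast<uint32_t>(work);
    return static_cast<uint64_t>(truncated) * target / timespan;
}

// Fixed pattern: keep the whole computation in (at least) 64 bits.
uint64_t scaled_work_good(uint64_t work, uint32_t target, uint32_t timespan)
{
    return work * target / timespan;   // real code would also guard against 64-bit overflow
}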

[PENDING ISSUES]
1. Berkeley DB has a very slow "pop-block" operation. This is very noticeable on the RPI2, where it sometimes takes > 10 MINUTES to pop a block during reorganization.
   This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs involved. TBD.
2. Berkeley DB: possible "unable to allocate memory" bug. TBD.

[NEW OPTIONS] (*Currently all enabled for testing purposes; an example invocation follows this list)
1. --fast-block-sync arg=[0:1] (default: 1)
	a. 0 = Compute long hash per block (may take a while depending on CPU)
	b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
	a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest option for protecting against power-loss/crash conditions.
	b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices or stable systems with a UPS, and/or systems with a battery-backed write cache / solid-state cache.
	Fast    - Write meta-data but defer data flush.
	Fastest - Defer meta-data and data flush.
	Sync    - Flush data after nblocks_per_sync and wait.
	Async   - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
	Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
	Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
	For Berkeley DB only. Automatically remove logs if enabled.
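
For illustration, assuming the daemon binary on this branch is still named bitmonerod, the new options combine as follows (the values shown are the defaults listed above):

bitmonerod --fast-block-sync=1 \
           --db-sync-mode=fastest:async:1000 \
           --prep-blocks-threads=4 \
           --show-time-stats=1 \
           --db-auto-remove-logs=1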

**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git HEAD version.
	At the moment, you need a full resync to use this optimized version.

[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K + SSD (with full PoW computation):
1. The optimized lmdb/blockchain core can process blocks up to height 585K in ~1.25 hours plus download time, so it usually takes about 2.5 hours to sync the full chain.
2. The current HEAD with the in-memory DB can process blocks up to height 585K in ~4.2 hours plus download time, so it usually takes about 5.5 hours to sync the full chain.
3. The current HEAD with lmdb can process blocks up to height 585K in ~32 hours plus download time and usually takes about 36 hours to sync the full chain.

Average processing times (with full PoW computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
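
As a rough cross-check of the two tables above: 585,000 blocks x 5.87 ms/block is roughly 57 minutes of block-level time, which is consistent with the ~1.25 hour lmdb-optimized figure once per-transaction verification time is added; the same arithmetic with 19.68 ms/block gives about 3.2 hours against the ~4.2 hour in-memory figure.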

**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full PoW computation):
1. Desktop,  Quad-core / 8-threads 2600K  (8MB) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop,   Dual-core / 4-threads U4200  (3MB) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1MB) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).

lmdb-optimized processing times (with per-block-checkpoint):
1. Desktop,  Quad-core / 8-threads 2600K  (8MB) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with full PoW computation):
1. Desktop, Quad-core / 8-threads 2600K  (8MB) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2: improved from an estimated 3 months (???) to 2.5 days (*needs a 2A supply + 1GHz clock + [USB+SSD] to achieve this speed) (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with per-block-checkpoint):
1. RPI2: 12-15 hours (*needs a 2A supply + 1GHz clock + [USB+SSD] to achieve this speed) (--db-sync-mode=fastest:async:1000).
NoodleDoodleNoodleDoodleNoodleDoodleNoo 2015-07-10 13:09:32 -07:00
parent 1f83444d3d
commit e5d2680094
33 changed files with 4052 additions and 2373 deletions

@ -31,7 +31,7 @@ set(blockchain_db_sources
lmdb/db_lmdb.cpp
)
if (NOT STATIC)
if (BERKELEY_DB)
set(blockchain_db_sources
${blockchain_db_sources}
berkeleydb/db_bdb.cpp
@ -46,7 +46,7 @@ set(blockchain_db_private_headers
lmdb/db_lmdb.h
)
if (NOT STATIC)
if (BERKELEY_DB)
set(blockchain_db_private_headers
${blockchain_db_private_headers}
berkeleydb/db_bdb.h

File diff suppressed because it is too large.

@ -30,6 +30,11 @@
#include "blockchain_db/blockchain_db.h"
#include "cryptonote_protocol/blobdatatype.h" // for type blobdata
#include <unordered_map>
// ND: Enables multi-threaded bulk reads for when getting indices.
// TODO: Disabled for now, as it doesn't seem to provide noticeable improvements (??. Reason: TBD.
// #define BDB_BULK_CAN_THREAD
namespace cryptonote
{
@ -83,10 +88,145 @@ struct bdb_txn_safe
{
return &m_txn;
}
private:
DbTxn* m_txn;
};
// ND: Class to handle buffer management when doing bulk queries
// (DB_MULTIPLE). Allocates buffers then handles thread queuing
// so a fixed set of buffers can be used (instead of allocating
// every time a bulk query is needed).
template <typename T>
class bdb_safe_buffer
{
// limit the number of buffers to 8
const size_t MaxAllowedBuffers = 8;
public:
bdb_safe_buffer(size_t num_buffers, size_t count)
{
if(num_buffers > MaxAllowedBuffers)
num_buffers = MaxAllowedBuffers;
set_count(num_buffers);
for (size_t i = 0; i < num_buffers; i++)
m_buffers.push_back((T) malloc(sizeof(T) * count));
m_buffer_count = count;
}
~bdb_safe_buffer()
{
for (size_t i = 0; i < m_buffers.size(); i++)
{
if (m_buffers[i])
{
free(m_buffers[i]);
m_buffers[i] = nullptr;
}
}
m_buffers.resize(0);
}
T acquire_buffer()
{
std::unique_lock<std::mutex> lock(m_lock);
m_cv.wait(lock, [&]{ return m_count > 0; });
--m_count;
size_t index = -1;
for (size_t i = 0; i < m_open_slot.size(); i++)
{
if (m_open_slot[i])
{
m_open_slot[i] = false;
index = i;
break;
}
}
assert(index >= 0);
T buffer = m_buffers[index];
m_buffer_map.emplace(buffer, index);
return buffer;
}
void release_buffer(T buffer)
{
std::unique_lock<std::mutex> lock(m_lock);
assert(buffer != nullptr);
auto it = m_buffer_map.find(buffer);
if (it != m_buffer_map.end())
{
auto index = it->second;
assert(index < m_open_slot.size());
assert(m_open_slot[index] == false);
assert(m_count < m_open_slot.size());
++m_count;
m_open_slot[index] = true;
m_buffer_map.erase(it);
m_cv.notify_one();
}
}
size_t get_buffer_size() const
{
return m_buffer_count * sizeof(T);
}
size_t get_buffer_count() const
{
return m_buffer_count;
}
typedef T type;
private:
void set_count(size_t count)
{
assert(count > 0);
m_open_slot.resize(count, true);
m_count = count;
}
std::vector<T> m_buffers;
std::unordered_map<T, size_t> m_buffer_map;
std::condition_variable m_cv;
std::vector<bool> m_open_slot;
size_t m_count;
std::mutex m_lock;
size_t m_buffer_count;
};
template <typename T>
class bdb_safe_buffer_autolock
{
public:
bdb_safe_buffer_autolock(T &safe_buffer, typename T::type &buffer) :
m_safe_buffer(safe_buffer), m_buffer(nullptr)
{
m_buffer = m_safe_buffer.acquire_buffer();
buffer = m_buffer;
}
~bdb_safe_buffer_autolock()
{
if (m_buffer != nullptr)
{
m_safe_buffer.release_buffer(m_buffer);
m_buffer = nullptr;
}
}
private:
T &m_safe_buffer;
typename T::type m_buffer;
};
class BlockchainBDB : public BlockchainDB
{
public:
@ -159,8 +299,9 @@ public:
virtual uint64_t get_num_outputs(const uint64_t& amount) const;
virtual crypto::public_key get_output_key(const uint64_t& amount, const uint64_t& index) const;
virtual output_data_t get_output_key(const uint64_t& amount, const uint64_t& index);
virtual output_data_t get_output_key(const uint64_t& global_index) const;
virtual void get_output_key(const uint64_t &amount, const std::vector<uint64_t> &offsets, std::vector<output_data_t> &outputs);
virtual tx_out get_output(const crypto::hash& h, const uint64_t& index) const;
/**
@ -175,9 +316,11 @@ public:
tx_out get_output(const uint64_t& index) const;
virtual tx_out_index get_output_tx_and_index_from_global(const uint64_t& index) const;
virtual void get_output_tx_and_index_from_global(const std::vector<uint64_t> &global_indices,
std::vector<tx_out_index> &tx_out_indices) const;
virtual tx_out_index get_output_tx_and_index(const uint64_t& amount, const uint64_t& index) const;
virtual void get_output_tx_and_index(const uint64_t& amount, std::vector<uint64_t> &offsets, std::vector<tx_out_index> &indices) const;
virtual tx_out_index get_output_tx_and_index(const uint64_t& amount, const uint64_t& index);
virtual void get_output_tx_and_index(const uint64_t& amount, const std::vector<uint64_t> &offsets, std::vector<tx_out_index> &indices);
virtual std::vector<uint64_t> get_tx_output_indices(const crypto::hash& h) const;
virtual std::vector<uint64_t> get_tx_amount_output_indices(const crypto::hash& h) const;
@ -198,7 +341,12 @@ public:
virtual void batch_abort();
virtual void pop_block(block& blk, std::vector<transaction>& txs);
virtual bool has_bulk_indices() const { return true; }
#if defined(BDB_BULK_CAN_THREAD)
virtual bool can_thread_bulk_indices() const { return true; }
#else
virtual bool can_thread_bulk_indices() const { return false; }
#endif
private:
virtual void add_block( const block& blk
@ -214,7 +362,7 @@ private:
virtual void remove_transaction_data(const crypto::hash& tx_hash, const transaction& tx);
virtual void add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index);
virtual void add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index, const uint64_t unlock_time);
virtual void remove_output(const tx_out& tx_output);
@ -227,6 +375,7 @@ private:
virtual void remove_spent_key(const crypto::key_image& k_image);
void get_output_global_indices(const uint64_t& amount, const std::vector<uint64_t> &offsets, std::vector<uint64_t> &global_indices);
/**
* @brief convert a tx output to a blob for storage
*
@ -253,10 +402,13 @@ private:
*
* @return the global index of the desired output
*/
uint64_t get_output_global_index(const uint64_t& amount, const uint64_t& index) const;
uint64_t get_output_global_index(const uint64_t& amount, const uint64_t& index);
void checkpoint_worker() const;
void check_open() const;
void *m_buffer;
bool m_run_checkpoint;
std::unique_ptr<boost::thread> m_checkpoint_thread;
typedef bdb_safe_buffer<void *> bdb_safe_buffer_t;
bdb_safe_buffer_t m_buffer;
DbEnv* m_env;

@ -64,7 +64,7 @@ void BlockchainDB::add_transaction(const crypto::hash& blk_hash, const transacti
{
for (uint64_t i = 0; i < tx.vout.size(); ++i)
{
add_output(tx_hash, tx.vout[i], i);
add_output(tx_hash, tx.vout[i], i, tx.unlock_time);
}
for (const txin_v& tx_input : tx.vin)

@ -138,6 +138,15 @@ namespace cryptonote
// typedef for convenience
typedef std::pair<crypto::hash, uint64_t> tx_out_index;
#pragma pack(push, 1)
struct output_data_t
{
crypto::public_key pubkey;
uint64_t unlock_time;
uint64_t height;
};
#pragma pack(pop)
/***********************************
* Exception Definitions
***********************************/
@ -279,7 +288,7 @@ private:
virtual void remove_transaction_data(const crypto::hash& tx_hash, const transaction& tx) = 0;
// tells the subclass to store an output
virtual void add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index) = 0;
virtual void add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index, const uint64_t unlock_time) = 0;
// tells the subclass to remove an output
virtual void remove_output(const tx_out& tx_output) = 0;
@ -313,7 +322,7 @@ protected:
mutable uint64_t time_tx_exists = 0;
uint64_t time_commit1 = 0;
bool m_auto_remove_logs = true;
public:
@ -461,7 +470,8 @@ public:
virtual uint64_t get_num_outputs(const uint64_t& amount) const = 0;
// return public key for output with global output amount <amount> and index <index>
virtual crypto::public_key get_output_key(const uint64_t& amount, const uint64_t& index) const = 0;
virtual output_data_t get_output_key(const uint64_t& amount, const uint64_t& index) = 0;
virtual output_data_t get_output_key(const uint64_t& global_index) const = 0;
// returns the output indexed by <index> in the transaction with hash <h>
virtual tx_out get_output(const crypto::hash& h, const uint64_t& index) const = 0;
@ -471,9 +481,11 @@ public:
// returns the transaction-local reference for the output with <amount> at <index>
// return type is pair of tx hash and index
virtual tx_out_index get_output_tx_and_index(const uint64_t& amount, const uint64_t& index) const = 0;
virtual void get_output_tx_and_index(const uint64_t& amount, std::vector<uint64_t> &offsets, std::vector<tx_out_index> &indices) const = 0;
virtual bool has_bulk_indices() const = 0;
virtual tx_out_index get_output_tx_and_index(const uint64_t& amount, const uint64_t& index) = 0;
virtual void get_output_tx_and_index(const uint64_t& amount, const std::vector<uint64_t> &offsets, std::vector<tx_out_index> &indices) = 0;
virtual void get_output_key(const uint64_t &amount, const std::vector<uint64_t> &offsets, std::vector<output_data_t> &outputs) = 0;
virtual bool can_thread_bulk_indices() const = 0;
// return a vector of indices corresponding to the global output index for
// each output in the transaction with hash <h>
@ -485,7 +497,10 @@ public:
// returns true if key image <img> is present in spent key images storage
virtual bool has_key_image(const crypto::key_image& img) const = 0;
void set_auto_remove_logs(bool auto_remove) { m_auto_remove_logs = auto_remove; }
bool m_open;
mutable epee::critical_section m_synchronization_lock;
}; // class BlockchainDB

@ -65,10 +65,19 @@ struct lmdb_cur
done = false;
}
~lmdb_cur() { close(); }
~lmdb_cur()
{
close();
}
operator MDB_cursor*() { return m_cur; }
operator MDB_cursor**() { return &m_cur; }
operator MDB_cursor*()
{
return m_cur;
}
operator MDB_cursor**()
{
return &m_cur;
}
void close()
{
@ -87,7 +96,8 @@ private:
template<typename T>
struct MDB_val_copy: public MDB_val
{
MDB_val_copy(const T &t): t_copy(t)
MDB_val_copy(const T &t) :
t_copy(t)
{
mv_size = sizeof (T);
mv_data = &t_copy;
@ -99,7 +109,8 @@ private:
template<>
struct MDB_val_copy<cryptonote::blobdata>: public MDB_val
{
MDB_val_copy(const cryptonote::blobdata &bd): data(new char[bd.size()])
MDB_val_copy(const cryptonote::blobdata &bd) :
data(new char[bd.size()])
{
memcpy(data.get(), bd.data(), bd.size());
mv_size = bd.size();
@ -109,7 +120,8 @@ private:
std::unique_ptr<char[]> data;
};
auto compare_uint64 = [](const MDB_val *a, const MDB_val *b) {
auto compare_uint64 = [](const MDB_val *a, const MDB_val *b)
{
const uint64_t va = *(const uint64_t*)a->mv_data;
const uint64_t vb = *(const uint64_t*)b->mv_data;
if (va < vb) return -1;
@ -117,6 +129,20 @@ auto compare_uint64 = [](const MDB_val *a, const MDB_val *b) {
else return 1;
};
int compare_hash32(const MDB_val *a, const MDB_val *b)
{
uint32_t *va = (uint32_t*) a->mv_data;
uint32_t *vb = (uint32_t*) b->mv_data;
for (int n = 7; n >= 0; n--)
{
if (va[n] == vb[n])
continue;
return va[n] < vb[n] ? -1 : 1;
}
return 0;
}
const char* const LMDB_BLOCKS = "blocks";
const char* const LMDB_BLOCK_TIMESTAMPS = "block_timestamps";
const char* const LMDB_BLOCK_HEIGHTS = "block_heights";
@ -235,6 +261,26 @@ void mdb_txn_safe::allow_new_txns()
void BlockchainLMDB::do_resize(uint64_t increase_size)
{
CRITICAL_REGION_LOCAL(m_synchronization_lock);
const uint64_t add_size = 1LL << 30;
// check disk capacity
try
{
boost::filesystem::path path(m_folder);
boost::filesystem::space_info si = boost::filesystem::space(path);
if(si.available < add_size)
{
LOG_PRINT_RED_L0("!! WARNING: Insufficient free space to extend database !!: " << si.available / 1LL << 20L);
return;
}
}
catch(...)
{
// print something but proceed.
LOG_PRINT_YELLOW("Unable to query free disk space.", LOG_LEVEL_0);
}
MDB_envinfo mei;
mdb_env_info(m_env, &mei);
@ -250,6 +296,9 @@ void BlockchainLMDB::do_resize(uint64_t increase_size)
if (increase_size > 0)
new_mapsize = mei.me_mapsize + increase_size;
// add 1Gb per resize, instead of doing a percentage increase
// uint64_t new_mapsize = (double) mei.me_mapsize + add_size;
new_mapsize += (new_mapsize % mst.ms_psize);
mdb_txn_safe::prevent_new_txns();
@ -270,9 +319,7 @@ void BlockchainLMDB::do_resize(uint64_t increase_size)
mdb_env_set_mapsize(m_env, new_mapsize);
LOG_PRINT_L0("LMDB Mapsize increased."
<< " Old: " << mei.me_mapsize / (1024 * 1024) << "MiB"
<< ", New: " << new_mapsize / (1024 * 1024) << "MiB");
LOG_PRINT_GREEN("LMDB Mapsize increased." << " Old: " << mei.me_mapsize / (1024 * 1024) << "MiB" << ", New: " << new_mapsize / (1024 * 1024) << "MiB", LOG_LEVEL_0);
mdb_txn_safe::allow_new_txns();
}
@ -280,6 +327,7 @@ void BlockchainLMDB::do_resize(uint64_t increase_size)
// threshold_size is used for batch transactions
bool BlockchainLMDB::need_resize(uint64_t threshold_size) const
{
#if defined(ENABLE_AUTO_RESIZE)
MDB_envinfo mei;
mdb_env_info(m_env, &mei);
@ -311,12 +359,19 @@ bool BlockchainLMDB::need_resize(uint64_t threshold_size) const
return false;
}
if ((double)size_used / mei.me_mapsize > RESIZE_PERCENT)
std::mt19937 engine(std::random_device{}());
std::uniform_real_distribution<double> fdis(0.6, 0.9);
double resize_percent = fdis(engine);
if ((double)size_used / mei.me_mapsize > resize_percent)
{
LOG_PRINT_L1("Threshold met (percent-based)");
return true;
}
return false;
#else
return false;
#endif
}
void BlockchainLMDB::check_and_resize_for_batch(uint64_t batch_num_blocks)
@ -389,12 +444,8 @@ uint64_t BlockchainLMDB::get_estimated_batch_size(uint64_t batch_num_blocks) con
return threshold_size;
}
void BlockchainLMDB::add_block( const block& blk
, const size_t& block_size
, const difficulty_type& cumulative_difficulty
, const uint64_t& coins_generated
, const crypto::hash& blk_hash
)
void BlockchainLMDB::add_block(const block& blk, const size_t& block_size, const difficulty_type& cumulative_difficulty, const uint64_t& coins_generated,
const crypto::hash& blk_hash)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
@ -545,7 +596,7 @@ void BlockchainLMDB::remove_transaction_data(const crypto::hash& tx_hash, const
}
void BlockchainLMDB::add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index)
void BlockchainLMDB::add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index, const uint64_t unlock_time)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
@ -574,10 +625,15 @@ void BlockchainLMDB::add_output(const crypto::hash& tx_hash, const tx_out& tx_ou
if (tx_output.target.type() == typeid(txout_to_key))
{
MDB_val_copy<crypto::public_key> val_pubkey(boost::get<txout_to_key>(tx_output.target).key);
result = mdb_put(*m_write_txn, m_output_keys, &k, &val_pubkey, 0);
if (result)
throw0(DB_ERROR(std::string("Failed to add output pubkey to db transaction: ").append(mdb_strerror(result)).c_str()));
output_data_t od;
od.pubkey = boost::get < txout_to_key > (tx_output.target).key;
od.unlock_time = unlock_time;
od.height = m_height;
MDB_val_copy<output_data_t> data(od);
//MDB_val_copy<crypto::public_key> val_pubkey(boost::get<txout_to_key>(tx_output.target).key);
if (mdb_put(*m_write_txn, m_output_keys, &k, &data, 0))
throw0(DB_ERROR("Failed to add output pubkey to db transaction"));
}
@ -808,47 +864,17 @@ tx_out BlockchainLMDB::output_from_blob(const blobdata& blob) const
return o;
}
uint64_t BlockchainLMDB::get_output_global_index(const uint64_t& amount, const uint64_t& index) const
uint64_t BlockchainLMDB::get_output_global_index(const uint64_t& amount, const uint64_t& index)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
mdb_txn_safe txn;
if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
throw0(DB_ERROR("Failed to create a transaction for the db"));
lmdb_cur cur(txn, m_output_amounts);
MDB_val_copy<uint64_t> k(amount);
MDB_val v;
auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
if (result == MDB_NOTFOUND)
std::vector <uint64_t> offsets;
std::vector <uint64_t> global_indices;
offsets.push_back(index);
get_output_global_indices(amount, offsets, global_indices);
if (!global_indices.size())
throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but amount not found"));
else if (result)
throw0(DB_ERROR("DB error attempting to get an output"));
size_t num_elems = 0;
mdb_cursor_count(cur, &num_elems);
if (num_elems <= index)
throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but output not found"));
mdb_cursor_get(cur, &k, &v, MDB_FIRST_DUP);
for (uint64_t i = 0; i < index; ++i)
{
mdb_cursor_get(cur, &k, &v, MDB_NEXT_DUP);
}
mdb_cursor_get(cur, &k, &v, MDB_GET_CURRENT);
uint64_t glob_index = *(const uint64_t*)v.mv_data;
cur.close();
txn.commit();
return glob_index;
return global_indices[0];
}
void BlockchainLMDB::check_open() const
@ -903,8 +929,7 @@ void BlockchainLMDB::open(const std::string& filename, const int mdb_flags)
// check for existing LMDB files in base directory
boost::filesystem::path old_files = direc.parent_path();
if (boost::filesystem::exists(old_files / "data.mdb") ||
boost::filesystem::exists(old_files / "lock.mdb"))
if (boost::filesystem::exists(old_files / "data.mdb") || boost::filesystem::exists(old_files / "lock.mdb"))
{
LOG_PRINT_L0("Found existing LMDB files in " << old_files.string());
LOG_PRINT_L0("Move data.mdb and/or lock.mdb to " << filename << ", or delete them, and then restart");
@ -970,7 +995,7 @@ void BlockchainLMDB::open(const std::string& filename, const int mdb_flags)
lmdb_db_open(txn, LMDB_OUTPUT_TXS, MDB_INTEGERKEY | MDB_CREATE, m_output_txs, "Failed to open db handle for m_output_txs");
lmdb_db_open(txn, LMDB_OUTPUT_INDICES, MDB_INTEGERKEY | MDB_CREATE, m_output_indices, "Failed to open db handle for m_output_indices");
lmdb_db_open(txn, LMDB_OUTPUT_AMOUNTS, MDB_INTEGERKEY | MDB_DUPSORT | MDB_CREATE, m_output_amounts, "Failed to open db handle for m_output_amounts");
lmdb_db_open(txn, LMDB_OUTPUT_AMOUNTS, MDB_INTEGERKEY | MDB_DUPSORT | MDB_DUPFIXED | MDB_CREATE, m_output_amounts, "Failed to open db handle for m_output_amounts");
lmdb_db_open(txn, LMDB_OUTPUT_KEYS, MDB_INTEGERKEY | MDB_CREATE, m_output_keys, "Failed to open db handle for m_output_keys");
/*************** not used, but kept for posterity
@ -982,6 +1007,11 @@ void BlockchainLMDB::open(const std::string& filename, const int mdb_flags)
mdb_set_dupsort(txn, m_output_amounts, compare_uint64);
mdb_set_dupsort(txn, m_tx_outputs, compare_uint64);
mdb_set_compare(txn, m_spent_keys, compare_hash32);
mdb_set_compare(txn, m_block_heights, compare_hash32);
mdb_set_compare(txn, m_txs, compare_hash32);
mdb_set_compare(txn, m_tx_unlocks, compare_hash32);
mdb_set_compare(txn, m_tx_heights, compare_hash32);
// get and keep current height
MDB_stat db_stats;
@ -995,6 +1025,34 @@ void BlockchainLMDB::open(const std::string& filename, const int mdb_flags)
throw0(DB_ERROR("Failed to query m_output_indices"));
m_num_outputs = db_stats.ms_entries;
// ND: This "new" version of the lmdb database is incompatible with
// the previous version. Ensure that the output_keys database is
// sizeof(output_data_t) in length. Otherwise, inform user and
// terminate.
if(m_height > 0)
{
MDB_val_copy<uint64_t> k(0);
MDB_val v;
auto get_result = mdb_get(txn, m_output_keys, &k, &v);
if(get_result != MDB_SUCCESS)
{
txn.abort();
m_open = false;
return;
}
// LOG_PRINT_L0("Output keys size: " << v.mv_size);
if(v.mv_size != sizeof(output_data_t))
{
txn.abort();
mdb_env_close(m_env);
m_open = false;
LOG_PRINT_RED_L0("Existing lmdb database is incompatible with this version.");
LOG_PRINT_RED_L0("Please delete the existing database and resync.");
return;
}
}
// commit the transaction
txn.commit();
@ -1604,26 +1662,33 @@ uint64_t BlockchainLMDB::get_num_outputs(const uint64_t& amount) const
return num_elems;
}
crypto::public_key BlockchainLMDB::get_output_key(const uint64_t& amount, const uint64_t& index) const
output_data_t BlockchainLMDB::get_output_key(const uint64_t &global_index) const
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
uint64_t glob_index = get_output_global_index(amount, index);
mdb_txn_safe txn;
if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
throw0(DB_ERROR("Failed to create a transaction for the db"));
MDB_val_copy<uint64_t> k(glob_index);
MDB_val_copy<uint64_t> k(global_index);
MDB_val v;
auto get_result = mdb_get(txn, m_output_keys, &k, &v);
if (get_result == MDB_NOTFOUND)
throw0(DB_ERROR("Attempting to get output pubkey by global index, but key does not exist"));
else if (get_result)
throw0(DB_ERROR("Error attempting to retrieve an output pubkey from the db"));
txn.commit();
return *(output_data_t *) v.mv_data;
}
return *(crypto::public_key*)v.mv_data;
output_data_t BlockchainLMDB::get_output_key(const uint64_t& amount, const uint64_t& index)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
uint64_t glob_index = get_output_global_index(amount, index);
return get_output_key(glob_index);
}
tx_out BlockchainLMDB::get_output(const crypto::hash& h, const uint64_t& index) const
@ -1731,53 +1796,17 @@ tx_out_index BlockchainLMDB::get_output_tx_and_index_from_global(const uint64_t&
return tx_out_index(tx_hash, *(const uint64_t *)v.mv_data);
}
tx_out_index BlockchainLMDB::get_output_tx_and_index(const uint64_t& amount, const uint64_t& index) const
tx_out_index BlockchainLMDB::get_output_tx_and_index(const uint64_t& amount, const uint64_t& index)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
mdb_txn_safe txn;
mdb_txn_safe* txn_ptr = &txn;
if (m_batch_active)
txn_ptr = m_write_txn;
else
{
if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
throw0(DB_ERROR("Failed to create a transaction for the db"));
}
lmdb_cur cur(*txn_ptr, m_output_amounts);
MDB_val_copy<uint64_t> k(amount);
MDB_val v;
auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
if (result == MDB_NOTFOUND)
std::vector < uint64_t > offsets;
std::vector<tx_out_index> indices;
offsets.push_back(index);
get_output_tx_and_index(amount, offsets, indices);
if (!indices.size())
throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but amount not found"));
else if (result)
throw0(DB_ERROR("DB error attempting to get an output"));
size_t num_elems = 0;
mdb_cursor_count(cur, &num_elems);
if (num_elems <= index)
throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but output not found"));
mdb_cursor_get(cur, &k, &v, MDB_FIRST_DUP);
for (uint64_t i = 0; i < index; ++i)
{
mdb_cursor_get(cur, &k, &v, MDB_NEXT_DUP);
}
mdb_cursor_get(cur, &k, &v, MDB_GET_CURRENT);
uint64_t glob_index = *(const uint64_t*)v.mv_data;
cur.close();
if (! m_batch_active)
txn.commit();
return get_output_tx_and_index_from_global(glob_index);
return indices[0];
}
std::vector<uint64_t> BlockchainLMDB::get_tx_output_indices(const crypto::hash& h) const
@ -2016,12 +2045,8 @@ void BlockchainLMDB::set_batch_transactions(bool batch_transactions)
LOG_PRINT_L3("batch transactions " << (m_batch_transactions ? "enabled" : "disabled"));
}
uint64_t BlockchainLMDB::add_block( const block& blk
, const size_t& block_size
, const difficulty_type& cumulative_difficulty
, const uint64_t& coins_generated
, const std::vector<transaction>& txs
)
uint64_t BlockchainLMDB::add_block(const block& blk, const size_t& block_size, const difficulty_type& cumulative_difficulty, const uint64_t& coins_generated,
const std::vector<transaction>& txs)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
@ -2103,4 +2128,213 @@ void BlockchainLMDB::pop_block(block& blk, std::vector<transaction>& txs)
--m_height;
}
void BlockchainLMDB::get_output_tx_and_index_from_global(const std::vector<uint64_t> &global_indices,
std::vector<tx_out_index> &tx_out_indices) const
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
tx_out_indices.clear();
mdb_txn_safe txn;
mdb_txn_safe* txn_ptr = &txn;
if (m_batch_active)
txn_ptr = m_write_txn;
else
{
if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
throw0(DB_ERROR("Failed to create a transaction for the db"));
}
for (const uint64_t &index : global_indices)
{
MDB_val_copy<uint64_t> k(index);
MDB_val v;
auto get_result = mdb_get(*txn_ptr, m_output_txs, &k, &v);
if (get_result == MDB_NOTFOUND)
throw1(OUTPUT_DNE("output with given index not in db"));
else if (get_result)
throw0(DB_ERROR("DB error attempting to fetch output tx hash"));
crypto::hash tx_hash = *(crypto::hash*) v.mv_data;
get_result = mdb_get(*txn_ptr, m_output_indices, &k, &v);
if (get_result == MDB_NOTFOUND)
throw1(OUTPUT_DNE("output with given index not in db"));
else if (get_result)
throw0(DB_ERROR("DB error attempting to fetch output tx index"));
auto result = tx_out_index(tx_hash, *(const uint64_t *) v.mv_data);
tx_out_indices.push_back(result);
}
if (!m_batch_active)
txn.commit();
}
void BlockchainLMDB::get_output_global_indices(const uint64_t& amount, const std::vector<uint64_t> &offsets,
std::vector<uint64_t> &global_indices)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
TIME_MEASURE_START(txx);
check_open();
global_indices.clear();
uint64_t max = 0;
for (const uint64_t &index : offsets)
{
if (index > max)
max = index;
}
mdb_txn_safe txn;
mdb_txn_safe* txn_ptr = &txn;
if(m_batch_active)
txn_ptr = m_write_txn;
else
{
if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
throw0(DB_ERROR("Failed to create a transaction for the db"));
}
lmdb_cur cur(*txn_ptr, m_output_amounts);
MDB_val_copy<uint64_t> k(amount);
MDB_val v;
auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
if (result == MDB_NOTFOUND)
throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but amount not found"));
else if (result)
throw0(DB_ERROR("DB error attempting to get an output"));
size_t num_elems = 0;
mdb_cursor_count(cur, &num_elems);
if (max <= 1 && num_elems <= max)
throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but output not found"));
uint64_t t_dbmul = 0;
uint64_t t_dbscan = 0;
if (max <= 1)
{
for (const uint64_t& index : offsets)
{
mdb_cursor_get(cur, &k, &v, MDB_FIRST_DUP);
for (uint64_t i = 0; i < index; ++i)
{
mdb_cursor_get(cur, &k, &v, MDB_NEXT_DUP);
}
mdb_cursor_get(cur, &k, &v, MDB_GET_CURRENT);
uint64_t glob_index = *(const uint64_t*) v.mv_data;
LOG_PRINT_L3("Amount: " << amount << " M0->v: " << glob_index);
global_indices.push_back(glob_index);
}
}
else
{
uint32_t curcount = 0;
uint32_t blockstart = 0;
for (const uint64_t& index : offsets)
{
if (index >= num_elems)
{
LOG_PRINT_L1("Index: " << index << " Elems: " << num_elems << " partial results found for get_output_tx_and_index");
break;
}
while (index >= curcount)
{
TIME_MEASURE_START(db1);
if (mdb_cursor_get(cur, &k, &v, curcount == 0 ? MDB_GET_MULTIPLE : MDB_NEXT_MULTIPLE) != 0)
{
// allow partial results
result = false;
break;
}
int count = v.mv_size / sizeof(uint64_t);
blockstart = curcount;
curcount += count;
TIME_MEASURE_FINISH(db1);
t_dbmul += db1;
}
LOG_PRINT_L3("Records returned: " << curcount << " Index: " << index);
TIME_MEASURE_START(db2);
uint64_t actual_index = index - blockstart;
uint64_t glob_index = ((const uint64_t*) v.mv_data)[actual_index];
LOG_PRINT_L3("Amount: " << amount << " M1->v: " << glob_index);
global_indices.push_back(glob_index);
TIME_MEASURE_FINISH(db2);
t_dbscan += db2;
}
}
cur.close();
if(!m_batch_active)
txn.commit();
TIME_MEASURE_FINISH(txx);
LOG_PRINT_L3("txx: " << txx << " db1: " << t_dbmul << " db2: " << t_dbscan);
}
void BlockchainLMDB::get_output_key(const uint64_t &amount, const std::vector<uint64_t> &offsets, std::vector<output_data_t> &outputs)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
TIME_MEASURE_START(db3);
check_open();
outputs.clear();
std::vector <uint64_t> global_indices;
get_output_global_indices(amount, offsets, global_indices);
if (global_indices.size() > 0)
{
mdb_txn_safe txn;
if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
throw0(DB_ERROR("Failed to create a transaction for the db"));
for (const uint64_t &index : global_indices)
{
MDB_val_copy<uint64_t> k(index);
MDB_val v;
auto get_result = mdb_get(txn, m_output_keys, &k, &v);
if (get_result != 0)
throw0(DB_ERROR("Attempting to get output pubkey by global index, but key does not exist"));
else if (get_result)
throw0(DB_ERROR("Error attempting to retrieve an output pubkey from the db"));
output_data_t data = *(output_data_t *) v.mv_data;
outputs.push_back(data);
}
txn.commit();
}
TIME_MEASURE_FINISH(db3);
LOG_PRINT_L3("db3: " << db3);
}
void BlockchainLMDB::get_output_tx_and_index(const uint64_t& amount, const std::vector<uint64_t> &offsets, std::vector<tx_out_index> &indices)
{
LOG_PRINT_L3("BlockchainLMDB::" << __func__);
check_open();
indices.clear();
std::vector <uint64_t> global_indices;
get_output_global_indices(amount, offsets, global_indices);
TIME_MEASURE_START(db3);
if(global_indices.size() > 0)
{
get_output_tx_and_index_from_global(global_indices, indices);
}
TIME_MEASURE_FINISH(db3);
LOG_PRINT_L3("db3: " << db3);
}
} // namespace cryptonote

@ -33,6 +33,8 @@
#include <lmdb.h>
#define ENABLE_AUTO_RESIZE
namespace cryptonote
{
@ -159,7 +161,9 @@ public:
virtual uint64_t get_num_outputs(const uint64_t& amount) const;
virtual crypto::public_key get_output_key(const uint64_t& amount, const uint64_t& index) const;
virtual output_data_t get_output_key(const uint64_t& amount, const uint64_t& index);
virtual output_data_t get_output_key(const uint64_t& global_index) const;
virtual void get_output_key(const uint64_t &amount, const std::vector<uint64_t> &offsets, std::vector<output_data_t> &outputs);
virtual tx_out get_output(const crypto::hash& h, const uint64_t& index) const;
@ -175,12 +179,12 @@ public:
tx_out get_output(const uint64_t& index) const;
virtual tx_out_index get_output_tx_and_index_from_global(const uint64_t& index) const;
virtual void get_output_tx_and_index_from_global(const std::vector<uint64_t> &global_indices,
std::vector<tx_out_index> &tx_out_indices) const;
virtual tx_out_index get_output_tx_and_index(const uint64_t& amount, const uint64_t& index) const;
virtual void get_output_tx_and_index(const uint64_t& amount, std::vector<uint64_t> &offsets, std::vector<tx_out_index> &indices) const
{
// do nothing
};
virtual tx_out_index get_output_tx_and_index(const uint64_t& amount, const uint64_t& index);
virtual void get_output_tx_and_index(const uint64_t& amount, const std::vector<uint64_t> &offsets, std::vector<tx_out_index> &indices);
virtual void get_output_global_indices(const uint64_t& amount, const std::vector<uint64_t> &offsets, std::vector<uint64_t> &indices);
virtual std::vector<uint64_t> get_tx_output_indices(const crypto::hash& h) const;
virtual std::vector<uint64_t> get_tx_amount_output_indices(const crypto::hash& h) const;
@ -202,7 +206,7 @@ public:
virtual void pop_block(block& blk, std::vector<transaction>& txs);
virtual bool has_bulk_indices() const { return false; }
virtual bool can_thread_bulk_indices() const { return true; }
private:
void do_resize(uint64_t size_increase=0);
@ -223,7 +227,7 @@ private:
virtual void remove_transaction_data(const crypto::hash& tx_hash, const transaction& tx);
virtual void add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index);
virtual void add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index, const uint64_t unlock_time);
virtual void remove_output(const tx_out& tx_output);
@ -262,7 +266,7 @@ private:
*
* @return the global index of the desired output
*/
uint64_t get_output_global_index(const uint64_t& amount, const uint64_t& index) const;
uint64_t get_output_global_index(const uint64_t& amount, const uint64_t& index);
void check_open() const;
@ -299,9 +303,18 @@ private:
bool m_batch_transactions; // support for batch transactions
bool m_batch_active; // whether batch transaction is in progress
constexpr static uint64_t DEFAULT_MAPSIZE = 1 << 30;
#if defined(__arm__)
// force a value so it can compile with 32-bit ARM
constexpr static uint64_t DEFAULT_MAPSIZE = 1LL << 31;
#else
#if defined(ENABLE_AUTO_RESIZE)
constexpr static uint64_t DEFAULT_MAPSIZE = 1LL << 30;
#else
constexpr static uint64_t DEFAULT_MAPSIZE = 1LL << 33;
#endif
#endif
constexpr static float RESIZE_PERCENT = 0.8f;
constexpr static float RESIZE_FACTOR = 1.5f;
};
} // namespace cryptonote