Merge branch 'dht-testing' into 'main'

Bug fixes

See merge request veilid/veilid!29
John Smith 2023-06-19 19:29:06 +00:00
commit 7d80a14fe1
9 changed files with 108 additions and 11 deletions


@@ -21,7 +21,8 @@ earthly-amd64:
- linux
- amd64
script:
- earthly --ci -P +package-linux-amd64
- echo "disabled for now"
# - earthly --ci -P +package-linux-amd64
#earthly-arm64:
# stage: linux-arm64
@@ -33,4 +34,5 @@ earthly-amd64:
# - linux
# - amd64
# script:
# - earthly --ci -P +package-linux-arm64
# - echo "disabled for now"
# # - earthly --ci -P +package-linux-arm64

Cargo.lock (generated), 2 lines changed

@@ -2576,7 +2576,7 @@ dependencies = [
[[package]]
name = "igd"
version = "0.12.0"
version = "0.12.1"
dependencies = [
"attohttpc",
"bytes 1.4.0",

RELEASING.md (new file), 70 lines added

@@ -0,0 +1,70 @@
# Veilid Release Process
## Introduction
Veilid is a monorepo consisting of several projects, listed under Released Artifacts below
(checked boxes are released as packages).
## Release Mechanism
Releases happen via a CI/CD pipeline. Builds and tests are automatic and must succeed before a release is triggered. A release happens when a successful build pipeline runs off of the `main` branch, followed by the test pipeline and then the package pipeline.
A new tag is calculated for each released artifact in the format `name-v0.1.0`, where `name` is the pipeline name, for example `veilid-server-deb-v0.0.0`. If the version number of the resulting output package artifact has changed from the most recent tag for that artifact, it is published. If publication is successful, the repository is tagged with the new tag. Multiple releases and tags can happen per pipeline run if multiple version numbers are bumped in the same commit.
Tags serve as a historical record of which repo versions were successfully released at which version numbers.
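The per-artifact check is, in effect, a version-to-tag comparison. Below is a minimal Rust sketch of that comparison only, using hypothetical function names (`parse_tag_version`, `needs_release`) rather than the actual pipeline scripts:

```rust
// Illustrative sketch only: compares the built artifact's version against the
// most recent tag for that artifact, as described above. Not pipeline code.

fn parse_tag_version<'a>(tag: &'a str, artifact: &str) -> Option<&'a str> {
    // Tags look like "veilid-server-deb-v0.1.0"; strip "<artifact>-v" to get "0.1.0".
    tag.strip_prefix(artifact)?.strip_prefix("-v")
}

fn needs_release(artifact: &str, built_version: &str, latest_tag: Option<&str>) -> bool {
    match latest_tag.and_then(|t| parse_tag_version(t, artifact)) {
        // Publish only when the built artifact's version differs from the most recent tag.
        Some(tagged_version) => tagged_version != built_version,
        // No tag for this artifact yet: first release.
        None => true,
    }
}

fn main() {
    let artifact = "veilid-server-deb";
    let built_version = "0.1.1";
    let latest_tag = Some("veilid-server-deb-v0.1.0");

    if needs_release(artifact, built_version, latest_tag) {
        // In the real pipeline this is where the package would be published and
        // the repository tagged with the new "<artifact>-v<version>" tag.
        println!("publish {artifact} v{built_version} and tag {artifact}-v{built_version}");
    } else {
        println!("{artifact} is unchanged; skipping release");
    }
}
```

In the actual pipeline this decision, the publish step, and the tagging are performed by CI jobs rather than application code.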
## Reverting Releases
Occasionally a release will happen that needs to be reverted. This is done manually on `crates.io` or the APT repository, or wherever the artifacts end up. Tags are not removed.
## Released Artifacts
### Rust Crates:
- [x] __veilid-tools__ [**Tag**: `veilid-tools-v0.0.0`]
> An assortment of useful components used by the other Veilid crates.
> Released to crates.io when its version number is changed in `Cargo.toml`
- [x] __veilid-core__ [**Tag**: `veilid-core-v0.0.0`]
> The base Rust crate for Veilid's logic
> Released to crates.io when its version number is changed in `Cargo.toml`
- [ ] __veilid-server__
> The Veilid headless node end-user application
> Not released to crates.io as it is an application binary that is either built by hand or installed using a package manager.
> This application does not currently support `cargo install`
- [ ] __veilid-cli__
> A text user interface to talk to veilid-server and operate it manually
> Not released to crates.io as it is an application binary that is either built by hand or installed using a package manager.
> This application does not currently support `cargo install`
- [ ] __veilid-wasm__
> Not released to crates.io as it is not a library that can be linked by other Rust applications
- [ ] __veilid-flutter__
> The Dart-FFI native interface to the Veilid API
> This is currently built by the Flutter plugin `veilid-flutter` and not released.
### Python Packages:
- [x] __veilid-python__ [**Tag**: `veilid-python-v0.0.0`]
> The Veilid API bindings for Python
> Released to PyPI when the version number is changed in `pyproject.toml`
### Flutter Plugins:
- [ ] __veilid-flutter__
> The Flutter plugin for the Veilid API.
> Because this requires a build of a native Rust crate, it is not yet released via https://pub.dev
> TODO: Eventually the rust crate should be bound to
### Operating System Packages:
- [x] __veilid-server__ DEB package [**Tag**: `veilid-server-deb-v0.0.0`]
> The Veilid headless node binary in the following formats:
> * Standalone Debian/Ubuntu DEB file as a 'release file' on the `veilid` GitLab repository
> * Pushed to APT repository at https://packages.veilid.com
- [x] __veilid-server__ RPM package [**Tag**: `veilid-server-rpm-v0.0.0`]
> The Veilid headless node binary in the following formats:
> * Standalone RedHat/CentOS RPM file as a 'release file' on the `veilid` GitLab repository
> * Pushed to Yum repository at https://packages.veilid.com
- [x] __veilid-cli__ DEB package [**Tag**: `veilid-cli-deb-v0.0.0`]
> The Veilid headless node administrator control binary in the following formats:
> * Standalone Debian/Ubuntu DEB file as a 'release file' on the `veilid` GitLab repository
> * Pushed to APT repository at https://packages.veilid.com
- [x] __veilid-cli__ RPM package [**Tag**: `veilid-cli-rpm-v0.0.0`]
> The Veilid headless node administrator control binary in the following formats:
> * Standalone RedHat/CentOS RPM file as a 'release file' on the `veilid` GitLab repository
> * Pushed to Yum repository at https://packages.veilid.com

external/rust-igd (vendored submodule), 2 lines changed

@@ -1 +1 @@
Subproject commit 330c8e2ea33f6b9bd34809bb1c504459920f4fe2
Subproject commit 0db4faa4bd3b7e06fe3d4fcc7115b69790ea607f


@@ -11,6 +11,21 @@ where
pub type FanoutCallReturnType = Result<Option<Vec<PeerInfo>>, RPCError>;
/// Contains the logic for generically searching the Veilid routing table for a set of nodes and applying an
/// RPC operation that eventually converges on a satisfactory result, or times out and returns some
/// unsatisfactory but acceptable result.
///
/// The algorithm starts by creating a 'closest_nodes' working set of the nodes closest to some node id currently in our routing table.
/// It has pluggable callbacks:
/// * 'check_done' - for checking for a termination condition
/// * 'call_routine' - routine to call for each node that performs an operation and may add more nodes to our closest_nodes set
/// The algorithm is parameterized by:
/// * 'node_count' - the number of nodes to keep in the closest_nodes set
/// * 'fanout' - the number of concurrent calls being processed at the same time
/// The algorithm returns early if 'check_done' returns some value, or if an error is found during the process.
/// If the algorithm times out, a Timeout result is returned; however, operations will still have been performed, and a
/// timeout is not necessarily indicative of an algorithmic 'failure', just that no definitive stopping condition was found
/// in the given time.
pub struct FanoutCall<R, F, C, D>
where
R: Unpin,
@@ -68,6 +83,7 @@ where
let mut ctx = self.context.lock();
for nn in new_nodes {
// Make sure the new node isn't already in the list
let mut dup = false;
for cn in &ctx.closest_nodes {
if cn.same_entry(&nn) {
@@ -75,7 +91,12 @@
}
}
if !dup {
ctx.closest_nodes.push(nn.clone());
// Add the new node if we haven't already called it before (only one call per node ever)
if let Some(key) = nn.node_ids().get(self.crypto_kind) {
if !ctx.called_nodes.contains(&key) {
ctx.closest_nodes.push(nn.clone());
}
}
}
}
@@ -145,7 +166,7 @@ where
self.clone().add_new_nodes(new_nodes);
}
Ok(None) => {
// Call failed, remove the node so it isn't included in the output
// Call failed, remove the node so it isn't considered as part of the fanout
self.clone().remove_node(next_node);
}
Err(e) => {
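
As a rough illustration of the algorithm described in the `FanoutCall` doc comment above, here is a minimal, synchronous Rust sketch with simplified stand-in types (plain `u64` node ids, boolean callbacks); the real implementation is asynchronous, distance-sorts `closest_nodes`, tracks called nodes per crypto kind, and runs up to `fanout` calls concurrently:

```rust
// Simplified sketch of the fanout shape only; no timeouts or RPC errors shown.
use std::collections::HashSet;

struct Fanout {
    node_count: usize,          // maximum size of the closest_nodes working set
    closest_nodes: Vec<u64>,    // working set of candidate nodes
    called_nodes: HashSet<u64>, // only one call per node, ever
}

impl Fanout {
    fn add_new_nodes(&mut self, new_nodes: Vec<u64>) {
        for nn in new_nodes {
            // Skip duplicates and nodes we have already called.
            if !self.closest_nodes.contains(&nn) && !self.called_nodes.contains(&nn) {
                self.closest_nodes.push(nn);
            }
        }
        // The real algorithm re-sorts by distance to the target before trimming.
        self.closest_nodes.truncate(self.node_count);
    }

    /// Runs calls one at a time; the real algorithm keeps up to `fanout`
    /// calls in flight concurrently.
    fn run(
        &mut self,
        mut call_routine: impl FnMut(u64) -> Option<Vec<u64>>,
        mut check_done: impl FnMut(&[u64]) -> bool,
    ) -> bool {
        loop {
            // Early termination condition supplied by the caller.
            if check_done(&self.closest_nodes) {
                return true;
            }
            // Pick the next node we have not called yet.
            let Some(next_node) = self
                .closest_nodes
                .iter()
                .copied()
                .find(|n| !self.called_nodes.contains(n))
            else {
                return false; // ran out of nodes without a definitive stop
            };
            self.called_nodes.insert(next_node);
            match call_routine(next_node) {
                // Call succeeded: fold newly discovered nodes into the working set.
                Some(new_nodes) => self.add_new_nodes(new_nodes),
                // Call failed: drop the node so it is no longer part of the fanout.
                None => self.closest_nodes.retain(|n| *n != next_node),
            }
        }
    }
}

fn main() {
    let mut fanout = Fanout {
        node_count: 8,
        closest_nodes: vec![1, 2, 3],
        called_nodes: HashSet::new(),
    };
    // Hypothetical call routine: node 2 reports nodes 7 and 8, node 3 fails.
    let done = fanout.run(
        |n| match n {
            2 => Some(vec![7, 8]),
            3 => None,
            _ => Some(vec![]),
        },
        |closest| closest.contains(&8), // "done" once node 8 is in the working set
    );
    println!("done={done} closest={:?}", fanout.closest_nodes);
}
```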


@@ -71,7 +71,7 @@ impl StorageManager {
)
.await?;
let sva = network_result_value_or_log!(vres => {
// Any other failures, just try the next node
// Any other failures, just try the next node and pretend this one never happened
return Ok(None);
});
@@ -88,8 +88,7 @@ impl StorageManager {
subkey,
value.value_data(),
) {
// Validation failed, ignore this value
// Move to the next node
// Validation failed, ignore this value and pretend we never saw this node
return Ok(None);
}
@@ -104,7 +103,7 @@ impl StorageManager {
} else {
// If the sequence number is older, or an equal sequence number,
// node should have not returned a value here.
// Skip this node's closer list because it is misbehaving
// Skip this node and its closer list because it is misbehaving
return Ok(None);
}
}


@@ -85,7 +85,7 @@ ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.


@@ -0,0 +1,2 @@
[virtualenvs]
in-project = true


@@ -0,0 +1,3 @@
#!/bin/bash
export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring
poetry install