Compare commits

..

69 Commits

Author SHA1 Message Date
Mark Qvist  1bdcf6ad53  Updated license  2025-04-15 20:21:54 +02:00
Mark Qvist  e6021b8fed  Updated license  2025-04-15 20:21:16 +02:00
Mark Qvist  326c0eed8f  Updated version  2025-03-13 19:46:11 +01:00
Mark Qvist  336792c07a  Updated dependencies  2025-03-13 19:45:15 +01:00
Mark Qvist  570d2c6846  Added configuration options to default config file  2025-03-07 11:05:50 +01:00
Mark Qvist  1ef4665073  Cleanup  2025-02-18 20:05:19 +01:00
Mark Qvist  d5540b927f  Added allow_duplicate option to message ingest API  2025-01-31 13:38:56 +01:00
Mark Qvist  a6cf585109  Cleanup  2025-01-30 15:11:26 +01:00
Mark Qvist  c0a8f3be49  Cleanup  2025-01-30 15:04:21 +01:00
Mark Qvist  7b4780cfb7  Automatically clean messages exceeding propagation transfer limit for peer from unhandled message queues  2025-01-30 11:36:11 +01:00
Mark Qvist  b94a712bb6  Automatically clean messages exceeding propagation transfer limit for peer from unhandled message queues  2025-01-30 11:30:45 +01:00
Mark Qvist  f42ccfc4e9  Automatically clean messages exceeding propagation transfer limit for peer from unhandled message queues  2025-01-30 11:23:18 +01:00
Mark Qvist  9eca747757  Updated peer rotation timing to align with distribution queue mapping  2025-01-30 10:46:31 +01:00
Mark Qvist  b7b6753640  Fixed potential division by zero. Fixes #25.  2025-01-30 00:37:50 +01:00
Mark Qvist  40d0b9a5de  Added acceptance rate threshold to peer rotation  2025-01-29 21:21:51 +01:00
Mark Qvist  40fc75f559  Refined peer rotation algorithm  2025-01-29 14:24:09 +01:00
Mark Qvist  f1d060a92e  Added peer rotation  2025-01-29 01:26:36 +01:00
Mark Qvist  e0e901291e  Updated logging  2025-01-27 12:04:16 +01:00
Mark Qvist  886ac69a82  Tear down control link after use  2025-01-27 12:04:05 +01:00
Mark Qvist  e0163e100a  Updated issue template  2025-01-27 10:26:11 +01:00
Mark Qvist  26a10cce8f  Status query return code  2025-01-26 01:13:11 +01:00
Mark Qvist  cec903a4dc  Added status query API function  2025-01-24 14:05:12 +01:00
Mark Qvist  962d9c90d1  Added wanted inbound peers to PN announce data  2025-01-24 13:50:56 +01:00
Mark Qvist  6d2eb4f973  Updated default config  2025-01-24 00:26:47 +01:00
Mark Qvist  a8cc5f41cf  Fixed typo  2025-01-24 00:21:37 +01:00
Mark Qvist  aa57b16cf5  Fixed #23  2025-01-24 00:09:36 +01:00
Mark Qvist  cdea838a6c  Updated status output  2025-01-23 17:43:24 +01:00
Mark Qvist  fb4bf9b0b9  Cleanup  2025-01-23 17:36:30 +01:00
Mark Qvist  a3e3868f92  Changed formatting  2025-01-23 17:09:40 +01:00
Mark Qvist  70186cf8d9  Fixed typo  2025-01-23 17:07:20 +01:00
Mark Qvist  fe59b265c5  Fixed fstrings not working on Python < 3.12  2025-01-23 16:54:12 +01:00
Mark Qvist  a87458d25f  Updated version  2025-01-23 16:28:11 +01:00
Mark Qvist  35dd70c59e  Format status and peers output  2025-01-23 16:27:48 +01:00
Mark Qvist  a198e96064  Include unhandled message count in stats  2025-01-23 16:27:23 +01:00
Mark Qvist  e3be7e0cfd  Persist last sync attempt  2025-01-23 16:27:01 +01:00
Mark Qvist  460645cea2  Added lxmd status getter  2025-01-23 14:15:31 +01:00
Mark Qvist  f683e03891  Added lxmd status getter  2025-01-23 14:15:12 +01:00
Mark Qvist  2c71cea7a0  Added local node stats request handler  2025-01-23 14:13:08 +01:00
Mark Qvist  61b1ecce27  Updated readme  2025-01-22 10:10:57 +01:00
Mark Qvist  68257a441f  Set transfer limit on reverse auto-peer  2025-01-22 09:44:03 +01:00
Mark Qvist  e69da2ed2a  Added static peers and peering limit  2025-01-22 01:37:09 +01:00
Mark Qvist  c2a08ef355  Enqueue and batch process distribution queue mappings  2025-01-21 20:44:11 +01:00
Mark Qvist  1430b1ce90  Enqueue and batch process distribution queue mappings  2025-01-21 20:20:39 +01:00
Mark Qvist  1c9c744107  Memory optimisations  2025-01-21 16:51:25 +01:00
Mark Qvist  bfed126a7c  Memory optimisations  2025-01-21 16:44:24 +01:00
Mark Qvist  44d1d992f8  Updated version  2025-01-21 16:34:00 +01:00
Mark Qvist  7701f326d9  Memory optimisations  2025-01-21 16:33:39 +01:00
Mark Qvist  356cb6412f  Optimise structure overhead  2025-01-21 10:46:59 +01:00
Mark Qvist  7bd3cf986d  Updated versions  2025-01-18 21:39:39 +01:00
Mark Qvist  3948c9a187  Added message reject on too large transfer size  2025-01-18 21:36:08 +01:00
Mark Qvist  d6b1b9c94d  Added ability to cancel stamp generation  2025-01-18 20:11:31 +01:00
Mark Qvist  a676954116  Added ability to cancel outbound messages  2025-01-18 19:13:43 +01:00
Mark Qvist  d97c4f292e  Fixed missing checks for file corruption  2025-01-14 21:32:10 +01:00
Mark Qvist  2d175a331f  Updated dependencies  2025-01-13 15:26:27 +01:00
Mark Qvist  976305b791  Sort waiting peers by sync transfer rate  2025-01-13 14:37:51 +01:00
Mark Qvist  a6a42eff80  Add sync transfer rate to peer stats  2025-01-13 14:35:14 +01:00
Mark Qvist  96dddf1b3a  Added handling of corrupted transient ID cache files  2024-12-23 12:36:53 +01:00
Mark Qvist  c426c93cc5  Updated versions  2024-12-09 22:10:17 +01:00
Mark Qvist  1a43d93da2  Added message renderer field  2024-12-09 18:16:12 +01:00
Mark Qvist  575fbc9ffe  Updated dependencies  2024-11-23 13:20:43 +01:00
Mark Qvist  c21da895b6  Improved duplicate message detection when syncing from multiple different PNs  2024-11-23 13:20:24 +01:00
Mark Qvist  b172c7fcd4  Added PN announce data validation to announce handler  2024-11-23 12:49:01 +01:00
Mark Qvist  61331b58d7  Updated version  2024-11-23 12:47:31 +01:00
Mark Qvist  9ff76c0473  Updated version  2024-10-13 14:01:10 +02:00
Mark Qvist  c9272c9218  Fixed missing byteorder argument in stamp value calculation. Fixes #21.  2024-10-13 13:08:10 +02:00
Mark Qvist  36f0c17c8b  Added RNR_REFS field  2024-10-13 13:05:52 +02:00
Mark Qvist  aa406d1552  Updated version  2024-10-11 23:45:24 +02:00
Mark Qvist  0cb771439f  Fixed incorrect progress values on path waiting  2024-10-11 23:40:27 +02:00
Mark Qvist  672d754238  Updated dependency  2024-10-06 11:13:38 +02:00
14 changed files with 1235 additions and 190 deletions

View File

@ -12,7 +12,7 @@ Before creating a bug report on this issue tracker, you **must** read the [Contr
- The issue tracker is used by developers of this project. **Do not use it to ask general questions, or for support requests**.
- Ideas and feature requests can be made on the [Discussions](https://github.com/markqvist/Reticulum/discussions). **Only** feature requests accepted by maintainers and developers are tracked and included on the issue tracker. **Do not post feature requests here**.
- After reading the [Contribution Guidelines](https://github.com/markqvist/Reticulum/blob/master/Contributing.md), delete this section from your bug report.
- After reading the [Contribution Guidelines](https://github.com/markqvist/Reticulum/blob/master/Contributing.md), **delete this section only** (*"Read the Contribution Guidelines"*) from your bug report, **and fill in all the other sections**.
**Describe the Bug**
A clear and concise description of what the bug is.

LICENSE
View File

@ -1,6 +1,6 @@
MIT License
Reticulum License
Copyright (c) 2020 Mark Qvist / unsigned.io
Copyright (c) 2020-2025 Mark Qvist
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -9,8 +9,16 @@ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
- The Software shall not be used in any kind of system which includes amongst
its functions the ability to purposefully do harm to human beings.
- The Software shall not be used, directly or indirectly, in the creation of
an artificial intelligence, machine learning or language model training
dataset, including but not limited to any use that contributes to the
training or development of such a model or algorithm.
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,

View File

@ -2,8 +2,7 @@ import time
import RNS
import RNS.vendor.umsgpack as msgpack
from .LXMF import APP_NAME, stamp_cost_from_app_data
from .LXMF import APP_NAME, stamp_cost_from_app_data, pn_announce_data_is_valid
from .LXMessage import LXMessage
class LXMFDeliveryAnnounceHandler:
@ -40,23 +39,37 @@ class LXMFPropagationAnnounceHandler:
def received_announce(self, destination_hash, announced_identity, app_data):
try:
if type(app_data) == bytes:
data = msgpack.unpackb(app_data)
if self.lxmrouter.propagation_node and self.lxmrouter.autopeer:
node_timebase = data[1]
propagation_transfer_limit = None
if len(data) >= 3:
try:
propagation_transfer_limit = float(data[2])
except:
propagation_transfer_limit = None
data = msgpack.unpackb(app_data)
if data[0] == True:
if RNS.Transport.hops_to(destination_hash) <= self.lxmrouter.autopeer_maxdepth:
self.lxmrouter.peer(destination_hash, node_timebase, propagation_transfer_limit)
if pn_announce_data_is_valid(data):
node_timebase = data[1]
propagation_transfer_limit = None
wanted_inbound_peers = None
if len(data) >= 4:
# TODO: Rethink, probably not necessary anymore
# try:
# wanted_inbound_peers = int(data[3])
# except:
# wanted_inbound_peers = None
pass
elif data[0] == False:
self.lxmrouter.unpeer(destination_hash, node_timebase)
if len(data) >= 3:
try:
propagation_transfer_limit = float(data[2])
except:
propagation_transfer_limit = None
if destination_hash in self.lxmrouter.static_peers:
self.lxmrouter.peer(destination_hash, node_timebase, propagation_transfer_limit, wanted_inbound_peers)
else:
if data[0] == True:
if RNS.Transport.hops_to(destination_hash) <= self.lxmrouter.autopeer_maxdepth:
self.lxmrouter.peer(destination_hash, node_timebase, propagation_transfer_limit, wanted_inbound_peers)
elif data[0] == False:
self.lxmrouter.unpeer(destination_hash, node_timebase)
except Exception as e:
RNS.log("Error while evaluating propagation node announce, ignoring announce.", RNS.LOG_DEBUG)

View File

@ -18,6 +18,8 @@ FIELD_RESULTS = 0x0A
FIELD_GROUP = 0x0B
FIELD_TICKET = 0x0C
FIELD_EVENT = 0x0D
FIELD_RNR_REFS = 0x0E
FIELD_RENDERER = 0x0F
# For usecases such as including custom data structures,
# embedding or encapsulating other data types or protocols
@ -76,12 +78,25 @@ AM_OPUS_LOSSLESS = 0x19
# determine it itself based on the included data.
AM_CUSTOM = 0xFF
# Message renderer specifications for FIELD_RENDERER.
# The renderer specification is completely optional,
# and only serves as an indication to the receiving
# client on how to render the message contents. It is
# not mandatory to implement, either on sending or
# receiving sides, but is the recommended way to
# signal how to render a message, if non-plaintext
# formatting is used.
RENDERER_PLAIN = 0x00
RENDERER_MICRON = 0x01
RENDERER_MARKDOWN = 0x02
RENDERER_BBCODE = 0x03
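
The renderer field is attached like any other LXMF field on an outgoing message. A minimal sketch, assuming the usual LXMessage constructor and that the new constants are exported at package level (destination and source are delivery destinations set up elsewhere):

    import LXMF

    # Hint to receiving clients that the content is markdown; clients are
    # free to ignore the hint and render the message as plain text.
    message = LXMF.LXMessage(
        destination, source,
        content="**Hello** from LXMF",
        title="Renderer demo",
        fields={LXMF.FIELD_RENDERER: LXMF.RENDERER_MARKDOWN})
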
##########################################################
# The following helper functions makes it easier to #
# handle and operate on LXMF data in client programs #
##########################################################
import RNS
import RNS.vendor.umsgpack as msgpack
def display_name_from_app_data(app_data=None):
if app_data == None:
@ -103,8 +118,8 @@ def display_name_from_app_data(app_data=None):
try:
decoded = dn.decode("utf-8")
return decoded
except:
RNS.log("Could not decode display name in included announce data. The contained exception was: {e}", RNS.LOG_ERROR)
except Exception as e:
RNS.log(f"Could not decode display name in included announce data. The contained exception was: {e}", RNS.LOG_ERROR)
return None
# Original announce format
@ -126,4 +141,25 @@ def stamp_cost_from_app_data(app_data=None):
# Original announce format
else:
return None
return None
def pn_announce_data_is_valid(data):
try:
if type(data) == bytes:
data = msgpack.unpackb(data)
if len(data) < 3:
raise ValueError("Invalid announce data: Insufficient peer data")
else:
if data[0] != True and data[0] != False:
raise ValueError("Invalid announce data: Indeterminate propagation node status")
try:
int(data[1])
except:
raise ValueError("Invalid announce data: Could not decode peer timebase")
except Exception as e:
RNS.log(f"Could not validate propagation node announce data: {e}", RNS.LOG_DEBUG)
return False
return True
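
A short sketch of exercising this validator with the announce layout implied above (propagation flag, node timebase, optional per-transfer limit in kilobytes, optional wanted inbound peer count); the concrete values are illustrative only:

    import time
    import RNS.vendor.umsgpack as msgpack
    from LXMF.LXMF import pn_announce_data_is_valid

    # [enabled, timebase, per-transfer limit (KB), wanted inbound peers]
    announce_data = msgpack.packb([True, int(time.time()), 256.0, 6])

    print(pn_announce_data_is_valid(announce_data))          # -> True
    print(pn_announce_data_is_valid(msgpack.packb([True])))  # -> False, too short
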

View File

@ -4,6 +4,7 @@ import time
import RNS
import RNS.vendor.umsgpack as msgpack
from collections import deque
from .LXMF import APP_NAME
class LXMPeer:
@ -19,6 +20,7 @@ class LXMPeer:
ERROR_NO_IDENTITY = 0xf0
ERROR_NO_ACCESS = 0xf1
ERROR_TIMEOUT = 0xfe
# Maximum amount of time a peer can
# be unreachable before it is removed
@ -38,15 +40,25 @@ class LXMPeer:
@staticmethod
def from_bytes(peer_bytes, router):
dictionary = msgpack.unpackb(peer_bytes)
peer_destination_hash = dictionary["destination_hash"]
peer_peering_timebase = dictionary["peering_timebase"]
peer_alive = dictionary["alive"]
peer_last_heard = dictionary["last_heard"]
peer = LXMPeer(router, peer_destination_hash)
peer.peering_timebase = peer_peering_timebase
peer.alive = peer_alive
peer.last_heard = peer_last_heard
peer = LXMPeer(router, dictionary["destination_hash"])
peer.peering_timebase = dictionary["peering_timebase"]
peer.alive = dictionary["alive"]
peer.last_heard = dictionary["last_heard"]
if "link_establishment_rate" in dictionary:
peer.link_establishment_rate = dictionary["link_establishment_rate"]
else:
peer.link_establishment_rate = 0
if "sync_transfer_rate" in dictionary:
peer.sync_transfer_rate = dictionary["sync_transfer_rate"]
else:
peer.sync_transfer_rate = 0
if "propagation_transfer_limit" in dictionary:
try:
@ -55,15 +67,55 @@ class LXMPeer:
peer.propagation_transfer_limit = None
else:
peer.propagation_transfer_limit = None
if "offered" in dictionary:
peer.offered = dictionary["offered"]
else:
peer.offered = 0
if "outgoing" in dictionary:
peer.outgoing = dictionary["outgoing"]
else:
peer.outgoing = 0
if "incoming" in dictionary:
peer.incoming = dictionary["incoming"]
else:
peer.incoming = 0
if "rx_bytes" in dictionary:
peer.rx_bytes = dictionary["rx_bytes"]
else:
peer.rx_bytes = 0
if "tx_bytes" in dictionary:
peer.tx_bytes = dictionary["tx_bytes"]
else:
peer.tx_bytes = 0
if "last_sync_attempt" in dictionary:
peer.last_sync_attempt = dictionary["last_sync_attempt"]
else:
peer.last_sync_attempt = 0
hm_count = 0
for transient_id in dictionary["handled_ids"]:
if transient_id in router.propagation_entries:
peer.handled_messages[transient_id] = router.propagation_entries[transient_id]
peer.add_handled_message(transient_id)
hm_count += 1
um_count = 0
for transient_id in dictionary["unhandled_ids"]:
if transient_id in router.propagation_entries:
peer.unhandled_messages[transient_id] = router.propagation_entries[transient_id]
peer.add_unhandled_message(transient_id)
um_count += 1
peer._hm_count = hm_count
peer._um_count = um_count
peer._hm_counts_synced = True
peer._um_counts_synced = True
del dictionary
return peer
def to_bytes(self):
@ -73,7 +125,14 @@ class LXMPeer:
dictionary["last_heard"] = self.last_heard
dictionary["destination_hash"] = self.destination_hash
dictionary["link_establishment_rate"] = self.link_establishment_rate
dictionary["sync_transfer_rate"] = self.sync_transfer_rate
dictionary["propagation_transfer_limit"] = self.propagation_transfer_limit
dictionary["last_sync_attempt"] = self.last_sync_attempt
dictionary["offered"] = self.offered
dictionary["outgoing"] = self.outgoing
dictionary["incoming"] = self.incoming
dictionary["rx_bytes"] = self.rx_bytes
dictionary["tx_bytes"] = self.tx_bytes
handled_ids = []
for transient_id in self.handled_messages:
@ -86,7 +145,10 @@ class LXMPeer:
dictionary["handled_ids"] = handled_ids
dictionary["unhandled_ids"] = unhandled_ids
return msgpack.packb(dictionary)
peer_bytes = msgpack.packb(dictionary)
del dictionary
return peer_bytes
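
from_bytes() now falls back to sensible defaults for every statistics field an older peer file may lack, so persisted peer state keeps loading across upgrades. A minimal round-trip sketch, assuming peer is an existing LXMPeer and router its LXMRouter:

    from LXMF.LXMPeer import LXMPeer

    # Serialise peer state and restore it; fields absent from older files
    # (offered, rx_bytes, last_sync_attempt, ...) simply default to 0.
    peer_bytes = peer.to_bytes()
    restored = LXMPeer.from_bytes(peer_bytes, router)
    assert restored.destination_hash == peer.destination_hash
    assert restored.sync_transfer_rate == peer.sync_transfer_rate
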
def __init__(self, router, destination_hash):
self.alive = False
@ -96,13 +158,25 @@ class LXMPeer:
self.sync_backoff = 0
self.peering_timebase = 0
self.link_establishment_rate = 0
self.sync_transfer_rate = 0
self.propagation_transfer_limit = None
self.handled_messages_queue = deque()
self.unhandled_messages_queue = deque()
self.offered = 0 # Messages offered to this peer
self.outgoing = 0 # Messages transferred to this peer
self.incoming = 0 # Messages received from this peer
self.rx_bytes = 0 # Bytes received from this peer
self.tx_bytes = 0 # Bytes sent to this peer
self._hm_count = 0
self._um_count = 0
self._hm_counts_synced = False
self._um_counts_synced = False
self.link = None
self.state = LXMPeer.IDLE
self.unhandled_messages = {}
self.handled_messages = {}
self.last_offer = []
self.router = router
@ -111,6 +185,7 @@ class LXMPeer:
if self.identity != None:
self.destination = RNS.Destination(self.identity, RNS.Destination.OUT, RNS.Destination.SINGLE, APP_NAME, "propagation")
else:
self.destination = None
RNS.log(f"Could not recall identity for LXMF propagation peer {RNS.prettyhexrep(self.destination_hash)}, will retry identity resolution on next sync", RNS.LOG_WARNING)
def sync(self):
@ -164,7 +239,7 @@ class LXMPeer:
for transient_id in purged_ids:
RNS.log("Dropping unhandled message "+RNS.prettyhexrep(transient_id)+" for peer "+RNS.prettyhexrep(self.destination_hash)+" since it no longer exists in the message store.", RNS.LOG_DEBUG)
self.unhandled_messages.pop(transient_id)
self.remove_unhandled_message(transient_id)
unhandled_entries.sort(key=lambda e: e[1], reverse=False)
per_message_overhead = 16 # Really only 2 bytes, but set a bit higher for now
@ -175,14 +250,17 @@ class LXMPeer:
lxm_size = unhandled_entry[2]
next_size = cumulative_size + (lxm_size+per_message_overhead)
if self.propagation_transfer_limit != None and next_size > (self.propagation_transfer_limit*1000):
pass
if lxm_size+per_message_overhead > (self.propagation_transfer_limit*1000):
self.remove_unhandled_message(transient_id)
self.add_handled_message(transient_id)
RNS.log(f"Message {RNS.prettyhexrep(transient_id)} exceeds transfer limit for {self}, considering handled", RNS.LOG_DEBUG)
else:
cumulative_size += (lxm_size+per_message_overhead)
unhandled_ids.append(transient_id)
RNS.log("Sending sync request to peer "+str(self.destination), RNS.LOG_DEBUG)
RNS.log(f"Offering {len(unhandled_ids)} messages to peer {RNS.prettyhexrep(self.destination.hash)}", RNS.LOG_VERBOSE)
self.last_offer = unhandled_ids
self.link.request(LXMPeer.OFFER_REQUEST_PATH, self.last_offer, response_callback=self.offer_response, failed_callback=self.request_failed)
self.link.request(LXMPeer.OFFER_REQUEST_PATH, unhandled_ids, response_callback=self.offer_response, failed_callback=self.request_failed)
self.state = LXMPeer.REQUEST_SENT
else:
@ -210,22 +288,29 @@ class LXMPeer:
if response == LXMPeer.ERROR_NO_IDENTITY:
if self.link != None:
RNS.log("Remote peer indicated that no identification was received, retrying...", RNS.LOG_DEBUG)
RNS.log("Remote peer indicated that no identification was received, retrying...", RNS.LOG_VERBOSE)
self.link.identify()
self.state = LXMPeer.LINK_READY
self.sync()
return
elif response == LXMPeer.ERROR_NO_ACCESS:
RNS.log("Remote indicated that access was denied, breaking peering", RNS.LOG_VERBOSE)
self.router.unpeer(self.destination_hash)
return
elif response == False:
# Peer already has all advertised messages
for transient_id in self.last_offer:
if transient_id in self.unhandled_messages:
self.handled_messages[transient_id] = self.unhandled_messages.pop(transient_id)
self.add_handled_message(transient_id)
self.remove_unhandled_message(transient_id)
elif response == True:
# Peer wants all advertised messages
for transient_id in self.last_offer:
wanted_messages.append(self.unhandled_messages[transient_id])
wanted_messages.append(self.router.propagation_entries[transient_id])
wanted_message_ids.append(transient_id)
else:
@ -234,18 +319,17 @@ class LXMPeer:
# If the peer did not want the message, it has
# already received it from another peer.
if not transient_id in response:
if transient_id in self.unhandled_messages:
self.handled_messages[transient_id] = self.unhandled_messages.pop(transient_id)
self.add_handled_message(transient_id)
self.remove_unhandled_message(transient_id)
for transient_id in response:
wanted_messages.append(self.unhandled_messages[transient_id])
wanted_messages.append(self.router.propagation_entries[transient_id])
wanted_message_ids.append(transient_id)
if len(wanted_messages) > 0:
RNS.log("Peer wanted "+str(len(wanted_messages))+" of the available messages", RNS.LOG_DEBUG)
RNS.log("Peer wanted "+str(len(wanted_messages))+" of the available messages", RNS.LOG_VERBOSE)
lxm_list = []
for message_entry in wanted_messages:
file_path = message_entry[1]
if os.path.isfile(file_path):
@ -257,10 +341,12 @@ class LXMPeer:
data = msgpack.packb([time.time(), lxm_list])
resource = RNS.Resource(data, self.link, callback = self.resource_concluded)
resource.transferred_messages = wanted_message_ids
resource.sync_transfer_started = time.time()
self.state = LXMPeer.RESOURCE_TRANSFERRING
else:
RNS.log("Peer "+RNS.prettyhexrep(self.destination_hash)+" did not request any of the available messages, sync completed", RNS.LOG_DEBUG)
RNS.log("Peer "+RNS.prettyhexrep(self.destination_hash)+" did not request any of the available messages, sync completed", RNS.LOG_VERBOSE)
self.offered += len(self.last_offer)
if self.link != None:
self.link.teardown()
@ -280,8 +366,8 @@ class LXMPeer:
def resource_concluded(self, resource):
if resource.status == RNS.Resource.COMPLETE:
for transient_id in resource.transferred_messages:
message = self.unhandled_messages.pop(transient_id)
self.handled_messages[transient_id] = message
self.add_handled_message(transient_id)
self.remove_unhandled_message(transient_id)
if self.link != None:
self.link.teardown()
@ -289,12 +375,20 @@ class LXMPeer:
self.link = None
self.state = LXMPeer.IDLE
RNS.log("Sync to peer "+RNS.prettyhexrep(self.destination_hash)+" completed", RNS.LOG_DEBUG)
rate_str = ""
if hasattr(resource, "sync_transfer_started") and resource.sync_transfer_started:
self.sync_transfer_rate = (resource.get_transfer_size()*8)/(time.time()-resource.sync_transfer_started)
rate_str = f" at {RNS.prettyspeed(self.sync_transfer_rate)}"
RNS.log(f"Syncing {len(resource.transferred_messages)} messages to peer {RNS.prettyhexrep(self.destination_hash)} completed{rate_str}", RNS.LOG_VERBOSE)
self.alive = True
self.last_heard = time.time()
self.offered += len(self.last_offer)
self.outgoing += len(resource.transferred_messages)
self.tx_bytes += resource.get_data_size()
else:
RNS.log("Resource transfer for LXMF peer sync failed to "+str(self.destination), RNS.LOG_DEBUG)
RNS.log("Resource transfer for LXMF peer sync failed to "+str(self.destination), RNS.LOG_VERBOSE)
if self.link != None:
self.link.teardown()
@ -315,9 +409,103 @@ class LXMPeer:
self.link = None
self.state = LXMPeer.IDLE
def handle_message(self, transient_id):
if not transient_id in self.handled_messages and not transient_id in self.unhandled_messages:
self.unhandled_messages[transient_id] = self.router.propagation_entries[transient_id]
def queued_items(self):
return len(self.handled_messages_queue) > 0 or len(self.unhandled_messages_queue) > 0
def queue_unhandled_message(self, transient_id):
self.unhandled_messages_queue.append(transient_id)
def queue_handled_message(self, transient_id):
self.handled_messages_queue.append(transient_id)
def process_queues(self):
if len(self.unhandled_messages_queue) > 0 or len(self.handled_messages_queue) > 0:
# TODO: Remove debug
# st = time.time(); lu = len(self.unhandled_messages_queue); lh = len(self.handled_messages_queue)
handled_messages = self.handled_messages
unhandled_messages = self.unhandled_messages
while len(self.handled_messages_queue) > 0:
transient_id = self.handled_messages_queue.pop()
if not transient_id in handled_messages:
self.add_handled_message(transient_id)
if transient_id in unhandled_messages:
self.remove_unhandled_message(transient_id)
while len(self.unhandled_messages_queue) > 0:
transient_id = self.unhandled_messages_queue.pop()
if not transient_id in handled_messages and not transient_id in unhandled_messages:
self.add_unhandled_message(transient_id)
del handled_messages, unhandled_messages
# TODO: Remove debug
# RNS.log(f"{self} processed {lh}/{lu} in {RNS.prettytime(time.time()-st)}")
@property
def handled_messages(self):
pes = self.router.propagation_entries.copy()
hm = list(filter(lambda tid: self.destination_hash in pes[tid][4], pes))
self._hm_count = len(hm); del pes
self._hm_counts_synced = True
return hm
@property
def unhandled_messages(self):
pes = self.router.propagation_entries.copy()
um = list(filter(lambda tid: self.destination_hash in pes[tid][5], pes))
self._um_count = len(um); del pes
self._um_counts_synced = True
return um
@property
def handled_message_count(self):
if not self._hm_counts_synced:
self._update_counts()
return self._hm_count
@property
def unhandled_message_count(self):
if not self._um_counts_synced:
self._update_counts()
return self._um_count
@property
def acceptance_rate(self):
return 0 if self.offered == 0 else (self.outgoing/self.offered)
def _update_counts(self):
if not self._hm_counts_synced:
hm = self.handled_messages; del hm
if not self._um_counts_synced:
um = self.unhandled_messages; del um
def add_handled_message(self, transient_id):
if transient_id in self.router.propagation_entries:
if not self.destination_hash in self.router.propagation_entries[transient_id][4]:
self.router.propagation_entries[transient_id][4].append(self.destination_hash)
self._hm_counts_synced = False
def add_unhandled_message(self, transient_id):
if transient_id in self.router.propagation_entries:
if not self.destination_hash in self.router.propagation_entries[transient_id][5]:
self.router.propagation_entries[transient_id][5].append(self.destination_hash)
self._um_count += 1
def remove_handled_message(self, transient_id):
if transient_id in self.router.propagation_entries:
if self.destination_hash in self.router.propagation_entries[transient_id][4]:
self.router.propagation_entries[transient_id][4].remove(self.destination_hash)
self._hm_counts_synced = False
def remove_unhandled_message(self, transient_id):
if transient_id in self.router.propagation_entries:
if self.destination_hash in self.router.propagation_entries[transient_id][5]:
self.router.propagation_entries[transient_id][5].remove(self.destination_hash)
self._um_counts_synced = False
def __str__(self):
if self.destination_hash:
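
The new bookkeeping no longer keeps per-peer message dictionaries; it piggybacks on the router's propagation_entries table, where, judging from the indices used above, element 4 of each entry lists peers that have handled a message and element 5 lists peers it is still queued for. A toy sketch of that relationship; peer (an LXMPeer) and router (its LXMRouter) are assumed to exist, and the entry layout is illustrative except for indices 4 and 5:

    import time

    transient_id = bytes(16)                    # placeholder transient ID
    router.propagation_entries[transient_id] = [
        bytes(16),                              # 0: placeholder metadata
        "./messagestore/placeholder",           # 1: file path in message store
        time.time(),                            # 2: placeholder timestamp
        1024,                                   # 3: placeholder stored size
        [],                                     # 4: peers that handled it
        [],                                     # 5: peers it is still queued for
    ]

    peer.queue_unhandled_message(transient_id)  # cheap append to a deque
    peer.process_queues()                       # batch-applies the queued mapping
    print(peer.unhandled_message_count)         # -> 1
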

File diff suppressed because it is too large

View File

@ -16,8 +16,10 @@ class LXMessage:
SENDING = 0x02
SENT = 0x04
DELIVERED = 0x08
REJECTED = 0xFD
CANCELLED = 0xFE
FAILED = 0xFF
states = [GENERATING, OUTBOUND, SENDING, SENT, DELIVERED, FAILED]
states = [GENERATING, OUTBOUND, SENDING, SENT, DELIVERED, REJECTED, CANCELLED, FAILED]
UNKNOWN = 0x00
PACKET = 0x01
@ -378,7 +380,7 @@ class LXMessage:
if self.desired_method == LXMessage.OPPORTUNISTIC:
if self.__destination.type == RNS.Destination.SINGLE:
if content_size > LXMessage.ENCRYPTED_PACKET_MAX_CONTENT:
RNS.log(f"Opportunistic delivery was requested for {self}, but content exceeds packet size limit. Falling back to link-based delivery.", RNS.LOG_DEBUG)
RNS.log(f"Opportunistic delivery was requested for {self}, but content of length {content_size} exceeds packet size limit. Falling back to link-based delivery.", RNS.LOG_DEBUG)
self.desired_method = LXMessage.DIRECT
# Set delivery parameters according to delivery method
@ -564,22 +566,27 @@ class LXMessage:
if resource.status == RNS.Resource.COMPLETE:
self.__mark_delivered()
else:
resource.link.teardown()
self.state = LXMessage.OUTBOUND
if resource.status == RNS.Resource.REJECTED:
self.state = LXMessage.REJECTED
elif self.state != LXMessage.CANCELLED:
resource.link.teardown()
self.state = LXMessage.OUTBOUND
def __propagation_resource_concluded(self, resource):
if resource.status == RNS.Resource.COMPLETE:
self.__mark_propagated()
else:
resource.link.teardown()
self.state = LXMessage.OUTBOUND
if self.state != LXMessage.CANCELLED:
resource.link.teardown()
self.state = LXMessage.OUTBOUND
def __link_packet_timed_out(self, packet_receipt):
if packet_receipt:
packet_receipt.destination.teardown()
self.state = LXMessage.OUTBOUND
if self.state != LXMessage.CANCELLED:
if packet_receipt:
packet_receipt.destination.teardown()
self.state = LXMessage.OUTBOUND
def __update_transfer_progress(self, resource):
self.progress = 0.10 + (resource.get_progress()*0.90)
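
Client code can now tell rejection and cancellation apart from plain failure by inspecting the message state after dispatch. A rough sketch, assuming a router and an outbound message created in the usual way:

    import time
    import LXMF

    # REJECTED and CANCELLED join DELIVERED and FAILED as terminal states.
    TERMINAL = (LXMF.LXMessage.DELIVERED, LXMF.LXMessage.REJECTED,
                LXMF.LXMessage.CANCELLED, LXMF.LXMessage.FAILED)

    router.handle_outbound(message)
    while message.state not in TERMINAL:
        time.sleep(0.25)

    if message.state == LXMF.LXMessage.REJECTED:
        print("Transfer was rejected by the receiving side")
    elif message.state == LXMF.LXMessage.CANCELLED:
        print("Transfer was cancelled locally")
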

View File

@ -7,6 +7,8 @@ import multiprocessing
WORKBLOCK_EXPAND_ROUNDS = 3000
active_jobs = {}
def stamp_workblock(message_id):
wb_st = time.time()
expand_rounds = WORKBLOCK_EXPAND_ROUNDS
@ -27,7 +29,7 @@ def stamp_value(workblock, stamp):
value = 0
bits = 256
material = RNS.Identity.full_hash(workblock+stamp)
i = int.from_bytes(material)
i = int.from_bytes(material, byteorder="big")
while ((i & (1 << (bits - 1))) == 0):
i = (i << 1)
value += 1
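
stamp_value() counts the leading zero bits of the 256-bit hash of workblock+stamp, which is why the explicit big-endian byteorder matters. The same counting logic in isolation, with a fabricated digest whose value is 8:

    def leading_zero_bits(material: bytes, bits: int = 256) -> int:
        # Mirror of the loop above: shift left until the top bit is set.
        value = 0
        i = int.from_bytes(material, byteorder="big")
        while (i & (1 << (bits - 1))) == 0:
            i <<= 1
            value += 1
        return value

    digest = bytes([0x00, 0x80]) + bytes(30)   # 0x0080... -> 8 leading zero bits
    print(leading_zero_bits(digest))           # -> 8
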
@ -44,23 +46,56 @@ def generate_stamp(message_id, stamp_cost):
value = 0
if RNS.vendor.platformutils.is_windows() or RNS.vendor.platformutils.is_darwin():
stamp, rounds = job_simple(stamp_cost, workblock)
stamp, rounds = job_simple(stamp_cost, workblock, message_id)
elif RNS.vendor.platformutils.is_android():
stamp, rounds = job_android(stamp_cost, workblock)
stamp, rounds = job_android(stamp_cost, workblock, message_id)
else:
stamp, rounds = job_linux(stamp_cost, workblock)
stamp, rounds = job_linux(stamp_cost, workblock, message_id)
duration = time.time() - start_time
speed = rounds/duration
value = stamp_value(workblock, stamp)
if stamp != None:
value = stamp_value(workblock, stamp)
RNS.log(f"Stamp with value {value} generated in {RNS.prettytime(duration)}, {rounds} rounds, {int(speed)} rounds per second", RNS.LOG_DEBUG)
return stamp, value
def job_simple(stamp_cost, workblock):
def cancel_work(message_id):
if RNS.vendor.platformutils.is_windows() or RNS.vendor.platformutils.is_darwin():
try:
if message_id in active_jobs:
active_jobs[message_id] = True
except Exception as e:
RNS.log(f"Error while terminating stamp generation workers: {e}", RNS.LOG_ERROR)
RNS.trace_exception(e)
elif RNS.vendor.platformutils.is_android():
try:
if message_id in active_jobs:
active_jobs[message_id] = True
except Exception as e:
RNS.log(f"Error while terminating stamp generation workers: {e}", RNS.LOG_ERROR)
RNS.trace_exception(e)
else:
try:
if message_id in active_jobs:
stop_event = active_jobs[message_id][0]
result_queue = active_jobs[message_id][1]
stop_event.set()
result_queue.put(None)
active_jobs.pop(message_id)
except Exception as e:
RNS.log(f"Error while terminating stamp generation workers: {e}", RNS.LOG_ERROR)
RNS.trace_exception(e)
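
Cancellation is cooperative: the per-message entry in active_jobs is flipped to True (single-process and Android paths) or used to signal the worker pool's stop event (Linux path), and the generation loops check it between rounds. A rough sketch of cancelling a long-running job from another thread; the LXMF.LXStamper module name is assumed, since the file name is not visible in this excerpt:

    import threading
    import time
    from LXMF import LXStamper   # module name assumed

    message_id = bytes(32)       # placeholder message ID
    stamp_cost = 18              # illustrative stamp cost

    # Generate the stamp in the background, then give up after ten seconds.
    worker = threading.Thread(target=LXStamper.generate_stamp,
                              args=(message_id, stamp_cost), daemon=True)
    worker.start()

    time.sleep(10)
    LXStamper.cancel_work(message_id)
    worker.join()
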
def job_simple(stamp_cost, workblock, message_id):
# A simple, single-process stamp generator.
# should work on any platform, and is used
# as a fall-back, in case of limited multi-
@ -73,6 +108,8 @@ def job_simple(stamp_cost, workblock):
pstamp = os.urandom(256//8)
st = time.time()
active_jobs[message_id] = False;
def sv(s, c, w):
target = 0b1<<256-c; m = w+s
result = RNS.Identity.full_hash(m)
@ -81,15 +118,20 @@ def job_simple(stamp_cost, workblock):
else:
return True
while not sv(pstamp, stamp_cost, workblock):
while not sv(pstamp, stamp_cost, workblock) and not active_jobs[message_id]:
pstamp = os.urandom(256//8); rounds += 1
if rounds % 2500 == 0:
speed = rounds / (time.time()-st)
RNS.log(f"Stamp generation running. {rounds} rounds completed so far, {int(speed)} rounds per second", RNS.LOG_DEBUG)
if active_jobs[message_id] == True:
pstamp = None
active_jobs.pop(message_id)
return pstamp, rounds
def job_linux(stamp_cost, workblock):
def job_linux(stamp_cost, workblock, message_id):
allow_kill = True
stamp = None
total_rounds = 0
@ -126,6 +168,8 @@ def job_linux(stamp_cost, workblock):
job_procs.append(process)
process.start()
active_jobs[message_id] = [stop_event, result_queue]
stamp = result_queue.get()
RNS.log("Got stamp result from worker", RNS.LOG_DEBUG) # TODO: Remove
@ -170,7 +214,7 @@ def job_linux(stamp_cost, workblock):
return stamp, total_rounds
def job_android(stamp_cost, workblock):
def job_android(stamp_cost, workblock, message_id):
# Semaphore support is flaky to non-existent on
# Android, so we need to manually dispatch and
# manage workloads here, while periodically
@ -230,10 +274,12 @@ def job_android(stamp_cost, workblock):
RNS.log(f"Stamp generation worker error: {e}", RNS.LOG_ERROR)
RNS.trace_exception(e)
active_jobs[message_id] = False;
RNS.log(f"Dispatching {jobs} workers for stamp generation...", RNS.LOG_DEBUG) # TODO: Remove
results_dict = wm.dict()
while stamp == None:
while stamp == None and active_jobs[message_id] == False:
job_procs = []
try:
for pnum in range(jobs):
@ -260,6 +306,8 @@ def job_android(stamp_cost, workblock):
RNS.log(f"Stamp generation job error: {e}")
RNS.trace_exception(e)
active_jobs.pop(message_id)
return stamp, total_rounds
if __name__ == "__main__":

View File

@ -35,6 +35,7 @@ import time
import os
from LXMF._version import __version__
from LXMF import APP_NAME
from RNS.vendor.configobj import ConfigObj
@ -126,7 +127,7 @@ def apply_config():
if active_configuration["message_storage_limit"] < 0.005:
active_configuration["message_storage_limit"] = 0.005
else:
active_configuration["message_storage_limit"] = 2000
active_configuration["message_storage_limit"] = 500
if "propagation" in lxmd_config and "propagation_transfer_max_accepted_size" in lxmd_config["propagation"]:
active_configuration["propagation_transfer_max_accepted_size"] = lxmd_config["propagation"].as_float("propagation_transfer_max_accepted_size")
@ -140,6 +141,24 @@ def apply_config():
else:
active_configuration["prioritised_lxmf_destinations"] = []
if "propagation" in lxmd_config and "static_peers" in lxmd_config["propagation"]:
static_peers = lxmd_config["propagation"].as_list("static_peers")
active_configuration["static_peers"] = []
for static_peer in static_peers:
active_configuration["static_peers"].append(bytes.fromhex(static_peer))
else:
active_configuration["static_peers"] = []
if "propagation" in lxmd_config and "max_peers" in lxmd_config["propagation"]:
active_configuration["max_peers"] = lxmd_config["propagation"].as_int("max_peers")
else:
active_configuration["max_peers"] = None
if "propagation" in lxmd_config and "from_static_only" in lxmd_config["propagation"]:
active_configuration["from_static_only"] = lxmd_config["propagation"].as_bool("from_static_only")
else:
active_configuration["from_static_only"] = False
# Load various settings
if "logging" in lxmd_config and "loglevel" in lxmd_config["logging"]:
targetloglevel = lxmd_config["logging"].as_int("loglevel")
@ -305,7 +324,10 @@ def program_setup(configdir = None, rnsconfigdir = None, run_pn = False, on_inbo
autopeer_maxdepth = active_configuration["autopeer_maxdepth"],
propagation_limit = active_configuration["propagation_transfer_max_accepted_size"],
delivery_limit = active_configuration["delivery_transfer_max_accepted_size"],
)
max_peers = active_configuration["max_peers"],
static_peers = active_configuration["static_peers"],
from_static_only = active_configuration["from_static_only"])
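
The new peering controls are plain constructor keywords, so a propagation node can also be configured programmatically rather than through the lxmd config file. A hedged sketch; only the keywords shown in this diff are taken from it, while the identity, storage path and peer values are illustrative:

    import RNS
    import LXMF

    reticulum = RNS.Reticulum()
    router_identity = RNS.Identity()

    # static_peers takes raw destination hashes, max_peers caps automatically
    # discovered peers, and from_static_only restricts inbound propagation
    # traffic to the static set.
    message_router = LXMF.LXMRouter(
        identity=router_identity,
        storagepath="./lxmf-router",     # illustrative path
        autopeer=True,
        max_peers=25,
        static_peers=[bytes.fromhex("e17f833c4ddf8890dd3a79a6fea8161d")],
        from_static_only=False)

    message_router.enable_propagation()  # assuming the usual propagation toggle
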
message_router.register_delivery_callback(lxmf_delivery)
for destination_hash in active_configuration["ignored_lxmf_destinations"]:
@ -362,13 +384,13 @@ def jobs():
try:
if "peer_announce_interval" in active_configuration and active_configuration["peer_announce_interval"] != None:
if time.time() > last_peer_announce + active_configuration["peer_announce_interval"]:
RNS.log("Sending announce for LXMF delivery destination", RNS.LOG_EXTREME)
RNS.log("Sending announce for LXMF delivery destination", RNS.LOG_VERBOSE)
message_router.announce(lxmf_destination.hash)
last_peer_announce = time.time()
if "node_announce_interval" in active_configuration and active_configuration["node_announce_interval"] != None:
if time.time() > last_node_announce + active_configuration["node_announce_interval"]:
RNS.log("Sending announce for LXMF Propagation Node", RNS.LOG_EXTREME)
RNS.log("Sending announce for LXMF Propagation Node", RNS.LOG_VERBOSE)
message_router.announce_propagation_node()
last_node_announce = time.time()
@ -381,7 +403,7 @@ def deferred_start_jobs():
global active_configuration, last_peer_announce, last_node_announce
global message_router, lxmf_destination
time.sleep(DEFFERED_JOBS_DELAY)
RNS.log("Running deferred start jobs")
RNS.log("Running deferred start jobs", RNS.LOG_DEBUG)
if active_configuration["peer_announce_at_start"]:
RNS.log("Sending announce for LXMF delivery destination", RNS.LOG_EXTREME)
message_router.announce(lxmf_destination.hash)
@ -394,6 +416,190 @@ def deferred_start_jobs():
last_node_announce = time.time()
threading.Thread(target=jobs, daemon=True).start()
def query_status(identity, timeout=5, exit_on_fail=False):
control_destination = RNS.Destination(identity, RNS.Destination.OUT, RNS.Destination.SINGLE, APP_NAME, "propagation", "control")
timeout = time.time()+timeout
def check_timeout():
if time.time() > timeout:
if exit_on_fail:
RNS.log("Getting lxmd statistics timed out, exiting now", RNS.LOG_ERROR)
exit(200)
else:
return LXMF.LXMPeer.LXMPeer.ERROR_TIMEOUT
else:
time.sleep(0.1)
if not RNS.Transport.has_path(control_destination.hash):
RNS.Transport.request_path(control_destination.hash)
while not RNS.Transport.has_path(control_destination.hash):
tc = check_timeout()
if tc:
return tc
link = RNS.Link(control_destination)
while not link.status == RNS.Link.ACTIVE:
tc = check_timeout()
if tc:
return tc
link.identify(identity)
request_receipt = link.request(LXMF.LXMRouter.STATS_GET_PATH, data=None, response_callback=None, failed_callback=None)
while not request_receipt.get_status() == RNS.RequestReceipt.READY:
tc = check_timeout()
if tc:
return tc
link.teardown()
return request_receipt.get_response()
def get_status(configdir = None, rnsconfigdir = None, verbosity = 0, quietness = 0, timeout=5, show_status=False, show_peers=False, identity_path=None):
global configpath, identitypath, storagedir, lxmdir
global lxmd_config, active_configuration, targetloglevel
targetlogdest = RNS.LOG_STDOUT
if identity_path == None:
if configdir == None:
if os.path.isdir("/etc/lxmd") and os.path.isfile("/etc/lxmd/config"):
configdir = "/etc/lxmd"
elif os.path.isdir(RNS.Reticulum.userdir+"/.config/lxmd") and os.path.isfile(RNS.Reticulum.userdir+"/.config/lxmd/config"):
configdir = RNS.Reticulum.userdir+"/.config/lxmd"
else:
configdir = RNS.Reticulum.userdir+"/.lxmd"
configpath = configdir+"/config"
identitypath = configdir+"/identity"
identity = None
if not os.path.isdir(configdir):
RNS.log("Specified configuration directory does not exist, exiting now", RNS.LOG_ERROR)
exit(201)
if not os.path.isfile(identitypath):
RNS.log("Identity file not found in specified configuration directory, exiting now", RNS.LOG_ERROR)
exit(202)
else:
identity = RNS.Identity.from_file(identitypath)
if identity == None:
RNS.log("Could not load the Primary Identity from "+identitypath, RNS.LOG_ERROR)
exit(4)
else:
if not os.path.isfile(identity_path):
RNS.log("Identity file not found in specified configuration directory, exiting now", RNS.LOG_ERROR)
exit(202)
else:
identity = RNS.Identity.from_file(identity_path)
if identity == None:
RNS.log("Could not load the Primary Identity from "+identity_path, RNS.LOG_ERROR)
exit(4)
if targetloglevel == None:
targetloglevel = 3
if verbosity != 0 or quietness != 0:
targetloglevel = targetloglevel+verbosity-quietness
reticulum = RNS.Reticulum(configdir=rnsconfigdir, loglevel=targetloglevel, logdest=targetlogdest)
response = query_status(identity, timeout=timeout, exit_on_fail=True)
if response == LXMF.LXMPeer.LXMPeer.ERROR_NO_IDENTITY:
RNS.log("Remote received no identity")
exit(203)
if response == LXMF.LXMPeer.LXMPeer.ERROR_NO_ACCESS:
RNS.log("Access denied")
exit(204)
else:
s = response
mutil = round((s["messagestore"]["bytes"]/s["messagestore"]["limit"])*100, 2)
ms_util = f"{mutil}%"
if s["from_static_only"]:
who_str = "static peers only"
else:
who_str = "all nodes"
available_peers = 0
unreachable_peers = 0
peered_incoming = 0
peered_outgoing = 0
peered_rx_bytes = 0
peered_tx_bytes = 0
for peer_id in s["peers"]:
p = s["peers"][peer_id]
pm = p["messages"]
peered_incoming += pm["incoming"]
peered_outgoing += pm["outgoing"]
peered_rx_bytes += p["rx_bytes"]
peered_tx_bytes += p["tx_bytes"]
if p["alive"]:
available_peers += 1
else:
unreachable_peers += 1
total_incoming = peered_incoming+s["unpeered_propagation_incoming"]+s["clients"]["client_propagation_messages_received"]
total_rx_bytes = peered_rx_bytes+s["unpeered_propagation_rx_bytes"]
df = round(peered_outgoing/total_incoming, 2)
dhs = RNS.prettyhexrep(s["destination_hash"]); uts = RNS.prettytime(s["uptime"])
print(f"\nLXMF Propagation Node running on {dhs}, uptime is {uts}")
if show_status:
msb = RNS.prettysize(s["messagestore"]["bytes"]); msl = RNS.prettysize(s["messagestore"]["limit"])
ptl = RNS.prettysize(s["propagation_limit"]*1000); uprx = RNS.prettysize(s["unpeered_propagation_rx_bytes"])
mscnt = s["messagestore"]["count"]; stp = s["total_peers"]; smp = s["max_peers"]; sdp = s["discovered_peers"]
ssp = s["static_peers"]; cprr = s["clients"]["client_propagation_messages_received"]
cprs = s["clients"]["client_propagation_messages_served"]; upi = s["unpeered_propagation_incoming"]
print(f"Messagestore contains {mscnt} messages, {msb} ({ms_util} utilised of {msl})")
print(f"Accepting propagated messages from {who_str}, {ptl} per-transfer limit")
print(f"")
print(f"Peers : {stp} total (peer limit is {smp})")
print(f" {sdp} discovered, {ssp} static")
print(f" {available_peers} available, {unreachable_peers} unreachable")
print(f"")
print(f"Traffic : {total_incoming} messages received in total ({RNS.prettysize(total_rx_bytes)})")
print(f" {peered_incoming} messages received from peered nodes ({RNS.prettysize(peered_rx_bytes)})")
print(f" {upi} messages received from unpeered nodes ({uprx})")
print(f" {peered_outgoing} messages transferred to peered nodes ({RNS.prettysize(peered_tx_bytes)})")
print(f" {cprr} propagation messages received directly from clients")
print(f" {cprs} propagation messages served to clients")
print(f" Distribution factor is {df}")
print(f"")
if show_peers:
if not show_status:
print("")
for peer_id in s["peers"]:
ind = " "
p = s["peers"][peer_id]
if p["type"] == "static":
t = "Static peer "
elif p["type"] == "discovered":
t = "Discovered peer "
else:
t = "Unknown peer "
a = "Available" if p["alive"] == True else "Unreachable"
h = max(time.time()-p["last_heard"], 0)
hops = p["network_distance"]
hs = "hops unknown" if hops == RNS.Transport.PATHFINDER_M else f"{hops} hop away" if hops == 1 else f"{hops} hops away"
pm = p["messages"]
if p["last_sync_attempt"] != 0:
lsa = p["last_sync_attempt"]
ls = f"last synced {RNS.prettytime(max(time.time()-lsa, 0))} ago"
else:
ls = "never synced"
sstr = RNS.prettyspeed(p["str"]); sler = RNS.prettyspeed(p["ler"]); stl = RNS.prettysize(p["transfer_limit"]*1000)
srxb = RNS.prettysize(p["rx_bytes"]); stxb = RNS.prettysize(p["tx_bytes"]); pmo = pm["offered"]; pmout = pm["outgoing"]
pmi = pm["incoming"]; pmuh = pm["unhandled"]
print(f"{ind}{t}{RNS.prettyhexrep(peer_id)}")
print(f"{ind*2}Status : {a}, {hs}, last heard {RNS.prettytime(h)} ago")
print(f"{ind*2}Speeds : {sstr} STR, {sler} LER, {stl} transfer limit")
print(f"{ind*2}Messages : {pmo} offered, {pmout} outgoing, {pmi} incoming")
print(f"{ind*2}Traffic : {srxb} received, {stxb} sent")
ms = "" if pm["unhandled"] == 1 else "s"
print(f"{ind*2}Sync state : {pmuh} unhandled message{ms}, {ls}")
print("")
def main():
try:
parser = argparse.ArgumentParser(description="Lightweight Extensible Messaging Daemon")
@ -404,6 +610,10 @@ def main():
parser.add_argument("-v", "--verbose", action="count", default=0)
parser.add_argument("-q", "--quiet", action="count", default=0)
parser.add_argument("-s", "--service", action="store_true", default=False, help="lxmd is running as a service and should log to file")
parser.add_argument("--status", action="store_true", default=False, help="display node status")
parser.add_argument("--peers", action="store_true", default=False, help="display peered nodes")
parser.add_argument("--timeout", action="store", default=5, help="timeout in seconds for query operations", type=float)
parser.add_argument("--identity", action="store", default=None, help="path to identity used for query request", type=str)
parser.add_argument("--exampleconfig", action="store_true", default=False, help="print verbose configuration example to stdout and exit")
parser.add_argument("--version", action="version", version="lxmd {version}".format(version=__version__))
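
With these flags, a running propagation node can be queried non-interactively, for example:

    lxmd --status --peers --timeout 10

By default the node's own identity file from the configuration directory is used to identify the control request; --identity lets an alternative key file be given explicitly.
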
@ -413,15 +623,24 @@ def main():
print(__default_lxmd_config__)
exit()
program_setup(
configdir = args.config,
rnsconfigdir=args.rnsconfig,
run_pn=args.propagation_node,
on_inbound=args.on_inbound,
verbosity=args.verbose,
quietness=args.quiet,
service=args.service
)
if args.status or args.peers:
get_status(configdir = args.config,
rnsconfigdir=args.rnsconfig,
verbosity=args.verbose,
quietness=args.quiet,
timeout=args.timeout,
show_status=args.status,
show_peers=args.peers,
identity_path=args.identity)
exit()
program_setup(configdir = args.config,
rnsconfigdir=args.rnsconfig,
run_pn=args.propagation_node,
on_inbound=args.on_inbound,
verbosity=args.verbose,
quietness=args.quiet,
service=args.service)
except KeyboardInterrupt:
print("")
@ -477,9 +696,9 @@ propagation_transfer_max_accepted_size = 256
# LXMF prioritises keeping messages that are
# new and small. Large and old messages will
# be removed first. This setting is optional
# and defaults to 2 gigabytes.
# and defaults to 500 megabytes.
# message_storage_limit = 2000
# message_storage_limit = 500
# You can tell the LXMF message router to
# prioritise storage for one or more
@ -491,6 +710,25 @@ propagation_transfer_max_accepted_size = 256
# prioritise_destinations = 41d20c727598a3fbbdf9106133a3a0ed, d924b81822ca24e68e2effea99bcb8cf
# You can configure the maximum number of other
# propagation nodes that this node will peer
# with automatically. The default is 50.
# max_peers = 25
# You can configure a list of static propagation
# node peers, that this node will always be
# peered with, by specifying a list of
# destination hashes.
# static_peers = e17f833c4ddf8890dd3a79a6fea8161d, 5a2d0029b6e5ec87020abaea0d746da4
# You can configure the propagation node to
# only accept incoming propagation messages
# from configured static peers.
# from_static_only = True
# By default, any destination is allowed to
# connect and download messages, but you can
# optionally restrict this. If you enable

View File

@ -1 +1 @@
__version__ = "0.5.5"
__version__ = "0.6.3"

View File

@ -12,6 +12,7 @@ User-facing clients built on LXMF include:
Community-provided tools and utilities for LXMF include:
- [LXMFy](https://lxmfy.quad4.io/)
- [LXMF-Bot](https://github.com/randogoth/lxmf-bot)
- [LXMF Messageboard](https://github.com/chengtripp/lxmf_messageboard)
- [LXMEvent](https://github.com/faragher/LXMEvent)

View File

@ -69,4 +69,4 @@ while True:
# input()
# RNS.log("Requesting messages from propagation node...")
# router.request_messages_from_propagation_node(identity)
# router.request_messages_from_propagation_node(identity)

View File

@ -1,3 +1,2 @@
qrcode==7.4.2
rns==0.7.8
setuptools==70.0.0
qrcode>=7.4.2
rns>=0.9.1

View File

@ -25,6 +25,6 @@ setuptools.setup(
'lxmd=LXMF.Utilities.lxmd:main',
]
},
install_requires=['rns>=0.8.0'],
python_requires='>=3.7',
install_requires=["rns>=0.9.3"],
python_requires=">=3.7",
)