This was causing a scary error message in RsGxsChannels with autodownload
enabled: when all messages had already been processed, none matched the
filter, so all messages were returned,
making p3GxsChannels::handleUnprocessedPost furious.
More details in this forum post:
retroshare://forum?name=Scary%20message%2C%20but%20which%20doesn%27t%20seem%20to%20be%20the%20source%20of%20the%20problems%20Was%3A%20More%20GXS%20strange%20messages&id=8fd22bd8f99754461e7ba1ca8a727995&msgid=04f10ff97f761c6840c33f1610cb050f0f73da8d
Deprecated the old methods which exposed the internal async mechanism to
API users, making both their life and ours difficult.
Things that really need to be async, like turtle search/requests, now
accept callbacks, so the caller is notified every time some result comes
back (see the sketch below).
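A minimal sketch of the callback pattern, assuming hypothetical names
(AsyncSearchService, SearchResult, turtleSearchRequest), not the exact
libretroshare signatures:

    #include <chrono>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <thread>
    #include <vector>

    struct SearchResult { std::string name; }; // hypothetical result type

    struct AsyncSearchService // stand-in for the real service class
    {
        // The caller passes a callback invoked every time a batch of results
        // comes back, instead of polling an exposed async token/queue.
        void turtleSearchRequest(
                const std::string& match,
                std::function<void(const std::vector<SearchResult>&)> multiCallback )
        {
            std::thread( [match, multiCallback]()
            {
                // Simulate results trickling in from the network.
                multiCallback( { { match + " result 1" }, { match + " result 2" } } );
            } ).detach();
        }
    };

    int main()
    {
        AsyncSearchService svc;
        svc.turtleSearchRequest( "linux iso",
            [](const std::vector<SearchResult>& results)
            {
                for(const SearchResult& r : results) std::cout << r.name << std::endl;
            } );

        std::this_thread::sleep_for(std::chrono::seconds(1)); // only for the demo
        return 0;
    }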
Implemented the RsThread::async convenience wrapper to execute blocking API
calls without blocking the caller; this could be transparently optimized
with a thread pool if necessary.
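A minimal sketch of what such a wrapper may look like, assuming it simply
detaches a std::thread (the real implementation may differ):

    #include <functional>
    #include <thread>

    struct RsThread
    {
        // Run a blocking function on its own detached thread so the caller
        // returns immediately; a thread pool could be swapped in here later
        // without changing any caller.
        static void async(const std::function<void()>& fn)
        {
            std::thread(fn).detach();
        }
    };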
Added hints in some retroshare-gui files on how to use RsThread::async
together with QMetaObject::invokeMethod and the blocking RetroShare API
to simplify interaction between the GUI and libretroshare.
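A sketch of the suggested GUI pattern, reusing the RsThread::async sketch
above; the QLabel and the fake two-second wait stand in for a real widget
update and a blocking libretroshare call (the functor overload of
QMetaObject::invokeMethod requires Qt >= 5.10):

    #include <QApplication>
    #include <QLabel>
    #include <QMetaObject>
    #include <QString>
    #include <chrono>
    #include <thread>

    int main(int argc, char** argv)
    {
        QApplication app(argc, argv);
        QLabel label("loading...");
        label.show();

        // Do the blocking work off the GUI thread...
        RsThread::async( [&label]()
        {
            std::this_thread::sleep_for(std::chrono::seconds(2)); // fake blocking API call
            QString result = "channels loaded";                   // fake API result

            // ...then hop back to the GUI thread before touching any widget.
            QMetaObject::invokeMethod( &label, [&label, result]()
            {
                label.setText(result);
            }, Qt::QueuedConnection );
        } );

        return app.exec();
    }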
rsGetHostByName doesn't support IPv6 addresses.
Before this commit, when a hostname had both AAAA and A records,
rsGetHostByName returned success but with the invalid address 0.0.0.0. As
rsGetHostByName is not capable of handling IPv6 (fixing it would require an
API change and a bunch of refactors around it), just avoid receiving IPv6
addresses at resolution time, so the correct IPv4 address is returned if
present; otherwise fail gracefully (A record not found).
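The underlying idea, sketched here with plain getaddrinfo (the actual
rsGetHostByName code may differ): restrict the hints to AF_INET so only A
records are resolved.

    #include <arpa/inet.h>
    #include <cstring>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <string>
    #include <sys/socket.h>

    // Resolve only the IPv4 (A record) address of a hostname; return false if
    // no A record exists instead of silently yielding 0.0.0.0.
    bool resolveIPv4(const char* hostname, std::string& ipOut)
    {
        addrinfo hints; memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;       // IPv4 only: AAAA records are skipped
        hints.ai_socktype = SOCK_STREAM;

        addrinfo* res = nullptr;
        if(getaddrinfo(hostname, nullptr, &hints, &res) != 0 || !res)
            return false;                // fail gracefully: A record not found

        char buf[INET_ADDRSTRLEN];
        const sockaddr_in* sin = reinterpret_cast<const sockaddr_in*>(res->ai_addr);
        inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
        ipOut = buf;

        freeaddrinfo(res);
        return true;
    }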
Forging of BIO_METHOD may be the cause of the stack overflow, as the
internal structure in OpenSSL changed, so the methods get assigned to the
wrong pointers.
Kudos to Cyril for noticing that the newer OpenSSL internal struct had a
new pointer inserted as its second member.
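For reference, OpenSSL >= 1.1.0 exposes setters so a BIO_METHOD never needs
to be forged by mirroring the internal struct layout; a minimal sketch with
hypothetical callbacks, not the actual RetroShare BIO code:

    #include <openssl/bio.h>

    // Hypothetical custom BIO callbacks.
    static int my_bio_write(BIO* b, const char* buf, int len) { (void)b; (void)buf; return len; }
    static int my_bio_read (BIO* b, char* buf, int len)       { (void)b; (void)buf; (void)len; return 0; }
    static int my_bio_create(BIO* b)  { BIO_set_init(b, 1); return 1; }
    static int my_bio_destroy(BIO* b) { (void)b; return 1; }

    // Build the BIO_METHOD through the official accessors instead of filling a
    // struct whose member layout can change between OpenSSL versions.
    BIO_METHOD* makeCustomBioMethod()
    {
        BIO_METHOD* methods = BIO_meth_new(BIO_TYPE_SOURCE_SINK, "custom BIO sketch");
        if(!methods) return nullptr;

        BIO_meth_set_write  (methods, my_bio_write);
        BIO_meth_set_read   (methods, my_bio_read);
        BIO_meth_set_create (methods, my_bio_create);
        BIO_meth_set_destroy(methods, my_bio_destroy);
        return methods; // release later with BIO_meth_free()
    }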
Caused by unneeded pointer usage plus a not careful enough IPv6 porting.
I haven't managed to reproduce the crash nor to test the fix, since it
happens only when a UDP relayed connection is established (apparently never
on my nodes).
I managed to discover where the bug comes from thanks to multiple user
reports, especially Ilario's report, which documented 3 crashes happening
on 0.6.4 with complete logs.