The function is used for two purposes: 1) for subscribers of streams to
get a token they can use to get further updates with, and 2) for
replication to track the position of the writers of the stream.
For streams with a single writer the two scenarios produce the same
result; however, the situation becomes more complicated for streams with
multiple writers. The current `MultiWriterIdGenerator` does not
correctly handle the first case (which is not an issue, as it's only used
for the `caches` stream, which nothing subscribes to outside of
replication).
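To make the difference concrete, here is a minimal sketch (hypothetical names, not the actual `MultiWriterIdGenerator` API): replication wants each writer's own position, while a token handed to subscribers must be safe across *all* writers, i.e. the minimum position.

```python
# Positions each writer has successfully persisted up to (hypothetical).
writer_positions = {"worker1": 7, "worker2": 5}

# 2) Replication tracks each writer's position independently.
def position_of_writer(name: str) -> int:
    return writer_positions[name]

# 1) A subscriber token must be safe for all writers: rows above the
#    minimum position may still be in flight, so only the minimum can
#    be handed out.
def current_token_for_subscribers() -> int:
    return min(writer_positions.values())

assert position_of_writer("worker1") == 7
assert current_token_for_subscribers() == 5
```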
This allows us to have the logic on both master and workers, which is necessary to move event persistence off master.
We also move the instantiation of ID generators from DataStore and the slave stores into the base worker stores. This allows us to select which process writes events independently of the master/worker split.
Python returns a tuple whether or not there are parentheses around the returned values.
I'm just sick of my editor complaining about this all over the place :)
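For example, these two functions return the same tuple:

```python
def with_parens():
    return (1, 2)

def without_parens():
    return 1, 2  # the comma, not the parentheses, makes the tuple

assert with_parens() == without_parens() == (1, 2)
```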
... and update some docstrings to correctly reflect the types being used.
`get_new_device_msgs_for_remote` can return a `long` under some circumstances,
which was being stored in `last_device_list_stream_id_by_dest` and was then
upsetting things on the next loop.
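For context, this is a Python 2 issue: a large enough value silently becomes a `long`, which then fails strict `int` checks. A sketch of the failure mode (Python 2, hypothetical downstream check):

```python
# Python 2: ints overflow into longs transparently...
stream_id = 2 ** 63
assert isinstance(stream_id, long)

# ...so any downstream code that checks strictly for int now breaks.
assert not isinstance(stream_id, int)
```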
Rather than loading them lazily. This allows us to remove all
the yield statements and spurious arguments for the `get_next`
methods.
It also allows us to replace all instances of `get_next_txn` with
`get_next`, since `get_next` no longer needs to access the db.
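A minimal sketch of the resulting shape (hypothetical class, assuming `get_next` hands out ids via a context manager): because the current position is read from the database once at startup, `get_next` can be plain synchronous code.

```python
import contextlib
import threading

class EagerStreamIdGenerator:
    """Sketch: the current position is loaded from the database once at
    startup (e.g. SELECT MAX(stream_id) ...), so get_next never needs a
    db connection, a yield, or any extra arguments again."""

    def __init__(self, current_id: int):
        self._lock = threading.Lock()
        self._current = current_id

    @contextlib.contextmanager
    def get_next(self):
        # Synchronous and self-contained, which is what lets callers of
        # get_next_txn switch to get_next.
        with self._lock:
            self._current += 1
            allocated = self._current
        yield allocated

gen = EagerStreamIdGenerator(current_id=41)
with gen.get_next() as stream_id:
    assert stream_id == 42
```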
This is for setting up dependencies that require work on startup. This
is useful for the DataStore, which wants to read a bunch from the database
before initializing.
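A hedged sketch of the pattern (invented names, asyncio-flavoured): keep `__init__` cheap and synchronous, and put the database reads in an explicit async setup step that runs once at startup.

```python
import asyncio

class FakeDb:
    async def fetch_positions(self):
        # Stand-in for the real reads done before the store is usable.
        return {"events": 42}

class DataStore:
    def __init__(self, db):
        self.db = db
        self.positions = None  # filled in by setup(), not __init__

    async def setup(self):
        # Anything that needs the event loop or the database goes here.
        self.positions = await self.db.fetch_positions()

async def main():
    store = DataStore(FakeDb())
    await store.setup()
    assert store.positions["events"] == 42

asyncio.run(main())
```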
Handle the fact that events can be persisted out of order, which makes
getting the "current max" stream token non-trivial: we need to make
sure that *all* stream tokens less than the current max have also
been successfully persisted.
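One way to do that, as a minimal sketch (hypothetical class, not the actual implementation): track the ids that are still in flight, and only advance the returned maximum across a contiguous run of completed ids.

```python
class PersistedPositionTracker:
    """Hands out increasing stream ids, but only advances the 'current
    max' once every smaller id has also finished persisting."""

    def __init__(self, current: int = 0):
        self._current = current   # everything <= this has been persisted
        self._next = current      # highest id handed out so far
        self._unfinished = set()  # handed out but not yet persisted

    def allocate(self) -> int:
        self._next += 1
        self._unfinished.add(self._next)
        return self._next

    def mark_persisted(self, stream_id: int) -> None:
        self._unfinished.discard(stream_id)
        # Advance over any contiguous run of finished ids.
        while self._current < self._next and (self._current + 1) not in self._unfinished:
            self._current += 1

    def current_max(self) -> int:
        return self._current

t = PersistedPositionTracker()
a, b = t.allocate(), t.allocate()  # ids 1 and 2
t.mark_persisted(b)                # id 2 finishes first...
assert t.current_max() == 0        # ...but id 1 is still in flight
t.mark_persisted(a)
assert t.current_max() == 2        # now everything <= 2 is persisted
```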