Fix errors storing large retry intervals.

We had set the max retry interval to a value larger than a postgres or
sqlite int can hold, which caused exceptions when updating the
destinations table.

To fix postgres we change the retry_interval column on the destinations
table to a bigint, and for sqlite we lower the max interval to 2**62,
which fits comfortably in its signed 64-bit integers and is still an
incredibly long time.
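Only the new regression test is shown in the diff below; the actual fix
lives in the other two changed files. A minimal sketch of its shape, with
the column change and constant value inferred from the commit message
rather than copied from a diff:

# Sketch of the synapse/util/retryutils.py change: cap the interval below
# sqlite's signed 64-bit integer maximum (2**63 - 1).
MAX_RETRY_INTERVAL = 2 ** 62  # still an incredibly long time

# Sketch of the postgres-only schema delta; sqlite needs no migration
# because its integers are already 64-bit:
# ALTER TABLE destinations ALTER COLUMN retry_interval TYPE BIGINT;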
Erik Johnston 2019-10-02 10:36:27 +01:00
parent dfe7009639
commit f44f1d2e83
3 changed files with 30 additions and 1 deletion

tests/storage/test_transactions.py

@@ -13,6 +13,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+from synapse.util.retryutils import MAX_RETRY_INTERVAL
+
 from tests.unittest import HomeserverTestCase
@@ -45,3 +47,12 @@ class TransactionStoreTestCase(HomeserverTestCase):
         """
         d = self.store.set_destination_retry_timings("example.com", 1000, 50, 100)
         self.get_success(d)
+
+    def test_large_destination_retry(self):
+        d = self.store.set_destination_retry_timings(
+            "example.com", MAX_RETRY_INTERVAL, MAX_RETRY_INTERVAL, MAX_RETRY_INTERVAL
+        )
+        self.get_success(d)
+
+        d = self.store.get_destination_retry_timings("example.com")
+        self.get_success(d)