These servers originally had only the 1Gbps base bandwidth, and shaping
it with CAKE worked well to make the most of it during traffic spikes on
the web servers. CAKE has little value for the nameservers, since their
only potentially high throughput service is non-interactive SSH.
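As a rough illustration rather than our exact configuration, shaping to
the old base bandwidth with CAKE looks like this, assuming a
hypothetical interface name of eth0:

    # shape egress to the 1Gbps base bandwidth with CAKE
    # (eth0 is a placeholder for the real device name)
    tc qdisc replace dev eth0 root cake bandwidth 1gbit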
These servers now have 10Gbps burst bandwidth available but are heavily
limited by their single virtual core and unable to use all of it in
practice. CAKE can only provide significant value when it's the
bottleneck, which isn't the case when the workload is CPU limited. We
don't want to keep the artificially low 1Gbps limit, and with the CPU as
the bottleneck CAKE can't usefully shape much beyond it.
Unlike on OVH, the practical bottleneck here is the CPU, and FQ has the
lowest CPU usage in practice since it's highly performance-oriented,
with a FIFO fast path and TCP pacing offloaded from the TCP stack to the
qdisc. On the DNS servers, the fast path is always used in practice. Our
OVH servers have a much lower enforced bandwidth limit, and the way OVH
implements it ruins fairness across flows. We definitely want to stick
with CAKE for our VPS instances on OVH, but it no longer makes sense on
BuyVM.
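A minimal sketch of the switch, again assuming a hypothetical eth0
device: FQ can replace the root qdisc directly, or be set as the
kernel's default qdisc so interfaces pick it up automatically.

    # replace the root qdisc with FQ (eth0 is a placeholder)
    tc qdisc replace dev eth0 root fq

    # or make fq the default qdisc for all interfaces
    sysctl -w net.core.default_qdisc=fq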
This is fully supported by the Broadcom NIC used for both servers but
not enabled by default. It's already enabled by default for the Intel
NIC used by the Macarne update server.
1.releases.grapheneos.org and 2.releases.grapheneos.org were ending up
with only 6 channels by default despite the hardware being capable of
far more. This change raises the channel count to match the 24 CPU
threads.
0.releases.grapheneos.org is already using 32 channels by default which
matches the 32 CPU threads.
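As a sketch of the change with ethtool, assuming a hypothetical eth0
device name:

    # show the current and maximum channel counts
    ethtool --show-channels eth0

    # raise the combined channel count to match the 24 CPU threads
    ethtool -L eth0 combined 24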
Based on the CAKE statistics during load testing, the latency benefits
of GSO splitting are minimal for our servers, and the added CPU usage of
splitting can itself increase latency.
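For reference, GSO splitting is controlled by CAKE's split-gso and
no-split-gso options, and the statistics can be inspected with tc; a
sketch assuming a hypothetical eth0 device:

    # keep GSO aggregates intact rather than splitting them
    tc qdisc change dev eth0 root cake no-split-gso

    # inspect CAKE statistics gathered during load testing
    tc -s qdisc show dev eth0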