This needs to be configured by specific services to have any effect. For
now, we're only enabling it for the PowerDNS Authoritative Server and
dnsdist since it's recommended by RFC 9210 and actively used by various
recursive resolver servers when falling back to TCP. TCP Fast Open is
rarely used from end user devices because it enables tracking and has
issues with middleboxes. We aren't going to start using it anywhere in
GrapheneOS but may have more server-side uses for it. This functionality
is built into QUIC without the same downsides but QUIC support in the
software we use is not ready for us to enable it, especially the very
primitive support in nginx.
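
As a generic illustration of what that per-service opt-in looks like at
the socket level on Linux (a minimal sketch, not the actual PowerDNS or
dnsdist configuration; the port and queue length are arbitrary):

    import socket

    # The kernel also needs the server bit set in the net.ipv4.tcp_fastopen
    # bitmask (1 = client, 2 = server), e.g. a value of 3 enables both.
    TCP_FASTOPEN = getattr(socket, "TCP_FASTOPEN", 23)  # 23 is the Linux value

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", 5300))  # arbitrary example port
    # The option value is the maximum queue of pending TFO requests.
    sock.setsockopt(socket.IPPROTO_TCP, TCP_FASTOPEN, 64)
    sock.listen(128)
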
For most servers, a new random TCP Fast Open key is created on a daily
basis and the previous key continues to be accepted. For DNS servers,
the new key is generated via a keyed hash of the current date in order
to keep it consistent across the servers providing an anycast IP
without requiring regular synchronization.
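
A minimal sketch of that derivation, assuming a shared secret
distributed out of band and the Linux net.ipv4.tcp_fastopen_key sysctl,
which takes a 16-byte primary key plus an optional comma-separated
backup key on 5.1+ kernels; the secret, hash choice and formatting here
are illustrative, not our exact implementation:

    import hashlib
    import hmac
    from datetime import datetime, timedelta, timezone

    def tfo_key_for(day: datetime, secret: bytes) -> str:
        # Keyed hash of the UTC date, truncated to the 16 bytes a TFO key holds.
        digest = hmac.new(secret, day.strftime("%Y-%m-%d").encode(),
                          hashlib.sha256).digest()
        # Format as four dash-separated 32-bit words for tcp_fastopen_key.
        return "-".join(digest[i:i + 4].hex() for i in range(0, 16, 4))

    secret = b"shared-secret-distributed-out-of-band"  # placeholder
    today = datetime.now(timezone.utc)
    primary = tfo_key_for(today, secret)
    backup = tfo_key_for(today - timedelta(days=1), secret)
    # Installing "primary,backup" keeps cookies issued under yesterday's key valid.
    print(f"{primary},{backup}")

Any node running the same derivation produces the same pair for a given
date, which is what keeps the anycast servers consistent without
synchronization.
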
This is fully supported by the Broadcom NIC used for both servers but
not enabled by default. It's already enabled by default for the Intel
NIC used by the Macarne update server.
1.releases.grapheneos.org and 2.releases.grapheneos.org were ending up
with only 6 channels by default despite the hardware being capable of
far more. This raises the channel count to match the 24 CPU threads.
0.releases.grapheneos.org is already using 32 channels by default,
which matches its 32 CPU threads.
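
As an illustrative sketch of that change (the interface name is a
placeholder and the real setting presumably belongs in persistent host
configuration rather than a script), the combined channel count can be
matched to the CPU thread count with ethtool:

    import os
    import subprocess

    iface = "eth0"  # placeholder interface name
    threads = os.cpu_count() or 1

    # Print the current and maximum channel counts, then raise the combined
    # channel count so there's one queue per CPU thread.
    subprocess.run(["ethtool", "--show-channels", iface], check=True)
    subprocess.run(["ethtool", "--set-channels", iface, "combined",
                    str(threads)], check=True)
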
This avoids setting outbound DSCP for echo-reply packets, TCP RST
packets from sockets in the TIME-WAIT state, and potentially other
cases. We don't
want it to be possible for inbound packets to determine our outbound
traffic classification even to a small extent.
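
One hypothetical shape for that kind of exclusion, sketched as an
nftables output chain applied from a script (the table layout and DSCP
value are assumptions, not our actual rules): kernel-generated replies
bail out before the classification statements run.

    import subprocess

    # Hypothetical ruleset: skip DSCP marking for echo replies and TCP resets
    # so inbound packets can't influence how outbound traffic is classified.
    ruleset = """
    table inet qos {
        chain output {
            type filter hook output priority mangle; policy accept;
            icmp type echo-reply return
            icmpv6 type echo-reply return
            tcp flags & (syn | rst) == rst return
            ip dscp set 0x12
            ip6 dscp set 0x12
        }
    }
    """
    subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)
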
We used to have this but it was lost during changes to our firewall
rules. We don't have an AAAA record for discuss.grapheneos.org to avoid
IPv6 connections but should also explicitly block it. We're doing this
because we rely on IP bans to block spammers from registering, and IPv6
would greatly weaken those bans even if we banned entire /64 prefixes.
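
A hedged sketch of the sort of explicit block (the table and chain
layout are placeholders, not our actual ruleset), dropping inbound IPv6
connections to the forum's web ports:

    import subprocess

    # Hypothetical rule: drop inbound IPv6 to the HTTP/HTTPS ports so the
    # forum stays IPv4-only even if an AAAA record were ever published.
    ruleset = """
    table ip6 discuss {
        chain input {
            type filter hook input priority filter; policy accept;
            tcp dport { 80, 443 } counter drop
        }
    }
    """
    subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)
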
Based on the CAKE statistics during load testing, the latency benefits
of GSO splitting are minimal for our servers, while the increased CPU
usage can itself add latency.
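
If GSO splitting is disabled based on that, a hedged sketch using
CAKE's no-split-gso option would look like this (the interface and
shaping rate are placeholders, not our actual configuration):

    import subprocess

    iface = "eth0"      # placeholder interface name
    rate = "1gbit"      # placeholder shaping rate

    # Replace the root CAKE qdisc with no-split-gso so GSO super-packets are
    # queued as-is, trading a small latency win for lower CPU usage.
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", iface, "root", "cake",
         "bandwidth", rate, "no-split-gso"],
        check=True,
    )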