📡 communication design patterns
Request Response model
used in
- the web, HTTP, DNS, SSH
- RPC (remote procedure call)
- SQL and database protocols
- APIs (REST/SOAP/GraphQL)
the basic idea
1. client sends a request
- the request structure is defined by both client and server and has a boundary.
2. server parses the request
- the parsing cost is not cheap (e.g. `json` vs. `xml` vs. protocol buffers)
- for example, for a large image, chunks can be sent, with a request per chunk
3. server processes the request
4. server sends a response
5. client parses the response and consumes it
an example in your terminal
- see how it always gets the headers first:
```
curl -v --trace - marinasouza.xyz
== Info: Trying 76.76.21.21:80...
== Info: Connected to marinasouza.xyz (76.76.21.21) port 80 (#0)
=> Send header, 79 bytes (0x4f)
0000: 47 45 54 20 2f 20 48 54 54 50 2f 31 2e 31 0d 0a GET / HTTP/1.1..
0010: 48 6f 73 74 3a 20 6d 61 72 69 6e 61 73 6f 75 7a Host: marinasouz
0020: 61 2e 78 79 7a 0d 0a 55 73 65 72 2d 41 67 65 6e a.xyz..User-Agen
0030: 74 3a 20 63 75 72 6c 2f 37 2e 38 38 2e 31 0d 0a t: curl/7.88.1..
0040: 41 63 63 65 70 74 3a 20 2a 2f 2a 0d 0a 0d 0a Accept: */*....
== Info: HTTP 1.0, assume close after body
<= Recv header, 33 bytes (0x21)
0000: 48 54 54 50 2f 31 2e 30 20 33 30 38 20 50 65 72 HTTP/1.0 308 Per
0010: 6d 61 6e 65 6e 74 20 52 65 64 69 72 65 63 74 0d manent Redirect.
0020: 0a .
<= Recv header, 26 bytes (0x1a)
0000: 43 6f 6e 74 65 6e 74 2d 54 79 70 65 3a 20 74 65 Content-Type: te
0010: 78 74 2f 70 6c 61 69 6e 0d 0a xt/plain..
<= Recv header, 36 bytes (0x24)
0000: 4c 6f 63 61 74 69 6f 6e 3a 20 68 74 74 70 73 3a Location: https:
0010: 2f 2f 6d 61 72 69 6e 61 73 6f 75 7a 61 2e 78 79 //marinasouza.xy
0020: 7a 2f 0d 0a z/..
<= Recv header, 41 bytes (0x29)
0000: 52 65 66 72 65 73 68 3a 20 30 3b 75 72 6c 3d 68 Refresh: 0;url=h
0010: 74 74 70 73 3a 2f 2f 6d 61 72 69 6e 61 73 6f 75 ttps://marinasou
0020: 7a 61 2e 78 79 7a 2f 0d 0a za.xyz/..
<= Recv header, 16 bytes (0x10)
0000: 73 65 72 76 65 72 3a 20 56 65 72 63 65 6c 0d 0a server: Vercel..
<= Recv header, 2 bytes (0x2)
0000: 0d 0a ..
<= Recv data, 14 bytes (0xe)
0000: 52 65 64 69 72 65 63 74 69
```
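the same request-response flow, as a minimal sketch in python using only the standard library (the hostname is the one from the curl trace above; any URL works):

```python
import http.client

# 1. client sends a request (request line + headers define the boundary)
conn = http.client.HTTPConnection("marinasouza.xyz", 80)
conn.request("GET", "/")

# 2-4. server parses the request, processes it, and sends a response
resp = conn.getresponse()

# 5. client parses the response and consumes it
print(resp.status, resp.reason)                    # e.g. 308 Permanent Redirect, as in the trace
print(dict(resp.getheaders()).get("Location"))     # the redirect target
body = resp.read()
conn.close()
```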
Synchronous vs. Asynchronous workloads
Synchronous I/O: the basic idea
1. Caller sends a request and blocks
2. Caller cannot execute any code meanwhile
3. Receiver responds, Caller unblocks
4. Caller and Receiver are in sync
example (note the waste!)
1. program asks OS to read from disk
2. the program's main thread is taken off the CPU
3. the read completes and the program resumes execution (costly)
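a minimal sketch of the blocking pattern (`data.bin` is just a hypothetical file):

```python
# synchronous I/O: the calling thread is parked until the kernel returns the data
with open("data.bin", "rb") as f:   # hypothetical file
    data = f.read()                 # the thread blocks here; nothing else in it runs
print(len(data))                    # only executes after the read has completed
```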
Asynchronous I/O: the basic idea
1. caller sends a request
2. caller can work until it gets a response
3. caller either:
- checks whether the response is ready (epoll)
- receiver calls back when it's done (io_uring)
- spins up a new thread that blocks
4. caller and receiver are not in sync
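a minimal sketch of the readiness-check style with python's `selectors` module (backed by epoll on Linux); the host is reused from the example above:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
sock = socket.socket()
sock.setblocking(False)
sock.connect_ex(("marinasouza.xyz", 80))     # returns immediately; the connect continues in the background
sel.register(sock, selectors.EVENT_WRITE)    # tell the selector what we are waiting for

# ... the caller is free to run other code here ...

if sel.select(timeout=5):                    # check: is the socket writable (connected) yet?
    sock.sendall(b"GET / HTTP/1.1\r\nHost: marinasouza.xyz\r\n\r\n")  # small request fits the send buffer
    sel.modify(sock, selectors.EVENT_READ)
    if sel.select(timeout=5):                # check: is the response ready to be read?
        print(sock.recv(4096).decode(errors="replace"))
sock.close()
sel.close()
```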
Sync vs. Async in a Request Response
- synchronicity is a client property
- most modern client libraries are async
Async workloads are everywhere
- async programming (promises, futures)
- async backend processing
- async commits in postgres
- async IO in Linux (epoll, io_uring)
- async replication
- async OS fsync (filesystem cache)
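a minimal sketch of the promises/futures style with asyncio (the coroutine name and delays are just illustrations):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)    # stand-in for a slow network call; the caller is not blocked
    return f"{name} done"

async def main() -> None:
    # both "requests" run concurrently; total time is ~1s, not ~2s
    results = await asyncio.gather(fetch("a", 1.0), fetch("b", 1.0))
    print(results)

asyncio.run(main())
```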
Push
pros and cons
- real time
- the client must be online (connected to the server)
- the client must be able to handle the load
- polling is preferred for light clients
basic idea
1. client connects to a server
2. server sends data to the client
3. client doesn't have to request anything
4. protocol must be bidirectional
used in
- RabbitMQ (clients consume the queues, and the messages are pushed to the clients)
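a minimal sketch of a push consumer, assuming the `pika` client and a local RabbitMQ broker with a hypothetical queue named `tasks`:

```python
import pika  # assumed RabbitMQ client library

# the client connects once; from then on the broker pushes messages to it
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks")      # hypothetical queue name

def on_message(ch, method, properties, body):
    # called whenever the broker pushes a message; the client never requested this specific one
    print("pushed:", body.decode())
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="tasks", on_message_callback=on_message)
channel.start_consuming()                 # blocks and receives pushes as they arrive
```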
Polling
- used when a request takes a long time to process (e.g., uploading a video); very simple to build
- however, it can be too chatty, using too much network bandwidth and backend resources
basic idea
1. client sends a request
2. server responds immediately with a handle
3. server continues to process the request
4. client uses that handle to check for status
5. multiple short request-responses act as polls
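a minimal client-side sketch, assuming the `requests` library and hypothetical `/jobs` endpoints:

```python
import time
import requests  # assumed library; the endpoints below are hypothetical

# 1-2. client submits the job and the server responds immediately with a handle
handle = requests.post("https://api.example.com/jobs", json={"video": "cat.mp4"}).json()["job_id"]

# 4-5. client checks the status with multiple short request-responses
while True:
    status = requests.get(f"https://api.example.com/jobs/{handle}").json()["status"]
    if status == "done":
        break
    time.sleep(2)   # chatty: every iteration is a full request-response
```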
Long Polling
- a polling request where the server only responds when the job is ready (used when a request takes a long time to process and is not real time)
- used by Kafka
basic idea
1. client sends a request
2. server responds immediately with a handle
3. server continues to process the request
4. client uses that handle to check for status
5. server does not reply until it has the response (subject to timeouts)
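a minimal client-side sketch, again assuming the `requests` library and hypothetical endpoints; the `wait` parameter is an assumption about the server's long-poll interface:

```python
import requests  # assumed library; the endpoints below are hypothetical

handle = requests.post("https://api.example.com/jobs", json={"video": "cat.mp4"}).json()["job_id"]

while True:
    try:
        # the server holds this request open until the job is ready (or its own timeout fires)
        resp = requests.get(f"https://api.example.com/jobs/{handle}?wait=30", timeout=35)
        if resp.json()["status"] == "done":
            break
    except requests.exceptions.Timeout:
        continue     # nothing ready yet; immediately issue the next long poll
```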
Server-Sent Events
- one request with a long response, but the client must be online and be able to handle the response.
basic idea
1. a response has a start and an end
2. client sends a request
3. server sends logical events as part of response
4. server never writes the end of the response
5. it's still a single request, but with an unending response
6. client parses the streamed data
7. works with HTTP
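a minimal client-side sketch, assuming the `requests` library and a hypothetical `/events` endpoint emitting standard `data:` lines:

```python
import requests  # assumed library; the endpoint below is hypothetical

# one request, an unending response: the server keeps appending events to the body
with requests.get("https://api.example.com/events", stream=True) as resp:
    for line in resp.iter_lines():                  # each logical event arrives inside the same response
        if line.startswith(b"data: "):
            print("event:", line[len(b"data: "):].decode())
```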