Locking in dm-sflc
For the accesses to the position map (and ancillary data structures) to be thread-safe, we obviously need some locking mechanism, because there will be many I/O requests trying to access it concurrently.
The simplest mechanism possible would be a single per-volume lock associated with the position map (to be acquired on every PosMap access), plus a per-device lock associated with the pre-shuffled array of PSIs (to be acquired by WRITEs when allocating a new slice).
Instead, what we use is slightly more complex. Besides the per-device lock, there are two locks associated with each volume: a read-write semaphore, and a spinlock. The reason lies in the following two observations/requirements:
1. FLUSH requests need to perform potentially-sleeping operations in their critical section(s): while locking the position map, they need to encrypt its dirty blocks onto a separate memory area. Encryption can potentially sleep, not just because of memory allocation, but also because the Kernel Crypto API itself may decide to schedule the operation rather than performing it synchronously.
2. We would like the overall locking procedure to be sleepless in the "typical case". That is, when only READs and WRITEs are incoming, and no FLUSHes, we would like the lock(s) governing the READ and WRITE critical sections to be acquirable without sleeping. Otherwise, the overhead of scheduling and context-switching would likely dwarf the critical sections themselves (which are very tiny), and would hurt the end-to-end I/O throughput.
If we wanted to go for the simple mechanism and only have a single per-volume lock, this lock would have to be a sleeping mutex, because of point 1: it could never be a spinlock, because a FLUSH would need to acquire it and then potentially sleep on encryption (which is a no-no). But then, having a mutex govern access to the position map would violate point 2.
The next-simplest solution is to use a rwsem and a spinlock in conjunction. At a high level:
- READs only take the spinlock.
- WRITEs first take the rwsem as readers, then the spinlock. Also the per-device spinlock, if allocating a new slice.
- FLUSHes take the rwsem as writers.
- DISCARDs behave like WRITEs in their critical section, so they also take the rwsem as readers and then the spinlock. No need (yet) to take the per-device spinlock.
This respects both requirements above: a FLUSH is able to sleep on encryption while holding the rwsem; READs and WRITEs don't sleep to enter their critical sections when there are no FLUSHes in flight.
Also, notice that this architecture allows for concurrency between FLUSHes and READs: this is alright because READs only read the position map entries in their critical section, and the FLUSH never writes to those entries.
The Flush State
FLUSH operations need a potentially large amount of memory for their contextual state. To avoid allocating and freeing it for each request, we pre-allocate it once and for all, at volume construction time. Its access is (for the most part) not governed by any lock: the block layer offers the core guarantee that, for each block device, only one FLUSH request can be pending at any given time, so the state can never be accessed concurrently.
This state consists, among other things, of the memory area hosting the encrypted (and serialised) position map; this buffer is referenced by the FLUSH's in-flight CWBs (Cacheline WriteBack requests), while no lock protects it: it is therefore necessary that no other code accesses it concurrently. This fundamental assumption of non-reentrancy is guaranteed, aside from the aforementioned property of the block layer, by the fact that the only other callers of the FLUSH function are the volume constructor and destructor, which can never execute concurrently with I/O on the volume.
Pseudocode
The following picture illustrates the position map and its ancillary fields (including the FLUSH state), allocated once per volume.
