formatting, webarchive links, add icons

This commit is contained in:
qubedmaiska 2025-02-27 16:10:31 -05:00
parent 2093a276b8
commit d4461fb4cb
No known key found for this signature in database
GPG key ID: 204BCE0FD52C0501
6 changed files with 15 additions and 9 deletions


@@ -24,6 +24,6 @@ This has the following disadvantages:
In modern Qubes OS releases, we have reimplemented inter-VM file copy using qrexec, which addresses the above-mentioned disadvantages. Nowadays, an even more generic solution (qubes rpc) is used. See the developer docs on qrexec and qubes rpc. In a nutshell, the file sender and the file receiver simply read/write stdin/stdout, and the qubes rpc layer passes the data between them - so no block devices are used.
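For illustration, here is a minimal sketch of that stdin/stdout model, assuming the qrexec layer connects the two processes between VMs. The framing (name length, then data length) is invented for this example; it is not the real *qubes.Filecopy* wire format, and the actual agents are small C programs.

```python
#!/usr/bin/env python3
# Hypothetical sketch of a qrexec-style file transfer: the sender writes a
# trivially framed file to stdout, the receiver reads it from stdin.
# The qrexec layer is assumed to connect the two streams between VMs.
import os
import struct
import sys


def send_file(path):
    """Run in the source VM: stream one file to stdout."""
    data = open(path, "rb").read()
    name = os.path.basename(path).encode()
    # Invented frame: name length, data length, then the two payloads.
    sys.stdout.buffer.write(struct.pack("<II", len(name), len(data)))
    sys.stdout.buffer.write(name)
    sys.stdout.buffer.write(data)
    sys.stdout.buffer.flush()


def receive_file(dest_dir):
    """Run in the destination VM: read one file from stdin."""
    # For brevity, assumes read() returns the full requested payload.
    name_len, data_len = struct.unpack("<II", sys.stdin.buffer.read(8))
    name = sys.stdin.buffer.read(name_len).decode()
    data = sys.stdin.buffer.read(data_len)
    with open(os.path.join(dest_dir, os.path.basename(name)), "wb") as f:
        f.write(data)


if __name__ == "__main__":
    if sys.argv[1] == "send":
        send_file(sys.argv[2])
    else:
        receive_file(sys.argv[2])
```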
The rpc action for regular file copy is *qubes.Filecopy*, the rpc client is named *qfile-agent*, and the rpc server is named *qfile-unpacker*. For DispVM copy, the rpc action is *qubes.OpenInVM*, the rpc client is named *qopen-in-vm*, and the rpc server is named *vm-file-editor*. Note that the qubes.OpenInVM action can be done on a normal AppVM, too.
The rpc action for regular file copy is *qubes.Filecopy*, the rpc client is named *qfile-agent*, and the rpc server is named *qfile-unpacker*. For DispVM copy, the rpc action is *qubes.OpenInVM*, the rpc client is named *qopen-in-vm*, and the rpc server is named *vm-file-editor*. Note that the *qubes.OpenInVM* action can be done on a normal AppVM, too.
Being an rpc server, *qfile-unpacker* must be coded securely, as it processes a potentially untrusted data format. In particular, we do not want to use an external tar or cpio and be prone to all of their vulnerabilities; we want a simplified, small utility that handles only the directory/file/symlink file types, permissions, and mtime/atime, and assumes user/user ownership. In the current implementation, the code that actually parses the data from srcVM is about 100 lines long and executes chrooted to the destination directory. The latter is hardcoded to `~user/QubesIncoming/srcVM`; because of the chroot, there is no possibility of altering files outside of this directory.
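The chroot confinement idea can be sketched as follows. This is only an illustration, not the real *qfile-unpacker* (a small C program): `unpack_stream` is a hypothetical placeholder for the ~100-line parser, and the privileged setup is reduced to a bare minimum.

```python
#!/usr/bin/env python3
# Sketch of the chroot confinement idea: untrusted input is parsed only
# after chroot()ing into the destination directory, so a malicious path
# such as "../../.bashrc" cannot escape ~user/QubesIncoming/srcVM.
import os
import pwd
import sys


def confine_and_unpack(user, src_vm):
    pw = pwd.getpwnam(user)                       # look up before chroot
    dest = os.path.join(pw.pw_dir, "QubesIncoming", src_vm)
    os.makedirs(dest, exist_ok=True)

    os.chroot(dest)        # requires root; from now on, dest is "/"
    os.chdir("/")
    os.setgid(pw.pw_gid)   # drop privileges before touching untrusted data
    os.setuid(pw.pw_uid)

    unpack_stream(sys.stdin.buffer)   # hypothetical parser of the stream


def unpack_stream(stream):
    # Placeholder: a real implementation would parse a simple
    # directory/file/symlink record format here.
    pass
```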


@@ -15,7 +15,7 @@ Rationale
Traditionally, Xen VMs are assigned a fixed amount of memory. This is not an optimal solution, as some VMs may require more memory than initially assigned, while others underutilize memory. Thus, there is a need for a solution capable of shifting free memory from one VM to another.
The [tmem](https://oss.oracle.com/projects/tmem/) project provides a "pseudo-RAM" that is assigned on a per-need basis. However, this solution has some disadvantages:
The [tmem](https://web.archive.org/web/20210712161104/https://oss.oracle.com/projects/tmem/) project provides a "pseudo-RAM" that is assigned on a per-need basis. However, this solution has some disadvantages:
- It does not provide real RAM, just an interface to copy memory to/from fast, RAM-based storage. It is perfect for swap, good for file cache, but not ideal for many tasks.
- It is deeply integrated with the Linux kernel. If Qubes is to support Windows guests natively, we would have to port *tmem* to Windows, which may be challenging.
@@ -24,13 +24,19 @@ Therefore, in Qubes another solution is used. There is the *qmemman* dom0 daemon
Similarly, when there is a need for Xen free memory (for instance, in order to create a new VM), traditionally the memory is obtained from dom0 only. When *qmemman* is running, it offers an interface to obtain memory from all domains.
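As a rough illustration of that interface, a dom0 daemon could satisfy a request for free Xen memory by shrinking every domain in proportion to its estimated surplus, instead of taking it all from dom0. The sketch below is hypothetical and does not reflect *qmemman*'s actual algorithm; `free_xen_memory` and its inputs are invented for this example, and the mechanism that actually tells a domain its new target is not shown.

```python
def free_xen_memory(domains, needed):
    """Hypothetical: shrink all domains in proportion to their surplus.

    `domains` maps a domain name to its current allocation and estimated
    demand (both in MiB); returns the new per-domain memory targets.
    """
    surplus = {d: max(0, info["allocated"] - info["demand"])
               for d, info in domains.items()}
    total_surplus = sum(surplus.values())
    if total_surplus < needed:
        raise RuntimeError("not enough reclaimable memory")

    targets = {}
    for d, info in domains.items():
        # Integer division may leave a MiB or two unreclaimed; fine for a sketch.
        take = needed * surplus[d] // total_surplus
        targets[d] = info["allocated"] - take
    return targets


print(free_xen_memory(
    {"dom0":  {"allocated": 4096, "demand": 3000},
     "work":  {"allocated": 4096, "demand": 4000},
     "email": {"allocated": 2048, "demand": 1024}},
    needed=1024))
```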
To sum up, here are *qmemman*'s pros and cons. Pros:
To sum up, here are *qmemman*'s pros and cons.
<div class="focus">
<i class="fa fa-check"></i> <strong>Pros</strong>
</div>
- provides automatic balancing of memory across participating PV and HVM domains, based on their memory demand
- works well in practice, with less than 1% CPU consumption in the idle case
- simple, concise implementation
Cons:
<div class="focus">
<i class="fa fa-times"></i> <strong>Cons</strong>
</div>
- the algorithm to calculate the memory requirement for a domain is necessarily simple, and may not closely reflect reality (a sketch of such an estimate follows this list)
- *qmemman* is notified by a VM about memory usage changes no more often than 10 times per second (to limit CPU overhead in the VM). Thus, there can be up to a 0.1s delay before *qmemman* starts to react to the new memory requirements
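To make the "necessarily simple" point concrete, here is a hypothetical sketch of such a per-domain estimate and of demand-proportional balancing. The formula, field names, and numbers are invented for illustration and are not *qmemman*'s actual ones; each VM is assumed to periodically report a few simple meminfo figures (in MiB).

```python
def estimate_demand(meminfo):
    """Hypothetical estimate from a VM's reported meminfo (MiB):
    memory actually in use, plus a slice for cache, plus a safety margin."""
    used = meminfo["total"] - meminfo["free"] - meminfo["cache"]
    return int((used + 0.25 * meminfo["cache"]) * 1.3)


def balance(total_host_memory, demands):
    """Distribute host memory proportionally to each domain's demand."""
    total_demand = sum(demands.values())
    return {dom: total_host_memory * d // total_demand
            for dom, d in demands.items()}


demands = {
    "dom0": estimate_demand({"total": 4096, "free": 1024, "cache": 512}),
    "work": estimate_demand({"total": 4096, "free": 256, "cache": 1024}),
}
print(balance(8192, demands))
```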