Move current pages to /en/ subdirectory

Axon 2015-10-11 07:29:00 +00:00
parent f3ee78f223
commit a91496e9d6
159 changed files with 2 additions and 2 deletions


@@ -1,49 +0,0 @@
---
layout: doc
title: Dom0SecureUpdates
permalink: /en/doc/dom0-secure-updates/
redirect_from:
- /doc/Dom0SecureUpdates/
- /wiki/Dom0SecureUpdates/
---
Qubes Dom0 secure update procedure
==================================
Reasons for Dom0 updates
------------------------
Normally there should be few reasons to update software in Dom0. This is because there is no networking in Dom0, which means that even if bugs are discovered, e.g. in the Dom0 desktop manager, this really is not a problem for Qubes, because none of the 3rd party software running in Dom0 is accessible from VMs or the network in any way. Some exceptions to the above include: the Qubes GUI daemon, the Xen store daemon, and the disk back-ends (we plan to move the disk back-ends to an untrusted domain in Qubes 2.0). Of course we believe this software is reasonably secure and we hope it will not need patching.
However, we anticipate some other situations in which updating Dom0 software might be required:
- Updating drivers/libs for new hardware support
- Correcting non-security-related bugs (e.g. new buttons for the Qubes Manager)
- Adding new features (e.g. a GUI backup tool)
Problems with traditional network-based update mechanisms
---------------------------------------------------------
Dom0 is the most trusted domain in Qubes OS, and for this reason we decided to design Qubes in such a way that Dom0 is not connected to any network. In fact, only select domains can be connected to a network, via so-called network domains. There can also be more than one network domain, e.g. in case the user is connected to more than one physically or logically separated network.
The secure update mechanism used in Qubes (starting from Beta 2)
-------------------------------------------------------------
Keeping Dom0 not connected to any network makes it hard, however, to provide updates for software in Dom0. For this reason we have come up with the following mechanism for Dom0 updates, which minimizes the amount of untrusted input processed by Dom0 software:
The update process is initiated by [qvm-dom0-update script](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes-dom0-update), running in Dom0.
Updates (\*.rpm files) are checked and downloaded by UpdateVM, which by default is the same as the firewall VM, but can be configured to be any other network-connected VM. This is done by the [qubes-download-dom0-updates.sh script](https://github.com/QubesOS/qubes-core-agent-linux/blob/release2/misc/qubes-download-dom0-updates.sh) (this script is executed using qrexec by the previously mentioned qvm-dom0-update). Note that we assume this script might get compromised and might download maliciously crafted packages -- this is not a problem, as Dom0 verifies the digital signatures on updates later. The downloaded rpm files are placed in the `/var/lib/qubes/dom0-updates` directory on the UpdateVM filesystem (again, they might get compromised while being kept there, but this still isn't a problem). This directory is passed to yum using the `--installroot=` option.
Once the updates are downloaded, the update script running in UpdateVM requests the RPC service [qubes.ReceiveUpdates](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes.ReceiveUpdates) to be executed in Dom0. This service is implemented by the [qubes-receive-updates script](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes-receive-updates) running in Dom0. Dom0's qvm-dom0-update script (which originally initiated the whole update process) waits until qubes-receive-updates has finished.
The qubes-receive-updates script processes the untrusted input from the UpdateVM: it first extracts the received \*.rpm files (which are sent over the qrexec data connection) and then verifies the digital signature on each file. The qubes-receive-updates script is a security-critical component of the Dom0 update process (as are [qfile-dom0-unpacker.c](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qfile-dom0-unpacker.c) and the rpm utility, both used by qubes-receive-updates for processing the untrusted input).
Once qubes-receive-updates has finished unpacking and verifying the updates, the updates are placed in the ``qubes-receive-updates`` directory in the Dom0 filesystem. Those updates are now trusted. Dom0 is configured (see /etc/yum.conf in Dom0) to use this directory as the default (and only) [yum repository](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes-cached.repo).
Finally, qvm-dom0-update runs ``yum update``, which fetches the rpms from the qubes-cached repo and installs them as usual.
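Put together, the procedure looks roughly like this from the command line (a hedged sketch; script names and paths are taken from the description above and may differ between releases):
~~~
# In Dom0 -- initiate the whole update procedure:
qvm-dom0-update                         # the script described above; may be installed as qubes-dom0-update
# What happens behind the scenes (simplified):
#   UpdateVM: qubes-download-dom0-updates.sh downloads *.rpm files into
#             /var/lib/qubes/dom0-updates (the directory passed to yum via --installroot=)
#   Dom0:     qrexec service qubes.ReceiveUpdates -> qubes-receive-updates
#             unpacks the rpms and verifies the signature of every package
#   Dom0:     yum update                 # installs from the local qubes-cached repo
~~~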
Security benefit of our update mechanism
----------------------------------------
The benefit of the above scheme is that one doesn't need to trust the TCP/IP stack, the HTTP stack, or the wget implementation in order to safely deliver and install updates in Dom0. One only needs to trust the few hundred lines of code discussed above, as well as the rpm utility to properly check digital signatures (but this is required in any update scheme).


@@ -1,55 +0,0 @@
---
layout: doc
title: DVMimpl
permalink: /en/doc/dvm-impl/
redirect_from:
- /doc/DVMimpl/
- /wiki/DVMimpl/
---
DisposableVM implementation in Qubes
====================================
DisposableVM image preparation
------------------------------
A DisposableVM is not started like other VMs, i.e. by executing the equivalent of *xl create* - that would be too slow. Instead, DisposableVMs are started by restoring from a savefile.
Preparing a savefile is done by the */usr/lib/qubes/qubes\_prepare\_saved\_domain.sh* script. It takes two mandatory arguments, the appvm name (APPVM) and the savefile name, plus an optional path to a "prerun" script. The script executes the following steps:
1. APPVM is started by *qvm-start*
2. xenstore key `/local/domain/appvm_domain_id/qubes_save_request` is created
3. if a prerun script was specified, it is copied to the `qubes_save_script` xenstore key
4. wait for the `qubes_used_mem` key to appear
5. (in APPVM) APPVM boots normally, up to the point in the */etc/init.d/qubes\_core* script where the presence of the `qubes_save_request` key is tested. If it exists, then
1. (in APPVM) if present, the prerun script is retrieved from the respective xenstore key and executed. This preloads the filesystem cache with useful applications, so that they will start faster.
2. (in APPVM) the amount of used memory is stored to `qubes_used_mem` xenstore key
3. (in APPVM) busy-waiting for `qubes_restore_complete` xenstore key to appear
6. when `qubes_used_mem` key appears, the domain memory is reduced to this amount, to make the savefile smaller.
7. APPVM private image is detached
8. the domain is saved via *xl save*
9. the COW file volatile.img (COW for the root fs and swap) is packed into the `saved_cows.tar` archive
The *qubes\_prepare\_saved\_domain.sh* script is somewhat low-level. It is usually called by the *qvm-create-default-dvm* script, which takes care of creating a special AppVM (named template\_name-dvm) to be passed to *qubes\_prepare\_saved\_domain.sh*, as well as copying the savefile to /dev/shm (the latter action is not done if the `/var/lib/qubes/dvmdata/dont_use_shm` file exists).
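For illustration, a hedged example of (re)creating the default DisposableVM savefile from Dom0 (the template name is made up):
~~~
# High-level helper: creates the <template>-dvm AppVM and the savefile for it
qvm-create-default-dvm fedora-21
# Roughly equivalent low-level call made under the hood (arguments illustrative):
# /usr/lib/qubes/qubes_prepare_saved_domain.sh fedora-21-dvm <savefile-path> [prerun-script]
~~~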
Restoring a DisposableVM from the savefile
------------------------------------------
Normally, a DisposableVM is created when a Qubes RPC request with target *\$dispvm* is received. Then, as a part of the RPC connection setup, the *qfile-daemon-dvm* program is executed; it executes the */usr/lib/qubes/qubes\_restore* program. It is crucial that this program executes quickly, to make the DisposableVM creation overhead bearable for the user. Its main steps are:
1. modify the savefile so that the VM name, VM UUID, MAC address and IP address are unique
2. restore the COW files from the `saved_cows.tar`
3. create the `/var/run/qubes/fast_block_attach` file, whose presence tells the */etc/xen/scripts/block* script to bypass some redundant checks and execute as fast as possible.
4. execute "xl restore" in order to restore a domain.
5. create the same xenstore keys as normally created when AppVM boots (e.g. `qubes_ip`)
6. create the `qubes_restore_complete` xenstore key. This allows the boot process in DisposableVM to continue.
The actual passing of files between an AppVM and a DisposableVM is implemented via Qubes RPC.
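From an AppVM, this machinery is typically triggered as in the following hedged example (the file name is made up; the exact helper paths may differ):
~~~
# Open a file in a fresh DisposableVM; if it is modified there, it is copied back:
qvm-open-in-dvm ~/document.txt
# which is, roughly, a Qubes RPC call with the special $dispvm target:
# /usr/lib/qubes/qrexec-client-vm '$dispvm' qubes.OpenInVM /usr/lib/qubes/qopen-in-vm ~/document.txt
~~~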
Validating the DisposableVM savefile
------------------------------------
The DisposableVM savefile contains references to the template rootfs and to the COW files. The COW files are restored before each DisposableVM start, so they cannot change. On the other hand, if the TemplateVM is started, the template rootfs will change, and it may no longer be coherent with the COW files.
Therefore, a check that the template rootfs modification time is older than the DisposableVM savefile modification time is required. It is done in the *qfilexchgd* daemon, just before restoring a DisposableVM. If necessary, an attempt is made to recreate the DisposableVM savefile, using the last template used (or the default template, if run for the first time) and the default prerun script, residing at */var/lib/qubes/vm-templates/templatename/dispvm\_prerun.sh*. Unfortunately, the prerun script takes a lot of time to execute - therefore, after a template rootfs modification, the next DisposableVM creation can take about 2.5 minutes longer.


@@ -1,29 +0,0 @@
---
layout: doc
title: Qfilecopy
permalink: /en/doc/qfilecopy/
redirect_from:
- /doc/Qfilecopy/
- /wiki/Qfilecopy/
---
InterVM file copy design
========================
There are two cases when we need a mechanism to copy files between VMs:
- "regular" file copy - when user instructs file manager to copy a given files/directories to a different VM
- DispVM copy - user selects "open in DispVM" on a file; this file must be copied to a Disposable VM, edited by user, and possibly a modified file copied back from DispVM to VM.
Prior to Qubes Beta1, in both cases a block device (backed by a file in dom0 with a vfat filesystem on it) was attached to the VM, the file(s) were copied there, and then the device was detached and attached to the target VM. In the DispVM case, if the edited file had been modified, another block device was passed back to the requesting VM in order to update the source file.
This approach had the following disadvantages:
- performance - dom0 has to prepare and attach/detach block devices, which is slow because of the hotplug scripts involved.
- security - the VM kernel parses the partition table and filesystem metadata from the block device; these are controlled by the (potentially untrusted) sender VM.
In Qubes Beta1, we reimplemented inter-VM file copy using qrexec, which addresses the abovementioned disadvantages. In Qubes Beta2, an even more generic solution (Qubes RPC) is used. See the developer docs on qrexec and Qubes RPC. In a nutshell, the file sender and the file receiver just read/write from stdin/stdout, and the Qubes RPC layer passes the data properly - so, no block devices are used.
The RPC action for regular file copy is *qubes.Filecopy*, the RPC client is named *qfile-agent*, and the RPC server is named *qfile-unpacker*. For DispVM copy, the RPC action is *qubes.OpenInVM*, the RPC client is named *qopen-in-vm*, and the RPC server is named *vm-file-editor*. Note that the qubes.OpenInVM action can be performed on a normal AppVM, too.
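For example, a regular copy is triggered from the source VM as in the following hedged example (the VM and file names are made up; helper paths may differ):
~~~
# Copy a file to the VM named "work"; it lands in the incoming directory described below
qvm-copy-to-vm work ~/report.odt
# which boils down to, roughly:
# /usr/lib/qubes/qrexec-client-vm work qubes.Filecopy /usr/lib/qubes/qfile-agent ~/report.odt
~~~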
Being an RPC server, *qfile-unpacker* must be coded securely, as it processes a potentially untrusted data format. In particular, we do not want to use external tar or cpio and be prone to all the vulnerabilities in them; we want a simplified, small utility that handles only the directory/file/symlink file types, permissions, and mtime/atime, and assumes user/user ownership. In the current implementation, the code that actually parses the data from srcVM has about 100 lines of code and executes chrooted to the destination directory. The latter is hardcoded to `~user/incoming/srcVM`; because of the chroot, there is no possibility of altering files outside of this directory.


@@ -1,56 +0,0 @@
---
layout: doc
title: Qfileexchgd
permalink: /en/doc/qfileexchgd/
redirect_from:
- /doc/Qfileexchgd/
- /wiki/Qfileexchgd/
---
**This mechanism is obsolete as of Qubes Beta 1!**
==================================================
Please see this [page](/en/doc/qfilecopy/) instead.
qfilexchgd, the Qubes file exchange daemon
==========================================
Overview
--------
*qfilexchgd* is a dom0 daemon responsible for managing exchange of block devices ("virtual pendrives") between VMs. It is used for
- copying files between AppVMs
- copying a single file between an AppVM and a DisposableVM
*qfilexchgd* is started after the first *qubes\_guid* has been started, so that it has access to the X display in dom0 to present dialog messages.
*qfilexchgd* is event-driven. The sources of events are:
- trigger of a xenstore watch for changes in the `/local/domain` xenstore hierarchy - to detect start/stop of VMs, and to maintain the vmname-\>vm\_xid dictionary
- trigger of a xenstore watch for a change in the `/local/domain/domid/device/qpen` key - VMs write to this key to request service from *qfilexchgd*
Copying files between AppVMs
----------------------------
1. The AppVM1 user runs the *qvm-copy-to-vm* script (accessible from the Dolphin file manager by right-clicking on file(s) and choosing Actions-\>Send to VM). It calls */usr/lib/qubes/qubes\_penctl new*, which writes a "new" request to its `device/qpen` xenstore key. *qfilexchgd* creates a new 1G file, makes a vfat fs on it, and does a block-attach so that this file is attached as `/dev/xvdg` in AppVM1.
2. AppVM1 mounts `/dev/xvdg` on `/mnt/outgoing`, copies the requested files there, then unmounts it.
3. AppVM1 writes a "send DestVM" request to its `device/qpen` xenstore key (calling */usr/lib/qubes/qubes\_penctl send DestVM*). After getting user confirmation via a dialog box displayed in dom0, *qfilexchgd* detaches `/dev/xvdg` from AppVM1 and attaches it as `/dev/xvdh` to DestVM.
4. In DestVM, the udev script for `/dev/xvdh`, named *qubes\_add\_pendrive\_script* (see `/etc/udev/rules.d/qubes.rules`), mounts `/dev/xvdh` on `/mnt/incoming` and then waits for `/mnt/incoming` to become unmounted. A file manager running in DestVM shows a new volume, and the user in DestVM may copy files from it. When the user in DestVM is done, they unmount `/mnt/incoming`. *qubes\_add\_pendrive\_script* then tells *qfilexchgd* to detach `/dev/xvdh` and terminates. The AppVM1 side of steps 1-3 is sketched below.
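A hedged sketch of steps 1-3 above, as seen from a shell in AppVM1 (historical, pre-Beta 1 mechanism; in practice the mounting and copying is done by the *qvm-copy-to-vm* script):
~~~
/usr/lib/qubes/qubes_penctl new          # ask qfilexchgd for a fresh virtual pendrive (/dev/xvdg)
mount /dev/xvdg /mnt/outgoing
cp -a some_files /mnt/outgoing && umount /mnt/outgoing
/usr/lib/qubes/qubes_penctl send DestVM  # hand it over; it appears as /dev/xvdh in DestVM
~~~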
Copying a single file between AppVM and a DisposableVM
------------------------------------------------------
In order to minimize the attack surface presented by the necessity to process virtual pendrive metadata sent by a (potentially compromised and malicious) DisposableVM, the AppVM\<-\>DisposableVM file exchange protocol does not use any filesystem.
1. The user in AppVM1 runs *qvm-open-in-dvm* (accessible from the Dolphin file manager by right-clicking on a file and choosing Actions-\>Open in DisposableVM). *qvm-open-in-dvm* then:
1. gets a new `/dev/xvdg` (just as described in previous paragraph)
2. computes a new unique transaction seq SEQ by incrementing `/home/user/.dvm/seq` contents,
3. writes the requested file name (say, /home/user/document.txt) to `/home/user/.dvm/SEQ`
4. creates a dvm\_header (see core.git/appvm/dvm.h) on `/dev/xvdg`, followed by file contents
5. writes the "send disposable SEQ" command to its `device/qpen` xenstore key.
2. *qfilexchgd* sees that "send" argument=="disposable", and creates a new DisposableVM by calling */usr/lib/qubes/qubes\_restore*. It adds the new DisposableVM to qubesDB via qvm\_collection.add\_new\_disposablevm. Then it attaches the virtual pendrive (previously attached as `/dev/xvdg` at AppVM1) as `/dev/xvdh` in DisposableVM.
3. In DisposableVM, *qubes\_add\_pendrive\_script* sees a non-zero `qubes_transaction_seq` key in xenstore, and instead of processing the virtual pendrive as in the case of a normal copy, treats it as a DVM transaction (a request, because we run in a DisposableVM). It retrieves the body of the file passed in `/dev/xvdh`, copies it to /tmp, and runs the *mime-open* utility to launch the appropriate application to edit it. When *mime-open* returns, if the file was modified, it is sent back to AppVM1 (by writing "send AppVM1 SEQ" to the `device/qpen` xenstore key). Then the DisposableVM destroys itself.
4. In AppVM1, a new `/dev/xvdh` appears (because the DisposableVM has sent it). *qubes\_add\_pendrive\_script* sees a non-zero `qubes_transaction_seq` key, and treats it as a DVM transaction (a response, because we run in an AppVM, not a DisposableVM). It retrieves the filename from `/home/user/.dvm/SEQ`, and copies the data from `/dev/xvdh` to it.


@@ -1,76 +0,0 @@
---
layout: doc
title: Qmemman
permalink: /en/doc/qmemman/
redirect_from:
- /doc/Qmemman/
- /wiki/Qmemman/
---
qmemman, Qubes memory manager
=============================
Rationale
---------
Traditionally, Xen VMs are assigned a fixed amount of memory. This is not an optimal solution, as some VMs may require more memory than assigned initially, while others underutilize it. Thus, there is a need for a solution capable of shifting free memory from one VM to another.
The [tmem](http://oss.oracle.com/projects/tmem/) project provides "pseudo-RAM" that is assigned on a per-need basis. However, this solution has some disadvantages:
- It does not provide real RAM, just an interface to copy memory to/from fast, RAM-based storage. It is perfect for swap, good for file cache, but not ideal for many tasks.
- It is deeply integrated with the Linux kernel. When Qubes supports Windows guests natively, we would have to port *tmem* to Windows, which may be challenging.
Therefore, in Qubes another solution is used. There is the *qmemman* dom0 daemon. All VMs report their memory usage (via xenstore) to *qmemman*, and it decides whether and how to balance memory across domains. The actual mechanism to add/remove memory from a domain (*xc.domain\_set\_target\_mem*) is already supported by both PV Linux guests and Windows guests (the latter via PV drivers).
Similarly, when there is a need for free Xen memory (for instance, in order to create a new VM), traditionally the memory is obtained from dom0 only. When *qmemman* is running, it offers an interface to obtain memory from all domains.
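For reference, the same ballooning mechanism that *qmemman* drives programmatically can be exercised by hand from dom0; a hedged example (the VM name and size are made up, and with *qmemman* running the setting would soon be re-balanced):
~~~
xl mem-set someVM 2048m   # ask the balloon driver to bring the domain to 2048 MiB
~~~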
To sum up, *qmemman* has the following pros and cons. Pros:
- provides automatic balancing of memory across participating PV and HVM domains, based on their memory demand
- works well in practice, with less than 1% CPU consumption in the idle case
- simple, concise implementation
Cons:
- the algorithm to calculate the memory requirement for a domain is necessarily simple, and may not closely reflect reality
- *qmemman* is notified by a VM about memory usage changes no more often than 10 times per second (to limit CPU overhead in the VM). Thus, there can be a delay of up to 0.1s before qmemman starts to react to the new memory requirements
- it takes more time to obtain free Xen memory, as all participating domains need to be instructed to yield memory
Interface
---------
*qmemman* listens for the following events:
- writes to the `/local/domain/domid/memory/meminfo` xenstore keys by the *meminfo-writer* process in each VM (see the example after this list). The content of this key is taken from the VM's `/proc/meminfo` pseudofile; *meminfo-writer* just strips some unused lines from it. Note that *meminfo-writer* writes its xenstore key only if the VM memory usage has changed significantly enough since the last update (by default by 30MB), to prevent flooding *qmemman* with almost identical data
- commands issued over Unix socket `/var/run/qubes/qmemman.sock`. Currently, the only command recognized is to free the specified amount of memory. The QMemmanClient class implements the protocol.
- if the `/var/run/qubes/do-not-membalance` file exists, *qmemman* suspends memory balancing. It is primarily used when allocating memory for a to-be-created domain, to prevent the balancing algorithm from using up the free Xen memory before the domain creation is completed.
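As an illustration of the first event source above, the data published by *meminfo-writer* can be inspected from dom0 (a hedged example; the VM name is made up):
~~~
# Read what meminfo-writer last published for a given domain:
xenstore-read /local/domain/$(xl domid someVM)/memory/meminfo
~~~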
Algorithms basics
-----------------
The core VM property is `prefmem`. It denotes the amount of memory that should be enough for a domain to run efficiently in the near future. The *qmemman* algorithms will never shrink a domain's memory below its `prefmem`. Currently, `prefmem` is simply 130% of the current memory usage in a domain (without buffers and cache, but including swap). Naturally, `prefmem` is calculated by *qmemman* based on the information passed by *meminfo-writer*.
Whenever *meminfo-writer* running in domain A provides new data on memory usage to *qmemman*, the `prefmem` value for A is updated and the following balance algorithm (*qmemman\_algo.balance*) is triggered. Its output is the list of (domain\_id, new\_memory\_target\_to\_be\_set) pairs:
1. TOTAL\_PREFMEM = sum of `prefmem` of all participating domains
2. TOTAL\_MEMORY = sum of all memory assigned to participating domains plus Xen free memory
3. if TOTAL\_MEMORY \> TOTAL\_PREFMEM, then redistribute TOTAL\_MEMORY across all domains proportionally to their `prefmem`
4. if TOTAL\_MEMORY \< TOTAL\_PREFMEM, then
1. for all domains whose `prefmem` is less than actual memory, shrink them to their `prefmem`
2. redistribute memory reclaimed in the previous step between the rest of domains, proportionally to their `prefmem`
In order to avoid too-frequent memory redistribution, it is actually executed only if one of the conditions below holds:
- the sum of memory size changes for all domains is more than MIN\_TOTAL\_MEMORY\_TRANSFER (150MB)
- one of the domains is below its `prefmem`, and more than MIN\_MEM\_CHANGE\_WHEN\_UNDER\_PREF (15MB) would be added to it
Additionally, the balance algorithm is tuned so that XEN\_FREE\_MEM\_LEFT (50MB) is always left as Xen free memory, to make coherent memory allocations in driver domains work.
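As a worked illustration (numbers made up): with two participating domains whose `prefmem` values are 800MB and 400MB, TOTAL\_PREFMEM is 1200MB. If TOTAL\_MEMORY is 3000MB (more than TOTAL\_PREFMEM), the balance algorithm targets roughly 3000 × 800 / 1200 = 2000MB for the first domain and 3000 × 400 / 1200 = 1000MB for the second, minus whatever is needed to keep XEN\_FREE\_MEM\_LEFT available.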
Whenever *qmemman* is asked to return X megabytes of memory to the Xen free pool, the following algorithm (*qmemman\_algo.balloon*) is executed (a worked example follows the list):
1. find all domains ("donors") whose actual memory is greater than their `prefmem`
2. calculate how much memory can be reclaimed by shrinking the donors to their `prefmem`. If it is less than X, return an error.
3. shrink donors, proportionally to their `prefmem`, so that X MB should become free
4. wait BALOON\_DELAY (0.1s)
5. if some domains have not given back any memory, remove them from the donors list and go to step 2, unless we have already done MAX\_TRIES (20) iterations (in which case, return an error).
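For a worked illustration (numbers made up): to free X = 300MB with two donors whose (actual memory, `prefmem`) pairs are (1200MB, 900MB) and (700MB, 600MB), at most 300MB + 100MB = 400MB can be reclaimed, so the request can be satisfied; the donors are then shrunk proportionally to their `prefmem` until 300MB of Xen memory is free, re-checking after each BALOON\_DELAY whether every donor has actually given memory back.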


@@ -1,164 +0,0 @@
---
layout: doc
title: Qrexec
permalink: /en/doc/qrexec/
redirect_from:
- /doc/Qrexec/
- /wiki/Qrexec/
---
Command execution in VM (and Qubes RPC)
=======================================
Qubes **qrexec** is a framework for implementing inter-VM (incl. Dom0-VM) services. It offers a mechanism to start programs in VMs, redirect their stdin/stdout, and a policy framework to control all of this.
Basic Dom0-VM command execution
-------------------------------
During each domain's creation, a process named `qrexec-daemon` is started in dom0, and a process named `qrexec-agent` is started in the VM. They are connected over a `vchan` channel.
Typically, the first thing that a `qrexec-client` instance does is to send a request to `qrexec-agent` to start a process in the VM. From then on, the stdin/stdout/stderr of this remote process will be passed to the `qrexec-client` process.
E.g., to start a primitive shell in a VM, type the following in a Dom0 console:
~~~
[user@dom0 ~]$ /usr/lib/qubes/qrexec-client -d <vm name> user:bash
~~~
The string before the first colon specifies what user to run the command as.
Adding `-e` on the `qrexec-client` command line results in mere command execution (no data passing), and `qrexec-client` exits immediately after sending the execution request.
There is also the `-l <local program>` flag, which directs `qrexec-client` to pass stdin/stdout of the remote program not to its stdin/stdout, but to the (spawned for this purpose) `<local program>`.
The `qvm-run` command is heavily based on `qrexec-client`. It also takes care of additional activities (e.g., starting the domain, if it is not up yet, and starting the GUI daemon). Thus, it is usually more convenient to use `qvm-run`.
There can be an almost arbitrary number of `qrexec-client` processes for a domain (i.e., `qrexec-client` processes connected to the same `qrexec-daemon`); their data is multiplexed independently.
There is a similar command line utility available inside Linux AppVMs (note the `-vm` suffix): `qrexec-client-vm`, which will be described in subsequent sections.
Qubes Inter-VM Services (Qubes RPC)
-----------------------------------
Apart from simple Dom0-\>VM command executions, as discussed above, it is also useful to have a more advanced infrastructure for controlled inter-VM RPC/services. This might be used for simple things like inter-VM file copy operations, as well as for more complex tasks like starting a DispVM and requesting it to perform certain operations on the handed file(s).
Instead of implementing complex RPC-like mechanisms for inter-VM communication, Qubes takes a much simpler and pragmatic approach and aims to only provide simple *pipes* between the VMs, plus the ability to request *pre-defined* programs (servers) to be started on the other end of such pipes, and a centralized policy (enforced by the `qrexec-policy` process running in dom0) which says which VMs can request what services from what VMs.
Thanks to the framework and automatic stdin/stdout redirection, RPC programs are very simple; both the client and server just use their stdin/stdout to pass data. The framework does all the inner work to connect these file descriptors to each other via `qrexec-daemon` and `qrexec-agent`. Additionally, DispVMs are tightly integrated; RPC to a DispVM is a simple matter of using a magic `$dispvm` keyword as the target VM name.
All services in Qubes are identified by a single string, which by convention takes the form `qubes.ServiceName`. Each VM can provide handlers for any of the known services by providing a file in the `/etc/qubes-rpc/` directory with the same name as the service it is supposed to handle. This file will then be executed by the qrexec service, if the dom0 policy allows the service to be requested (see below). Typically, the files in `/etc/qubes-rpc/` contain just one line, which is a path to the specific binary that acts as a server for the incoming request; however, they might also be actual executables themselves. The qrexec framework takes care of connecting the stdin/stdout of the server process with the corresponding stdin/stdout of the requesting process in the requesting VM (see the example Hello World service described below).
Qubes Services (RPC) policy
---------------------------
Besides each VM needing to provide explicit programs to serve each supported service, the inter-VM service RPC is also governed by a central policy in Dom0.
In dom0, there is a bunch of files in the `/etc/qubes-rpc/policy/` directory, whose names describe the available RPC actions; their content is the RPC access policy database. Some examples of the default services in Qubes are:
- qubes.Filecopy
- qubes.OpenInVM
- qubes.ReceiveUpdates
- qubes.SyncAppMenus
- qubes.VMShell
- qubes.ClipboardPaste
- qubes.Gpg
- qubes.NotifyUpdates
- qubes.PdfConvert
These files contain lines with the following format:
~~~
srcvm destvm (allow|deny|ask)[,user=user_to_run_as][,target=VM_to_redirect_to]
~~~
You can specify `srcvm` and `destvm` by name, or by one of the `$anyvm`, `$dispvm`, `dom0` reserved keywords (note that the string `dom0` does not match the `$anyvm` pattern; all other names do). Only the `$anyvm` keyword makes sense in the `srcvm` field (service calls from dom0 are currently always allowed, and `$dispvm` means "a new VM created for this particular request", so it is never a source of a request). Currently, there is no way to specify a source VM by type, but this is planned for Qubes R3.
Whenever an RPC request for a service named "XYZ" is received, the first line in `/etc/qubes-rpc/policy/XYZ` that matches the actual `srcvm`/`destvm` pair is consulted to determine whether to allow the RPC, what user account the program should run under in the target VM, and what VM to redirect the execution to. If the policy file does not exist, the user is prompted to create one *manually*; if there is still no policy file after prompting, the action is denied.
On the target VM, the file `/etc/qubes-rpc/XYZ` must exist, containing the file name of the program that will be invoked.
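For illustration, a hedged example of what `/etc/qubes-rpc/policy/qubes.Filecopy` might contain (the VM names and rules are made up):
~~~
work      vault      deny
work      $anyvm     ask
$anyvm    $anyvm     ask
~~~
The first matching line wins, so more specific rules must be placed above the catch-all `$anyvm $anyvm` line.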
Requesting VM-VM (and VM-Dom0) services execution
-------------------------------------------------
In a src VM, one should invoke the qrexec client via the following command:
~~~
/usr/lib/qubes/qrexec-client-vm <target vm name> <service name> <local program path> [local program arguments]
~~~
Note that only stdin/stdout is passed between the RPC server and client - notably, no command line arguments are passed.
The source VM name can be accessed in the server process via the QREXEC\_REMOTE\_DOMAIN environment variable. (Note the source VM has *no* control over the name provided in this variable--the name of the VM is provided by dom0, and so is trusted.)
By default, the stderr of the client and server is logged to the respective `/var/log/qubes/qrexec.XID` files in each of the VMs.
Be very careful when coding and adding a new RPC service! Any vulnerability in an RPC server can be fatal to the security of the target VM!
Requesting VM-VM (and VM-Dom0) services execution (without cmdline helper)
--------------------------------------------------------------------------
Connect directly to `/var/run/qubes/qrexec-agent-fdpass` socket as described [here](/doc/Qrexec2Implementation#Allthepiecestogetheratwork).
### Revoking "Yes to All" authorization
Qubes RPC policy supports an "ask" action, which prompts the user whether a given RPC call should be allowed. It is set as the default for services such as inter-VM file copy. A prompt window is launched in dom0 that gives the user the option to click "Yes to All", which allows the action and adds a new entry to the policy file that unconditionally allows further calls for the given (service, srcVM, dstVM) tuple.
In order to remove such authorization, issue this command from a Dom0 terminal (example below for qubes.Filecopy service):
~~~
sudo nano /etc/qubes-rpc/policy/qubes.Filecopy
~~~
and then remove any line(s) ending in "allow" (before the first \#\# comment) which are the "Yes to All" results.
A user might also want to set their own policies in this section. This may mostly serve to prevent the user from mistakenly copying files or text from a trusted to an untrusted domain, or vice versa.
### Qubes RPC "Hello World" service
We will show the necessary files to create a simple RPC call that adds two integers on the target VM and returns the result to the invoking VM.
- Client code on source VM (`/usr/bin/our_test_add_client`)
~~~
#!/bin/sh
echo $1 $2 # pass data to rpc server
exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other rpc endpoint
~~~
- Server code on target VM (`/usr/bin/our_test_add_server`)
~~~
#!/bin/sh
read arg1 arg2 # read from stdin, which is received from the rpc client
echo $(($arg1+$arg2)) # print to stdout - so, pass to the rpc client
~~~
- Policy file in dom0 (`/etc/qubes-rpc/policy/test.Add`)
~~~
$anyvm $anyvm ask
~~~
- Server path definition on target VM (`/etc/qubes-rpc/test.Add`)
~~~
/usr/bin/our_test_add_server
~~~
- To test this service, run the following in the source VM:
~~~
/usr/lib/qubes/qrexec-client-vm <target VM> test.Add /usr/bin/our_test_add_client 1 2
~~~
and we should get "3" as answer, provided dom0 policy allows the call to pass through, which would happen after we click "Yes" in the popup that should appear after the invocation of this command. If we changed the policy from "ask" to "allow", then no popup should be presented, and the call will always be allowed.
More high-level RPCs?
---------------------
As previously noted, Qubes aims to provide mechanisms that are very simple and thus have a very small attack surface. This is the reason why the inter-VM RPC framework is very primitive and doesn't include any serialization or other function-argument passing, etc. We should remember, however, that users/app developers are always free to run more high-level RPC protocols on top of qrexec. Care should be taken, however, to consider the potential attack surfaces exposed to untrusted or less trusted VMs in that case.
Qubes RPC internals
-------------------
The internal implementation of qrexec in Qubes R2 is described [here](/en/doc/qrexec2-implementation/), and in Qubes R3 [here](/en/doc/qrexec3-implementation/).


@@ -1,74 +0,0 @@
---
layout: doc
title: Qrexec2Implementation
permalink: /en/doc/qrexec2-implementation/
redirect_from:
- /doc/Qrexec2Implementation/
- /wiki/Qrexec2Implementation/
---
Implementation of qrexec in Qubes R2
====================================
This page describes the implementation of the [qrexec framework](/en/doc/qrexec/) in Qubes OS R2. Note that the implementation has changed significantly in Qubes R3 (see [Qrexec3Implementation](/en/doc/qrexec3-implementation/)), although the user API remained backwards compatible (i.e. qrexec apps written for Qubes R2 should run without modifications on Qubes R3).
Dom0 tools implementation
-------------------------
Players:
- `/usr/lib/qubes/qrexec-daemon` \<- started by mgmt stack (qubes.py) when a VM is started.
- `/usr/lib/qubes/qrexec-policy` \<- internal program used to evaluate the policy file and to make the 2nd half of the connection.
- `/usr/lib/qubes/qrexec-client` \<- raw command line tool that talks to the daemon via unix socket (`/var/run/qubes/qrexec.XID`)
Note: none of the above tools are designed to be used by users.
Linux VMs implementation
------------------------
Players:
- `/usr/lib/qubes/qrexec-agent` \<- started by VM bootup scripts, a daemon.
- `/usr/lib/qubes/qubes-rpc-multiplexer` \<- executes the actual service program, as specified in VM's `/etc/qubes-rpc/qubes.XYZ`.
- `/usr/lib/qubes/qrexec-client-vm` \<- raw command line tool that talks to the agent.
Note: none of the above tools are designed to be used by users. `qrexec-client-vm` is designed to be wrapped up by Qubes apps.
Windows VMs implementation
------------------------
%QUBES\_DIR% is the installation path (`c:\Program Files\Invisible Things Lab\Qubes OS Windows Tools` by default).
- `%QUBES_DIR%\bin\qrexec-agent.exe` \<- runs as a system service. Responsible both for raw command execution and interpreting RPC service requests.
- `%QUBES_DIR%\qubes-rpc` \<- directory with `qubes.XYZ` files that contain commands for executing RPC services. Binaries for the services are contained in `%QUBES_DIR%\qubes-rpc-services`.
- `%QUBES_DIR%\bin\qrexec-client-vm` \<- raw command line tool that talks to the agent.
Note: none of the above tools are designed to be used by users. `qrexec-client-vm` is designed to be wrapped up by Qubes apps.
All the pieces together at work
-------------------------------
Note: this section is not needed to use qrexec for writing Qubes apps. Also note that the qrexec framework implementation in Qubes R3 differs significantly from what is described in this section.
The VM-VM channels in Qubes R2 are made via "gluing" two VM-Dom0 and Dom0-VM vchan connections:
![qrexec2-internals.png](/attachment/wiki/Qrexec2Implementation/qrexec2-internals.png)
Note: Dom0 never examines the actual data flowing in either of the two vchan connections.
When a user in a source VM executes the `qrexec-client-vm` utility, the following steps are taken (a condensed sketch of the source-VM side follows the list):
- `qrexec-client-vm` connects to `qrexec-agent`'s `/var/run/qubes/qrexec-agent-fdpass` unix socket 3 times. It reads 4 bytes from each of them, which are the fd numbers of the accepted sockets in the agent. These 3 integers, concatenated as text, form the "connection identifier" (CID)
- `qrexec-client-vm` writes a blob consisting of the target vmname, the rpc action, and the CID to the `/var/run/qubes/qrexec-agent` fifo
- `qrexec-client-vm` executes the rpc client, passing the above mentioned unix sockets as process stdin/stdout, and optionally stderr (if the PASS\_LOCAL\_STDERR env variable is set)
- `qrexec-agent` passes the blob to `qrexec-daemon`, via MSG\_AGENT\_TO\_SERVER\_TRIGGER\_CONNECT\_EXISTING message over vchan
- `qrexec-daemon` executes `qrexec-policy`, passing source vmname, target vmname, rpc action, and CID as cmdline arguments
- `qrexec-policy` evaluates the policy file. If successful, it creates a pair of `qrexec-client` processes whose stdin/stdout are cross-connected.
- The first `qrexec-client` connects to the src VM, using the `-c ClientID` parameter, which results in not creating a new process, but connecting to the existing process file descriptors (these are the fds of unix socket created in step 1).
- The second `qrexec-client` connects to the target VM, and executes `qubes-rpc-multiplexer` command there with the rpc action as the cmdline argument. Finally, `qubes-rpc-multiplexer` executes the correct rpc server on the target.
- In the above step, if the target VM is `$dispvm`, the DispVM is created via the `qfile-daemon-dvm` program. The latter waits for the `qrexec-client` process to exit, and then destroys the DispVM.
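A hedged, condensed recap of the source-VM side of the steps above (file names as given above; the blob layout is simplified):
~~~
# 1. connect three times to /var/run/qubes/qrexec-agent-fdpass; each connection yields a
#    4-byte fd number, and the three numbers concatenated as text form the CID
# 2. write "<target-vm> <rpc-action> <CID>" to the /var/run/qubes/qrexec-agent fifo
# 3. exec the local rpc client with those sockets as stdin/stdout (and stderr if
#    PASS_LOCAL_STDERR is set); qrexec-agent, qrexec-daemon and qrexec-policy do the rest
~~~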
Protocol description ("wire-level" spec)
----------------------------------------
TODO


@@ -1,129 +0,0 @@
---
layout: doc
title: Qrexec3
permalink: /en/doc/qrexec3/
redirect_from:
- /doc/Qrexec3/
- /wiki/Qrexec3/
---
Command execution in VM (and Qubes RPC)
=======================================
*[Note: this document describes Qrexec v3 (Odyssey)]*
The qrexec framework is used by core Qubes components to implement communication between domains. Qubes domains are isolated by design, but there is a need for a mechanism to allow the administrative domain (dom0) to force command execution in another domain (VM). For instance, when the user selects an application from the KDE menu, it should be started in the selected VM. It is also often useful to be able to pass stdin/stdout/stderr from an application running in a VM to dom0 (and the other way around). In specific circumstances Qubes allows VMs to be the initiators of such communication (so, for example, a VM can notify dom0 that there are updates available for it).
Qrexec basics
-------------
Qrexec is built on top of vchan (a library providing data links between VMs). During domain creation a process named *qrexec-daemon* is started in dom0, and a process named *qrexec-agent* is started in the VM. They are connected over a *vchan* channel. *qrexec-daemon* listens for connections from a dom0 utility named *qrexec-client*. Typically, the first thing that a *qrexec-client* instance does is to send a request to *qrexec-daemon* to start a process (let's name it VMprocess) with a given command line in a specified VM (someVM). *qrexec-daemon* assigns unique vchan connection details and sends them both to *qrexec-client* (in dom0) and *qrexec-agent* (in someVM). *qrexec-client* starts a vchan server which *qrexec-agent* connects to. From then on, stdin/stdout/stderr of the VMprocess are passed via vchan between *qrexec-agent* and the *qrexec-client* process.
So, for example, executing in dom0
`qrexec-client -d someVM user:bash`
allows one to work with the remote shell. The string before the first colon specifies what user to run the command as. Adding `-e` on the *qrexec-client* command line results in mere command execution (no data passing), and *qrexec-client* exits immediately after sending the execution request and receiving the status code from *qrexec-agent* (whether the process creation succeeded). There is also the `-l local_program` flag -- with it, *qrexec-client* passes stdin/stdout of the remote process to the (spawned for this purpose) *local\_program*, not to its own stdin/stdout.
The `qvm-run` command is heavily based on *qrexec-client*. It also takes care of additional activities, e.g. starting the domain if it is not up yet and starting the GUI daemon. Thus, it is usually more convenient to use `qvm-run`.
There can be an almost arbitrary number of *qrexec-client* processes for a domain (i.e., connected to the same *qrexec-daemon*, so to the same domain) -- their data is multiplexed independently. The number of available vchan channels is the limiting factor here; it depends on the underlying hypervisor.
Qubes RPC services
------------------
Some tasks (like inter-VM file copy) share the same rpc-like structure: a process in one VM (say, the file sender) needs to invoke and send/receive data to/from some process in another VM (say, the file receiver). Thus, the Qubes RPC framework was created, facilitating such actions.
Obviously, inter-VM communication must be tightly controlled to prevent one VM from taking control over another, possibly more privileged, VM. Therefore the design decision was made to pass all control communication via dom0, which can enforce proper authorization. It is then natural to reuse the already existing qrexec framework.
Also, note that bare qrexec provides VM\<-\>dom0 connectivity, but the command execution is always initiated by dom0. There are cases when a VM needs to invoke and send data to a command in dom0 (e.g. to pass information on newly installed .desktop files). Thus, the framework allows dom0 to be the rpc target as well.
Thanks to the framework, RPC programs are very simple -- both rpc client and server just use their stdin/stdout to pass data. The framework does all the inner work to connect these processes to each other via *qrexec-daemon* and *qrexec-agent*. Additionally, disposable VMs are tightly integrated -- rpc to a disposableVM is identical to rpc to a normal domain, all one needs is to pass "\$dispvm" as the remote domain name.
Qubes RPC administration
------------------------
[TODO: fix for non-linux dom0]
In dom0, there is a bunch of files in the */etc/qubes-rpc/policy* directory, whose names describe the available rpc actions; their content is the rpc access policy database. Currently defined actions are:
- qubes.Filecopy
- qubes.OpenInVM
- qubes.ReceiveUpdates
- qubes.SyncAppMenus
- qubes.VMShell
- qubes.ClipboardPaste
- qubes.Gpg
- qubes.NotifyUpdates
- qubes.PdfConvert
These files contain lines with the following format:
srcvm destvm (allow|deny|ask)[,user=user\_to\_run\_as][,target=VM\_to\_redirect\_to]
You can specify srcvm and destvm by name, or by one of the "\$anyvm", "\$dispvm", "dom0" reserved keywords (note that the string "dom0" does not match the \$anyvm pattern; all other names do). Only the "\$anyvm" keyword makes sense in the srcvm field (service calls from dom0 are currently always allowed, and "\$dispvm" means "a new VM created for this particular request", so it is never a source of a request). Currently there is no way to specify a source VM by type. Whenever an rpc request for action X is received, the first line in /etc/qubes-rpc/policy/X that matches srcvm/destvm is consulted to determine whether to allow the rpc, what user account the program should run under in the target VM, and what VM to redirect the execution to. If the policy file does not exist, the user is prompted to create one; if there is still no policy file after prompting, the action is denied.
In the target VM, the */etc/qubes-rpc/RPC\_ACTION\_NAME* must exist, containing the file name of the program that will be invoked.
In the src VM, one should invoke the client via
`/usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME rpc_client_path client arguments`
Note that only stdin/stdout is passed between the rpc server and client -- notably, no command line arguments are passed. The source VM name is provided in the QREXEC\_REMOTE\_DOMAIN environment variable. By default, the stderr of client and server is logged to the respective /var/log/qubes/qrexec.XID files.
Be very careful when coding and adding a new rpc service. Unless the offered functionality equals full control over the target (as is the case with e.g. the qubes.VMShell action), any vulnerability in an rpc server can be fatal to Qubes security. On the other hand, this mechanism allows delegating the processing of untrusted input to less privileged (or throwaway) AppVMs, so wise use of it increases security.
### Revoking "Yes to All" authorization
Qubes RPC policy supports "ask" action. This will prompt the user whether given RPC call should be allowed. That prompt window has also "Yes to All" option, which will allow the action and add new entry to the policy file, which will unconditionally allow further calls for given service-srcVM-dstVM tuple.
In order to remove such authorization, issue this command from a Dom0 terminal (for qubes.Filecopy service):
`sudo nano /etc/qubes-rpc/policy/qubes.Filecopy`
and then remove the first line(s) (before the first \#\# comment), which are the "Yes to All" results.
### Qubes RPC example
We will show the necessary files to create an rpc call that adds two integers on the target and returns the result to the invoker.
- rpc client code (*/usr/bin/our\_test\_add\_client*)
~~~
#!/bin/sh
echo $1 $2 # pass data to rpc server
exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other rpc endpoint
~~~
- rpc server code (*/usr/bin/our\_test\_add\_server*)
~~~
#!/bin/sh
read arg1 arg2 # read from stdin, which is received from the rpc client
echo $(($arg1+$arg2)) # print to stdout - so, pass to the rpc client
~~~
- policy file in dom0 (*/etc/qubes-rpc/policy/test.Add*)
~~~
$anyvm $anyvm ask
~~~
- server path definition (*/etc/qubes-rpc/test.Add*)
~~~
/usr/bin/our_test_add_server
~~~
- invoke rpc via
~~~
/usr/lib/qubes/qrexec-client-vm target_vm test.Add /usr/bin/our_test_add_client 1 2
~~~
and we should get "3" as answer, after dom0 allows it.
Qubes RPC internals
-------------------
See [QrexecProtocol](/doc/QrexecProtocol/).


@@ -1,184 +0,0 @@
---
layout: doc
title: Qrexec3Implementation
permalink: /en/doc/qrexec3-implementation/
redirect_from:
- /doc/Qrexec3Implementation/
- /wiki/Qrexec3Implementation/
---
Implementation of qrexec in Qubes R3
====================================
This page describes implementation of the [qrexec framework](/en/doc/qrexec/) in Qubes OS R3.
The qrexec framework consists of a number of processes communicating with each other using a common IPC protocol (described in detail below). Components residing in the same domain use pipes as the underlying transport medium, while components in separate domains use a vchan link.
Dom0 tools implementation
-------------------------
- `/usr/lib/qubes/qrexec-daemon` \<- one instance is required for every active domain. Responsible for:
- Handling execution and service requests from **dom0** (source: `qrexec-client`).
- Handling service requests from the associated domain (source: `qrexec-client-vm`, then `qrexec-agent`).
> Command line: `qrexec-daemon domain-id domain-name [default user]`
> *domain-id*: numeric Qubes ID assigned to the associated domain.
> *domain-name*: associated domain name.
> *default user*: optional. If passed, `qrexec-daemon` uses this user as default for all execution requests that don't specify one.
- `/usr/lib/qubes/qrexec-policy` \<- internal program used to evaluate the RPC policy and decide whether an RPC call should be allowed.
- `/usr/lib/qubes/qrexec-client` \<- used to pass execution and service requests to `qrexec-daemon`. Command line parameters:
> `-d target-domain-name` Specifies the target for the execution/service request.
> `-l local-program` Optional. If present, `local-program` is executed and its stdout/stdin are used when sending/receiving data to/from the remote peer.
> `-e` Optional. If present, stdout/stdin are not connected to the remote peer. Only process creation status code is received.
> `-c <request-id,src-domain-name,src-domain-id>` Used for connecting a VM-VM service request by `qrexec-policy`. Details described below in the service example.
> `cmdline` Command line to pass to `qrexec-daemon` as the execution/service request. Service request format is described below in the service example.
Note: none of the above tools are designed to be used by users directly.
VM tools implementation
-----------------------
- `qrexec-agent` \<- one instance runs in each active domain. Responsible for:
- Handling service requests from `qrexec-client-vm` and passing them to connected `qrexec-daemon` in **dom0**.
- Executing associated `qrexec-daemon` execution/service requests.
> Command line parameters: none.
- `qrexec-client-vm` \<- runs in an active domain. Used to pass service requests to `qrexec-agent`.
> Command line: `qrexec-client-vm target-domain-name service-name local-program [local program arguments]`
> `target-domain-name` Target domain for the service request. Source is the current domain.
> `service-name` Requested service name.
> `local-program` **local-program** is executed locally and its stdin/stdout are connected to the remote service endpoint.
Qrexec protocol details
-----------------------
The qrexec protocol is message-based. All messages share a common header followed by an optional data packet.
~~~
/* uniform for all peers, data type depends on message type */
struct msg_header {
uint32_t type; /* message type */
uint32_t len; /* data length */
};
~~~
When two peers establish connection, the server sends `MSG_HELLO` followed by `peer_info` struct:
~~~
struct peer_info {
uint32_t version; /* qrexec protocol version */
};
~~~
The client should then reply with its own `MSG_HELLO` and `peer_info`. If the protocol versions don't match, the connection is closed. TODO: fallback for backwards compatibility; don't do the handshake in the same domain?
Details of all possible use cases and the messages involved are described below.
### dom0: request execution of some\_command in domX and pass stdin/stdout
- **dom0**: `qrexec-client` is invoked in **dom0** as follows:
> `qrexec-client -d domX [-l local_program] user:some_command`
> `user` may be substituted with the literal `DEFAULT`. In that case, default Qubes user will be used to execute `some_command`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable to **domX**.
- **dom0**: If `local_program` is set, `qrexec-client` executes it and uses that child's stdin/stdout in place of its own when exchanging data with `qrexec-agent` later.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_HELLO` header followed by `peer_info` to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by `peer_info` to `qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to `qrexec-daemon`
~~~
/* variable size */
struct exec_params {
uint32_t connect_domain; /* target domain id */
uint32_t connect_port; /* target vchan port for i/o exchange */
char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
~~~
In this case, `connect_domain` and `connect_port` are set to 0.
- **dom0**: `qrexec-daemon` replies to `qrexec-client` with `MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline` field. `connect_domain` is set to Qubes ID of **domX** and `connect_port` is set to a vchan port allocated by `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to the associated **domX** `qrexec-agent` over vchan. `connect_domain` is set to 0 (**dom0**), `connect_port` is the same as sent to `qrexec-client`. `cmdline` is unchanged except that the literal `DEFAULT` is replaced with actual user name, if present.
- **dom0**: `qrexec-client` disconnects from `qrexec-daemon`.
- **dom0**: `qrexec-client` starts a vchan server using the details received from `qrexec-daemon` and waits for connection from **domX**'s `qrexec-agent`.
- **domX**: `qrexec-agent` receives `MSG_EXEC_CMDLINE` header followed by `exec_params` from `qrexec-daemon` over vchan.
- **domX**: `qrexec-agent` connects to `qrexec-client` over vchan using the details from `exec_params`.
- **domX**: `qrexec-agent` executes `some_command` in **domX** and connects the child's stdin/stdout to the data vchan. If the process creation fails, `qrexec-agent` sends `MSG_DATA_EXIT_CODE` to `qrexec-client` followed by the status code (**int**) and disconnects from the data vchan.
- Data read from `some_command`'s stdout is sent to the data vchan using `MSG_DATA_STDOUT` by `qrexec-agent`. `qrexec-client` passes data received as `MSG_DATA_STDOUT` to its own stdout (or to `local_program`'s stdin if used).
- `qrexec-client` sends data read from local stdin (or `local_program`'s stdout if used) to `qrexec-agent` over the data vchan using `MSG_DATA_STDIN`. `qrexec-agent` passes data received as `MSG_DATA_STDIN` to `some_command`'s stdin.
- `MSG_DATA_STDOUT` or `MSG_DATA_STDIN` with data `len` field set to 0 in `msg_header` is an EOF marker. Peer receiving such message should close the associated input/output pipe.
- When `some_command` terminates, **domX**'s `qrexec-agent` sends `MSG_DATA_EXIT_CODE` header to `qrexec-client` followed by the exit code (**int**). `qrexec-agent` then disconnects from the data vchan.
### domY: invoke execution of qubes service qubes.SomeRpc in domX and pass stdin/stdout
- **domY**: `qrexec-client-vm` is invoked as follows:
> `qrexec-client-vm domX qubes.SomeRpc local_program [params]`
- **domY**: `qrexec-client-vm` connects to `qrexec-agent` (via local socket/named pipe).
- **domY**: `qrexec-client-vm` sends `trigger_service_params` data to `qrexec-agent` (without filling the `request_id` field):
~~~
struct trigger_service_params {
char service_name[64];
char target_domain[32];
struct service_params request_id; /* service request id */
};
struct service_params {
char ident[32];
};
~~~
- **domY**: `qrexec-agent` allocates a locally-unique (for this domain) `request_id` (let's say `13`) and fills it in the `trigger_service_params` struct received from `qrexec-client-vm`.
- **domY**: `qrexec-agent` sends `MSG_TRIGGER_SERVICE` header followed by `trigger_service_params` to `qrexec-daemon` in **dom0** via vchan.
- **dom0**: **domY**'s `qrexec-daemon` executes `qrexec-policy`: `qrexec-policy domY_id domY domX qubes.SomeRpc 13`.
- **dom0**: `qrexec-policy` evaluates if the RPC should be allowed or denied. If the action is allowed it returns `0`, if the action is denied it returns `1`.
- **dom0**: **domY**'s `qrexec-daemon` checks the exit code of `qrexec-policy`.
- If `qrexec-policy` returned **not** `0`: **domY**'s `qrexec-daemon` sends `MSG_SERVICE_REFUSED` header followed by `service_params` to **domY**'s `qrexec-agent`. `service_params.ident` is identical to the one received. **domY**'s `qrexec-agent` disconnects its `qrexec-client-vm` and RPC processing is finished.
- If `qrexec-policy` returned `0`, RPC processing continues.
- **dom0**: if `qrexec-policy` allowed the RPC, it executes `qrexec-client -d domX -c 13,domY,domY_id user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable to **domX**.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_HELLO` header followed by `peer_info` to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by `peer_info` to **domX**'s `qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to **domX**'s `qrexec-daemon`
~~~
/* variable size */
struct exec_params {
uint32_t connect_domain; /* target domain id */
uint32_t connect_port; /* target vchan port for i/o exchange */
char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
~~~
In this case, `connect_domain` is set to id of **domY** (from the `-c` parameter) and `connect_port` is set to 0. `cmdline` field contains the RPC to execute, in this case `user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: **domX**'s `qrexec-daemon` replies to `qrexec-client` with `MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline` field. `connect_domain` is set to Qubes ID of **domX** and `connect_port` is set to a vchan port allocated by **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to **domX**'s `qrexec-agent`. `connect_domain` and `connect_port` fields are the same as in the step above. `cmdline` is set to the one received from `qrexec-client`, in this case `user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` disconnects from **domX**'s `qrexec-daemon` after receiving connection details.
- **dom0**: `qrexec-client` connects to **domY**'s `qrexec-daemon` and exchanges MSG\_HELLO as usual.
- **dom0**: `qrexec-client` sends `MSG_SERVICE_CONNECT` header followed by `exec_params` to **domY**'s `qrexec-daemon`. `connect_domain` is set to ID of **domX** (received from **domX**'s `qrexec-daemon`) and `connect_port` is the one received as well. `cmdline` is set to request ID (`13` in this case).
- **dom0**: **domY**'s `qrexec-daemon` sends `MSG_SERVICE_CONNECT` header followed by `exec_params` to **domY**'s `qrexec-agent`. Data fields are unchanged from the step above.
- **domY**: `qrexec-agent` starts a vchan server on the port received in the step above. It acts as a `qrexec-client` in this case because this is a VM-VM connection.
- **domX**: `qrexec-agent` connects to the vchan server of **domY**'s `qrexec-agent` (connection details were received before from **domX**'s `qrexec-daemon`).
- After that, connection follows the flow of the previous example (dom0-VM).