Reorganize files to account for new "External" section

QubesOS/qubes-issues#4693
This commit is contained in:
Andrew David Wong 2019-05-26 19:32:45 -05:00
parent 5cc99a23d1
commit d31c786942
No known key found for this signature in database
GPG key ID: 8CE137352A019A17
203 changed files with 0 additions and 0 deletions


@ -0,0 +1,326 @@
---
layout: doc-full
title: Admin API
permalink: /doc/admin-api/
redirect_from:
- /doc/mgmt/
- /doc/mgmt1/
- /doc/mgmt-architecture/
- /doc/admin-api-architecture/
---
# Qubes OS Admin API
*(This page is the current draft of the proposal. It is not implemented yet.)*
## Goals
The goal of the Admin API system is to provide a way for the user to manage
domains without direct access to dom0.
Foreseen benefits include:
- Ability to remotely manage Qubes OS.
- Possibility to create a multi-user system, where different users are able to use
  different sets of domains, possibly overlapping. This would also require a
  separate GUI domain.
The API would be used by:
- Qubes OS Manager (or any tools that would replace it)
- CLI tools, when run from another VM (and possibly also from dom0)
- remote management tools
- any custom tools
## Threat model
TBD
## Components
![Admin API Architecture][admin-api-architecture]
A central entity in the Qubes Admin API system is the `qubesd` daemon, which
holds information about all domains in the system and mediates all actions (like
starting and stopping a qube) with `libvirtd`. The `qubesd` daemon also manages
the `qubes.xml` file, which stores all persistent state information, and
dispatches events to extensions. Last but not least, `qubesd` is responsible for
querying the RPC policy for the qrexec daemon.
The `qubesd` daemon may be accessed from other domains through a set of qrexec
calls called the "Admin API". This API is the intended
management interface supported by Qubes OS. The API is stable. When called,
the RPC handler performs basic validation and forwards the request to
`qubesd` via a UNIX domain socket. The socket API is private and unstable. It is
documented [FIXME currently it isn't -- woju 20161221] in the developer's
documentation of the source code.
## The calls
The API should be implemented as a set of qrexec calls. This is to make it easy
to set the policy using the existing qrexec policy mechanism.
| call | dest | argument | inside | return | note |
| ------------------------------------- | --------- | --------- | ----------------------------------------- | --------------------------------------------------------- | ---- |
| `admin.vmclass.List` | `dom0` | - | - | `<class>\n` |
| `admin.vm.List` | `dom0\|<vm>` | - | - | `<name> class=<class> state=<state>\n` |
| `admin.vm.Create.<class>` | `dom0` | template | `name=<name> label=<label>` | - |
| `admin.vm.CreateInPool.<class>` | `dom0` | template | `name=<name> label=<label> `<br/>`pool=<pool> pool:<volume>=<pool>` | - | either use `pool=` to put all volumes there, <br/>or `pool:<volume>=` for individual volumes - both forms are not allowed at the same time
| `admin.vm.CreateDisposable` | template | - | - | name | Create new DisposableVM, `template` is any AppVM with `dispvm_allowed` set to True, or `dom0` to use default defined in `default_dispvm` property of calling VM; VM created with this call will be automatically removed after its shutdown; the main difference from `admin.vm.Create.DispVM` is automatic (random) name generation.
| `admin.vm.Remove` | vm | - | - | - |
| `admin.label.List` | `dom0` | - | - | `<property>\n` |
| `admin.label.Create` | `dom0` | label | `0xRRGGBB` | - |
| `admin.label.Get` | `dom0` | label | - | `0xRRGGBB` |
| `admin.label.Index` | `dom0` | label | - | `<label-index>` |
| `admin.label.Remove` | `dom0` | label | - | - |
| `admin.property.List` | `dom0` | - | - | `<property>\n` |
| `admin.property.Get` | `dom0` | property | - | `default={True\|False} `<br/>`type={str\|int\|bool\|vm\|label} <value>` |
| `admin.property.GetDefault` | `dom0` | property | - | `type={str\|int\|bool\|vm\|label} <value>` |
| `admin.property.Help` | `dom0` | property | - | `help` |
| `admin.property.HelpRst` | `dom0` | property | - | `help.rst` |
| `admin.property.Reset` | `dom0` | property | - | - |
| `admin.property.Set` | `dom0` | property | value | - |
| `admin.vm.property.List` | vm | - | - | `<property>\n` |
| `admin.vm.property.Get` | vm | property | - | `default={True\|False} `<br/>`type={str\|int\|bool\|vm\|label} <value>` |
| `admin.vm.property.GetDefault` | vm | property | - | `type={str\|int\|bool\|vm\|label} <value>` |
| `admin.vm.property.Help` | vm | property | - | `help` |
| `admin.vm.property.HelpRst` | vm | property | - | `help.rst` |
| `admin.vm.property.Reset` | vm | property | - | - |
| `admin.vm.property.Set` | vm | property | value | - |
| `admin.vm.feature.List` | vm | - | - | `<feature>\n` |
| `admin.vm.feature.Get` | vm | feature | - | value |
| `admin.vm.feature.CheckWithTemplate` | vm | feature | - | value |
| `admin.vm.feature.CheckWithNetvm` | vm | feature | - | value |
| `admin.vm.feature.CheckWithAdminVM` | vm | feature | - | value |
| `admin.vm.feature.CheckWithTemplateAndAdminVM`| vm | feature | - | value |
| `admin.vm.feature.Remove` | vm | feature | - | - |
| `admin.vm.feature.Set` | vm | feature | value | - |
| `admin.vm.tag.List` | vm | - | - | `<tag>\n` |
| `admin.vm.tag.Get` | vm | tag | - | `0` or `1` | retcode? |
| `admin.vm.tag.Remove` | vm | tag | - | - |
| `admin.vm.tag.Set` | vm | tag | - | - |
| `admin.vm.firewall.Get` | vm | - | - | `<rule>\n` | rules syntax as in [firewall interface](/doc/vm-interface/#firewall-rules-in-4x) with addition of `expire=` and `comment=` options; `comment=` (if present) must be the last option
| `admin.vm.firewall.Set` | vm | - | `<rule>\n` | - | set firewall rules, see `admin.vm.firewall.Get` for syntax
| `admin.vm.firewall.Reload` | vm | - | - | - | force reload firewall without changing any rule
| `admin.vm.device.<class>.Attach` | vm | device | options | - | `device` is in form `<backend-name>+<device-ident>` <br/>optional options given in `key=value` format, separated with spaces; <br/>options can include `persistent=True` to "persistently" attach the device (default is temporary)
| `admin.vm.device.<class>.Detach` | vm | device | - | - | `device` is in form `<backend-name>+<device-ident>`
| `admin.vm.device.<class>.Set.persistent`| vm | device | `True`\|`False` | - | `device` is in form `<backend-name>+<device-ident>`
| `admin.vm.device.<class>.List` | vm | - | - | `<device> <options>\n` | options can include `persistent=True` for "persistently" attached devices (default is temporary)
| `admin.vm.device.<class>.Available` | vm | device-ident | - | `<device-ident> <properties> description=<desc>\n` | optional service argument may be used to get info about a single device, <br/>optional (device class specific) properties are in `key=value` form, <br/>`description` must be the last one and is the only one allowed to contain spaces
| `admin.pool.List` | `dom0` | - | - | `<pool>\n` |
| `admin.pool.ListDrivers` | `dom0` | - | - | `<pool-driver> <property> ...\n` | Properties allowed in `admin.pool.Add`
| `admin.pool.Info` | `dom0` | pool | - | `<property>=<value>\n` |
| `admin.pool.Add` | `dom0` | driver | `<property>=<value>\n` | - |
| `admin.pool.Set.revisions_to_keep` | `dom0` | pool | `<value>` | - |
| `admin.pool.Remove` | `dom0` | pool | - | - |
| `admin.pool.volume.List` | `dom0` | pool | - | volume id |
| `admin.pool.volume.Info` | `dom0` | pool | vid | `<property>=<value>\n` |
| `admin.pool.volume.Set.revisions_to_keep`| `dom0` | pool | `<vid> <value>` | - |
| `admin.pool.volume.ListSnapshots` | `dom0` | pool | vid | `<snapshot>\n` |
| `admin.pool.volume.Snapshot` | `dom0` | pool | vid | snapshot |
| `admin.pool.volume.Revert` | `dom0` | pool | `<vid> <snapshot>` | - |
| `admin.pool.volume.Resize` | `dom0` | pool | `<vid> <size_in_bytes>` | - |
| `admin.pool.volume.Import` | `dom0` | pool | `<vid>\n<raw volume data>` | - |
| `admin.pool.volume.CloneFrom` | `dom0` | pool | vid | token, to be used in `admin.pool.volume.CloneTo` | obtain a token to copy volume `vid` in `pool`;<br/>the token is one time use only, it's invalidated by `admin.pool.volume.CloneTo`, even if the operation fails |
| `admin.pool.volume.CloneTo` | `dom0` | pool | `<vid> <token>` | - | copy volume pointed by a token to volume `vid` in `pool` |
| `admin.vm.volume.List` | vm | - | - | `<volume>\n` | `<volume>` is per-VM volume name (`root`, `private`, etc), `<vid>` is pool-unique volume id
| `admin.vm.volume.Info` | vm | volume | - | `<property>=<value>\n` |
| `admin.vm.volume.Set.revisions_to_keep`| vm | volume | value | - |
| `admin.vm.volume.ListSnapshots` | vm | volume | - | snapshot | duplicate of `admin.pool.volume.`, but with other call params |
| `admin.vm.volume.Snapshot` | vm | volume | - | snapshot | id. |
| `admin.vm.volume.Revert` | vm | volume | snapshot | - | id. |
| `admin.vm.volume.Resize` | vm | volume | size_in_bytes | - | id. |
| `admin.vm.volume.Import` | vm | volume | raw volume data | - | id. |
| `admin.vm.volume.CloneFrom` | vm | volume | - | token, to be used in `admin.vm.volume.CloneTo` | obtain a token to copy `volume` of `vm`;<br/>the token is one time use only, it's invalidated by `admin.vm.volume.CloneTo`, even if the operation fails |
| `admin.vm.volume.CloneTo` | vm | volume | token, obtained with `admin.vm.volume.CloneFrom` | - | copy volume pointed by a token to `volume` of `vm` |
| `admin.vm.Start` | vm | - | - | - |
| `admin.vm.Shutdown` | vm | - | - | - |
| `admin.vm.Pause` | vm | - | - | - |
| `admin.vm.Unpause` | vm | - | - | - |
| `admin.vm.Kill` | vm | - | - | - |
| `admin.backup.Execute` | `dom0` | config id | - | - | config in `/etc/qubes/backup/<id>.conf`, only one backup operation of given `config id` can be running at once |
| `admin.backup.Info` | `dom0` | config id | - | backup info | info what would be included in the backup
| `admin.backup.Cancel` | `dom0` | config id | - | - | cancel running backup operation
| `admin.Events` | `dom0\|vm` | - | - | events |
| `admin.vm.Stats` | `dom0\|vm` | - | - | `vm-stats` events, see below | emit VM statistics (CPU, memory usage) in form of events
Volume properties:
- `pool`
- `vid`
- `size`
- `usage`
- `rw`
- `source`
- `save_on_stop`
- `snap_on_start`
- `revisions_to_keep`
- `is_outdated`
Method `admin.vm.Stats` returns `vm-stats` events every `stats_interval` seconds, for every running VM. Parameters of `vm-stats` events:
- `memory_kb` - memory usage in kB
- `cpu_time` - absolute CPU time (in milliseconds) spent by the VM since its startup, normalized for one CPU
- `cpu_usage` - CPU usage in percent
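
For illustration, here is a minimal sketch of invoking one of the calls above from a qube. It assumes a Linux qube with the qrexec tools installed and a dom0 qrexec policy that allows the call; the helper function, the example qube name and the `+`-style argument passing are illustrative, not a supported client library:

```python
import subprocess

def admin_call(dest, method, arg=None, payload=b""):
    # Illustrative helper: run an Admin API method over qrexec from inside a qube.
    # The argument (if any) is appended to the service name after a '+'.
    service = method if arg is None else f"{method}+{arg}"
    proc = subprocess.run(
        ["qrexec-client-vm", dest, service],
        input=payload, capture_output=True, check=True,
    )
    # The reply still carries the framing described under "Returned messages" below.
    return proc.stdout

# e.g. list all qubes, or read a single property of a (hypothetical) qube "work"
print(admin_call("dom0", "admin.vm.List"))
print(admin_call("work", "admin.vm.property.Get", "netvm"))
```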
## Returned messages
The first byte of a message is the message type, an 8-bit non-zero integer.
Values start at 0x30 (48, `'0'`, the zero digit in ASCII) for readability in hexdump.
The next byte must be 0x00 (a separator).
Alternatively, this can be thought of as a zero-terminated string containing a
single ASCII digit.
### OK (0)
```
30 00 <content>
```
The server will close the connection after delivering a single message.
### EVENT (1)
```
31 00 <subject> 00 <event> 00 ( <key> 00 <value> 00 )* 00
```
Events are returned as a stream of messages in selected API calls. Normally the
server will not close the connection.
A method yielding events will never return an `OK` or `EXCEPTION` message.
When such a method is called, it produces an artificial
`connection-established` event just after the connection is made, to help avoid
race conditions during event handler registration.
### EXCEPTION (2)
```
32 00 <type> 00 ( <traceback> )? 00 <format string> 00 ( <field> 00 )*
```
The server will close the connection.
The traceback may be empty; it can be enabled server-side as part of debug mode.
The delimiting zero byte is always present.
Fields should be substituted into the `%`-style format string, possibly after
client-side translation, to form the final message displayed to the user. The
server does not perform translation itself.
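
A minimal sketch (not a reference implementation) of parsing this framing on the client side; it handles a single message of each type:

```python
def parse_message(data: bytes):
    # One type byte, a 0x00 separator, then the body.
    if len(data) < 2 or data[1:2] != b"\x00":
        raise ValueError("malformed message")
    msg_type, body = data[:1], data[2:]
    if msg_type == b"0":                       # OK
        return ("ok", body)
    if msg_type == b"1":                       # EVENT (one message of the stream)
        parts = body.split(b"\x00")
        subject, event, rest = parts[0], parts[1], parts[2:]
        props = {}
        for key, value in zip(rest[0::2], rest[1::2]):
            if not key:                        # empty key terminates the properties
                break
            props[key.decode()] = value.decode()
        return ("event", subject.decode(), event.decode(), props)
    if msg_type == b"2":                       # EXCEPTION
        exc_type, traceback, fmt, *fields = body.split(b"\x00")
        return ("exception", exc_type.decode(), fmt.decode(),
                [f.decode() for f in fields if f])
    raise ValueError("unknown message type %r" % msg_type)

# parse_message(b"0\x00work class=AppVM state=Running\n")
```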
## Tags
The tags provided can be used to write custom policies. They are not used in
a&nbsp;default Qubes OS installation. However, they are created anyway.
- `created-by-<vm>` &mdash;&nbsp;Created in an extension to qubesd at the
moment of creation of the VM. Cannot be changed via API, which is also
enforced by this extension.
- `managed-by-<vm>` &mdash;&nbsp;Can be used for the same purpose, but it is
not created automatically, nor is it forbidden to set or reset this tag.
## Backup profile
Backup-related calls do not (yet) allow specifying what should be included in
the backup. This needs to be configured separately in dom0, with a backup
profile stored in `/etc/qubes/backup/<profile>.conf`. The file uses YAML syntax
and has the following settings:
- `include` - list of VMs to include; it can also contain tags using the
`$tag:some-tag` syntax or all VMs of a given type using `$type:AppVM`, as known
from qrexec policy
- `exclude` - list of VMs to exclude, after evaluating the `include` setting
- `destination_vm` - VM to which the backup should be sent
- `destination_path` - path to which the backup should be written in
`destination_vm`. This setting is given to the `qubes.Backup` service and
technically it's up to that service how to interpret it. In the current
implementation it is interpreted as a directory where a new file should be
written (with a name based on the current timestamp), or a command the backup
should be piped to
- `compression` - whether the backup should be compressed (default: True). The value can be
either `False`, or `True` for default compression, or a compression command (which needs to
accept a `-d` argument for decompression)
- `passphrase_text` - passphrase used to encrypt and integrity-protect the backup
- `passphrase_vm` - VM which should be asked what backup passphrase should be
used. The query is performed using the `qubes.BackupPassphrase+profile_name`
service, which is expected to output the chosen passphrase to its stdout. Empty
output cancels the backup operation. This service can be used either to ask
the user interactively, or to implement some automated passphrase handling (for
example: generate a passphrase randomly, then encrypt it with a public key and
send it somewhere); see the sketch after the examples below
Not all settings need to be set.
Example backup profile:
```yaml
# Backup only selected VMs
include:
- work
- personal
- vault
- banking
# Store the backup on external disk
destination_vm: sys-usb
destination_path: /media/my-backup-disk
# Use static passphrase
passphrase_text: "My$Very!@Strong23Passphrase"
```
And a slightly more advanced one:
```yaml
# Include all VMs with a few exceptions
include:
- $type:AppVM
- $type:TemplateVM
- $type:StandaloneVM
exclude:
- untrusted
- $tag:do-not-backup
# parallel gzip for faster backup
compression: pigz
# ask 'vault' VM for the backup passphrase
passphrase_vm: vault
# send the (encrypted) backup directly to remote server
destination_vm: sys-net
destination_path: ncftpput -u my-ftp-username -p my-ftp-pass -c my-ftp-server /directory/for/backups
```
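
The second profile above delegates the passphrase decision to the `vault` VM. As an illustration, a `qubes.BackupPassphrase+profile_name` handler in that VM could be as simple as the sketch below (the secret file location is made up for this example); printing nothing cancels the backup:

```python
#!/usr/bin/env python3
# Hypothetical /etc/qubes-rpc/qubes.BackupPassphrase handler in the vault VM
# (must be executable). It prints the stored passphrase to stdout; empty
# output cancels the backup operation.
import sys
from pathlib import Path

SECRET = Path.home() / ".backup-passphrase"   # illustrative location only

passphrase = SECRET.read_text().strip() if SECRET.exists() else ""
sys.stdout.write(passphrase)
```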
## General notes
- there is no provision for `qvm-run`, but the `qubes.VMShell` call already exists
- generally, `*.List` actions return a list of objects and have the "object
identifier" as the first word in each row. Such an action can also be called with an "object
identifier" in the argument to get only a single entry (in the same format).
- closing the qrexec connection normally does _not_ interrupt a running operation; this is important to avoid leaving the system in an inconsistent state
- the actual operation starts only after the caller sends all the parameters (including a payload), signaled by sending an EOF mark; there is no support for interactive protocols, to keep the protocol reasonably simple
## TODO
- something to configure/update policy
- notifications
- how to constrain the events?
- how to pass the parameters? maybe XML, since this is trusted anyway and
parser may be complicated
- how to constrain the possible values for `admin.vm.property.Set` etc, like
"you can change `netvm`, but you have to pick from this set"; this currently
can be done by writing an extension
- a call for executing `*.desktop` file from `/usr/share/applications`, for use
with appmenus without giving access to `qubes.VMShell`; currently this can be
done by writing custom qrexec calls
- maybe some generator for `.desktop` for appmenus, which would wrap calls in
`qrexec-client-vm`
<!-- vim: set ts=4 sts=4 sw=4 et : -->
[admin-api-architecture]: /attachment/wiki/AdminAPI/admin-api-architecture.svg


@ -0,0 +1,50 @@
---
layout: doc
title: Dom0 Secure Updates
permalink: /doc/dom0-secure-updates/
redirect_from:
- /en/doc/dom0-secure-updates/
- /doc/Dom0SecureUpdates/
- /wiki/Dom0SecureUpdates/
---
Qubes Dom0 secure update procedure
==================================
Reasons for Dom0 updates
------------------------
Normally there should be few reasons for updating software in Dom0. This is because there is no networking in Dom0, which means that even if some bugs are discovered, e.g. in the Dom0 Desktop Manager, this really is not a problem for Qubes, because none of the third-party software running in Dom0 is accessible from VMs or the network in any way. Some exceptions to the above include: the Qubes GUI daemon, the Xen store daemon, and the disk back-ends (we plan to move the disk backends to an untrusted domain in Qubes 2.0). Of course we believe this software is reasonably secure and we hope it will not need patching.
However, we anticipate some other situations when updating Dom0 software might be required:
- Updating drivers/libs for new hardware support
- Correcting non-security related bugs (e.g. new buttons for qubes manager)
- Adding new features (e.g. GUI backup tool)
Problems with traditional network-based update mechanisms
---------------------------------------------------------
Dom0 is the most trusted domain on Qubes OS, and for this reason we decided to design Qubes in such a way that Dom0 is not connected to any network. In fact only certain domains can be connected to a network via so-called network domains. There can also be more than one network domain, e.g. in case the user is connected to more than one physically or logically separated networks.
Secure update mechanism we use in Qubes (starting from Beta 2)
-------------------------------------------------------------
Keeping Dom0 not connected to any network makes it hard, however, to provide updates for software in Dom0. For this reason we have come up with the following mechanism for Dom0 updates, which minimizes the amount of untrusted input processed by Dom0 software:
The update process is initiated by [qubes-dom0-update script](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes-dom0-update), running in Dom0.
Updates (\*.rpm files) are checked and downloaded by UpdateVM, which by default is the same as the firewall VM, but can be configured to be any other, network-connected VM. This is done by the [qubes-download-dom0-updates.sh script](https://github.com/QubesOS/qubes-core-agent-linux/blob/release2/misc/qubes-download-dom0-updates.sh) (this script is executed using qrexec by the previously mentioned qubes-dom0-update). Note that we assume that this script might get compromised and fetch maliciously compromised downloads -- this is not a problem, as Dom0 verifies digital signatures on the updates later. The downloaded rpm files are placed in the `/var/lib/qubes/dom0-updates` directory on the UpdateVM filesystem (again, they might get compromised while being kept there; still, this isn't a problem). This directory is passed to yum using the `--installroot=` option.
Once updates are downloaded, the update script running in UpdateVM requests the RPC service [qubes.ReceiveUpdates](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes.ReceiveUpdates) to be executed in Dom0. This service is implemented by the [qubes-receive-updates script](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes-receive-updates) running in Dom0. Dom0's qubes-dom0-update script (which originally initiated the whole update process) waits until qubes-receive-updates finishes.
The qubes-receive-updates script processes the untrusted input from UpdateVM: it first extracts the received \*.rpm files (which are sent over the qrexec data connection) and then verifies the digital signature on each file. The qubes-receive-updates script is a security-critical component of the Dom0 update process (as are [qfile-dom0-unpacker.c](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qfile-dom0-unpacker.c) and the rpm utility, both used by qubes-receive-updates for processing the untrusted input).
Once qubes-receive-updates has finished unpacking and verifying the updates, the updates are placed in the ``qubes-receive-updates`` directory in the Dom0 filesystem. Those updates are now trusted. Dom0 is configured (see /etc/yum.conf in Dom0) to use this directory as the default (and only) [yum repository](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes-cached.repo).
Finally, qubes-dom0-update runs ``yum update``, which fetches the rpms from the qubes-cached repo and installs them as usual.
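
As a rough sketch of the verification idea (this is not the actual qubes-receive-updates code, and the directory path is made up for illustration), each received package can be checked with the rpm utility before it is admitted to the local repository:

```python
import glob
import subprocess
import sys

INCOMING = "/var/lib/qubes/updates/rpm"   # illustrative path, not the real one

for pkg in glob.glob(INCOMING + "/*.rpm"):
    # 'rpm --checksig' verifies digests and signatures; the real script
    # additionally insists that a trusted GPG signature is actually present.
    result = subprocess.run(["rpm", "--checksig", pkg])
    if result.returncode != 0:
        sys.exit(f"rejecting package that failed verification: {pkg}")
```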
Security benefit of our update mechanism
----------------------------------------
The benefit of the above scheme is that one doesn't need to trust the TCP/IP stack, the HTTP stack, or the wget implementation in order to safely deliver and install updates in Dom0. One only needs to trust a few hundred lines of code as discussed above, as well as the rpm utility for properly checking the digital signature (but this is required in any update scheme).


@ -0,0 +1,58 @@
---
layout: doc
title: DVMimpl
permalink: /doc/dvm-impl/
redirect_from:
- /en/doc/dvm-impl/
- /doc/DVMimpl/
- /wiki/DVMimpl/
---
DisposableVM implementation in Qubes
====================================
**Note: The content below applies to Qubes R3.2.**
DisposableVM image preparation
------------------------------
A DisposableVM is not started like other VMs, by executing the equivalent of `xl create` - that would be too slow. Instead, DisposableVMs are started by restoring from a savefile.
Preparing a savefile is done by the `/usr/lib/qubes/qubes_prepare_saved_domain.sh` script. It takes two mandatory arguments, the AppVM name (APPVM) and the savefile name, and an optional path to a "prerun" script. The script executes the following steps:
1. APPVM is started by `qvm-start`
2. the xenstore key `/local/domain/appvm_domain_id/qubes_save_request` is created
3. if a prerun script was specified, it is copied to the `qubes_save_script` xenstore key
4. wait for the `qubes_used_mem` key to appear
5. (in APPVM) APPVM boots normally, up to the point in the `/etc/init.d/qubes_core` script where the presence of the `qubes_save_request` key is tested. If it exists, then
    1. (in APPVM) if present, the prerun script is retrieved from the respective xenstore key and executed. This preloads the filesystem cache with useful applications, so that they will start faster.
    2. (in APPVM) the amount of used memory is stored in the `qubes_used_mem` xenstore key
    3. (in APPVM) busy-wait for the `qubes_restore_complete` xenstore key to appear
6. when the `qubes_used_mem` key appears, the domain memory is reduced to this amount, to make the savefile smaller
7. the APPVM private image is detached
8. the domain is saved via `xl save`
9. the COW file volatile.img (COW for the root fs and swap) is packed into the `saved_cows.tar` archive
The `qubes_prepare_saved_domain.sh` script is low-level. It is usually called by the `qvm-create-default-dvm` script, which takes care of creating a special AppVM (named template\_name-dvm) to be passed to `qubes_prepare_saved_domain.sh`, as well as copying the savefile to /dev/shm (the latter action is not done if the `/var/lib/qubes/dvmdata/dont_use_shm` file exists).
Restoring a DisposableVM from the savefile
------------------------------------------
Normally, a DisposableVM is created when a Qubes RPC request with target *\$dispvm* is received. Then, as part of the RPC connection setup, the `qfile-daemon-dvm` program is executed; it executes the `/usr/lib/qubes/qubes_restore` program. It is crucial that this program executes quickly, to make the DisposableVM creation overhead bearable for the user. Its main steps are:
1. modify the savefile so that the VM name, VM UUID, MAC address and IP address are unique
2. restore the COW files from the `saved_cows.tar`
3. create the `/var/run/qubes/fast_block_attach` file, whose presence tells the `/etc/xen/scripts/block` script to bypass some redundant checks and execute as fast as possible.
4. execute `xl restore` in order to restore a domain.
5. create the same xenstore keys as normally created when AppVM boots (e.g. `qubes_ip`)
6. create the `qubes_restore_complete` xenstore key. This allows the boot process in DisposableVM to continue.
The actual passing of files between AppVM and a DisposableVM is implemented via qubes rpc.
Validating the DisposableVM savefile
------------------------------------
The DisposableVM savefile contains references to the template rootfs and to the COW files. The COW files are restored before each DisposableVM start, so they cannot change. On the other hand, if the TemplateVM is started, the template rootfs will change, and it may no longer be coherent with the COW files.
Therefore, a check that the template rootfs modification time is older than the DisposableVM savefile modification time is required. It is done in the `qfilexchgd` daemon, just before restoring a DisposableVM. If necessary, an attempt is made to recreate the DisposableVM savefile, using the last template used (or the default template, if run for the first time) and the default prerun script, residing at `/var/lib/qubes/vm-templates/templatename/dispvm_prerun.sh`. Unfortunately, the prerun script takes a lot of time to execute - therefore, after a template rootfs modification, the next DisposableVM creation can take about 2.5 minutes longer.
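
A simplified illustration of that staleness check (the paths are examples only, not taken verbatim from qfilexchgd):

```python
import os

def savefile_is_stale(template_rootfs: str, dvm_savefile: str) -> bool:
    # The savefile references the template rootfs; if the rootfs was modified
    # after the savefile was created, the savefile must be regenerated.
    return os.path.getmtime(template_rootfs) > os.path.getmtime(dvm_savefile)

# Example (made-up paths):
# savefile_is_stale("/var/lib/qubes/vm-templates/fedora/root.img",
#                   "/var/lib/qubes/appvms/fedora-dvm/dvm-savefile")
```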


@ -0,0 +1,30 @@
---
layout: doc
title: Qfilecopy
permalink: /doc/qfilecopy/
redirect_from:
- /en/doc/qfilecopy/
- /doc/Qfilecopy/
- /wiki/Qfilecopy/
---
InterVM file copy design
========================
There are two cases when we need a mechanism to copy files between VMs:
- "regular" file copy - when user instructs file manager to copy a given files/directories to a different VM
- DispVM copy - user selects "open in DispVM" on a file; this file must be copied to a DisposableVM, edited by user, and possibly a modified file copied back from DispVM to VM.
Prior to Qubes Beta1, for both cases, a block device (backed by a file in dom0 with a vfat filesystem on it) was attached to VM, file(s) copied there, and then the device was detached and attached to target VM. In the DispVM case, if a edited file has been modified, another block device is passed to requester VM in order to update the source file.
This has the following disadvantages:
- performance - dom0 has to prepare and attach/detach block devices, which is slow because of hotplug script involvement.
- security - the VM kernel parses the partition table and filesystem metadata from the block device; these are controlled by the (potentially untrusted) sender VM.
In Qubes Beta1, we reimplemented inter-VM file copy using qrexec, which addresses the above-mentioned disadvantages. In Qubes Beta2, an even more generic solution (Qubes RPC) is used. See the developer docs on qrexec and Qubes RPC. In a nutshell, the file sender and the file receiver just read/write from stdin/stdout, and the Qubes RPC layer passes the data properly - so, no block devices are used.
The RPC action for regular file copy is *qubes.Filecopy*; the RPC client is named *qfile-agent* and the RPC server is named *qfile-unpacker*. For DispVM copy, the RPC action is *qubes.OpenInVM*, the RPC client is named *qopen-in-vm*, and the RPC server is named *vm-file-editor*. Note that the qubes.OpenInVM action can be performed on a normal AppVM, too.
Being an RPC server, *qfile-unpacker* must be coded securely, as it processes a potentially untrusted data format. In particular, we do not want to use external tar or cpio and be prone to all the vulnerabilities in them; we want a simplified, small utility that handles only the directory/file/symlink file types, permissions, and mtime/atime, and assumes user/user ownership. In the current implementation, the code that actually parses the data from srcVM has about 100 lines of code and executes chrooted to the destination directory. The latter is hardcoded to `~user/QubesIncoming/srcVM`; because of the chroot, there is no possibility of altering files outside of this directory.
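
As a toy illustration of that confinement pattern (the real *qfile-unpacker* is a small C program; the function below is only a sketch of the idea), the receiving side can chroot into the destination directory and drop privileges before it parses any untrusted metadata:

```python
import os
import pwd

def confine_to(destdir: str, username: str = "user") -> None:
    # Look up the unprivileged account first, while /etc/passwd is still visible.
    info = pwd.getpwnam(username)
    os.chroot(destdir)       # requires root; every later path resolves inside destdir
    os.chdir("/")
    os.setgid(info.pw_gid)   # drop group privileges first...
    os.setuid(info.pw_uid)   # ...then user privileges, before touching untrusted data

# confine_to("/home/user/QubesIncoming/srcVM")
# ...then parse the incoming directory/file/symlink records with relative paths.
```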


@ -0,0 +1,57 @@
---
layout: doc
title: Qfileexchgd
permalink: /doc/qfileexchgd/
redirect_from:
- /en/doc/qfileexchgd/
- /doc/Qfileexchgd/
- /wiki/Qfileexchgd/
---
**This mechanism is obsolete as of Qubes Beta 1!**
==================================================
Please see this [page](/doc/qfilecopy/) instead.
qfilexchgd, the Qubes file exchange daemon
==========================================
Overview
--------
*qfilexchgd* is a dom0 daemon responsible for managing exchange of block devices ("virtual pendrives") between VMs. It is used for
- copying files between AppVMs
- copying a single file between an AppVM and a DisposableVM
*qfilexchgd* is started after the first *qubes\_guid* has been started, so that it has access to the X display in dom0 to present dialog messages.
*qfilexchgd* is event driven. The sources of events are:
- trigger of xenstore watch for the changes in `/local/domain` xenstore hierarchy - to detect start/stop of VMs, and maintain vmname-\>vm\_xid dictionary
- trigger of xenstore watch for a change in `/local/domain/domid/device/qpen` key - VMs write to this key to request service from *qfilexchgd*
Copying files between AppVMs
----------------------------
1. An AppVM1 user runs the *qvm-copy-to-vm* script (accessible from the Dolphin file manager by right-clicking on file(s) and choosing the "Actions-\>Send to VM" menu). It calls */usr/lib/qubes/qubes\_penctl new*, which writes a "new" request to its `device/qpen` xenstore key. *qfilexchgd* creates a new 1G file, makes a vfat fs on it, and does a block-attach so that this file is attached as `/dev/xvdg` in AppVM1.
2. AppVM1 mounts `/dev/xvdg` on `/mnt/outgoing` and copies the requested files there, then unmounts it.
3. AppVM1 writes a "send DestVM" request to its `device/qpen` xenstore key (by calling */usr/lib/qubes/qubes\_penctl send DestVM*). After getting confirmation via a dialog box displayed in dom0, *qfilexchgd* detaches `/dev/xvdg` from AppVM1 and attaches it as `/dev/xvdh` to DestVM.
4. In DestVM, the udev script for `/dev/xvdh`, named *qubes\_add\_pendrive\_script* (see `/etc/udev/rules.d/qubes.rules`), mounts `/dev/xvdh` on `/mnt/incoming` and then waits for `/mnt/incoming` to become unmounted. A file manager running in DestVM shows a new volume, and the user in DestVM may copy files from it. When the user in DestVM is done, they unmount `/mnt/incoming`. *qubes\_add\_pendrive\_script* then tells *qfilexchgd* to detach `/dev/xvdh` and terminates.
Copying a single file between AppVM and a DisposableVM
------------------------------------------------------
In order to minimize the attack surface presented by the necessity to process virtual pendrive metadata sent by a (potentially compromised and malicious) DisposableVM, the AppVM\<-\>DisposableVM file exchange protocol does not use any filesystem.
1. User in AppVM1 runs *qvm-open-in-dvm* (accessible from Dolphin file manager by right click on a "file-\>Actions-\>Open in DisposableVM" menu). *qvm-open-in-dvm*
1. gets a new `/dev/xvdg` (just as described in previous paragraph)
2. computes a new unique transaction seq SEQ by incrementing `/home/user/.dvm/seq` contents,
3. writes the requested file name (say, /home/user/document.txt) to `/home/user/.dvm/SEQ`
4. creates a dvm\_header (see core.git/appvm/dvm.h) on `/dev/xvdg`, followed by file contents
5. writes the "send disposable SEQ" command to its `device/qpen` xenstore key.
2. *qfilexchgd* sees that "send" argument=="disposable", and creates a new DisposableVM by calling */usr/lib/qubes/qubes\_restore*. It adds the new DisposableVM to qubesDB via qvm\_collection.add\_new\_disposablevm. Then it attaches the virtual pendrive (previously attached as `/dev/xvdg` at AppVM1) as `/dev/xvdh` in DisposableVM.
3. In the DisposableVM, *qubes\_add\_pendrive\_script* sees a non-zero `qubes_transaction_seq` key in xenstore and, instead of processing the virtual pendrive as in the case of a normal copy, treats it as a DVM transaction (a request, because we run in a DisposableVM). It retrieves the body of the file passed in `/dev/xvdh`, copies it to /tmp, and runs the *mime-open* utility to open the appropriate application to edit it. When *mime-open* returns, if the file was modified, it is sent back to AppVM1 (by writing "send AppVM1 SEQ" to the `device/qpen` xenstore key). Then the DisposableVM destroys itself.
4. In AppVM1, a new `/dev/xvdh` appears (because the DisposableVM has sent it). *qubes\_add\_pendrive\_script* sees the non-zero `qubes_transaction_seq` key and treats it as a DVM transaction (a response, because we run in an AppVM, not a DisposableVM). It retrieves the filename from `/home/user/.dvm/SEQ` and copies the data from `/dev/xvdh` to it.


@ -0,0 +1,77 @@
---
layout: doc
title: Qmemman
permalink: /doc/qmemman/
redirect_from:
- /en/doc/qmemman/
- /doc/Qmemman/
- /wiki/Qmemman/
---
qmemman, Qubes memory manager
=============================
Rationale
---------
Traditionally, Xen VMs are assigned a fixed amount of memory. This is not the optimal solution, as some VMs may require more memory than was assigned initially, while others underutilize memory. Thus, there is a need for a solution capable of shifting free memory from one VM to another.
The [tmem](https://oss.oracle.com/projects/tmem/) project provides a "pseudo-RAM" that is assigned on a per-need basis. However, this solution has some disadvantages:
- It does not provide real RAM, just an interface to copy memory to/from fast, RAM-based storage. It is perfect for swap, good for file cache, but not ideal for many tasks.
- It is deeply integrated with the Linux kernel. When Qubes supports Windows guests natively, we would have to port *tmem* to Windows, which may be challenging.
Therefore, another solution is used in Qubes: the *qmemman* dom0 daemon. All VMs report their memory usage (via xenstore) to *qmemman*, and it makes decisions on whether to balance memory across domains. The actual mechanism to add/remove memory from a domain (*xc.domain\_set\_target\_mem*) is already supported by both PV Linux guests and Windows guests (the latter via PV drivers).
Similarly, when there is a need for Xen free memory (for instance, in order to create a new VM), traditionally the memory is obtained from dom0 only. When *qmemman* is running, it offers an interface to obtain memory from all domains.
To sum up, here are the *qmemman* pros and cons. Pros:
- provides automatic balancing of memory across participating PV and HVM domains, based on their memory demand
- works well in practice, with less than 1% CPU consumption in the idle case
- simple, concise implementation
Cons:
- the algorithm to calculate the memory requirement for a domain is necessarily simple, and may not closely reflect reality
- *qmemman* is notified by a VM about memory usage changes no more often than 10 times per second (to limit CPU overhead in the VM). Thus, there can be up to a 0.1s delay before qmemman starts to react to new memory requirements
- it takes more time to obtain free Xen memory, as all participating domains need to be instructed to yield memory
Interface
---------
*qmemman* listens for the following events:
- writes to the `/local/domain/domid/memory/meminfo` xenstore keys by the *meminfo-writer* process in each VM. The content of this key is taken from the VM's `/proc/meminfo` pseudofile; *meminfo-writer* just strips some unused lines from it. Note that *meminfo-writer* writes its xenstore key only if the VM memory usage has changed significantly enough since the last update (by default 30MB), to prevent flooding with almost identical data
- commands issued over Unix socket `/var/run/qubes/qmemman.sock`. Currently, the only command recognized is to free the specified amount of memory. The QMemmanClient class implements the protocol.
- if the `/var/run/qubes/do-not-membalance` file exists, *qmemman* suspends memory balancing. It is primarily used when allocating memory for a to-be-created domain, to prevent the balancing algorithm from using up the free Xen memory before the domain creation is completed.
Algorithms basics
-----------------
The core VM property is `prefmem`. It denotes the amount of memory that should be enough for a domain to run efficiently in the near future. The *qmemman* algorithms will never shrink domain memory below `prefmem`. Currently, `prefmem` is simply 130% of the current memory usage in a domain (without buffers and cache, but including swap). Naturally, `prefmem` is calculated by *qmemman* based on the information passed by *meminfo-writer*.
Whenever *meminfo-writer* running in domain A provides new data on memory usage to *qmemman*, the `prefmem` value for A is updated and the following balance algorithm (*qmemman\_algo.balance*) is triggered. Its output is the list of (domain\_id, new\_memory\_target\_to\_be\_set) pairs:
1. TOTAL\_PREFMEM = sum of `prefmem` of all participating domains
2. TOTAL\_MEMORY = sum of all memory assigned to participating domains plus Xen free memory
3. if TOTAL\_MEMORY \> TOTAL\_PREFMEM, then redistribute TOTAL\_MEMORY across all domains proportionally to their `prefmem`
4. if TOTAL\_MEMORY \< TOTAL\_PREFMEM, then
1. for all domains whose `prefmem` is less than actual memory, shrink them to their `prefmem`
2. redistribute memory reclaimed in the previous step between the rest of domains, proportionally to their `prefmem`
In order to avoid too frequent memory redistribution, the balance is actually executed only if one of the conditions below holds:
- the sum of memory size changes for all domains is more than MIN\_TOTAL\_MEMORY\_TRANSFER (150MB)
- one of the domains is below its `prefmem`, and more than MIN\_MEM\_CHANGE\_WHEN\_UNDER\_PREF (15MB) would be added to it
Additionally, the balance algorithm is tuned so that XEN\_FREE\_MEM\_LEFT (50MB) is always left as Xen free memory, to make coherent memory allocations in driver domains work.
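A simplified sketch of the balance computation just described (ignoring the thresholds and the XEN\_FREE\_MEM\_LEFT reserve; this is illustrative, not the actual *qmemman\_algo* code):

```python
def balance(current: dict, prefmem: dict, xen_free: int) -> dict:
    """current/prefmem map domain id -> MB; returns domain id -> new target (MB)."""
    total_memory = sum(current.values()) + xen_free
    total_prefmem = sum(prefmem.values())
    if total_memory >= total_prefmem:
        # Enough for everyone: share all memory proportionally to prefmem.
        return {d: total_memory * prefmem[d] // total_prefmem for d in prefmem}
    # Shortage: shrink over-provisioned domains down to their prefmem...
    donors = [d for d in current if current[d] > prefmem[d]]
    reclaimed = sum(current[d] - prefmem[d] for d in donors)
    targets = {d: prefmem[d] for d in donors}
    # ...and hand the reclaimed memory to the rest, proportionally to prefmem.
    rest = [d for d in current if d not in donors]
    rest_prefmem = sum(prefmem[d] for d in rest) or 1
    for d in rest:
        targets[d] = current[d] + reclaimed * prefmem[d] // rest_prefmem
    return targets
```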
Whenever *qmemman* is asked to return X megabytes of memory to the Xen free pool, the following algorithm (*qmemman\_algo.balloon*) is executed:
1. find all domains ("donors") whose actual memory is greater than their `prefmem`
2. calculate how much memory can be reclaimed by shrinking donors to their `prefmem`. If it is less than X, return an error.
3. shrink donors, proportionally to their `prefmem`, so that X MB should become free
4. wait BALOON\_DELAY (0.1s)
5. if some domain has not given back any memory, remove it from the donors list and go to step 2, unless we have already done MAX\_TRIES (20) iterations (then return an error)
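And a correspondingly simplified sketch of the ballooning loop (again illustrative only; `set_target` and `get_actual` stand in for the hypervisor interface, and the bookkeeping is deliberately naive):

```python
import time

BALOON_DELAY = 0.1   # seconds (constant name as described above)
MAX_TRIES = 20

def balloon(request_mb, current, prefmem, set_target, get_actual):
    # current/prefmem: domain -> MB
    donors = [d for d in current if current[d] > prefmem[d]]            # step 1
    for _ in range(MAX_TRIES):
        reclaimable = sum(current[d] - prefmem[d] for d in donors)      # step 2
        if reclaimable < request_mb:
            raise RuntimeError("not enough reclaimable memory")
        total_prefmem = sum(prefmem[d] for d in donors) or 1            # step 3
        for d in donors:
            set_target(d, current[d] - request_mb * prefmem[d] // total_prefmem)
        time.sleep(BALOON_DELAY)                                        # step 4
        freed = {d: current[d] - get_actual(d) for d in donors}         # step 5
        if sum(freed.values()) >= request_mb:
            return
        donors = [d for d in donors if freed[d] > 0]
    raise RuntimeError("gave up after MAX_TRIES iterations")
```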


@ -0,0 +1,329 @@
---
layout: doc
title: Qrexec2
permalink: /doc/qrexec2/
redirect_from:
- /doc/qrexec2-implementation/
- /en/doc/qrexec2-implementation/
- /doc/Qrexec2Implementation/
- /wiki/Qrexec2Implementation/
---
# Command execution in VMs #
(*This page is about qrexec v2. For qrexec v3, see
[here](/doc/qrexec3/).*)
Qubes **qrexec** is a framework for implementing inter-VM (incl. Dom0-VM)
services. It offers a mechanism to start programs in VMs, redirect their
stdin/stdout, and a policy framework to control this all.
## Qrexec basics ##
During each domain creation a process named `qrexec-daemon` is started in
dom0, and a process named `qrexec-agent` is started in the VM. They are
connected over a `vchan` channel.
Typically, the first thing that a `qrexec-client` instance does is to send
a request to `qrexec-agent` to start a process in the VM. From then on,
the stdin/stdout/stderr from this remote process will be passed to the
`qrexec-client` process.
E.g., to start a primitive shell in a VM type the following in Dom0 console:
[user@dom0 ~]$ /usr/lib/qubes/qrexec-client -d <vm name> user:bash
The string before the first colon specifies what user to run the command as.
Adding `-e` on the `qrexec-client` command line results in mere command
execution (no data passing), and `qrexec-client` exits immediately after
sending the execution request.
There is also the `-l <local program>` flag, which directs `qrexec-client`
to pass stdin/stdout of the remote program not to its stdin/stdout, but to
the (spawned for this purpose) `<local program>`.
The `qvm-run` command is heavily based on `qrexec-client`. It also takes care
of additional activities (e.g., starting the domain, if it is not up yet, and
starting the GUI daemon). Thus, it is usually more convenient to use `qvm-run`.
There can be an almost arbitrary number of `qrexec-client` processes for a domain
(i.e., `qrexec-client` processes connected to the same `qrexec-daemon`);
their data is multiplexed independently.
There is a similar command line utility available inside Linux AppVMs (note
the `-vm` suffix): `qrexec-client-vm` that will be described in subsequent
sections.
## Qubes RPC services ##
Apart from simple Dom0-\>VM command executions, as discussed above, it is
also useful to have more advanced infrastructure for controlled inter-VM
RPC/services. This might be used for simple things like inter-VM file
copy operations, as well as more complex tasks like starting a DispVM,
and requesting it to do certain operations on the handed file(s).
Instead of implementing complex RPC-like mechanisms for inter-VM communication,
Qubes takes a much simpler and pragmatic approach and aims to only provide
simple *pipes* between the VMs, plus ability to request *pre-defined* programs
(servers) to be started on the other end of such pipes, and a centralized
policy (enforced by the `qrexec-policy` process running in dom0) which says
which VMs can request what services from what VMs.
Thanks to the framework and automatic stdin/stdout redirection, RPC programs
are very simple; both the client and server just use their stdin/stdout to pass
data. The framework does all the inner work to connect these file descriptors
to each other via `qrexec-daemon` and `qrexec-agent`. Additionally, DispVMs
are tightly integrated; RPC to a DispVM is a simple matter of using a magic
`$dispvm` keyword as the target VM name.
All services in Qubes are identified by a single string, which by convention
takes a form of `qubes.ServiceName`. Each VM can provide handlers for each of
the known services by providing a file in `/etc/qubes-rpc/` directory with
the same name as the service it is supposed to handle. This file will then
be executed by the qrexec service, if the dom0 policy allowed the service to
be requested (see below). Typically, the files in `/etc/qubes-rpc/` contain
just one line, which is a path to the specific binary that acts as a server
for the incoming request, however they might also be the actual executable
themselves. Qrexec framework is careful about connecting the stdin/stdout
of the server process with the corresponding stdin/stdout of the requesting
process in the requesting VM (see example Hello World service described below).
## Qubes RPC administration ##
Besides each VM needing to provide explicit programs to serve each supported
service, the inter-VM service RPC is also governed by a central policy in Dom0.
In dom0, there is a bunch of files in `/etc/qubes-rpc/policy/` directory,
whose names describe the available RPC actions; their content is the RPC
access policy database. Some examples of the default services in Qubes are:
qubes.Filecopy
qubes.OpenInVM
qubes.ReceiveUpdates
qubes.SyncAppMenus
qubes.VMShell
qubes.ClipboardPaste
qubes.Gpg
qubes.NotifyUpdates
qubes.PdfConvert
These files contain lines with the following format:
srcvm destvm (allow|deny|ask)[,user=user_to_run_as][,target=VM_to_redirect_to]
You can specify `srcvm` and `destvm` by name, or by one of `$anyvm`,
`$dispvm`, `dom0` reserved keywords (note string `dom0` does not match the
`$anyvm` pattern; all other names do). Only `$anyvm` keyword makes sense
in the `srcvm` field (service calls from dom0 are currently always allowed,
`$dispvm` means "new VM created for this particular request" - so it is never
a source of request). Currently, there is no way to specify source VM by type,
but this is planned for Qubes R3.
Whenever a RPC request for service named "XYZ" is received, the first line
in `/etc/qubes-rpc/policy/XYZ` that matches the actual `srcvm`/`destvm` is
consulted to determine whether to allow RPC, what user account the program
should run in target VM under, and what VM to redirect the execution to. If
the policy file does not exist, the user is prompted to create one *manually*;
if there is still no policy file after prompting, the action is denied.
On the target VM, the `/etc/qubes-rpc/XYZ` file must exist, containing the file
name of the program that will be invoked.
### Requesting VM-VM (and VM-Dom0) services execution ###
In a src VM, one should invoke the qrexec client via the following command:
/usr/lib/qubes/qrexec-client-vm <target vm name> <service name> <local program path> [local program arguments]
Note that only stdin/stdout is passed between the RPC server and client --
notably, no command-line arguments are passed.
The source VM name can be accessed in the server process via
`QREXEC_REMOTE_DOMAIN` environment variable. (Note the source VM has *no*
control over the name provided in this variable--the name of the VM is
provided by dom0, and so is trusted.)
By default, stderr of client and server is logged to respective
`/var/log/qubes/qrexec.XID` files, in each of the VM.
Be very careful when coding and adding a new RPC service! Any vulnerability
in a RPC server can be fatal to security of the target VM!
If requesting VM-VM (and VM-Dom0) services execution *without cmdline helper*,
connect directly to `/var/run/qubes/qrexec-agent-fdpass` socket as described
[below](#all-the-pieces-together-at-work).
### Revoking "Yes to All" authorization ###
Qubes RPC policy supports an "ask" action, that will prompt the user whether
a given RPC call should be allowed. It is set as default for services such
as inter-VM file copy. A prompt window launches in dom0 that gives the user
the option to click "Yes to All", which allows the action and adds a new entry
to the policy file that will unconditionally allow further calls for the given
(service, srcVM, dstVM) tuple.
In order to remove such authorization, issue this command from a Dom0 terminal
(example below for `qubes.Filecopy` service):
sudo nano /etc/qubes-rpc/policy/qubes.Filecopy
and then remove any line(s) ending in "allow" (before the first `##` comment)
which are the "Yes to All" results.
A user might also want to set their own policies in this section. This may
mostly serve to prevent the user from mistakenly copying files or text from
a trusted to untrusted domain, or vice-versa.
### Qubes RPC "Hello World" service ###
We will show the necessary files to create a simple RPC call that adds two
integers on the target VM and returns back the result to the invoking VM.
* Client code on source VM (`/usr/bin/our_test_add_client`)
#!/bin/sh
echo $1 $2 # pass data to rpc server
exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other rpc endpoint
* Server code on target VM (`/usr/bin/our_test_add_server`)
#!/bin/sh
read arg1 arg2 # read from stdin, which is received from the rpc client
echo $(($arg1+$arg2)) # print to stdout - so, pass to the rpc client
* Policy file in dom0 (`/etc/qubes-rpc/policy/test.Add`)
$anyvm $anyvm ask
* Server path definition on target VM (`/etc/qubes-rpc/test.Add`)
/usr/bin/our_test_add_server
* To test this service, run the following in the source VM:
/usr/lib/qubes/qrexec-client-vm <target VM> test.Add /usr/bin/our_test_add_client 1 2
and we should get "3" as answer, provided dom0 policy allows the call to pass
through, which would happen after we click "Yes" in the popup that should
appear after the invocation of this command. If we changed the policy from
"ask" to "allow", then no popup should be presented, and the call will always
be allowed.
**Note:** For a real world example of writing a qrexec service, see this
[blog post](https://blog.invisiblethings.org/2013/02/21/converting-untrusted-pdfs-into-trusted.html).
### More high-level RPCs? ###
As previously noted, Qubes aims to provide mechanisms that are very simple
and thus with very small attack surface. This is the reason why the inter-VM
RPC framework is very primitive and doesn't include any serialization or
other function arguments passing, etc. We should remember, however, that
users/app developers are always free to run more high-level RPC protocols on
top of qrexec. Care should be taken, however, to consider potential attack
surfaces that are exposed to untrusted or less trusted VMs in that case.
# Qubes RPC internals #
(*This is about the implementation of qrexec v2. For the implementation of
qrexec v3, see [here](/doc/qrexec3/#qubes-rpc-internals). Note that the user
API in v3 is backward compatible: qrexec apps written for Qubes R2 should
run without modification on Qubes R3.*)
## Dom0 tools implementation ##
Players:
* `/usr/lib/qubes/qrexec-daemon`: started by mgmt stack (qubes.py) when a
VM is started.
* `/usr/lib/qubes/qrexec-policy`: internal program used to evaluate the
  policy file and make the second half of the connection.
* `/usr/lib/qubes/qrexec-client`: raw command line tool that talks to the
daemon via unix socket (`/var/run/qubes/qrexec.XID`)
**Note:** None of the above tools are designed to be used by users.
## Linux VMs implementation ##
Players:
* `/usr/lib/qubes/qrexec-agent`: started by VM bootup scripts, a daemon.
* `/usr/lib/qubes/qubes-rpc-multiplexer`: executes the actual service program,
as specified in VM's `/etc/qubes-rpc/qubes.XYZ`.
* `/usr/lib/qubes/qrexec-client-vm`: raw command line tool that talks to
the agent.
**Note:** None of the above tools are designed to be used by
users. `qrexec-client-vm` is designed to be wrapped up by Qubes apps.
## Windows VMs implementation ##
`%QUBES_DIR%` is the installation path (`c:\Program Files\Invisible Things
Lab\Qubes OS Windows Tools` by default).
* `%QUBES_DIR%\bin\qrexec-agent.exe`: runs as a system service. Responsible
both for raw command execution and interpreting RPC service requests.
* `%QUBES_DIR%\qubes-rpc`: directory with `qubes.XYZ` files that contain
commands for executing RPC services. Binaries for the services are contained
in `%QUBES_DIR%\qubes-rpc-services`.
* `%QUBES_DIR%\bin\qrexec-client-vm`: raw command line tool that talks to
the agent.
**Note:** None of the above tools are designed to be used by
users. `qrexec-client-vm` is designed to be wrapped up by Qubes apps.
## All the pieces together at work ##
**Note:** This section is not needed to use qrexec for writing Qubes
apps. Also note the [qrexec framework implemention in Qubes R3](/doc/qrexec3/)
significantly differs from what is described in this section.
The VM-VM channels in Qubes R2 are made via "gluing" two VM-Dom0 and Dom0-VM
vchan connections:
![qrexec2-internals.png](/attachment/wiki/Qrexec2Implementation/qrexec2-internals.png)
Note that Dom0 never examines the actual data flowing in either of the two
vchan connections.
When a user in a source VM executes `qrexec-client-vm` utility, the following
steps are taken:
* `qrexec-client-vm` connects to `qrexec-agent`'s
`/var/run/qubes/qrexec-agent-fdpass` unix socket 3 times and reads 4 bytes from
each connection -- the fd number of the accepted socket in the agent. These
3 integers, in text, concatenated, form the "connection identifier" (CID);
see the sketch after this list
* `qrexec-client-vm` writes to `/var/run/qubes/qrexec-agent` fifo a blob,
consisting of target vmname, rpc action, and CID
* `qrexec-client-vm` executes the rpc client, passing the above mentioned
unix sockets as process stdin/stdout, and optionally stderr (if the
`PASS_LOCAL_STDERR` env variable is set)
* `qrexec-agent` passes the blob to `qrexec-daemon`, via
`MSG_AGENT_TO_SERVER_TRIGGER_CONNECT_EXISTING` message over vchan
* `qrexec-daemon` executes `qrexec-policy`, passing source vmname, target
vmname, rpc action, and CID as cmdline arguments
* `qrexec-policy` evaluates the policy file. If successful, creates a pair of
`qrexec-client` processes, whose stdin/stdout are cross-connected.
* The first `qrexec-client` connects to the src VM, using the `-c ClientID`
parameter, which results in not creating a new process, but connecting to
the existing process file descriptors (these are the fds of unix socket
created in step 1).
* The second `qrexec-client` connects to the target VM, and executes
`qubes-rpc-multiplexer` command there with the rpc action as the cmdline
argument. Finally, `qubes-rpc-multiplexer` executes the correct rpc server
on the target.
* In the above step, if the target VM is `$dispvm`, the DispVM is created
via the `qfile-daemon-dvm` program. The latter waits for the `qrexec-client`
process to exit, and then destroys the DispVM.
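
As a rough sketch of just the first of these steps (for illustration only -- real programs should use `qrexec-client-vm`; the subsequent trigger blob is not shown, since its exact layout is not given here):

```python
import socket

def open_data_sockets():
    # Connect three times to the agent's fd-passing socket; each connection
    # returns (as 4 bytes of text) the fd number of the accepted socket in
    # the agent. Concatenated, these form the "connection identifier" (CID).
    socks, fd_ids = [], []
    for _ in range(3):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect("/var/run/qubes/qrexec-agent-fdpass")
        fd_ids.append(s.recv(4).decode().strip())
        socks.append(s)
    return socks, "".join(fd_ids)
```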
*TODO: Protocol description ("wire-level" spec)*


@ -0,0 +1,628 @@
---
layout: doc
title: Qrexec3
permalink: /doc/qrexec3/
redirect_from:
- /en/doc/qrexec3/
- /doc/Qrexec3/
- /wiki/Qrexec3/
- /doc/qrexec/
- /en/doc/qrexec/
- /doc/Qrexec/
- /wiki/Qrexec/
- /doc/qrexec3-implementation/
- /en/doc/qrexec3-implementation/
- /doc/Qrexec3Implementation/
- /wiki/Qrexec3Implementation/
---
# Command execution in VMs #
(*This page is about qrexec v3. For qrexec v2, see
[here](/doc/qrexec2/).*)
The **qrexec** framework is used by core Qubes components to implement
communication between domains. Qubes domains are isolated by design, but
there is a need for a mechanism to allow the administrative domain (dom0) to
force command execution in another domain (VM). For instance, when user
selects an application from the KDE menu, it should be started in the selected
VM. Also, it is often useful to be able to pass stdin/stdout/stderr from an
application running in a VM to dom0 (and the other way around). In specific
circumstances, Qubes allows VMs to be initiators of such communications (so,
for example, a VM can notify dom0 that there are updates available for it).
## Qrexec basics ##
Qrexec is built on top of vchan (a library providing data links between
VMs). During domain creation a process named `qrexec-daemon` is started
in dom0, and a process named `qrexec-agent` is started in the VM. They are
connected over **vchan** channel. `qrexec-daemon` listens for connections
from dom0 utility named `qrexec-client`. Typically, the first thing that a
`qrexec-client` instance does is to send a request to `qrexec-daemon` to
start a process (let's name it `VMprocess`) with a given command line in
a specified VM (`someVM`). `qrexec-daemon` assigns unique vchan connection
details and sends them both to `qrexec-client` (in dom0) and `qrexec-agent`
(in `someVM`). `qrexec-client` starts a vchan server which `qrexec-agent`
connects to. Since then, stdin/stdout/stderr from the VMprocess is passed
via vchan between `qrexec-agent` and the `qrexec-client` process.
So, for example, executing in dom0:
qrexec-client -d someVM user:bash
allows one to work with the remote shell. The string before the first
colon specifies what user to run the command as. Adding `-e` on the
`qrexec-client` command line results in mere command execution (no data
passing), and `qrexec-client` exits immediately after sending the execution
request and receiving status code from `qrexec-agent` (whether the process
creation succeeded). There is also the `-l local_program` flag -- with it,
`qrexec-client` passes stdin/stdout of the remote process to the (spawned
for this purpose) `local_program`, not to its own stdin/stdout.
The `qvm-run` command is heavily based on `qrexec-client`. It also takes care
of additional activities, e.g. starting the domain if it is not up yet and
starting the GUI daemon. Thus, it is usually more convenient to use `qvm-run`.
There can be an almost arbitrary number of `qrexec-client` processes for a
domain (that is, connected to the same `qrexec-daemon`, and hence the same domain) -- their
data is multiplexed independently. The number of available vchan channels is
the limiting factor here; it depends on the underlying hypervisor.
## Qubes RPC services ##
Some tasks (like inter-vm file copy) share the same rpc-like structure:
a process in one VM (say, file sender) needs to invoke and send/receive
data to some process in other VM (say, file receiver). Thus, the Qubes RPC
framework was created, facilitating such actions.
Obviously, inter-VM communication must be tightly controlled to prevent one
VM from taking control over another, possibly more privileged, VM. Therefore
the design decision was made to pass all control communication via dom0,
which can enforce proper authorization. Then, it is natural to reuse the
already-existing qrexec framework.
Also, note that bare qrexec provides `VM <-> dom0` connectivity, but the
command execution is always initiated by dom0. There are cases when a VM needs
to invoke and send data to a command in dom0 (e.g. to pass information on
newly installed `.desktop` files). Thus, the framework allows dom0 to be
the rpc target as well.
Thanks to the framework, RPC programs are very simple -- both rpc client
and server just use their stdin/stdout to pass data. The framework does all
the inner work to connect these processes to each other via `qrexec-daemon`
and `qrexec-agent`. Additionally, disposable VMs are tightly integrated --
rpc to a DisposableVM is identical to rpc to a normal domain; all one needs
to do is pass `$dispvm` as the remote domain name.
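For example, the generic invocation pattern shown in the next section can be
used unchanged with a DisposableVM; the service and client names below are
placeholders (note the quoting, which prevents the shell from expanding
`$dispvm`):
```
/usr/lib/qubes/qrexec-client-vm '$dispvm' RPC_ACTION_NAME rpc_client_path client arguments
```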
## Qubes RPC administration ##
(*TODO: fix for non-linux dom0*)
In dom0, there is a bunch of files in the `/etc/qubes-rpc/policy` directory,
whose names describe the available rpc actions. Their content is the rpc
access policy database. Currently defined actions are:
qubes.ClipboardPaste
qubes.Filecopy
qubes.GetImageRGBA
qubes.GetRandomizedTime
qubes.Gpg
qubes.GpgImportKey
qubes.InputKeyboard
qubes.InputMouse
qubes.NotifyTools
qubes.NotifyUpdates
qubes.OpenInVM
qubes.OpenURL
qubes.PdfConvert
qubes.ReceiveUpdates
qubes.SyncAppMenus
qubes.USB
qubes.VMShell
qubes.WindowIconUpdater
These files contain lines with the following format:
srcvm destvm (allow|deny|ask)[,user=user_to_run_as][,target=VM_to_redirect_to]
You can specify srcvm and destvm by name, or by one of the reserved keywords
`$anyvm`, `$dispvm`, or `dom0` (note that the string `dom0` does not match the
`$anyvm` pattern; all other names do). Only the `$anyvm` keyword makes sense
in the srcvm field (service calls from dom0 are currently always allowed, and
`$dispvm` means "new VM created for this particular request," so it is never
the source of a request). Currently there is no way to specify a source VM by
type. Whenever an rpc request for action X is received, the first line in
`/etc/qubes-rpc/policy/X` that matches srcvm/destvm is consulted to determine
whether to allow the rpc, what user account the program should run under in
the target VM, and what VM to redirect the execution to. Note that if the
request is redirected (`target=` parameter), the policy action remains the
same, even if there is another rule which would otherwise deny such a request.
If the policy file does not exist, the user is prompted to create one; if
there is still no policy file after prompting, the action is denied.
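For illustration, a policy file for some hypothetical service could look like
this (VM names are examples); remember that the first matching line wins:
```
office-vm  work-vm  allow
$anyvm     work-vm  ask
$anyvm     $anyvm   deny
```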
In the target VM, the file `/etc/qubes-rpc/RPC_ACTION_NAME` must exist. It
either contains the file name of the program that will be invoked, or is that
program itself, in which case it must have the executable permission set
(`chmod +x`).
In the src VM, one should invoke the client via:
/usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME rpc_client_path client arguments
Note that only stdin/stdout is passed between rpc server and client --
notably, no command line arguments are passed. The source VM name is made
available to the server in the `QREXEC_REMOTE_DOMAIN` environment variable. By
default, stderr of the client and server is logged to the respective
`/var/log/qubes/qrexec.XID` files.
It is also possible to call a service without a specific client program, in
which case the server's stdin/stdout will be connected to the terminal:
/usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME
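For instance, assuming the dom0 policy permits it, running the following in a
source VM connects the current terminal to a shell in the `work` VM (an
example name) through the `qubes.VMShell` service; commands typed there are
executed in `work`:
```
/usr/lib/qubes/qrexec-client-vm work qubes.VMShell
```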
Be very careful when coding and adding a new rpc service. Unless the
offered functionality equals full control over the target (as is the case
with e.g. the `qubes.VMShell` action), any vulnerability in an rpc server can
be fatal to Qubes security. On the other hand, this mechanism allows one to
delegate processing of untrusted input to less privileged (or disposable)
AppVMs, so wise use of it increases security.
For example, this command will run the `firefox` command in a DisposableVM based
on `work`:
```
$ qvm-run --dispvm=work firefox
```
By contrast, consider this command:
```
$ qvm-run --dispvm=work --service qubes.StartApp+firefox
```
This will look for a `firefox.desktop` file in a standard location in a
DisposableVM based on `work`, then launch the application described by that
file. The practical difference is that the bare `qvm-run` command uses the
`qubes.VMShell` service, which allows you to run an arbitrary command with
arbitrary arguments, essentially providing full control over the target VM. By
contrast, the `qubes.StartApp` service allows you to run only applications that
are advertised in `/usr/share/applications` (or other standard locations)
*without* control over the arguments, so giving a VM access to `qubes.StartApp`
is much safer. While there isn't much practical difference between the two
commands above when starting an application from dom0 in Qubes 4.0, there is a
significant security risk when launching applications from a domU (e.g., from
a separate GUI domain). This is why `qubes.StartApp` uses our standard `qrexec`
argument grammar to strictly filter the permissible grammar of the `Exec=` lines
in `.desktop` files that are passed from untrusted domUs to dom0, thereby
protecting dom0 from command injection by maliciously-crafted `.desktop` files.
### Extra keywords available in Qubes 4.0 and later
**This section is about a not-yet-released version, some details may change**
In Qubes 4.0, the target VM can also be specified as `$dispvm:DISP_VM`, which
is very similar to `$dispvm` but forces the use of a particular VM (`DISP_VM`)
as the base VM to be started as a DisposableVM. For example:
anon-whonix $dispvm:anon-whonix-dvm allow
Adding such a policy by itself will not force the use of this particular
`DISP_VM`; it will only allow it when specified by the caller. But
`$dispvm:DISP_VM` can also be used as the target in request redirection, so
_it is possible_ to force the use of a particular `DISP_VM` even when the
caller didn't specify one:
anon-whonix $dispvm allow,target=$dispvm:anon-whonix-dvm
Note that without redirection, this rule would allow the use of the default
DisposableVM (the `default_dispvm` property of the calling VM, which itself
defaults to the global `default_dispvm` property).
Also note that the request will be allowed (`allow` action) even if there is no
second rule allowing calls to `$dispvm:anon-whonix-dvm`, or even if
there is a rule explicitly denying it. This is because the redirection happens
_after_ considering the action.
In Qubes 4.0 there are also additional methods to specify source/target VM:
* `$tag:some-tag` - meaning a VM with tag `some-tag`
* `$type:type` - meaning a VM of a given `type` (like `AppVM`, `TemplateVM`, etc.)
The target VM can also be specified as `$default`, which matches the case when
the calling VM didn't specify any particular target (either by using the
`$default` target, or an empty target).
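A hypothetical rule set combining these keywords could look like this (tag and
VM names are examples):
```
$tag:work   $type:AppVM  ask
work-email  $default     allow,target=work-files
```
The first line asks the user about calls from any VM tagged `work` to any
AppVM; the second allows calls from `work-email` that did not specify a
target, redirecting them to `work-files`.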
In Qubes 4.0, the policy confirmation dialog (`ask` action) allows the user to
specify the target VM. The user can choose from the VMs that, according to the
policy, would lead to `ask` or `allow` actions. It is not possible to select a
VM that the policy would deny. By default no VM is selected, even if the
caller provided one, but the policy can specify a default value using the
`default_target=` parameter. For example:
work-mail work-archive allow
work-mail $tag:work ask,default_target=work-files
work-mail $default ask,default_target=work-files
The first rule allows calls from `work-mail` to `work-archive`, without any
confirmation.
The second rule will ask the user about calls from the `work-mail` VM to any VM
with the tag `work`, and the confirmation dialog will have the `work-files` VM
chosen by default, regardless of the VM specified by the caller (the
`work-mail` VM). The third rule allows the caller to not specify the target VM
at all and lets the user choose, still from VMs with the tag `work` (and
`work-archive`, regardless of tag), and with `work-files` as the default.
### Service argument in policy
Sometimes the service name alone isn't enough to write a reasonable qrexec
policy. One example of such a situation is [qrexec-based USB
passthrough](https://github.com/qubesos/qubes-issues/issues/531): using just
the service name, it isn't possible to express a policy like "allow access to
device X and deny access to the others". It also isn't feasible to create a
separate service for every device...
For this reason, starting with Qubes 3.2, it is possible to specify a service
argument, which is subject to the policy. Besides the above example of USB
passthrough, a service argument can make many service policies more
fine-grained and make it easier to write precise policies with "allow" and
"deny" actions instead of "ask" (which offloads additional decisions to the
user). And generally, the fewer choices the user must make, the lower the
chance of a mistake.
The syntax is simple: when calling a service, add an argument to the service
name, separated with a `+` sign, for example:
/usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME+ARGUMENT
Then create a policy as usual, including the argument
(`/etc/qubes-rpc/policy/RPC_ACTION_NAME+ARGUMENT`). If the policy for the specific
argument is not set (file does not exist), then the default policy for this service
is loaded (`/etc/qubes-rpc/policy/RPC_ACTION_NAME`).
In the target VM (when the call is allowed), the service file will be searched
for as:
- `/etc/qubes-rpc/RPC_ACTION_NAME+ARGUMENT`
- `/etc/qubes-rpc/RPC_ACTION_NAME`
In either case, the script will receive `ARGUMENT` as its command line
argument and additionally in the `QREXEC_SERVICE_ARGUMENT` environment
variable. This means it is also possible to install a different script for a
particular service argument.
See below for an example service using an argument.
### Revoking "Yes to All" authorization ###
Qubes RPC policy supports the "ask" action, which will prompt the user whether
a given RPC call should be allowed. That prompt window also has a "Yes to All"
option, which will allow the action and add a new entry to the policy file that
unconditionally allows further calls for the given service-srcVM-dstVM tuple.
In order to remove such an authorization, issue this command from a dom0
terminal (here for the `qubes.Filecopy` service):
sudo nano /etc/qubes-rpc/policy/qubes.Filecopy
and then remove the first line(s) (those before the first `##` comment), which
are the result of "Yes to All".
### Qubes RPC example ###
We will show the necessary files to create an rpc call that adds two integers
on the target and returns the result back to the invoker.
* rpc client code (`/usr/bin/our_test_add_client`):
#!/bin/sh
echo $1 $2 # pass data to rpc server
exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other rpc endpoint
* rpc server code (*/usr/bin/our\_test\_add\_server*)
#!/bin/sh
read arg1 arg2 # read from stdin, which is received from the rpc client
echo $(($arg1+$arg2)) # print to stdout - so, pass to the rpc client
* policy file in dom0 (*/etc/qubes-rpc/policy/test.Add* )
$anyvm $anyvm ask
* server path definition ( */etc/qubes-rpc/test.Add*)
/usr/bin/our_test_add_server
* invoke rpc via
/usr/lib/qubes/qrexec-client-vm target_vm test.Add /usr/bin/our_test_add_client 1 2
and we should get "3" as the answer, after dom0 allows the call.
**Note:** For a real world example of writing a qrexec service, see this
[blog post](https://blog.invisiblethings.org/2013/02/21/converting-untrusted-pdfs-into-trusted.html).
### Qubes RPC example - with argument usage ###
We will show the necessary files to create an rpc call that reads a specific
file from a predefined directory on the target. Besides being a really naive
form of storage, it may serve as a very simple password manager.
Additionally, this example uses a simplified workflow: the server code is
placed directly in the service definition file (in the `/etc/qubes-rpc`
directory), and no separate client script is used.
* rpc server code (*/etc/qubes-rpc/test.File*)
#!/bin/sh
argument="$1" # service argument, also available as $QREXEC_SERVICE_ARGUMENT
if [ -z "$argument" ]; then
echo "ERROR: No argument given!"
exit 1
fi
# service argument is already sanitized by qrexec framework and it is
# guaranteed to not contain any space or /, so no need for additional path
# sanitization
cat "/home/user/rpc-file-storage/$argument"
* specific policy file in dom0 (*/etc/qubes-rpc/policy/test.File+testfile1* )
source_vm1 target_vm allow
* another specific policy file in dom0 (*/etc/qubes-rpc/policy/test.File+testfile2* )
source_vm2 target_vm allow
* default policy file in dom0 (*/etc/qubes-rpc/policy/test.File* )
$anyvm $anyvm deny
* invoke rpc from `source_vm1` via
/usr/lib/qubes/qrexec-client-vm target_vm test.File+testfile1
and we should get the content of `/home/user/rpc-file-storage/testfile1` as
the answer.
* it is also possible to invoke the rpc from `source_vm2` via
/usr/lib/qubes/qrexec-client-vm target_vm test.File+testfile2
But when invoked with any other argument or from a different VM, the call will
be denied.
# Qubes RPC internals #
(*This is about the implementation of qrexec v3. For the implementation of
qrexec v2, see [here](/doc/qrexec2/#qubes-rpc-internals).*)
The qrexec framework consists of a number of processes communicating with each
other using a common IPC protocol (described in detail below). Components
residing in the same domain (`qrexec-client-vm` to `qrexec-agent`,
`qrexec-client` to `qrexec-daemon`) use pipes as the underlying transport
medium, while components in separate domains (`qrexec-daemon` to
`qrexec-agent`, data channel between `qrexec-agent`s) use a vchan link.
Because of a [vchan limitation](https://github.com/qubesos/qubes-issues/issues/951), it is not possible to establish a qrexec connection back to the source domain.
## Dom0 tools implementation ##
* `/usr/lib/qubes/qrexec-daemon`: One instance is required for every active
domain. Responsible for:
* Handling execution and service requests from **dom0** (source:
`qrexec-client`).
* Handling service requests from the associated domain (source:
`qrexec-client-vm`, then `qrexec-agent`).
* Command line: `qrexec-daemon domain-id domain-name [default user]`
* `domain-id`: Numeric Qubes ID assigned to the associated domain.
* `domain-name`: Associated domain name.
* `default user`: Optional. If passed, `qrexec-daemon` uses this user as
default for all execution requests that don't specify one.
* `/usr/lib/qubes/qrexec-policy`: Internal program used to evaluate the
  RPC policy and to decide whether an RPC call should be allowed.
* `/usr/lib/qubes/qrexec-client`: Used to pass execution and service requests
to `qrexec-daemon`. Command line parameters:
* `-d target-domain-name`: Specifies the target for the execution/service
request.
* `-l local-program`: Optional. If present, `local-program` is executed
and its stdout/stdin are used when sending/receiving data to/from the
remote peer.
* `-e`: Optional. If present, stdout/stdin are not connected to the remote
peer. Only process creation status code is received.
* `-c <request-id,src-domain-name,src-domain-id>`: used for connecting
a VM-VM service request by `qrexec-policy`. Details described below in
the service example.
* `cmdline`: Command line to pass to `qrexec-daemon` as the
execution/service request. Service request format is described below in
the service example.
**Note:** None of the above tools are designed to be used by users directly.
## VM tools implementation ##
* `qrexec-agent`: One instance runs in each active domain. Responsible for:
* Handling service requests from `qrexec-client-vm` and passing them to
connected `qrexec-daemon` in dom0.
* Executing associated `qrexec-daemon` execution/service requests.
* Command line parameters: none.
* `qrexec-client-vm`: Runs in an active domain. Used to pass service requests
to `qrexec-agent`.
* Command line: `qrexec-client-vm target-domain-name service-name local-program [local program arguments]`
* `target-domain-name`: Target domain for the service request. Source is
the current domain.
* `service-name`: Requested service name.
* `local-program`: `local-program` is executed locally and its stdin/stdout
are connected to the remote service endpoint.
## Qrexec protocol details ##
Qrexec protocol is message-based. All messages share a common header followed
by an optional data packet.
/* uniform for all peers, data type depends on message type */
struct msg_header {
uint32_t type; /* message type */
uint32_t len; /* data length */
};
When two peers establish a connection, the server sends `MSG_HELLO` followed
by a `peer_info` struct:
struct peer_info {
uint32_t version; /* qrexec protocol version */
};
The client should then reply with its own `MSG_HELLO` and `peer_info`. The
lower of the two versions defines the protocol used for this connection. If
either side does not support this version, the connection is closed.
Details of all possible use cases and the messages involved are described below.
### dom0: request execution of `some_command` in domX and pass stdin/stdout ###
- **dom0**: `qrexec-client` is invoked in **dom0** as follows:
`qrexec-client -d domX [-l local_program] user:some_command`
- `user` may be substituted with the literal `DEFAULT`. In that case,
default Qubes user will be used to execute `some_command`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable
to **domX**.
- **dom0**: If `local_program` is set, `qrexec-client` executes it and uses
that child's stdin/stdout in place of its own when exchanging data with
`qrexec-agent` later.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_HELLO` header followed by `peer_info`
to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by
`peer_info` to `qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by
`exec_params` to `qrexec-daemon`.
/* variable size */
struct exec_params {
uint32_t connect_domain; /* target domain id */
uint32_t connect_port; /* target vchan port for i/o exchange */
char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
In this case, `connect_domain` and `connect_port` are set to 0.
- **dom0**: `qrexec-daemon` replies to `qrexec-client` with
`MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline`
field. `connect_domain` is set to Qubes ID of **domX** and `connect_port`
is set to a vchan port allocated by `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header followed
by `exec_params` to the associated **domX** `qrexec-agent` over
vchan. `connect_domain` is set to 0 (**dom0**), `connect_port` is the same
as sent to `qrexec-client`. `cmdline` is unchanged except that the literal
`DEFAULT` is replaced with actual user name, if present.
- **dom0**: `qrexec-client` disconnects from `qrexec-daemon`.
- **dom0**: `qrexec-client` starts a vchan server using the details received
from `qrexec-daemon` and waits for connection from **domX**'s `qrexec-agent`.
- **domX**: `qrexec-agent` receives `MSG_EXEC_CMDLINE` header followed by
`exec_params` from `qrexec-daemon` over vchan.
- **domX**: `qrexec-agent` connects to `qrexec-client` over vchan using the
details from `exec_params`.
- **domX**: `qrexec-agent` executes `some_command` in **domX** and connects
the child's stdin/stdout to the data vchan. If the process creation fails,
`qrexec-agent` sends `MSG_DATA_EXIT_CODE` to `qrexec-client` followed by
the status code (**int**) and disconnects from the data vchan.
- Data read from `some_command`'s stdout is sent to the data vchan using
`MSG_DATA_STDOUT` by `qrexec-agent`. `qrexec-client` passes data received as
`MSG_DATA_STDOUT` to its own stdout (or to `local_program`'s stdin if used).
- `qrexec-client` sends data read from local stdin (or `local_program`'s
stdout if used) to `qrexec-agent` over the data vchan using
`MSG_DATA_STDIN`. `qrexec-agent` passes data received as `MSG_DATA_STDIN`
to `some_command`'s stdin.
- `MSG_DATA_STDOUT` or `MSG_DATA_STDIN` with data `len` field set to 0 in
  `msg_header` is an EOF marker. A peer receiving such a message should close
  the associated input/output pipe.
- When `some_command` terminates, **domX**'s `qrexec-agent` sends
`MSG_DATA_EXIT_CODE` header to `qrexec-client` followed by the exit code
(**int**). `qrexec-agent` then disconnects from the data vchan.
### domY: invoke execution of qubes service `qubes.SomeRpc` in domX and pass stdin/stdout ###
- **domY**: `qrexec-client-vm` is invoked as follows:
`qrexec-client-vm domX qubes.SomeRpc local_program [params]`
- **domY**: `qrexec-client-vm` connects to `qrexec-agent` (via local
socket/named pipe).
- **domY**: `qrexec-client-vm` sends `trigger_service_params` data to
`qrexec-agent` (without filling the `request_id` field):
struct trigger_service_params {
char service_name[64];
char target_domain[32];
struct service_params request_id; /* service request id */
};
struct service_params {
char ident[32];
};
- **domY**: `qrexec-agent` allocates a locally-unique (for this domain)
`request_id` (let's say `13`) and fills it in the `trigger_service_params`
struct received from `qrexec-client-vm`.
- **domY**: `qrexec-agent` sends `MSG_TRIGGER_SERVICE` header followed by
`trigger_service_params` to `qrexec-daemon` in **dom0** via vchan.
- **dom0**: **domY**'s `qrexec-daemon` executes `qrexec-policy`: `qrexec-policy
domY_id domY domX qubes.SomeRpc 13`.
- **dom0**: `qrexec-policy` evaluates if the RPC should be allowed or
denied. If the action is allowed it returns `0`, if the action is denied it
returns `1`.
- **dom0**: **domY**'s `qrexec-daemon` checks the exit code of `qrexec-policy`.
- If `qrexec-policy` returned **not** `0`: **domY**'s `qrexec-daemon`
sends `MSG_SERVICE_REFUSED` header followed by `service_params` to
**domY**'s `qrexec-agent`. `service_params.ident` is identical to the one
received. **domY**'s `qrexec-agent` disconnects its `qrexec-client-vm`
and RPC processing is finished.
- If `qrexec-policy` returned `0`, RPC processing continues.
- **dom0**: if `qrexec-policy` allowed the RPC, it executes `qrexec-client
-d domX -c 13,domY,domY_id user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable
to **domX**.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_HELLO` header followed by
`peer_info` to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by
  `peer_info` to **domX**'s `qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by
  `exec_params` to **domX**'s `qrexec-daemon`.
/* variable size */
struct exec_params {
uint32_t connect_domain; /* target domain id */
uint32_t connect_port; /* target vchan port for i/o exchange */
char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
In this case, `connect_domain` is set to id of **domY** (from the `-c`
parameter) and `connect_port` is set to 0. `cmdline` field contains the
RPC to execute, in this case `user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: **domX**'s `qrexec-daemon` replies to `qrexec-client` with
`MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline`
field. `connect_domain` is set to Qubes ID of **domX** and `connect_port`
is set to a vchan port allocated by **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header
followed by `exec_params` to **domX**'s `qrexec-agent`. `connect_domain`
and `connect_port` fields are the same as in the step above. `cmdline` is
set to the one received from `qrexec-client`, in this case `user:QUBESRPC
qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` disconnects from **domX**'s `qrexec-daemon`
after receiving connection details.
- **dom0**: `qrexec-client` connects to **domY**'s `qrexec-daemon` and
exchanges `MSG_HELLO` as usual.
- **dom0**: `qrexec-client` sends `MSG_SERVICE_CONNECT` header followed by
`exec_params` to **domY**'s `qrexec-daemon`. `connect_domain` is set to ID
of **domX** (received from **domX**'s `qrexec-daemon`) and `connect_port` is
the one received as well. `cmdline` is set to request ID (`13` in this case).
- **dom0**: **domY**'s `qrexec-daemon` sends `MSG_SERVICE_CONNECT` header
followed by `exec_params` to **domY**'s `qrexec-agent`. Data fields are
unchanged from the step above.
- **domY**: `qrexec-agent` starts a vchan server on the port received in
the step above. It acts as a `qrexec-client` in this case because this is
a VM-VM connection.
- **domX**: `qrexec-agent` connects to the vchan server of **domY**'s
`qrexec-agent` (connection details were received before from **domX**'s
`qrexec-daemon`).
- After that, the connection follows the flow of the previous example (dom0-VM).