Convert to RST

This is done using tools at
https://github.com/maiska/qubes-translation-utilz, commit
4c8e2a7f559fd37e29b51769ed1ab1c6cf92e00d.
Author: Marek Marczykowski-Górecki
Date: 2025-07-04 14:23:09 +02:00
parent e3db139fe3
commit 7e464d0f40
GPG key ID: F32894BE9684938A
428 changed files with 32833 additions and 29703 deletions


@@ -1,281 +0,0 @@
---
lang: en
layout: doc
permalink: /doc/automated-tests/
redirect_from:
- /en/doc/automated-tests/
- /doc/AutomatedTests/
ref: 45
title: Automated tests
---
## Unit and Integration Tests
Starting with Qubes R3 we use [python unittest](https://docs.python.org/3/library/unittest.html) to perform automatic tests of Qubes OS.
Despite the name, we use it for both [unit tests](https://en.wikipedia.org/wiki/Unit_tests) and [integration tests](https://en.wikipedia.org/wiki/Integration_tests).
The main purpose is, of course, to deliver much more stable releases.
The integration tests must be run in dom0, but some unit tests can run inside a VM as well.
### Integration & unit testing in dom0
Integration tests are written with the assumption that they will be executed on dedicated hardware and must be run in dom0. All other unit tests can also be run in dom0.
**Do not run the tests on installations with important data, because you might lose it.**
All the VMs with a name starting with `test-` on the installation are removed during the process, and all the tests are recklessly started from dom0, even when testing (& possibly breaking) VM components.
First you need to build all packages that you want to test. Please do not mix branches, as this will inevitably lead to failures. Then set up Qubes OS with these packages installed.
For testing you'll have to stop the `qubesd` service, as the tests use their own custom variant of the service:
`sudo systemctl stop qubesd`
Don't forget to start it again after testing.
To start testing you can then use the standard python unittest runner:
`sudo -E python3 -m unittest -v qubes.tests`
Alternatively, use the custom Qubes OS test runner:
`sudo -E python3 -m qubes.tests.run -v`
Our test runner behaves mostly the same as the standard one, but it has some nice additional features, like colored output and not needing the "qubes.tests" prefix.
You can use `python3 -m qubes.tests.run -h` to get usage information:
```
[user@dom0 ~]$ python3 -m qubes.tests.run -h
usage: run.py [-h] [--verbose] [--quiet] [--list] [--failfast] [--no-failfast]
[--do-not-clean] [--do-clean] [--loglevel LEVEL]
[--logfile FILE] [--syslog] [--no-syslog] [--kmsg] [--no-kmsg]
[TESTNAME [TESTNAME ...]]
positional arguments:
TESTNAME list of tests to run named like in description
(default: run all tests)
optional arguments:
-h, --help show this help message and exit
--verbose, -v increase console verbosity level
--quiet, -q decrease console verbosity level
--list, -l list all available tests and exit
--failfast, -f stop on the first fail, error or unexpected success
--no-failfast disable --failfast
--loglevel LEVEL, -L LEVEL
logging level for file and syslog forwarding (one of:
NOTSET, DEBUG, INFO, WARN, WARNING, ERROR, CRITICAL;
default: DEBUG)
--logfile FILE, -o FILE
if set, test run will be also logged to file
--syslog reenable logging to syslog
--no-syslog disable logging to syslog
--kmsg, --very-brave-or-very-stupid
log most important things to kernel ring-buffer
--no-kmsg, --i-am-smarter-than-kay-sievers
do not abuse kernel ring-buffer
--allow-running-along-qubesd
allow running in parallel with qubesd; this is
DANGEROUS and WILL RESULT IN INCONSISTENT SYSTEM STATE
--break-to-repl break to REPL after tests
When running only specific tests, write their names like in log, in format:
MODULE+"/"+CLASS+"/"+FUNCTION. MODULE should omit initial "qubes.tests.".
Example: basic/TC_00_Basic/test_000_create
```
For instance, to run only the tests for the fedora-21 template, you can use the `-l` option, then filter the list:
```
[user@dom0 ~]$ python3 -m qubes.tests.run -l | grep fedora-21
network/VmNetworking_fedora-21/test_000_simple_networking
network/VmNetworking_fedora-21/test_010_simple_proxyvm
network/VmNetworking_fedora-21/test_020_simple_proxyvm_nm
network/VmNetworking_fedora-21/test_030_firewallvm_firewall
network/VmNetworking_fedora-21/test_040_inter_vm
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_000_start_shutdown
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_010_run_gui_app
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_050_qrexec_simple_eof
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_051_qrexec_simple_eof_reverse
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_052_qrexec_vm_service_eof
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_053_qrexec_vm_service_eof_reverse
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_060_qrexec_exit_code_dom0
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_065_qrexec_exit_code_vm
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_100_qrexec_filecopy
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_110_qrexec_filecopy_deny
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_120_qrexec_filecopy_self
vm_qrexec_gui/TC_20_DispVM_fedora-21/test_000_prepare_dvm
vm_qrexec_gui/TC_20_DispVM_fedora-21/test_010_simple_dvm_run
vm_qrexec_gui/TC_20_DispVM_fedora-21/test_020_gui_app
vm_qrexec_gui/TC_20_DispVM_fedora-21/test_030_edit_file
[user@dom0 ~]$ sudo -E python3 -m qubes.tests.run -v `python3 -m qubes.tests.run -l | grep fedora-21`
```
Example test run:
![snapshot-tests2.png](/attachment/doc/snapshot-tests2.png)
Tests are also compatible with nose2 test runner, so you can use this instead:
```bash
sudo systemctl stop qubesd; sudo -E nose2 -v --plugin nose2.plugins.loader.loadtests qubes.tests; sudo systemctl start qubesd
```
This may be especially useful together with various nose2 plugins to store test results (for example `nose2.plugins.junitxml`), to ease presenting results. This is what we use on [OpenQA](https://open.qa/).
### Unit testing inside a VM
Many unit tests will also work inside a VM. However, all of the tests requiring a dedicated VM (mostly the integration tests) will be skipped.
Whereas integration tests are mostly stored in the [qubes-core-admin](https://github.com/QubesOS/qubes-core-admin) repository, unit tests can be found in each of the Qubes OS repositories.
For example, to run the `qubes-core-admin` unit tests, you currently have to clone at least the [qubes-core-admin](https://github.com/QubesOS/qubes-core-admin) repository and
its dependency [qubes-core-qrexec](https://github.com/QubesOS/qubes-core-qrexec), in the branches that you want to test.
The example below, however, assumes that you have set up a build environment as described in the [Qubes Builder documentation](/doc/qubes-builder-v2/).
Assuming you cloned the `qubes-builder` repository to your home directory inside a Fedora VM, you can use the following commands to run the unit tests:
```{.bash}
cd ~
sudo dnf install python3-pip lvm2 python35 python3-virtualenv
virtualenv -p /usr/bin/python35 python35
source python35/bin/activate
python3 -V
cd ~/qubes-builder/qubes-src/core-admin
pip3 install -r ci/requirements.txt
export PYTHONPATH=../core-qrexec:test-packages
./run-tests
```
To run only the tests related to e.g. `lvm`, you may use:
`./run-tests -v $(python3 -m qubes.tests.run -l | grep lvm)`
You can later re-use the created virtual environment, including all of the packages installed via `pip3`, with `source ~/python35/bin/activate`.
We recommend running the unit tests with the Python version that the code is meant to run with in dom0 (3.5 was just an example above). For instance, the `release4.0` (Qubes 4.0) branch is intended
to run with Python 3.5, whereas the Qubes 4.1 branch (`master` as of 2020-07) is intended to run with Python 3.7 or higher. You can always check your dom0 installation for the Python version of
the current stable branch.
### Tests configuration
Test runs can be altered using environment variables:
- `DEFAULT_LVM_POOL` - LVM thin pool to use for tests, in `VolumeGroup/ThinPool` format
- `QUBES_TEST_PCIDEV` - PCI device to be used in PCI passthrough tests (for example, a sound card)
- `QUBES_TEST_TEMPLATES` - space-separated list of templates to run tests on; if not set, all installed templates are tested
- `QUBES_TEST_LOAD_ALL` - load all tests (including tests for all templates) when relevant test modules are imported; this needs to be set for test runners not supporting [load_tests protocol](https://docs.python.org/3/library/unittest.html#load-tests-protocol)
### Adding a new test to core-admin
After adding a new unit test to [core-admin/qubes/tests](https://github.com/QubesOS/qubes-core-admin/tree/master/qubes/tests), you'll have to include it in [core-admin/qubes/tests/\_\_init\_\_.py](https://github.com/QubesOS/qubes-core-admin/tree/master/qubes/tests/__init__.py).
#### Editing `__init__.py`
You'll also need to add your test at the bottom of the `__init__.py` file, in the `load_tests` function, in the `for` loop over `modname`.
Again, given the hypothetical `example.py` test:
~~~python
for modname in (
    'qubes.tests.basic',
    'qubes.tests.dom0_update',
    'qubes.tests.network',
    'qubes.tests.vm_qrexec_gui',
    'qubes.tests.backup',
    'qubes.tests.backupcompatibility',
    'qubes.tests.regressions',
    'qubes.tests.example',  # This is our newly added test
):
~~~
### Testing PyQt applications
When testing (Py)Qt applications, it's useful to create a separate QApplication object for each test.
But the Qt framework does not allow multiple QApplication objects in the same process at the same time.
This means it's critical to reliably clean up the previous instance before creating a new one.
This turns out to be a non-trivial task, especially if _any_ test uses the event loop.
Failure to perform proper cleanup in many cases results in SEGV.
Below you can find the steps for a proper cleanup:
~~~python
import asyncio
import gc
import unittest

import quamash
# the original snippet assumes a Qt binding providing QtGui.QApplication
# (PyQt4-style); with Qt5 bindings, QApplication lives in QtWidgets
from PyQt4 import QtGui

class SomeTestCase(unittest.TestCase):

    def setUp(self):
        [...]
        # force the "cleanlooks" style; the default one on Xfce (GtkStyle) uses
        # a static variable internally and caches pointers to later-destroyed
        # objects (result: SEGV)
        self.qtapp = QtGui.QApplication(["test", "-style", "cleanlooks"])
        # construct the event loop even if this particular test doesn't use it,
        # otherwise events with qtapp references will be queued there anyway and
        # the first test that actually uses the event loop will try to dereference
        # (already destroyed) objects, resulting in SEGV
        self.loop = quamash.QEventLoop(self.qtapp)

    def tearDown(self):
        [...]
        # process any pending events before destroying the object
        self.qtapp.processEvents()
        # queue destroying the QApplication object; do that for any other Qt-
        # related objects here too
        self.qtapp.deleteLater()
        # process any pending events (other than the just-queued destroy), just in case
        self.qtapp.processEvents()
        # execute the main loop, which will process all events,
        # _including the just-queued destroy_
        self.loop.run_until_complete(asyncio.sleep(0))
        # at this point all Qt objects are destroyed; clean up the remaining
        # references, del any other Qt objects here too
        self.loop.close()
        del self.qtapp
        del self.loop
        gc.collect()
~~~
## Automated tests with openQA
**URL:** <https://openqa.qubes-os.org/>
**Tests:** <https://github.com/marmarek/openqa-tests-qubesos>
Manually testing Qubes OS and its installation is a time-consuming process.
We use [OpenQA](https://open.qa/) to automate this process.
It works by installing Qubes in KVM and interacting with it as a user would, including simulating mouse clicks and keyboard presses.
Then, it checks the output to see whether various tests were passed, e.g. by comparing the virtual screen output to screenshots of a successful installation.
Using openQA to automatically test the Qubes installation process works as of Qubes 4.0-rc4 on 2018-01-26, provided that the versions of KVM and QEMU are new enough and the hardware has VT-x and EPT.
KVM also supports nested virtualization, so HVM should theoretically work.
In practice, however, either Xen or QEMU crashes when this is attempted.
Nonetheless, PV works well, which is sufficient for automated installation testing.
Thanks to present and past donors who have provided the infrastructure for Qubes' openQA system with hardware that meets these requirements.
### Looking for patterns in tests
To better visualize patterns in tests, the [`openqa_investigator`](https://github.com/QubesOS/openqa-tests-qubesos/blob/master/utils/openqa_investigator.py) script can be used.
It consumes openQA test data and produces graph plots. Here is an example:
![openqa-investigator-splitgpg-example.png](/attachment/doc/openqa-investigator-splitgpg-example.png)
Some outputs:
- plot by tests
- plot by errors
- markdown
Some filters:
- filter by error
- filter by test name
Check out the script's help with `python3 openqa_investigator.py --help`
to see all available options.


@@ -0,0 +1,296 @@
===============
Automated tests
===============
Unit and Integration Tests
--------------------------
Starting with Qubes R3 we use `python unittest <https://docs.python.org/3/library/unittest.html>`__ to perform automatic tests of Qubes OS. Despite the name, we use it for both `unit tests <https://en.wikipedia.org/wiki/Unit_tests>`__ and `integration tests <https://en.wikipedia.org/wiki/Integration_tests>`__. The main purpose is, of course, to deliver much more stable releases.
The integration tests must be run in dom0, but some unit tests can run inside a VM as well.
Integration & unit testing in dom0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Integration tests are written with the assumption that they will be executed on dedicated hardware and must be run in dom0. All other unit tests can also be run in dom0.
**Do not run the tests on installations with important data, because you might lose it.**
All the VMs with a name starting with ``test-`` on the installation are removed during the process, and all the tests are recklessly started from dom0, even when testing (& possibly breaking) VM components.
First you need to build all packages that you want to test. Please do not mix branches, as this will inevitably lead to failures. Then set up Qubes OS with these packages installed.
For testing you'll have to stop the ``qubesd`` service, as the tests use their own custom variant of the service: ``sudo systemctl stop qubesd``
Don't forget to start it again after testing.
To start testing you can then use the standard python unittest runner:
``sudo -E python3 -m unittest -v qubes.tests``
Alternatively, use the custom Qubes OS test runner:
``sudo -E python3 -m qubes.tests.run -v``
Our test runner behaves mostly the same as the standard one, but it has some nice additional features, like colored output and not needing the "qubes.tests" prefix.
You can use ``python3 -m qubes.tests.run -h`` to get usage information:
.. code:: bash
[user@dom0 ~]$ python3 -m qubes.tests.run -h
usage: run.py [-h] [--verbose] [--quiet] [--list] [--failfast] [--no-failfast]
[--do-not-clean] [--do-clean] [--loglevel LEVEL]
[--logfile FILE] [--syslog] [--no-syslog] [--kmsg] [--no-kmsg]
[TESTNAME [TESTNAME ...]]
positional arguments:
TESTNAME list of tests to run named like in description
(default: run all tests)
optional arguments:
-h, --help show this help message and exit
--verbose, -v increase console verbosity level
--quiet, -q decrease console verbosity level
--list, -l list all available tests and exit
--failfast, -f stop on the first fail, error or unexpected success
--no-failfast disable --failfast
--loglevel LEVEL, -L LEVEL
logging level for file and syslog forwarding (one of:
NOTSET, DEBUG, INFO, WARN, WARNING, ERROR, CRITICAL;
default: DEBUG)
--logfile FILE, -o FILE
if set, test run will be also logged to file
--syslog reenable logging to syslog
--no-syslog disable logging to syslog
--kmsg, --very-brave-or-very-stupid
log most important things to kernel ring-buffer
--no-kmsg, --i-am-smarter-than-kay-sievers
do not abuse kernel ring-buffer
--allow-running-along-qubesd
allow running in parallel with qubesd; this is
DANGEROUS and WILL RESULT IN INCONSISTENT SYSTEM STATE
--break-to-repl break to REPL after tests
When running only specific tests, write their names like in log, in format:
MODULE+"/"+CLASS+"/"+FUNCTION. MODULE should omit initial "qubes.tests.".
Example: basic/TC_00_Basic/test_000_create
For instance, to run only the tests for the fedora-21 template, you can use the ``-l`` option, then filter the list:
.. code:: bash
[user@dom0 ~]$ python3 -m qubes.tests.run -l | grep fedora-21
network/VmNetworking_fedora-21/test_000_simple_networking
network/VmNetworking_fedora-21/test_010_simple_proxyvm
network/VmNetworking_fedora-21/test_020_simple_proxyvm_nm
network/VmNetworking_fedora-21/test_030_firewallvm_firewall
network/VmNetworking_fedora-21/test_040_inter_vm
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_000_start_shutdown
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_010_run_gui_app
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_050_qrexec_simple_eof
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_051_qrexec_simple_eof_reverse
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_052_qrexec_vm_service_eof
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_053_qrexec_vm_service_eof_reverse
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_060_qrexec_exit_code_dom0
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_065_qrexec_exit_code_vm
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_100_qrexec_filecopy
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_110_qrexec_filecopy_deny
vm_qrexec_gui/TC_00_AppVM_fedora-21/test_120_qrexec_filecopy_self
vm_qrexec_gui/TC_20_DispVM_fedora-21/test_000_prepare_dvm
vm_qrexec_gui/TC_20_DispVM_fedora-21/test_010_simple_dvm_run
vm_qrexec_gui/TC_20_DispVM_fedora-21/test_020_gui_app
vm_qrexec_gui/TC_20_DispVM_fedora-21/test_030_edit_file
[user@dom0 ~]$ sudo -E python3 -m qubes.tests.run -v `python3 -m qubes.tests.run -l | grep fedora-21`
Example test run:
.. figure:: /attachment/doc/snapshot-tests2.png
:alt: snapshot-tests2.png
Tests are also compatible with nose2 test runner, so you can use this instead:
.. code:: bash
sudo systemctl stop qubesd; sudo -E nose2 -v --plugin nose2.plugins.loader.loadtests qubes.tests; sudo systemctl start qubesd
This may be especially useful together with various nose2 plugins to store test results (for example ``nose2.plugins.junitxml``), to ease presenting results. This is what we use on `OpenQA <https://open.qa/>`__.
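For instance, a sketch that additionally writes a JUnit XML report via nose2's bundled ``junitxml`` plugin (by default it writes ``nose2-junit.xml`` to the current directory; verify the exact flag and behaviour against your nose2 version):

.. code:: bash

   sudo systemctl stop qubesd
   sudo -E nose2 -v --plugin nose2.plugins.loader.loadtests --junit-xml qubes.tests
   sudo systemctl start qubesd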
Unit testing inside a VM
^^^^^^^^^^^^^^^^^^^^^^^^
Many unit tests will also work inside a VM. However, all of the tests requiring a dedicated VM (mostly the integration tests) will be skipped.
Whereas integration tests are mostly stored in the `qubes-core-admin <https://github.com/QubesOS/qubes-core-admin>`__ repository, unit tests can be found in each of the Qubes OS repositories.
For example, to run the ``qubes-core-admin`` unit tests, you currently have to clone at least the `qubes-core-admin <https://github.com/QubesOS/qubes-core-admin>`__ repository and its dependency `qubes-core-qrexec <https://github.com/QubesOS/qubes-core-qrexec>`__, in the branches that you want to test.
The example below, however, assumes that you have set up a build environment as described in the :doc:`Qubes Builder documentation </developer/building/qubes-builder-v2>`.
Assuming you cloned the ``qubes-builder`` repository to your home directory inside a Fedora VM, you can use the following commands to run the unit tests:
.. code:: bash
cd ~
sudo dnf install python3-pip lvm2 python35 python3-virtualenv
virtualenv -p /usr/bin/python35 python35
source python35/bin/activate
python3 -V
cd ~/qubes-builder/qubes-src/core-admin
pip3 install -r ci/requirements.txt
export PYTHONPATH=../core-qrexec:test-packages
./run-tests
To run only the tests related to e.g. ``lvm``, you may use:
``./run-tests -v $(python3 -m qubes.tests.run -l | grep lvm)``
You can later re-use the created virtual environment, including all of the packages installed via ``pip3``, with ``source ~/python35/bin/activate``.
We recommend running the unit tests with the Python version that the code is meant to run with in dom0 (3.5 was just an example above). For instance, the ``release4.0`` (Qubes 4.0) branch is intended to run with Python 3.5, whereas the Qubes 4.1 branch (``master`` as of 2020-07) is intended to run with Python 3.7 or higher. You can always check your dom0 installation for the Python version of the current stable branch.
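For instance, a quick way to check in dom0 (the reported version will differ per release):

.. code:: bash

   [user@dom0 ~]$ python3 --version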
Tests configuration
^^^^^^^^^^^^^^^^^^^
Test runs can be altered using environment variables:
- ``DEFAULT_LVM_POOL`` - LVM thin pool to use for tests, in ``VolumeGroup/ThinPool`` format
- ``QUBES_TEST_PCIDEV`` - PCI device to be used in PCI passthrough tests (for example, a sound card)
- ``QUBES_TEST_TEMPLATES`` - space-separated list of templates to run tests on; if not set, all installed templates are tested
- ``QUBES_TEST_LOAD_ALL`` - load all tests (including tests for all templates) when relevant test modules are imported; this needs to be set for test runners not supporting `load_tests protocol <https://docs.python.org/3/library/unittest.html#load-tests-protocol>`__
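As a sketch, a hypothetical invocation combining some of these variables (the pool and template names are placeholders; adjust them to your system):

.. code:: bash

   sudo -E DEFAULT_LVM_POOL=qubes_dom0/vm-pool \
        QUBES_TEST_TEMPLATES='fedora-32 debian-10' \
        python3 -m qubes.tests.run -v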
Adding a new test to core-admin
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
After adding a new unit test to `core-admin/qubes/tests <https://github.com/QubesOS/qubes-core-admin/tree/master/qubes/tests>`__, you'll have to include it in `core-admin/qubes/tests/__init__.py <https://github.com/QubesOS/qubes-core-admin/tree/master/qubes/tests/__init__.py>`__.
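For orientation, a minimal sketch of what a hypothetical ``qubes/tests/example.py`` could contain (the class and test names are illustrative; ``qubes.tests.QubesTestCase`` is the base class used by the existing tests):

.. code:: python

   # qubes/tests/example.py - hypothetical new test module
   import qubes.tests


   class TC_00_Example(qubes.tests.QubesTestCase):

       def test_000_sanity(self):
           # trivial placeholder standing in for real test logic
           self.assertTrue(True)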
Editing ``__init__.py``
^^^^^^^^^^^^^^^^^^^^^^^
You'll also need to add your test at the bottom of the ``__init__.py`` file, in the ``load_tests`` function, in the ``for`` loop over ``modname``. Again, given the hypothetical ``example.py`` test:
.. code:: python
for modname in (
    'qubes.tests.basic',
    'qubes.tests.dom0_update',
    'qubes.tests.network',
    'qubes.tests.vm_qrexec_gui',
    'qubes.tests.backup',
    'qubes.tests.backupcompatibility',
    'qubes.tests.regressions',
    'qubes.tests.example',  # This is our newly added test
):
Testing PyQt applications
^^^^^^^^^^^^^^^^^^^^^^^^^
When testing (Py)Qt applications, it's useful to create a separate QApplication object for each test. But the Qt framework does not allow multiple QApplication objects in the same process at the same time. This means it's critical to reliably clean up the previous instance before creating a new one. This turns out to be a non-trivial task, especially if *any* test uses the event loop. Failure to perform proper cleanup in many cases results in SEGV. Below you can find the steps for a proper cleanup:
.. code:: python
import asyncio
import gc
import unittest

import quamash
# the original snippet assumes a Qt binding providing QtGui.QApplication
# (PyQt4-style); with Qt5 bindings, QApplication lives in QtWidgets
from PyQt4 import QtGui

class SomeTestCase(unittest.TestCase):

    def setUp(self):
        [...]
        # force the "cleanlooks" style; the default one on Xfce (GtkStyle) uses
        # a static variable internally and caches pointers to later-destroyed
        # objects (result: SEGV)
        self.qtapp = QtGui.QApplication(["test", "-style", "cleanlooks"])
        # construct the event loop even if this particular test doesn't use it,
        # otherwise events with qtapp references will be queued there anyway and
        # the first test that actually uses the event loop will try to dereference
        # (already destroyed) objects, resulting in SEGV
        self.loop = quamash.QEventLoop(self.qtapp)

    def tearDown(self):
        [...]
        # process any pending events before destroying the object
        self.qtapp.processEvents()
        # queue destroying the QApplication object; do that for any other Qt-
        # related objects here too
        self.qtapp.deleteLater()
        # process any pending events (other than the just-queued destroy), just in case
        self.qtapp.processEvents()
        # execute the main loop, which will process all events,
        # _including the just-queued destroy_
        self.loop.run_until_complete(asyncio.sleep(0))
        # at this point all Qt objects are destroyed; clean up the remaining
        # references, del any other Qt objects here too
        self.loop.close()
        del self.qtapp
        del self.loop
        gc.collect()
Automated tests with openQA
---------------------------
**URL:** https://openqa.qubes-os.org/

**Tests:** https://github.com/marmarek/openqa-tests-qubesos
Manually testing Qubes OS and its installation is a time-consuming process. We use `OpenQA <https://open.qa/>`__ to automate this process. It works by installing Qubes in KVM and interacting with it as a user would, including simulating mouse clicks and keyboard presses. Then, it checks the output to see whether various tests were passed, e.g. by comparing the virtual screen output to screenshots of a successful installation.
Using openQA to automatically test the Qubes installation process works as of Qubes 4.0-rc4 on 2018-01-26, provided that the versions of KVM and QEMU are new enough and the hardware has VT-x and EPT. KVM also supports nested virtualization, so HVM should theoretically work. In practice, however, either Xen or QEMU crashes when this is attempted. Nonetheless, PV works well, which is sufficient for automated installation testing.
Thanks to present and past donors who have provided the infrastructure for Qubes' openQA system with hardware that meets these requirements.
Looking for patterns in tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To better visualize patterns in tests, the `openqa_investigator <https://github.com/QubesOS/openqa-tests-qubesos/blob/master/utils/openqa_investigator.py>`__ script can be used. It consumes openQA test data and produces graph plots. Here is an example:
.. figure:: /attachment/doc/openqa-investigator-splitgpg-example.png
:alt: openqa-investigator-splitgpg-example.png
Some outputs:
- plot by tests
- plot by errors
- markdown
Some filters:
- filter by error
- filter by test name
Check out the script's help with ``python3 openqa_investigator.py --help`` to see all available options.


@@ -1,54 +0,0 @@
---
lang: en
layout: doc
permalink: /doc/mount-lvm-image/
ref: 46
title: How to mount LVM images
---
You want to read your LVM image (e.g., there is a problem where you can't start any VMs except dom0).
1: Make the image available for qubesdb.
From a dom0 terminal:
```bash
# Example: /dev/qubes_dom0/vm-debian-9-tmp-root
[user@dom0]$ dev=$(basename $(readlink /dev/YOUR_LVM_VG/YOUR_LVM_IMAGE))
[user@dom0]$ qubesdb-write /qubes-block-devices/$dev/desc "YOUR_LVM_IMAGE"
```
2: Create a new disposable VM
```bash
[user@dom0]$ qvm-run -v --dispvm=YOUR_DVM_TEMPLATE --service qubes.StartApp+xterm &
```
3: Attach the device to your newly created disposable VM
From the GUI, or from the command line:
```bash
[user@dom0]$ qvm-block attach NEWLY_CREATED_DISPVM dom0:$dev
```
4: Mount the partition you want, and do what you want with it
```bash
[user@dispXXXX]$ mount /dev/xvdiX /mnt/
```
5: Unmount and kill the VM
```bash
[user@dispXXXX]$ umount /mnt/
```
6: Remove the image from qubesdb
```bash
[user@dom0]$ qubesdb-rm /qubes-block-devices/$dev/
```
# References
Please consult this issue's [comment](https://github.com/QubesOS/qubes-issues/issues/4687#issuecomment-451626625).


@@ -0,0 +1,58 @@
=======================
How to mount LVM images
=======================
You want to read your LVM image (e.g., there is a problem where you can't start any VMs except dom0).
1: Make the image available for qubesdb. From a dom0 terminal:
.. code:: bash
# Example: /dev/qubes_dom0/vm-debian-9-tmp-root
[user@dom0]$ dev=$(basename $(readlink /dev/YOUR_LVM_VG/YOUR_LVM_IMAGE))
[user@dom0]$ qubesdb-write /qubes-block-devices/$dev/desc "YOUR_LVM_IMAGE"
2: Create a new disposable VM
.. code:: bash
[user@dom0]$ qvm-run -v --dispvm=YOUR_DVM_TEMPLATE --service qubes.StartApp+xterm &
3: Attach the device to your newly created disposable VM
From the GUI, or from the command line:
.. code:: bash
[user@dom0]$ qvm-block attach NEWLY_CREATED_DISPVM dom0:$dev
4: Mount the partition you want, and do what you want with it
.. code:: bash
[user@dispXXXX]$ mount /dev/xvdiX /mnt/
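If you are not sure which partition holds the filesystem, listing the block devices inside the disposable first may help (a minimal sketch; the ``xvdi`` device letter may differ on your system):

.. code:: bash

   [user@dispXXXX]$ lsblk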
5: Unmount and kill the VM
.. code:: bash
[user@dispXXXX]$ umount /mnt/
6: Remove the image from qubesdb
.. code:: bash
[user@dom0]$ qubesdb-rm /qubes-block-devices/$dev/
References
----------
Please consult this issue's `comment <https://github.com/QubesOS/qubes-issues/issues/4687#issuecomment-451626625>`__.


@@ -1,76 +0,0 @@
---
lang: en
layout: doc
permalink: /doc/safe-remote-ttys/
redirect_from:
- /en/doc/safe-remote-ttys/
ref: 49
title: Safe remote dom0 terminals
---
If you do not have working graphics in Dom0, then using a terminal can be quite annoying!
This was the case for the author while trying to debug PCI-passthrough of a machine's primary (only) GPU.
Your first thought might be to just allow network access to Dom0, enable ssh, and connect in remotely.
But this gravely violates the Qubes security model.
Instead, a better solution is to split the input and output paths of using a terminal.
Use your normal keyboard for input, but have the output go to a remote machine in a unidirectional manner.
To do this, we make use of script(1), qvm-run, and optionally your network transport of choice.
To a different VM
-----------------
As an example of forwarding terminal output to another VM on the same machine:
~~~
$ mkfifo /tmp/foo
$ qvm-run -p some-vm 'xterm -e "cat 0<&5" 5<&0' </tmp/foo >/dev/null 2>&1 &
$ script -f /tmp/foo
~~~
To a different machine
----------------------
In this case over SSH (from a network-connected VM):
~~~
$ mkfifo /tmp/foo
$ qvm-run -p some-vm \
'ssh user@host sh -c "DISPLAY=:0 xterm -e \"cat 0<&5\" 5<&0"' \
</tmp/foo >/dev/null 2>&1 &
$ script -f /tmp/foo
~~~
Note that no data received over SSH is ever treated as terminal input in Dom0.
The input path remains only from your trusted local keyboard.
Multiple terminals
------------------
For multiple terminals, you may find it easier to just use tmux than to try to blindly switch to the correct window.
Terminal size
-------------
It is up to you to ensure the sizes of the local and remote terminal are the same, otherwise things may display incorrectly (especially in interactive programs).
Depending on your shell, the size of your local (blind) terminal is likely stored in the `$LINES` and `$COLUMNS` variables.
~~~
$ echo $COLUMNS $LINES
80 24
~~~
A note on serial consoles
-------------------------
If your machine has a serial console, you may wish to use that, but note that a similar split-I/O model should be used to ensure Dom0 integrity.
If you use the serial console as normal (via e.g. getty on ttyX, and logging in as normal), then the machine at the end of the serial cable could compromise your machine!
Ideally, you would take input from your trusted keyboard, and only send the output over the serial cable via e.g. disabling getty and using:
~~~
script -f /dev/ttyS0
~~~
You don't even need to connect the TX pin.


@@ -0,0 +1,77 @@
==========================
Safe remote dom0 terminals
==========================
If you do not have working graphics in Dom0, then using a terminal can be quite annoying! This was the case for the author while trying to debug PCI-passthrough of a machine's primary (only) GPU.
Your first thought might be to just allow network access to Dom0, enable ssh, and connect in remotely. But this gravely violates the Qubes security model.
Instead, a better solution is to split the input and output paths of using a terminal. Use your normal keyboard for input, but have the output go to a remote machine in a unidirectional manner.
To do this, we make use of script(1), qvm-run, and optionally your network transport of choice.
To a different VM
-----------------
As an example of forwarding terminal output to another VM on the same machine:
.. code:: bash
$ mkfifo /tmp/foo
$ qvm-run -p some-vm 'xterm -e "cat 0<&5" 5<&0' </tmp/foo >/dev/null 2>&1 &
$ script -f /tmp/foo
To a different machine
----------------------
In this case over SSH (from a network-connected VM):
.. code:: bash
$ mkfifo /tmp/foo
$ qvm-run -p some-vm \
'ssh user@host sh -c "DISPLAY=:0 xterm -e \"cat 0<&5\" 5<&0"' \
</tmp/foo >/dev/null 2>&1 &
$ script -f /tmp/foo
Note that no data received over SSH is ever treated as terminal input in Dom0. The input path remains only from your trusted local keyboard.
Multiple terminals
------------------
For multiple terminals, you may find it easier to just use tmux than to try to blindly switch to the correct window.
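A minimal sketch of that approach, typed blindly at the local keyboard while the output is viewed remotely (assuming tmux is available in dom0):

.. code:: bash

   $ script -f /tmp/foo
   $ tmux    # then Ctrl-b c opens a new window, Ctrl-b n switches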
Terminal size
-------------
It is up to you to ensure the sizes of the local and remote terminal are the same, otherwise things may display incorrectly (especially in interactive programs). Depending on your shell, the size of your local (blind) terminal is likely stored in the ``$LINES`` and ``$COLUMNS`` variables.
.. code:: bash
$ echo $COLUMNS $LINES
80 24
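If they differ, one blind-friendly fix is to force the local terminal's size to match the remote viewer; a sketch using ``stty`` (the values are placeholders for the remote terminal's size):

.. code:: bash

   $ stty rows 24 cols 80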
A note on serial consoles
-------------------------
If your machine has a serial console, you may wish to use that, but note that a similar split-I/O model should be used to ensure Dom0 integrity. If you use the serial console as normal (via e.g. getty on ttyX, and logging in as normal), then the machine at the end of the serial cable could compromise your machine! Ideally, you would take input from your trusted keyboard, and only send the output over the serial cable via e.g. disabling getty and using:
.. code:: bash
script -f /dev/ttyS0
You don't even need to connect the TX pin.


@@ -1,236 +0,0 @@
---
lang: en
layout: doc
permalink: /doc/test-bench/
redirect_from:
- /en/doc/test-bench/
- /doc/TestBench/
- /wiki/TestBench/
ref: 44
title: How to set up a test bench
---
This guide shows how to set up a simple test bench that automatically tests the code you're about to push. It is written especially for the `core3` branch of the `core-admin.git` repo, but some ideas are universal.
We will set up a spare machine (bare metal, not virtual) that will host our experimental Dom0. We will communicate with it via Ethernet and SSH. This tutorial assumes you are familiar with [QubesBuilder](/doc/qubes-builder/) and have it set up and running flawlessly.
> **Notice:**
> This setup intentionally weakens some security properties in the testing system, so make sure you understand the risks and use it exclusively for testing.
## Setting up the Machine
### Install ISO
First, do a clean install from the `.iso` [you built](/doc/qubes-iso-building/) or grabbed elsewhere (for example [here](https://forum.qubes-os.org/t/qubesos-4-1-alpha-signed-weekly-builds/3601)).
### Enabling Network Access in Dom0
Internet access is intentionally disabled by default in dom0. But to ease the deployment process we will give it access. The following steps should be done in `dom0`.
> **Note:** the following assumes you have only one network card. If you have two, pick one and leave the other attached to `sys-net`.
1. Remove the network card (PCI device) from `sys-net`
2. Restart your computer (for the removal to take effect)
3. Install `dhcp-client` and `openssh-server` on your testbench's dom0.
4. Save the following script in `/home/user/bin/dom0_network.sh` and make it executable. It should enable your network card in dom0. *Be sure to adjust the script's variables to suit your needs.*
```bash
#!/bin/sh
# adjust this for your NIC (run lspci)
BDF=0000:02:00.0
# adjust this for your network driver
DRIVER=e1000e
prog=$(basename $0)
pciunbind() {
    local path
    path=/sys/bus/pci/devices/${1}/driver/unbind
    if ! [ -w ${path} ]; then
        echo "${prog}: Device ${1} not bound"
        return 1
    fi
    echo -n ${1} >${path}
}

pcibind() {
    local path
    path=/sys/bus/pci/drivers/${2}/bind
    if ! [ -w ${path} ]; then
        echo "${prog}: Driver ${2} not found"
        return 1
    fi
    echo ${1} >${path}
}

pciunbind ${BDF}
pcibind ${BDF} ${DRIVER}
sleep 1
dhclient
```
5. Configure your DHCP server so your testbench gets a static IP, and connect your machine to your local network. You should ensure that your testbench can reach the Internet.
6. You'll need to run the above script on every startup. To automate this, save the following systemd service as `/etc/systemd/system/dom0-network-direct.service`:
```
[Unit]
Description=Connect network to dom0
[Service]
Type=oneshot
ExecStart=/home/user/bin/dom0_network.sh
[Install]
WantedBy=multi-user.target
```
7. Then, enable and start the SSH Server and the script on boot:
```bash
sudo systemctl enable sshd
sudo systemctl start sshd
sudo systemctl enable dom0-network-direct
sudo systemctl start dom0-network-direct
```
> **Note:** If you want to install additional software in dom0 and your only network card was assigned to dom0, then _instead_ of the usual `sudo qubes-dom0-update <PACKAGE>` now you run `sudo dnf --setopt=reposdir=/etc/yum.repos.d install <PACKAGE>`.
### Install Tests and Their Dependencies
A regular Qubes installation isn't ready to run the full suite of tests. For example, in order to run the [Split GPG tests](https://github.com/QubesOS/qubes-app-linux-split-gpg/blob/4bc201bb70c011119eed19df25dc5b46120d04ed/tests/splitgpg/tests.py) you need to have the `qubes-gpg-split-tests` package installed in your app qubes.
Because of this, some additional configuration needs to be done to your testing environment. This can be done in an automated manner with the help of the [Salt](/doc/salt) configuration that provisions the [automated testing environment](/doc/automated-tests/).
The following commands should work for you, but do keep in mind that the provisioning scripts are designed for the [openQA environment](https://openqa.qubes-os.org/) and not your specific local testing system. Run the following in `dom0`:
```bash
# For future reference the following commands are an adaptation of
# https://github.com/marmarek/openqa-tests-qubesos/blob/master/tests/update.pm
# Install git
sudo qubes-dom0-update git || sudo dnf --setopt=reposdir=/etc/yum.repos.d install git
# Download the openQA automated testing environment Salt configuration
git clone https://github.com/marmarek/openqa-tests-qubesos/
cd openqa-tests-qubesos/extra-files
sudo cp -a system-tests/ /srv/salt/
sudo qubesctl top.enable system-tests
# Install the same configuration as the one in openQA
QUBES_VERSION=4.1
PILLAR_DIR=/srv/pillar/base/update
sudo mkdir -p $PILLAR_DIR
printf 'update:\n qubes_ver: '$QUBES_VERSION'\n' | sudo tee $PILLAR_DIR/init.sls
printf "base:\n '*':\n - update\n" | sudo tee $PILLAR_DIR/init.top
sudo qubesctl top.enable update pillar=True
# Apply states to dom0 and VMs
# NOTE: These commands can take several minutes (if not more) without showing output
sudo qubesctl --show-output state.highstate
sudo qubesctl --max-concurrency=2 --skip-dom0 --templates --show-output state.highstate
```
## Development VM
### SSH
Arrange the firewall so that you can reach the testbench from your `qubes-dev` VM. Generate an SSH key in `qubes-dev`:
~~~
ssh-keygen -t ecdsa -b 521
~~~
Add the following section in `.ssh/config` in `qubes-dev`:
~~~
Host testbench
# substitute username in testbench
User user
# substitute address of your testbench
HostName 192.168.123.45
~~~
#### Passwordless SSH Login
To log in to your testbench without entering a password every time, copy your newly generated public key (`id_ecdsa.pub`) to `~/.ssh/authorized_keys` on your testbench. You can do this easily by running this command on `qubes-dev`: `ssh-copy-id -i ~/.ssh/id_ecdsa.pub user@192.168.123.45` (substituting the actual username and address of your testbench).
### Scripting
This step is optional, but very helpful. Put these scripts somewhere in your `${PATH}`, like `/usr/local/bin`.
`qtb-runtests`:
```bash
#!/bin/sh
ssh testbench python -m qubes.tests.run
```
`qtb-install`:
```bash
#!/bin/sh
TMPDIR=/tmp/qtb-rpms
if [ $# -eq 0 ]; then
    echo "usage: $(basename $0) <rpmfile> ..."
    exit 2
fi

set -e

ssh testbench mkdir -p "${TMPDIR}"
scp "${@}" testbench:"${TMPDIR}" || echo "check if you have 'scp' installed on your testbench"

while [ $# -gt 0 ]; do
    ssh testbench sudo rpm -i --replacepkgs --replacefiles "${TMPDIR}/$(basename ${1})"
    shift
done
```
`qtb-iterate`:
```bash
#!/bin/sh
set -e
# substitute path to your builder installation
pushd ${HOME}/builder >/dev/null
# the following are needed only if you have sources outside builder
#rm -rf qubes-src/core-admin
#qb -c core-admin package fetch
qb -c core-admin -d host-fc41 prep build
# update your dom0 fedora distribution as appropriate
qtb-install qubes-src/core-admin/rpm/x86_64/qubes-core-dom0-*.rpm
qtb-runtests
```
### Hooking git
I (woju) use these two git hooks. They ensure tests are passing (or are marked as expected failures) when committing and pushing. For committing, it is only possible to run tests that can be executed from the git repo (even if the rest were available, I probably wouldn't want to run them). For pushing, I also install the RPM and run the tests on the testbench.
`core-admin/.git/hooks/pre-commit` (you may also retain the default hook, omitted here for readability):
```bash
#!/bin/sh
set -e
python -c "import sys, qubes.tests.run; sys.exit(not qubes.tests.run.main())"
```
`core-admin/.git/hooks/pre-push`:
```bash
#!/bin/sh
exec qtb-iterate
```


@@ -0,0 +1,266 @@
==========================
How to set up a test bench
==========================
This guide shows how to set up a simple test bench that automatically tests the code you're about to push. It is written especially for the ``core3`` branch of the ``core-admin.git`` repo, but some ideas are universal.
We will set up a spare machine (bare metal, not virtual) that will host our experimental Dom0. We will communicate with it via Ethernet and SSH. This tutorial assumes you are familiar with :doc:`QubesBuilder </developer/building/qubes-builder>` and have it set up and running flawlessly.
**Notice:** This setup intentionally weakens some security properties in the testing system, so make sure you understand the risks and use it exclusively for testing.
Setting up the Machine
----------------------
Install ISO
^^^^^^^^^^^
First, do a clean install from the ``.iso`` :doc:`you built </developer/building/qubes-iso-building>` or grabbed elsewhere (for example `here <https://forum.qubes-os.org/t/qubesos-4-1-alpha-signed-weekly-builds/3601>`__).
Enabling Network Access in Dom0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Internet access is intentionally disabled by default in dom0. But to ease the deployment process we will give it access. The following steps should be done in ``dom0``.
**Note:** the following assumes you have only one network card. If you have two, pick one and leave the other attached to ``sys-net``.
1. Remove the network card (PCI device) from ``sys-net``
2. Restart your computer (for the removal to take effect)
3. Install ``dhcp-client`` and ``openssh-server`` on your testbench's dom0.
4. Save the following script in ``/home/user/bin/dom0_network.sh`` and make it executable. It should enable your network card in dom0. *Be sure to adjust the script's variables to suit your needs.*
.. code:: bash
#!/bin/sh
# adjust this for your NIC (run lspci)
BDF=0000:02:00.0
# adjust this for your network driver
DRIVER=e1000e
prog=$(basename $0)
pciunbind() {
    local path
    path=/sys/bus/pci/devices/${1}/driver/unbind
    if ! [ -w ${path} ]; then
        echo "${prog}: Device ${1} not bound"
        return 1
    fi
    echo -n ${1} >${path}
}

pcibind() {
    local path
    path=/sys/bus/pci/drivers/${2}/bind
    if ! [ -w ${path} ]; then
        echo "${prog}: Driver ${2} not found"
        return 1
    fi
    echo ${1} >${path}
}

pciunbind ${BDF}
pcibind ${BDF} ${DRIVER}
sleep 1
dhclient
5. Configure your DHCP server so your testbench gets a static IP, and connect your machine to your local network. You should ensure that your testbench can reach the Internet.
6. You'll need to run the above script on every startup. To automate this, save the following systemd service as ``/etc/systemd/system/dom0-network-direct.service``:
.. code:: ini
[Unit]
Description=Connect network to dom0
[Service]
Type=oneshot
ExecStart=/home/user/bin/dom0_network.sh
[Install]
WantedBy=multi-user.target
7. Then, enable and start the SSH Server and the script on boot:
.. code:: bash
sudo systemctl enable sshd
sudo systemctl start sshd
sudo systemctl enable dom0-network-direct
sudo systemctl start dom0-network-direct
**Note:** If you want to install additional software in dom0 and your only network card was assigned to dom0, then *instead* of the usual ``sudo qubes-dom0-update <PACKAGE>`` now you run ``sudo dnf --setopt=reposdir=/etc/yum.repos.d install <PACKAGE>``.
Install Tests and Their Dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A regular Qubes installation isn't ready to run the full suite of tests. For example, in order to run the `Split GPG tests <https://github.com/QubesOS/qubes-app-linux-split-gpg/blob/4bc201bb70c011119eed19df25dc5b46120d04ed/tests/splitgpg/tests.py>`__ you need to have the ``qubes-gpg-split-tests`` package installed in your app qubes.
Because of this, some additional configuration needs to be done to your testing environment. This can be done in an automated manner with the help of the :doc:`Salt </user/advanced-topics/salt>` configuration that provisions the :doc:`automated testing environment </developer/debugging/automated-tests>`.
The following commands should work for you, but do keep in mind that the provisioning scripts are designed for the `openQA environment <https://openqa.qubes-os.org/>`__ and not your specific local testing system. Run the following in ``dom0``:
.. code:: bash
# For future reference the following commands are an adaptation of
# https://github.com/marmarek/openqa-tests-qubesos/blob/master/tests/update.pm
# Install git
sudo qubes-dom0-update git || sudo dnf --setopt=reposdir=/etc/yum.repos.d install git
# Download the openQA automated testing environment Salt configuration
git clone https://github.com/marmarek/openqa-tests-qubesos/
cd openqa-tests-qubesos/extra-files
sudo cp -a system-tests/ /srv/salt/
sudo qubesctl top.enable system-tests
# Install the same configuration as the one in openQA
QUBES_VERSION=4.1
PILLAR_DIR=/srv/pillar/base/update
sudo mkdir -p $PILLAR_DIR
printf 'update:\n qubes_ver: '$QUBES_VERSION'\n' | sudo tee $PILLAR_DIR/init.sls
printf "base:\n '*':\n - update\n" | sudo tee $PILLAR_DIR/init.top
sudo qubesctl top.enable update pillar=True
# Apply states to dom0 and VMs
# NOTE: These commands can take several minutes (if not more) without showing output
sudo qubesctl --show-output state.highstate
sudo qubesctl --max-concurrency=2 --skip-dom0 --templates --show-output state.highstate
Development VM
--------------
SSH
^^^
Arrange the firewall so that you can reach the testbench from your ``qubes-dev`` VM. Generate an SSH key in ``qubes-dev``:
.. code:: bash
ssh-keygen -t ecdsa -b 521
Add the following section in ``.ssh/config`` in ``qubes-dev``:
.. code:: bash
Host testbench
# substitute username in testbench
User user
# substitute address of your testbench
HostName 192.168.123.45
Passwordless SSH Login
^^^^^^^^^^^^^^^^^^^^^^
To log in to your testbench without entering a password every time, copy your newly generated public key (``id_ecdsa.pub``) to ``~/.ssh/authorized_keys`` on your testbench. You can do this easily by running this command on ``qubes-dev``: ``ssh-copy-id -i ~/.ssh/id_ecdsa.pub user@192.168.123.45`` (substituting the actual username and address of your testbench).
Scripting
^^^^^^^^^
This step is optional, but very helpful. Put these scripts somewhere in your ``${PATH}``, like ``/usr/local/bin``.
``qtb-runtests``:
.. code:: bash
#!/bin/sh
ssh testbench python -m qubes.tests.run
``qtb-install``:
.. code:: bash
#!/bin/sh
TMPDIR=/tmp/qtb-rpms
if [ $# -eq 0 ]; then
    echo "usage: $(basename $0) <rpmfile> ..."
    exit 2
fi

set -e

ssh testbench mkdir -p "${TMPDIR}"
scp "${@}" testbench:"${TMPDIR}" || echo "check if you have 'scp' installed on your testbench"

while [ $# -gt 0 ]; do
    ssh testbench sudo rpm -i --replacepkgs --replacefiles "${TMPDIR}/$(basename ${1})"
    shift
done
``qtb-iterate``:
.. code:: bash
#!/bin/sh
set -e
# substitute path to your builder installation
pushd ${HOME}/builder >/dev/null
# the following are needed only if you have sources outside builder
#rm -rf qubes-src/core-admin
#qb -c core-admin package fetch
qb -c core-admin -d host-fc41 prep build
# update your dom0 fedora distribution as appropriate
qtb-install qubes-src/core-admin/rpm/x86_64/qubes-core-dom0-*.rpm
qtb-runtests
Hooking git
^^^^^^^^^^^
I (woju) use these two git hooks. They ensure tests are passing (or are marked as expected failures) when committing and pushing. For committing, it is only possible to run tests that can be executed from the git repo (even if the rest were available, I probably wouldn't want to run them). For pushing, I also install the RPM and run the tests on the testbench.
``core-admin/.git/hooks/pre-commit`` (you may also retain the default hook, omitted here for readability):
.. code:: bash
#!/bin/sh
set -e
python -c "import sys, qubes.tests.run; sys.exit(not qubes.tests.run.main())"
``core-admin/.git/hooks/pre-push``:
.. code:: bash
#!/bin/sh
exec qtb-iterate


@@ -1,226 +0,0 @@
---
lang: en
layout: doc
permalink: /doc/vm-interface/
redirect_from:
- /en/doc/vm-interface/
- /doc/VMInterface/
- /doc/SystemDoc/VMInterface/
- /wiki/SystemDoc/VMInterface/
ref: 47
title: Qube configuration interface
---
Qubes VMs have some settings set by dom0 based on the VM's configuration. There are multiple configuration channels, which include:
- QubesDB
- XenStore (in Qubes 2; the same data as in QubesDB, but with keys lacking the leading `/`)
- Qubes RPC (called at VM startup, or when the configuration changes)
- GUI protocol
## QubesDB
### Keys exposed by dom0 to VM
- `/qubes-base-template` - base template
- `/qubes-vm-type` - VM type, the same as `type` field in `qvm-prefs`. One of `AppVM`, `ProxyVM`, `NetVM`, `TemplateVM`, `HVM`, `TemplateHVM`
- `/qubes-vm-updatable` - flag whether VM is updatable (whether changes in root.img will survive VM restart). One of `True`, `False`
- `/qubes-vm-persistence` - which data persist between VM restarts:
- `full` - all disks
- `rw-only` - only `/rw` disk
- `none` - none
- `/qubes-timezone` - name of timezone based on dom0 timezone. For example `Europe/Warsaw`
- `/qubes-keyboard` (deprecated in R4.1) - keyboard layout based on the dom0 layout. Its syntax is suitable for the `xkbcomp` command (after expanding escape sequences like `\n` or `\t`). This is meant only as a default value; the VM can ignore this option and choose its own keyboard layout (this is what the keyboard setting from Qubes Manager does). This entry is created as part of gui-daemon initialization (so it is not available when gui-daemon is disabled, or not started yet).
- `/keyboard-layout` - keyboard layout based on the GuiVM layout. Its syntax can be `layout+variant+options`, `layout+variant`, `layout++options` or simply `layout`. For example, `fr+oss`, `pl++compose:caps` or `fr`. This is meant only as a default value; the VM can ignore this option and choose its own keyboard layout (this is what the keyboard setting from Qubes Manager does).
- `/qubes-debug-mode` - flag whether VM has debug mode enabled (qvm-prefs setting). One of `1`, `0`
- `/qubes-service/SERVICE_NAME` - subtree for VM services controlled from dom0 (using the `qvm-service` command or Qubes Manager). One of `1`, `0`. Note that not every service will be listed here; if an entry is missing, it means "use the VM default". A list of currently supported services is in the `qvm-service` man page.
- `/qubes-netmask` - network mask (only when VM has netvm set); currently hardcoded to "255.255.255.0"
- `/qubes-ip` - IP address for this VM (only when VM has netvm set)
- `/qubes-gateway` - default gateway IP (only when VM has netvm set); VM should add host route to this address directly via eth0 (or whatever default interface name is)
- `/qubes-primary-dns` - primary DNS address (only when VM has netvm set)
- `/qubes-secondary-dns` - secondary DNS address (only when VM has netvm set)
- `/qubes-netvm-gateway` - same as `qubes-gateway` in connected VMs (only when VM serves as network backend - ProxyVM and NetVM)
- `/qubes-netvm-netmask` - same as `qubes-netmask` in connected VMs (only when VM serves as network backend - ProxyVM and NetVM)
- `/qubes-netvm-network` - network address (only when VM serves as network backend - ProxyVM and NetVM); can be also calculated from qubes-netvm-gateway and qubes-netvm-netmask
- `/qubes-netvm-primary-dns` - same as `qubes-primary-dns` in connected VMs (only when VM serves as network backend - ProxyVM and NetVM); traffic sent to this IP on port 53 should be redirected to primary DNS server
- `/qubes-netvm-secondary-dns` - same as `qubes-secondary-dns` in connected VMs (only when VM serves as network backend - ProxyVM and NetVM); traffic sent to this IP on port 53 should be redirected to secondary DNS server
- `/guivm-windows-prefix` - title prefix for any window not originating from another qube. This means windows of applications running in GuiVM itself
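As a sketch, a few of these keys can be inspected from inside a VM with the standard QubesDB command-line tools (the values returned depend on the VM):

```bash
# inside a VM
qubesdb-read /qubes-vm-type        # e.g. AppVM
qubesdb-read /qubes-ip             # set only when the VM has a netvm
qubesdb-list /qubes-service/       # list service entries set from dom0
```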
#### Firewall rules in 3.x
QubesDB is also used to configure the firewall in ProxyVMs. Rules are stored in
a separate key for each target VM. Entries:
- `/qubes-iptables` - control entry - dom0 writing `reload` here signals `qubes-firewall` service to reload rules
- `/qubes-iptables-header` - rules not related to any particular VM; these should be applied before domains' rules
- `/qubes-iptables-domainrules/NNN` - rules for domain `NNN` (an arbitrary number)
in `iptables-save` format. Rules are self-contained - they fill the `FORWARD` iptables
chain and contain all required matches (source IP address, etc.), as well as the
final default action (`DROP`/`ACCEPT`)
After applying the rules, the VM may signal an error by writing a message to the
`/qubes-iptables-error` key. This does not exclude any other way of
communicating problems - like a popup.
#### Firewall rules in 4.x
QubesDB is also used to configure the firewall in ProxyVMs. Each rule is stored as
a separate entry, grouped by target VM:
- `/qubes-firewall/SOURCE_IP` - base tree under which rules are placed. All
rules there should be applied to filter traffic coming from `SOURCE_IP`. This
can be either IPv4 or IPv6 address.
Dom0 will do an empty write to this top-level entry after finishing a rules
update, so the VM can set up a watch here to trigger a rules reload.
- `/qubes-firewall/SOURCE_IP/policy` - default action if no rule matches:
`drop` or `accept`.
- `/qubes-firewall/SOURCE_IP/NNNN` - rule number `NNNN` - a decimal number,
padded with zeros. See below for the rule format. All the rules should be
applied in the order implied by those numbers. Note that QubesDB
itself does not impose any ordering (you need to sort the rules after
retrieving them). The first rule has number `0000`.
Each rule is a single QubesDB entry, consisting of `key=value` pairs separated
by spaces. QubesDB enforces a limit on the length of a single entry - 3072 bytes.
Possible options for a single rule:
- `action`, values: `accept`, `drop`; this is present in every rule
- `dst4`, value: destination IPv4 address with a mask; for example: `192.168.0.0/24`
- `dst6`, value: destination IPv6 address with a mask; for example: `2000::/3`
- `dsthost`, value: DNS hostname of destination host
- `proto`, values: `tcp`, `udp`, `icmp`
- `specialtarget`, value: one of the predefined targets; currently defined values:
  - `dns` - this option should match DNS traffic to the default DNS server (but
    not any other DNS server), on both TCP and UDP
- `dstports`, value: destination ports range separated with `-`, valid only
together with `proto=tcp` or `proto=udp`; for example `1-1024`, `80-80`
- `icmptype`, value: numeric (decimal) icmp message type, for example `8` for
echo request, valid only together with `proto=icmp`
- `dpi`, value: Deep Packet Inspection protocol (like: HTTP, SSL, SMB, SSH, SMTP) or the default 'NO' for no DPI, only packet filtering
Options must appear in the rule in the order listed above. Duplicated options
are forbidden.
A rule matches only when all predicates match. Only one of `dst4`, `dst6` or
`dsthost` can be used in a single rule.
If the tool applying the firewall encounters any parse error (unknown option, invalid
value, duplicated option, etc.), it should drop all the traffic coming from that `SOURCE_IP`,
regardless of the properly parsed rules.
Example valid rules:
- `action=accept dst4=8.8.8.8 proto=udp dstports=53-53`
- `action=drop dst6=2a00:1450:4000::/37 proto=tcp`
- `action=accept specialtarget=dns`
- `action=drop proto=tcp specialtarget=dns` - drop DNS queries sent using TCP
- `action=drop`
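To make the entry format concrete, here is a minimal Python sketch of splitting one such rule into its key/value options (illustrative only; it skips the option-ordering and duplicate checks described above):

```python
def parse_rule(entry: str) -> dict:
    """Split a rule like 'action=accept dst4=8.8.8.8 proto=udp dstports=53-53'
    into its key=value options."""
    return dict(pair.split('=', 1) for pair in entry.split())

assert parse_rule('action=accept specialtarget=dns') == {
    'action': 'accept', 'specialtarget': 'dns'}
```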
### Keys set by VM for passing info to dom0
- `memory/meminfo` (**xenstore**) - used memory (updated by qubes-meminfo-writer), input information for qmemman;
- Qubes 3.x format: 6 lines (EOL encoded as `\n`), each in format "FIELD: VALUE kB"; fields: `MemTotal`, `MemFree`, `Buffers`, `Cached`, `SwapTotal`, `SwapFree`; meaning the same as in `/proc/meminfo` in Linux.
- Qubes 4.0+ format: used memory size in the VM, in kbytes
- `/qubes-block-devices` - list of block devices exposed by this VM; each device (subdirectory) should be named in a way that lets the VM attach the device based on that name. Each should contain these entries:
- `desc` - device description (ASCII text)
- `size` - device size in bytes
- `mode` - default connection mode; `r` for read-only, `w` for read-write
- `/qubes-usb-devices` - list of USB devices exposed by this VM, each device (subdirectory) should contain:
- `desc` - device description (ASCII text)
- `usb-ver` - USB version (1, 2 or 3)
## Qubes RPC
Services called by dom0 to provide some VM configuration:
- `qubes.SetMonitorLayout` - provide a list of monitors, one per line. Each line contains: `width height X Y width_mm height_mm` (the physical dimensions - `width_mm` and `height_mm` - are optional)
- `qubes.WaitForSession` - called to wait for full VM startup
- `qubes.GetAppmenus` - receive appmenus from given VM (template); TODO: describe format here
- `qubes.GetImageRGBA` - receive image/application icon. Protocol:
1. Caller sends name of requested icon. This can be one of:
* `xdgicon:NAME` - search for NAME in standard icons theme
* `-` - get icon data from stdin (the caller), can be prefixed with format name, for example `png:-`
* file name
2. The service responds with image dimensions: width and height as
decimal numbers, separated with a space and terminated with an EOL marker; then
image data in RGBA format (32 bits per pixel)
- `qubes.SetDateTime` - set VM time, called periodically by dom0 (can be
triggered manually from dom0 by calling `qvm-sync-clock`). The service
receives one line at stdin - time in format of `date -u -Iseconds`, for
example `2015-07-31T16:10:43+0000`.
- `qubes.SetGuiMode` - called in HVM to switch between fullscreen and seamless
GUI mode. The service receives a single word on stdin - either `FULLSCREEN`
or `SEAMLESS`
- `qubes.ResizeDisk` - called to inform that underlying disk was resized.
Name of disk image is passed on standard input (`root`, `private`, `volatile`,
or other). This is used starting with Qubes 4.0.
Other Qrexec services installed by default:
- `qubes.Backup` - store Qubes backup. The service receives location chosen by
the user (one line, terminated by `\n`), the backup archive ([description of
backup format](/doc/BackupEmergencyRestoreV2/))
- `qubes.DetachPciDevice` - service called in reaction to `qvm-pci -d` call on
running VM. The service receives one word - BDF of device to detach. When the
service call ends, the device will be detached
- `qubes.Filecopy` - receive some files from other VM. Files sent in [qfile format](/doc/qfilecopy/)
- `qubes.OpenInVM` - open a file in the called VM. The service receives a single file on stdin (in
[qfile format](/doc/qfilecopy/)). After the file viewer/editor terminates, if
the file was modified, it is sent back (just the raw content, without any
headers); otherwise the service should terminate without sending anything.
This service is used by both the `qvm-open-in-vm` and `qvm-open-in-dvm` tools. When
called in a DispVM, service termination will trigger DispVM cleanup.
- `qubes.Restore` - retrieve Qubes backup. The service receives backup location
entered by the user (one line, terminated by `\n`), then should output backup
archive in [qfile format](/doc/qfilecopy/) (core-agent-linux component contains
`tar2qfile` utility to do the conversion)
- `qubes.SelectDirectory`, `qubes.SelectFile` - services which should show
file/directory selection dialog and return (to stdout) a single line
containing selected path, or nothing in the case of cancellation
- `qubes.SuspendPre` - service called in every VM with PCI device attached just
before system suspend
- `qubes.SuspendPost` - service called in every VM with PCI device attached just
after system resume
- `qubes.SyncNtpClock` - service called to trigger network time synchronization.
Service should synchronize local VM time and terminate when done.
- `qubes.WindowIconUpdater` - service called by VM to send icons of individual
windows. The protocol is a simple one-directional stream: the VM sends a window
ID followed by an icon in `qubes.GetImageRGBA` format, then the next window ID,
etc. The VM can send an icon for the same window multiple times to replace the
previous one (for example, for animated icons)
- `qubes.VMShell` - call any command in the VM; commands are passed one per line
- `qubes.VMShell+WaitForSession` waits for full VM startup first
- `qubes.VMExec` - call any command in the VM without using a shell; the command
needs to be passed as an argument and encoded as follows:
- the executable name and arguments are separated by `+`
- everything except alphanumeric characters, `.` and `_` needs to be
escaped
- bytes are escaped as `-HH` (where `HH` is hex code, capital letters only)
- `-` itself can be escaped as `--`
- example: to run `ls -a /home/user`, use
`qubes.VMExec+ls+--a+-2Fhome-2Fuser`
- `qubes.VMExecGUI` - a variant of `qubes.VMExec` that waits for full VM
startup first
Services called in GuiVM:
- `policy.Ask`, `policy.Notify` - confirmation prompt and notifications for
Qubes RPC calls, see [qrexec-policy implementation](/doc/qrexec-internals/#qrexec-policy-implementation)
for a detailed description.
Currently Qubes still calls a few tools in the VM directly, without using the
service abstraction. This will change in the future. Those tools are:
- `/usr/lib/qubes/qubes-download-dom0-updates.sh` - script to download updates (or new packages to be installed) for dom0 (`qubes-dom0-update` tool)
- `date -u -Iseconds` - called directly to retrieve time after calling `qubes.SyncNtpClock` service (`qvm-sync-clock` tool)
- `nm-online -x` - called before `qubes.SyncNtpClock` service call by `qvm-sync-clock` tool
- `resize2fs` - called to resize filesystem on /rw partition by `qvm-grow-private` tool
- `gpk-update-viewer` - called by Qubes Manager to display available updates in a TemplateVM
- `systemctl start qubes-update-check.timer` (and similarly stop) - called when enabling/disabling updates checking in given VM (`qubes-update-check` [qvm-service](/doc/qubes-service/))
Additionally, automatic tests extensively run various commands directly in VMs. We do not plan to change that.
## GUI protocol
GUI initialization includes passing the whole screen dimensions from dom0 to VM. This will most likely be overwritten by the qubes.SetMonitorLayout Qubes RPC call.

@ -0,0 +1,298 @@
============================
Qube configuration interface
============================
Qubes VMs have some settings set by dom0 based on VM settings. There are multiple configuration channels, which include:
- QubesDB
- XenStore (in Qubes 2, the same data as in QubesDB, but with keys lacking the leading ``/``)
- Qubes RPC (called at VM startup, or when configuration changed)
- GUI protocol
QubesDB
-------
Keys exposed by dom0 to VM
^^^^^^^^^^^^^^^^^^^^^^^^^^
- ``/qubes-base-template`` - base template
- ``/qubes-vm-type`` - VM type, the same as ``type`` field in ``qvm-prefs``. One of ``AppVM``, ``ProxyVM``, ``NetVM``, ``TemplateVM``, ``HVM``, ``TemplateHVM``
- ``/qubes-vm-updatable`` - flag whether VM is updatable (whether changes in root.img will survive VM restart). One of ``True``, ``False``
- ``/qubes-vm-persistence`` - which data persist between VM restarts:
- ``full`` - all disks
- ``rw-only`` - only ``/rw`` disk
- ``none`` - none
- ``/qubes-timezone`` - name of timezone based on dom0 timezone. For example ``Europe/Warsaw``
- ``/qubes-keyboard`` (deprecated in R4.1) - keyboard layout based on dom0 layout. Its syntax is suitable for the ``xkbcomp`` command (after expanding escape sequences like ``\n`` or ``\t``). This is meant only as a default value; the VM can ignore this option and choose its own keyboard layout (this is what the keyboard setting from Qubes Manager does). This entry is created as part of gui-daemon initialization (so it is not available when gui-daemon is disabled, or not started yet).
- ``/keyboard-layout`` - keyboard layout based on GuiVM layout. Its syntax can be ``layout+variant+options``, ``layout+variant``, ``layout++options`` or simply ``layout``. For example, ``fr+oss``, ``pl++compose:caps`` or ``fr``. This is meant only as a default value; the VM can ignore this option and choose its own keyboard layout (this is what the keyboard setting from Qubes Manager does). A parsing sketch follows this list.
- ``/qubes-debug-mode`` - flag whether VM has debug mode enabled (qvm-prefs setting). One of ``1``, ``0``
- ``/qubes-service/SERVICE_NAME`` - subtree for VM services controlled from dom0 (using the ``qvm-service`` command or Qubes Manager). One of ``1``, ``0``. Note that not every service will be listed here; if an entry is missing, it means “use the VM default”. A list of currently supported services is in the ``qvm-service`` man page.
- ``/qubes-netmask`` - network mask (only when VM has netvm set); currently hardcoded to “255.255.255.0”
- ``/qubes-ip`` - IP address for this VM (only when VM has netvm set)
- ``/qubes-gateway`` - default gateway IP (only when VM has netvm set); VM should add host route to this address directly via eth0 (or whatever default interface name is)
- ``/qubes-primary-dns`` - primary DNS address (only when VM has netvm set)
- ``/qubes-secondary-dns`` - secondary DNS address (only when VM has netvm set)
- ``/qubes-netvm-gateway`` - same as ``qubes-gateway`` in connected VMs (only when VM serves as network backend - ProxyVM and NetVM)
- ``/qubes-netvm-netmask`` - same as ``qubes-netmask`` in connected VMs (only when VM serves as network backend - ProxyVM and NetVM)
- ``/qubes-netvm-network`` - network address (only when VM serves as network backend - ProxyVM and NetVM); can also be calculated from qubes-netvm-gateway and qubes-netvm-netmask
- ``/qubes-netvm-primary-dns`` - same as ``qubes-primary-dns`` in connected VMs (only when VM serves as network backend - ProxyVM and NetVM); traffic sent to this IP on port 53 should be redirected to primary DNS server
- ``/qubes-netvm-secondary-dns`` - same as ``qubes-secondary-dns`` in connected VMs (only when VM serves as network backend - ProxyVM and NetVM); traffic sent to this IP on port 53 should be redirected to secondary DNS server
- ``/guivm-windows-prefix`` - title prefix for any window not originating from another qube. This means windows of applications running in GuiVM itself
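
As a concrete illustration of the ``/keyboard-layout`` syntax above, here is a minimal Python sketch of how a VM-side agent might split such a value. The entry name and the example values come from the list above; the helper itself is only an illustration, not the reference implementation.

.. code:: python

   def parse_keyboard_layout(entry: str):
       """Split a /keyboard-layout value like 'fr+oss' or 'pl++compose:caps'
       into (layout, variant, options); empty fields become None."""
       layout, _, rest = entry.partition("+")
       variant, _, options = rest.partition("+")
       return layout, variant or None, options or None

   # The examples given above:
   assert parse_keyboard_layout("fr+oss") == ("fr", "oss", None)
   assert parse_keyboard_layout("pl++compose:caps") == ("pl", None, "compose:caps")
   assert parse_keyboard_layout("fr") == ("fr", None, None)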
Firewall rules in 3.x
^^^^^^^^^^^^^^^^^^^^^
QubesDB is also used to configure the firewall in ProxyVMs. Rules are stored in a separate key for each target VM. Entries:
- ``/qubes-iptables`` - control entry - dom0 writing ``reload`` here signals ``qubes-firewall`` service to reload rules
- ``/qubes-iptables-header`` - rules not related to any particular VM, should be applied before domains rules
- ``/qubes-iptables-domainrules/NNN`` - rules for domain ``NNN`` (arbitrary number) in ``iptables-save`` format. Rules are self-contained: they fill the ``FORWARD`` iptables chain and contain all required matches (source IP address, etc.), as well as the final default action (``DROP``/``ACCEPT``)
After applying the rules, the VM may signal an error by writing a message to the ``/qubes-iptables-error`` key. This does not exclude any other way of communicating problems - like a popup.
Firewall rules in 4.x
^^^^^^^^^^^^^^^^^^^^^
QubesDB is also used to configure the firewall in ProxyVMs. Each rule is stored as a separate entry, grouped by target VM:
- ``/qubes-firewall/SOURCE_IP`` - base tree under which rules are placed. All rules there should be applied to filter traffic coming from ``SOURCE_IP``. This can be either an IPv4 or IPv6 address. Dom0 will do an empty write to this top-level entry after finishing a rules update, so the VM can set up a watch here to trigger a rules reload.
- ``/qubes-firewall/SOURCE_IP/policy`` - default action if no rule matches: ``drop`` or ``accept``.
- ``/qubes-firewall/SOURCE_IP/NNNN`` - rule number ``NNNN`` - a decimal number, padded with zeros. See below for the rule format. All the rules should be applied in the order implied by those numbers. Note that QubesDB itself does not impose any ordering (you need to sort the rules after retrieving them). The first rule has number ``0000``.
Each rule is a single QubesDB entry, consisting of ``key=value`` pairs separated by spaces. QubesDB enforces a limit of 3072 bytes on a single entry. Possible options for a single rule:
- ``action``, values: ``accept``, ``drop``; this is present in every rule
- ``dst4``, value: destination IPv4 address with a mask; for example: ``192.168.0.0/24``
- ``dst6``, value: destination IPv6 address with a mask; for example: ``2000::/3``
- ``dsthost``, value: DNS hostname of destination host
- ``proto``, values: ``tcp``, ``udp``, ``icmp``
- ``specialtarget``, value: one of the predefined targets; currently defined values:
- ``dns`` - this option matches DNS traffic to the default DNS server (but not to any other DNS server), on both TCP and UDP
- ``dstports``, value: destination port range separated with ``-``, valid only together with ``proto=tcp`` or ``proto=udp``; for example ``1-1024``, ``80-80``
- ``icmptype``, value: numeric (decimal) ICMP message type, for example ``8`` for echo request, valid only together with ``proto=icmp``
- ``dpi``, value: Deep Packet Inspection protocol (like: HTTP, SSL, SMB, SSH, SMTP) or the default ``NO``, meaning no DPI, only packet filtering
Options must appear in the rule in the order listed above. Duplicated options are forbidden.
A rule matches only when all predicates match. Only one of ``dst4``, ``dst6`` or ``dsthost`` can be used in a single rule.
If the tool applying the firewall encounters any parse error (unknown option, invalid value, duplicated option, etc.), it should drop all traffic coming from that ``SOURCE_IP``, regardless of any properly parsed rules. A parsing sketch follows the examples below.
Example valid rules:
- ``action=accept dst4=8.8.8.8 proto=udp dstports=53-53``
- ``action=drop dst6=2a00:1450:4000::/37 proto=tcp``
- ``action=accept specialtarget=dns``
- ``action=drop proto=tcp specialtarget=dns`` - drop DNS queries sent using TCP
- ``action=drop``
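
As a concrete illustration of the rule format and the fail-closed requirement, here is a minimal Python parser for a single rule entry. The option table comes from the list above; the function name and error handling are our own sketch, not the reference implementation (the actual enforcement lives in the VM's ``qubes-firewall`` service). Remember that entries retrieved from QubesDB must first be sorted by their ``NNNN`` key.

.. code:: python

   # Options in the only order they may appear; duplicates are forbidden.
   RULE_OPTIONS = ("action", "dst4", "dst6", "dsthost", "proto",
                   "specialtarget", "dstports", "icmptype", "dpi")

   def parse_rule(entry: str) -> dict:
       """Parse one /qubes-firewall/SOURCE_IP/NNNN entry.

       Raises ValueError on malformed input; per the spec, the caller
       must then drop all traffic from SOURCE_IP (fail closed).
       """
       rule = {}
       last_idx = -1
       for pair in entry.split(" "):
           key, sep, value = pair.partition("=")
           if not sep or key not in RULE_OPTIONS:
               raise ValueError("unknown or malformed option: %r" % pair)
           idx = RULE_OPTIONS.index(key)
           if idx <= last_idx:  # duplicated or out-of-order option
               raise ValueError("duplicated or misordered option: %r" % key)
           last_idx = idx
           rule[key] = value
       if rule.get("action") not in ("accept", "drop"):
           raise ValueError("every rule must contain action=accept|drop")
       if sum(k in rule for k in ("dst4", "dst6", "dsthost")) > 1:
           raise ValueError("only one of dst4/dst6/dsthost is allowed")
       return rule

   parse_rule("action=accept dst4=8.8.8.8 proto=udp dstports=53-53")

Per-option value validation (address syntax, port ranges, and so on) is omitted here; a real implementation must treat those errors the same fail-closed way.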
Keys set by VM for passing info to dom0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- ``memory/meminfo`` (**xenstore**) - used memory (updated by qubes-meminfo-writer), input information for qmemman;
- Qubes 3.x format: 6 lines (EOL encoded as ``\n``), each in format “FIELD: VALUE kB”; fields: ``MemTotal``, ``MemFree``, ``Buffers``, ``Cached``, ``SwapTotal``, ``SwapFree``; meaning the same as in ``/proc/meminfo`` in Linux.
- Qubes 4.0+ format: used memory size in the VM, in kbytes
- ``/qubes-block-devices`` - list of block devices exposed by this VM; each device (subdirectory) should be named in a way that allows the VM to attach the device based on that name (see the publishing sketch after this list). Each should contain these entries:
- ``desc`` - device description (ASCII text)
- ``size`` - device size in bytes
- ``mode`` - default connection mode; ``r`` for read-only, ``w`` for read-write
- ``/qubes-usb-devices`` - list of USB devices exposed by this VM, each device (subdirectory) should contain:
- ``desc`` - device description (ASCII text)
- ``usb-ver`` - USB version (1, 2 or 3)
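
For the VM side, here is a sketch of publishing one block device under ``/qubes-block-devices``, assuming the Python QubesDB bindings shipped in Qubes VMs (the module name and ``write()`` signature are an assumption here; the ``qubesdb-write`` command-line tool can be used equivalently):

.. code:: python

   import qubesdb  # Python QubesDB bindings (assumed available in the VM)

   def publish_block_device(name, desc, size, mode="r"):
       """Expose one block device as /qubes-block-devices/<name>/{desc,size,mode}.

       <name> must allow the device to be attached based on it (e.g. 'sda');
       the three entries below are exactly the ones listed above.
       """
       qdb = qubesdb.QubesDB()
       base = "/qubes-block-devices/" + name
       qdb.write(base + "/desc", desc)       # ASCII device description
       qdb.write(base + "/size", str(size))  # device size in bytes
       qdb.write(base + "/mode", mode)       # 'r' read-only, 'w' read-write

   publish_block_device("sda", "USB pendrive", 16 * 10**9, mode="w")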
Qubes RPC
---------
Services called by dom0 to provide some VM configuration:
- ``qubes.SetMonitorLayout`` - provide a list of monitors, one per line. Each line contains: ``width height X Y width_mm height_mm`` (the physical dimensions - ``width_mm`` and ``height_mm`` - are optional)
- ``qubes.WaitForSession`` - called to wait for full VM startup
- ``qubes.GetAppmenus`` - receive appmenus from given VM (template); TODO: describe format here
- ``qubes.GetImageRGBA`` - receive image/application icon. Protocol:
1. Caller sends name of requested icon. This can be one of:
- ``xdgicon:NAME`` - search for NAME in standard icons theme
- ``-`` - get icon data from stdin (the caller), can be prefixed with format name, for example ``png:-``
- file name
2. The service responds with image dimensions: width and height as decimal numbers, separated with a space and terminated with an EOL marker; then image data in RGBA format (32 bits per pixel). A reader sketch follows this list.
- ``qubes.SetDateTime`` - set VM time, called periodically by dom0 (can be triggered manually from dom0 by calling ``qvm-sync-clock``). The service receives one line at stdin - time in format of ``date -u -Iseconds``, for example ``2015-07-31T16:10:43+0000``.
- ``qubes.SetGuiMode`` - called in HVM to switch between fullscreen and seamless GUI mode. The service receives a single word on stdin - either ``FULLSCREEN`` or ``SEAMLESS``
- ``qubes.ResizeDisk`` - called to inform that underlying disk was resized. Name of disk image is passed on standard input (``root``, ``private``, ``volatile``, or other). This is used starting with Qubes 4.0.
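
For the ``qubes.GetImageRGBA`` response format described above, a small Python reader may make the framing clearer. This is only a sketch of the caller's side; the binary stream would typically come from a qrexec call to the service (that setup is assumed, not shown):

.. code:: python

   def read_image_rgba(stream):
       """Read one qubes.GetImageRGBA response from a binary stream:
       a newline-terminated ASCII header "width height", followed by
       width*height*4 bytes of RGBA pixel data (32 bits per pixel)."""
       header = b""
       while not header.endswith(b"\n"):
           byte = stream.read(1)
           if not byte:
               raise EOFError("truncated header")
           header += byte
       width, height = (int(n) for n in header.split())
       needed = width * height * 4
       pixels = b""
       while len(pixels) < needed:  # read() may return short chunks
           chunk = stream.read(needed - len(pixels))
           if not chunk:
               raise EOFError("truncated pixel data")
           pixels += chunk
       return width, height, pixels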
Other Qrexec services installed by default:
- ``qubes.Backup`` - store Qubes backup. The service receives location chosen by the user (one line, terminated by ``\n``), the backup archive (:doc:`description of backup format </user/how-to-guides/backup-emergency-restore-v2>`)
- ``qubes.DetachPciDevice`` - service called in reaction to ``qvm-pci -d`` call on running VM. The service receives one word - BDF of device to detach. When the service call ends, the device will be detached
- ``qubes.Filecopy`` - receive some files from other VM. Files sent in :doc:`qfile format </developer/services/qfilecopy>`
- ``qubes.OpenInVM`` - open a file in the called VM. The service receives a single file on stdin (in :doc:`qfile format </developer/services/qfilecopy>`). After the file viewer/editor terminates, if the file was modified, it is sent back (just the raw content, without any headers); otherwise the service should terminate without sending anything. This service is used by both the ``qvm-open-in-vm`` and ``qvm-open-in-dvm`` tools. When called in a DispVM, service termination will trigger DispVM cleanup.
- ``qubes.Restore`` - retrieve Qubes backup. The service receives backup location entered by the user (one line, terminated by ``\n``), then should output backup archive in :doc:`qfile format </developer/services/qfilecopy>` (core-agent-linux component contains ``tar2qfile`` utility to do the conversion)
- ``qubes.SelectDirectory``, ``qubes.SelectFile`` - services which should show file/directory selection dialog and return (to stdout) a single line containing selected path, or nothing in the case of cancellation
- ``qubes.SuspendPre`` - service called in every VM with PCI device attached just before system suspend
- ``qubes.SuspendPost`` - service called in every VM with PCI device attached just after system resume
- ``qubes.SyncNtpClock`` - service called to trigger network time synchronization. Service should synchronize local VM time and terminate when done.
- ``qubes.WindowIconUpdater`` - service called by VM to send icons of individual windows. The protocol is a simple one-directional stream: the VM sends a window ID followed by an icon in ``qubes.GetImageRGBA`` format, then the next window ID, etc. The VM can send an icon for the same window multiple times to replace the previous one (for example, for animated icons)
- ``qubes.VMShell`` - call any command in the VM; commands are passed one per line
- ``qubes.VMShell+WaitForSession`` waits for full VM startup first
- ``qubes.VMExec`` - call any command in the VM without using a shell; the command needs to be passed as an argument and encoded as follows (an encoder sketch follows this list):
- the executable name and arguments are separated by ``+``
- everything except alphanumeric characters, ``.`` and ``_`` needs to be escaped
- bytes are escaped as ``-HH`` (where ``HH`` is hex code, capital letters only)
- ``-`` itself can be escaped as ``--``
- example: to run ``ls -a /home/user``, use ``qubes.VMExec+ls+--a+-2Fhome-2Fuser``
- ``qubes.VMExecGUI`` - a variant of ``qubes.VMExec`` that waits for full VM startup first
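
The ``qubes.VMExec`` escaping rules above are easy to get wrong by hand, so here is a short Python encoder written against the rules as listed. It is our own sketch; it merely reproduces the worked example above.

.. code:: python

   SAFE = set("abcdefghijklmnopqrstuvwxyz"
              "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789._")

   def vmexec_encode(*argv):
       """Build a qubes.VMExec service name for the given command line:
       arguments joined with '+', '-' doubled as '--', and any byte
       outside [A-Za-z0-9._] escaped as -HH with uppercase hex."""
       parts = []
       for arg in argv:
           out = []
           for byte in arg.encode("utf-8"):
               char = chr(byte)
               if char in SAFE:
                   out.append(char)
               elif char == "-":
                   out.append("--")
               else:
                   out.append("-%02X" % byte)
           parts.append("".join(out))
       return "qubes.VMExec+" + "+".join(parts)

   # Reproduces the example above:
   assert vmexec_encode("ls", "-a", "/home/user") == "qubes.VMExec+ls+--a+-2Fhome-2Fuser"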
Services called in GuiVM:
- ``policy.Ask``, ``policy.Notify`` - confirmation prompt and notifications for Qubes RPC calls, see :ref:`qrexec-policy implementation <developer/services/qrexec-internals:\`\`qrexec-policy\`\` implementation>` for a detailed description.
Currently Qubes still calls a few tools in the VM directly, without using the service abstraction. This will change in the future. Those tools are:
- ``/usr/lib/qubes/qubes-download-dom0-updates.sh`` - script to download updates (or new packages to be installed) for dom0 (``qubes-dom0-update`` tool)
- ``date -u -Iseconds`` - called directly to retrieve time after calling ``qubes.SyncNtpClock`` service (``qvm-sync-clock`` tool)
- ``nm-online -x`` - called before ``qubes.SyncNtpClock`` service call by ``qvm-sync-clock`` tool
- ``resize2fs`` - called to resize filesystem on /rw partition by ``qvm-grow-private`` tool
- ``gpk-update-viewer`` - called by Qubes Manager to display available updates in a TemplateVM
- ``systemctl start qubes-update-check.timer`` (and similarly stop) - called when enabling/disabling updates checking in given VM (``qubes-update-check`` :doc:`qvm-service </user/advanced-topics/qubes-service>`)
Additionally, automatic tests extensively run various commands directly in VMs. We do not plan to change that.
GUI protocol
------------
GUI initialization includes passing the whole screen dimensions from dom0 to VM. This will most likely be overwritten by the qubes.SetMonitorLayout Qubes RPC call.

@ -1,89 +0,0 @@
---
lang: en
layout: doc
permalink: /doc/windows-debugging/
redirect_from:
- /en/doc/windows-debugging/
- /doc/WindowsDebugging/
- /wiki/WindowsDebugging/
ref: 50
title: Windows debugging
---
Debugging Windows code can be tricky in a virtualized environment. The guide below assumes Qubes 4.2 and Windows 7 or later VMs.
User-mode debugging is usually straightforward if it can be done on one machine. Just duplicate your normal debugging environment in the VM.
Things get complicated if you need to perform kernel debugging or troubleshoot problems that only manifest on system boot, user logoff or similar. For that you need two Windows VMs: the *host* and the *target*. The *host* will contain the debugger, your source code and private symbols. The *target* will run the code being debugged. We will use kernel debugging over the network, which is supported from Windows 7 onwards. The main caveat is that the Windows kernel supports only specific network adapters for this, and the default one in Qubes won't work.
## Important note
- Do not install Xen network PV drivers in the target VM. Network kernel debugging needs a specific type of NIC or it won't work, the network PV drivers interfere with that.
- If you have kernel debugging active when the Xen PV drivers are being installed, make sure to disable it before rebooting (`bcdedit /set debug off`). You can re-enable debugging after the reboot. The OS won't boot otherwise. I'm not sure what the exact cause is. I know that the busparams for the debugging NIC change when PV drivers are installed (see later), but even changing that accordingly in the debug settings doesn't help -- so it's best to disable debug for this one reboot.
## Modifying the NIC of the target VM
You will need to create a custom libvirt config for the target VM. See [the documentation](https://dev.qubes-os.org/projects/core-admin/en/latest/libvirt.html) for an overview of how libvirt templates work in Qubes. The following assumes the target VM is named `target-vm`.
- Edit `/usr/share/qubes/templates/libvirt/xen.xml` to prepare our custom config to override just the NIC part of the global template:
- add `{% raw %}{% block network %}{% endraw %}` before `{% raw %}{% if vm.netvm %}{% endraw %}`
- add `{% raw %}{% endblock %}{% endraw %}` after the matching `{% raw %}{% endif %}{% endraw %}`
- Copy `/usr/share/qubes/templates/libvirt/devices/net.xml` to `/etc/qubes/templates/libvirt/xen/by-name/target-vm.xml`.
- Add `<model type='e1000'/>` to the `<interface>` section.
- Enclose everything within `{% raw %}{% block network %}{% endraw %}` + `{% raw %}{% endblock %}{% endraw %}`.
- Add `{% raw %}{% extends 'libvirt/xen.xml' %}{% endraw %}` at the start.
- The final `target-vm.xml` should look something like this:
~~~
{% raw %}
{% extends 'libvirt/xen.xml' %}
{% block network %}
<interface type='ethernet'>
<mac address="{{ vm.mac }}" />
<ip address="{{ vm.ip }}" />
<backenddomain name="{{ vm.netvm.name }}" />
<script path='vif-route-qubes' />
<model type='e1000' />
</interface>
{% endblock %}
{% endraw %}
~~~
- Start `target-vm` and verify in the device manager that an "Intel PRO/1000 MT" adapter is present.
## Host and target preparation
- On `host-vm` you will need WinDbg, which is a part of the [Windows SDK](https://developer.microsoft.com/en-us/windows/downloads/windows-sdk/).
- Copy the `Debuggers` directory from Windows SDK to `target-vm`.
- In both `host-vm` and `target-vm` switch the Windows network to private (it tends to be public by default).
- Either turn off the Windows firewall or enable all ICMP-in rules in both VMs.
- In `firewall-vm` edit `/rw/config/qubes-firewall-user-script` to connect both Windows VMs, add:
- `iptables -I FORWARD 2 -s <target-vm-ip> -d <host-vm-ip> -j ACCEPT`
- `iptables -I FORWARD 2 -s <host-vm-ip> -d <target-vm-ip> -j ACCEPT`
- run `/rw/config/qubes-firewall-user-script` so the changes take effect immediately
- Make sure both VMs can ping each other.
- In `target-vm`:
- start elevated `cmd` session
- `cd sdk\Debuggers\x64`
- `kdnet` should show that the NIC is supported, note the busparams:
~~~
Network debugging is supported on the following NICs:
busparams=0.6.0, Intel(R) PRO/1000 MT Network Connection, KDNET is running on this NIC.
~~~
- `bcdedit /debug on`
- `bcdedit /dbgsettings net hostip:<host-vm-ip> port:50000 key:1.1.1.1` (you can customize the key)
- `bcdedit /set "{dbgsettings}" busparams x.y.z` (use the busparams `kdnet` has shown earlier)
- In `host-vm` start WinDbg: `windbg -k net:port=50000,key=1.1.1.1`. It will listen for the target's connection.
- Reboot `target-vm`, debugging should start:
~~~
Waiting to reconnect...
Connected to target 10.137.0.19 on port 50000 on local IP 10.137.0.20.
You can get the target MAC address by running .kdtargetmac command.
Connected to Windows 10 19041 x64 target at (Thu Aug 3 14:05:48.069 2023 (UTC + 2:00)), ptr64 TRUE
~~~
Happy debugging!

@ -0,0 +1,131 @@
=================
Windows debugging
=================
Debugging Windows code can be tricky in a virtualized environment. The guide below assumes Qubes 4.2 and Windows 7 or later VMs.
User-mode debugging is usually straightforward if it can be done on one machine. Just duplicate your normal debugging environment in the VM.
Things get complicated if you need to perform kernel debugging or troubleshoot problems that only manifest on system boot, user logoff or similar. For that you need two Windows VMs: the *host* and the *target*. The *host* will contain the debugger, your source code and private symbols. The *target* will run the code being debugged. We will use kernel debugging over the network, which is supported from Windows 7 onwards. The main caveat is that the Windows kernel supports only specific network adapters for this, and the default one in Qubes won't work.
Important note
--------------
- Do not install Xen network PV drivers in the target VM. Network kernel debugging needs a specific type of NIC or it won't work; the network PV drivers interfere with that.
- If you have kernel debugging active when the Xen PV drivers are being installed, make sure to disable it before rebooting (``bcdedit /set debug off``). You can re-enable debugging after the reboot. The OS won't boot otherwise. I'm not sure what the exact cause is. I know that the busparams for the debugging NIC change when PV drivers are installed (see later), but even changing that accordingly in the debug settings doesn't help -- so it's best to disable debug for this one reboot.
Modifying the NIC of the target VM
----------------------------------
You will need to create a custom libvirt config for the target VM. See `the documentation <https://dev.qubes-os.org/projects/core-admin/en/latest/libvirt.html>`__ for an overview of how libvirt templates work in Qubes. The following assumes the target VM is named ``target-vm``.
- Edit ``/usr/share/qubes/templates/libvirt/xen.xml`` to prepare our custom config to override just the NIC part of the global template:
- add ``{% block network %}`` before ``{% if vm.netvm %}``
- add ``{% endblock %}`` after the matching ``{% endif %}``
- Copy ``/usr/share/qubes/templates/libvirt/devices/net.xml`` to ``/etc/qubes/templates/libvirt/xen/by-name/target-vm.xml``.
- Add ``<model type='e1000'/>`` to the ``<interface>`` section.
- Enclose everything within ``{% block network %}`` + ``{% endblock %}``.
- Add ``{% extends 'libvirt/xen.xml' %}`` at the start.
- The final ``target-vm.xml`` should look something like this:
.. code:: xml
{% extends 'libvirt/xen.xml' %}
{% block network %}
<interface type='ethernet'>
<mac address="{{ vm.mac }}" />
<ip address="{{ vm.ip }}" />
<backenddomain name="{{ vm.netvm.name }}" />
<script path='vif-route-qubes' />
<model type='e1000' />
</interface>
{% endblock %}
- Start ``target-vm`` and verify in the device manager that an “Intel PRO/1000 MT” adapter is present.
Host and target preparation
---------------------------
- On ``host-vm`` you will need WinDbg, which is a part of the `Windows SDK <https://developer.microsoft.com/en-us/windows/downloads/windows-sdk/>`__.
- Copy the ``Debuggers`` directory from Windows SDK to ``target-vm``.
- In both ``host-vm`` and ``target-vm`` switch the Windows network to private (it tends to be public by default).
- Either turn off the Windows firewall or enable all ICMP-in rules in both VMs.
- In ``firewall-vm`` edit ``/rw/config/qubes-firewall-user-script`` to connect both Windows VMs, add:
- ``iptables -I FORWARD 2 -s <target-vm-ip> -d <host-vm-ip> -j ACCEPT``
- ``iptables -I FORWARD 2 -s <host-vm-ip> -d <target-vm-ip> -j ACCEPT``
- run ``/rw/config/qubes-firewall-user-script`` so the changes take effect immediately
- Make sure both VMs can ping each other.
- In ``target-vm``:
- start elevated ``cmd`` session
- ``cd sdk\Debuggers\x64``
- ``kdnet`` should show that the NIC is supported, note the busparams:
.. code:: bash
Network debugging is supported on the following NICs:
busparams=0.6.0, Intel(R) PRO/1000 MT Network Connection, KDNET is running on this NIC.
- ``bcdedit /debug on``
- ``bcdedit /dbgsettings net hostip:<host-vm-ip> port:50000 key:1.1.1.1`` (you can customize the key)
- ``bcdedit /set "{dbgsettings}" busparams x.y.z`` (use the busparams ``kdnet`` has shown earlier)
- In ``host-vm`` start WinDbg: ``windbg -k net:port=50000,key=1.1.1.1``. It will listen for the target's connection.
- Reboot ``target-vm``, debugging should start:
.. code:: bash
Waiting to reconnect...
Connected to target 10.137.0.19 on port 50000 on local IP 10.137.0.20.
You can get the target MAC address by running .kdtargetmac command.
Connected to Windows 10 19041 x64 target at (Thu Aug 3 14:05:48.069 2023 (UTC + 2:00)), ptr64 TRUE
Happy debugging!