Correct code-block lexers

Changing the `bash` lexer to `console` because it is appropriate most of
the time. Then, after a manual review, some lexers were changed further.

I used `text` each time I was unsure, and for prompt outputs.
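As a rule of thumb, `console` fits an interactive session (prompt plus output), while `text` fits bare output with no prompt. A sketch in the same `.. code::` directive style these docs use (the `qvm-ls` output shown is illustrative):

```rst
.. code:: console

   $ qvm-ls --raw-list
   dom0
   sys-net

.. code:: text

   dom0
   sys-net
```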

The page `/developer/building/qubes-iso-building.rst` still needs to be
reviewed (look for lines starting with `$ #`).

I'm not sure about the Windows pages: should we use
[doscon](https://pygments.org/docs/lexers/#pygments.lexers.shell.MSDOSSessionLexer)
or `powershell`?
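For comparison, `doscon` highlights a command-prompt *session* (a `C:\>` prompt plus output), while `powershell` highlights the language itself. A hypothetical sketch (the commands and output are illustrative, not from the docs):

```rst
.. code:: doscon

   C:\> ver
   Microsoft Windows [Version ...]

.. code:: powershell

   Get-ChildItem -Path C:\ -Recurse
```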

Is there an appropriate lexer for `guid.conf` content?
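Until one is identified, leaving the directive bare (as this commit does for `guid.conf`) avoids mis-highlighting its libconfig-style syntax:

```rst
.. code::

   global: {
     # default values
     #allow_fullscreen = false;
   };
```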

**Statistics - Before**
    870 bash
      9 python
      9 c
      2 yaml

**Statistics - After**
    684 console
    111 text
     44 bash
     16 yaml
      9 systemd
      9 c
      8 python
      4 ini
      4 doscon
      2 markdown
      2 desktop
      1 xorg.conf
      1 xml+jinja
      1 xml
      1 kconfig
      1 html

This suggests that the default lexer should be `console`.
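If so, the site-wide fallback could be set once in Sphinx's `conf.py` via the standard `highlight_language` option (a sketch; whether this repository sets it elsewhere is an assumption):

```python
# conf.py -- fallback lexer used when a code block
# does not name a language explicitly
highlight_language = "console"
```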
parulin 2025-07-30 09:43:09 -04:00
parent a252dc4338
commit 4212c5eda8
No known key found for this signature in database
GPG key ID: BC3830B42F4BF1F5
98 changed files with 1022 additions and 1029 deletions


@ -28,7 +28,7 @@ In this example, we want to make ``/var/lib/tor`` persistent. Enter all of the f
1. Make sure the directory ``/rw/config/qubes-bind-dirs.d`` exists.
.. code:: bash
.. code:: console
sudo mkdir -p /rw/config/qubes-bind-dirs.d
@ -36,7 +36,7 @@ In this example, we want to make ``/var/lib/tor`` persistent. Enter all of the f
2. Create the file ``/rw/config/qubes-bind-dirs.d/50_user.conf`` with root permissions, if it doesn't already exist.
.. code:: bash
.. code:: console
sudo touch /rw/config/qubes-bind-dirs.d/50_user.conf
@ -54,7 +54,7 @@ In this example, we want to make ``/var/lib/tor`` persistent. Enter all of the f
5. If the directory you wish to make persistent doesn't exist in the template on which the app qube is based, you'll need to create the directory (with its full path) under ``/rw/bind-dirs`` in the app qube. For example, if ``/var/lib/tor`` didn't exist in the template, then you would execute the following command in your app qube:
.. code:: bash
.. code:: console
sudo mkdir -p /rw/bind-dirs/var/lib/tor
@ -155,7 +155,7 @@ Bind dirs are obviously still supported but this must be configured either in th
To use this feature, first, enable it:
.. code:: bash
.. code:: console
qvm-service -e my-app-vm custom-persist
@ -163,7 +163,7 @@ To use this feature, first, enable it:
Then, configure a persistent directory with ``qvm-features``:
.. code:: bash
.. code:: console
qvm-features my-app-vm custom-persist.my_persistent_dir /var/my_persistent_dir
@ -171,7 +171,7 @@ Then, configure a persistent directory with ``qvm-features``:
To re-enable ``/home`` and ``/usr/local`` persistence, just add them to the list:
.. code:: bash
.. code:: console
qvm-features my-app-vm custom-persist.home /home
qvm-features my-app-vm custom-persist.usrlocal /usr/local
@ -182,7 +182,7 @@ When starting the VM, declared custom-persist bind dirs are automatically added
A user may want their bind-dirs to be automatically pre-created in ``/rw/bind-dirs``. Custom persist can do this for you by providing the type of the resource to create (file or dir), owner, group and mode. For example:
.. code:: bash
.. code:: console
qvm-features my-app-vm custom-persist.downloads dir:user:user:0755:/home/user/Downloads
qvm-features my-app-vm custom-persist.my_ssh_known_hosts_file file:user:user:0600:/home/user/.ssh/known_hosts


@ -37,7 +37,7 @@ These files are placed in ``/rw``, which survives a VM restart. That way, they c
- In ProxyVMs (or app qubes with the ``qubes-firewall`` service enabled), scripts placed in the following directories will be executed in the listed order, followed by ``qubes-firewall-user-script``, at startup. This is a good place to write custom firewall rules.
.. code:: bash
.. code:: text
/etc/qubes/qubes-firewall.d
/rw/config/qubes-firewall.d
@ -52,12 +52,12 @@ These files are placed in ``/rw``, which survives a VM restart. That way, they c
.. code:: bash
#!/bin/bash
command="$1"
vif="$2"
vif_type="$3"
ip="$4"
if [ "$ip" == '10.137.0.100' ]; then
case "$command" in
online)
@ -84,11 +84,11 @@ GUI and audio configuration in dom0
The GUI configuration file ``/etc/qubes/guid.conf`` is one of a few not managed by ``qubes-prefs`` or the Qubes Manager tool. Sample config (included in default installation):
.. code:: bash
.. code::
# Sample configuration file for Qubes GUI daemon
# For syntax go https://www.hyperrealm.com/libconfig/libconfig_manual.html
global: {
# default values
#allow_fullscreen = false;
@ -102,9 +102,9 @@ The GUI configuration file ``/etc/qubes/guid.conf`` in one of a few not managed
#trayicon_mode = "border1";
#startup_timeout = 45;
};
# most of setting can be set per-VM basis
VM: {
work: {
allow_utf8_titles = true;


@ -93,7 +93,7 @@ You can use a :ref:`named disposable <user/reference/glossary:named disposable>`
To create one that has no PCI devices attached, such as for ``sys-firewall``:
.. code:: bash
.. code:: console
qvm-create -C DispVM -l green <SERVICE_QUBE>
qvm-prefs <SERVICE_QUBE> autostart true
@ -109,7 +109,7 @@ To create one with a PCI device attached such as for ``sys-net`` or ``sys-usb``,
**Note:** You can use ``qvm-pci`` to :ref:`determine <user/how-to-guides/how-to-use-pci-devices:\`\`qvm-pci\`\` usage>` the ``<BDF>``. Also, you will often need to include the ``-o no-strict-reset=True`` :ref:`option <user/how-to-guides/how-to-use-pci-devices:no-strict-reset>` with USB controllers.
.. code:: bash
.. code:: console
qvm-create -C DispVM -l red <SERVICE_QUBE>
qvm-prefs <SERVICE_QUBE> virt_mode hvm
@ -123,7 +123,7 @@ To create one with a PCI device attached such as for ``sys-net`` or ``sys-usb``,
Optionally, if this disposable will also provide network access to other qubes:
.. code:: bash
.. code:: console
qvm-prefs <SERVICE_QUBE> provides_network true
@ -131,7 +131,7 @@ Optionally, if this disposable will also provide network access to other qubes:
Next, set the old service qube's autostart to false, and update any references to the old one, e.g.:
.. code:: bash
.. code:: console
qvm-prefs sys-firewall netvm <SERVICE_QUBE>
@ -141,7 +141,7 @@ Also make sure to update any :doc:`RPC policies </user/advanced-topics/rpc-polic
Here is an example of a complete ``sys-net`` replacement:
.. code:: bash
.. code:: console
qvm-create -C DispVM -l red sys-net2
qvm-prefs sys-net2 virt_mode hvm


@ -14,7 +14,7 @@ When a qube starts, a fixed amount of RAM is allocated to the graphics buffer ca
To increase the minimum size of the video RAM buffer:
.. code:: bash
.. code:: console
qvm-features dom0 gui-videoram-min $(($WIDTH * $HEIGHT * 4 / 1024))
qvm-features dom0 gui-videoram-overhead 0
@ -24,7 +24,7 @@ Where ``$WIDTH`` × ``$HEIGHT`` is the maximum desktop size that you anticipate
In the case of multiple displays with different orientations, or if you plug/unplug displays, the following code will set the correct memory size using xrandr.
.. code:: bash
.. code:: console
qvm-features dom0 gui-videoram-min $(xrandr --verbose | grep "Screen 0" | sed -e 's/.*current //' -e 's/\,.*//' | awk '{print $1*$3*4/1024}')
@ -41,7 +41,7 @@ Default overhead is about 8 MiB, which is enough for a 1080p display (see above)
You might face issues when playing video: if the video is choppy instead of smooth, this could be because the X server isn't working. You can use the Linux terminal (Ctrl-Alt-F2) after starting the virtual machine, log in, and look at the Xorg log files. Optionally, you can also place the config below in ``/etc/X11/xorg.conf.d/90-intel.conf`` (depends on HD graphics though).
.. code:: bash
.. code:: xorg.conf
Section "Device"
Identifier "Intel Graphics"


@ -20,7 +20,7 @@ Here, we describe how to setup ``sys-gui`` that we call *hybrid mode* or referen
In ``dom0``, enable the formula for ``sys-gui`` with pillar data:
.. code:: bash
.. code:: console
sudo qubesctl top.enable qvm.sys-gui
sudo qubesctl top.enable qvm.sys-gui pillar=True
@ -28,14 +28,14 @@ In ``dom0``, enable the formula for ``sys-gui`` with pillar data:
then, execute it:
.. code:: bash
.. code:: console
sudo qubesctl --all state.highstate
You can now disable the ``sys-gui`` formula:
.. code:: bash
.. code:: console
sudo qubesctl top.disable qvm.sys-gui
@ -56,7 +56,7 @@ Here, we describe how to setup ``sys-gui-gpu`` which is a GUI domain with *GPU p
In ``dom0``, enable the formula for ``sys-gui-gpu`` with pillar data:
.. code:: bash
.. code:: console
sudo qubesctl top.enable qvm.sys-gui-gpu
sudo qubesctl top.enable qvm.sys-gui-gpu pillar=True
@ -64,21 +64,21 @@ In ``dom0``, enable the formula for ``sys-gui-gpu`` with pillar data:
then, execute it:
.. code:: bash
.. code:: console
sudo qubesctl --all state.highstate
You can now disable the ``sys-gui-gpu`` formula:
.. code:: bash
.. code:: console
sudo qubesctl top.disable qvm.sys-gui-gpu
One more step is needed: attaching the actual GPU to ``sys-gui-gpu``. This can be done either manually via ``qvm-pci`` (remember to enable permissive option), or via:
.. code:: bash
.. code:: console
sudo qubesctl state.sls qvm.sys-gui-gpu-attach-gpu
@ -103,7 +103,7 @@ Here, we describe how to setup ``sys-gui-vnc`` that we call a *remote* GUI domai
In ``dom0``, enable the formula for ``sys-gui-vnc`` with pillar data:
.. code:: bash
.. code:: console
sudo qubesctl top.enable qvm.sys-gui-vnc
sudo qubesctl top.enable qvm.sys-gui-vnc pillar=True
@ -111,21 +111,21 @@ In ``dom0``, enable the formula for ``sys-gui-vnc`` with pillar data:
then, execute it:
.. code:: bash
.. code:: console
sudo qubesctl --all state.highstate
You can now disable the ``sys-gui-vnc`` formula:
.. code:: bash
.. code:: console
sudo qubesctl top.disable qvm.sys-gui-vnc
At this point, you need to shutdown all your running qubes as the ``default_guivm`` qubes global property has been set to ``sys-gui-vnc``. Then, you can start ``sys-gui-vnc``:
.. code:: bash
.. code:: console
qvm-start sys-gui-vnc
@ -186,30 +186,30 @@ The following commands have to be run in ``dom0``.
Set ``default_guivm`` as ``dom0``:
.. code:: bash
.. code:: console
qubes-prefs default_guivm dom0
and for every selected qubes not using default value for GUI domain property, for example with a qube ``personal``:
.. code:: bash
.. code:: console
qvm-prefs personal guivm dom0
You are now able to delete the GUI domain, for example ``sys-gui-gpu``:
.. code:: bash
.. code:: console
qvm-remove -f sys-gui-gpu
.. |sys-gui| image:: /attachment/posts/guivm-hybrid.png
.. |sys-gui-gpu| image:: /attachment/posts/guivm-gpu.png
.. |sys-gui-vnc| image:: /attachment/posts/guivm-vnc.png


@ -46,7 +46,7 @@ How to downgrade a specific package
To downgrade a specific package in dom0:
.. code:: bash
.. code:: console
sudo qubes-dom0-update --action=downgrade package-version
@ -58,7 +58,7 @@ How to re-install a package
To re-install a package in dom0:
.. code:: bash
.. code:: console
sudo qubes-dom0-update --action=reinstall package
@ -70,7 +70,7 @@ How to uninstall a package
If you've installed a package such as anti-evil-maid, you can remove it with the following command:
.. code:: bash
.. code:: console
sudo dnf remove anti-evil-maid
@ -94,7 +94,7 @@ If you wish to install updates that are still in :doc:`testing </user/downloadin
To temporarily enable any of these repos, use the ``--enablerepo=<repo-name>`` option. Example commands:
.. code:: bash
.. code:: console
sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing
sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing
@ -152,7 +152,7 @@ Example
(Note that the following example enables the unstable repo.)
.. code:: bash
.. code:: console
sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable kernel kernel-qubes-vm
@ -166,7 +166,7 @@ EFI
Replace the example version numbers with the one you are upgrading to.
.. code:: bash
.. code:: console
sudo dracut -f /boot/efi/EFI/qubes/initramfs-4.14.35-1.pvops.qubes.x86_64.img 4.14.35-1.pvops.qubes.x86_64
@ -176,7 +176,7 @@ Grub2
^^^^^
.. code:: bash
.. code:: console
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
@ -192,7 +192,7 @@ Changing default kernel
This section describes changing the default kernel in dom0. It is sometimes needed if you have upgraded to a newer kernel and are having problems booting, for example. On the next kernel update, the default will revert to the newest.
.. code:: bash
.. code:: console
sudo nano /etc/default/grub
[update the following two lines, add if needed]
@ -213,8 +213,6 @@ Requires installed `Whonix <https://forum.qubes-os.org/t/19014>`__.
Go to Qubes VM Manager -> System -> Global Settings. See the UpdateVM setting. Choose your desired Whonix-Gateway ProxyVM from the list. For example: sys-whonix.
.. code:: bash
Qubes VM Manager -> System -> Global Settings -> UpdateVM -> sys-whonix
:menusettings:`Qubes VM Manager -> System -> Global Settings -> UpdateVM -> sys-whonix`


@ -20,7 +20,7 @@ In dom0, use ``qubes-dom0-update``:
.. code:: bash
.. code:: console
sudo qubes-dom0-update qubes-repo-contrib
@ -29,7 +29,7 @@ In a Fedora-based template, use ``dnf``:
.. code:: bash
.. code:: console
sudo dnf install qubes-repo-contrib
@ -38,14 +38,14 @@ In a Debian-based template, use ``apt``:
.. code:: bash
.. code:: console
sudo apt update && sudo apt install qubes-repo-contrib
The new repository definition will be in the usual location for your distro, and it will follow the naming pattern ``qubes-contrib-*``, depending on your Qubes release and whether it is in dom0 or a template. For example, in a Fedora template on Qubes 4.0, the new repository definition would be:
.. code:: bash
.. code:: text
/etc/yum.repos.d/qubes-contrib-vm-r4.0.repo
@ -63,7 +63,7 @@ For example, to install ``qvm-screenshot-tool`` in dom0:
.. code:: bash
.. code:: console
sudo qubes-dom0-update --clean qvm-screenshot-tool


@ -26,7 +26,7 @@ KDE is very customisable, and there is a range of widgets to use. If you want to
.. code:: bash
#!/usr/bin/sh
# Use Qubes provided menu instead of default XFCE one
if [ "$XDG_SESSION_DESKTOP" = "KDE" ]; then
XDG_MENU_PREFIX="kf5-"
@ -47,7 +47,7 @@ You can also change your default login manager (lightdm) to the new KDE default:
- First you need to edit ``/etc/sddm.conf`` to make sure the custom X parameter is set according to Qubes' needs:
.. code:: bash
.. code:: kconfig
[XDisplay]
ServerArguments=-nolisten tcp -background none
@ -82,14 +82,14 @@ Window Management
You can set each window's position and size like this:
.. code:: python
.. code:: text
Right click title bar --> More actions --> Special window settings...
Window matching tab
Window class (application): Exact Match: <vm_name>
Window title: Substring Match: <partial or full program name>
Size & Position tab
[x] Position: Apply Initially: x,y
[x] Size: Apply Initially: x,y
@ -97,7 +97,7 @@ You can set each windows position and size like this:
You can also use ``kstart`` to control virtual desktop placement like this:
.. code:: bash
.. code:: console
kstart --desktop 3 --windowclass <vm_name> -q --tray -a <vm_name> '<run_program_command>'
@ -115,7 +115,7 @@ If you decide to remove KDE do **not** use ``dnf remove @kde-desktop-qubes``. Yo
The safest way to remove (most of) KDE is:
.. code:: bash
.. code:: console
sudo dnf remove kdelibs plasma-workspace


@ -108,7 +108,7 @@ Installing a new version from ``qubes-dom0-unstable`` repository:
Loaded plugins: langpacks, post-transaction-actions, yum-qubes-hooks
Resolving Dependencies
(...)
===========================================================================================
Package Arch Version Repository Size
===========================================================================================
@ -116,12 +116,12 @@ Installing a new version from ``qubes-dom0-unstable`` repository:
kernel-qubes-vm x86_64 1000:4.1.12-6.pvops.qubes qubes-dom0-cached 40 M
Removing:
kernel-qubes-vm x86_64 1000:3.18.10-2.pvops.qubes @anaconda/R3.0 134 M
Transaction Summary
===========================================================================================
Install 1 Package
Remove 1 Package
Total download size: 40 M
Is this ok [y/d/N]: y
Downloading packages:
@ -136,13 +136,13 @@ Installing a new version from ``qubes-dom0-unstable`` repository:
Error in PREUN scriptlet in rpm package 1000:kernel-qubes-vm-3.18.10-2.pvops.qubes.x86_64
Verifying : 1000:kernel-qubes-vm-4.1.12-6.pvops.qubes.x86_64 1/2
Verifying : 1000:kernel-qubes-vm-3.18.10-2.pvops.qubes.x86_64 2/2
Installed:
kernel-qubes-vm.x86_64 1000:4.1.12-6.pvops.qubes
Failed:
kernel-qubes-vm.x86_64 1000:3.18.10-2.pvops.qubes
Complete!
[user@dom0 ~]$
@ -174,17 +174,17 @@ To prepare such a VM kernel, you need to install the ``qubes-kernel-vm-support``
Package 1000:kernel-devel-4.1.9-6.pvops.qubes.x86_64 already installed and latest version
Resolving Dependencies
(...)
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
qubes-kernel-vm-support x86_64 3.1.2-1.fc20 qubes-dom0-cached 9.2 k
Transaction Summary
================================================================================
Install 1 Package
Total download size: 9.2 k
Installed size: 13 k
Is this ok [y/d/N]: y
@ -194,16 +194,16 @@ To prepare such a VM kernel, you need to install the ``qubes-kernel-vm-support``
Transaction test succeeded
Running transaction (shutdown inhibited)
Installing : qubes-kernel-vm-support-3.1.2-1.fc20.x86_64 1/1
Creating symlink /var/lib/dkms/u2mfn/3.1.2/source ->
/usr/src/u2mfn-3.1.2
DKMS: add completed.
Verifying : qubes-kernel-vm-support-3.1.2-1.fc20.x86_64 1/1
Installed:
qubes-kernel-vm-support.x86_64 0:3.1.2-1.fc20
Complete!
@ -244,7 +244,7 @@ Using kernel installed in the VM
Both debian-9 and fedora-26 templates already have grub and related tools preinstalled so if you want to use one of the distribution kernels, all you need to do is clone either template to a new one, then:
.. code:: bash
.. code:: console
qvm-prefs <clonetemplatename> virt_mode hvm
qvm-prefs <clonetemplatename> kernel ''
@ -261,7 +261,7 @@ Install whatever kernel you want. You need to also ensure you have the ``kernel-
If you are using a distribution kernel package (``kernel`` package), the initramfs and kernel modules may be handled automatically. If you are using a manually built kernel, you need to handle this on your own. Take a look at the ``dkms`` documentation, especially the ``dkms autoinstall`` command may be useful. If you did not see the ``kernel`` install rebuild your initramfs, or are using a manually built kernel, you will need to rebuild it yourself. Replace the version numbers in the example below with the ones appropriate to the kernel you are installing:
.. code:: bash
.. code:: console
sudo dracut -f /boot/initramfs-4.15.14-200.fc26.x86_64.img 4.15.14-200.fc26.x86_64
@ -269,7 +269,7 @@ If you are using a distribution kernel package (``kernel`` package), the initram
Once the kernel is installed, you need to setup ``grub2`` by running:
.. code:: bash
.. code:: console
sudo grub2-install /dev/xvda
@ -277,7 +277,7 @@ Once the kernel is installed, you need to setup ``grub2`` by running:
Finally, you need to create a GRUB configuration. You may want to adjust some settings in ``/etc/default/grub``; for example, lower ``GRUB_TIMEOUT`` to speed up VM startup. Then, you need to generate the actual configuration. In Fedora it can be done using the ``grub2-mkconfig`` tool:
.. code:: bash
.. code:: console
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
@ -285,7 +285,7 @@ Finally, you need to create a GRUB configuration. You may want to adjust some se
You can safely ignore this error message:
.. code:: bash
.. code:: console
grub2-probe: error: cannot find a GRUB drive for /dev/mapper/dmroot. Check your device.map
@ -319,7 +319,7 @@ Using a distribution kernel package the initramfs and kernel modules should be h
Install distribution kernel image, kernel headers and the grub.
.. code:: bash
.. code:: console
sudo apt install linux-image-amd64 linux-headers-amd64 grub2 qubes-kernel-vm-support
@ -327,7 +327,7 @@ Install distribution kernel image, kernel headers and the grub.
If you are doing that on a qube based on “Debian Minimal” template, a grub gui will popup during the installation, asking you where you want to install the grub loader. You must select ``/dev/xvda`` (check the box using the space bar, and validate your choice with “Enter”.) If this popup does not appear during the installation, you must manually setup ``grub2`` by running:
.. code:: bash
.. code:: console
sudo grub-install /dev/xvda
@ -381,7 +381,7 @@ Run DKMS. Replace this with actual kernel version.
.. code:: bash
.. code:: console
sudo dkms autoinstall -k <kernel-version>
@ -390,7 +390,7 @@ For example.
.. code:: bash
.. code:: console
sudo dkms autoinstall -k 4.19.0-6-amd64
@ -399,7 +399,7 @@ Update initramfs.
.. code:: bash
.. code:: console
sudo update-initramfs -u
@ -409,16 +409,16 @@ The output should look like this:
.. code:: console
$ sudo dkms autoinstall -k 3.16.0-4-amd64
u2mfn:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/3.16.0-4-amd64/updates/dkms/
depmod....
DKMS: install completed.
$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-3.16.0-4-amd64


@ -26,7 +26,7 @@ Decrypting the Disk
1. Open a Linux terminal in either dom0 or the app qube the disk was passed through to and enter ``lsblk``, which will result in an output similar to the following. In this example, the currently booted Qubes system is installed on ``sda`` and the Qubes system to be accessed is on ``nvme0n1p2``.
.. code:: bash
.. code:: text
sda 8:0 0 111.8G 0 disk
├─sda1 8:1 0 200M 0 part /boot/efi
@ -90,8 +90,8 @@ Mounting the disk
.. list-table::
:widths: 28 28 28
.. list-table::
:widths: 28 28 28
:align: center
:header-rows: 1
@ -110,7 +110,7 @@ Mounting the disk
* - other_install/pool00_tmeta
- LVM Metadata
- The metadata LV of this disk.
6. Mount the disk using the command ``mount /dev/other_install/<lv name> <mountpoint>``. *Note:* Any compromised data which exists in the volume to be mounted will be accessible here. Do not mount untrusted partitions in dom0.


@ -12,7 +12,7 @@ Under the hood, an enabled service in a VM is signaled by a file in ``/var/run/q
Adding support for systemd services is pretty simple. In the VM, create the following file (and directory, if needed): ``/etc/systemd/system/<service name>.service.d/30_qubes.conf``. It should contain the following:
.. code:: bash
.. code:: systemd
[Unit]
ConditionPathExists=/var/run/qubes-service/<service name>


@ -40,7 +40,7 @@ Increasing the size of Disk Images
Use either the GUI tool Qube Settings (``qubes-vm-settings``) or the CLI tool ``qvm-volume``. The maximum size which can be assigned through Qube Settings is 1048576 MiB; if you need more, use ``qvm-volume``:
.. code:: bash
.. code:: console
qvm-volume extend <vm_name>:root <size>
@ -48,7 +48,7 @@ Use either GUI tool Qube Settings (``qubes-vm-settings``) or the CLI tool ``qvm-
OR
.. code:: bash
.. code:: console
qvm-volume extend <vm_name>:private <size>
@ -90,7 +90,7 @@ FreeBSD
^^^^^^^
.. code:: bash
.. code:: console
gpart recover ada0
sysctl kern.geom.debugflags=0x10
@ -115,7 +115,7 @@ You can create a new qube, copy your files in to the new qube, and delete the ol
Or you can take the risk of reducing the size of the disk. For example, to reduce the private storage of qube1 to 1 GiB, open a terminal in dom0:
.. code:: bash
.. code:: console
qvm-shutdown qube1
sudo lvresize --size 1024M /dev/qubes_dom0/vm-qube1-private


@ -50,7 +50,7 @@ States
The smallest unit of configuration is a state. A state is written in YAML and looks like this:
.. code:: bash
.. code:: yaml
stateid:
cmd.run: #this is the execution module. in this case it will execute a command on the shell
@ -74,7 +74,7 @@ With these three states you can define most of the configuration of a VM.
You can also `order the execution <https://docs.saltproject.io/en/latest/ref/states/ordering.html>`__ of your states:
.. code:: bash
.. code:: yaml
D:
cmd.run:
@ -111,7 +111,7 @@ Top Files
After you have several state files, you need something to assign them to a qube. This is done by ``*.top`` files (`official documentation <https://docs.saltproject.io/en/latest/ref/states/top.html>`__). Their structure looks like this:
.. code:: bash
.. code:: yaml
environment:
target_matching_clause:
@ -122,7 +122,7 @@ After you have several state files, you need something to assign them to a qube.
In most cases, the environment will be called ``base``. The ``target_matching_clause`` will be used to select your minions (Templates or qubes). It can be either the name of a qube or a regular expression. If you are using a regular expressions, you need to give Salt a hint you are doing so:
.. code:: bash
.. code:: yaml
environment:
^app-(work|(?!mail).*)$:
@ -193,15 +193,15 @@ Configuring a qube's System from Dom0
Salt can be used to configure qubes from dom0. Simply set the qube name as the target minion name in the top file. You can also use the ``qubes`` pillar module to select qubes with a particular property (see below). If you do so, then you need to pass additional arguments to the ``qubesctl`` tool:
.. code:: bash
.. code:: text
usage: qubesctl [-h] [--show-output] [--force-color] [--skip-dom0]
[--targets TARGETS | --templates | --app | --all]
...
positional arguments:
command Salt command to execute (e.g., state.apply)
optional arguments:
-h, --help show this help message and exit
--show-output Show output of management commands
@ -233,7 +233,7 @@ Writing Your Own Configurations
Let's start with a quick example:
.. code:: bash
.. code:: yaml
my new and shiny VM:
qvm.present:
@ -267,7 +267,7 @@ As you will notice, the options are the same (or very similar) to those used in
This should be put in ``/srv/salt/my-new-vm.sls`` or another ``.sls`` file. A separate ``*.top`` file should be also written:
.. code:: bash
.. code:: yaml
base:
dom0:
@ -299,7 +299,7 @@ Example of Configuring Templates from Dom0
Let's make sure that the ``mc`` package is installed in all templates. Similar to the previous example, you need to create a state file (``/srv/salt/mc-everywhere.sls``):
.. code:: bash
.. code:: yaml
mc:
pkg.installed: []
@ -308,7 +308,7 @@ Lets make sure that the ``mc`` package is installed in all templates. Similar to
Then the appropriate top file (``/srv/salt/mc-everywhere.top``):
.. code:: bash
.. code:: yaml
base:
qubes:type:template:
@ -349,7 +349,7 @@ As in the example above, it creates a qube and sets its properties.
You can set properties of an existing qube:
.. code:: bash
.. code:: yaml
my preferences:
qvm.prefs:
@ -364,7 +364,7 @@ You can set properties of an existing qube:
^^^^^^^^^^^^^^^
.. code:: bash
.. code:: yaml
services in my qube:
qvm.service:
@ -388,7 +388,7 @@ This enables, disables, or sets to default, services as in ``qvm-service``.
Ensures the specified qube is running:
.. code:: bash
.. code:: yaml
qube is running:
qvm.running:
@ -402,7 +402,7 @@ Virtual Machine Formulae
You can use these formulae to download, install, and configure qubes in Qubes. These formulae use pillar data to define default qube names and configuration details. The default settings can be overridden in the pillar data located in:
.. code:: bash
.. code:: yaml
/srv/pillar/base/qvm/init.sls
@ -681,7 +681,7 @@ Disk Quota Exceeded (When Installing Templates)
If you install multiple templates you may encounter this error. The solution is to shut down the updateVM between each install:
.. code:: bash
.. code:: yaml
install template and shutdown updateVM:
cmd.run:


@ -20,14 +20,14 @@ Qubes 4.0 is more flexible than earlier versions about placing different VMs on
You can query qvm-pool to list available storage drivers:
.. code:: bash
.. code:: console
qvm-pool --help-drivers
qvm-pool driver explanation:
.. code:: bash
.. code:: text
<file> refers to using a simple file for image storage and lacks a few features.
<file-reflink> refers to storing images on a filesystem supporting copy on write.
@ -48,7 +48,7 @@ These steps assume you have already created a separate `volume group <https://ac
First, collect some information in a dom0 terminal:
.. code:: bash
.. code:: console
sudo pvs
sudo lvs
@ -56,7 +56,7 @@ First, collect some information in a dom0 terminal:
Take note of the VG and thin pool names for your second drive, then register it with Qubes:
.. code:: bash
.. code:: console
# <pool_name> is a freely chosen pool name
# <vg_name> is LVM volume group name
@ -73,7 +73,7 @@ Theses steps assume you have already created a separate Btrfs filesystem for you
It is possible to use an existing Btrfs storage if it is configured. In dom0, available Btrfs storage can be displayed using:
.. code:: bash
.. code:: console
mount -t btrfs
btrfs filesystem show
@ -81,7 +81,7 @@ It is possible to use an existing Btrfs storage if it is configured. In dom0, av
To register the storage to qubes:
.. code:: bash
.. code:: console
# <pool_name> is a freely chosen pool name
# <dir_path> is the mounted path to the second btrfs storage
@ -94,14 +94,14 @@ Using the new pool
Now, you can create qubes in that pool:
.. code:: bash
.. code:: console
qvm-create -P <pool_name> --label red <vmname>
It isn't possible to directly migrate an existing qube to the new pool, but you can clone it there, then remove the old one:
.. code:: bash
.. code:: console
qvm-clone -P <pool_name> <sourceVMname> <cloneVMname>
qvm-remove <sourceVMname>
@ -109,7 +109,7 @@ It isnt possible to directly migrate an existing qube to the new pool, but yo
If that was a template, or other qube referenced elsewhere (netVM or such), you will need to adjust those references manually after moving. For example:
.. code:: bash
.. code:: console
qvm-prefs <appvmname_based_on_old_template> template <new_template_name>
@ -120,7 +120,7 @@ Example setup of second drive.
Assuming the secondary hard disk is at /dev/sdb, you can encrypt the drive as follows. Note that the drive contents will be completely erased. In a dom0 terminal, run this command, using the same passphrase as the main Qubes disk to avoid a second password prompt at boot:
.. code:: bash
.. code:: console
sudo cryptsetup luksFormat --sector-size=512 /dev/sdb
sudo blkid /dev/sdb
@ -131,7 +131,7 @@ Assuming the secondary hard disk is at /dev/sdb , you can encrypt the drive as f
Note the device's UUID (in this example “b209…”); we will use it as its LUKS name for auto-mounting at boot by editing ``/etc/crypttab`` and adding this line to crypttab (replacing both “b209…” entries with your device's UUID taken from blkid):
.. code:: bash
.. code:: text
luks-b20975aa-8318-433d-8508-6c23982c6cde UUID=b20975aa-8318-433d-8508-6c23982c6cde none
@ -144,28 +144,28 @@ For LVM
First create the physical volume:
.. code:: bash
.. code:: console
sudo pvcreate /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde
Then create the LVM volume group; we will use “qubes” as the volume group name, for example:
.. code:: bash
.. code:: console
sudo vgcreate qubes /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde
And then use “poolhd0” as the LVM thin pool name:
.. code:: bash
.. code:: console
sudo lvcreate -T -n poolhd0 -l +100%FREE qubes
Finally we will tell Qubes to add a new pool on the just created thin pool:
.. code:: bash
.. code:: console
qvm-pool --add poolhd0_qubes lvm_thin -o volume_group=qubes,thin_pool=poolhd0,revisions_to_keep=2
@ -176,7 +176,7 @@ For Btrfs
First create the physical volume:
.. code:: bash
.. code:: console
# <label> Btrfs Label
sudo mkfs.btrfs -L <label> /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde
@ -184,7 +184,7 @@ First create the physical volume:
Then mount the new Btrfs to a temporary path:
.. code:: bash
.. code:: console
sudo mkdir -p /mnt/new_qube_storage
sudo mount /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde /mnt/new_qube_storage
@ -192,7 +192,7 @@ Then mount the new Btrfs to a temporary path:
Create a subvolume to hold the data:
.. code:: bash
.. code:: console
sudo btrfs subvolume create /mnt/new_qube_storage/qubes
@ -200,7 +200,7 @@ Create a subvolume to hold the data:
Unmount the temporary Btrfs filesystem:
.. code:: bash
.. code:: console
sudo umount /mnt/new_qube_storage
rmdir /mnt/new_qube_storage
@ -208,7 +208,7 @@ Unmount the temporary Btrfs filesystem:
Mount the subvolume with compression enabled if desired:
.. code:: bash
.. code:: console
# <compression> zlib|lzo|zstd
# <subvol> btrfs subvolume "qubes" in this example
@ -217,14 +217,14 @@ Mount the subvolume with compression enabled if desired:
Finally, we will tell Qubes to add a new pool on the newly created Btrfs subvolume:
.. code:: bash
.. code:: console
qvm-pool --add poolhd0_qubes file-reflink -o dir_path=/var/lib/qubes_newpool,revisions_to_keep=2
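The compressed mount shown above does not persist across reboots. One way to make it permanent is an ``/etc/fstab`` entry in dom0; this is a sketch, assuming the example UUID, the ``qubes`` subvolume, and ``zstd`` compression from the previous steps:

.. code:: text

   /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde /var/lib/qubes_newpool btrfs subvol=qubes,compress=zstd 0 0
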
By default, VMs will be created on the main Qubes disk (i.e. a small SSD). To create them on this secondary drive instead, run the following in a dom0 terminal:
.. code:: bash
.. code:: console
qvm-create -P poolhd0_qubes --label red untrusted-hdd
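To check which pool a qubes volumes actually landed in, inspect one of its volumes (``<qube_name>`` is a placeholder for the qube created above):

.. code:: console

   qvm-volume info <qube_name>:private

The ``pool`` line of the output should show ``poolhd0_qubes``.
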
View file
@ -44,7 +44,7 @@ You can create a standalone in the Qube Manager by selecting the “Type” of
Alternatively, to create an empty standalone from the dom0 command line:
.. code:: bash
.. code:: console
qvm-create --class StandaloneVM --label <YOUR_COLOR> --property virt_mode=hvm <NEW_STANDALONE_NAME>
@ -52,7 +52,7 @@ Alternatively, to create an empty standalone from the dom0 command line:
Or to create a standalone copied from a template:
.. code:: bash
.. code:: console
qvm-create --class StandaloneVM --label <YOUR_COLOR> --property virt_mode=hvm --template <TEMPLATE_QUBE_NAME> <NEW_STANDALONE_NAME>
@ -88,7 +88,7 @@ Command line
Qubes are template-based (i.e., :ref:`app qubes <user/reference/glossary:app qube>`) by default, so you must set the ``--class StandaloneVM`` option to create a standalone. The name and label color used below are for illustration purposes.
.. code:: bash
.. code:: console
qvm-create my-new-vm --class StandaloneVM --property virt_mode=hvm --property kernel='' --label=green
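You can verify that the new qube really is a standalone by listing its class (``--fields`` selects the output columns; a sketch using the example name above):

.. code:: console

   qvm-ls --fields NAME,CLASS my-new-vm
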
@ -96,7 +96,7 @@ Qubes are template-based (i.e., :ref:`app qubes <user/reference/glossary:app qub
If you receive an error like this one, then you must first enable VT-x in your BIOS:
.. code:: bash
.. code:: text
libvirt.libvirtError: invalid argument: could not find capabilities for arch=x86_64
@ -112,7 +112,7 @@ You will have to boot the qube with the installation media “attached” to it.
1. If you have the physical CD-ROM media and an optical disc drive:
.. code:: bash
.. code:: console
qvm-start <YOUR_HVM> --cdrom=/dev/cdrom
@ -120,7 +120,7 @@ You will have to boot the qube with the installation media “attached” to it.
2. If you have an ISO image of the installation media located in dom0:
.. code:: bash
.. code:: console
qvm-start <YOUR_HVM> --cdrom=dom0:/usr/local/iso/<YOUR_INSTALLER.ISO>
@ -128,7 +128,7 @@ You will have to boot the qube with the installation media “attached” to it.
3. If you have an ISO image of the installation media located in a qube (the qube where the media is located must be running):
.. code:: bash
.. code:: console
qvm-start <YOUR_HVM> --cdrom=<YOUR_OTHER_QUBE>:/home/user/<YOUR_INSTALLER.ISO>
@ -203,7 +203,7 @@ Qubes allows HVMs to share a common root filesystem from a select template. This
In order to create an HVM template, you use the following command, suitably adapted:
.. code:: bash
.. code:: console
qvm-create --class TemplateVM <YOUR_HVM_TEMPLATE_NAME> --property virt_mode=HVM --property kernel='' -l <YOUR_COLOR>
@ -264,11 +264,11 @@ The cloned qube will get identical root and private images and will essentially
visible_ip6 D fd09:24ef:4179::a89:7a
visible_netmask D 255.255.255.255
xid D -1
[joanna@dom0 ~]$ qvm-clone my-new-vm my-new-vm-copy
/.../
[joanna@dom0 ~]$ qvm-prefs my-new-vm-copy
autostart D False
backup_timestamp U
@ -316,7 +316,7 @@ Note that the MAC addresses differ between those two otherwise identical qubes.
.. code:: console
[joanna@dom0 ~]$ qvm-ls -n
NAME STATE NETVM IP IPBACK GATEWAY
my-new-hvm Halted sys-firewall 10.137.0.122 - 10.137.0.14
my-new-hvm-clone Halted sys-firewall 10.137.0.137 - 10.137.0.14
@ -379,7 +379,7 @@ About 60 GB of disk space is required for conversion. Use an external hard drive
In a Debian app qube, install ``qemu-utils`` and ``unzip``:
.. code:: bash
.. code:: console
sudo apt install qemu-utils unzip
@ -387,7 +387,7 @@ In a Debian app qube, install ``qemu-utils`` and ``unzip``:
In a Fedora app qube:
.. code:: bash
.. code:: console
sudo dnf install qemu-img
@ -395,7 +395,7 @@ In a Fedora app qube:
Unzip VirtualBox zip file:
.. code:: bash
.. code:: console
unzip *.zip
@ -403,7 +403,7 @@ Unzip VirtualBox zip file:
Extract OVA tar archive:
.. code:: bash
.. code:: console
tar -xvf *.ova
@ -411,7 +411,7 @@ Extract OVA tar archive:
Convert vmdk to raw:
.. code:: bash
.. code:: console
qemu-img convert -O raw *.vmdk win10.raw
@ -419,7 +419,7 @@ Convert vmdk to raw:
Copy the root image file from the originating qube (here called ``untrusted``) to a temporary location in dom0, typing this in a dom0 terminal:
.. code:: bash
.. code:: console
qvm-run --pass-io untrusted 'cat "/media/user/externalhd/win10.raw"' > /home/user/win10-root.img
@ -427,7 +427,7 @@ Copy the root image file from the originating qube (here called ``untrusted``) t
From within dom0, create a new HVM (here called ``win10``) with the root image we just copied to dom0 (change the amount of RAM in GB as you wish):
.. code:: bash
.. code:: console
qvm-create --property=virt_mode=hvm --property=memory=4096 --property=kernel='' --label red --standalone --root-move-from /home/user/win10-root.img win10
@ -435,7 +435,7 @@ From within dom0, create a new HVM (here called ``win10``) with the root image w
Start ``win10``:
.. code:: bash
.. code:: console
qvm-start win10
@ -447,7 +447,7 @@ Optional ways to get more information
Filetype of OVA file:
.. code:: bash
.. code:: console
file *.ova
@ -455,7 +455,7 @@ Filetype of OVA file:
List files of OVA tar archive:
.. code:: bash
.. code:: console
tar -tf *.ova
@ -463,7 +463,7 @@ List files of OVA tar archive:
List filetypes supported by qemu-img:
.. code:: bash
.. code:: console
qemu-img -h | tail -n1
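``qemu-img info`` can also report the format and virtual size of a disk image before and after conversion (``<YOUR_IMAGE>`` is a placeholder for your own file):

.. code:: console

   qemu-img info <YOUR_IMAGE>.vmdk
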
View file
@ -28,7 +28,7 @@ If youre reading this section, its likely because the installer did not al
First, make sure you have the latest ``qubes-mgmt-salt-dom0-virtual-machines`` package by :ref:`updating dom0 <user/advanced-topics/how-to-install-software-in-dom0:how to update dom0>`. Then, enter the following command in dom0:
.. code:: bash
.. code:: console
sudo qubesctl state.sls qvm.usb-keyboard
@ -46,7 +46,7 @@ Manual setup for USB keyboards
In order to use a USB keyboard, you must first attach it to a USB qube, then give that qube permission to pass keyboard input to dom0. Edit the ``qubes.InputKeyboard`` policy file in dom0, which is located here:
.. code:: bash
.. code:: text
/etc/qubes-rpc/policy/qubes.InputKeyboard
@ -54,7 +54,7 @@ In order to use a USB keyboard, you must first attach it to a USB qube, then giv
Add a line like this one to the top of the file:
.. code:: bash
.. code:: text
sys-usb dom0 allow
@ -66,7 +66,7 @@ You can now use your USB keyboard to log in to your dom0 user account (after LUK
You can set up your system so that theres a confirmation prompt each time the USB keyboard is connected. However, this will effectively disable your USB keyboard for dom0 user account login and the screen locker, so **dont do this if you want to log into and unlock your device with a USB keyboard!** If youre sure you wish to proceed, change the previous line to:
.. code:: bash
.. code:: text
sys-usb dom0 ask,default_target=dom0
@ -110,7 +110,7 @@ Handling a USB mouse isnt as critical as handling a keyboard, since you can l
If you want to attach the USB mouse automatically anyway, you have to edit the ``qubes.InputMouse`` policy file in dom0, located at:
.. code:: bash
.. code:: text
/etc/qubes-rpc/policy/qubes.InputMouse
@ -118,7 +118,7 @@ If you want to attach the USB mouse automatically anyway, you have to edit the `
The first line should read similar to:
.. code:: bash
.. code:: text
sys-usb dom0 ask,default_target=dom0
@ -130,7 +130,7 @@ If the file is empty or does not exist, something might have gone wrong during s
In case you are absolutely sure you do not want to confirm mouse access from ``sys-usb`` to ``dom0``, you may add the following line to the top of the file:
.. code:: bash
.. code:: text
sys-usb dom0 allow
@ -146,7 +146,7 @@ If `automatically creating a USB qube for use with a USB keyboard <#how-to-creat
You can create a USB qube using the management stack by executing the following command as root in dom0:
.. code:: bash
.. code:: console
sudo qubesctl state.sls qvm.sys-usb
View file
@ -12,7 +12,7 @@ In Qubes, when you create a new VM, its volumes are stored in one of the syst
For the private volume associated with a VM named *vmname*, you may inspect the value of ``revisions_to_keep`` from the dom0 CLI as follows:
.. code:: bash
.. code:: console
qvm-volume info vmname:private
@ -20,7 +20,7 @@ For the private volume associated with a VM named *vmname*, you may inspect the
The output of the above command will also display the “Available revisions (for revert)” at the bottom. For a very large volume in a small pool, ``revisions_to_keep`` should probably be set to the maximum value of 1 to minimize the possibility of the pool being accidentally filled up by snapshots. For a smaller volume for which you would like to have the future option of reverting, ``revisions_to_keep`` should probably be set to at least 2. To set ``revisions_to_keep`` for this same VM / volume example:
.. code:: bash
.. code:: console
qvm-volume config vmname:private revisions_to_keep 2
@ -28,7 +28,7 @@ The output of the above command will also display the “Available revisions (fo
With the VM stopped, you may revert to an older snapshot of the private volume from the above list of “Available revisions (for revert)”; the last item in the list, with the largest integer, is the most recent snapshot:
.. code:: bash
.. code:: console
qvm-volume revert vmname:private <revision>
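Putting it together, a revert session might look like this (the qube must be halted first; the revision identifier is taken from the ``qvm-volume info`` output and is system-specific):

.. code:: console

   qvm-shutdown --wait vmname
   qvm-volume info vmname:private
   qvm-volume revert vmname:private <revision>
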