commit f487cd4261: update from upstream
11 changed files with 126 additions and 47 deletions
@@ -45,6 +45,8 @@ At this point, you need to shutdown all your running qubes as the `default_guivm
Here, we describe how to set up `sys-gui-gpu`, which is a GUI domain with *GPU passthrough*, as described in [GUI domain](/news/2020/03/18/gui-domain/#gpu-passthrough-the-perfect-world-desktop-solution).

> Note: the purpose of `sys-gui-gpu` is to improve Qubes OS security by detaching the GPU from dom0. It is not intended to improve GPU-related performance within qubes, and it will not do so.

[](/attachment/posts/guivm-gpu.png)

In `dom0`, enable the formula for `sys-gui-gpu` with pillar data:
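
A minimal sketch of what this typically involves (the `qvm.sys-gui-gpu` formula name and the final `state.highstate` step are assumptions based on the standard Qubes Salt formulas; verify them against your release before running):

```shell_session
sudo qubesctl top.enable qvm.sys-gui-gpu
sudo qubesctl top.enable qvm.sys-gui-gpu pillar=True
sudo qubesctl --all state.highstate
```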
@@ -11,58 +11,98 @@ ref: 187
title: Secondary storage
---

# Storing qubes on Secondary Drives

Suppose you have a fast but small primary SSD and a large but slow secondary HDD.
You may want to store a subset of your qubes on the HDD.
Or if you install a second SSD, you may want to use *that* for storage of some qubes.
This page explains how to use that second drive.

## Instructions
Qubes 4.0 is more flexible than earlier versions about placing different VMs on different disks.
For example, you can keep templates on one disk and app qubes on another, without messy symlinks.

You can query `qvm-pool` to list available storage drivers:
```shell_session
qvm-pool --help-drivers
```

qvm-pool driver explanation:
```
<file> refers to using a simple file for image storage and lacks a few features.
<file-reflink> refers to storing images on a filesystem supporting copy on write.
<linux-kernel> refers to a directory holding kernel images.
<lvm_thin> refers to LVM managed pools.
```

In theory, you can still use file-based disk images ("file" pool driver), but they will lack some features: for example, you won't be able to do backups without shutting down the qube.

Additional storage can also be added on a Btrfs filesystem. A unique feature of Btrfs is that data can be compressed transparently. The subvolume can also be backed up using snapshots for an additional layer of protection. Btrfs supports different levels of redundancy and provides data checksumming, and Btrfs volumes can be expanded or shrunk. Starting or stopping a VM has less impact and is less likely to slow down the system, something some users have noted with LVM. Relevant information for general Btrfs configuration is provided after the section on LVM storage.

### LVM storage
These steps assume you have already created a separate [volume group](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_admin#VG_create) and [thin pool](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/thinly_provisioned_volume_creation) (not thin volume) for your second drive.
See also [this example](https://www.linux.com/blog/how-full-encrypt-your-linux-system-lvm-luks) if you would like to create an encrypted LVM pool (but note that you can use a single logical volume if preferred, and that you should use the `-T` option on `lvcreate` to specify that it is thin). You can find the commands for this example applied to Qubes at the bottom of this section.

First, collect some information in a dom0 terminal:
```shell_session
sudo pvs
sudo lvs
```

Take note of the VG and thin pool names for your second drive, then register it with Qubes:

```shell_session
# <pool_name> is a freely chosen pool name
# <vg_name> is the LVM volume group name
# <thin_pool_name> is the LVM thin pool name
qvm-pool --add <pool_name> lvm_thin -o volume_group=<vg_name>,thin_pool=<thin_pool_name>,revisions_to_keep=2
```

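
To confirm the pool was registered, you can list the pools known to Qubes (a quick sanity check; on Qubes 4.1 and later, running `qvm-pool` with no arguments prints the pool list):

```shell_session
qvm-pool
qvm-pool info <pool_name>
```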
### Btrfs storage

These steps assume you have already created a separate Btrfs filesystem for your second drive, that it is encrypted with LUKS, and that it is mounted. It is recommended to use a subvolume, as this enables compression and allows excess storage to be used for other things.

It is also possible to use existing Btrfs storage if it is already configured. In dom0, available Btrfs storage can be displayed using:
```shell_session
mount -t btrfs
sudo btrfs filesystem show
```

To register the storage with Qubes:

```shell_session
# <pool_name> is a freely chosen pool name
# <dir_path> is the mounted path to the second Btrfs storage
qvm-pool --add <pool_name> file-reflink -o dir_path=<dir_path>,revisions_to_keep=2
```

#### Using the new pool
Now, you can create qubes in that pool:
```shell_session
qvm-create -P <pool_name> --label red <vmname>
```

It isn't possible to directly migrate an existing qube to the new pool, but you can clone it there, then remove the old one:
```shell_session
qvm-clone -P <pool_name> <sourceVMname> <cloneVMname>
qvm-remove <sourceVMname>
```

If that was a template, or another qube referenced elsewhere (such as a NetVM), you will need to adjust those references manually after moving.
For example:
```shell_session
qvm-prefs <appvmname_based_on_old_template> template <new_template_name>
```

#### Example setup of second drive

Assuming the secondary hard disk is at `/dev/sdb`, you can encrypt the drive as follows. Note that the drive contents will be completely erased. In a dom0 terminal, run this command (use the same passphrase as the main Qubes disk to avoid a second password prompt at boot):

```
sudo cryptsetup luksFormat --sector-size=512 /dev/sdb
```

@@ -71,46 +111,83 @@ sudo blkid /dev/sdb
(The `--sector-size=512` argument can sometimes work around an incompatibility of storage hardware with LVM thin pools on Qubes. If this does not apply to your hardware, the argument will make no difference.)
Note the device's UUID (in this example "b209..."); we will use it as its LUKS name for auto-mounting at boot. Edit `/etc/crypttab`:

```shell_session
sudo nano /etc/crypttab
```

Then add this line (replacing both "b209..." entries with your device's UUID from `blkid`):

```
luks-b20975aa-8318-433d-8508-6c23982c6cde UUID=b20975aa-8318-433d-8508-6c23982c6cde none
```

Reboot the computer so that the new LUKS device appears at `/dev/mapper/luks-b209...`. You can then create the new pool by running these commands in a dom0 terminal (substitute the "b209..." UUIDs with yours):

##### For LVM

First create the physical volume:

```shell_session
sudo pvcreate /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde
```

Then create the LVM volume group; we will use "qubes" as the `<vg_name>` in this example:

```shell_session
sudo vgcreate qubes /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde
```

Then create the thin pool, using "poolhd0" as the `<thin_pool_name>` (the LVM thin pool name):

```shell_session
sudo lvcreate -T -n poolhd0 -l +100%FREE qubes
```

Finally, tell Qubes to add a new pool on the newly created thin pool:

```shell_session
qvm-pool --add poolhd0_qubes lvm_thin -o volume_group=qubes,thin_pool=poolhd0,revisions_to_keep=2
```

##### For Btrfs

First create the Btrfs filesystem:

```shell_session
# <label> is a freely chosen Btrfs filesystem label
sudo mkfs.btrfs -L <label> /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde
```

Then mount the new Btrfs filesystem at a temporary path:

```shell_session
sudo mkdir -p /mnt/new_qube_storage
sudo mount /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde /mnt/new_qube_storage
```

Create a subvolume to hold the data:

```shell_session
sudo btrfs subvolume create /mnt/new_qube_storage/qubes
```

Unmount the temporary Btrfs filesystem:

```shell_session
sudo umount /mnt/new_qube_storage
sudo rmdir /mnt/new_qube_storage
```

Mount the subvolume with compression enabled if desired:

```shell_session
# <compression> is one of zlib|lzo|zstd
# <subvol> is the Btrfs subvolume ("qubes" in this example)
# create the mount point if it does not already exist
sudo mkdir -p /var/lib/qubes_newpool
sudo mount /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde /var/lib/qubes_newpool -o compress=<compression>,subvol=qubes
```

Finally, tell Qubes to add a new pool on the newly created Btrfs subvolume:

```shell_session
qvm-pool --add poolhd0_qubes file-reflink -o dir_path=/var/lib/qubes_newpool,revisions_to_keep=2
```

By default, VMs will be created on the main Qubes disk (i.e., a small SSD). To create them on this secondary drive instead, do the following in a dom0 terminal:

```shell_session
qvm-create -P poolhd0_qubes --label red untrusted-hdd
```

Verify that the corresponding lines were added to `/etc/fstab` and `/etc/crypttab` to enable auto-mounting of the new pool.
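
For the Btrfs example above, the relevant entries would look roughly like the following (illustrative only; adjust the UUID, mount point, subvolume, and compression option to match your setup):

```
# /etc/crypttab
luks-b20975aa-8318-433d-8508-6c23982c6cde UUID=b20975aa-8318-433d-8508-6c23982c6cde none

# /etc/fstab
/dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde  /var/lib/qubes_newpool  btrfs  defaults,compress=zstd,subvol=qubes  0  0
```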
[Qubes Backup]: /doc/BackupRestore/
[TemplateVM]: /doc/Templates/
@@ -36,7 +36,7 @@ The current Qubes-certified models are listed below in reverse chronological ord
| [NovaCustom](https://novacustom.com/) | [NV41 Series](https://novacustom.com/product/nv41-series/) | [Certification details](/doc/certified-hardware/novacustom-nv41-series/) |
| [3mdeb](https://3mdeb.com/) | [Dasharo FidelisGuard Z690](https://web.archive.org/web/20240917145232/https://shop.3mdeb.com/shop/open-source-hardware/dasharo-fidelisguard-z690-qubes-os-certified/) | [Certification details](/doc/certified-hardware/dasharo-fidelisguard-z690/) |
| [Nitrokey](https://www.nitrokey.com/) | [NitroPad T430](https://shop.nitrokey.com/shop/product/nitropad-t430-119) | [Certification details](/doc/certified-hardware/nitropad-t430/) |
| [Nitrokey](https://www.nitrokey.com/) | <a id="nitropad-x230"></a>[NitroPad X230](https://shop.nitrokey.com/shop/product/nitropad-x230-67) | [Certification details](/doc/certified-hardware/nitropad-x230/) |
| [Insurgo](https://insurgo.ca/) | [PrivacyBeast X230](https://insurgo.ca/produit/qubesos-certified-privacybeast_x230-reasonably-secured-laptop/) | [Certification details](/doc/certified-hardware/insurgo-privacybeast-x230/) |
## Become hardware certified
@@ -164,8 +164,8 @@ Please see [How to Update](/doc/how-to-update/).
## Why don't templates have normal network access?
In order to protect you from performing risky activities in templates, they do
not have normal network access by default. Instead, templates use an
[updates-proxy](#updates-proxy) which allows you to install and update software using
the distribution's package manager over the proxy connection.
**The updates proxy is already set up to work automatically out-of-the-box and
requires no special action from you.**
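
For example, in a Fedora-based template you simply run the package manager as usual and the traffic goes through the proxy transparently (the package name below is just a placeholder):

```shell_session
sudo dnf install <package>
```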
@@ -158,7 +158,7 @@ As of Qubes 4.2, firmware updates can be performed from within Qubes for [fwupd-
### In dom0
First, ensure that your UpdateVM
contains the `fwupd-qubes-vm` package. This package is installed
by default for qubes with `qubes-vm-recommended` packages.
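
A quick way to check from dom0 (assuming a Fedora-based UpdateVM; the qube name `sys-firewall` is only an example, and on Debian-based qubes the query command would be `dpkg -s` instead of `rpm -q`):

```shell_session
qvm-run -p sys-firewall 'rpm -q fwupd-qubes-vm'
```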
@@ -198,8 +198,7 @@ Repeat the update process for any additional devices on your computer.
Devices that are attached to non-dom0 qubes can be updated via a graphical tool for `fwupd`, or via the `fwupdmgr` command-line tool.
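
For example, from a terminal inside the qube to which the device is attached, the usual `fwupdmgr` flow applies (a generic sketch; actual device support varies):

```shell_session
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update
```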
To update the firmware of offline qubes, use the [Updates proxy](/doc/how-to-install-software/#updates-proxy).

### Computers without fwupd support
@@ -97,10 +97,10 @@ The same general procedure may be used to upgrade any template based on the stan
This key was already checked when it was installed (notice that the "From" line refers to a location on your local disk), so you can safely say yes to this prompt.
- **Note:** If you encounter no errors, proceed to step 4.
  If you do encounter errors, see the next two points first.

- If `dnf` reports that you do not have enough free disk space to proceed
  with the upgrade process, create an empty file in dom0 to use as a cache
  and attach it to the template as a virtual disk.
@@ -121,7 +121,7 @@ The same general procedure may be used to upgrade any template based on the stan
If this attempt is successful, proceed to step 4.
- `dnf` may complain: `At least X MB more space needed on the / filesystem.`

  In this case, one option is to [resize the template's disk image](/doc/resize-disk-image/) before reattempting the upgrade process.
  (See [Additional Information](#additional-information) below for other options.)
@@ -30,20 +30,20 @@ After that, use the directional buttons (`>`, `>>`, `<` or `<<`) to customize wh
To update the list of available applications, use the `qvm-sync-appmenus` command in dom0, replacing `<QUBE_NAME>` with the qube's name:

```console
$ qvm-sync-appmenus <QUBE_NAME>
```

When using the *Refresh Applications* button in a qube's settings, the `qvm-sync-appmenus` command is run at least once. When refreshing an app qube's applications, it is also run against the qube's template. So the console equivalent of clicking the *Refresh Applications* button is the following (always in dom0):

```console
$ qvm-sync-appmenus <APPVM_NAME>
$ qvm-sync-appmenus <TEMPLATE_NAME>
```

In dom0, the `qvm-appmenus` tool allows the user to see the list of available applications (unstable feature) and the whitelist of currently shown applications (unstable feature), and to change this list:

```console
$ qvm-appmenus --set-whitelist <FILE_PATH> <QUBE_NAME>
```
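
To inspect the lists mentioned above before changing them, the read operations look like this (these are the unstable options referred to above, so the exact flags are an assumption and may change between releases):

```console
$ qvm-appmenus --get-available <QUBE_NAME>
$ qvm-appmenus --get-whitelist <QUBE_NAME>
```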
@@ -32,9 +32,9 @@ As you can see, the `sudo lvs | head` command includes additional important colu
If your system is able to boot, but cannot load a desktop environment, it is possible to log in to a dom0 terminal with Alt + Ctrl + F2.

If this does not work, check the size of `/var/lib/qubes/qubes.xml`.
If it is zero, you'll need to use one of the file backups (stored in `/var/lib/qubes/backup`); hopefully you have current data there.
Find the most recent one and place it at `/var/lib/qubes/qubes.xml` in place of the empty file.
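
A minimal sketch of that recovery (the backup file name is a placeholder; list the directory and pick the newest file you actually have):

```shell_session
ls -l /var/lib/qubes/backup/
sudo cp /var/lib/qubes/backup/<most_recent_backup> /var/lib/qubes/qubes.xml
```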
In any case you'll need some disk space to start the VM. Check the `df -h` output to see whether you have some.
If not, here are some hints on how to free some disk space:
@@ -134,7 +134,7 @@ Some laptops cannot read from an external boot device larger than 8GB. If you en
## Installation completes successfully but then boot loops or hangs on black screen
There is a [common bug in UEFI implementation](https://web.archive.org/web/20170815084755/https://xen.markmail.org/message/f6lx2ab4o2fch35r) affecting mostly Lenovo systems, but probably some others too.
While some systems need `mapbs` and/or `noexitboot` disabled to boot, others require them enabled at all times.
Although these are enabled by default in the installer, they are disabled after the first stage of a successful install.
You can re-enable them either as part of the install process:
@@ -178,7 +178,7 @@ Or if you have already rebooted after the first stage install and have encounter
## Installation completes successfully but then system crash/restarts on next boot
Some Dell systems and probably others have [another bug in UEFI firmware](https://web.archive.org/web/20170901231026/https://markmail.org/message/amw5336otwhdxi76).
These systems need `efi=attr=uc` enabled at all times.
Although this is enabled by default in the installer, it is disabled after the first stage of a successful install.
You can re-enable it either as part of the install process:
@@ -19,7 +19,7 @@ For guidance setting up a USB qube, see the [USB documentation](/doc/how-to-use-
Currently (until issue [1082](https://github.com/QubesOS/qubes-issues/issues/1082) gets implemented), if you remove the device before detaching it from the qube, Qubes OS (more precisely, `libvirtd`) will think that the device is still attached to the qube and will not allow attaching further devices under the same name.
This may be characterized by VM manager crashes and the error message: `Houston, we have a problem`.
The easiest way to recover from such a situation is to reboot the qube to which the device was attached.
If this isn't an option, you can manually recover from the situation by following the instructions at the [Block Devices documentation](/doc/how-to-use-block-storage-devices/#what-if-i-removed-the-device-before-detaching-it-from-the-vm).

## "Device attach failed" error
@@ -46,5 +46,6 @@ sudo notify-send "$(hostname): Test notify-send OK" --icon=network-idle
You should see the `info` message appear at the top of your screen.
If that is the case, then `notify-send` is not the issue.
If it is not, and you get an error of some sort, you can:

1. Remove all calls to `notify-send` from the scripts you are using to start your VPN.
2. Use another template qube that has a working `notify-send`, or find a proper guide and get `notify-send` working properly in your current template.