mirror of https://github.com/QubesOS/qubes-doc.git
synced 2025-08-07 14:12:18 -04:00

Reorganize files to account for new "External" section

QubesOS/qubes-issues#4693

parent 5cc99a23d1
commit d31c786942

203 changed files with 0 additions and 0 deletions
98 user/advanced-configuration/bind-dirs.md Normal file
@@ -0,0 +1,98 @@
---
layout: doc
title: How to make any file in a TemplateBasedVM persistent using bind-dirs
permalink: /doc/bind-dirs/
redirect_from:
- /en/doc/bind-dirs/
---

# How to make any file in a TemplateBasedVM persistent using bind-dirs #

## What are bind-dirs? ##

With [bind-dirs](https://github.com/QubesOS/qubes-core-agent-linux/blob/master/vm-systemd/bind-dirs.sh) any arbitrary files or folders can be made persistent in TemplateBasedVMs.

## What is it useful for? ##

In a TemplateBasedVM all of the file system comes from the template except `/home`, `/usr/local`, and `/rw`.
This means that changes to the rest of the filesystem are lost when the TemplateBasedVM is shut down.
bind-dirs provides a mechanism whereby files usually taken from the template can be persisted across reboots.

For example, in Whonix, [Tor's data dir /var/lib/tor has been made persistent in the TemplateBased ProxyVM sys-whonix](https://github.com/Whonix/qubes-whonix/blob/8438d13d75822e9ea800b9eb6024063f476636ff/usr/lib/qubes-bind-dirs.d/40_qubes-whonix.conf#L5).
In this way sys-whonix can benefit from the Tor anonymity feature 'persistent Tor entry guards' without having to be a StandaloneVM.

## How to use bind-dirs.sh? ##

Perform the following steps inside your TemplateBasedVM.

1. Make sure the folder `/rw/config/qubes-bind-dirs.d` exists.

        sudo mkdir -p /rw/config/qubes-bind-dirs.d

2. Create the file `/rw/config/qubes-bind-dirs.d/50_user.conf` with root rights inside the TemplateBasedVM.

3. Edit 50_user.conf to append a folder or file name to the `binds` variable. (The following example uses the folder `/var/lib/tor`; replace it with a folder or file name of your choice.)

        binds+=( '/var/lib/tor' )

   Multiple entries are possible, each on a separate line.

4. Save.

5. Reboot the TemplateBasedVM.

6. Done.

If, for example, you added the folder `/var/lib/tor` to the `binds` variable, any files within that folder will now persist across reboots. If you added the file `/etc/tor/torrc`, any modifications to that file will persist across reboots.
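
The steps above can be condensed into a short shell sketch. This is an illustration only: the `TARGET` variable stands in for the real `/rw/config` tree so the commands can be tried without root; in a real TemplateBasedVM, drop the variable and operate on `/rw/config/qubes-bind-dirs.d` directly, with `sudo`.

```shell
# Sketch of steps 1-3. TARGET simulates /rw/config so this runs unprivileged;
# on a real qube use sudo and the real /rw/config path instead.
TARGET="$(mktemp -d)"                 # stands in for /rw/config
mkdir -p "$TARGET/qubes-bind-dirs.d"  # step 1: make sure the folder exists
conf="$TARGET/qubes-bind-dirs.d/50_user.conf"
printf "binds+=( '/var/lib/tor' )\n" >> "$conf"  # steps 2-3: create and append
cat "$conf"
```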

## Other Configuration Folders ##

* `/usr/lib/qubes-bind-dirs.d` (lowest priority, for packages)
* `/etc/qubes-bind-dirs.d` (intermediate priority, for template-wide configuration)
* `/rw/config/qubes-bind-dirs.d` (highest priority, for per-VM configuration)

## How does it work? ##

bind-dirs.sh is called on startup of a TemplateBasedVM, and configuration files in the configuration folders above are parsed to build a bash array.
Files or folders identified in the array are copied to `/rw/bind-dirs` if they do not already exist there, and are then bind mounted over the original files/folders.

Creation of the files and folders in `/rw/bind-dirs` should be automatic the first time the TemplateBasedVM is restarted after configuration.

If you want to circumvent this process, you can create the relevant file structure under `/rw/bind-dirs` and make any changes at the same time that you perform the configuration, before reboot.
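
The copy-then-bind-mount behaviour can be sketched in a few lines of shell. This is an illustration, not the real `/usr/lib/qubes/bind-dirs.sh`: `ROOT` and `RW` are scratch directories standing in for `/` and `/rw/bind-dirs`, and the final bind mount is shown only as a comment because it requires root.

```shell
# Simplified sketch of what bind-dirs.sh does for one configured entry.
ROOT="$(mktemp -d)"   # stands in for /
RW="$(mktemp -d)"     # stands in for /rw/bind-dirs
mkdir -p "$ROOT/var/lib/tor"
echo 'guard data' > "$ROOT/var/lib/tor/state"
entry="/var/lib/tor"
# 1. Copy the original into the persistent area if not already present.
if [ ! -e "$RW$entry" ]; then
    mkdir -p "$(dirname "$RW$entry")"
    cp -a "$ROOT$entry" "$RW$entry"
fi
# 2. The real script then bind mounts the persistent copy over the original:
#        mount --bind "$RW$entry" "$ROOT$entry"
cat "$RW/var/lib/tor/state"
```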

## Limitations ##

* Files that exist in the TemplateVM root image cannot be deleted in the TemplateBasedVM's root image using bind-dirs.sh.
* Re-running `sudo /usr/lib/qubes/bind-dirs.sh` without a previous `sudo /usr/lib/qubes/bind-dirs.sh umount` does not work.
* Running `sudo /usr/lib/qubes/bind-dirs.sh umount` after boot (before shutdown) is probably not sane, and nothing can be done about that.
* Many editors create a temporary file and copy it over the original file. If you have bind mounted an individual file, this will break the mount.
  Any changes you make will not survive a reboot. If you think it likely you will want to edit a file, then either include the parent directory in bind-dirs rather than the file itself, or perform the file operation on the file in `/rw/bind-dirs`.
* Some files are altered when a qube boots, e.g. `/etc/hosts`. If you try to use bind-dirs on such files you may break your qube in unpredictable ways.

You can add persistent rules to the `/etc/hosts` file using the script `/rw/config/rc.local`, which is designed for overriding configuration in /etc, starting services, and the like. For example, to make software inside some TemplateBasedVM resolve the domain `example.com` as `127.0.0.1`, open `/rw/config/rc.local` inside this TemplateBasedVM and add:

~~~
echo '127.0.0.1 example.com' >> /etc/hosts
~~~

After every boot of the TemplateBasedVM, the `rc.local` script will append the line `127.0.0.1 example.com` to the `/etc/hosts` file, and the software inside the TemplateBasedVM will resolve the domain `example.com` accordingly. You can add several rules to `/etc/hosts` the same way.
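
The editor limitation noted above can be demonstrated without bind-dirs at all: a typical "safe save" replaces the file's inode, so a mount attached to the old inode would keep showing stale content. A minimal illustration using a scratch file, no root needed:

```shell
# Many editors save by writing a temp file and renaming it over the original.
# The rename swaps the inode, which is what breaks a per-file bind mount.
f="$(mktemp -d)/torrc"
echo 'old' > "$f"
before=$(stat -c %i "$f")
printf 'new\n' > "$f.tmp"
mv "$f.tmp" "$f"          # typical editor "safe save"
after=$(stat -c %i "$f")
if [ "$before" != "$after" ]; then
    echo 'inode changed'  # a bind mount would still point at the old inode
fi
```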

## How to remove binds from bind-dirs.sh? ##

`binds` is actually just a bash variable (an array), and the bind-dirs.sh configuration folders are `source`d as bash snippets in lexical order.
Therefore, if you wanted to remove an existing entry from the `binds` array, you could do so by using a lexically higher configuration file.
For example, if you wanted to make `/var/lib/tor` non-persistent in `sys-whonix` without manually editing [`/usr/lib/qubes-bind-dirs.d/40_qubes-whonix.conf`](https://github.com/Whonix/qubes-whonix/blob/master/usr/lib/qubes-bind-dirs.d/40_qubes-whonix.conf), you could use the following.

`/rw/config/qubes-bind-dirs.d/50_user.conf`

~~~
binds=( "${binds[@]/'/var/lib/tor'}" )
~~~

(Editing `/usr/lib/qubes-bind-dirs.d/40_qubes-whonix.conf` directly is strongly discouraged, since such changes are lost when that file is updated by the package on upgrades.)

## Discussion ##

[TemplateBasedVMs: make selected files and folders located in the root image persistent - review bind-dirs.sh](https://groups.google.com/forum/#!topic/qubes-devel/tcYQ4eV-XX4/discussion)
101 user/advanced-configuration/config-files.md Normal file
@@ -0,0 +1,101 @@
---
layout: doc
title: Config Files
permalink: /doc/config-files/
redirect_from:
- /en/doc/config-files/
- /doc/ConfigFiles/
- "/doc/UserDoc/ConfigFiles/"
- "/wiki/UserDoc/ConfigFiles/"
---

Configuration Files
===================

Qubes-specific VM config files
------------------------------

These files are placed in /rw, which survives a VM restart.
That way, they can be used to customize a single VM instead of all VMs based on the same template.
The scripts here all run as root.

- `/rw/config/rc.local` - script runs at VM startup.
  A good place to change some service settings, replace config files with copies stored in /rw/config, etc.
  Example usage:

  ~~~
  # Store bluetooth keys in /rw to keep them across VM restarts
  rm -rf /var/lib/bluetooth
  ln -s /rw/config/var-lib-bluetooth /var/lib/bluetooth
  ~~~

- `/rw/config/qubes-ip-change-hook` - script runs in a NetVM after every external IP change and on "hardware" link status change.

- In ProxyVMs (or AppVMs with the `qubes-firewall` service enabled), scripts placed in the following directories will be executed in the listed order, followed by `qubes-firewall-user-script`, after each firewall update.
  This is a good place to write your own custom firewall rules.

  ~~~
  /etc/qubes/qubes-firewall.d
  /rw/config/qubes-firewall.d
  /rw/config/qubes-firewall-user-script
  ~~~

- `/rw/config/suspend-module-blacklist` - list of modules (one per line) to be unloaded before the system goes to sleep.
  The file is used only in a VM with PCI devices attached.
  Intended for use with problematic device drivers.

Note that scripts need to be executable (chmod +x) to be used.
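
A quick way to verify this is to create the script, mark it executable, and run it by hand. A minimal sketch follows; the script is written to a temporary path here so it can be tried without root, whereas on a real qube the file would be `/rw/config/rc.local`.

```shell
# Hypothetical minimal rc.local, written to a temp file for illustration;
# on a real qube this content would live at /rw/config/rc.local.
RC="$(mktemp)"
cat > "$RC" <<'EOF'
#!/bin/sh
echo 'rc.local ran'
EOF
chmod +x "$RC"   # without the executable bit, the script will not be run
"$RC"
```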

Also, take a look at [bind-dirs](/doc/bind-dirs) for instructions on how to easily modify arbitrary system files in an AppVM and have those changes persist.

GUI and audio configuration in dom0
-----------------------------------

The GUI configuration file `/etc/qubes/guid.conf` is one of a few not managed by qubes-prefs or the Qubes Manager tool.
Sample config (included in the default installation):

~~~
# Sample configuration file for Qubes GUI daemon
# For syntax go http://www.hyperrealm.com/libconfig/libconfig_manual.html

global: {
  # default values
  #allow_fullscreen = false;
  #allow_utf8_titles = false;
  #secure_copy_sequence = "Ctrl-Shift-c";
  #secure_paste_sequence = "Ctrl-Shift-v";
  #windows_count_limit = 500;
  #audio_low_latency = false;
};

# most of setting can be set per-VM basis

VM: {
  work: {
    #allow_utf8_titles = true;
  };
  video-vm: {
    #allow_fullscreen = true;
  };
};
~~~

Currently supported settings:

- `allow_fullscreen` - allow the VM to request its windows to go fullscreen (without any colorful frame).

  **Note:** Regardless of this setting, you can always put a window into fullscreen mode in Xfce4 using the trusted window manager by right-clicking on a window's title bar and selecting "Fullscreen".
  This functionality should still be considered safe, since a VM window still can't voluntarily enter fullscreen mode.
  The user must select this option from the trusted window manager in dom0.
  To exit fullscreen mode from here, press `alt` + `space` to bring up the title bar menu again, then select "Leave Fullscreen".

- `allow_utf8_titles` - allow the use of UTF-8 in window titles; otherwise, non-ASCII characters are replaced by an underscore.

- `secure_copy_sequence` and `secure_paste_sequence` - key sequences used to trigger secure copy and paste.

- `windows_count_limit` - limit on concurrent windows.

- `audio_low_latency` - force low-latency audio mode (about 40 ms compared to 200-500 ms by default).
  Note that this will cause much higher CPU usage in dom0.
306 user/advanced-configuration/disposablevm-customization.md Normal file
@@ -0,0 +1,306 @@
---
layout: doc
title: DisposableVM Customization
permalink: /doc/disposablevm-customization/
redirect_from:
- /doc/dispvm-customization/
- /en/doc/dispvm-customization/
- /doc/DispVMCustomization/
- /doc/UserDoc/DispVMCustomization/
- /wiki/UserDoc/DispVMCustomization/
---

# DisposableVM Customization

## Introduction

A DisposableVM (previously known as a "DispVM") can be based on any TemplateBasedVM.
You can also choose to use different DisposableVM Templates for different DisposableVMs.
To prepare an AppVM to be a DisposableVM Template, you need to set the `template_for_dispvms` property, for example:

    [user@dom0 ~]$ qvm-prefs fedora-26-dvm template_for_dispvms True

Additionally, if you want to have menu entries for starting applications in DisposableVMs based on this AppVM (instead of in the AppVM itself), you can achieve it with the `appmenus-dispvm` feature:

    [user@dom0 ~]$ qvm-features fedora-26-dvm appmenus-dispvm 1

## Security

If a DisposableVM Template becomes compromised, then any DisposableVM based on that DisposableVM Template could be compromised.
Therefore, you should not make any risky customizations (e.g., installing untrusted browser plugins) in important DisposableVM Templates.
In particular, the *default* DisposableVM Template is important because it is used by the "Open in DisposableVM" feature.
This means that it will have access to everything that you open with this feature.
For this reason, it is strongly recommended that you base the default DisposableVM Template on a trusted TemplateVM and refrain from making any risky customizations to it.

## Creating a new DisposableVM Template

In Qubes 4.0, you're no longer restricted to a single DisposableVM Template. Instead, you can create as many as you want. Whenever you start a new DisposableVM, you can choose to base it on whichever DisposableVM Template you like.
To create a new DisposableVM Template, let's say `custom-disposablevm-template`, based on the `debian-9` template, use the following commands:

    [user@dom0 ~]$ qvm-create --template debian-9 --label red custom-disposablevm-template
    [user@dom0 ~]$ qvm-prefs custom-disposablevm-template template_for_dispvms True
    [user@dom0 ~]$ qvm-features custom-disposablevm-template appmenus-dispvm 1

Additionally, you may want to set it as the default DisposableVM Template:

    [user@dom0 ~]$ qubes-prefs default_dispvm custom-disposablevm-template

The above default is used whenever a qube requests starting a new DisposableVM without specifying which one (as the `qvm-open-in-dvm` tool does, for example). This can also be set in a qube's settings and will affect service calls from that qube. See the [qrexec documentation](/doc/qrexec3/#extra-keywords-available-in-qubes-40-and-later) for details.

If you wish to use the `fedora-minimal` template as a DisposableVM Template, see the "DisposableVM Template" use case under [fedora-minimal customization](/doc/templates/fedora-minimal/#customization).

## Customization of DisposableVM

It is possible to change the settings for each new DisposableVM.
This can be done by customizing the DisposableVM Template on which it is based:

1. Start a terminal in the `fedora-26-dvm` qube (or another DisposableVM Template) by running the following command in a dom0 terminal. (If you enabled the `appmenus-dispvm` feature (as explained at the top), the applications menu for this VM (`fedora-26-dvm`) will be "Disposable: fedora-26-dvm" instead of "Domain: fedora-26-dvm", and entries there will start a new DisposableVM based on that VM (`fedora-26-dvm`), not that VM itself.)

        [user@dom0 ~]$ qvm-run -a fedora-26-dvm gnome-terminal

2. Change the qube's settings and/or applications, as desired. Some examples of changes you may want to make include:
   - Changing Firefox's default startup settings and homepage.
   - Changing the default editor and image viewer.
   - Changing the DisposableVM's default NetVM. For example, you may wish to set the NetVM to "none." Then, whenever you start a new DisposableVM, you can choose your desired ProxyVM manually (by changing the newly-started DisposableVM's settings). This is useful if you sometimes wish to use a DisposableVM with a Whonix Gateway, for example. It is also useful if you sometimes wish to open untrusted files in a network-disconnected DisposableVM.

3. Shut down the qube (either with `poweroff` from the qube's terminal, or `qvm-shutdown` from a dom0 terminal).

## Using static DisposableVMs for sys-*

You can use a static DisposableVM for `sys-*` as long as it is stateless.
For example, a `sys-net` using DHCP or `sys-usb` will work.
In most cases `sys-firewall` will also work, even if you have configured AppVM firewall rules.
The only exception is if you require something like VM-to-VM communication and have manually edited `iptables` or other items directly inside the firewall AppVM.

To create one that has no PCI devices attached, such as for `sys-firewall`:

~~~
qvm-create -C DispVM -l red <sys-VMName>
qvm-prefs <sys-VMName> autostart true
qvm-prefs <sys-VMName> netvm <sys-net>
qvm-prefs <sys-VMName> provides_network true
~~~

Next, set the old `sys-firewall` autostart to false, and update any references to the old one to point to the new one instead.
For example, with `qvm-prefs work netvm sys-firewall2`.

To create one with a PCI device attached, such as for `sys-net` or `sys-usb`, use the additional commands as follows.

**Note** You can use `qvm-pci` to [determine](/doc/pci-devices/#qvm-pci-usage) the `<BDF>`.
Also, you will often need to include the `-o no-strict-reset=True` [option](/doc/pci-devices/#no-strict-reset) with USB controllers.

~~~
qvm-create -C DispVM -l red <sys-VMName>
qvm-prefs <sys-VMName> virt_mode hvm
qvm-service <sys-VMName> meminfo-writer off
qvm-pci attach --persistent <sys-VMName> dom0:<BDF>
qvm-prefs <sys-VMName> autostart true
qvm-prefs <sys-VMName> netvm ''
# optional, if this DisposableVM will be providing networking
qvm-prefs <sys-VMName> provides_network true
~~~

Next, set the old `sys-` VM's autostart to false, and update any references to the old one.
For example, `qvm-prefs sys-firewall netvm <sys-VMName>`.
See below for a complete example of a `sys-net` replacement:

~~~
qvm-create -C DispVM -l red sys-net2
qvm-prefs sys-net2 virt_mode hvm
qvm-service sys-net2 meminfo-writer off
qvm-pci attach --persistent sys-net2 dom0:00_1a.0
qvm-prefs sys-net2 autostart true
qvm-prefs sys-net2 netvm ''
qvm-prefs sys-net2 provides_network true
qvm-prefs sys-net autostart false
qvm-prefs sys-firewall netvm sys-net2
qubes-prefs clockvm sys-net2
~~~

Note that these types of DisposableVMs will not show up in the Application menu, but you can still get to a terminal if needed with `qvm-run <sys-VMName> gnome-terminal`.

## Adding programs to the DisposableVM Application Menu

For added convenience, arbitrary programs can be added to the Application Menu of the DisposableVM.

To do that, open the "Qube settings" of the selected base AppVM, go to the "Applications" tab, and select the desired applications, as for any other qube.

Note that currently only applications whose main process keeps running until you close the application (i.e., that do not start a background process instead) will work. A known example of an incompatible application is GNOME Terminal (shown in the list as "Terminal"). Choose a different terminal emulator (such as XTerm) instead.

## Create Custom sys-net sys-firewall and sys-usb DisposableVMs

Users have the option of creating customized DisposableVMs for the `sys-net`, `sys-firewall` and `sys-usb` VMs. In this configuration, a fresh VM instance is created each time a DisposableVM is launched. Functionality is near-identical to the default VMs created following a new Qubes installation, except that the user benefits from a non-persistent filesystem.

Functionality is not limited; users can:

* Set custom firewall rule sets and run Qubes VPN scripts.
* Set DisposableVMs to autostart at system boot.
* Attach PCI devices with the `--persistent` option.

Using DisposableVMs in this manner is ideal for untrusted qubes which require persistent PCI devices, such as USB VMs and NetVMs.

>_**Note:**_ Users who want customized VPN or firewall rule sets must create a separate DisposableVM Template for use by each DisposableVM. If DisposableVM Template customization is not needed, then a single DisposableVM Template can serve as the template for all DisposableVMs.

### Create and configure the DisposableVM Template on which the DisposableVM will be based

1. Create the DisposableVM Template

        [user@dom0 ~]$ qvm-create --class AppVM --label gray <DisposableVM-Template-Name>

2. _(optional)_ In the DisposableVM Template, add custom firewall rule sets, Qubes VPN scripts, etc.

   Firewall rule sets and Qubes VPN scripts can be added just as in any other VM.

3. Set the DisposableVM Template as the template for DisposableVMs

        [user@dom0 ~]$ qvm-prefs <DisposableVM-Template-Name> template_for_dispvms true

### Create the sys-net DisposableVM

1. Create the `sys-net` DisposableVM based on the DisposableVM Template

        [user@dom0 ~]$ qvm-create --template <DisposableVM-Template-Name> --class DispVM --label red disp-sys-net

2. Set the `disp-sys-net` virtualization mode to [hvm](/doc/hvm/)

        [user@dom0 ~]$ qvm-prefs disp-sys-net virt_mode hvm

3. Set `disp-sys-net` to provide network for other VMs

        [user@dom0 ~]$ qvm-prefs disp-sys-net provides_network true

4. Set the `disp-sys-net` NetVM to none

        [user@dom0 ~]$ qvm-prefs disp-sys-net netvm ""

5. List all available PCI devices to determine the correct _backend:BDF_ address(es) to assign to `disp-sys-net`

        [user@dom0 ~]$ qvm-pci

6. Attach the network PCI device(s) to `disp-sys-net`. Instructions for finding and assigning PCI devices can be found [here](/doc/pci-devices/)

        [user@dom0 ~]$ qvm-pci attach --persistent disp-sys-net <backend>:<bdf>

7. _(recommended)_ Set `disp-sys-net` to start automatically when Qubes boots

        [user@dom0 ~]$ qvm-prefs disp-sys-net autostart true

8. _(optional)_ Set `disp-sys-net` as the dom0 time source

        [user@dom0 ~]$ qubes-prefs clockvm disp-sys-net

### Create the sys-firewall DisposableVM

1. Create the `sys-firewall` DisposableVM

        [user@dom0 ~]$ qvm-create --template <DisposableVM-Template-Name> --class DispVM --label green disp-sys-firewall

2. Set `disp-sys-firewall` to provide network for other VMs

        [user@dom0 ~]$ qvm-prefs disp-sys-firewall provides_network true

3. Set `disp-sys-net` as the NetVM for `disp-sys-firewall`

        [user@dom0 ~]$ qvm-prefs disp-sys-firewall netvm disp-sys-net

4. Set `disp-sys-firewall` as the NetVM for other AppVMs

        [user@dom0 ~]$ qvm-prefs <vm_name> netvm disp-sys-firewall

5. _(recommended)_ Set `disp-sys-firewall` to auto-start when Qubes boots

        [user@dom0 ~]$ qvm-prefs disp-sys-firewall autostart true

6. _(optional)_ Set `disp-sys-firewall` as the default NetVM

        [user@dom0 ~]$ qubes-prefs default_netvm disp-sys-firewall

### Create the sys-usb DisposableVM

1. Create the `disp-sys-usb`

        [user@dom0 ~]$ qvm-create --template <disposablevm-template-name> --class DispVM --label red disp-sys-usb

2. Set the `disp-sys-usb` virtualization mode to hvm

        [user@dom0 ~]$ qvm-prefs disp-sys-usb virt_mode hvm

3. Set the `disp-sys-usb` NetVM to none

        [user@dom0 ~]$ qvm-prefs disp-sys-usb netvm ""

4. List all available PCI devices

        [user@dom0 ~]$ qvm-pci

5. Attach the USB controller to the `disp-sys-usb`

   >_**Note:**_ Most of the commonly used USB controllers (all Intel integrated controllers) require the `-o no-strict-reset=True` option to be set. Instructions detailing how this option is set can be found [here](/doc/pci-devices/#no-strict-reset).

        [user@dom0 ~]$ qvm-pci attach --persistent disp-sys-usb <backend>:<bdf>

6. _(optional)_ Set `disp-sys-usb` to auto-start when Qubes boots

        [user@dom0 ~]$ qvm-prefs disp-sys-usb autostart true

7. Users should now follow the instructions on [How to hide USB controllers from dom0](/doc/usb-qubes/#how-to-hide-all-usb-controllers-from-dom0)

8. At this point, your mouse may not work.
   Edit the `qubes.InputMouse` policy file in dom0, which is located here:

        /etc/qubes-rpc/policy/qubes.InputMouse

   Add a line like this to the top of the file:

        disp-sys-usb dom0 allow,user=root

### Starting the DisposableVMs

Prior to starting the new VMs, users should ensure that no other VMs, such as the old `sys-net` and `sys-usb` VMs, are running. This is because no two VMs can share the same PCI device while both are running. It is recommended that users detach the PCI devices from the old VMs without deleting them. This will allow users to reattach the PCI devices if the newly created DisposableVMs fail to start.

Detach a PCI device from a VM:

    [user@dom0 ~]$ qvm-pci detach <vm_name> <backend>:<bdf>

### Troubleshooting

The `disp-sys-usb` VM does not start:

If the `disp-sys-usb` does not start, it could be due to a PCI passthrough problem. For more details on this issue, along with possible solutions, users can look [here](/doc/pci-devices/#pci-passthrough-issues).

## Deleting DisposableVMs

While working in a DisposableVM, you may want to open a document in another DisposableVM.
For this reason, the property `default_dispvm` may be set to the name of your DisposableVM in a number of VMs:

    [user@dom0 ~]$ qvm-prefs workvm | grep default_dispvm
    default_dispvm - custom-disposablevm-template

This will prevent the deletion of the DisposableVM Template. In order to fix this, you need to unset the `default_dispvm` property:

    [user@dom0 ~]$ qvm-prefs workvm default_dispvm ""

You can then delete the DisposableVM Template:

    [user@dom0 ~]$ qvm-remove custom-disposablevm-template
    This will completely remove the selected VM(s)
     custom-disposablevm-template

If you still encounter the issue, you may have forgotten to clean up an entry. Looking at the system logs will help you:

    [user@dom0 ~]$ journalctl | tail
333 user/advanced-configuration/managing-vm-kernel.md Normal file
@@ -0,0 +1,333 @@
---
layout: doc
title: Managing VM kernel
permalink: /doc/managing-vm-kernel/
redirect_from:
- /en/doc/managing-vm-kernel/
---

VM kernel managed by dom0
=========================

By default, VM kernels are provided by dom0. This means that:

1. You can select the kernel version (using the GUI VM Settings tool or the `qvm-prefs` command-line tool);
2. You can modify kernel options (using the `qvm-prefs` command-line tool);
3. You can **not** modify any of the above from inside a VM;
4. Installing additional kernel modules is cumbersome.

*Note* In the examples below, although the specific version numbers might be old, the commands have been verified on R3.2 and R4.0 with debian-9 and fedora-26 templates.

To select which kernel a given VM will use, you can either use Qubes Manager (VM settings, Advanced tab) or the `qvm-prefs` tool:

~~~
[user@dom0 ~]$ qvm-prefs -s my-appvm kernel
Missing kernel version argument!
Possible values:
1) default
2) none (kernels subdir in VM)
3) <kernel version>, one of:
  - 3.18.16-3
  - 3.18.17-4
  - 3.19.fc20
  - 3.18.10-2
[user@dom0 ~]$ qvm-prefs -s my-appvm kernel 3.18.17-4
[user@dom0 ~]$ qvm-prefs -s my-appvm kernel default
~~~

To check or change the default kernel, you can either go to "Global settings" in Qubes Manager or use the `qubes-prefs` tool:

~~~
[user@dom0 ~]$ qubes-prefs
clockvm          : sys-net
default-fw-netvm : sys-net
default-kernel   : 3.18.17-4
default-netvm    : sys-firewall
default-template : fedora-21
updatevm         : sys-firewall
[user@dom0 ~]$ qubes-prefs -s default-kernel 3.19.fc20
~~~

To view kernel options, you can use the GUI VM Settings tool; to view and change them, use the `qvm-prefs` command-line tool:

~~~
[user@dom0 ~]$ qvm-prefs -g work kernelopts
nopat
[user@dom0 ~]$ qvm-prefs -s work kernelopts "nopat apparmor=1 security=apparmor"
~~~
||||
|
||||
Installing different kernel using Qubes kernel package
|
||||
----------------------------------
|
||||
|
||||
VM kernels are packages by Qubes team in `kernel-qubes-vm` packages.
|
||||
Generally, the system will keep the three newest available versions.
|
||||
You can list them with the `rpm` command:
|
||||
|
||||
~~~
|
||||
[user@dom0 ~]$ rpm -qa 'kernel-qubes-vm*'
|
||||
kernel-qubes-vm-3.18.10-2.pvops.qubes.x86_64
|
||||
kernel-qubes-vm-3.18.16-3.pvops.qubes.x86_64
|
||||
kernel-qubes-vm-3.18.17-4.pvops.qubes.x86_64
|
||||
~~~
|
||||
|
||||
If you want a more recent version, you can check the `qubes-dom0-unstable` repository.
|
||||
There is also the `kernel-latest-qubes-vm` package which should provide a more recent (non-LTS) kernel, but has received much less testing.
|
||||
As the names suggest, keep in mind that those packages may be less stable than the default ones.
|
||||
|
||||
To check available versions in the `qubes-dom0-unstable` repository:
|
||||
|
||||
~~~
|
||||
[user@dom0 ~]$ sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable --action=list kernel-qubes-vm
|
||||
Using sys-firewall as UpdateVM to download updates for Dom0; this may take some time...
|
||||
Running command on VM: 'sys-firewall'...
|
||||
Loaded plugins: langpacks, post-transaction-actions, yum-qubes-hooks
|
||||
Installed Packages
|
||||
kernel-qubes-vm.x86_64 1000:3.18.10-2.pvops.qubes installed
|
||||
kernel-qubes-vm.x86_64 1000:3.18.16-3.pvops.qubes installed
|
||||
kernel-qubes-vm.x86_64 1000:3.18.17-4.pvops.qubes installed
|
||||
Available Packages
|
||||
kernel-qubes-vm.x86_64 1000:4.1.12-6.pvops.qubes qubes-dom0-unstable
|
||||
No packages downloaded
|
||||
Installed Packages
|
||||
kernel-qubes-vm.x86_64 1000:3.18.10-2.pvops.qubes @anaconda/R3.0
|
||||
kernel-qubes-vm.x86_64 1000:3.18.16-3.pvops.qubes @/kernel-qubes-vm-3.18.16-3.pvops.qubes.x86_64
|
||||
kernel-qubes-vm.x86_64 1000:3.18.17-4.pvops.qubes @qubes-dom0-cached
|
||||
|
||||
~~~
|
||||
|
||||
Installing a new version from `qubes-dom0-unstable` repository:
|
||||
|
||||
~~~
|
||||
[user@dom0 ~]$ sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable kernel-qubes-vm
Using sys-firewall as UpdateVM to download updates for Dom0; this may take some time...
Running command on VM: 'sys-firewall'...
Loaded plugins: langpacks, post-transaction-actions, yum-qubes-hooks
Resolving Dependencies
(...)

===========================================================================================
 Package             Arch     Version                      Repository            Size
===========================================================================================
Installing:
 kernel-qubes-vm     x86_64   1000:4.1.12-6.pvops.qubes    qubes-dom0-cached     40 M
Removing:
 kernel-qubes-vm     x86_64   1000:3.18.10-2.pvops.qubes   @anaconda/R3.0       134 M

Transaction Summary
===========================================================================================
Install  1 Package
Remove   1 Package

Total download size: 40 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction (shutdown inhibited)
  Installing : 1000:kernel-qubes-vm-4.1.12-6.pvops.qubes.x86_64    1/2
mke2fs 1.42.12 (29-Aug-2014)
This kernel version is used by at least one VM, cannot remove
error: %preun(kernel-qubes-vm-1000:3.18.10-2.pvops.qubes.x86_64) scriptlet failed, exit status 1
Error in PREUN scriptlet in rpm package 1000:kernel-qubes-vm-3.18.10-2.pvops.qubes.x86_64
  Verifying  : 1000:kernel-qubes-vm-4.1.12-6.pvops.qubes.x86_64    1/2
  Verifying  : 1000:kernel-qubes-vm-3.18.10-2.pvops.qubes.x86_64   2/2

Installed:
  kernel-qubes-vm.x86_64 1000:4.1.12-6.pvops.qubes

Failed:
  kernel-qubes-vm.x86_64 1000:3.18.10-2.pvops.qubes

Complete!
[user@dom0 ~]$
~~~

In the above example, yum tries to remove the 3.18.10-2.pvops.qubes kernel (to keep only three kernels installed), but since some VM still uses it, the removal fails.
Installation of the new package is unaffected by this event.

The newly installed package is set as the default VM kernel.

Installing different VM kernel based on dom0 kernel
---------------------------------------------------

It is possible to package a kernel installed in dom0 as a VM kernel.
This makes it possible to use a VM kernel which is not packaged by the Qubes team.
This includes:

* using a Fedora kernel package
* using a manually compiled kernel

To prepare such a VM kernel, you need to install the `qubes-kernel-vm-support` package in dom0 and also have matching kernel headers installed (the `kernel-devel` package in the case of a Fedora kernel package).
You can install the requirements using `qubes-dom0-update`:

~~~
[user@dom0 ~]$ sudo qubes-dom0-update qubes-kernel-vm-support kernel-devel
Using sys-firewall as UpdateVM to download updates for Dom0; this may take some time...
Running command on VM: 'sys-firewall'...
Loaded plugins: langpacks, post-transaction-actions, yum-qubes-hooks
Package 1000:kernel-devel-4.1.9-6.pvops.qubes.x86_64 already installed and latest version
Resolving Dependencies
(...)

================================================================================
 Package                   Arch     Version        Repository             Size
================================================================================
Installing:
 qubes-kernel-vm-support   x86_64   3.1.2-1.fc20   qubes-dom0-cached     9.2 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 9.2 k
Installed size: 13 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction (shutdown inhibited)
  Installing : qubes-kernel-vm-support-3.1.2-1.fc20.x86_64    1/1

Creating symlink /var/lib/dkms/u2mfn/3.1.2/source ->
                 /usr/src/u2mfn-3.1.2

DKMS: add completed.
  Verifying  : qubes-kernel-vm-support-3.1.2-1.fc20.x86_64    1/1

Installed:
  qubes-kernel-vm-support.x86_64 0:3.1.2-1.fc20

Complete!
~~~

Then you can call the `qubes-prepare-vm-kernel` tool to actually package the kernel.
The first parameter is the kernel version (exactly as seen by the kernel); the second, optional parameter is a short name.
This short name is what is visible in Qubes Manager and the `qvm-prefs` tool.

~~~
[user@dom0 ~]$ sudo qubes-prepare-vm-kernel 4.1.9-6.pvops.qubes.x86_64 4.1.qubes
--> Building files for 4.1.9-6.pvops.qubes.x86_64 in /var/lib/qubes/vm-kernels/4.1.qubes
---> Recompiling kernel module (u2mfn)
---> Generating modules.img
mke2fs 1.42.12 (29-Aug-2014)
---> Generating initramfs
--> Done.
~~~

Kernel files structure
-----------------------

The kernel for a VM is stored in the `/var/lib/qubes/vm-kernels/KERNEL_VERSION` directory (with `KERNEL_VERSION` replaced by the actual version). Qubes 4.x supports the following files there:

- `vmlinuz` - kernel binary (despite the name, it does not need to be a Linux kernel)
- `initramfs` - initramfs for the kernel to load
- `modules.img` - ext4 filesystem image containing Linux kernel modules (to be mounted at `/lib/modules`); additionally, it should contain a copy of `vmlinuz` and `initramfs` in its root directory (for loading by qemu inside the stubdomain)
- `default-kernelopts-common.txt` - default kernel options, in addition to those specified with the `kernelopts` qube property (can be disabled with the `no-default-kernelopts` feature)

All the files besides `vmlinuz` are optional.
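
As a sketch, a kernel directory following this layout can be sanity-checked with a small shell function. The directory path passed in is an assumption; on a real system it would be something like `/var/lib/qubes/vm-kernels/4.1.qubes`:

~~~shell
# Report which of the expected VM kernel files are present in a directory.
# Only vmlinuz is required; the rest are optional.
check_vm_kernel_dir() {
    dir="$1"
    if [ ! -f "$dir/vmlinuz" ]; then
        echo "missing required vmlinuz"
        return 1
    fi
    echo "required vmlinuz: present"
    for f in initramfs modules.img default-kernelopts-common.txt; do
        if [ -f "$dir/$f" ]; then
            echo "optional $f: present"
        else
            echo "optional $f: absent"
        fi
    done
}
~~~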

Using kernel installed in the VM
--------------------------------

Both the debian-9 and fedora-26 templates already have grub and related tools preinstalled, so if you want to use one of the distribution kernels, all you need to do is clone either template to a new one, then:

~~~
qvm-prefs <clonetemplatename> virt_mode hvm
qvm-prefs <clonetemplatename> kernel ''
~~~

If you'd like to use a different kernel than the default, continue reading.

### Installing kernel in Fedora VM

Install whatever kernel you want.
You also need to ensure you have the `kernel-devel` package for the same kernel version installed.

If you are using a distribution kernel package (the `kernel` package), the initramfs and kernel modules may be handled automatically.
If you are using a manually built kernel, you need to handle this on your own.
Take a look at the `dkms` documentation; the `dkms autoinstall` command may be especially useful.
If you did not see the `kernel` install rebuild your initramfs, or you are using a manually built kernel, you will need to rebuild it yourself.
Replace the version numbers in the example below with the ones appropriate to the kernel you are installing:

~~~
sudo dracut -f /boot/initramfs-4.15.14-200.fc26.x86_64.img 4.15.14-200.fc26.x86_64
~~~

Once the kernel is installed, you need to create a GRUB configuration.
You may want to adjust some settings in `/etc/default/grub`; for example, lower `GRUB_TIMEOUT` to speed up VM startup.
Then, you need to generate the actual configuration.
In Fedora, this can be done using the `grub2-mkconfig` tool:

~~~
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
~~~

You can safely ignore this error message:

~~~
grub2-probe: error: cannot find a GRUB drive for /dev/mapper/dmroot. Check your device.map
~~~

Then shutdown the VM.

**Note:** You may also use `PV` mode instead of `HVM`, but this is not recommended for security purposes.
If you require `PV` mode, install `grub2-xen` in dom0 and change the template's kernel to `pvgrub2`.
Booting to a kernel inside the template is not supported under `PVH`.

### Installing kernel in Debian VM

Install whatever kernel you want, making sure to include the headers.
If you are using a distribution kernel package (the `linux-image-amd64` package), the initramfs and kernel modules should be handled automatically.
If not, or if you are building the kernel manually, do this using `dkms` and `initramfs-tools`:

    sudo dkms autoinstall -k <kernel-version> # replace <kernel-version> with the actual kernel version
    sudo update-initramfs -u

The output should look like this:

    $ sudo dkms autoinstall -k 3.16.0-4-amd64

    u2mfn:
    Running module version sanity check.
    - Original module
    - No original module exists within this kernel
    - Installation
    - Installing to /lib/modules/3.16.0-4-amd64/updates/dkms/

    depmod....

    DKMS: install completed.
    $ sudo update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-3.16.0-4-amd64

When the kernel is installed, you need to create a GRUB configuration.
You may want to adjust some settings in `/etc/default/grub`; for example, lower `GRUB_TIMEOUT` to speed up VM startup.
Then, you need to generate the actual configuration with the `update-grub2` tool:

~~~
sudo mkdir /boot/grub
sudo update-grub2
~~~

You can safely ignore this error message:

~~~
grub2-probe: error: cannot find a GRUB drive for /dev/mapper/dmroot. Check your device.map
~~~

Then shutdown the VM.

**Note:** You may also use `PV` mode instead of `HVM`, but this is not recommended for security purposes.
If you require `PV` mode, install `grub2-xen` in dom0 and change the template's kernel to `pvgrub2`.
Booting to a kernel inside the template is not supported under `PVH`.

### Troubleshooting

In case of problems, you can access the VM console using `sudo xl console VMNAME` in dom0, then access the GRUB menu.
You need to call it just after starting the VM (before `GRUB_TIMEOUT` expires); for example, in a separate dom0 terminal window.

In any case, you can later access the VM's logs (especially the VM console log, `guest-VMNAME.log`).

You can always set the kernel back to some dom0-provided value to fix a VM kernel installation.

58
user/advanced-configuration/rpc-policy.md
Normal file

@ -0,0 +1,58 @@
---
layout: doc
title: RPC Policies
permalink: /doc/rpc-policy/
---

RPC Policies
============

This document explains the basics of RPC policies in Qubes.
For more information, see [Qrexec: command execution in VMs][qrexec3].

Here's an example of an RPC policy file in dom0:

```
[user@dom0 user ~]$ cat /etc/qubes-rpc/policy/qubes.FileCopy
(...)
$tag:work $tag:work allow
$tag:work $anyvm deny
$anyvm $tag:work deny
$anyvm $anyvm ask
```

It has three columns (from left to right): source, destination, and permission.
Each row is a rule.
For example, the first row says that we're **allowed** (third column) to copy a file (since this is the policy file for `qubes.FileCopy`) **from** (first column) any VM tagged with "work" **to** (second column) any VM tagged with "work".
In other words, all the VMs tagged with "work" are allowed to copy files to each other without any prompts.
(If the third column were "ask" instead of "allow", there would be prompts.
I.e., we would be **asked** to approve the action, instead of it always being **allowed**.)

Now, the whole policy file is parsed from top to bottom.
As soon as a rule is found that matches the action being evaluated, parsing stops.
We can see what this means by looking at the second row.
It says that we're **denied** from attempting to copy a file **from** any VM tagged with "work" **to** any VM whatsoever.
(That's what the `$anyvm` keyword means -- literally any VM in the system.)
But, wait a minute, didn't we just say (in the first row) that all the VMs tagged with "work" are **allowed** to copy files to each other?
That's exactly right.
The first and second rows contradict each other, but that's intentional.
Since we know that parsing goes from top to bottom (and stops at the first match), we intentionally put the first row above the second row so that it takes precedence.
This is how we create a policy that says: "VMs tagged with 'work' are allowed to copy files to each other but not to any *other* VMs (i.e., not to VMs that *aren't* tagged with 'work')."

The third row says that we're **denied** from copying files **from** any VM in the system **to** any VM tagged with "work".
Again, since parsing goes from top to bottom, this doesn't mean that no files can ever be copied from *any* VM to a VM tagged with "work".
Rather, it means that only VMs that match an earlier rule can do so (in this case, only VMs tagged with "work").

The fourth and final row says that we're **asked** (i.e., prompted) to copy files **from** any VM in the system **to** any VM in the system.
(This rule was already in the policy file by default.
We added the first three.)
Note that it wouldn't make sense to add any rules after this one, since every possible pair of VMs will match the `$anyvm $anyvm` pattern.
Therefore, parsing will always stop at this rule, and no rules below it will ever be evaluated.
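
The top-to-bottom, first-match semantics can be sketched in a few lines of Python.
This is an illustrative model only, not the actual Qubes parser; VM names and tags below are hypothetical:

```python
# Illustrative model of first-match RPC policy evaluation.
# Tags are simplified to plain sets of strings; the real parser
# supports more keywords and syntax than shown here.
def matches(pattern, vm_name, vm_tags):
    if pattern == "$anyvm":
        return True
    if pattern.startswith("$tag:"):
        return pattern[len("$tag:"):] in vm_tags
    return pattern == vm_name

def evaluate(rules, src, src_tags, dst, dst_tags):
    """Return the action of the first rule matching (src, dst)."""
    for src_pat, dst_pat, action in rules:
        if matches(src_pat, src, src_tags) and matches(dst_pat, dst, dst_tags):
            return action  # parsing stops at the first match
    return "deny"  # no rule matched

rules = [
    ("$tag:work", "$tag:work", "allow"),
    ("$tag:work", "$anyvm", "deny"),
    ("$anyvm", "$tag:work", "deny"),
    ("$anyvm", "$anyvm", "ask"),
]

print(evaluate(rules, "work-mail", {"work"}, "work-web", {"work"}))  # allow
print(evaluate(rules, "work-mail", {"work"}, "personal", set()))     # deny
print(evaluate(rules, "personal", set(), "untrusted", set()))        # ask
```

Note how the second and third rules only ever see actions that the first rule did not already match, which is exactly why rule order matters.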

All together, the three rules we added say that all VMs tagged with "work" are allowed to copy files to each other; however, they're denied from copying files to other VMs (without the "work" tag), and other VMs (without the "work" tag) are denied from copying files to them.
The fourth rule means that the user gets prompted for any situation not already covered.

Further details about how this system works can be found in [Qrexec: command execution in VMs][qrexec3].

[qrexec3]: /doc/qrexec3/

554
user/advanced-configuration/salt.md
Normal file

@ -0,0 +1,554 @@
---
layout: doc
title: Management stack
permalink: /doc/salt/
---

# Management Infrastructure

Since the Qubes R3.1 release, we have included the Salt (also called SaltStack) management engine in dom0 by default (with some states already configured).
Salt allows administrators to easily configure their systems.
In this guide we will show how it is set up and how you can modify it for your own purposes.

In its current form the **API is provisional** and subject to change between *minor* releases.

## Understanding Salt

This document is not meant to be comprehensive Salt documentation; however, before writing anything, you need at least *some* understanding of basic Salt-related vocabulary.
For more exhaustive documentation, visit the [official site][salt-doc], though we must warn you that it is not easy to read if you are just starting to work with Salt.

### The Salt Architecture

Salt uses a client-server model, where the server (called the *master*) manages its clients (called *minions*).
In typical deployments, the administrator interacts only with the master and keeps the configurations there.
In Qubes, we don't have a master.
Instead, we have one minion, which resides in `dom0` and manages domains from there.
This setup is also supported by Salt.

Salt is a management engine (similar to Ansible, Puppet, and Chef) that enforces a particular state of a minion system.
A *state* is an end effect *declaratively* expressed by the administrator.
This is the most important concept in the entire engine.
All configurations (i.e., the states) are written in YAML.

A *pillar* is a data back-end declared by the administrator.
When states become repetitive, instead of pure YAML they can be written using a template engine (preferably Jinja2), which can use data structures specified in pillars.

A *formula* is a ready-to-use, packaged solution that combines a state and a pillar (possibly with some file templates and other auxiliary files).
There are many formulas made by helpful people all over the Internet.

A *grain* is some data that is also available in templates, but its value is not directly specified by the administrator.
For example, the distribution (e.g., `"Debian"` or `"Gentoo"`) is the value of the grain `"os"`. Grains also contain other information about the kernel, hardware, etc.

A *module* is a Python extension to Salt that is responsible for actually enforcing the state in a particular area.
It exposes some *imperative* functions for the administrator.
For example, there is a `system` module that has a `system.halt` function that, when issued, will immediately halt a domain.
There is another function called `state.highstate` which will synchronize the state of the system with the administrator's configuration/desires.

### Configuration

#### States

The smallest unit of configuration is a state.
A state is written in YAML and looks like this:

    stateid:
      cmd.run: # this is the execution module; in this case, it will execute a command in the shell
        - name: echo 'hello world' # this is a parameter of the state

The stateid has to be unique across all states running for a minion and can be used to order the execution of the referenced state.
`cmd.run` is an execution module.
It executes a command on behalf of the administrator.
`name: echo 'hello world'` is a parameter for the execution module `cmd.run`.
The module used defines which parameters can be passed to it.

There is a list of [officially available states][salt-doc-states].
There are many very useful states:

* For [managing files][salt-doc-states-file]: use this to create files or directories and change them (append lines, replace text, set their content, etc.)
* For [installing and uninstalling][salt-doc-states-pkg] packages.
* For [executing shell commands][salt-doc-states-cmd].

With these three states you can define most of the configuration of a VM.
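
For example, the three state types can be combined in one state file; the package name, file path, and command below are hypothetical placeholders:

    install web server:
      pkg.installed:
        - name: nginx
    /etc/motd:
      file.managed:
        - contents: "managed by salt"
    report done:
      cmd.run:
        - name: echo 'configuration applied'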

You can also [order the execution][salt-doc-states-order] of your states:

    D:
      cmd.run:
        - name: echo 1
        - order: last
    C:
      cmd.run:
        - name: echo 1
    B:
      cmd.run:
        - name: echo 1
        - require:
          - cmd: A
        - require_in:
          - cmd: C
    A:
      cmd.run:
        - name: echo 1
        - order: 1

The order of execution will be `A, B, C, D`.
The official documentation has more details on the [require][salt-doc-states-req] and [order][salt-doc-states-ord] arguments.

#### State Files

When configuring a system you will write one or more state files (`*.sls`) and put (or symlink) them into the main Salt directory, `/srv/salt/`.
Each state file contains multiple states and should describe some unit of configuration (e.g., a state file `mail.sls` could set up a VM for e-mail).

#### Top Files

After you have several state files, you need something to assign them to a VM.
This is done by `*.top` files ([official documentation][salt-doc-top]).
Their structure looks like this:

    environment:
      target_matching_clause:
        - statefile1
        - folder2.statefile2

In most cases, the environment will be called `base`.
The `target_matching_clause` will be used to select your minions (VMs).
It can be either the name of a VM or a regular expression.
If you are using a regular expression, you need to give Salt a hint that you are doing so:

    environment:
      ^app-(work|(?!mail).*)$:
        - match: pcre
        - statefile

For each target you can write a list of state files.
Each line is a path to a state file (without the `.sls` extension) relative to the main directory.
Each `/` is exchanged for a `.`, so you can't reference files or directories with a `.` in their name.
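
To illustrate the path mapping, here is a hypothetical top file entry annotated with the state file each line resolves to:

    base:
      my-vm:
        - statefile1           # /srv/salt/statefile1.sls
        - folder2.statefile2   # /srv/salt/folder2/statefile2.sls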

### Enabling Top Files and Applying the States

Now, because we use custom extensions to manage top files (instead of just enabling them all), to enable a particular top file you should issue the command:

    $ qubesctl top.enable my-new-vm

To list all enabled top files:

    $ qubesctl top.enabled

And to disable one:

    $ qubesctl top.disable my-new-vm

To apply the states to dom0 and all VMs:

    $ qubesctl --all state.highstate

(More information on the `qubesctl` command follows further down.)

### Template Files

You will sometimes find yourself writing repetitive states.
To solve this, there is the ability to template files or states.
This is most commonly done with [Jinja][jinja].
Jinja is similar to Python and in many cases behaves in a similar fashion, but there are sometimes differences; for example, if you set some variable inside a loop, the variable outside will not get changed.
Instead, to get this behavior, you would use a `do` statement.
So you should take a look at the [Jinja API documentation][jinja-tmp].
Documentation about using Jinja to directly call Salt functions and get data about your system can be found in the official [Salt documentation][jinja-call-salt-functions].
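
As a minimal, hypothetical sketch, a state can pull a grain value into a command via Jinja, so one state file adapts to the distribution it runs on:

    report distribution:
      cmd.run:
        - name: echo "running on {% raw %}{{ grains['os'] }}{% endraw %}"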

## Salt Configuration, QubesOS layout

All Salt configuration files are in the `/srv/` directory, as usual.
The main directory is `/srv/salt/`, where all state files reside.
States are contained in `*.sls` files.
However, the states that are part of the standard Qubes distribution are mostly templates, and the configuration is done in pillars from formulas.

The formulas are in `/srv/formulas`, including stock formulas for domains in `/srv/formulas/dom0/virtual-machines-formula/qvm`, which are used by firstboot.

Because we use some code that is not found in older versions of Salt, there is a tool called `qubesctl` that should be run instead of `salt-call --local`.
It accepts all the same arguments as the vanilla tool.

## Configuring a VM's System from Dom0

Salt in Qubes can be used to configure VMs from dom0.
Simply set the VM name as the target minion name in the top file.
You can also use the `qubes` pillar module to select VMs with a particular property (see below).
If you do so, then you need to pass additional arguments to the `qubesctl` tool:

    usage: qubesctl [-h] [--show-output] [--force-color] [--skip-dom0]
                    [--targets TARGETS | --templates | --app | --all]
                    ...

    positional arguments:
      command            Salt command to execute (e.g., state.highstate)

    optional arguments:
      -h, --help         show this help message and exit
      --show-output      Show output of management commands
      --force-color      Force color output, allow control characters from VM,
                         UNSAFE
      --skip-dom0        Skip dom0 configuration (VM creation etc)
      --targets TARGETS  Comma-separated list of VMs to target
      --templates        Target all templates
      --app              Target all AppVMs
      --all              Target all non-disposable VMs (TemplateVMs and AppVMs)

To apply a state to all templates, call `qubesctl --templates state.highstate`.

The actual configuration is applied using `salt-ssh` (running over `qrexec` instead of `ssh`).
This means you don't need to install anything special in a VM you want to manage.
Additionally, for each target VM, `salt-ssh` is started from a temporary VM.
This way dom0 doesn't directly interact with potentially malicious target VMs; and in the case of a compromised Salt VM, because they are temporary, the compromise cannot spread from one VM to another.

Beginning with Qubes 4.0 and after [QSB #45], we implemented two changes:

1. Added the `management_dispvm` VM property, which specifies the DVM
   Template that should be used for management, such as Salt
   configuration. TemplateBasedVMs inherit this property from their
   parent TemplateVMs. If the value is not set explicitly, the default
   is taken from the global `management_dispvm` property. The
   VM-specific property is set with the `qvm-prefs` command, while the
   global property is set with the `qubes-prefs` command.

2. Created the `default-mgmt-dvm` DisposableVM Template, which is hidden from
   the menu (to avoid accidental use), has networking disabled, and has
   a black label (the same as TemplateVMs). This VM is set as the global
   `management_dispvm`. Keep in mind that this DVM Template has full control
   over the VMs it's used to manage.

## Writing Your Own Configurations

Let's start with a quick example:

    my new and shiny VM:
      qvm.present:
        - name: salt-test # can be omitted when same as ID
        - template: fedora-21
        - label: yellow
        - mem: 2000
        - vcpus: 4
        - flags:
          - proxy

It uses the Qubes-specific `qvm.present` state, which ensures that the domain is present (if not, it creates it).

* The `name` flag informs Salt that the domain should be named `salt-test` (not `my new and shiny VM`).
* The `template` flag informs Salt which template should be used for the domain.
* The `label` flag informs Salt what color the domain should be.
* The `mem` flag informs Salt how much RAM should be allocated to the domain.
* The `vcpus` flag informs Salt how many virtual CPUs should be allocated to the domain.
* The `proxy` flag informs Salt that the domain should be a ProxyVM.

As you will notice, the options are the same as (or very similar to) those used in `qvm-prefs`.

This should be put in `/srv/salt/my-new-vm.sls` or another `.sls` file.
A separate `*.top` file should also be written:

    base:
      dom0:
        - my-new-vm

**Note:** The third line should contain the name of the previous state file, without the `.sls` extension.

To enable the particular top file you should issue the command:

    $ qubesctl top.enable my-new-vm

To apply the state:

    $ qubesctl state.highstate

### Example of Configuring a VM's System from Dom0

Let's make sure that the `mc` package is installed in all templates.
As in the previous example, you need to create a state file (`/srv/salt/mc-everywhere.sls`):

    mc:
      pkg.installed: []

Then the appropriate top file (`/srv/salt/mc-everywhere.top`):

    base:
      qubes:type:template:
        - match: pillar
        - mc-everywhere

Now you need to enable the top file:

    $ qubesctl top.enable mc-everywhere

And apply the configuration:

    $ qubesctl --all state.highstate

## All Qubes-specific States

### `qvm.present`

As in the example above, it creates a domain and sets its properties.

### `qvm.prefs`

You can set properties of an existing domain:

    my preferences:
      qvm.prefs:
        - name: salt-test2
        - netvm: sys-firewall

***Note*** The `name:` option will not change the name of a domain; it is only used to match a domain to apply the configuration to.

### `qvm.service`

    services in my domain:
      qvm.service:
        - name: salt-test3
        - enable:
          - service1
          - service2
        - disable:
          - service3
          - service4
        - default:
          - service5

This enables, disables, or sets to default, services as in `qvm-service`.

### `qvm.running`

Ensures the specified domain is running:

    domain is running:
      qvm.running:
        - name: salt-test4

## Virtual Machine Formulae

You can use these formulae to download, install, and configure VMs in Qubes.
These formulae use pillar data to define default VM names and configuration details.
The default settings can be overridden in the pillar data located in:

```
/srv/pillar/base/qvm/init.sls
```

In dom0, you can apply a single state with `sudo qubesctl state.sls STATE_NAME`.
For example, `sudo qubesctl state.sls qvm.personal` will create a `personal` VM (if it does not already exist) with all its dependencies (TemplateVM, `sys-firewall`, and `sys-net`).

### Available states

#### `qvm.sys-net`

System NetVM

#### `qvm.sys-usb`

System UsbVM

#### `qvm.sys-net-with-usb`

System UsbVM bundled into NetVM. Do not enable together with `qvm.sys-usb`.

#### `qvm.usb-keyboard`

Enable USB keyboard together with USBVM, including for early system boot (for the LUKS passphrase).
This state implicitly creates a USBVM (the `qvm.sys-usb` state), if not already done.

#### `qvm.sys-firewall`

System firewall ProxyVM

#### `qvm.sys-whonix`

Whonix gateway ProxyVM

#### `qvm.personal`

Personal AppVM

#### `qvm.work`

Work AppVM

#### `qvm.untrusted`

Untrusted AppVM

#### `qvm.vault`

Vault AppVM with no NetVM enabled.

#### `qvm.default-dispvm`

Default DisposableVM template - fedora-26-dvm AppVM

#### `qvm.anon-whonix`

Whonix workstation AppVM.

#### `qvm.whonix-ws-dvm`

Whonix workstation AppVM for Whonix DisposableVMs.

#### `qvm.updates-via-whonix`

Set up UpdatesProxy to route all template updates through Tor (`sys-whonix` here).

#### `qvm.template-fedora-21`

Fedora 21 TemplateVM

#### `qvm.template-fedora-21-minimal`

Fedora 21 minimal TemplateVM

#### `qvm.template-debian-7`

Debian 7 (wheezy) TemplateVM

#### `qvm.template-debian-8`

Debian 8 (jessie) TemplateVM

#### `qvm.template-whonix-gw`

Whonix Gateway TemplateVM

#### `qvm.template-whonix-ws`

Whonix Workstation TemplateVM

## The `qubes` Pillar Module

Additional pillar data is available to ease targeting configurations (for example, all templates).

**Note:** This list is subject to change in future releases.

### `qubes:type`

VM type. Possible values:

- `admin` - Administration domain (`dom0`)
- `template` - Template VM
- `standalone` - Standalone VM
- `app` - Template-based AppVM

### `qubes:template`

The name of the template on which a given VM is based (if any).

### `qubes:netvm`

The VM that provides network access to the given VM.

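For example, this pillar data can be used in a top file to target every TemplateVM at once. A minimal sketch, assuming you want to apply the stock `update.qubes-vm` state (substitute your own state name):

```yaml
# /srv/salt/top.sls (hypothetical example)
base:
  # match on the qubes:type pillar key documented above
  'qubes:type:template':
    - match: pillar
    - update.qubes-vm
```

The `match: pillar` line tells Salt to interpret the target as a colon-delimited pillar key rather than a minion ID glob.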

## Debugging

The output for each VM is logged in `/var/log/qubes/mgmt-VM_NAME.log`.

If the log does not contain useful information:

1. Run `sudo qubesctl --skip-dom0 --target=VM_NAME state.highstate`.
2. While your VM is starting (yellow), press Ctrl-Z to suspend qubesctl.
3. Open a terminal in disp-mgmt-VM_NAME.
4. Look at /etc/qubes-rpc/qubes.SaltLinuxVM - this is what is executed in the management VM.
5. Get the last two lines:

       $ export PATH="/usr/lib/qubes-vm-connector/ssh-wrapper:$PATH"
       $ salt-ssh "$target_vm" $salt_command

   Adjust $target_vm (VM_NAME) and $salt_command (state.highstate).
6. Execute them, fix problems, repeat.


## Known Pitfalls

### Using fedora-24-minimal

The fedora-24-minimal template is missing the `sudo` package.
You can install it via:

    $ qvm-run -p -u root fedora-24-minimal-template 'dnf install -y sudo'

The `-p` flag will cause the execution to wait until the package is installed.
Having the `-p` flag is important when using a state with `cmd.run`.

### Disk Quota Exceeded (When Installing Templates)

If you install multiple templates, you may encounter this error.
The solution is to shut down the updateVM between each install:

    install template and shutdown updateVM:
      cmd.run:
        - name: sudo qubes-dom0-update -y fedora-24; qvm-shutdown {% raw %}{{ salt.cmd.run('qubes-prefs updateVM') }}{% endraw %}


## Further Reading

* [Salt documentation][salt-doc]
* [Salt states][salt-doc-states] ([files][salt-doc-states-file], [commands][salt-doc-states-cmd],
  [packages][salt-doc-states-pkg], [ordering][salt-doc-states-order])
* [Top files][salt-doc-top]
* [Jinja templates][jinja]
* [Qubes specific modules][salt-qvm-doc]
* [Formulas for default Qubes VMs][salt-virtual-machines-states]

[salt-doc]: https://docs.saltstack.com/en/latest/
[salt-qvm-doc]: https://github.com/QubesOS/qubes-mgmt-salt-dom0-qvm/blob/master/README.rst
[salt-virtual-machines-states]: https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/tree/master/qvm
[salt-doc-states]: https://docs.saltstack.com/en/latest/ref/states/all/
[salt-doc-states-file]: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html
[salt-doc-states-pkg]: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html
[salt-doc-states-cmd]: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.cmd.html
[salt-doc-states-order]: https://docs.saltstack.com/en/latest/ref/states/ordering.html
[salt-doc-states-req]: https://docs.saltstack.com/en/latest/ref/states/requisites.html
[salt-doc-states-ord]: https://docs.saltstack.com/en/latest/ref/states/ordering.html#the-order-option
[salt-doc-top]: https://docs.saltstack.com/en/latest/ref/states/top.html
[jinja]: http://jinja.pocoo.org/
[jinja-tmp]: http://jinja.pocoo.org/docs/2.9/templates/
[jinja-call-salt-functions]: https://docs.saltstack.com/en/getstarted/config/jinja.html#get-data-using-salt
[QSB #45]: /news/2018/12/03/qsb-45/

93
user/advanced-configuration/secondary-storage.md
Normal file

@@ -0,0 +1,93 @@
---
layout: doc
title: Secondary Storage
permalink: /doc/secondary-storage/
redirect_from:
- /en/doc/secondary-storage/
- /doc/SecondaryStorage/
- /wiki/SecondaryStorage/
---

Storing AppVMs on Secondary Drives
==================================

Suppose you have a fast but small primary SSD and a large but slow secondary HDD.
You want to store a subset of your AppVMs on the HDD.

## Instructions ##

Qubes 4.0 is more flexible than earlier versions about placing different VMs on different disks.
For example, you can keep templates on one disk and AppVMs on another, without messy symlinks.

These steps assume you have already created a separate [volume group](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_admin#VG_create) and [thin pool](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/thinly_provisioned_volume_creation) (not thin volume) for your HDD.
See also [this example](https://www.linux.com/blog/how-full-encrypt-your-linux-system-lvm-luks) if you would like to create an encrypted LVM pool (but note that you can use a single logical volume if preferred, and that you must pass the `-T` option to `lvcreate` to mark it as thin). You can find the commands from this example applied to Qubes at the bottom of this section.

First, collect some information in a dom0 terminal:

    sudo pvs
    sudo lvs

Take note of the VG and thin pool names for your HDD, then register it with Qubes:

    # <pool_name> is a freely chosen pool name
    # <vg_name> is the LVM volume group name
    # <thin_pool_name> is the LVM thin pool name
    qvm-pool --add <pool_name> lvm_thin -o volume_group=<vg_name>,thin_pool=<thin_pool_name>,revisions_to_keep=2

Now, you can create qubes in that pool:

    qvm-create -P <pool_name> --label red <vmname>

It isn't possible to directly migrate an existing qube to the new pool, but you can clone it there, then remove the old one:

    qvm-clone -P <pool_name> <sourceVMname> <cloneVMname>
    qvm-remove <sourceVMname>

If that was a template, or another qube referenced elsewhere (as a NetVM, for example), you will need to adjust those references manually after the move.
For example:

    qvm-prefs <appvmname_based_on_old_template> template <new_template_name>

In theory, you can still use file-based disk images (the "file" pool driver), but it lacks some features; for example, you won't be able to do backups without shutting down the qube.
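The clone-and-remove migration above can be wrapped in a small helper. This is a hypothetical dry-run sketch (the function name and the `-tmp` clone suffix are invented, not part of Qubes): it only prints the `qvm-clone`/`qvm-remove` commands so you can review them before running them in a dom0 terminal.

```shell
# Hypothetical dry-run helper: prints the commands that would migrate a qube
# to a new pool instead of executing them. Review, then paste into dom0.
move_qube() {
  local pool="$1" vm="$2"
  echo "qvm-clone -P ${pool} ${vm} ${vm}-tmp"
  echo "qvm-remove ${vm}"
}

move_qube poolhd0_qubes untrusted
# prints:
#   qvm-clone -P poolhd0_qubes untrusted untrusted-tmp
#   qvm-remove untrusted
```

Remember that the clone keeps its new name; adjust any references to the old name afterwards, as described above.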

### Example HDD setup ###

Assuming the secondary hard disk is at /dev/sdb (it will be completely erased), you can set it up for encryption from a dom0 terminal (use the same passphrase as the main Qubes disk to avoid a second password prompt at boot):

    sudo cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sdb
    sudo blkid /dev/sdb

Note the device's UUID (in this example "b209..."); we will use it as the luks name for auto-mounting at boot:

    sudo nano /etc/crypttab

Add this line to crypttab (replacing both "b209..." occurrences with your device's UUID from blkid):

    luks-b20975aa-8318-433d-8508-6c23982c6cde UUID=b20975aa-8318-433d-8508-6c23982c6cde none

Reboot the computer so the new luks device appears at /dev/mapper/luks-b209..., then create its pool from a dom0 terminal (substitute the b209... UUIDs with yours).

First, create the physical volume:

    sudo pvcreate /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde

Then create the LVM volume group; we will use "qubes" as the `<vg_name>`:

    sudo vgcreate qubes /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde

And then use "poolhd0" as the `<thin_pool_name>` (LVM thin pool name):

    sudo lvcreate -T -n poolhd0 -l +100%FREE qubes

Finally, tell Qubes to add a new pool on the just-created thin pool:

    qvm-pool --add poolhd0_qubes lvm_thin -o volume_group=qubes,thin_pool=poolhd0,revisions_to_keep=2

By default, VMs will be created on the main Qubes disk (i.e., a small SSD). To create them on this secondary HDD instead, run the following in a dom0 terminal:

    qvm-create -P poolhd0_qubes --label red untrusted-hdd


[Qubes Backup]: /doc/BackupRestore/
[TemplateVM]: /doc/Templates/

208
user/advanced-configuration/usb-qubes.md
Normal file

@@ -0,0 +1,208 @@
---
layout: doc
title: USB Qubes
permalink: /doc/usb-qubes/
redirect_from:
- /doc/usbvm/
- /en/doc/usbvm/
- /doc/USBVM/
- /wiki/USBVM/
- /doc/sys-usb/
---

# USB Qubes #

If you enabled the creation of a USB qube during installation, your system should already be set up, and none of the steps mentioned here should be necessary (unless you want to [remove your USB-qube]). If for any reason no USB qube was created during installation, this guide will show you how to create one.

**Caution:** If you want to use a USB keyboard, beware of the possibility of locking yourself out! To avoid this problem, [enable your keyboard for login]!


## Creating and Using a USB qube ##

**Warning:** This has the potential to prevent you from connecting a keyboard to Qubes via USB.
There are problems with doing this in an encrypted install (LUKS).
If you find yourself in this situation, see this [issue][2270-comm23].

A USB qube acts as a secure handler for potentially malicious USB devices, preventing them from coming into contact with dom0 (which could otherwise be fatal to the security of the whole system). It thereby mitigates some of the [security implications] of using USB devices.
With a USB qube, every time you connect an untrusted USB drive to a USB port managed by that USB controller, you will have to attach it to the qube in which you wish to use it (if different from the USB qube itself), either by using Qubes VM Manager or the command line (see instructions above).
The USB controller may be assigned on the **Devices** tab of a qube's settings page in Qubes VM Manager or by using the [qvm-pci][PCI Devices] command.
For guidance on finding the correct USB controller, see the [corresponding passage on PCI devices][usb-controller].
You can create a USB qube using the management stack by performing the following steps as root in dom0:

    sudo qubesctl state.sls qvm.sys-usb

Alternatively, you can create a USB qube manually as follows:

1. Read the [PCI Devices] page to learn how to list and identify your USB controllers.
   Carefully check whether you have a USB controller that would be appropriate to assign to a USB qube.
   Note that it should be free of input devices, programmable devices, and any other devices that must be directly available to dom0.
   If you find a free controller, note its name and proceed to step 2.
2. Create a new qube.
   Give it an appropriate name and color label (recommended: `sys-usb`, red).
3. In the qube's settings, go to the "Devices" tab.
   Find the USB controller that you identified in step 1 in the "Available" list.
   Move it to the "Selected" list by highlighting it and clicking the single arrow `>` button.

   **Caution:** By assigning a USB controller to a USB qube, it will no longer be available to dom0.
   This can make your system unusable if, for example, you have only one USB controller and you are running Qubes off of a USB drive.

4. Click `OK`.
   Restart the qube.
5. Recommended: Check the box on the "Basic" tab which says "Start VM automatically on boot".
   (This will help to mitigate attacks in which someone forces your system to reboot, then plugs in a malicious USB device.)

If the USB qube will not start, please have a look at the [faq].


## Enable a USB keyboard for login ##

**Caution:** Please carefully read the [Security Warning about USB Input Devices] before proceeding!

If you use a USB keyboard, automatic USB qube creation during installation is disabled.
Additional steps are required to avoid locking yourself out of the system.
Those steps are not performed by default because of the risk explained in the [Security Warning about USB Input Devices].

### Automatic setup ###

To allow USB keyboard usage (including early boot, for the LUKS passphrase), make sure you have the latest `qubes-mgmt-salt-dom0-virtual-machines` package (simply [install dom0 updates]) and execute the following in dom0:

    sudo qubesctl state.sls qvm.usb-keyboard

The above command will take care of all required configuration, including creating a USB qube if one is not present.
Note that it will expose dom0 to USB devices while you enter the LUKS passphrase.
Users are advised to physically disconnect other devices from the system for that time, to minimize the risk.

To undo these changes, please follow the section on [**Removing a USB qube**][remove your USB-qube]!

If you wish to perform only a subset of this configuration (for example, not enabling the USB keyboard during boot), see the manual instructions below.


### Manual setup ###

In order to use a USB keyboard, you must first attach it to a USB qube, then give that qube permission to pass keyboard input to dom0.
Edit the `qubes.InputKeyboard` policy file in dom0, which is located here:

    /etc/qubes-rpc/policy/qubes.InputKeyboard

Add a line like this one to the top of the file:

    sys-usb dom0 allow

(Change `sys-usb` to your desired USB qube.)
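The edit above can also be scripted. This is a hypothetical sketch (the helper name is invented, and the path assumes a default Qubes R4.0 install); it prepends the rule only if it is not already present, so it is safe to re-run. Run it as root in dom0 and review the file afterwards.

```shell
# Hypothetical helper: prepend a qrexec policy rule unless it already exists.
# Example (as root in dom0):
#   prepend_policy_rule /etc/qubes-rpc/policy/qubes.InputKeyboard "sys-usb dom0 allow"
prepend_policy_rule() {
  local file="$1" rule="$2"
  # -x matches whole lines, -F treats the rule as a fixed string
  grep -qxF -- "$rule" "$file" || sed -i "1i $rule" "$file"
}
```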

You can now use your USB keyboard to log in and for LUKS decryption during boot.

For a confirmation dialog each time the USB keyboard is connected, *which will effectively disable your USB keyboard for login and LUKS decryption*, change this line to:

    sys-usb dom0 ask,default_target=dom0

*Don't do that if you want to unlock your device with a USB keyboard!*

Additionally, using a USB keyboard to enter the LUKS passphrase is incompatible with [hiding USB controllers from dom0].
You need to revert that procedure (remove the `rd.qubes.hide_all_usb` option from the files mentioned there) and employ an alternative protection during system boot: disconnect other devices during startup.


## Auto Enabling A USB Mouse ##

**Caution:** Please carefully read the [Security Warning about USB Input Devices] before proceeding.

Handling a USB mouse isn't as critical as handling a keyboard, since you can log in using the keyboard and accept the popup dialog using your keyboard alone.

If you want to attach the USB mouse automatically anyway, you have to edit the `qubes.InputMouse` policy file in dom0, located at:

    /etc/qubes-rpc/policy/qubes.InputMouse

The first line should read similar to:

    sys-usb dom0 ask,default_target=dom0

which will ask for confirmation each time a USB mouse is attached. If the file is empty or does not exist, something may have gone wrong during setup; try rerunning `qubesctl state.sls qvm.sys-usb` in dom0.

In case you are absolutely sure you do not want to confirm mouse access from `sys-usb` to `dom0`, you may add the following line to the top of the file:

    sys-usb dom0 allow

(Change `sys-usb` to your desired USB qube.)


## How to hide all USB controllers from dom0 ##

(Note: `rd.qubes.hide_all_usb` is set automatically if you opt to create a USB qube during installation.
This also occurs automatically if you choose to [create a USB qube] using the `qubesctl` method, which is the first pair of steps in the linked section.)

**Warning:** A USB keyboard cannot be used to type the disk passphrase if USB controllers were hidden from dom0.
Before hiding USB controllers, make sure your laptop keyboard is not internally connected via USB (by checking the output of the `lsusb` command) or that you have a PS/2 keyboard at hand (if using a desktop PC).
Failure to do so will render your system unusable.

If you create a USB qube manually, there will be a brief period of time during the boot process when dom0 will be exposed to your USB controllers (and any attached devices).
This is a potential security risk, since even brief exposure to a malicious USB device could result in dom0 being compromised.
There are two approaches to this problem:

1. Physically disconnect all USB devices whenever you reboot the host.
2. Hide (i.e., blacklist) all USB controllers from dom0.

**Warning:** If you use a USB [AEM] device, do not use the second option.
Using a USB AEM device requires dom0 to have access to the USB controller to which your USB AEM device is attached.
If dom0 cannot read your USB AEM device, AEM will hang.

The procedure to hide all USB controllers from dom0 is as follows:

* GRUB2

  1. Open the file `/etc/default/grub` in dom0.
  2. Find the line that begins with `GRUB_CMDLINE_LINUX`.
  3. Add `rd.qubes.hide_all_usb` to that line.
  4. Save and close the file.
  5. Run the command `grub2-mkconfig -o /boot/grub2/grub.cfg` in dom0.
  6. Reboot.

* EFI

  1. Open the file `/boot/efi/EFI/qubes/xen.cfg` in dom0.
  2. Find the lines that begin with `kernel=`. There may be more than one.
  3. Add `rd.qubes.hide_all_usb` to those lines.
  4. Save and close the file.
  5. Reboot.

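The GRUB2 steps above (edit `GRUB_CMDLINE_LINUX`, then regenerate the config) can be sketched as a script. This is a hypothetical, idempotent helper, assuming the stock Fedora-style quoting of `GRUB_CMDLINE_LINUX`; review it before use rather than treating it as the official procedure.

```shell
# Hypothetical sketch: append rd.qubes.hide_all_usb to GRUB_CMDLINE_LINUX in a
# grub defaults file, unless it is already there. In dom0 you would run it
# against /etc/default/grub and then: grub2-mkconfig -o /boot/grub2/grub.cfg
hide_usb_in_grub() {
  local grub_file="$1"
  if ! grep -q 'rd.qubes.hide_all_usb' "$grub_file"; then
    # insert the option just before the closing quote of the value
    sed -i 's/^\(GRUB_CMDLINE_LINUX="[^"]*\)"/\1 rd.qubes.hide_all_usb"/' "$grub_file"
  fi
}
```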

## Removing a USB qube ##

**Warning:** This procedure will result in your USB controller(s) being attached directly to dom0.

* GRUB2

  1. Shut down the USB qube.
  2. In Qubes Manager, right-click on the USB qube and select "Remove VM."
  3. Open the file `/etc/default/grub` in dom0.
  4. Find the line(s) that begin with `GRUB_CMDLINE_LINUX`.
  5. If `rd.qubes.hide_all_usb` appears anywhere in those lines, remove it.
  6. Save and close the file.
  7. Run the command `grub2-mkconfig -o /boot/grub2/grub.cfg` in dom0.
  8. Reboot.

* EFI

  1. Shut down the USB qube.
  2. In Qubes Manager, right-click on the USB qube and select "Remove VM."
  3. Open the file `/boot/efi/EFI/qubes/xen.cfg` in dom0.
  4. Find the line(s) that begin with `kernel=`.
  5. If `rd.qubes.hide_all_usb` appears anywhere in those lines, remove it.
  6. Save and close the file.
  7. Reboot.


[remove your USB-qube]: #removing-a-usb-qube
[security implications]: /doc/device-handling-security/#usb-security
[enable your keyboard for login]: #enable-a-usb-keyboard-for-login
[2270-comm23]: https://github.com/QubesOS/qubes-issues/issues/2270#issuecomment-242900312
[PCI Devices]: /doc/pci-devices/
[usb-controller]: /doc/usb-devices/#finding-the-right-usb-controller
[faq]: /faq/#i-created-a-usbvm-and-assigned-usb-controllers-to-it-now-the-usbvm-wont-boot
[Security Warning about USB Input Devices]: /doc/device-handling-security/#security-warning-on-usb-input-devices
[install dom0 updates]: /doc/software-update-dom0/#how-to-update-dom0
[hiding USB controllers from dom0]: #how-to-hide-all-usb-controllers-from-dom0
[AEM]: /doc/anti-evil-maid/
[create a USB qube]: #creating-and-using-a-usb-qube