Merge master into new_templates

Anastasia Cotorobai 2019-10-28 20:28:50 +01:00
commit da421cf4f8
No known key found for this signature in database
GPG Key ID: 371E4002CCE13221
74 changed files with 2818 additions and 3295 deletions


@ -6,9 +6,10 @@ permalink: /doc/mount-lvm-image/
# How to mount LVM image
You want to read your LVM image (ex: you did some errors and can't start the VM ).
You want to read your LVM image (e.g., there is a problem where you can't start any VMs except dom0).
1: make the image available for qubesdb.
From dom0 terminal:
```bash
# Example: /dev/qubes_dom0/vm-debian-9-tmp-root


@ -246,7 +246,7 @@ When making contributions, please try to observe the following style conventions
* Use hanging indentations
where appropriate.
* Use underline headings (`=====` and `-----`) if possible.
If this is not possible, use Atx-style headings on both the left and right sides (`### H3 ###`).
If this is not possible, use Atx-style headings: (`### H3 ###`).
* When writing code blocks, use [syntax highlighting](https://github.github.com/gfm/#info-string) where [possible](https://github.com/jneen/rouge/wiki/List-of-supported-languages-and-lexers) and use `[...]` for anything omitted.
* When providing command line examples:
* Tell the reader where to open a terminal (dom0 or a specific domU), and show the command along with its output (if any) in a code block, e.g.:


@ -73,5 +73,5 @@ the instructions above. This will be a time-consuming process.
[upgrade-r3.1]: /doc/releases/3.1/release-notes/#upgrading
[backup]: /doc/backup-restore/
[qrexec-argument]: https://github.com/QubesOS/qubes-issues/issues/1876
[qrexec-doc]: /doc/qrexec3/#service-argument-in-policy
[qrexec-doc]: /doc/qrexec/#service-policies-with-arguments
[github-release-notes]: https://github.com/QubesOS/qubes-issues/issues?q=is%3Aissue+sort%3Aupdated-desc+milestone%3A%22Release+3.2%22+label%3Arelease-notes+is%3Aclosed


@ -115,7 +115,7 @@ We also provide [detailed instruction][upgrade-to-r4.0] for this procedure.
[qrexec-proxy]: https://github.com/QubesOS/qubes-issues/issues/1854
[qrexec-policy-keywords]: https://github.com/QubesOS/qubes-issues/issues/865
[qrexec-confirm]: https://github.com/QubesOS/qubes-issues/issues/910
[qrexec-doc]: /doc/qrexec3/#extra-keywords-available-in-qubes-40-and-later
[qrexec-doc]: /doc/qrexec/#specifying-vms-tags-types-targets-etc
[storage]: https://github.com/QubesOS/qubes-issues/issues/1842
[vm-interface]: /doc/vm-interface/
[admin-api]: /news/2017/06/27/qubes-admin-api/


@ -0,0 +1,162 @@
---
layout: doc
title: Qubes RPC internals
permalink: /doc/qrexec-internals/
redirect_from:
- /doc/qrexec3-implementation/
- /en/doc/qrexec3-implementation/
- /doc/Qrexec3Implementation/
- /wiki/Qrexec3Implementation/
---
# Qubes RPC internals
(*This page details the current implementation of qrexec (qrexec3).
A [general introduction](/doc/qrexec/) to qrexec is also available.
For the implementation of qrexec2, see [here](/doc/qrexec2/#qubes-rpc-internals).*)
The qrexec framework consists of a number of processes communicating with each other using a common IPC protocol (described in detail below).
Components residing in the same domain (`qrexec-client-vm` to `qrexec-agent`, `qrexec-client` to `qrexec-daemon`) use pipes as the underlying transport medium, while components in separate domains (`qrexec-daemon` to `qrexec-agent`, data channel between `qrexec-agent`s) use vchan link.
Because of [vchan limitation](https://github.com/qubesos/qubes-issues/issues/951), it is not possible to establish qrexec connection back to the source domain.
## Dom0 tools implementation
* `/usr/lib/qubes/qrexec-daemon`: One instance is required for every active domain. Responsible for:
* Handling execution and service requests from **dom0** (source: `qrexec-client`).
* Handling service requests from the associated domain (source: `qrexec-client-vm`, then `qrexec-agent`).
* Command line: `qrexec-daemon domain-id domain-name [default user]`
* `domain-id`: Numeric Qubes ID assigned to the associated domain.
* `domain-name`: Associated domain name.
* `default user`: Optional. If passed, `qrexec-daemon` uses this user as default for all execution requests that don't specify one.
* `/usr/lib/qubes/qrexec-policy`: Internal program used to evaluate the RPC policy and decide whether an RPC call should be allowed.
* `/usr/lib/qubes/qrexec-client`: Used to pass execution and service requests to `qrexec-daemon`. Command line parameters:
* `-d target-domain-name`: Specifies the target for the execution/service request.
* `-l local-program`: Optional. If present, `local-program` is executed and its stdout/stdin are used when sending/receiving data to/from the remote peer.
* `-e`: Optional. If present, stdout/stdin are not connected to the remote peer. Only process creation status code is received.
* `-c <request-id,src-domain-name,src-domain-id>`: Used by `qrexec-policy` to connect a VM-VM service request. Details are described below in the service example.
* `cmdline`: Command line to pass to `qrexec-daemon` as the execution/service request. The service request format is described below in the service example.
**Note:** None of the above tools are designed to be used by users directly.
## VM tools implementation
* `qrexec-agent`: One instance runs in each active domain. Responsible for:
* Handling service requests from `qrexec-client-vm` and passing them to connected `qrexec-daemon` in dom0.
* Executing associated `qrexec-daemon` execution/service requests.
* Command line parameters: none.
* `qrexec-client-vm`: Runs in an active domain. Used to pass service requests to `qrexec-agent`.
* Command line: `qrexec-client-vm target-domain-name service-name local-program [local program arguments]`
* `target-domain-name`: Target domain for the service request. Source is the current domain.
* `service-name`: Requested service name.
* `local-program`: `local-program` is executed locally and its stdin/stdout are connected to the remote service endpoint.
## Qrexec protocol details
The qrexec protocol is message-based.
All messages share a common header followed by an optional data packet.
```c
/* uniform for all peers, data type depends on message type */
struct msg_header {
    uint32_t type;  /* message type */
    uint32_t len;   /* data length */
};
```
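Purely for illustration (this is not one of the qrexec tools), the header-plus-payload framing can be reproduced in plain shell: two little-endian `uint32` fields followed by the data. The function name and the message-type value `400` below are invented:

```shell
# Sketch: emit one qrexec-style message: an 8-byte header
# (uint32 type, uint32 len, both little-endian) followed by
# len bytes of payload.
emit_message() {
    type=$1; data=$2
    len=${#data}
    for v in "$type" "$len"; do
        # pack one uint32 as 4 little-endian bytes
        printf "$(printf '\\%03o\\%03o\\%03o\\%03o' \
            $(( v         & 255)) $(((v >> 8)  & 255)) \
            $(((v >> 16)  & 255)) $(((v >> 24) & 255)))"
    done
    printf '%s' "$data"
}
emit_message 400 hi | wc -c   # 8 header bytes + 2 payload bytes
```

Note that `msg_header.len` counts only the payload, so a message with an empty payload is exactly 8 bytes on the wire.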
When two peers establish connection, the server sends `MSG_HELLO` followed by `peer_info` struct:
```c
struct peer_info {
    uint32_t version;  /* qrexec protocol version */
};
```
The client should then reply with its own `MSG_HELLO` and `peer_info`.
The lower of the two versions determines the protocol used for this connection.
If either side does not support this version, the connection is closed.
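The negotiation rule can be sketched as follows; the helper function and the minimum supported version used here are assumptions for illustration, not part of the actual implementation:

```shell
# Sketch of the handshake outcome: both peers advertise a protocol
# version and the lower of the two is used; if that is below the
# minimum a side supports, the connection is closed.
MIN_SUPPORTED=2   # assumed value, for illustration only
negotiate_version() {
    local_ver=$1; remote_ver=$2
    if [ "$local_ver" -le "$remote_ver" ]; then
        agreed=$local_ver
    else
        agreed=$remote_ver
    fi
    if [ "$agreed" -lt "$MIN_SUPPORTED" ]; then
        echo "unsupported"
        return 1
    fi
    echo "$agreed"
}
negotiate_version 3 2   # prints: 2
```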
Details of all possible use cases and the messages involved are described below.
### dom0: request execution of `some_command` in domX and pass stdin/stdout
- **dom0**: `qrexec-client` is invoked in **dom0** as follows:
```shell
qrexec-client -d domX [-l local_program] user:some_command
```
`user` may be substituted with the literal `DEFAULT`. In that case, the default Qubes user will be used to execute `some_command`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable to **domX**.
- **dom0**: If `local_program` is set, `qrexec-client` executes it and uses that child's stdin/stdout in place of its own when exchanging data with `qrexec-agent` later.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_HELLO` header followed by `peer_info` to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by `peer_info` to `qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to `qrexec-daemon`.
```c
/* variable size */
struct exec_params {
    uint32_t connect_domain;  /* target domain id */
    uint32_t connect_port;    /* target vchan port for i/o exchange */
    char cmdline[0];          /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
```
In this case, `connect_domain` and `connect_port` are set to 0.
- **dom0**: `qrexec-daemon` replies to `qrexec-client` with `MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline` field. `connect_domain` is set to Qubes ID of **domX** and `connect_port` is set to a vchan port allocated by `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to the associated **domX** `qrexec-agent` over vchan. `connect_domain` is set to 0 (**dom0**), `connect_port` is the same as sent to `qrexec-client`. `cmdline` is unchanged except that the literal `DEFAULT` is replaced with actual user name, if present.
- **dom0**: `qrexec-client` disconnects from `qrexec-daemon`.
- **dom0**: `qrexec-client` starts a vchan server using the details received from `qrexec-daemon` and waits for connection from **domX**'s `qrexec-agent`.
- **domX**: `qrexec-agent` receives `MSG_EXEC_CMDLINE` header followed by `exec_params` from `qrexec-daemon` over vchan.
- **domX**: `qrexec-agent` connects to `qrexec-client` over vchan using the details from `exec_params`.
- **domX**: `qrexec-agent` executes `some_command` in **domX** and connects the child's stdin/stdout to the data vchan. If the process creation fails, `qrexec-agent` sends `MSG_DATA_EXIT_CODE` to `qrexec-client` followed by the status code (**int**) and disconnects from the data vchan.
- Data read from `some_command`'s stdout is sent to the data vchan using `MSG_DATA_STDOUT` by `qrexec-agent`. `qrexec-client` passes data received as `MSG_DATA_STDOUT` to its own stdout (or to `local_program`'s stdin if used).
- `qrexec-client` sends data read from local stdin (or `local_program`'s stdout if used) to `qrexec-agent` over the data vchan using `MSG_DATA_STDIN`. `qrexec-agent` passes data received as `MSG_DATA_STDIN` to `some_command`'s stdin.
- `MSG_DATA_STDOUT` or `MSG_DATA_STDIN` with data `len` field set to 0 in `msg_header` is an EOF marker. Peer receiving such message should close the associated input/output pipe.
- When `some_command` terminates, **domX**'s `qrexec-agent` sends `MSG_DATA_EXIT_CODE` header to `qrexec-client` followed by the exit code (**int**). `qrexec-agent` then disconnects from the data vchan.
### domY: invoke execution of qubes service `qubes.SomeRpc` in domX and pass stdin/stdout
- **domY**: `qrexec-client-vm` is invoked as follows:
```shell
qrexec-client-vm domX qubes.SomeRpc local_program [params]
```
- **domY**: `qrexec-client-vm` connects to `qrexec-agent` (via local socket/named pipe).
- **domY**: `qrexec-client-vm` sends `trigger_service_params` data to `qrexec-agent` (without filling the `request_id` field):
```c
struct trigger_service_params {
    char service_name[64];
    char target_domain[32];
    struct service_params request_id;  /* service request id */
};

struct service_params {
    char ident[32];
};
```
- **domY**: `qrexec-agent` allocates a locally-unique (for this domain) `request_id` (let's say `13`) and fills it in the `trigger_service_params` struct received from `qrexec-client-vm`.
- **domY**: `qrexec-agent` sends `MSG_TRIGGER_SERVICE` header followed by `trigger_service_params` to `qrexec-daemon` in **dom0** via vchan.
- **dom0**: **domY**'s `qrexec-daemon` executes `qrexec-policy`: `qrexec-policy domY_id domY domX qubes.SomeRpc 13`.
- **dom0**: `qrexec-policy` evaluates whether the RPC call should be allowed or denied. It returns `0` if the action is allowed and `1` if it is denied.
- **dom0**: **domY**'s `qrexec-daemon` checks the exit code of `qrexec-policy`.
- If `qrexec-policy` returned **not** `0`: **domY**'s `qrexec-daemon` sends `MSG_SERVICE_REFUSED` header followed by `service_params` to **domY**'s `qrexec-agent`. `service_params.ident` is identical to the one received. **domY**'s `qrexec-agent` disconnects its `qrexec-client-vm` and RPC processing is finished.
- If `qrexec-policy` returned `0`, RPC processing continues.
- **dom0**: If `qrexec-policy` allowed the RPC, it executes `qrexec-client -d domX -c 13,domY,domY_id user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable to **domX**.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_HELLO` header followed by `peer_info` to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by `peer_info` to **domX**'s `qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to **domX**'s `qrexec-daemon`:
```c
/* variable size */
struct exec_params {
    uint32_t connect_domain;  /* target domain id */
    uint32_t connect_port;    /* target vchan port for i/o exchange */
    char cmdline[0];          /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
```
In this case, `connect_domain` is set to the ID of **domY** (from the `-c` parameter) and `connect_port` is set to 0. The `cmdline` field contains the RPC command to execute, in this case `user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: **domX**'s `qrexec-daemon` replies to `qrexec-client` with `MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline` field. `connect_domain` is set to Qubes ID of **domX** and `connect_port` is set to a vchan port allocated by **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to **domX**'s `qrexec-agent`. `connect_domain` and `connect_port` fields are the same as in the step above. `cmdline` is set to the one received from `qrexec-client`, in this case `user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` disconnects from **domX**'s `qrexec-daemon` after receiving connection details.
- **dom0**: `qrexec-client` connects to **domY**'s `qrexec-daemon` and exchanges `MSG_HELLO` as usual.
- **dom0**: `qrexec-client` sends `MSG_SERVICE_CONNECT` header followed by `exec_params` to **domY**'s `qrexec-daemon`. `connect_domain` is set to ID of **domX** (received from **domX**'s `qrexec-daemon`) and `connect_port` is the one received as well. `cmdline` is set to request ID (`13` in this case).
- **dom0**: **domY**'s `qrexec-daemon` sends `MSG_SERVICE_CONNECT` header followed by `exec_params` to **domY**'s `qrexec-agent`. Data fields are unchanged from the step above.
- **domY**: `qrexec-agent` starts a vchan server on the port received in the step above. It acts as a `qrexec-client` in this case because this is a VM-VM connection.
- **domX**: `qrexec-agent` connects to the vchan server of **domY**'s `qrexec-agent` (connection details were received before from **domX**'s `qrexec-daemon`).
- After that, connection follows the flow of the previous example (dom0-VM).


@ -0,0 +1,340 @@
---
layout: doc
title: Qrexec
permalink: /doc/qrexec/
redirect_from:
- /en/doc/qrexec3/
- /doc/Qrexec3/
- /doc/qrexec3/
- /wiki/Qrexec3/
- /doc/qrexec/
- /en/doc/qrexec/
- /doc/Qrexec/
- /wiki/Qrexec/
---
# Qrexec: secure communication across domains
(*This page is about qrexec v3. For qrexec v2, see [here](/doc/qrexec2/).*)
The **qrexec framework** is used by core Qubes components to implement communication between domains.
Qubes domains are strictly isolated by design.
However, the OS needs a mechanism to allow the administrative domain (dom0) to force command execution in another domain (VM).
For instance, when a user selects an application from the KDE menu, it should start in the selected VM.
Also, it is often useful to be able to pass stdin/stdout/stderr from an application running in a VM to dom0 (and the other way around).
(For example, so that a VM can notify dom0 that there are updates available for it).
By default, Qubes allows VMs to initiate such communications in specific circumstances.
The qrexec framework generalizes this process by providing a remote procedure call (RPC) protocol for the Qubes architecture.
It allows users and developers to use and design secure inter-VM tools.
## Qrexec basics: architecture and examples
Qrexec is built on top of *vchan*, a Xen library providing data links between VMs.
During domain startup, a process named `qrexec-daemon` is started in dom0, and a process named `qrexec-agent` is started in the VM.
They are connected over a **vchan** channel.
`qrexec-daemon` listens for connections from a dom0 utility named `qrexec-client`.
Let's say we want to start a process (call it `VMprocess`) in a VM (`someVM`).
Typically, the first thing that a `qrexec-client` instance does is to send a request to the `qrexec-daemon`, which in turn relays it to `qrexec-agent` running in `someVM`.
`qrexec-daemon` assigns unique vchan connection details and sends them to both `qrexec-client` (in dom0) and `qrexec-agent` (in `someVM`).
`qrexec-client` starts a vchan server, which `qrexec-agent` then connects to.
Once this channel is established, stdin/stdout/stderr from the VMprocess is passed between `qrexec-agent` and the `qrexec-client` process.
![qrexec basics diagram](/attachment/wiki/qrexec3/qrexec3-basics.png)
The `qrexec-client` command is used to make connections to VMs from dom0.
For example, the following command
```
$ qrexec-client -e -d someVM user:'touch hello-world.txt'
```
creates an empty file called `hello-world.txt` in the home folder of `someVM`.
The string before the colon specifies what user to run the command as.
The `-e` flag tells `qrexec-client` to exit immediately after sending the execution request and receiving a status code from `qrexec-agent` (whether the process creation succeeded).
With this option, no further data is passed between the domains.
By contrast, the following command demonstrates an open channel between dom0 and someVM (in this case, a remote shell):
```
$ qrexec-client -d someVM user:bash
```
The `qvm-run` command is heavily based on `qrexec-client`.
It also takes care of additional activities, e.g. starting the domain if it is not up yet and starting the GUI daemon.
Thus, it is usually more convenient to use `qvm-run`.
There can be an almost arbitrary number of `qrexec-client` processes for a given domain.
The limiting factor is the number of available vchan channels, which depends on the underlying hypervisor, as well as the domain's OS.
## Qubes RPC services
Some common tasks (like copying files between VMs) have an RPC-like structure: a process in one VM (say, the file sender) needs to invoke and send/receive data to some process in other VM (say, the file receiver).
The Qubes RPC framework was created to securely facilitate a range of such actions.
Obviously, inter-VM communication must be tightly controlled to prevent one VM from taking control of another, possibly more privileged, VM.
Therefore, the design decision was made to pass all control communication via dom0, which can enforce proper authorization.
Then, it is natural to reuse the already-existing qrexec framework.
Also, note that bare qrexec provides `VM <-> dom0` connectivity, but the command execution is always initiated by dom0.
There are cases when a VM needs to invoke and send data to a command in dom0 (e.g., to pass information on newly installed `.desktop` files).
Thus, the framework allows dom0 to be the RPC target as well.
Thanks to the framework, RPC programs are very simple -- both RPC client and server just use their stdin/stdout to pass data.
The framework does all the inner work to connect these processes to each other via `qrexec-daemon` and `qrexec-agent`.
Additionally, disposable VMs are tightly integrated -- RPC to a DisposableVM is identical to RPC to a normal domain, all one needs is to pass `@dispvm` as the remote domain name.
## Qubes RPC administration
### Policy files
The dom0 directory `/etc/qubes-rpc/policy/` contains a file for each available RPC action that a VM might call.
Together the contents of these files make up the RPC access policy database.
Policies are defined in lines with the following format:
```
srcvm destvm (allow|deny|ask[,default_target=default_target_VM])[,user=user_to_run_as][,target=VM_to_redirect_to]
```
You can specify srcvm and destvm by name or by one of the reserved keywords such as `@anyvm`, `@dispvm`, or `dom0`.
(Of these three, only the `@anyvm` keyword makes sense in the srcvm field.
Service calls from dom0 are currently always allowed, and `@dispvm` means "new VM created for this particular request," so it is never the source of a request.)
Other methods using *tags* and *types* are also available (and discussed below).
Whenever an RPC request for an action is received, dom0 checks the first matching line of the relevant file in `/etc/qubes-rpc/policy/` to determine access:
whether to allow the request, what VM to redirect the execution to, and what user account the program should run under.
Note that if the request is redirected (`target=` parameter), the policy action remains the same -- even if there is another rule that would otherwise deny such a request.
If no policy rule is matched, the action is denied.
If the policy file does not exist, the user is prompted to create one.
If there is still no policy file after prompting, the action is denied.
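For example, in a hypothetical policy file like the one below (the VM names are invented for illustration), a request from `work` to `vault` is denied by the first line, even though the second line would otherwise trigger a prompt:

```
work vault deny
work @anyvm ask
@anyvm @anyvm deny
```

Only the first matching line counts; later lines are never consulted for that request.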
In the target VM, the file `/etc/qubes-rpc/RPC_ACTION_NAME` must exist, containing the file name of the program that will be invoked, or being that program itself -- in which case it must have executable permission set (`chmod +x`).
### Making an RPC call
From outside of dom0, RPC calls take the following form:
```
$ qrexec-client-vm target_vm_name RPC_ACTION_NAME rpc_client_path client arguments
```
For example:
```
$ qrexec-client-vm work qubes.StartApp+firefox
```
Note that only stdin/stdout is passed between RPC server and client -- notably, no command line arguments are passed.
By default, stderr of client and server is logged in the syslog/journald of the VM where the process is running.
It is also possible to call a service without a specific client program -- in which case the server's stdin/stdout will be connected to the caller's terminal:
```
$ qrexec-client-vm target_vm_name RPC_ACTION_NAME
```
### Specifying VMs: tags, types, targets, etc.
There are several methods for specifying source/target VMs in RPC policies.
- `@tag:some-tag` - meaning a VM with tag `some-tag`
- `@type:type` - meaning a VM of `type` (like `AppVM`, `TemplateVM`, etc.)
The target VM can also be specified as `@default`, which matches the case when the calling VM didn't specify any particular target (either by using the `@default` target or an empty target).
For DisposableVMs, `@dispvm:DISP_VM` is very similar to `@dispvm` but forces using a particular VM (`DISP_VM`) as a base VM to be started as DisposableVM.
For example:
```
anon-whonix @dispvm:anon-whonix-dvm allow
```
Adding such a policy by itself will not force usage of this particular `DISP_VM`; it will only allow it when specified by the caller.
But `@dispvm:DISP_VM` can also be used as a target in request redirection, so _it is possible_ to force the use of a particular `DISP_VM` when the caller didn't specify one:
```
anon-whonix @dispvm allow,target=@dispvm:anon-whonix-dvm
```
Note that without redirection, this rule would allow using the default DisposableVM (the `default_dispvm` VM property, which itself defaults to the global `default_dispvm` property).
Also note that the request will be allowed (`allow` action) even if there is no second rule allowing calls to `@dispvm:anon-whonix-dvm`, or even if there is a rule explicitly denying it.
This is because the redirection happens _after_ considering the action.
The policy confirmation dialog (`ask` action) allows the user to specify the target VM.
The user can choose from VMs that, according to the policy, would lead to `ask` or `allow` actions.
It is not possible to select a VM that the policy would deny.
By default, no VM is selected, even if the caller provided one, but the policy can specify a default value using the `default_target=` parameter.
For example:
```
work-mail work-archive allow
work-mail @tag:work ask,default_target=work-files
work-mail @default ask,default_target=work-files
```
The first rule allows calls from `work-mail` to `work-archive` without any confirmation.
The second rule will ask the user about calls from the `work-mail` VM to any VM with the tag `work`.
The confirmation dialog will have the `work-files` VM chosen by default, regardless of the VM specified by the caller (the `work-mail` VM).
The third rule allows the caller to not specify a target VM at all and lets the user choose -- still from VMs with the tag `work` (plus `work-archive`, regardless of tag), and with `work-files` as the default.
### RPC services and security
Be very careful when coding and adding a new RPC service.
Unless the offered functionality equals full control over the target (as is the case with, e.g., the `qubes.VMShell` action), any vulnerability in an RPC server can be fatal to Qubes security.
On the other hand, this mechanism allows delegating the processing of untrusted input to less privileged (or disposable) AppVMs, so wise use of it increases security.
For example, this command will run the `firefox` command in a DisposableVM based on `work`:
```
$ qvm-run --dispvm=work firefox
```
By contrast, consider this command:
```
$ qvm-run --dispvm=work --service qubes.StartApp+firefox
```
This will look for a `firefox.desktop` file in a standard location in a DisposableVM based on `work`, then launch the application described by that file.
The practical difference is that the bare `qvm-run` command uses the `qubes.VMShell` service, which allows you to run an arbitrary command with arbitrary arguments, essentially providing full control over the target VM.
By contrast, the `qubes.StartApp` service allows you to run only applications that are advertised in `/usr/share/applications` (or other standard locations) *without* control over the arguments, so giving a VM access to `qubes.StartApp` is much safer.
While there isn't much practical difference between the two commands above when starting an application from dom0 in Qubes 4.0, there is a significant security risk when launching applications from a domU (e.g., from a separate GUI domain).
This is why `qubes.StartApp` uses our standard `qrexec` argument grammar to strictly filter the permissible grammar of the `Exec=` lines in `.desktop` files that are passed from untrusted domUs to dom0, thereby protecting dom0 from command injection by maliciously-crafted `.desktop` files.
### Service policies with arguments
Sometimes a service name alone isn't enough to express a reasonable qrexec policy.
One example of such a situation is [qrexec-based USB passthrough](https://www.qubes-os.org/doc/usb-devices/).
Using just a service name would make it difficult to express the policy "allow access to devices X and Y, but deny to all others."
It isn't feasible to create a separate service for every device: we would need to change the code in multiple files any time we wanted to update the service.
For this reason it is possible to specify a service argument, which will be subject to a policy.
A service argument can make service policies more fine-grained.
With arguments, it is easier to write more precise policies using the "allow" and "deny" actions, instead of relying on the "ask" method.
(Writing too many "ask" policies offloads additional decisions to the user.
Generally, the fewer choices the user must make, the lower the chance to make a mistake.)
Each specific argument that we want to use needs its own policy in dom0 at a path like `/etc/qubes-rpc/policy/RPC_ACTION_NAME+ARGUMENT`.
So for instance, we might have policies called `test.Device`, `test.Device+device1` and `test.Device+device2`.
If the policy for the specific argument is not set (that is, if no file exists for `RPC_ACTION_NAME+ARGUMENT`), then dom0 uses the default policy with no argument for this service.
When calling a service that takes an argument, just add the argument to the service name separated with `+`.
```
$ qrexec-client-vm target_vm_name RPC_ACTION_NAME+ARGUMENT
```
The script will receive `ARGUMENT` as its argument.
The argument will also become available as the `QREXEC_SERVICE_ARGUMENT` environment variable.
This means it is possible to install a different script for a particular service argument.
See [below](#rpc-service-with-argument-file-reader) for an example of an RPC service using an argument.
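As a sketch of how a single service script might branch on its argument (the function and device names below are hypothetical, echoing the USB-passthrough example above):

```shell
# Hypothetical dispatch inside a service script: the argument
# arrives as $1 and is also exported as QREXEC_SERVICE_ARGUMENT.
handle_service_argument() {
    case "$1" in
        device1|device2) echo "granting access to $1" ;;
        *) echo "unknown device: $1" >&2; return 1 ;;
    esac
}
handle_service_argument device1   # prints: granting access to device1
```

In practice the allow/deny decision belongs in the per-argument policy files; script-level dispatch like this only selects the behavior once the call has already been permitted.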
<!-- TODO document "Yes to All" authorization if it is reintroduced -->
## Qubes RPC examples
To demonstrate some of the possibilities afforded by the qrexec framework, here are two examples of custom RPC services.
### Simple RPC service (addition)
We can create an RPC service that adds two integers in a target domain (the server; call it "anotherVM") and returns the result to the invoker (the client, "someVM").
In someVM, create a file with the following contents and save it with the path `/usr/bin/our_test_add_client`:
```
#!/bin/sh
echo $1 $2 # pass data to RPC server
exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other RPC endpoint
```
Our server will be `/usr/bin/our_test_add_server` in anotherVM.
The code for this file is:
```
#!/bin/sh
read arg1 arg2 # read from stdin, which is received from the RPC client
echo $(($arg1+$arg2)) # print to stdout, which is passed to the RPC client
```
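Before wiring anything into qrexec, the server's logic can be sanity-checked locally by piping a request into it (the function below simply inlines the script's body):

```shell
# Local check of the addition logic: the RPC client would send
# "1 2" on stdin; the server replies with the sum on stdout.
add_server() {
    read arg1 arg2
    echo $(($arg1 + $arg2))
}
printf '1 2\n' | add_server   # prints: 3
```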
We'll need to create a service called `test.Add` with its own definition and policy file in dom0.
Now we need to define what the service does.
In this case, it should call our addition script.
We define the service with another one-line file, `/etc/qubes-rpc/test.Add`:
```
/usr/bin/our_test_add_server
```
The administrative domain will direct traffic based on the current RPC policies.
In dom0, create a file at `/etc/qubes-rpc/policy/test.Add` containing the following:
```
@anyvm @anyvm ask
```
This will allow our client and server to communicate.
Before we make the call, ensure that the client and server scripts have executable permissions.
Finally, invoke the RPC service.
```
$ qrexec-client-vm anotherVM test.Add /usr/bin/our_test_add_client 1 2
```
We should get "3" as the answer.
(dom0 will ask for confirmation first.)
**Note:** For a real world example of writing a qrexec service, see this [blog post](https://blog.invisiblethings.org/2013/02/21/converting-untrusted-pdfs-into-trusted.html).
### RPC service with argument (file reader)
Here we create an RPC call that reads a specific file from a predefined directory on the target.
This example uses an [argument](#service-policies-with-arguments) to the policy.
In this example, a simplified workflow is used: the service code is placed directly in the service definition file on the target VM.
No separate client script is needed.
First, on your target VM, create two files in the home directory: `testfile1` and `testfile2`.
Have them contain two different "Hello world!" lines.
Next, we define the RPC service.
On the target VM, place the code below at `/etc/qubes-rpc/test.File`:
```
#!/bin/sh
argument="$1" # service argument, also available as $QREXEC_SERVICE_ARGUMENT
if [ -z "$argument" ]; then
echo "ERROR: No argument given!"
exit 1
fi
cat "/home/user/$argument"
```
(The service argument is already sanitized by the qrexec framework.
It is guaranteed not to contain any spaces or slashes, so there should be no need for additional path sanitization.)
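The argument handling can be exercised locally with a throwaway directory standing in for `/home/user` (the setup below is purely illustrative; `read_file` inlines the service script's logic):

```shell
# Simulate the test.File service logic against a temporary directory.
dir=$(mktemp -d)
echo "Hello world! 1" > "$dir/testfile1"
read_file() {
    argument="$1"   # in the real service: $1 / $QREXEC_SERVICE_ARGUMENT
    if [ -z "$argument" ]; then
        echo "ERROR: No argument given!"
        return 1
    fi
    cat "$dir/$argument"
}
read_file testfile1   # prints: Hello world! 1
```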
Now we create three policy files in dom0.
See the table below for details.
Replace "source_vm1" and others with the names of your own chosen domains.
| Path to file in dom0                        | Policy contents              |
|---------------------------------------------|------------------------------|
| `/etc/qubes-rpc/policy/test.File`           | `@anyvm @anyvm deny`         |
| `/etc/qubes-rpc/policy/test.File+testfile1` | `source_vm1 target_vm allow` |
| `/etc/qubes-rpc/policy/test.File+testfile2` | `source_vm2 target_vm allow` |
With this done, we can run some tests.
Invoke RPC from `source_vm1` via
```
[user@source_vm1] $ qrexec-client-vm target_vm test.File+testfile1
```
We should get the contents of `/home/user/testfile1` printed to the terminal.
Invoking the service with `testfile2` from `source_vm2` should work in the same way:
```
[user@source_vm2] $ qrexec-client-vm target_vm test.File+testfile2
```
But when invoked with any other argument, or from a VM not allowed by the policy (for example, `test.File+testfile1` from `source_vm2`), the call should be denied.
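This denial falls out of how dom0 chooses the policy file: the argument-specific file is consulted first, and the bare service name is the fallback. The sketch below is an illustrative model of that lookup order only, not the real `qrexec-policy` code:

```shell
set -eu
# Prefer the argument-specific policy file; fall back to the bare service
# name (an illustrative model of the lookup described above).
policy_file() {  # $1 = policy dir, $2 = service name, $3 = argument
    if [ -f "$1/$2+$3" ]; then
        echo "$2+$3"
    else
        echo "$2"
    fi
}

dir=$(mktemp -d)
touch "$dir/test.File" "$dir/test.File+testfile1" "$dir/test.File+testfile2"

p1=$(policy_file "$dir" test.File testfile1)
p2=$(policy_file "$dir" test.File otherfile)
echo "$p1"   # test.File+testfile1
echo "$p2"   # test.File  (the default policy, which denies)
```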

@@ -11,8 +11,7 @@ redirect_from:
# Command execution in VMs #
(*This page is about qrexec v2. For qrexec v3, see
[here](/doc/qrexec3/).*)
(*This page is about qrexec v2. For qrexec v3, see [here](/doc/qrexec3/).*)
Qubes **qrexec** is a framework for implementing inter-VM (incl. Dom0-VM)
services. It offers a mechanism to start programs in VMs, redirect their
@@ -232,7 +231,7 @@ surfaces that are exposed to untrusted or less trusted VMs in that case.
# Qubes RPC internals #
(*This is about the implementation of qrexec v2. For the implementation of
qrexec v3, see [here](/doc/qrexec3/#qubes-rpc-internals). Note that the user
qrexec v3, see [here](/doc/qrexec-internals/). Note that the user
API in v3 is backward compatible: qrexec apps written for Qubes R2 should
run without modification on Qubes R3.*)

@@ -1,628 +0,0 @@
---
layout: doc
title: Qrexec3
permalink: /doc/qrexec3/
redirect_from:
- /en/doc/qrexec3/
- /doc/Qrexec3/
- /wiki/Qrexec3/
- /doc/qrexec/
- /en/doc/qrexec/
- /doc/Qrexec/
- /wiki/Qrexec/
- /doc/qrexec3-implementation/
- /en/doc/qrexec3-implementation/
- /doc/Qrexec3Implementation/
- /wiki/Qrexec3Implementation/
---
# Command execution in VMs #
(*This page is about qrexec v3. For qrexec v2, see
[here](/doc/qrexec2/).*)
The **qrexec** framework is used by core Qubes components to implement
communication between domains. Qubes domains are isolated by design, but
there is a need for a mechanism to allow the administrative domain (dom0) to
force command execution in another domain (VM). For instance, when user
selects an application from the KDE menu, it should be started in the selected
VM. Also, it is often useful to be able to pass stdin/stdout/stderr from an
application running in a VM to dom0 (and the other way around). In specific
circumstances, Qubes allows VMs to be initiators of such communications (so,
for example, a VM can notify dom0 that there are updates available for it).
## Qrexec basics ##
Qrexec is built on top of vchan (a library providing data links between
VMs). During domain creation a process named `qrexec-daemon` is started
in dom0, and a process named `qrexec-agent` is started in the VM. They are
connected over **vchan** channel. `qrexec-daemon` listens for connections
from dom0 utility named `qrexec-client`. Typically, the first thing that a
`qrexec-client` instance does is to send a request to `qrexec-daemon` to
start a process (let's name it `VMprocess`) with a given command line in
a specified VM (`someVM`). `qrexec-daemon` assigns unique vchan connection
details and sends them both to `qrexec-client` (in dom0) and `qrexec-agent`
(in `someVM`). `qrexec-client` starts a vchan server which `qrexec-agent`
connects to. From then on, stdin/stdout/stderr of the VMprocess are passed
via vchan between `qrexec-agent` and the `qrexec-client` process.
So, for example, executing in dom0:
qrexec-client -d someVM user:bash
allows working with the remote shell. The string before the first
colon specifies which user to run the command as. Adding `-e` on the
`qrexec-client` command line results in mere command execution (no data
passing), and `qrexec-client` exits immediately after sending the execution
request and receiving status code from `qrexec-agent` (whether the process
creation succeeded). There is also the `-l local_program` flag -- with it,
`qrexec-client` passes stdin/stdout of the remote process to the (spawned
for this purpose) `local_program`, not to its own stdin/stdout.
The `qvm-run` command is heavily based on `qrexec-client`. It also takes care
of additional activities, e.g. starting the domain if it is not up yet and
starting the GUI daemon. Thus, it is usually more convenient to use `qvm-run`.
There can be an almost arbitrary number of `qrexec-client` processes for a
domain (all connected to the same `qrexec-daemon`, i.e. the same domain); their
data is multiplexed independently. The number of available vchan channels is
the limiting factor here; it depends on the underlying hypervisor.
## Qubes RPC services ##
Some tasks (like inter-vm file copy) share the same rpc-like structure:
a process in one VM (say, file sender) needs to invoke and send/receive
data to some process in other VM (say, file receiver). Thus, the Qubes RPC
framework was created, facilitating such actions.
Obviously, inter-VM communication must be tightly controlled to prevent one
VM from taking control over another, possibly more privileged, VM. Therefore
the design decision was made to pass all control communication via dom0,
which can enforce proper authorization. It is then natural to reuse the
already-existing qrexec framework.
Also, note that bare qrexec provides `VM <-> dom0` connectivity, but the
command execution is always initiated by dom0. There are cases when a VM needs
to invoke and send data to a command in dom0 (e.g. to pass information on
newly installed `.desktop` files). Thus, the framework allows dom0 to be
the rpc target as well.
Thanks to the framework, RPC programs are very simple -- both rpc client
and server just use their stdin/stdout to pass data. The framework does all
the inner work to connect these processes to each other via `qrexec-daemon`
and `qrexec-agent`. Additionally, disposable VMs are tightly integrated --
rpc to a DisposableVM is identical to rpc to a normal domain, all one needs
is to pass `$dispvm` as the remote domain name.
## Qubes RPC administration ##
(*TODO: fix for non-linux dom0*)
In dom0, there is a bunch of files in the `/etc/qubes-rpc/policy` directory,
whose names describe the available rpc actions. Their content is the rpc
access policy database. Currently defined actions are:
qubes.ClipboardPaste
qubes.Filecopy
qubes.GetImageRGBA
qubes.GetRandomizedTime
qubes.Gpg
qubes.GpgImportKey
qubes.InputKeyboard
qubes.InputMouse
qubes.NotifyTools
qubes.NotifyUpdates
qubes.OpenInVM
qubes.OpenURL
qubes.PdfConvert
qubes.ReceiveUpdates
qubes.SyncAppMenus
qubes.USB
qubes.VMShell
qubes.WindowIconUpdater
These files contain lines with the following format:
srcvm destvm (allow|deny|ask)[,user=user_to_run_as][,target=VM_to_redirect_to]
You can specify srcvm and destvm by name, or by one of the reserved keywords
`$anyvm`, `$dispvm`, or `dom0` (note that the string `dom0` does not match the
`$anyvm` pattern; all other names do). Only the `$anyvm` keyword makes sense
in the srcvm field (service calls from dom0 are currently always allowed, and
`$dispvm` means "new VM created for this particular request," so it is never a
source of a request). Currently there is no way to specify a source VM by
type. Whenever an rpc request for action X is received, the first line in
`/etc/qubes-rpc/policy/X` that matches srcvm/destvm is consulted to determine
whether to allow the rpc, what user account the program should run under in
the target VM, and what VM to redirect the execution to. Note that if the
request is redirected (`target=` parameter), the policy action remains the
same, even if there is another rule which would otherwise deny such a request.
If the policy file does not exist, the user is prompted to create one; if
there is still no policy file after prompting, the action is denied.
In the target VM, the `/etc/qubes-rpc/RPC_ACTION_NAME` must exist, containing
the file name of the program that will be invoked, or being that program itself
- in which case it must have executable permission set (`chmod +x`).
In the source VM, one should invoke the client via:
/usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME rpc_client_path client arguments
Note that only stdin/stdout is passed between rpc server and client --
notably, no command line arguments are passed. The source VM name is given in
the `QREXEC_REMOTE_DOMAIN` environment variable. By default, stderr of the client
and server is logged to respective `/var/log/qubes/qrexec.XID` files.
It is also possible to call a service without a specific client program, in
which case the server's stdin/stdout will be connected to the terminal:
/usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME
Be very careful when coding and adding a new rpc service. Unless the
offered functionality equals full control over the target (it is the case
with e.g. `qubes.VMShell` action), any vulnerability in an rpc server can
be fatal to Qubes security. On the other hand, this mechanism allows the
processing of untrusted input to be delegated to less privileged (or disposable)
AppVMs, so wise use of it increases security.
For example, this command will run the `firefox` command in a DisposableVM based
on `work`:
```
$ qvm-run --dispvm=work firefox
```
By contrast, consider this command:
```
$ qvm-run --dispvm=work --service qubes.StartApp+firefox
```
This will look for a `firefox.desktop` file in a standard location in a
DisposableVM based on `work`, then launch the application described by that
file. The practical difference is that the bare `qvm-run` command uses the
`qubes.VMShell` service, which allows you to run an arbitrary command with
arbitrary arguments, essentially providing full control over the target VM. By
contrast, the `qubes.StartApp` service allows you to run only applications that
are advertised in `/usr/share/applications` (or other standard locations)
*without* control over the arguments, so giving a VM access to `qubes.StartApp`
is much safer. While there isn't much practical difference between the two
commands above when starting an application from dom0 in Qubes 4.0, there is a
significant security risk when launching applications from a domU (e.g., from
a separate GUI domain). This is why `qubes.StartApp` uses our standard `qrexec`
argument grammar to strictly filter the permissible grammar of the `Exec=` lines
in `.desktop` files that are passed from untrusted domUs to dom0, thereby
protecting dom0 from command injection by maliciously-crafted `.desktop` files.
### Extra keywords available in Qubes 4.0 and later
**This section is about a not-yet-released version, some details may change**
In Qubes 4.0, the target VM can also be specified as `$dispvm:DISP_VM`, which is
very similar to `$dispvm` but forces using a particular VM (`DISP_VM`) as a base
VM to be started as DisposableVM. For example:
anon-whonix $dispvm:anon-whonix-dvm allow
Adding such a policy itself will not force usage of this particular `DISP_VM`;
it will only allow it when specified by the caller. But `$dispvm:DISP_VM` can
also be used as the target in request redirection, so _it is possible_ to force
usage of a particular `DISP_VM` when the caller didn't specify it:
anon-whonix $dispvm allow,target=$dispvm:anon-whonix-dvm
Note that without redirection, this rule would allow using default Disposable
VM (`default_dispvm` VM property, which itself defaults to global
`default_dispvm` property).
Also note that the request will be allowed (`allow` action) even if there is no
second rule allowing calls to `$dispvm:anon-whonix-dvm`, or even if
there is a rule explicitly denying it. This is because the redirection happens
_after_ considering the action.
In Qubes 4.0 there are also additional methods to specify source/target VM:
* `$tag:some-tag` - meaning a VM with tag `some-tag`
* `$type:type` - meaning a VM of `type` (like `AppVM`, `TemplateVM` etc)
The target VM can also be specified as `$default`, which matches the case when
the calling VM didn't specify any particular target (either by using the
`$default` target, or an empty target).
In Qubes 4.0, the policy confirmation dialog (`ask` action) allows the user to
specify the target VM. The user can choose from VMs that, according to the
policy, would lead to `ask` or `allow` actions. It is not possible to select a
VM that the policy would deny. By default no VM is selected, even if the caller
provided one, but the policy can specify a default value using the
`default_target=` parameter. For example:
work-mail work-archive allow
work-mail $tag:work ask,default_target=work-files
work-mail $default ask,default_target=work-files
The first rule allows calls from `work-mail` to `work-archive`, without any
confirmation.
The second rule will ask the user about calls from the `work-mail` VM to any VM
with the tag `work`, and the confirmation dialog will have the `work-files` VM
chosen by default, regardless of the VM specified by the caller (the
`work-mail` VM). The third rule allows the caller to not specify a target VM at
all and lets the user choose, still from VMs with the tag `work` (and
`work-archive`, regardless of tag), with `work-files` as the default.
### Service argument in policy
Sometimes the service name alone isn't enough to make a reasonable qrexec
policy. One example of such a situation is [qrexec-based USB
passthrough](https://github.com/qubesos/qubes-issues/issues/531): using just
the service name, it isn't possible to express the policy "allow access to
device X and deny access to others". It also isn't feasible to create a
separate service for every device.
For this reason, starting with Qubes 3.2, it is possible to specify a service
argument, which will be subject to policy. Besides the above example of USB
passthrough, a service argument can make many service policies more
fine-grained, making it easier to write a precise policy with "allow" and
"deny" actions instead of "ask" (which offloads additional decisions to the
user). Generally, the fewer choices the user must make, the lower the chance
of a mistake.
The syntax is simple: when calling a service, add an argument to the service name
separated with `+` sign, for example:
/usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME+ARGUMENT
Then create a policy as usual, including the argument
(`/etc/qubes-rpc/policy/RPC_ACTION_NAME+ARGUMENT`). If the policy for the specific
argument is not set (file does not exist), then the default policy for this service
is loaded (`/etc/qubes-rpc/policy/RPC_ACTION_NAME`).
In the target VM (when the call is allowed), the service file will be searched for as:
- `/etc/qubes-rpc/RPC_ACTION_NAME+ARGUMENT`
- `/etc/qubes-rpc/RPC_ACTION_NAME`
In either case, the script will receive `ARGUMENT` as its first argument and
additionally in the `QREXEC_SERVICE_ARGUMENT` environment variable. This means
it is also possible
to install a different script for a particular service argument.
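A quick local simulation (outside qrexec) of this calling convention; the service sees the argument both positionally and in the environment:

```shell
set -eu
# Simulates only the calling convention described above; qrexec itself
# is not involved.
svc=$(mktemp)
cat > "$svc" <<'EOF'
#!/bin/sh
echo "positional: $1"
echo "environment: $QREXEC_SERVICE_ARGUMENT"
EOF
chmod +x "$svc"

out=$(QREXEC_SERVICE_ARGUMENT=ARGUMENT "$svc" ARGUMENT)
echo "$out"
# positional: ARGUMENT
# environment: ARGUMENT
```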
See below for an example service using an argument.
### Revoking "Yes to All" authorization ###
Qubes RPC policy supports "ask" action. This will prompt the user whether given
RPC call should be allowed. That prompt window also has a "Yes to All" option,
which will allow the action and add a new entry to the policy file, which will
unconditionally allow further calls for the given service-srcVM-dstVM tuple.
In order to remove such authorization, issue this command from a dom0 terminal
(for `qubes.Filecopy` service):
sudo nano /etc/qubes-rpc/policy/qubes.Filecopy
and then remove the first line(s) (before the first `##` comment) which are
the "Yes to All" results.
### Qubes RPC example ###
We will show the necessary files to create an rpc call that adds two integers
on the target and returns the result to the invoker.
* rpc client code (`/usr/bin/our_test_add_client`):
#!/bin/sh
echo $1 $2 # pass data to rpc server
exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other rpc endpoint
* rpc server code (*/usr/bin/our\_test\_add\_server*)
#!/bin/sh
read arg1 arg2 # read from stdin, which is received from the rpc client
echo $(($arg1+$arg2)) # print to stdout - so, pass to the rpc client
* policy file in dom0 (*/etc/qubes-rpc/policy/test.Add* )
$anyvm $anyvm ask
* server path definition ( */etc/qubes-rpc/test.Add*)
/usr/bin/our_test_add_server
* invoke rpc via
/usr/lib/qubes/qrexec-client-vm target_vm test.Add /usr/bin/our_test_add_client 1 2
and we should get "3" as answer, after dom0 allows it.
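Because the client and server exchange data only over stdin/stdout, the arithmetic can be smoke-tested locally with a plain pipe, no qrexec involved (the `$SAVED_FD_1` redirection used by the real client is omitted here):

```shell
set -eu
# Local stand-in for /usr/bin/our_test_add_server: read two integers
# from stdin, print their sum to stdout.
server() {
    read -r arg1 arg2
    echo $((arg1 + arg2))
}

result=$(echo "1 2" | server)
echo "$result"   # 3
```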
**Note:** For a real world example of writing a qrexec service, see this
[blog post](https://blog.invisiblethings.org/2013/02/21/converting-untrusted-pdfs-into-trusted.html).
### Qubes RPC example - with argument usage ###
We will show the necessary files to create an rpc call that reads a specific file
from a predefined directory on the target. Besides being really naive storage,
it may serve as a very simple password manager.
Additionally, in this example a simplified workflow will be used: the server
code is placed directly in the service definition file (in the
`/etc/qubes-rpc` directory), and no separate client script will be used.
* rpc server code (*/etc/qubes-rpc/test.File*)
#!/bin/sh
argument="$1" # service argument, also available as $QREXEC_SERVICE_ARGUMENT
if [ -z "$argument" ]; then
echo "ERROR: No argument given!"
exit 1
fi
# service argument is already sanitized by qrexec framework and it is
# guaranteed to not contain any space or /, so no need for additional path
# sanitization
cat "/home/user/rpc-file-storage/$argument"
* specific policy file in dom0 (*/etc/qubes-rpc/policy/test.File+testfile1* )
source_vm1 target_vm allow
* another specific policy file in dom0 (*/etc/qubes-rpc/policy/test.File+testfile2* )
source_vm2 target_vm allow
* default policy file in dom0 (*/etc/qubes-rpc/policy/test.File* )
$anyvm $anyvm deny
* invoke rpc from `source_vm1` via
/usr/lib/qubes/qrexec-client-vm target_vm test.File+testfile1
and we should get the content of `/home/user/rpc-file-storage/testfile1` as the answer.
* also possible to invoke rpc from `source_vm2` via
/usr/lib/qubes/qrexec-client-vm target_vm test.File+testfile2
But when invoked with any other argument or from a different VM, it should be denied.
# Qubes RPC internals #
(*This is about the implementation of qrexec v3. For the implementation of
qrexec v2, see [here](/doc/qrexec2/#qubes-rpc-internals).*)
The qrexec framework consists of a number of processes communicating with each
other using a common IPC protocol (described in detail below). Components
residing in the same domain (`qrexec-client-vm` to `qrexec-agent`, `qrexec-client` to `qrexec-daemon`) use pipes as the underlying transport medium,
while components in separate domains (`qrexec-daemon` to `qrexec-agent`, data channel between `qrexec-agent`s) use vchan link.
Because of [vchan limitation](https://github.com/qubesos/qubes-issues/issues/951), it is not possible to establish qrexec connection back to the source domain.
## Dom0 tools implementation ##
* `/usr/lib/qubes/qrexec-daemon`: One instance is required for every active
domain. Responsible for:
* Handling execution and service requests from **dom0** (source:
`qrexec-client`).
* Handling service requests from the associated domain (source:
`qrexec-client-vm`, then `qrexec-agent`).
* Command line: `qrexec-daemon domain-id domain-name [default user]`
* `domain-id`: Numeric Qubes ID assigned to the associated domain.
* `domain-name`: Associated domain name.
* `default user`: Optional. If passed, `qrexec-daemon` uses this user as
default for all execution requests that don't specify one.
* `/usr/lib/qubes/qrexec-policy`: Internal program used to evaluate the
RPC policy and decide whether an RPC call should be allowed.
* `/usr/lib/qubes/qrexec-client`: Used to pass execution and service requests
to `qrexec-daemon`. Command line parameters:
* `-d target-domain-name`: Specifies the target for the execution/service
request.
* `-l local-program`: Optional. If present, `local-program` is executed
and its stdout/stdin are used when sending/receiving data to/from the
remote peer.
* `-e`: Optional. If present, stdout/stdin are not connected to the remote
peer. Only process creation status code is received.
* `-c <request-id,src-domain-name,src-domain-id>`: used for connecting
a VM-VM service request by `qrexec-policy`. Details described below in
the service example.
* `cmdline`: Command line to pass to `qrexec-daemon` as the
execution/service request. Service request format is described below in
the service example.
**Note:** None of the above tools are designed to be used by users directly.
## VM tools implementation ##
* `qrexec-agent`: One instance runs in each active domain. Responsible for:
* Handling service requests from `qrexec-client-vm` and passing them to
connected `qrexec-daemon` in dom0.
* Executing associated `qrexec-daemon` execution/service requests.
* Command line parameters: none.
* `qrexec-client-vm`: Runs in an active domain. Used to pass service requests
to `qrexec-agent`.
* Command line: `qrexec-client-vm target-domain-name service-name local-program [local program arguments]`
* `target-domain-name`: Target domain for the service request. Source is
the current domain.
* `service-name`: Requested service name.
* `local-program`: `local-program` is executed locally and its stdin/stdout
are connected to the remote service endpoint.
## Qrexec protocol details ##
Qrexec protocol is message-based. All messages share a common header followed
by an optional data packet.
/* uniform for all peers, data type depends on message type */
struct msg_header {
uint32_t type; /* message type */
uint32_t len; /* data length */
};
When two peers establish connection, the server sends `MSG_HELLO` followed by
`peer_info` struct:
struct peer_info {
uint32_t version; /* qrexec protocol version */
};
The client then should reply with its own `MSG_HELLO` and `peer_info`. The
lower of the two versions defines the protocol used for this connection. If either side
does not support this version, the connection is closed.
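The version rule can be stated in one line of shell; this is only an illustration of the negotiation described above, not code from the qrexec implementation:

```shell
set -eu
# Illustration only: the lower of the two advertised versions wins.
negotiate() {  # $1 = server version, $2 = client version
    if [ "$1" -le "$2" ]; then echo "$1"; else echo "$2"; fi
}

v=$(negotiate 3 2)
echo "$v"   # 2: both peers fall back to protocol version 2
```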
Details of all possible use cases and the messages involved are described below.
### dom0: request execution of `some_command` in domX and pass stdin/stdout ###
- **dom0**: `qrexec-client` is invoked in **dom0** as follows:
`qrexec-client -d domX [-l local_program] user:some_command`
- `user` may be substituted with the literal `DEFAULT`. In that case,
default Qubes user will be used to execute `some_command`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable
to **domX**.
- **dom0**: If `local_program` is set, `qrexec-client` executes it and uses
that child's stdin/stdout in place of its own when exchanging data with
`qrexec-agent` later.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_HELLO` header followed by `peer_info`
to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by
`peer_info` to `qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by
`exec_params` to `qrexec-daemon`.
/* variable size */
struct exec_params {
uint32_t connect_domain; /* target domain id */
uint32_t connect_port; /* target vchan port for i/o exchange */
char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
In this case, `connect_domain` and `connect_port` are set to 0.
- **dom0**: `qrexec-daemon` replies to `qrexec-client` with
`MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline`
field. `connect_domain` is set to Qubes ID of **domX** and `connect_port`
is set to a vchan port allocated by `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header followed
by `exec_params` to the associated **domX** `qrexec-agent` over
vchan. `connect_domain` is set to 0 (**dom0**), `connect_port` is the same
as sent to `qrexec-client`. `cmdline` is unchanged except that the literal
`DEFAULT` is replaced with actual user name, if present.
- **dom0**: `qrexec-client` disconnects from `qrexec-daemon`.
- **dom0**: `qrexec-client` starts a vchan server using the details received
from `qrexec-daemon` and waits for connection from **domX**'s `qrexec-agent`.
- **domX**: `qrexec-agent` receives `MSG_EXEC_CMDLINE` header followed by
`exec_params` from `qrexec-daemon` over vchan.
- **domX**: `qrexec-agent` connects to `qrexec-client` over vchan using the
details from `exec_params`.
- **domX**: `qrexec-agent` executes `some_command` in **domX** and connects
the child's stdin/stdout to the data vchan. If the process creation fails,
`qrexec-agent` sends `MSG_DATA_EXIT_CODE` to `qrexec-client` followed by
the status code (**int**) and disconnects from the data vchan.
- Data read from `some_command`'s stdout is sent to the data vchan using
`MSG_DATA_STDOUT` by `qrexec-agent`. `qrexec-client` passes data received as
`MSG_DATA_STDOUT` to its own stdout (or to `local_program`'s stdin if used).
- `qrexec-client` sends data read from local stdin (or `local_program`'s
stdout if used) to `qrexec-agent` over the data vchan using
`MSG_DATA_STDIN`. `qrexec-agent` passes data received as `MSG_DATA_STDIN`
to `some_command`'s stdin.
- `MSG_DATA_STDOUT` or `MSG_DATA_STDIN` with data `len` field set to 0 in
`msg_header` is an EOF marker. Peer receiving such message should close the
associated input/output pipe.
- When `some_command` terminates, **domX**'s `qrexec-agent` sends
`MSG_DATA_EXIT_CODE` header to `qrexec-client` followed by the exit code
(**int**). `qrexec-agent` then disconnects from the data vchan.
### domY: invoke execution of qubes service `qubes.SomeRpc` in domX and pass stdin/stdout ###
- **domY**: `qrexec-client-vm` is invoked as follows:
`qrexec-client-vm domX qubes.SomeRpc local_program [params]`
- **domY**: `qrexec-client-vm` connects to `qrexec-agent` (via local
socket/named pipe).
- **domY**: `qrexec-client-vm` sends `trigger_service_params` data to
`qrexec-agent` (without filling the `request_id` field):
struct trigger_service_params {
char service_name[64];
char target_domain[32];
struct service_params request_id; /* service request id */
};
struct service_params {
char ident[32];
};
- **domY**: `qrexec-agent` allocates a locally-unique (for this domain)
`request_id` (let's say `13`) and fills it in the `trigger_service_params`
struct received from `qrexec-client-vm`.
- **domY**: `qrexec-agent` sends `MSG_TRIGGER_SERVICE` header followed by
`trigger_service_params` to `qrexec-daemon` in **dom0** via vchan.
- **dom0**: **domY**'s `qrexec-daemon` executes `qrexec-policy`: `qrexec-policy
domY_id domY domX qubes.SomeRpc 13`.
- **dom0**: `qrexec-policy` evaluates if the RPC should be allowed or
denied. If the action is allowed it returns `0`, if the action is denied it
returns `1`.
- **dom0**: **domY**'s `qrexec-daemon` checks the exit code of `qrexec-policy`.
- If `qrexec-policy` returned **not** `0`: **domY**'s `qrexec-daemon`
sends `MSG_SERVICE_REFUSED` header followed by `service_params` to
**domY**'s `qrexec-agent`. `service_params.ident` is identical to the one
received. **domY**'s `qrexec-agent` disconnects its `qrexec-client-vm`
and RPC processing is finished.
- If `qrexec-policy` returned `0`, RPC processing continues.
- **dom0**: if `qrexec-policy` allowed the RPC, it executes `qrexec-client
-d domX -c 13,domY,domY_id user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable
to **domX**.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_HELLO` header followed by
`peer_info` to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by
`peer_info` to **domX**'s `qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by
`exec_params` to **domX**'s `qrexec-daemon`
/* variable size */
struct exec_params {
uint32_t connect_domain; /* target domain id */
uint32_t connect_port; /* target vchan port for i/o exchange */
char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
In this case, `connect_domain` is set to id of **domY** (from the `-c`
parameter) and `connect_port` is set to 0. `cmdline` field contains the
RPC to execute, in this case `user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: **domX**'s `qrexec-daemon` replies to `qrexec-client` with
`MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline`
field. `connect_domain` is set to Qubes ID of **domX** and `connect_port`
is set to a vchan port allocated by **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header
followed by `exec_params` to **domX**'s `qrexec-agent`. `connect_domain`
and `connect_port` fields are the same as in the step above. `cmdline` is
set to the one received from `qrexec-client`, in this case `user:QUBESRPC
qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` disconnects from **domX**'s `qrexec-daemon`
after receiving connection details.
- **dom0**: `qrexec-client` connects to **domY**'s `qrexec-daemon` and
exchanges `MSG_HELLO` as usual.
- **dom0**: `qrexec-client` sends `MSG_SERVICE_CONNECT` header followed by
`exec_params` to **domY**'s `qrexec-daemon`. `connect_domain` is set to ID
of **domX** (received from **domX**'s `qrexec-daemon`) and `connect_port` is
the one received as well. `cmdline` is set to request ID (`13` in this case).
- **dom0**: **domY**'s `qrexec-daemon` sends `MSG_SERVICE_CONNECT` header
followed by `exec_params` to **domY**'s `qrexec-agent`. Data fields are
unchanged from the step above.
- **domY**: `qrexec-agent` starts a vchan server on the port received in
the step above. It acts as a `qrexec-client` in this case because this is
a VM-VM connection.
- **domX**: `qrexec-agent` connects to the vchan server of **domY**'s
`qrexec-agent` (connection details were received before from **domX**'s
`qrexec-daemon`).
- After that, connection follows the flow of the previous example (dom0-VM).

doc.md
@@ -45,6 +45,7 @@ Core documentation for Qubes users.
* [System Requirements](/doc/system-requirements/)
* [Certified Hardware](/doc/certified-hardware/)
* [Hardware Compatibility List (HCL)](/hcl/)
* [Hardware Testing](/doc/hardware-testing/)
### Downloading, Installing, and Upgrading Qubes
@@ -63,7 +64,7 @@ Core documentation for Qubes users.
* [Copying from (and to) Dom0](/doc/copy-from-dom0/)
* [Updating Qubes OS](/doc/updating-qubes-os/)
* [Installing and Updating Software in Dom0](/doc/software-update-dom0/)
* [Installing and Updating Software in VMs](/doc/software-update-vm/)
* [Installing and Updating Software in DomUs](/doc/software-update-domu/)
* [Backup, Restoration, and Migration](/doc/backup-restore/)
* [DisposableVMs](/doc/disposablevm/)
* [Block (or Storage) Devices](/doc/block-devices/)
@@ -77,12 +78,11 @@ Core documentation for Qubes users.
### Managing Operating Systems within Qubes
* [TemplateVMs](/doc/templates/)
* [Template: Fedora](/doc/templates/fedora/)
* [Template: Fedora Minimal](/doc/templates/fedora-minimal/)
* [Template: Debian](/doc/templates/debian/)
* [Template: Debian Minimal](/doc/templates/debian-minimal/)
* [Fedora](/doc/templates/fedora/)
* [Debian](/doc/templates/debian/)
* [Minimal TemplateVMs](/doc/templates/minimal/)
* [Windows](/doc/windows/)
* [HVM Domains](/doc/hvm/)
* [StandaloneVMs and HVMs](/doc/standalone-and-hvm/)
### Security in Qubes
@@ -154,7 +154,6 @@ Core documentation for Qubes developers and advanced users.
* [Qubes Core Admin Client](https://dev.qubes-os.org/projects/core-admin-client/en/latest/)
* [Qubes Admin API](/news/2017/06/27/qubes-admin-api/)
* [Qubes Core Stack](/news/2017/10/03/core3/)
* [Qrexec: command execution in VMs](/doc/qrexec3/)
* [Qubes GUI virtualization protocol](/doc/gui/)
* [Networking in Qubes](/doc/networking/)
* [Implementation of template sharing and updating](/doc/template-implementation/)
@@ -167,6 +166,8 @@ Core documentation for Qubes developers and advanced users.
* [Dynamic memory management in Qubes](/doc/qmemman/)
* [Implementation of DisposableVMs](/doc/dvm-impl/)
* [Dom0 secure update mechanism](/doc/dom0-secure-updates/)
* [Qrexec: secure communication across domains](/doc/qrexec/)
* [Qubes RPC internals](/doc/qrexec-internals/)
### Debugging

@ -53,10 +53,3 @@ This applies also to any TemplateBasedVM relative to its parent TemplateVM, but
Credit: [Joanna Rutkowska](https://twitter.com/rootkovska/status/832571372085850112)
Trim for standalone AppVMs
---------------------
The `qvm-trim-template` command is not available for a standalone AppVM.
It is still possible to trim the AppVM disks by using the `fstrim --all` command from the appvm.
You can also add the `discard` option to the mount line in `/etc/fstab` inside the standalone AppVM if you want trimming to be performed automatically, but there may be a performance impact on writes and deletes.

View File

@ -148,15 +148,9 @@ There are multiple ways to create a Kali Linux VM:
[user@kali ~]$ sudo apt-get dist-upgrade
[user@kali ~]$ sudo apt-get autoremove
8. Shutdown and trim `kali` template
8. Shut down `kali` template
- Shutdown `kali` template
[user@kali ~]$ sudo shutdown -h now
- In `dom0` console:
[user@dom0 ~]$ qvm-trim-template kali
[user@kali ~]$ sudo shutdown -h now
9. Start image
@ -285,10 +279,9 @@ These instructions will show you how to upgrade a Debian TemplateVM to Kali Linu
[user@kali-rolling ~]$ sudo apt-get dist-upgrade
[user@kali-rolling ~]$ sudo apt-get autoremove
9. Shut down and trim the new template.
9. Shut down the new template.
[user@dom0 ~]$ qvm-shutdown kali-rolling
[user@dom0 ~]$ qvm-trim-template kali-rolling
10. Ensure a terminal can be opened in the new template.

View File

@ -7,6 +7,11 @@ redirect_from:
- /en/doc/windows-appvms/
- /doc/WindowsAppVms/
- /wiki/WindowsAppVms/
- /doc/windows-tools-3/
- /en/doc/windows-tools-3/
- /doc/WindowsTools3/
- /doc/WindowsTools/
- /wiki/WindowsTools/
---
Qubes Windows Tools
@ -153,6 +158,102 @@ Then, periodically check for updates in the Template VM and the changes will be
Once the template has been created and installed it is easy to create AppVMs based on it:
~~~
qvm-create --hvm <new windows appvm name> --template <name of template vm> --label <label color>
qvm-create --property virt_mode=hvm <new windows appvm name> --template <name of template vm> --label <label color>
~~~
Components
----------
Qubes Windows Tools (QWT for short) contain several components that can be enabled or disabled during installation:
- Shared components (required): common libraries used by QWT components.
- Xen PV drivers: drivers for the virtual hardware exposed by Xen.
- Base Xen PV Drivers (required): paravirtual bus and interface drivers.
- Xen PV Disk Drivers: paravirtual storage drivers.
- Xen PV Network Drivers: paravirtual network drivers.
- Qubes Core Agent: qrexec agent and services. Needed for proper integration with Qubes.
- Move user profiles: the user profile directory (c:\users) is moved to the VM's private disk, backed by the private.img file in dom0 (useful mainly for HVM templates).
- Qubes GUI Agent: video driver and gui agent that enable seamless showing of Windows applications on the secure Qubes desktop.
- Disable UAC: User Account Control may interfere with QWT and doesn't really provide any additional benefits in a Qubes environment.
**In testing VMs only** it's probably a good idea to install a VNC server before installing QWT. If something goes very wrong with the Qubes gui agent, a VNC server should still allow access to the OS.
**NOTE**: Xen PV disk drivers are not installed by default. This is because they seem to cause problems (BSOD = Blue Screen Of Death). We're working with upstream devs to fix this. *However*, the BSOD seems to only occur after the first boot and everything works fine after that. **Enable the drivers at your own risk** of course, but we welcome reports of success/failure in any case (backup your VM first!). With disk PV drivers absent `qvm-block` will not work for the VM, but you can still use standard Qubes inter-VM file copying mechanisms.
Xen PV driver components may display a message box asking for reboot during installation -- it's safe to ignore them and defer the reboot.
Installation logs
-----------------
If the install process fails or something goes wrong during it, include the installation logs in your bug report. They are created in the `%TEMP%` directory, by default `<user profile>\AppData\Local\Temp`. There are two text files, one small and one big, with names starting with `Qubes_Windows_Tools`.
Uninstalling QWT is supported from version 3.2.1. Uninstalling previous versions is **not recommended**.
After uninstalling you need to manually enable the DHCP Client Windows service, or set IP settings yourself to restore network access.
Configuration
-------------
Starting from version 2.2.\* various aspects of Qubes Windows Tools can be configured through the registry. The main configuration key is located in `HKEY_LOCAL_MACHINE\SOFTWARE\Invisible Things Lab\Qubes Tools`. Configuration values set on this level are global to all QWT components. It's possible to override global values with component-specific keys; this is useful mainly for setting log verbosity for troubleshooting. Possible configuration values are:
|**Name**|**Type**|**Description**|**Default value**|
|:-------|:-------|:--------------|:----------------|
|LogDir|String|Directory where logs are created|c:\\Program Files\\Invisible Things Lab\\Qubes Tools\\log|
|LogLevel|DWORD|Log verbosity (see below)|2 (INFO)|
|LogRetention|DWORD|Maximum age of log files (in seconds), older logs are automatically deleted|604800 (7 days)|
Possible log levels:
|**Level**|**Name**|**Description**|
|:--------|:-------|:--------------|
|1|Error|Serious errors that most likely cause irrecoverable failures|
|2|Warning|Unexpected but non-fatal events|
|3|Info|Useful information (default)|
|4|Debug|Internal state dumps for troubleshooting|
|5|Verbose|Trace most function calls|
Debug and Verbose levels can generate large volume of logs and are intended for development/troubleshooting only.
To override global settings for a specific component, create a new key under the root key mentioned above and name it after the component's executable, without the `.exe` extension. For example, to change qrexec-agent's log level to Debug, set it like this:
![qtw-log-level.png](/attachment/wiki/WindowsTools/qtw-log-level.png)
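Exported as a `.reg` file, that example override would look roughly like this (a sketch based on the key path and log levels table above; verify against your QWT version before applying):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Invisible Things Lab\Qubes Tools\qrexec-agent]
; LogLevel 4 = Debug (see the log levels table above)
"LogLevel"=dword:00000004
```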
Component-specific settings currently available:
|**Component**|**Setting**|**Type**|**Description**|**Default value**|
|:------------|:----------|:-------|:--------------|:----------------|
|qga|DisableCursor|DWORD|Disable cursor in the VM. Useful for integration with Qubes desktop so you don't see two cursors. Can be disabled if you plan to use the VM through a remote desktop connection of some sort. Needs gui agent restart to apply change (locking OS/logoff should be enough since qga is restarted on desktop change).|1|
Troubleshooting
---------------
If the VM is inaccessible (doesn't respond to qrexec commands, gui is not functioning), try to boot it in safe mode:
- `qvm-start --debug vmname`
- mash F8 on the boot screen to enable boot options and select Safe Mode (optionally with networking)
Safe Mode should at least give you access to logs (see above).
**Please include appropriate logs when reporting bugs/problems.** Starting from version 2.4.2 logs contain QWT version, but if you're using an earlier version be sure to mention which one. If the OS crashes (BSOD) please include the BSOD code and parameters in your bug report. The BSOD screen should be visible if you run the VM in debug mode (`qvm-start --debug vmname`). If it's not visible or the VM reboots automatically, try to start Windows in safe mode (see above) and 1) disable automatic restart on BSOD (Control Panel - System - Advanced system settings - Advanced - Startup and recovery), 2) check the system event log for BSOD events. If you can, send the `memory.dmp` dump file from c:\Windows.
Xen logs (/var/log/xen/console/guest-*) are also useful as they contain pvdrivers diagnostic output.
If a specific component is malfunctioning, you can increase its log verbosity as explained above to get more troubleshooting information. Below is a list of components:
|**Component**|**Description**|
|:------------|:--------------|
|qrexec-agent|Responsible for most communication with Qubes (dom0 and other domains), secure clipboard, file copying, qrexec services.|
|qrexec-wrapper|Helper executable that's responsible for launching qrexec services, handling their I/O and vchan communication.|
|qrexec-client-vm|Used for communications by the qrexec protocol.|
|qga|Gui agent.|
|QgaWatchdog|Service that monitors session/desktop changes (logon/logoff/locking/UAC...) and simulates SAS sequence (ctrl-alt-del).|
|qubesdb-daemon|Service for accessing Qubes configuration database.|
|network-setup|Service that sets up network parameters according to VM's configuration.|
|prepare-volume|Utility that initializes and formats the disk backed by `private.img` file. It's registered to run on next system boot during QWT setup, if that feature is selected (it can't run *during* the setup because Xen block device drivers are not yet active). It in turn registers move-profiles (see below) to run at early boot.|
|relocate-dir|Utility that moves user profiles directory to the private disk. It's registered as an early boot native executable (similar to chkdsk) so it can run before any profile files are opened by some other process. Its log is in a fixed location: `c:\move-profiles.log` (it can't use our common logger library so none of the log settings apply).|
Updates
-------
When we publish a new QWT version (announced on the `qubes-users` Google Group), it's usually pushed to the `current-testing` or `unstable` repository first. To use versions from current-testing, run this in dom0:
`qubes-dom0-update --enablerepo=qubes-dom0-current-testing qubes-windows-tools`
That command will download a new QWT .iso from the testing repository. It goes without saying that you should **backup your VMs** before installing anything from testing repos.

View File

@ -82,6 +82,13 @@ Note however that you are better off creating a new Windows VM to benefit from t
Windows VM installation
-----------------------
### qvm-create-windows-qube ###
An unofficial, third-party tool for automating this process is available [here](https://github.com/crazyqube/qvm-create-windows-qube).
(Please note that this tool has not been reviewed by the Qubes OS Project.
Use it at your own risk.)
However, if you are an expert or want to do it manually, you may continue below.
### Summary ###
~~~
@ -162,7 +169,7 @@ To avoid that error we temporarily have to switch the video adapter to 'cirrus':
qvm-features win7new video-model cirrus
~~~
The VM is now ready to be started; the best practice is to use an installation ISO [located in a VM](/doc/hvm/#installing-an-os-in-an-hvm-qube):
The VM is now ready to be started; the best practice is to use an installation ISO [located in a VM](/doc/standalone-and-hvm/#installing-an-os-in-an-hvm):
~~~
qvm-start --cdrom=untrusted:/home/user/windows_install.iso win7new
@ -189,7 +196,7 @@ qvm-prefs win7new memory 2048
qvm-prefs win7new maxmem 2048
~~~
Revert to the standard VGA adapter :
Revert to the standard VGA adapter: the 'cirrus' adapter will limit the maximum screen resolution to 1024x768 pixels, while the default VGA adapter allows for much higher resolutions (up to 2560x1600 pixels).
~~~
qvm-features --unset win7new video-model

View File

@ -67,26 +67,29 @@ Optional Preparation Steps
[minimal Fedora template][FedoraMinimal]. Get it if you haven't already done
so:
[user@dom0 ~]$ sudo qubes-dom0-update qubes-template-fedora-26-minimal
[user@dom0 ~]$ sudo qubes-dom0-update qubes-template-fedora-30-minimal
2. Since we'll be making some modifications, you may want to clone the minimal
template:
[user@dom0 ~]$ qvm-clone fedora-26-minimal fedora-26-min-mfa
[user@dom0 ~]$ qvm-clone fedora-30-minimal fedora-30-min-mfa
3. To open a root shell on the minimal template (for details, see [Passwordless Root]), run the following command:
3. Since this is going to be a minimal environment in which we run `oathtool`
[user@dom0 ~]$ qvm-run -u root fedora-30-min-mfa xterm
4. Since this is going to be a minimal environment in which we run `oathtool`
from the command line, we'll install only a couple of packages:
[user@fedora-26-min-mfa ~]$ su -
[user@fedora-26-min-mfa ~]# dnf install oathtool vim-minimal
[user@fedora-26-min-mfa ~]$ poweroff
[root@fedora-30-min-mfa ~]# dnf install oathtool vim-minimal
[root@fedora-30-min-mfa ~]# poweroff
4. Create an AppVM and set it to use the TemplateVM we just created:
5. Create an AppVM and set it to use the TemplateVM we just created:
[user@dom0 ~]$ qvm-create -l black mfa
[user@dom0 ~]$ qvm-prefs -s mfa template fedora-26-min-mfa
[user@dom0 ~]$ qvm-prefs -s mfa template fedora-30-min-mfa
5. Isolate the new AppVM from the network:
6. Isolate the new AppVM from the network:
[user@dom0 ~]$ qvm-prefs -s mfa netvm none
@ -135,7 +138,7 @@ is largely the same.
[user@mfa ~]$ > google
[user@mfa ~]$ vi google
#!/bin/bash
#!/usr/bin/env bash
##My Google Account
##me@gmail.com
oathtool --base32 --totp "xd2n mx5t ekg6 h6bi u74d 745k n4m7 zy3x"
@ -184,3 +187,4 @@ is largely the same.
[Google Authenticator]: https://en.wikipedia.org/wiki/Google_Authenticator
[FedoraMinimal]: /doc/Templates/FedoraMinimal/
[usage]: https://en.wikipedia.org/wiki/Google_Authenticator#Usage
[Passwordless Root]: /doc/templates/minimal/#passwordless-root

View File

@ -32,5 +32,5 @@ When a template is marked as 'installed by package manager', but cannot be unins
- If `installed_by_rpm` remains `True`, reboot your computer to bring qubes.xml in sync with qubesd, and try again to remove the template.
[normal method]: /doc/templates/#how-to-install-uninstall-reinstall-and-switch
[normal method]: /doc/templates/#uninstalling

View File

@ -58,15 +58,15 @@ permalink: /experts/
</div>
<div class="row featured-quotes">
<div class="col-lg-3 col-md-3 text-center">
<a class="avatar-large" href="https://twitter.com/petertoddbtc/status/924981145871060996" target="_blank">
<img src="/attachment/site/expert-peter-todd.jpg">
<a class="avatar-large" href="https://twitter.com/csoghoian" target="_blank">
<img src="/attachment/site/expert-christopher-soghoian.jpg">
</a>
</div>
<div class="col-lg-9 col-md-9 more-top">
<a href="https://twitter.com/petertoddbtc/status/924981145871060996" target="_blank">
<blockquote>"Donated a % of my consulting company's last year revenue to Qubes OS. I rely on it for all my work, and recommend it to clients too."
<a href="https://twitter.com/csoghoian" target="_blank">
<blockquote>"I am so much happier and less stressed out after switching to QubesOS. Can wholeheartedly recommend."
<i class="fa fa-twitter fa-fw" aria-hidden="true"></i>
<footer>Peter Todd<cite title="Source Title">, Applied Cryptography Consultant</cite></footer>
<footer>Christopher Soghoian<cite title="Source Title">, privacy researcher, activist, and principal technologist at the ACLU</cite></footer>
</blockquote>
</a>
</div>
@ -88,15 +88,15 @@ permalink: /experts/
</div>
<div class="row featured-quotes">
<div class="col-lg-3 col-md-3 text-center">
<a class="avatar-large" href="https://twitter.com/csoghoian/status/756212792785534976" target="_blank">
<img src="/attachment/site/expert-christopher-soghoian.jpg">
<a class="avatar-large" href="https://twitter.com/petertoddbtc/status/924981145871060996" target="_blank">
<img src="/attachment/site/expert-peter-todd.jpg">
</a>
</div>
<div class="col-lg-9 col-md-9 more-top">
<a href="https://twitter.com/csoghoian/status/756212792785534976" target="_blank">
<blockquote>"I am so much happier and less stressed out after switching to QubesOS. Can wholeheartedly recommend."
<a href="https://twitter.com/petertoddbtc/status/924981145871060996" target="_blank">
<blockquote>"Donated a % of my consulting company's last year revenue to Qubes OS. I rely on it for all my work, and recommend it to clients too."
<i class="fa fa-twitter fa-fw" aria-hidden="true"></i>
<footer>Christopher Soghoian<cite title="Source Title">, privacy researcher, activist, and principal technologist at the ACLU</cite></footer>
<footer>Peter Todd<cite title="Source Title">, Applied Cryptography Consultant</cite></footer>
</blockquote>
</a>
</div>
@ -131,6 +131,21 @@ permalink: /experts/
</a>
</div>
</div>
<div class="row featured-quotes">
<div class="col-lg-3 col-md-3 text-center">
<a class="avatar-large" href="https://twitter.com/vitalikbuterin/status/1086465679904038912" target="_blank">
<img src="/attachment/site/expert-vitalik-buterin.jpg">
</a>
</div>
<div class="col-lg-9 col-md-9 more-top">
<a href="https://twitter.com/vitalikbuterin/status/1086465679904038912" target="_blank">
<blockquote>"Trying out Qubes OS (qubes-os.org) recently; linux distro designed around increased security by virtualizing everything and making it really convenient to hop between VMs. Surprisingly good user-friendliness!"
<i class="fa fa-twitter fa-fw" aria-hidden="true"></i>
<footer>Vitalik Buterin<cite title="Source Title">, creator of Ethereum</cite></footer>
</blockquote>
</a>
</div>
</div>
</div>
{% include footer.html %}
</div>

View File

@ -118,7 +118,7 @@ Please refer to [this page](/doc/vm-sudo/).
### Why is dom0 so old?
Please see:
- [Why would one want to update software in dom0?](/doc/software-update-dom0/#why-would-one-want-to-install-or-update-software-in-dom0)
- [Installing and updating software in dom0](/doc/software-update-dom0/)
- [Note on dom0 and EOL](/doc/supported-versions/#note-on-dom0-and-eol)
### Do you recommend coreboot as an alternative to vendor BIOS?
@ -421,7 +421,7 @@ For Debian:
For Fedora:
1. (Recommended) Clone an existing Fedora TemplateVM
2. [Enable the appropriate RPMFusion repos in the desired Fedora TemplateVM.](/doc/software-update-vm/#rpmfusion-for-a-fedora-templatevm)
2. [Enable the appropriate RPMFusion repos in the desired Fedora TemplateVM.](/doc/software-update-domu/#rpmfusion-for-fedora-templatevms)
3. Install VLC in that TemplateVM:
$ sudo dnf install vlc

View File

@ -228,8 +228,7 @@ search and reply to messages which were sent prior to your subscription to
the list. However, a Google account is required in order to post through the
web interfaces. (Note: There have been many discussions about why the Qubes
OS Project does not maintain an official forum. The curious can find these
by searching the list archives. However, there is an [unofficial
forum](https://qubes-os.info) that is synced with the mailing lists.)
by searching the list archives.)
## qubes-announce ##

View File

@ -33,6 +33,10 @@ Reporting Security Issues in Qubes OS
If you believe you have found a security issue affecting Qubes OS, either directly or indirectly (e.g. the issue affects Xen in a configuration that is used in Qubes OS), then we would be more than happy to hear from you!
We promise to treat any reported issue seriously and, if the investigation confirms that it affects Qubes, to patch it within a reasonable time and release a public [Qubes Security Bulletin][Security Bulletins] that describes the issue, discusses the potential impact of the vulnerability, references applicable patches or workarounds, and credits the discoverer.
Security Updates
----------------
Qubes security updates are delivered through the standard update mechanism; see [Updating Qubes OS].
The Qubes Security Team
-----------------------
@ -82,4 +86,6 @@ Please see [Why and How to Verify Signatures] for information about how to verif
[Simon Gaiser (aka HW42)]: /team/#simon-gaiser-aka-hw42
[Joanna Rutkowska]: /team/#joanna-rutkowska
[emeritus, canaries only]: /news/2018/11/05/qubes-security-team-update/
[Updating Qubes OS]: /doc/updating-qubes-os/

View File

@ -52,7 +52,6 @@ There are three basic steps in this process:
If you run into any problems, please consult the [Troubleshooting FAQ] below.
### 1. Get the Qubes Master Signing Key and verify its authenticity
Every file published by the Qubes Project (ISO, RPM, TGZ files and Git repositories) is digitally signed by one of the developer keys or Release Signing Keys.
@ -62,17 +61,21 @@ This Qubes Master Signing Key was generated on and is kept only on a dedicated,
There are several ways to get the Qubes Master Signing Key.
- If you have access to an existing Qubes installation, it's available in every VM ([except dom0]):
$ gpg2 --import /usr/share/qubes/qubes-master-key.asc
- Fetch it with GPG:
$ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
$ gpg2 --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
- Download it as a [file][Qubes Master Signing Key], then import it with GPG:
$ gpg --import ./qubes-master-signing-key.asc
$ gpg2 --import ./qubes-master-signing-key.asc
- Get it from a public [keyserver] (specified on first use with `--keyserver <URI>`, then saved in `~/.gnupg/gpg.conf`), e.g.:
$ gpg --keyserver pool.sks-keyservers.net --recv-keys 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
$ gpg2 --keyserver pool.sks-keyservers.net --recv-keys 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
The Qubes Master Signing Key is also available in the [Qubes Security Pack] and in the archives of the project's [developer][devel-master-key-msg] and [user][user-master-key-msg] [mailing lists].
@ -101,7 +104,7 @@ For additional security, we also publish the fingerprint of the Qubes Master Sig
Once you're confident that you have the legitimate Qubes Master Signing Key, set its trust level to "ultimate" so that it can be used to automatically verify all the keys signed by the Qubes Master Signing Key (in particular, Release Signing Keys).
$ gpg --edit-key 0x36879494
$ gpg2 --edit-key 0x36879494
gpg (GnuPG) 1.4.18; Copyright (C) 2014 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
@ -149,20 +152,24 @@ Now, when you import any of the legitimate Qubes developer keys and Release Sign
The filename of the Release Signing Key for your version is `qubes-release-X-signing-key.asc`, where `X` is the major version number of your Qubes release.
There are several ways to get the Release Signing Key for your Qubes release.
- If you have access to an existing Qubes installation, the release keys are available in dom0 in `/etc/pki/rpm-gpg/`.
These can be [copied][copy-from-dom0] into other VMs for further use.
In addition, every other VM contains the release key corresponding to that installation's release in `/etc/pki/rpm-gpg/`.
- Fetch it with GPG:
$ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-release-X-signing-key.asc
$ gpg2 --keyserver-options no-self-sigs-only,no-import-clean --fetch-keys https://keys.qubes-os.org/keys/qubes-release-X-signing-key.asc
- Download it as a file.
You can find the Release Signing Key for your Qubes version on the [Downloads] page.
You can also download all the currently used developers' signing keys, Release Signing Keys, and the Qubes Master Signing Key from the [Qubes Security Pack] and the [Qubes OS Keyserver].
Once you've downloaded your Release Signing Key, import it with GPG:
$ gpg --import ./qubes-release-X-signing-key.asc
$ gpg2 --keyserver-options no-self-sigs-only,no-import-clean --import ./qubes-release-X-signing-key.asc
The Release Signing Key should be signed by the Qubes Master Signing Key:
$ gpg --list-sigs "Qubes OS Release X Signing Key"
$ gpg2 --list-sigs "Qubes OS Release X Signing Key"
pub rsa4096 2017-03-06 [SC]
5817A43B283DE5A9181A522E1848792F9E2795E9
uid [ full ] Qubes OS Release X Signing Key
@ -182,7 +189,7 @@ The signature filename is always the same as the ISO filename followed by `.asc`
Once you've downloaded both the ISO and its signature file, you can verify the ISO using GPG:
$ gpg -v --verify Qubes-RX-x86_64.iso.asc Qubes-RX-x86_64.iso
$ gpg2 -v --verify Qubes-RX-x86_64.iso.asc Qubes-RX-x86_64.iso
gpg: armor header: Version: GnuPG v1
gpg: Signature made Tue 08 Mar 2016 07:40:56 PM PST using RSA key ID 03FA5082
gpg: using PGP trust model
@ -277,7 +284,7 @@ Since `Qubes-RX-x86_64.iso.DIGESTS` is a clearsigned PGP file, we can use GPG to
2. [Get the Release Signing Key][RSK]
3. Verify the signature in the digest file:
$ gpg -v --verify Qubes-RX-x86_64.iso.DIGESTS
$ gpg2 -v --verify Qubes-RX-x86_64.iso.DIGESTS
gpg: armor header: Hash: SHA256
gpg: armor header: Version: GnuPG v2
gpg: original file name=''
@ -344,10 +351,10 @@ The problem could be one or more of the following:
If you still get the same result, try downloading the ISO again from a different source, then try verifying again.
### I'm getting "bash: gpg: command not found"
### I'm getting "bash: gpg2: command not found"
You don't have `gpg` installed.
Install it, or use `gpg2` instead.
You don't have `gpg2` installed.
Please install it using the method appropriate for your environment (e.g., via your package manager).
### Why am I getting "can't open signed data `Qubes-RX-x86_64.iso' / can't hash datafile: file open error"?
@ -362,7 +369,7 @@ The correct [signature file] is not in your working directory.
### Why am I getting "no valid OpenPGP data found"?
Either you don't have the correct [signature file], or you inverted the arguments to `gpg`.
Either you don't have the correct [signature file], or you inverted the arguments to `gpg2`.
([The signature file goes first.][signature file])
@ -443,9 +450,11 @@ If you still have a question, please address it to the [qubes-users mailing list
[Troubleshooting FAQ]: #troubleshooting-faq
[QMSK]: #1-get-the-qubes-master-signing-key-and-verify-its-authenticity
[RSK]: #2-get-the-release-signing-key
[copy-from-dom0]: /doc/copy-from-dom0/#copying-from-dom0
[signature file]: #3-verify-your-qubes-iso
[digest file]: #how-to-verify-qubes-iso-digests
[Qubes repositories]: https://github.com/QubesOS
[GPG documentation]: https://www.gnupg.org/documentation/
[qubes-users mailing list]: /support/#qubes-users
[except dom0]: https://github.com/QubesOS/qubes-issues/issues/2544

View File

@ -95,6 +95,9 @@ global: {
#secure_paste_sequence = "Ctrl-Shift-v";
#windows_count_limit = 500;
#audio_low_latency = false;
#log_level = 1;
#trayicon_mode = "border1";
#startup_timeout = 91;
};
# most of setting can be set per-VM basis
@ -122,8 +125,22 @@ Currently supported settings:
- `secure_copy_sequence` and `secure_paste_sequence` - key sequences used to trigger secure copy and paste.
- `windows_count_limit` - limit on concurrent windows.
- `audio_low_latency` - force low-latency audio mode (about 40ms compared to 200-500ms by default).
Note that this will cause much higher CPU usage in dom0.
Note that this will cause much higher CPU usage in dom0. It's enabled by
default; disabling it may reduce dom0 CPU usage.
- `trayicon_mode` - defines the trayicon coloring mode. Options are
- `bg` - color full icon background to the VM color
- `border1` - add 1px border at the icon edges
- `border2` - add 1px border 1px from the icon edges
- `tint` - tint icon to the VM color; can be used with additional
modifiers (you can enable multiple of them)
- `tint+border1,tint+border2` - same as tint, but also add a border
- `tint+saturation50` - same as tint, but reduce icon saturation by 50%
- `tint+whitehack` - same as tint, but change white pixels (0xffffff) to
almost-white (0xfefefe)
- `log_level` - defines log verbosity; possible values are 0 (only errors),
1 (some basic messages; the default), and 2 (debug).
- `startup_timeout` - the timeout (in seconds) for VM startup.
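Putting a few of these together, the `global` section of `guid.conf` might look like this (a sketch with illustrative values, mirroring the commented defaults shown above):

```
global: {
  # color the full tray icon background to the VM color
  trayicon_mode = "bg";
  # 0 = only errors, 1 = basic messages, 2 = debug
  log_level = 1;
  # VM startup timeout
  startup_timeout = 91;
};
```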

View File

@ -48,9 +48,9 @@ Additionally you may want to set it as default DisposableVM Template:
[user@dom0 ~]$ qubes-prefs default_dispvm custom-disposablevm-template
The above default is used whenever a qube request starting a new DisposableVM and do not specify which one (for example `qvm-open-in-dvm` tool). This can be also set in qube settings and will affect service calls from that qube. See [qrexec documentation](/doc/qrexec3/#extra-keywords-available-in-qubes-40-and-later) for details.
The above default is used whenever a qube requests starting a new DisposableVM and does not specify which one (for example, the `qvm-open-in-dvm` tool). This can also be set in qube settings and will affect service calls from that qube. See [qrexec documentation](/doc/qrexec/#specifying-vms-tags-types-targets-etc) for details.
If you wish to use the `fedora-minimal` template as a DisposableVM Template, see the "DisposableVM Template" use case under [fedora-minimal customization](/doc/templates/fedora-minimal/#customization).
If you wish to use a [Minimal TemplateVM](/doc/templates/minimal/) as a DisposableVM Template, please see the [Minimal TemplateVM](/doc/templates/minimal/) page.
## Customization of DisposableVM
@ -106,6 +106,8 @@ qvm-prefs <sys-VMName> provides_network true
~~~
Next, set the old `sys-` VM's autostart to false, and update any references to the old one.
In particular, make sure to update `/etc/qubes-rpc/policy/qubes.UpdatesProxy` in dom0.
For example, `qvm-prefs sys-firewall netvm <sys-VMName>`.
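For instance, the updated `qubes.UpdatesProxy` rule might look like this (illustrative; `<sys-VMName>` stands for your replacement VM's name, and the stock rule targets `sys-net`):

```
# /etc/qubes-rpc/policy/qubes.UpdatesProxy in dom0
@type:TemplateVM @default allow,target=<sys-VMName>
```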
See below for a complete example of a `sys-net` replacement:
@ -198,6 +200,7 @@ Using DisposableVMs in this manner is ideal for untrusted qubes which require pe
[user@dom0 ~]$ qubes-prefs clockvm disp-sys-net
9. _(recommended)_ Allow templates to be updated via `disp-sys-net`. In dom0, edit `/etc/qubes-rpc/policy/qubes.UpdatesProxy` to change the target from `sys-net` to `disp-sys-net`.
### Create the sys-firewall DisposableVM

View File

@ -9,7 +9,9 @@ redirect_from:
VM kernel managed by dom0
=========================
By default, VMs kernels are provided by dom0. This means that:
By default, VM kernels are provided by dom0.
(See [here][dom0-kernel-upgrade] for information about upgrading kernels in dom0.)
This means that:
1. You can select the kernel version (using GUI VM Settings tool or `qvm-prefs` commandline tool);
2. You can modify kernel options (using `qvm-prefs` commandline tool);
@ -327,7 +329,10 @@ Booting to a kernel inside the template is not supported under `PVH`.
In case of problems, you can access the VM console using `sudo xl console VMNAME` in dom0, then access the GRUB menu.
You need to call it just after starting the VM (until `GRUB_TIMEOUT` expires); for example, in a separate dom0 terminal window.
In any case you can later access the VM's logs (especially the VM console log `guest-VMNAME.log`).
In any case you can later access the VM's logs (especially the VM console log `/var/log/xen/console/guest-VMNAME.log`).
You can always set the kernel back to some dom0-provided value to fix a VM kernel installation.
[dom0-kernel-upgrade]: /doc/software-update-dom0/#kernel-upgrade

View File

@ -10,8 +10,8 @@ Troubleshooting newer hardware
By default, the kernel that is installed in dom0 comes from the `kernel` package, which is an older Linux LTS kernel.
For most cases this works fine since the Linux kernel developers backport fixes to this kernel, but for some newer hardware, you may run into issues.
For example, the audio might not work if the sound card is too new for the LTS kernel.
To fix this, you can try the `kernel-latest` package - though be aware that it's less tested!
To fix this, you can try the `kernel-latest` package -- though be aware that it's less tested!
(See [here][dom0-kernel-upgrade] for more information about upgrading kernels in dom0.)
In dom0:
~~~
@ -23,3 +23,7 @@ You can double-check that the boot used the newer kernel with `uname -r`, which
Compare this with the output of `rpm -q kernel`.
If the start of `uname -r` matches one of the versions printed by `rpm`, then you're still using the Linux LTS kernel, and you'll probably need to manually fix your boot settings.
If `uname -r` reports a higher version number, then you've successfully booted with the kernel shipped by `kernel-latest`.
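The comparison described above is essentially a substring check; a rough shell sketch (with a hypothetical helper and made-up version strings, not real `rpm` output):

```bash
# Hypothetical helper sketching the check described above: does the running
# kernel version (from `uname -r`) appear among the installed kernel
# packages (from `rpm -q kernel`)? Version strings here are illustrative.
kernel_in_packages() {
    running="$1"; shift
    for pkg in "$@"; do
        case "$pkg" in
            *"$running"*) return 0 ;;
        esac
    done
    return 1
}

if kernel_in_packages "4.19.84-1" "kernel-4.19.84-1.qubes" "kernel-4.19.67-1.qubes"; then
    echo "still using the LTS kernel"
else
    echo "booted a different kernel (possibly from kernel-latest)"
fi
```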
[dom0-kernel-upgrade]: /doc/software-update-dom0/#kernel-upgrade

View File

@ -15,10 +15,10 @@ Here's an example of an RPC policy file in dom0:
```
[user@dom0 user ~]$ cat /etc/qubes-rpc/policy/qubes.FileCopy
(...)
$tag:work $tag:work allow
$tag:work $anyvm deny
$anyvm $tag:work deny
$anyvm $anyvm ask
@tag:work @tag:work allow
@tag:work @anyvm deny
@anyvm @tag:work deny
@anyvm @anyvm ask
```
It has three columns (from left to right): source, destination, and permission.
@ -32,7 +32,7 @@ Now, the whole policy file is parsed from top to bottom.
As soon as a rule is found that matches the action being evaluated, parsing stops.
We can see what this means by looking at the second row.
It says that we're **denied** from attempting to copy a file **from** any VM tagged with "work" **to** any VM whatsoever.
(That's what the `$anyvm` keyword means -- literally any VM in the system).
(That's what the `@anyvm` keyword means -- literally any VM in the system).
But, wait a minute, didn't we just say (in the first row) that all the VMs tagged with work are **allowed** to copy files to each other?
That's exactly right.
The first and second rows contradict each other, but that's intentional.
@ -46,7 +46,7 @@ Rather, it means that only VMs that match an earlier rule can do so (in this cas
The fourth and final row says that we're **asked** (i.e., prompted) to copy files **from** any VM in the system **to** any VM in the system.
(This rule was already in the policy file by default.
We added the first three.)
Note that it wouldn't make sense to add any rules after this one, since every possible pair of VMs will match the `$anyvm $anyvm` pattern.
Note that it wouldn't make sense to add any rules after this one, since every possible pair of VMs will match the `@anyvm @anyvm` pattern.
Therefore, parsing will always stop at this rule, and no rules below it will ever be evaluated.
All together, the three rules we added say that all VMs tagged with "work" are allowed to copy files to each other; however, they're denied from copying files to other VMs (without the "work" tag), and other VMs (without the "work" tag) are denied from copying files to them.
@ -54,5 +54,8 @@ The fourth rule means that the user gets prompted for any situation not already
Further details about how this system works can be found in [Qrexec: command execution in VMs][qrexec3].
(***Note**: the `$` character is deprecated in qrexec keywords -- please use `@` instead (e.g. `@anyvm`).
For more information, see the bulletin [here](https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-038-2018.txt).*)
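For the rules above to match anything, the relevant qubes need the `work` tag. Tags are managed in dom0 with `qvm-tags`; the qube names in this sketch are hypothetical, and the `list` output assumes the tag was added successfully:

```
[user@dom0 ~]$ qvm-tags work-mail add work
[user@dom0 ~]$ qvm-tags work-web add work
[user@dom0 ~]$ qvm-tags work-mail list
work
```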
[qrexec3]: /doc/qrexec3/

View File

@ -14,7 +14,7 @@ If you've installed successfully in legacy mode but had to change some kernel pa
**Change the xen configuration on a USB media**
01. Attach the USB disk and mount the EFI partition (the second partition available on the disk)
02. Edit your xen config (`xen.cfg/BOOTX64.cfg`) changing the `kernel` key to add your kernel parameters on the boot entry of your choice
02. As root, edit your xen config (`EFI/BOOT/BOOTX64.cfg`), changing the `kernel` key to add your kernel parameters to the boot entry of your choice
03. Install using your modified boot entry
**Change xen configuration directly in an iso image**
@ -25,41 +25,99 @@ If you've installed successfully in legacy mode but had to change some kernel pa
05. Save your changes, unmount and dd to usb device
Cannot start installation, installation completes successfully but then BIOS loops at boot device selection, hangs at four penguins after choosing "Test media and install Qubes OS" in GRUB menu
Installation freezes before displaying installer
-----------------------------------------------------------
Some systems can freeze with the default UEFI install options.
You can try the following to remove `noexitboot` and `mapbs`.
If you have an Nvidia card, see also [Nvidia Troubleshooting](/doc/nvidia-troubleshooting/#disabling-nouveau).
1. Follow the [steps here](/doc/uefi-troubleshooting/#change-installer-kernel-parameters-in-uefi) to edit the `[qubes-verbose]` section of your installer's `BOOTX64.cfg`.
You want to comment out the `mapbs` and `noexitboot` lines.
The end result should look like this:
~~~
[qubes-verbose]
options=console=vga efi=attr=uc
# noexitboot=1
# mapbs=1
kernel=vmlinuz inst.stage2=hd:LABEL=Qubes-R4.0-x86_64 i915.alpha_support=1
ramdisk=initrd.img
~~~
2. Boot the installer and continue to install as normal, but don't reboot the system at the end when prompted.
3. Go to `tty2` (Ctrl-Alt-F2).
4. Use your preferred text editor (`nano` works) to edit `/mnt/sysimage/boot/efi/EFI/qubes/xen.cfg`, verifying the `noexitboot` and `mapbs` lines are not present.
This is also a good time to make permanent any other changes needed to get the installer to work, such as `nouveau.modeset=0`.
For example:
~~~
[4.14.18-1.pvops.qubes.x86_64]
options=loglvl=all dom0_mem=min:1024M dom0_mem=max:4096M iommu=no-igfx ucode=scan efi=attr=uc
~~~
5. Go back to `tty6` (Ctrl-Alt-F6) and click `Reboot`.
6. Continue with setting up default templates and logging in to Qubes.
Installation freezes before displaying installer / disable EFI runtime services
------------------------------------------------------------------------------
On some early, buggy UEFI implementations, you may need to disable EFI under Qubes completely.
This can sometimes be done by switching to legacy mode in your BIOS/UEFI configuration.
If that's not an option there, or legacy mode does not work either, you can try the following to add `efi=no-rs`.
Consider this approach as a last resort, because it will make every Xen update a manual process.
1. Follow the [steps here](/doc/uefi-troubleshooting/#change-installer-kernel-parameters-in-uefi) to edit the `[qubes-verbose]` section of your installer's `xen.cfg`.
You want to modify the `efi=attr=uc` setting and comment out the `mapbs` and `noexitboot` lines.
The end result should look like this:
~~~
[qubes-verbose]
options=console=vga efi=no-rs
# noexitboot=1
# mapbs=1
kernel=vmlinuz inst.stage2=hd:LABEL=Qubes-R4.0-x86_64 i915.alpha_support=1
ramdisk=initrd.img
~~~
2. Boot the installer and continue to install as normal, until towards the end when you will receive a warning about being unable to create the EFI boot entry.
Click continue, but don't reboot the system at the end when prompted.
3. Go to `tty2` (Ctrl-Alt-F2).
4. Use your preferred text editor (`nano` works) to edit `/mnt/sysimage/boot/efi/EFI/qubes/xen.cfg`, adding the `efi=no-rs` option to the end of the `options=` line.
For example:
~~~
[4.14.18-1.pvops.qubes.x86_64]
options=loglvl=all dom0_mem=min:1024M dom0_mem=max:4096M iommu=no-igfx ucode=scan efi=no-rs
~~~
5. Execute the following commands:
~~~
cp -R /mnt/sysimage/boot/efi/EFI/qubes /mnt/sysimage/boot/efi/EFI/BOOT
mv /mnt/sysimage/boot/efi/EFI/BOOT/xen-*.efi /mnt/sysimage/boot/efi/EFI/BOOT/BOOTX64.efi
mv /mnt/sysimage/boot/efi/EFI/BOOT/xen.cfg /mnt/sysimage/boot/efi/EFI/BOOT/BOOTX64.cfg
~~~
6. Go back to `tty6` (Ctrl-Alt-F6) and click `Reboot`.
7. Continue with setting up default templates and logging in to Qubes.
Whenever there is a kernel or Xen update for Qubes, you will need to follow [these steps](/doc/uefi-troubleshooting/#boot-device-not-recognized-after-installing) because your system is using the fallback UEFI bootloader in `[...]/EFI/BOOT` instead of directly booting to the Qubes entry under `[...]/EFI/qubes`.
Installation completes successfully but then boot loops or hangs on black screen
---------------------
There is some [common bug in UEFI implementation](http://xen.markmail.org/message/f6lx2ab4o2fch35r), affecting mostly Lenovo systems, but probably some others too. You can try existing workaround:
There is a [common bug in UEFI implementation](http://xen.markmail.org/message/f6lx2ab4o2fch35r) affecting mostly Lenovo systems, but probably some others too.
While some systems need `mapbs` and/or `noexitboot` disabled to boot, others require them enabled at all times.
Although these are enabled by default in the installer, they are disabled after the first stage of a successful install.
You can re-enable them either as part of the install process:
01. In GRUB menu<sup id="a1-1">[1](#f1)</sup>, select "Troubleshoot", then "Boot from device", then press `e`.
02. At the end of `chainloader` line add `/mapbs /noexitboot`.
03. Perform installation normally, but don't reboot the system at the end yet.
04. Go to `tty2` (Ctrl-Alt-F2).
05. Enable `/mapbs /noexitboot` on just installed system. This step differs between Qubes releases:
**For Qubes 3.1:**
06. Execute `mount | grep boot/efi` and note device name (first column). It should be something like `/dev/sda1`.
07. Execute `efibootmgr -v`, search for `Qubes` entry and note its number (it should be something like `Boot0001` - `0001` is an entry number).
08. Replace existing `Qubes` entry with modified one. Replace `XXXX` with entry number from previous step, `/dev/sda` with your disk name and `-p 1` with `/boot/efi` partition number):
efibootmgr -b XXXX -B
efibootmgr -v -c -u -L Qubes -l /EFI/qubes/xen.efi -d /dev/sda -p 1 "placeholder /mapbs /noexitboot"
09. Compare new entry with the old one (printed in step 6) - it should only differ in additional options at the end, and look probably something like this:
Boot0001* Qubes HD(1,GPT,partition-guid-here,0x800,0x64000)/File(\EFI\qubes\xen.efi)p.l.a.c.e.h.o.l.d.e.r. ./.m.a.p.b.s. ./.n.o.e.x.i.t.b.o.o.t.
If instead it looks like:
Boot0001* Qubes HD(1,0,00000000...0,0x0,0x0)/File(\EFI\qubes\xen.efi)p.l.a.c.e.h.o.l.d.e.r. ./.m.a.p.b.s. ./.n.o.e.x.i.t.b.o.o.t.
then try passing `/dev/sda1` or `/dev/nvme0n1p1` or whatever your EFI partition is instead of `/dev/sda` and `-p 1`.
10. Now you can reboot the system by issuing `reboot` command.
**For Qubes 3.2 or later:**
11. Edit `/mnt/sysimage/boot/efi/EFI/qubes/xen.cfg` (you can use `vi` editor) and add to every kernel section:
1. Perform installation normally, but don't reboot the system at the end yet.
2. Go to `tty2` (Ctrl-Alt-F2).
3. Enable `mapbs` and/or `noexitboot` on the just installed system.
Edit `/mnt/sysimage/boot/efi/EFI/qubes/xen.cfg` (you can use `vi` or `nano` editor) and add to every kernel section:
mapbs=1
noexitboot=1
@ -69,43 +127,63 @@ There is some [common bug in UEFI implementation](http://xen.markmail.org/messag
line (i.e., all sections except the first one, since it doesn't have a
kernel line).
12. Now you can reboot the system by issuing `reboot` command.
4. Go back to `tty6` (Ctrl-Alt-F6) and click `Reboot`.
5. Continue with setting up default templates and logging in to Qubes.
Or if you have already rebooted after the first stage install and have encountered this issue, by:
1. Boot into [rescue mode](/doc/uefi-troubleshooting/#accessing-installer-rescue-mode-on-uefi).
2. Enable `mapbs` and/or `noexitboot` on the just installed system.
Edit `/mnt/sysimage/boot/efi/EFI/qubes/xen.cfg` (you can use `vi` or `nano` editor) and add to every kernel section:
mapbs=1
noexitboot=1
**Note:** You must add these parameters on two separate new lines (one
parameter on each line) at the end of each section that includes a kernel
line (i.e., all sections except the first one, since it doesn't have a
kernel line).
3. Type `reboot`.
4. Continue with setting up default templates and logging in to Qubes.
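For reference, a kernel section of `xen.cfg` after this edit might look like the following; the version string and option values are illustrative, and only the last two lines are the additions:

~~~
[4.14.18-1.pvops.qubes.x86_64]
options=loglvl=all dom0_mem=min:1024M dom0_mem=max:4096M
kernel=vmlinuz root=/dev/mapper/qubes_dom0-root
ramdisk=initrd.img
mapbs=1
noexitboot=1
~~~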
System crash/restart when booting installer
Installation completes successfully but then system crash/restarts on next boot
-------------------------------------------
Some Dell systems and probably others have [another bug in UEFI firmware](http://markmail.org/message/amw5336otwhdxi76). And there is another workaround for it:
Some Dell systems and probably others have [another bug in UEFI firmware](http://markmail.org/message/amw5336otwhdxi76).
These systems need `efi=attr=uc` enabled at all times.
Although this is enabled by default in the installer, it is disabled after the first stage of a successful install.
You can re-enable it either as part of the install process:
1. Perform installation normally, but don't reboot the system at the end yet.
2. Go to `tty2` (Ctrl-Alt-F2).
3. Execute:
1. In GRUB menu<sup id="a1-2">[1](#f1)</sup> press `e`.
2. At the end of `chainloader` line add `-- efi=attr=uc`.
3. Perform installation normally, but don't reboot the system at the end yet.
4. Go to `tty2` (Ctrl-Alt-F2).
5. Execute:
sed -i -e 's/^options=.*/\0 efi=attr=uc/' /mnt/sysimage/boot/efi/qubes/xen.cfg
or if you're installing 3.2 execute:
sed -i -e 's/^options=.*/\0 efi=attr=uc/' /mnt/sysimage/boot/efi/EFI/qubes/xen.cfg
6. Now you can reboot the system by issuing `reboot` command.
4. Go back to `tty6` (Ctrl-Alt-F6) and click `Reboot`.
5. Continue with setting up default templates and logging in to Qubes.
* * *
<b name="f1">1</b> If you use rEFInd, you can see 3 options regarding the USB installer. Choose "Fallback Boot Loader" to enter the GRUB menu. [](#a1-1) [](#a1-2)
Or if you have already rebooted after the first stage install and have encountered this issue, by:
1. Boot into [rescue mode](/doc/uefi-troubleshooting/#accessing-installer-rescue-mode-on-uefi).
2. Execute:
sed -i -e 's/^options=.*/\0 efi=attr=uc/' /mnt/sysimage/boot/efi/EFI/qubes/xen.cfg
3. Type `reboot`.
4. Continue with setting up default templates and logging in to Qubes.
Boot device not recognized after installing
------------------------------------------
Some firmware will not recognize the default Qubes EFI configuration. As such,
it will have to be manually edited to be bootable. This will need to be done after
every kernel and Xen update to ensure you use the most recently installed versions.
Some firmware will not recognize the default Qubes EFI configuration.
As such, it will have to be manually edited to be bootable.
This will need to be done after every kernel and Xen update to ensure you use the most recently installed versions.
1. Copy the `/boot/efi/EFI/qubes/` directory to `/boot/efi/EFI/BOOT/`
(the contents of `/boot/efi/EFI/BOOT` should be identical to `/boot/efi/EFI/qubes`
besides what is described in steps 2 and 3):
1. Copy the `/boot/efi/EFI/qubes/` directory to `/boot/efi/EFI/BOOT/` (the contents of `/boot/efi/EFI/BOOT` should be identical to `/boot/efi/EFI/qubes` besides what is described in steps 2 and 3):
cp -r /boot/efi/EFI/qubes/. /boot/efi/EFI/BOOT
@ -113,20 +191,19 @@ besides what is described in steps 2 and 3):
mv /boot/efi/EFI/BOOT/xen.cfg /boot/efi/EFI/BOOT/BOOTX64.cfg
3. Copy `/boot/efi/EFI/qubes/xen-*.efi` to `/boot/efi/EFI/qubes/xen.efi`
and `/boot/efi/EFI/BOOT/BOOTX64.efi`. For example with Xen 4.8.3
(you may need to confirm file overwrite):
3. Copy `/boot/efi/EFI/qubes/xen-*.efi` to `/boot/efi/EFI/qubes/xen.efi` and `/boot/efi/EFI/BOOT/BOOTX64.efi`.
For example, with Xen 4.8.3 (you may need to confirm file overwrite):
cp /boot/efi/EFI/qubes/xen-4.8.3.efi /boot/efi/EFI/qubes/xen.efi
cp /boot/efi/EFI/qubes/xen-4.8.3.efi /boot/efi/EFI/BOOT/BOOTX64.efi
Installation finished but "Qubes" boot option is missing and xen.cfg is empty
--------------------------------------------------------------------------------------
In some cases installer fails to finish EFI setup and leave the system without
Qubes-specific EFI configuration. In such a case you need to finish those parts
manually. You can do that just after installation (switch to `tty2` with
Ctrl-Alt-F2), or booting from installation media in "Rescue a Qubes system" mode.
In some cases, the installer fails to finish the EFI setup and leaves the system without a Qubes-specific EFI configuration.
In such a case, you need to finish those parts manually.
You can do that just after installation (switch to `tty2` with Ctrl-Alt-F2), or by booting from the installation media in [rescue mode](/doc/uefi-troubleshooting/#accessing-installer-rescue-mode-on-uefi).
1. Examine `/boot/efi/EFI/qubes` (if using Qubes installation media, it's in `/mnt/sysimage/boot/efi/EFI/qubes`). You should see 4 files there:
@ -143,7 +220,6 @@ Ctrl-Alt-F2), or booting from installation media in "Rescue a Qubes system" mode
3. Create `xen.cfg` with this content (adjust the kernel version and filesystem locations; the values below are based on a default installation of Qubes 3.2):
[global]
default=4.4.14-11.pvops.qubes.x86_64
@ -157,79 +233,11 @@ Ctrl-Alt-F2), or booting from installation media in "Rescue a Qubes system" mode
efibootmgr -v -c -u -L Qubes -l /EFI/qubes/xen.efi -d /dev/sda -p 1 "placeholder /mapbs /noexitboot"
Installation freezes before getting to Anaconda (Qubes 4.0)
-----------------------------------------------------------
Some systems can freeze with the default UEFI install options.
You can try the following to remove `noexitboot` and `mapbs`.
If you have an Nvidia card, see also [Nvidia Troubleshooting](/doc/nvidia-troubleshooting/#disabling-nouveau).
1. Follow the [steps above](/doc/uefi-troubleshooting/#change-installer-kernel-parameters-in-uefi) to edit the `[qubes-verbose]` section of your installer's `xen.cfg`.
You want to comment out the `mapbs` and `noexitboot` lines.
The end result should look like this:
~~~
[qubes-verbose]
options=console=vga efi=attr=uc
# noexitboot=1
# mapbs=1
kernel=vmlinuz inst.stage2=hd:LABEL=Qubes-R4.0-x86_64 i915.alpha_support=1
ramdisk=initrd.img
~~~
2. Boot the installer and continue to install as normal, but don't reboot the system at the end when prompted.
3. Go to `tty2` (Ctrl-Alt-F2).
4. Use your preferred text editor (`nano` works) to edit `/mnt/sysimage/boot/efi/EFI/qubes/xen.cfg`, verifying the `noexitboot` and `mapbs` lines are not present.
This is also a good time to make permanent any other changes needed to get the installer to work, such as `nouveau.modeset=0`.
For example:
~~~
[4.14.18-1.pvops.qubes.x86_64]
options=loglvl=all dom0_mem=min:1024M dom0_mem=max:4096M iommu=no-igfx ucode=scan efi=attr=uc
~~~
5. Go back to `tty6` (Ctrl-Alt-F6) and click `Reboot`.
6. Continue with setting up default templates and logging in to Qubes.
Installation freezes before getting to Anaconda / disable EFI runtime services
------------------------------------------------------------------------------
On some early, buggy UEFI implementations, you may need to disable EFI under Qubes completely.
This can sometimes be done by switching to legacy mode in your BIOS/UEFI configuration.
If that's not an option there, or legacy mode does not work either, you can try the following to add `efi=no-rs`.
1. Follow the [steps above](/doc/uefi-troubleshooting/#change-installer-kernel-parameters-in-uefi) to edit the `[qubes-verbose]` section of your installer's `xen.cfg`.
You want to modify the `efi=attr=uc` setting and comment out the `mapbs` and `noexitboot` lines.
The end result should look like this:
~~~
[qubes-verbose]
options=console=vga efi=no-rs
# noexitboot=1
# mapbs=1
kernel=vmlinuz inst.stage2=hd:LABEL=Qubes-R4.0-x86_64 i915.alpha_support=1
ramdisk=initrd.img
~~~
2. Boot the installer and continue to install as normal, until towards the end when you will receive a warning about being unable to create the EFI boot entry.
Click continue, but don't reboot the system at the end when prompted.
3. Go to `tty2` (Ctrl-Alt-F2).
4. Use your preferred text editor (`nano` works) to edit `/mnt/sysimage/boot/efi/EFI/qubes/xen.cfg`, adding the `efi=no-rs` option to the end of the `options=` line.
For example:
~~~
[4.14.18-1.pvops.qubes.x86_64]
options=loglvl=all dom0_mem=min:1024M dom0_mem=max:4096M iommu=no-igfx ucode=scan efi=no-rs
~~~
5. Execute the following commands:
~~~
cp -R /mnt/sysimage/boot/efi/EFI/qubes /mnt/sysimage/boot/efi/EFI/BOOT
mv /mnt/sysimage/boot/efi/EFI/BOOT/xen.efi /mnt/sysimage/boot/efi/EFI/BOOT/BOOTX64.efi
mv /mnt/sysimage/boot/efi/EFI/BOOT/xen.cfg /mnt/sysimage/boot/efi/EFI/BOOT/BOOTX64.cfg
~~~
6. Go back to `tty6` (Ctrl-Alt-F6) and click `Reboot`.
7. Continue with setting up default templates and logging in to Qubes.
Whenever there is a kernel or Xen update for Qubes, you will need to follow these [other steps above](/doc/uefi-troubleshooting/#boot-device-not-recognized-after-installing) because your system is using the fallback UEFI bootloader in `[...]/EFI/BOOT` instead of directly booting to the Qubes entry under `[...]/EFI/qubes`.
Accessing installer Rescue mode on UEFI
---------------------------------------
In UEFI mode installer do not have boot menu, but starts directly the installation wizard. To get into Rescue mode, you need to switch to tty2 (Ctrl+Alt+F2) and then execute:
In UEFI mode, the installer does not have a boot menu, but boots directly into the installation wizard.
To get into Rescue mode, you need to switch to tty2 (Ctrl+Alt+F2) and then execute:
~~~
pkill -9 anaconda

View File

@ -26,7 +26,9 @@ Qubes OS supports the ability to attach a USB drive (or just its partitions) to
Attaching USB drives is integrated into the Devices Widget: ![device manager icon]
Simply insert your USB drive and click on the widget.
You will see multiple entries for your USB drive; typically, `sys-usb:sda`, `sys-usb:sda1`, and `sys-usb:2-1` for example.
Entries starting with a number (e.g. here `2-1`) are the [whole usb-device][USB]. Entries without a number (e.g. here `sda`) are the whole block-device. Other entries are partitions of that block-device (e.r. here `sda1`).
Entries starting with a number (e.g. here `2-1`) are the [whole usb-device][USB].
Entries without a number (e.g. here `sda`) are the whole block-device.
Other entries are partitions of that block-device (e.g. here `sda1`).
The simplest option is to attach the entire block drive.
In our example, this is `sys-usb:sda`, so hover over it.
@ -40,7 +42,8 @@ See below for more detailed steps.
## Block Devices in VMs ##
If not specified otherwise, block devices will show up as `/dev/xvdi*` in a linux VM, where `*` may be the partition-number. If a block device isn't automatically mounted after attaching, open a terminal in the VM and execute:
If not specified otherwise, block devices will show up as `/dev/xvdi*` in a Linux VM, where `*` may be the partition number.
If a block device isn't automatically mounted after attaching, open a terminal in the VM and execute:
cd ~
mkdir mnt
@ -60,9 +63,11 @@ To specify this device node name, you need to use the command line tool and its
The command-line tool you may use to mount whole USB drives or their partitions is `qvm-block`, a shortcut for `qvm-device block`.
`qvm-block` won't recognise your device by any given name, but rather the device-node the sourceVM assigns. So make sure you have the drive available in the sourceVM, then list the available block devices (step 1.) to find the corresponding device-node.
`qvm-block` won't recognise your device by any given name, but rather the device-node the sourceVM assigns.
So make sure you have the drive available in the sourceVM, then list the available block devices (step 1.) to find the corresponding device-node.
In case of a USB-drive, make sure it's attached to your computer. If you don't see anything that looks like your drive, run `sudo udevadm trigger --action=change` in your USB-qube (typically `sys-usb`)
In case of a USB-drive, make sure it's attached to your computer.
If you don't see anything that looks like your drive, run `sudo udevadm trigger --action=change` in your USB-qube (typically `sys-usb`).
1. In a dom0 console (running as a normal user), list all available block devices:
@ -154,13 +159,16 @@ To attach a file as block device to another qube, first turn it into a loopback
sudo losetup -f --show /path/to/file
[This command][losetup] will create the device node `/dev/loop0` or, if that is already in use, increase the trailing integer until that name is still available. Afterwards it prints the device-node-name it found.
[This command][losetup] will create the device node `/dev/loop0` or, if that name is already in use, increment the trailing integer until an available name is found.
Afterwards, it prints the device node name it used.
2. If you want to use the GUI, you're done. Click the Device Manager ![device manager icon] and select the `loop0`-device to attach it to another qube.
2. If you want to use the GUI, you're done.
Click the Device Manager ![device manager icon] and select the `loop0`-device to attach it to another qube.
If you rather use the command line, continue:
In dom0, run `qvm-block` to display known block devices. The newly created loop device should show up:
In dom0, run `qvm-block` to display known block devices.
The newly created loop device should show up:
~]$ qvm-block
BACKEND:DEVID DESCRIPTION USED BY
@ -177,12 +185,15 @@ To attach a file as block device to another qube, first turn it into a loopback
## Additional Attach Options ##
Attaching a block device through the command line offers additional customisation options, specifiable via the `--option`/`-o` option. (Yes, confusing wording, there's an [issue for that](https://github.com/QubesOS/qubes-issues/issues/4530).)
Attaching a block device through the command line offers additional customisation options, specifiable via the `--option`/`-o` option.
(Yes, confusing wording, there's an [issue for that](https://github.com/QubesOS/qubes-issues/issues/4530).)
### frontend-dev ###
This option allows you to specify the name of the device node made available in the targetVM. This defaults to `xvdi` or, if already occupied, the first available device node name in alphabetical order. (The next one tried will be `xvdj`, then `xvdk`, and so on ...)
This option allows you to specify the name of the device node made available in the targetVM.
This defaults to `xvdi` or, if already occupied, the first available device node name in alphabetical order.
(The next one tried will be `xvdj`, then `xvdk`, and so on ...)
usage example:
@ -193,7 +204,8 @@ This command will attach the partition `sda1` to `work` as `/dev/xvdz`.
### read-only ###
Attach device in read-only mode. Protects the block device in case you don't trust the targetVM.
Attach device in read-only mode.
Protects the block device in case you don't trust the targetVM.
If the device is a read-only device, this option is forced true.
@ -210,7 +222,8 @@ The two commands are equivalent.
### devtype ###
Usually, a block device is attached as disk. In case you need to attach a block device as cdrom, this option allows that.
Usually, a block device is attached as a disk.
If you need to attach a block device as a CD-ROM instead, this option allows that.
usage example:

View File

@ -23,7 +23,7 @@ qvm-copy-to-vm <dest-vm> <file>
The file will arrive in your destination VM in the `~/QubesIncoming/dom0/` directory.
### Copying logs from Dom0 ###
### Copying logs from dom0 ###
In order to easily copy/paste the contents of logs from dom0 to the inter-VM clipboard, you can simply:
@ -34,7 +34,7 @@ In order to easily copy/paste the contents of logs from dom0 to the inter-VM cli
You may now paste the log contents to any VM as you normally would (i.e., Ctrl-Shift-V, then Ctrl-V).
### Copy/paste from Dom0 ###
### Copy/paste from dom0 ###
For data other than logs, there are several options:
@ -49,23 +49,25 @@ For data other than logs, there are several options:
3. Write the data you wish to copy into `/var/run/qubes/qubes-clipboard.bin`, then `echo -n dom0 > /var/run/qubes/qubes-clipboard.bin.source`.
Then use Ctrl-Shift-V to paste the data to the desired VM.
Copying **to** Dom0
Copying **to** dom0
-------------------
There should normally be few reasons for the user to want to copy files from VMs to Dom0, as Dom0 only acts as a "thin trusted terminal", and no user applications run there.
Copying untrusted files to Dom0 is not advised and may compromise the security of your Qubes system.
Because of this, we do not provide a graphical user interface for it, unlike [copying files between VMs](/doc/copying-files/).
Copying anything into dom0 is not advised, since doing so can compromise the security of your Qubes system.
For this reason, there is no simple means of copying anything into dom0, unlike [copying from dom0](#copying-from-dom0) and [copying files between VMs](/doc/copying-files/).
One common use-case for this is if we want to use a desktop wallpaper in Dom0 we have located in one of our AppVMs (e.g. in the 'personal' AppVM where we got the wallpaper from our camera or downloaded it from the Internet).
While it's a well-justified reason, imagine what would happen if the wallpaper (e.g. a JPEG file) was somehow malformed or malicious and attempted to exploit a hypothetical JPEG parser bug in Dom0 code (e.g. in the Dom0's Xorg/KDE code that parses the wallpaper and displays it).
There should normally be few reasons for the user to want to copy anything from VMs to dom0, as dom0 only acts as a "thin trusted terminal", and no user applications run there.
One possible use case for this is if we want to use a desktop wallpaper in dom0 that we have located in one of our AppVMs (e.g. in the 'personal' AppVM, where we got the wallpaper from our camera or downloaded it from the Internet).
While this use-case is understandable, imagine what would happen if the wallpaper (e.g. a JPEG file) was somehow malformed or malicious and attempted to exploit a hypothetical JPEG parser bug in dom0 code (e.g. in the dom0's Xorg/KDE code that parses the wallpaper and displays it).
If you are determined to copy some files to Dom0 anyway, you can use the following method (run this command from Dom0's console):
If you are determined to copy some files to dom0 anyway, you can use the following method.
(If you want to copy text, first save it into a text file.)
Run this command in a dom0 terminal:
~~~
qvm-run --pass-io <src-vm> 'cat /path/to/file_in_src_domain' > /path/to/file_name_in_dom0
~~~
You can use the same method to copy files from Dom0 to VMs (if, for some reason, you don't want to use `qvm-copy-to-vm`):
Note that you can use the same method to copy files from dom0 to VMs (if, for some reason, you don't want to use `qvm-copy-to-vm`):
~~~
cat /path/to/file_in_dom0 | qvm-run --pass-io <dest-vm> 'cat > /path/to/file_name_in_appvm'

View File

@ -11,23 +11,34 @@ redirect_from:
Copy and Paste between domains
==============================
Qubes fully supports secure copy and paste operation between domains. In order to copy a clipboard from domain A to domain B, follow those steps:
Qubes fully supports secure copy and paste operations between domains.
To copy the clipboard from domain A to domain B, follow these steps:
1. Click on the application window in domain A where you have selected text for copying. Then use the *app-specific* hot-key (or menu option) to copy this into domain's local clipboard (in other words: do the copy operation as usual, in most cases by pressing Ctrl-C).
2. Then (when the app in domain A is still in focus) press Ctrl-Shift-C magic hot-key. This will tell Qubes that we want to select this domain's clipboard for *global copy* between domains.
3. Now select the destination app, running in domain B, and press Ctrl-Shift-V, another magic hot-key that will tell Qubes to make the clipboard marked in the previous step available to apps running in domain B. This step is necessary because it ensures that only domain B will get access to the clipboard copied from domain A, and not any other domain that might be running in the system.
1. Click on the application window in domain A where you have selected text for copying.
Then use the *app-specific* hot-key (or menu option) to copy this into the domain's local clipboard (in other words: do the copy operation as usual, in most cases by pressing Ctrl-C).
2. Then (while the app in domain A is still in focus), press the magic hot-key Ctrl-Shift-C.
This will tell Qubes that we want to select this domain's clipboard for *global copy* between domains.
3. Now select the destination app, running in domain B, and press Ctrl-Shift-V, another magic hot-key that will tell Qubes to make the clipboard marked in the previous step available to apps running in domain B.
This step is necessary because it ensures that only domain B will get access to the clipboard copied from domain A, and not any other domain that might be running in the system.
4. Now, in the destination app use the app-specific key combination (usually Ctrl-V) for pasting the clipboard.
Note that the global clipboard will be cleared after step \#3, to prevent accidental leakage to another domain if the user later presses Ctrl-Shift-V again.
This 4-step process might look complex, but after some little practice it really is very easy and fast. At the same time it provides the user with full control over who has access to the clipboard.
This 4-step process might look complex, but after a little practice it really is very easy and fast.
At the same time it provides the user with full control over who has access to the clipboard.
Note that only simple plain text copy/paste is supported between AppVMs. This is discussed in a bit more detail in [this message](https://groups.google.com/group/qubes-devel/msg/57fe6695eb8ec8cd).
Note that only simple plain text copy/paste is supported between AppVMs.
This is discussed in a bit more detail in [this message](https://groups.google.com/group/qubes-devel/msg/57fe6695eb8ec8cd).
On Copy/Paste Security
----------------------
The scheme is *secure* because it doesn't allow other VMs to steal the content of the clipboard.
However, one should keep in mind that performing a copy and paste operation from a *less trusted* to a *more trusted* domain can always be potentially insecure, because the data that we insert might try to exploit some hypothetical bug in the destination VM (e.g. the seemingly innocent link that we copy from an untrusted domain might turn out to be, in fact, a large buffer of junk that, when pasted into the destination VM's word processor, could exploit a hypothetical bug in the undo buffer).
This is a general problem and applies to any data transfer between *less trusted to more trusted* domains.
It even applies to copying files between physically separate machines (air-gapped) systems.
So, you should always copy clipboard and data only from *more trusted* to *less trusted* domains.
See also [this article](https://blog.invisiblethings.org/2011/03/13/partitioning-my-digital-life-into.html) for more information on this topic, and some ideas of how we might solve this problem in some future version of Qubes.
@@ -47,11 +58,12 @@ The Qubes clipboard [RPC policy] is configurable in:
/etc/qubes-rpc/policy/qubes.ClipboardPaste
~~~
You may wish to configure this policy in order to prevent user error.
For example, if you are certain that you never wish to paste *into* your "vault" AppVM (and it is highly recommended that you do not), then you should edit the policy as follows:
~~~
@anyvm vault deny
@anyvm @anyvm ask
~~~
Shortcut Configuration
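The copy and paste key sequences themselves can be changed in dom0. As a sketch (this assumes the `secure_copy_sequence` and `secure_paste_sequence` options in `/etc/qubes/guid.conf`; verify the option names on your Qubes version), the defaults would look like:

~~~
global: {
  # Key sequences that trigger the inter-domain (global) copy and paste
  secure_copy_sequence = "Ctrl-Shift-c";
  secure_paste_sequence = "Ctrl-Shift-v";
};
~~~

Restart the affected VMs after editing for the change to take effect.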

View File

@@ -23,7 +23,8 @@ GUI
2. A dialog box will appear asking for the name of the destination qube (qube B).
3. A confirmation dialog box will appear (this will be displayed by dom0, so none of the qubes can fake your consent).
After you click OK, qube B will be started if it is not already running, the file copy operation will start, and the files will be copied into the following folder in qube B:
`/home/user/QubesIncoming/<source>`
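The same operation can also be performed from the command line inside the source qube. A minimal sketch (the qube names and file name here are placeholders):

~~~
[user@personal ~]$ qvm-copy-to-vm work ~/Documents/report.pdf
~~~

As with the GUI method, dom0 displays the confirmation dialog, and the file arrives in `/home/user/QubesIncoming/personal/` in the `work` qube.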
@@ -45,11 +46,14 @@ qvm-move [--without-progress] file [file]+
On inter-qube file copy security
----------------------------------
The scheme is *secure* because it doesn't allow other qubes to steal the files that are being copied, and also doesn't allow the source qube to overwrite arbitrary files on the destination qube.
Also, Qubes's file copy scheme doesn't use any sort of virtual block devices for file copy -- instead we use Xen shared memory, which eliminates lots of processing of untrusted data.
For example, the receiving qube is *not* forced to parse untrusted partitions or file systems.
In this respect our file copy mechanism provides even more security than file copy between two physically separated (air-gapped) machines!
However, one should keep in mind that performing a data transfer from *less trusted* to *more trusted* qubes can always be potentially insecure, because the data that we insert might potentially try to exploit some hypothetical bug in the destination qube (e.g. a seemingly innocent JPEG that we copy from an untrusted qube might contain a specially crafted exploit for a bug in JPEG parsing application in the destination qube).
This is a general problem and applies to any data transfer between *less trusted to more trusted* qubes.
It even applies to the scenario of copying files between air-gapped machines.
So, you should always copy data only from *more trusted* to *less trusted* qubes.
See also [this article](https://blog.invisiblethings.org/2011/03/13/partitioning-my-digital-life-into.html) for more information on this topic, and some ideas of how we might solve this problem in some future version of Qubes.
You may also want to read how to [revoke "Yes to All" authorization](/doc/qrexec3/#revoking-yes-to-all-authorization).

View File

@@ -11,14 +11,18 @@ redirect_from:
# Device Handling #
This is an overview of device handling in Qubes OS.
For specific devices ([block], [USB] and [PCI] devices), please visit their respective pages.
**Important security warning:** Device handling comes with many security implications.
Please make sure you carefully read and understand the **[security considerations]**.
## Introduction ##
The interface to deal with devices of all sorts was unified in Qubes 4.0 with the `qvm-device` command and the Qubes Devices Widget.
In Qubes 3.X, the Qubes VM Manager dealt with attachment as well.
This functionality was moved to the Qubes Device Widget, the tool tray icon with a yellow square located in the top right of your screen by default.
There are currently four categories of devices Qubes understands:
- Microphones
@@ -26,31 +30,41 @@ There are currently four categories of devices Qubes understands:
- USB devices
- PCI devices
Microphones, block devices and USB devices can be attached with the GUI-tool.
PCI devices can be attached using the Qube Settings, but require a VM reboot.
## General Qubes Device Widget Behavior And Handling ##
When clicking on the tray icon (which looks similar to this): ![SD card and thumbdrive][device manager icon] several device-classes separated by lines are displayed as tooltip.
Block devices are displayed on top, microphones one below and USB-devices at the bottom.
On most laptops, integrated hardware such as cameras and fingerprint-readers are implemented as USB-devices and can be found here.
### Attaching Using The Widget ###
Click the tray icon.
Hover on a device you want to attach to a VM.
A list of running VMs (except dom0) appears.
Click on one and your device will be attached!
### Detaching Using The Widget ###
To detach a device, click the Qubes Devices Widget icon again.
Attached devices are displayed in bold.
Hover the one you want to detach.
A list of VMs appears, one showing the eject symbol: ![eject icon]
### Attaching a Device to Several VMs ###
Only `mic` should be attached to more than one running VM.
You may *assign* a device to more than one VM (using the [`--persistent`](#attaching-devices) option); however, only one of them can be started at the same time.
But be careful: There is a [bug in `qvm-device block` or `qvm-block`][i4692] which will allow you to *attach* a block device to two running VMs.
Don't do that!
## General `qvm-device` Command Line Tool Behavior ##
@@ -60,7 +74,8 @@ All devices, including PCI-devices, may be attached from the commandline using t
### Device Classes ###
`qvm-device` expects DEVICE_CLASS as first argument.
DEVICE_CLASS can be one of
- `pci`
- `usb`
@@ -85,7 +100,9 @@ These three options are always available:
- `--verbose`, `-v` - increase verbosity
- `--quiet`, `-q` - decrease verbosity
A full command consists of one DEVICE_CLASS and one action.
If no action is given, list is implied.
DEVICE_CLASS, however, is required.
**SYNOPSIS**:
`qvm-device DEVICE_CLASS {action} [action-specific arguments] [options]`
@@ -98,12 +115,16 @@ Actions are applicable to every DEVICE_CLASS and expose some additional options.
### Listing Devices ###
The `list` action lists known devices in the system.
`list` accepts VM-names to narrow down listed devices.
Devices available in, as well as attached to the named VMs will be listed.
`list` accepts two options:
- `--all` - equivalent to specifying every VM name after `list`.
No VM-name implies `--all`.
- `--exclude` - exclude VMs from `--all`.
Requires `--all`.
**SYNOPSIS**
`qvm-device DEVICE_CLASS {list|ls|l} [--all [--exclude VM [VM [...]]] | VM [VM [...]]]`
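For example, listing the block devices exposed by `sys-usb` might look like this (the device ID and description are illustrative, and the exact output format varies between versions):

~~~
[user@dom0 ~]$ qvm-device block list sys-usb
sys-usb:sda1  Kingston_DataTraveler_3.0  attached to 'work'
~~~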
@@ -111,11 +132,15 @@ The `list` action lists known devices in the system. `list` accepts VM-names to
### Attaching Devices ###
The `attach` action assigns an exposed device to a VM.
This makes the device available in the VM it's attached to.
Required arguments are targetVM and sourceVM:deviceID.
(sourceVM:deviceID can be determined from `list` output)
`attach` accepts two options:
- `--persistent` - attach device on targetVM-boot.
If the device is unavailable (physically missing or sourceVM not started), booting the targetVM fails.
- `--option`, `-o` - set additional options specific to DEVICE_CLASS.
**SYNOPSIS**
@@ -124,7 +149,9 @@ The `attach` action assigns an exposed device to a VM. This makes the device ava
### Detaching Devices ###
The `detach` action removes an assigned device from a targetVM.
It won't be available afterwards anymore.
Though it tries to do so gracefully, beware that data-connections might be broken unexpectedly, so close any transaction before detaching a device!
If no specific `sourceVM:deviceID` combination is given, *all devices of that DEVICE_CLASS will be detached.*
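Putting it together, a typical attach/detach cycle for a block device might look like this sketch (the qube names and device ID are placeholders):

~~~
# Attach the thumb drive exposed by sys-usb to the 'work' qube
[user@dom0 ~]$ qvm-device block attach work sys-usb:sda1

# ... use the device in 'work', close any transfers, then:
[user@dom0 ~]$ qvm-device block detach work sys-usb:sda1
~~~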

View File

@@ -68,7 +68,8 @@ This is a change in behaviour from R3.2, where DisposableVMs would inherit the s
Therefore, launching a DisposableVM from an AppVM will result in it using the network/firewall settings of the DisposableVM Template on which it is based.
For example, if an AppVM uses sys-net as its NetVM, but the default system DisposableVM uses sys-whonix, any DisposableVM launched from this AppVM will have sys-whonix as its NetVM.
**Warning:** The opposite is also true.
This means if you have changed anon-whonix's `default_dispvm` to use the system default, and the system default DisposableVM uses sys-net, launching a DisposableVM from inside anon-whonix will result in the DisposableVM using sys-net.
A DisposableVM launched from the Start Menu inherits the NetVM and firewall settings of the DisposableVM Template on which it is based.
Note that changing the "NetVM" setting for the system default DisposableVM Template *does* affect the NetVM of DisposableVMs launched from the Start Menu.
@@ -118,10 +119,11 @@ Note that the `qvm-open-in-dvm` process will not exit until you close the applic
## Starting an arbitrary program in a DisposableVM from an AppVM ##
Sometimes it can be useful to start an arbitrary program in a DisposableVM.
This can be done from an AppVM by running
~~~
[user@vault ~]$ qvm-run '@dispvm' xterm
~~~
The created DisposableVM can be accessed via other tools (such as `qvm-copy-to-vm`) using its `disp####` name as shown in the Qubes Manager or `qvm-ls`.
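For example, to copy a file into such a running DisposableVM (the `disp1234` name here is hypothetical; use the actual name shown by `qvm-ls`):

~~~
[user@vault ~]$ qvm-copy-to-vm disp1234 ~/notes.txt
~~~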
@@ -153,6 +155,21 @@ $ qvm-open-in-vm @dispvm:online-dvm-template https://www.qubes-os.org
This will create a new DisposableVM based on `online-dvm-template`, open the default web browser in that DisposableVM, and navigate to `https://www.qubes-os.org`.
#### Example of RPC policies to allow this behavior
In dom0, add the following line at the beginning of the file `/etc/qubes-rpc/policy/qubes.OpenURL`
~~~
@anyvm @dispvm:online-dvm-template allow
~~~
This line means:
- FROM: Any VM
- TO: A DisposableVM based on the `online-dvm-template` TemplateVM
- WHAT: Allow sending an "Open URL" request
In other words, any VM will be allowed to create a new DisposableVM based on `online-dvm-template` and open a URL inside of that DisposableVM.
More information about RPC policies for DisposableVMs can be found [here][qrexec].
## Customizing DisposableVMs ##
@@ -162,4 +179,4 @@ Full instructions can be found [here](/doc/disposablevm-customization/).
[DisposableVM Template]: /doc/glossary/#disposablevm-template
[qrexec]: /doc/qrexec/#qubes-rpc-administration

View File

@@ -14,7 +14,9 @@ Enabling Full Screen Mode for select VMs
What is full screen mode?
-------------------------
Normally Qubes GUI virtualization daemon restricts the VM from "owning" the full screen, ensuring that there are always clearly marked decorations drawn by the trusted Window Manager around each of the VMs window.
This allows the user to easily realize to which domain a specific window belongs.
See the [screenshots](/doc/QubesScreenshots/) for better understanding.
Why is full screen mode potentially dangerous?
----------------------------------------------
@@ -24,8 +26,12 @@ If one allowed one of the VMs to "own" the full screen, e.g. to show a movie on
Secure use of full screen mode
------------------------------
However, it is possible to deal with full screen mode in a secure way assuming there are mechanisms that can be used at any time to show the full desktop, and which cannot be intercepted by the VM.
An example of such a mechanism is the KDE's "Present Windows" and "Desktop Grid" effects, which are similar to Mac's "Expose" effect, and which can be used to immediately detect potential "GUI forgery", as they cannot be intercepted by any of the VM (as the GUID never passes down the key combinations that got consumed by KDE Window Manager), and so the VM cannot emulate those.
Those effects are enabled by default in KDE once Compositing gets enabled in KDE (System Settings -\> Desktop -\> Enable Desktop Effects), which is recommended anyway.
By default they are triggered by Ctrl-F8 and Ctrl-F9 key combinations, but can also be reassigned to other shortcuts.
Another option is to use Alt+Tab for switching windows.
This shortcut is also handled by dom0.
Enabling full screen mode for select VMs
----------------------------------------
@@ -60,11 +66,8 @@ global: {
Be sure to restart the VM(s) after modifying this file, for the changes to take effect.
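As a sketch, a per-VM override in dom0's `/etc/qubes/guid.conf` that allows fullscreen for a single qube might look like this (the `allow_fullscreen` option name is assumed from the default config shipped with this Qubes version, and the VM name is a placeholder):

~~~
VM: {
  personal: {
    # Allow this qube's windows to enter fullscreen mode
    allow_fullscreen = true;
  };
};
~~~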
**Note:** Regardless of the settings above, you can always put a window into fullscreen mode in Xfce4 using the trusted window manager by right-clicking on a window's title bar and selecting "Fullscreen".
This functionality should still be considered safe, since a VM window still can't voluntarily enter fullscreen mode.
The user must select this option from the trusted window manager in dom0.
To exit fullscreen mode from here, press `alt` + `space` to bring up the title bar menu again, then select "Leave Fullscreen".
For StandaloneHVMs, you should set the screen resolution in the qube to that of the host (or larger) *before* setting fullscreen mode in Xfce4.

View File

@@ -18,5 +18,8 @@ Currently, the only options for reading and recording optical discs (e.g., CDs,
3. Use a SATA optical drive attached to dom0.
(**Caution:** This option is [potentially dangerous](/doc/security-guidelines/#dom0-precautions).)
To access an optical disc via USB follow the [typical procedure for attaching a USB device](/doc/usb-devices/#with-the-command-line-tool), then check with the **Qubes Devices** widget to see what device in the target qube the USB optical drive was attached to.
Typically this would be `sr0`.
For example, if `sys-usb` has device `3-2` attached to the `work` qube's `sr0`, you would mount it with `mount /dev/sr0 /mnt/removable`.
You could also write to a disc with `wodim -v dev=/dev/sr0 -eject /home/user/Qubes.iso`.
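The steps above can be sketched as a shell session in the target qube (the `sr0` device name is typical but should be confirmed with the Qubes Devices widget):

~~~
# Mount and read an optical disc
sudo mkdir -p /mnt/removable
sudo mount /dev/sr0 /mnt/removable
ls /mnt/removable
sudo umount /mnt/removable

# Burn an ISO image to a blank disc
wodim -v dev=/dev/sr0 -eject /home/user/Qubes.iso
~~~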

View File

@@ -13,19 +13,24 @@ redirect_from:
*This page is part of [device handling in qubes].*
**Warning:** Only dom0 exposes PCI devices.
Some of them are strictly required in dom0 (e.g., the host bridge).
You may end up with an unusable system by attaching the wrong PCI device to a VM.
PCI passthrough should be safe by default, but non-default options may be required.
Please make sure you carefully read and understand the **[security considerations]** before deviating from default behavior.
## Introduction ##
Unlike other devices ([USB], [block], mic), PCI devices need to be attached on VM-bootup.
Similar to how you can't attach a new sound-card after your computer booted (and expect it to work properly), attaching PCI devices to already booted VMs isn't supported.
The Qubes installer attaches all network class controllers to `sys-net` and all USB controllers to `sys-usb` by default, if you chose to create the network and USB qube during install.
While this covers most use cases, there are some occasions when you may want to manually attach one NIC to `sys-net` and another to a custom NetVM, or have some other type of PCI controller you want to manually attach.
Some devices expose multiple functions with distinct BDF-numbers.
Limits imposed by the PC and VT-d architectures may require all functions belonging to the same device to be attached to the same VM.
This requirement can be dropped with the `no-strict-reset` option during attachment, bearing in mind the aforementioned [security considerations].
In the steps below, you can tell if this is needed if you see the BDF for the same device listed multiple times with only the number after the "." changing.
While PCI device can only be used by one powered on VM at a time, it *is* possible to *assign* the same device to more than one VM at a time.
@@ -35,7 +40,8 @@ This can be useful if, for example, you have only one USB controller, but you ha
## Attaching Devices Using the GUI ##
The qube settings for a VM offer a "Devices" tab.
There you can attach PCI devices to a qube.
1. To reach the settings of any qube either
@@ -45,13 +51,16 @@ The qube settings for a VM offers the "Devices"-tab. There you can attach PCI-de
2. Select the "Devices" tab on the top bar.
3. Select a device you want to attach to the qube and click the single right arrow (`>`).
4. You're done.
If everything worked out, once the qube boots (or reboots if it's running) it will start with the PCI device attached.
5. In case it doesn't work out, first try disabling memory-balancing in the settings ("Advanced" tab).
If that doesn't help, read on to learn how to disable the strict reset requirement!
## `qvm-pci` Usage ##
The `qvm-pci` tool allows PCI attachment and detachment.
It's a shortcut for [`qvm-device pci`][qvm-device].
To figure out what device to attach, first list the available PCI devices by running (as user) in dom0:
@@ -99,14 +108,17 @@ Both can be achieved during attachment with `qvm-pci` as described below.
## Additional Attach Options ##
Attaching a PCI device through the commandline offers additional options, specifiable via the `--option`/`-o` option.
(Yes, confusing wording, there's an [issue for that](https://github.com/QubesOS/qubes-issues/issues/4530).)
`qvm-pci` exposes two additional options.
Both are intended to fix device or driver specific issues, but both come with [heavy security implications][security considerations]! **Make sure you understand them before continuing!**
### no-strict-reset ###
Do not require PCI device to be reset before attaching it to another VM.
This may leak usage data even without malicious intent!
usage example:
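A sketch of such an invocation (the BDF `00_1a.0` and the qube names are illustrative; take the actual BDF from `qvm-pci` output):

~~~
qvm-pci attach --persistent --option no-strict-reset=true sys-usb dom0:00_1a.0
~~~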
@@ -115,7 +127,8 @@ usage example:
### permissive ###
Allow write access to full PCI config space instead of whitelisted registers.
This increases attack surface and possibility of [side channel attacks].
usage example:

View File

@@ -8,50 +8,53 @@ redirect_from:
- /wiki/SoftwareUpdateDom0/
---
# Installing and updating software in dom0
Updating dom0 is one of the main steps in [Updating Qubes OS].
It is very important to keep dom0 up-to-date with the latest [security] updates.
We also publish dom0 updates for various non-security bug fixes and enhancements to Qubes components.
In addition, you may wish to update the kernel, drivers, or libraries in dom0 when [troubleshooting newer hardware].
## Security
Since there is no networking in dom0, any bugs discovered in dom0 desktop components (e.g., the window manager) are unlikely to pose a problem for Qubes, since none of the third-party software running in dom0 is accessible from VMs or the network in any way.
Nonetheless, since software running in dom0 can potentially exercise full control over the system, it is important to install only trusted software in dom0.
The install/update process is split into two phases: *resolve and download* and *verify and install*.
The *resolve and download* phase is handled by the UpdateVM.
(The role of UpdateVM can be assigned to any VM in the Qube Manager, and there are no significant security implications in this choice.
By default, this role is assigned to the FirewallVM.)
After the UpdateVM has successfully downloaded new packages, they are sent to dom0, where they are verified and installed.
This separation of duties significantly reduces the attack surface, since all of the network and metadata processing code is removed from the TCB.
Although this update scheme is far more secure than directly downloading updates in dom0, it is not invulnerable.
For example, there is nothing that the Qubes OS Project can feasibly do to prevent a malicious RPM from exploiting a hypothetical bug in the cryptographic signature verification operation.
At best, we could switch to a different distro or package manager, but any of them could be vulnerable to the same (or a similar) attack.
While we could, in theory, write a custom solution, it would only be effective if Qubes repos included all of the regular TemplateVM distro's updates, and this would be far too costly for us to maintain.
## How to update dom0
In the Qube Manager, simply select dom0 in the VM list, then click the **Update VM system** button (the blue, downward-pointing arrow).
In addition, updating dom0 has been made more convenient: You will be prompted on the desktop whenever new dom0 updates are available and given the choice to run the update with a single click.
Alternatively, command-line tools are available for accomplishing various update-related tasks (some of which are not available via Qubes VM Manager).
In order to update dom0 from the command line, start a console in dom0 and then run one of the following commands:
To check and install updates for dom0 software:
$ sudo qubes-dom0-update
## How to install a specific package
To install additional packages in dom0 (usually not recommended):
$ sudo qubes-dom0-update anti-evil-maid
You may also pass the `--enablerepo=` option in order to enable optional repositories (see yum configuration in dom0).
However, this is only for advanced users who really understand what they are doing.
You can also pass commands to `dnf` using `--action=...`.
## How to downgrade a specific package
**WARNING:** Downgrading a package can expose your system to security vulnerabilities.
@@ -69,7 +72,7 @@ You can also pass commands to `dnf` using `--action=...`.
sudo dnf downgrade package-version
~~~
## How to re-install a package
You can re-install in a similar fashion to downgrading.
@@ -87,17 +90,18 @@ You can re-install in a similar fashion to downgrading.
sudo dnf reinstall package
~~~
Note that `dnf` will only re-install if the installed and downloaded versions match.
You can ensure they match by either updating the package to the latest version, or specifying the package version in the first step using the form `package-version`.
## How to uninstall a package
If you've installed a package such as anti-evil-maid, you can remove it with the following command:
sudo dnf remove anti-evil-maid
## Testing repositories
There are three Qubes dom0 [testing] repositories:
* `qubes-dom0-current-testing` -- testing packages that will eventually land in the stable
(`current`) repository
@@ -106,8 +110,8 @@ There are three Qubes dom0 testing repositories:
* `qubes-dom0-unstable` -- packages that are not intended to land in the stable (`qubes-dom0-current`)
repository; mostly experimental debugging packages
To temporarily enable any of these repos, use the `--enablerepo=<repo-name>` option.
Example commands:
~~~
sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing
@@ -118,10 +122,30 @@ sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable
To enable or disable any of these repos permanently, change the corresponding `enabled` value to `1` in
`/etc/yum.repos.d/qubes-dom0.repo`.
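As a sketch of that permanent change, the `enabled` flag can also be flipped non-interactively with `sed`. The snippet below operates on a sample copy so it can be run anywhere; the real file is `/etc/yum.repos.d/qubes-dom0.repo` (edited as root), and the section contents shown here are only illustrative:

```shell
# Illustrative sample copy; on a real system, edit
# /etc/yum.repos.d/qubes-dom0.repo as root instead.
cat > /tmp/qubes-dom0.repo <<'EOF'
[qubes-dom0-current-testing]
name = Qubes Dom0 Repository (updates-testing)
enabled = 0
EOF

# Set enabled = 1 only within the [qubes-dom0-current-testing] section.
sed -i '/^\[qubes-dom0-current-testing\]/,/^\[/ s/^enabled *= *0/enabled = 1/' /tmp/qubes-dom0.repo
```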
## Kernel upgrade
This section describes upgrading the kernel in dom0 and domUs.
### dom0
The packages `kernel` and `kernel-latest` are for dom0.
In the `current` repository:
- `kernel`: an older LTS kernel that has passed Qubes [testing] (the default dom0 kernel)
- `kernel-latest`: the latest release from kernel.org that has passed Qubes [testing] (useful for [troubleshooting newer hardware])
In the `current-testing` repository:
- `kernel`: the latest LTS kernel from kernel.org at the time it was built.
- `kernel-latest`: the latest release from kernel.org at the time it was built.
### domU
The packages `kernel-qubes-vm` and `kernel-latest-qubes-vm` are for domUs.
See [Managing VM kernel] for more information.
### Example
(Note that the following example enables the unstable repo.)
~~~
sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable kernel kernel-qubes-vm
@@ -131,12 +155,12 @@ If the update process does not automatically do it (you should see it mentioned
from the update command), you may need to manually rebuild the EFI or grub config depending on which
your system uses.
*EFI*: Replace the example version numbers with the one you are upgrading to.
~~~
sudo dracut -f /boot/efi/EFI/qubes/initramfs-4.14.35-1.pvops.qubes.x86_64.img 4.14.35-1.pvops.qubes.x86_64
~~~
*Grub2*
~~~
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
~~~
@ -147,11 +171,52 @@ If you wish to upgrade to a kernel that is not available from the repos, then
there is no easy way to do so, but [it may still be possible if you're willing
to do a lot of work yourself](https://groups.google.com/d/msg/qubes-users/m8sWoyV58_E/HYdReRIYBAAJ).
## Changing default kernel
This section describes changing the default kernel in dom0.
It is sometimes needed if you have upgraded to a newer kernel and are having problems booting, for example.
The procedure varies depending on whether you are booting with UEFI or grub.
On the next kernel update, the default will revert to the newest.
*EFI*
~~~
sudo nano /boot/efi/EFI/qubes/xen.cfg
~~~
In the `[global]` section at the top, change the `default=` line to match one of the three boot entries listed below.
For example,
~~~
default=4.19.67-1.pvops.qubes.x86_64
~~~
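Equivalently, the `default=` line can be switched with a single `sed` command. This is a sketch run against a sample copy; the real file is `/boot/efi/EFI/qubes/xen.cfg`, and the version numbers are only examples:

```shell
# Illustrative sample copy; on a real system, edit
# /boot/efi/EFI/qubes/xen.cfg as root instead.
cat > /tmp/xen.cfg <<'EOF'
[global]
default=4.14.74-1.pvops.qubes.x86_64
EOF

# Point default= at the desired boot entry (example version number).
sed -i 's/^default=.*/default=4.19.67-1.pvops.qubes.x86_64/' /tmp/xen.cfg
```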
*Grub2*
~~~
sudo nano /etc/default/grub
[update the following two lines, add if needed]
GRUB_DISABLE_SUBMENU=false
GRUB_SAVEDEFAULT=true
[save and exit nano]
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
~~~
Then, reboot.
Once the grub menu appears, choose "Advanced Options for Qubes (with Xen hypervisor)".
Next, select the top menu item (for example, "Xen hypervisor, version 4.8.5-9.fc25").
Then select the kernel you want as the default, and it will be remembered for the next boot.
## Updating over Tor
Requires installed [Whonix](/doc/privacy/whonix/).
Go to Qubes VM Manager -> System -> Global Settings.
Find the UpdateVM setting.
Choose your desired Whonix-Gateway ProxyVM from the list, for example: sys-whonix.
Qubes VM Manager -> System -> Global Settings -> UpdateVM -> sys-whonix
[Updating Qubes OS]: /doc/updating-qubes-os/
[security]: /security/
[testing]: /doc/testing/
[troubleshooting newer hardware]: /doc/newer-hardware-troubleshooting/
[Managing VM kernel]: /doc/managing-vm-kernel/


@@ -0,0 +1,211 @@
---
layout: doc
title: Installing and updating software in domUs
permalink: /doc/software-update-domu/
redirect_from:
- /doc/software-update-vm/
- /en/doc/software-update-vm/
- /doc/SoftwareUpdateVM/
- /wiki/SoftwareUpdateVM/
---
# Installing and updating software in domUs
Updating [domUs], especially [TemplateVMs] and [StandaloneVMs][StandaloneVM], is an important step in [Updating Qubes OS].
It is very important to keep domUs up-to-date with the latest [security] updates.
Updating these VMs also allows you to receive various non-security bug fixes and enhancements, both from the Qubes OS Project and from your upstream distro maintainer.
## Installing software in TemplateVMs
To permanently install new software in a TemplateVM:
1. Start the TemplateVM.
2. Start either a terminal (e.g. `gnome-terminal`) or a dedicated software management application, such as `gpk-application`.
3. Install software as normally instructed inside that operating system (e.g. using `dnf`, or the dedicated GUI application).
4. Shut down the TemplateVM.
5. Restart all [TemplateBasedVMs] based on the TemplateVM.
## Updating software in TemplateVMs
The recommended way to update your TemplateVMs is to use the **Qubes Update** tool.
By default, the icon for this tool will appear in your Notification Area when updates are available.
Simply click on it and follow the guided steps.
If you wish to open this tool directly, you can find it in the System Tools area of the Applications menu.
You can also update TemplateVMs individually.
In the Qube Manager, select the desired TemplateVM, then click **Update qube**.
Advanced users can execute the standard update command for that operating system from the command line, e.g., `dnf update` in Fedora or `apt-get update && apt-get upgrade` in Debian.
## Testing repositories
If you wish to install updates that are still in [testing], you must enable the appropriate testing repositories.
### Fedora
There are three Qubes VM testing repositories (where `*` denotes the Release):
* `qubes-vm-*-current-testing` -- testing packages that will eventually land in the stable (`current`) repository
* `qubes-vm-*-security-testing` -- a subset of `qubes-vm-*-current-testing` that contains packages that qualify as security fixes
* `qubes-vm-*-unstable` -- packages that are not intended to land in the stable (`qubes-vm-*-current`) repository; mostly experimental debugging packages
To temporarily enable any of these repos, use the `--enablerepo=<repo-name>` option.
Example commands:
~~~
sudo dnf upgrade --enablerepo=qubes-vm-*-current-testing
sudo dnf upgrade --enablerepo=qubes-vm-*-security-testing
sudo dnf upgrade --enablerepo=qubes-vm-*-unstable
~~~
To enable or disable any of these repos permanently, change the corresponding `enabled` value to `1` in `/etc/yum.repos.d/qubes-*.repo`.
### Debian
Debian also has three Qubes VM testing repositories (where `*` denotes the Release):
* `*-testing` -- testing packages that will eventually land in the stable (`current`) repository
* `*-securitytesting` -- a subset of `*-testing` that contains packages that qualify as security fixes
* `*-unstable` -- packages that are not intended to land in the stable repository; mostly experimental debugging packages
To permanently enable any of these repos, uncomment the corresponding `deb` line in `/etc/apt/sources.list.d/qubes-r*.list`; to disable, comment it out again.
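As an illustration of enabling one of these repos, the corresponding `deb` line can be uncommented with `sed`. The snippet below uses a sample copy with an assumed file name and repository URL, so it can be run anywhere; the real file lives in `/etc/apt/sources.list.d/`, and its name depends on the release:

```shell
# Illustrative sample copy with an assumed name and URL; on a real system,
# edit the qubes-r*.list file in /etc/apt/sources.list.d/ as root.
cat > /tmp/qubes-r4.list <<'EOF'
deb [arch=amd64] https://deb.qubes-os.org/r4.0/vm stretch main
#deb [arch=amd64] https://deb.qubes-os.org/r4.0/vm stretch-testing main
EOF

# Uncomment the stretch-testing line to enable that repository.
sed -i 's|^#\(deb .*stretch-testing .*\)|\1|' /tmp/qubes-r4.list
```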
## StandaloneVMs
When you create a [StandaloneVM] from a TemplateVM, the StandaloneVM is a complete clone of the TemplateVM, including the entire filesystem.
After the moment of creation, the StandaloneVM is completely independent from the TemplateVM.
Therefore, it will not be updated when the TemplateVM is updated.
Rather, it must be updated individually.
The process for installing and updating software in StandaloneVMs is the same as described above for TemplateVMs.
## Advanced
The following sections cover advanced topics pertaining to installing and updating software in domUs.
### RPMFusion for Fedora TemplateVMs
If you would like to enable the [RPM Fusion] repository, open a Terminal of the TemplateVM and type the following commands:
~~~
sudo dnf config-manager --set-enabled rpmfusion-free rpmfusion-nonfree
sudo dnf upgrade --refresh
~~~
## Reverting changes to a TemplateVM
Perhaps you've just updated your TemplateVM, and the update broke your template.
Or perhaps you've made a terrible mistake, like accidentally confirming the installation of an unsigned package that could be malicious.
If you want to undo changes to a TemplateVM, there are three basic methods:
1. **Root revert.**
This is appropriate for misconfigurations, but not for security concerns.
It will preserve your customizations.
2. **Reinstall the template.**
This is appropriate for both misconfigurations and security concerns, but you will lose all customizations.
3. **Full revert.**
This is appropriate for both misconfigurations and security concerns, and it can preserve your customizations.
However, it is a bit more complex.
### Root revert
**Important:** This command will roll back any changes made *during the last time the TemplateVM was run, but **not** before.*
This means that if you have already restarted the TemplateVM, using this command is unlikely to help, and you'll likely want to reinstall it from the repository instead.
On the other hand, if the template is already broken or compromised, it won't hurt to try reverting first.
Just make sure to **back up** all of your data and changes first!
1. Shut down `<template>`.
If you've already just shut it down, do **not** start it again (see above).
2. In a dom0 terminal:
qvm-volume revert <template>:root
### Reinstall the template
Please see [How to Reinstall a TemplateVM].
### Full revert
This is like the simple revert, except:
- You must also revert the private volume with `qvm-volume revert <template>:private`.
This requires you to have an old revision of the private volume, which does not exist with the current default config.
However, if you don't have anything important in the private volume (likely for a TemplateVM), then you can work around this by just resetting the private volume with `qvm-volume import --no-resize <template>:private /dev/null`.
- The saved revision of the volumes must be uncompromised.
With the default `revisions_to_keep=1` for the root volume, you must **not** have started the template since the compromising action.
### Temporarily allowing networking for software installation
Some third-party applications cannot be installed using the standard repositories and need to be manually downloaded and installed.
When the installation requires an internet connection to access third-party repositories, it will naturally fail when run in a TemplateVM, because the default firewall rules for templates only allow connections from package managers.
So, if you really want to install such applications into a template, you need to modify the firewall rules to allow less restrictive internet access for the duration of the installation.
As soon as the installation is complete, the firewall rules should be returned to the default state.
You should decide for yourself whether such third-party applications can be trusted as much as the ones that come from the standard Fedora signed repositories, and whether their installation will compromise the default TemplateVM.
If in doubt, consider installing them into a separate template or a StandaloneVM (in which case the problem of limited network access doesn't apply by default), as described above.
### Updates proxy
The updates proxy is a service which allows access only from package managers.
This is meant to mitigate user errors (like using a browser in the TemplateVM), rather than to provide real isolation.
It is implemented with an HTTP proxy (tinyproxy) instead of simple firewall rules, because it is hard to list all the repository mirrors (and keep that list up to date).
The proxy is used only to filter the traffic, not to cache anything.

The proxy runs in selected VMs (by default, all NetVMs (1)) and intercepts traffic directed to 10.137.255.254:8082.
Thanks to this configuration, all VMs can use the same proxy address, and if there is a proxy on the network path, it will handle the traffic (when the firewall rules allow it, of course).
If the VM is configured to have access to the updates proxy (2), the startup scripts will automatically configure dnf to actually use the proxy (3).
Access to the updates proxy is also independent of any other firewall settings (a VM will have access to the updates proxy even if its policy is set to block all traffic).
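As a rough illustration of step (3), the startup scripts end up pointing the package manager at the proxy address intercepted above; the effect is equivalent to a configuration line like the following (the exact file and syntax vary by distro and Qubes version, so treat this as a sketch, not the literal file):

~~~
# e.g. in dnf's configuration (illustrative):
proxy = http://10.137.255.254:8082/
~~~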
There are two services (`qvm-service`, [service framework]):
1. qubes-updates-proxy (and its deprecated name: qubes-yum-proxy) - a service providing a proxy for templates - by default enabled in NetVMs (especially: sys-net)
2. updates-proxy-setup (and its deprecated name: yum-proxy-setup) - use a proxy provided by another VM (instead of downloading updates directly), enabled by default in all templates
Both the old and new names work.
The defaults listed above are applied if the service is not explicitly listed in the services tab.
#### Technical details
The updates proxy uses RPC/qrexec.
The proxy is configured in qrexec policy on dom0: `/etc/qubes-rpc/policy/qubes.UpdatesProxy`.
By default this is set to sys-net and/or sys-whonix, depending on firstboot choices.
This new design allows for templates to be updated even when they are not connected to any NetVM.
Example policy file in R4.0 (with Whonix installed, but not set as default UpdateVM for all templates):
```
# any VM with tag `whonix-updatevm` should use `sys-whonix`; this tag is added to `whonix-gw` and `whonix-ws` during installation and is preserved during template clone
@tag:whonix-updatevm @default allow,target=sys-whonix
@tag:whonix-updatevm @anyvm deny
# other templates use sys-net
@type:TemplateVM @default allow,target=sys-net
@anyvm @anyvm deny
```
[domUs]: /doc/glossary/#domu
[TemplateVMs]: /doc/templates/
[StandaloneVM]: /doc/standalone-and-hvm/
[Updating Qubes OS]: /doc/updating-qubes-os/
[security]: /security/
[TemplateBasedVMs]: /doc/glossary/#templatebasedvm
[testing]: /doc/testing/
[RPM Fusion]: http://rpmfusion.org/
[service framework]: /doc/qubes-service/
[How to Reinstall a TemplateVM]: /doc/reinstall-template/


@@ -1,257 +0,0 @@
---
layout: doc
title: Installing and updating software in VMs
permalink: /doc/software-update-vm/
redirect_from:
- /en/doc/software-update-vm/
- /doc/SoftwareUpdateVM/
- /wiki/SoftwareUpdateVM/
---
Installing and updating software in VMs
=======================================
How TemplateVMs work in Qubes
------------------------------
Most of the AppVMs (domains) are based on a *TemplateVM*, which means that their root filesystem (i.e. all the programs and system files) is based on the root filesystem of the corresponding template VM.
This dramatically saves disk space, because each new AppVM needs disk space only for storing the user's files (i.e. the home directory).
Of course the AppVM has only read-access to the template's filesystem -- it cannot modify it in any way.
In addition to saving disk space and reducing domain creation time, another advantage of such a scheme is the possibility of centralized software updates.
It is enough to do the update in the template VM, and then all the AppVMs based on this template get the updates automatically after they are restarted.

The side effect of this mechanism is, of course, that if you install any software in your AppVM -- more specifically, in any directory other than `/home`, `/usr/local`, or `/rw` -- then it will disappear after the AppVM reboots (as the root filesystem for this AppVM will again be "taken" from the TemplateVM).
**This means one normally installs software in the TemplateVM, not in AppVMs.**
The template root filesystem is created in a thin pool, so manual trims are not necessary.
See [here](/doc/disk-trim) for further discussion on enabling discards/trim support.
Installing (or updating) software in the TemplateVM
----------------------------------------------------
In order to permanently install new software, you should:
- Start the template VM, and then start either a console (e.g. `gnome-terminal`) or a dedicated software management application, such as `gpk-application` (*Start-\>Applications-\>Template: fedora-XX-\>Add/Remove software*),
- Install/update software as usual (e.g. using dnf, or the dedicated GUI application).
  Then, shut down the template VM.
- You will now see that all the AppVMs based on this template (by default, all your VMs) will be marked as "outdated" in the manager.
This is because their filesystems have not been yet updated -- in order to do that, you must restart each VM.
You don't need to restart all of them at the same time -- e.g. if you just need the newly installed software to be available in your 'personal' domain, then restart only this VM.
You can restart others whenever this will be convenient to you.
Notes on trusting your TemplateVM(s)
-------------------------------------
As the TemplateVM is used for creating filesystems for other AppVMs where you actually do the work, it means that the TemplateVM is as trusted as the most trusted AppVM based on this template.
In other words, if your template VM gets compromised, e.g. because you installed an application, whose *installer's scripts* were malicious, then *all* your AppVMs (based on this template) will inherit this compromise.
There are several ways to deal with this problem:
- Only install packages from trusted sources -- e.g. from the pre-configured Fedora repositories.
All those packages are signed by Fedora, and we expect that at least the package's installation scripts are not malicious.
This is enforced by default (at the [firewall VM level](/doc/firewall/)), by not allowing any networking connectivity in the default template VM, except for access to the Fedora repos.
- Use *standalone VMs* (see below) for installation of untrusted software packages.
- Use multiple templates (see below) for different classes of domains, e.g. a less trusted template, used for creation of less trusted AppVMs, would get various packages from less trusted vendors, while the template used for more trusted AppVMs will only get packages from the standard Fedora repos.
Some popular questions:
- So, why should we actually trust the Fedora repos -- they also contain a large amount of third-party software that might be buggy, right?
As far as the template's compromise is concerned, it doesn't really matter whether `/usr/bin/firefox` is buggy and can be exploited, or not.
What matters is whether its *installation* scripts (such as %post in the rpm.spec) are benign or not.
Template VM should be used only for installation of packages, and nothing more, so it should never get a chance to actually run `/usr/bin/firefox` and get infected from it, in case it was compromised.
Also, some of your more trusted AppVMs would have networking restrictions enforced by the [firewall VM](/doc/firewall/), and again they should not fear this proverbial `/usr/bin/firefox` being potentially buggy and easy to compromise.
- But why trust Fedora?
Because we chose to use Fedora as a vendor for the Qubes OS foundation (e.g. for Dom0 packages and for AppVM packages).
We also chose to trust several other vendors, such as Xen.org, kernel.org, and a few others whose software we use in Dom0.
We had to trust *somebody* as we are unable to write all the software from scratch ourselves.
But there is a big difference in trusting all Fedora packages to be non-malicious (in terms of installation scripts) vs. trusting all those packages are non-buggy and non-exploitable.
We certainly do not assume the latter.
- So, are the template VMs as trusted as Dom0?
Not quite.
Dom0 compromise is absolutely fatal, and it leads to Game Over<sup>TM</sup>.
However, a compromise of a template affects only a subset of all your AppVMs (in case you use more than one template, or also some standalone VMs).
Also, if your AppVMs are network-disconnected, then even though their filesystems might get compromised due to the corresponding template compromise, it would still be difficult for the attacker to actually leak out the data stolen in an AppVM.
Not impossible (due to the existence of covert channels between VMs on the x86 architecture), but difficult and slow.
Standalone VMs
--------------
Standalone VMs have their own copy of the whole filesystem, and thus can be updated and managed on their own.
But this means that they take a few GBs on disk, and also that centralized updates do not apply to them.
Sometimes it might be convenient to have a VM that has its own filesystem, where you can directly introduce changes, without the need to start/stop the template VM.
Such situations include e.g.:
- VMs used for development (devel environments require a lot of \*-devel packages and specific devel tools)
- VMs used for installing untrusted packages.
Normally you install digitally signed software from Red Hat/Fedora repositories, and it's reasonable to assume that such software has non-malicious *installation* scripts (rpm pre/post scripts).
However, when you would like to install some packages from less trusted sources, or unsigned, then using a dedicated (untrusted) standalone VM might be a better way.
In order to create a standalone VM you can use a command line like this (from console in Dom0):
```
qvm-create --class StandaloneVM --label <label> --property virt_mode=hvm <vmname>
```
... or click appropriate options in the Qubes Manager's Create VM window.
(Note: Technically, `virt_mode=hvm` is not necessary for every StandaloneVM. However, it makes sense if you want to use a kernel from within the VM.)
Using more than one template
----------------------------
It's also possible to have more than one template VM in the system.
E.g. one could clone the default template using the `qvm-clone` command in Dom0.
This allows you to have a customized template, e.g. with devel packages, or less trusted apps, shared by some subset of domains.
It is important to note that the default template is "system managed" and therefore cannot be backed up using Qubes' built-in backup function.
In order to ensure the preservation of your custom settings and the availability of a "known-good" backup template, you may wish to clone the default system template and use your clone as the default template for your AppVMs.
When you create a new domain you can choose which template this VM should be based on.
If you use command line, you should use the `--template` switch:
~~~
qvm-create <vmname> --template <templatename> --label <label>
~~~
Updates proxy
-------------
The updates proxy is a service which allows access only from package managers.
It is meant to mitigate user errors (like using a browser in the template VM), rather than to provide real isolation.
It is implemented with an HTTP proxy (tinyproxy) instead of simple firewall rules, because it is hard to list all the repository mirrors (and keep that list up to date).
The proxy is used only to filter the traffic, not to cache anything.
The proxy runs in selected VMs (by default all the NetVMs (1)) and intercepts traffic directed to 10.137.255.254:8082.
Thanks to this configuration, all the VMs can use the same proxy address, and if there is a proxy on the network path, it will handle the traffic (when firewall rules allow it, of course).
If the VM is configured to have access to the updates proxy (2), the startup scripts will automatically configure dnf to use the proxy (3).
Access to the updates proxy is also independent of any other firewall settings (the VM will have access to the updates proxy even if its policy is set to block all traffic).
There are two services (`qvm-service`, [service framework](https://www.qubes-os.org/doc/qubes-service/)):
1. qubes-updates-proxy (and its deprecated name: qubes-yum-proxy) - a service providing a proxy for templates - by default enabled in NetVMs (especially: sys-net)
2. updates-proxy-setup (and its deprecated name: yum-proxy-setup) - use a proxy provided by another VM (instead of downloading updates directly), enabled by default in all templates
Both the old and new names work.
The defaults listed above are applied if the service is not explicitly listed in the services tab.
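For example, these services can be toggled per-VM with `qvm-service` (a sketch; the VM names are examples):

```shell
# In dom0 -- VM names are examples.
# Enable the proxy service in the VM that should serve updates:
qvm-service sys-net qubes-updates-proxy on

# Make a template fetch updates through the proxy:
qvm-service debian-9 updates-proxy-setup on

# Show all services explicitly set on a VM:
qvm-service sys-net
```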
### Technical details
The updates proxy uses RPC/qrexec.
The proxy is configured in qrexec policy on dom0: `/etc/qubes-rpc/policy/qubes.UpdatesProxy`.
By default this is set to sys-net and/or sys-whonix, depending on firstboot choices.
This new design allows templates to be updated even when they are not connected to any NetVM.
Example policy file in R4.0 (with whonix installed, but not set as default updatevm for all templates):
```
# any VM with tag `whonix-updatevm` should use `sys-whonix`; this tag is added to `whonix-gw` and `whonix-ws` during installation and is preserved during template clone
$tag:whonix-updatevm $default allow,target=sys-whonix
$tag:whonix-updatevm $anyvm deny
# other templates use sys-net
$type:TemplateVM $default allow,target=sys-net
$anyvm $anyvm deny
```
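As a quick sanity check (a sketch, not part of the official procedure), one can test from inside a template whether the proxy responds. The address used here is an assumption: in R4.0 the proxy is expected at `127.0.0.1:8082` (forwarded over qrexec per the policy above), while in R3.x it is `10.137.255.254:8082`:

```shell
# Inside a template -- a sketch; adjust the proxy address for your release.
if curl -s -o /dev/null -x http://127.0.0.1:8082/ http://deb.debian.org/; then
    echo "updates proxy reachable"
else
    echo "updates proxy NOT reachable"
fi
```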
Note on treating AppVM's root filesystem non-persistence as a security feature
------------------------------------------------------------------------------
As explained above, any AppVM that is based on a Template VM (i.e. which is not a Standalone VM) has a root filesystem that is non-persistent across VM reboots.
In other words, whatever changes the VM (or malware running in it) makes to its root filesystem are automatically discarded whenever the AppVM is restarted.
This might seem like an excellent anti-malware mechanism to be used inside the AppVM...
However, one should be careful with treating this property as a reliable way to keep the AppVM malware-free.
This is because the non-persistence, in the case of normal AppVMs, applies only to the root filesystem and not to the user filesystem (on which the `/home`, `/rw`, and `/usr/local` are stored) for obvious reasons.
It is possible that malware, especially malware specifically written to target Qubes-based AppVMs, could install its hooks inside the user home directory files only.
Examples of obvious places for such hooks are: `.bashrc`, the Firefox profile directory (which contains the extensions), or some PDF or DOC documents that the user is expected to open frequently (assuming the malware found an exploitable bug in the PDF or DOC reader), and many other places, all in the user's home directory.
One advantage of the non-persistent rootfs, though, is that the malware is still inactive before the user's filesystem gets mounted and "processed" by system/applications, which might theoretically allow some scanning programs (or a skilled user) to reliably scan for signs of infection of the AppVM.
But, of course, the problem of finding malware hooks in general is hard, so this would work likely only for some special cases (e.g. an AppVM which doesn't use Firefox, as otherwise it would be hard to scan the Firefox profile directory reliably to find malware hooks there).
Also note that the user filesystem's metadata might be maliciously modified by malware in order to exploit a hypothetical bug in the AppVM kernel whenever it mounts the malformed filesystem.
However, such exploits will automatically stop working (and so the infection might be cleared automatically) once the hypothetical bug is patched and the update applied (via a template update), which is an exceptional feature of Qubes OS.
Also note that DisposableVMs do not have a persistent user filesystem, and so they start up completely "clean" every time.
(In this context, "clean" means: the same as their template's filesystem, of course.)
RPMFusion for a Fedora TemplateVM
---------------------------------
If you would like to enable the [RPM Fusion](http://rpmfusion.org/) repository, open a Terminal of the TemplateVM and type the following commands:
~~~
sudo dnf config-manager --set-enabled rpmfusion-free rpmfusion-nonfree
sudo dnf upgrade --refresh
~~~
@ -7,9 +7,10 @@ permalink: /doc/updating-qubes-os/
Updating Qubes OS
=================
This page is about updating your system while staying on the same [supported version of Qubes OS].
If you're instead looking to upgrade from your current version of Qubes OS to a newer version, see the [Upgrade Guides].
*This page is about updating your system while staying on the same [supported version of Qubes OS].
If you're instead looking to upgrade from your current version of Qubes OS to a newer version, see the [Upgrade Guides].*
It is very important to keep your Qubes OS system up-to-date to ensure you have the latest [security] updates, as well as the latest non-security enhancements and bug fixes.
Fully updating your Qubes OS system means updating:
- [Dom0]
@ -18,7 +19,7 @@ Fully updating your Qubes OS system means updating:
Visit the pages above to see to how to update each one.
The final step is to make sure that all of your VMs are running a supported operating system so that they're all receiving security updates.
The final step is to make sure that all of your VMs are running a supported operating system so that they're all receiving upstream security updates.
For example, you might be using a [Fedora TemplateVM].
The [Fedora Project] is independent of the Qubes OS Project.
They set their own [schedule] for when each Fedora release reaches [end-of-life] (EOL).
@ -29,9 +30,10 @@ The one exception is dom0, which [doesn't have to be upgraded][dom0-eol].
[supported version of Qubes OS]: /doc/supported-versions/#qubes-os
[Upgrade Guides]: /doc/upgrade/
[security]: /security/
[Dom0]: /doc/software-update-dom0/
[TemplateVMs]: /doc/software-update-vm/#installing-or-updating-software-in-the-templatevm
[StandaloneVMs]: /doc/software-update-vm/#standalone-vms
[TemplateVMs]: /doc/software-update-domu/#updating-software-in-templatevms
[StandaloneVMs]: /doc/software-update-domu/#standalonevms
[Fedora TemplateVM]: /doc/templates/fedora/
[Fedora Project]: https://getfedora.org/
[schedule]: https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle#Maintenance_Schedule
@ -12,7 +12,9 @@ redirect_from:
If you are looking to handle USB *storage* devices (thumbdrives or USB-drives), please have a look at the [block device] page.
**Important security warning:** USB passthrough comes with many security implications. Please make sure you carefully read and understand the **[security considerations]**. Whenever possible, attach a [block device] instead.
**Important security warning:** USB passthrough comes with many security implications.
Please make sure you carefully read and understand the **[security considerations]**.
Whenever possible, attach a [block device] instead.
Examples of valid cases for USB-passthrough:
@ -20,7 +22,8 @@ Examples of valid cases for USB-passthrough:
- [external audio devices]
- [optical drives] for recording
(If you are thinking to use a two-factor-authentication device, [there is an app for that][qubes u2f proxy]. But it has some [issues][4661].)
(If you are thinking of using a two-factor-authentication device, [there is an app for that][qubes u2f proxy].
But it has some [issues][4661].)
## Attaching And Detaching a USB Device ##
@ -29,11 +32,14 @@ Examples of valid cases for USB-passthrough:
### With Qubes Device Manager ###
Click the device-manager-icon: ![device manager icon]
A list of available devices appears. USB-devices have a USB-icon to their right: ![usb icon]
A list of available devices appears.
USB-devices have a USB-icon to their right: ![usb icon]
Hover on one device to display a list of VMs you may attach it to.
Click one of those. The USB device will be attached to it. You're done.
Click one of those.
The USB device will be attached to it.
You're done.
After you have finished using the USB device, you can detach it the same way by clicking on the Devices Widget.
You will see an entry in bold for your device such as **`sys-usb:2-5 - 058f_USB_2.0_Camera`**.
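The same attach/detach flow is also available from the dom0 command line via `qvm-usb` (the device ID and VM names below are examples):

```shell
# In dom0 -- list available USB devices:
qvm-usb

# Attach the device exposed by sys-usb to the qube "work":
qvm-usb attach work sys-usb:2-5

# Detach it when you are done:
qvm-usb detach work sys-usb:2-5
```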
@ -81,13 +87,15 @@ When you finish, detach the device.
### Creating And Using a USB qube ###
If you've selected to install a usb-qube during system installation, everything is already set up for you in `sys-usb`. If you've later decided to create a usb-qube, please follow [this guide][USB-qube howto].
If you've selected to install a usb-qube during system installation, everything is already set up for you in `sys-usb`.
If you've later decided to create a usb-qube, please follow [this guide][USB-qube howto].
### Installation Of `qubes-usb-proxy` ###
To use this feature, the [`qubes-usb-proxy`][qubes-usb-proxy] package needs to be installed in the templates used for the USB qube and the qubes you want to connect USB devices to.
This section exists for reference or in case something broke and you need to reinstall `qubes-usb-proxy`. Under normal conditions, `qubes-usb-proxy` should already be installed and good to go.
This section exists for reference or in case something broke and you need to reinstall `qubes-usb-proxy`.
Under normal conditions, `qubes-usb-proxy` should already be installed and good to go.
If you receive this error: `ERROR: qubes-usb-proxy not installed in the VM`, you can install the `qubes-usb-proxy` with the package manager in the VM you want to attach the USB device to.
Note: you cannot pass through devices from dom0 (in other words: a [USB qube][USB-qube howto] is required).
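If a reinstall is needed, the package can be installed with the template's own package manager (a sketch, assuming standard Fedora- or Debian-based templates):

```shell
# Inside the template on which the relevant qubes are based.
# Fedora-based template:
sudo dnf install qubes-usb-proxy
# Debian-based template:
sudo apt install qubes-usb-proxy
```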
@ -36,8 +36,10 @@ This method is simpler than the method recommended in [QSB #46], but it is just
Hardware Requirements
---------------------
Please see the [system requirements] and [Hardware Compatibility List] pages for
more information on required and recommended hardware.
Qubes OS has very specific [system requirements].
To ensure compatibility, we strongly recommend using [Qubes-certified hardware].
Other hardware may require you to perform significant troubleshooting.
You may also find it helpful to consult user submissions to the [Hardware Compatibility List].
**Note:** We don't recommend installing Qubes in a virtual machine! It will
likely not work. Please don't send emails asking about it. You can, however,
@ -136,6 +138,7 @@ Getting Help
[QSB #46]: /news/2019/01/23/qsb-46/
[system requirements]: /doc/system-requirements/
[Qubes-certified hardware]: /doc/certified-hardware/
[Hardware Compatibility List]: /hcl/
[live USB]: /doc/live-usb/
[downloads]: /downloads/
@ -4,11 +4,13 @@ title: Supported Versions
permalink: /doc/supported-versions/
---
Supported Versions
==================
# Supported Versions
This page details the level and period of support for versions of operating systems in the Qubes ecosystem.
## Qubes OS
Qubes OS
--------
Qubes OS releases are supported for **six months** after each subsequent major
or minor release (see [Version Scheme]). The current release and past major
releases are always available on the [Downloads] page, while all ISOs, including
@ -24,17 +26,17 @@ past minor releases, are available from our [download mirrors].
| Release 4.0 | 2018-03-28 | TBA | Current, supported |
| Release 4.1 | TBA | TBA | [In development][4.1] |
### Note on point releases ###
Please note that point releases, such as 3.2.1 and 4.0.1, do not designate
separate, new versions of Qubes OS. Rather, they designate their respective
major or minor releases, such as 4.0 and 3.2, inclusive of all package updates
up to a certain point. For example, installing Release 4.0 and fully updating it
results in the same system as installing Release 4.0.1. Therefore, point
releases are not displayed as separate rows on any of the tables on this page.
### Note on point releases
Please note that point releases, such as 3.2.1 and 4.0.1, do not designate separate, new versions of Qubes OS.
Rather, they designate their respective major or minor releases, such as 4.0 and 3.2, inclusive of all package updates up to a certain point.
For example, installing Release 4.0 and fully updating it results in the same system as installing Release 4.0.1.
Therefore, point releases are not displayed as separate rows on any of the tables on this page.
## Dom0
Dom0
----
The table below shows the OS used for dom0 in each Qubes OS release.
| Qubes OS | Dom0 OS |
@ -46,21 +48,19 @@ The table below shows the OS used for dom0 in each Qubes OS release.
| Release 3.2 | Fedora 23 |
| Release 4.0 | Fedora 25 |
### Note on dom0 and EOL ###
Dom0 is isolated from domUs. DomUs can access only a few interfaces,
such as Xen, device backends (in the dom0 kernel and in other VMs, such as the
NetVM), and Qubes tools (gui-daemon, qrexec-daemon, etc.). These components are
[security-critical], and we provide updates for all of them (when necessary),
regardless of the support status of the base distribution. For this reason, we
consider it safe to continue using a given base distribution in dom0 even after
it has reached EOL (end-of-life).
### Note on dom0 and EOL
Dom0 is isolated from domUs. DomUs can access only a few interfaces, such as Xen, device backends (in the dom0 kernel and in other VMs, such as the NetVM), and Qubes tools (gui-daemon, qrexec-daemon, etc.).
These components are [security-critical], and we provide updates for all of them (when necessary), regardless of the support status of the base distribution.
For this reason, we consider it safe to continue using a given base distribution in dom0 even after it has reached EOL (end-of-life).
TemplateVMs
-----------
The table below shows the [TemplateVM] versions supported by each Qubes OS
release. Currently, only Fedora, Debian, and Whonix TemplateVMs are officially supported.
## TemplateVMs
The table below shows the [TemplateVM] versions supported by each Qubes OS release.
Currently, only [Fedora] and [Debian] TemplateVMs are officially supported by the Qubes OS Project.
[Whonix] TemplateVMs are supported by our partner, the [Whonix Project].
| Qubes OS | Fedora | Debian | Whonix |
| ------------- | ---------------------------- | --------------------------------------------- | ---------- |
@ -75,11 +75,12 @@ release. Currently, only Fedora, Debian, and Whonix TemplateVMs are officially s
extensive testing.
Whonix
------
### Note on Whonix support
[Whonix] is an advanced feature in Qubes OS.
Those who wish to use it must stay reasonably close to the cutting edge by upgrading to new stable versions of Qubes OS and Whonix TemplateVMs within a month of their respective releases.
[Whonix] TemplateVMs are supported by our partner, the [Whonix Project].
The Whonix Project has set its own support policy for Whonix TemplateVMs in Qubes.
This policy requires Whonix TemplateVM users to stay reasonably close to the cutting edge by upgrading to new stable versions of Qubes OS and Whonix TemplateVMs within a month of their respective releases.
To be precise:
* One month after a new stable version of Qubes OS is released, Whonix TemplateVMs will no longer be supported on any older version of Qubes OS.
@ -98,5 +99,8 @@ We aim to announce both types of events one month in advance in order to remind
[TemplateVM]: /doc/templates/
[extended support]: /news/2018/03/28/qubes-40/#the-past-and-the-future
[4.1]: https://github.com/QubesOS/qubes-issues/issues?utf8=%E2%9C%93&q=is%3Aissue+milestone%3A%22Release+4.1%22+
[Fedora]: /doc/templates/fedora/
[Debian]: /doc/templates/debian/
[Whonix]: /doc/whonix/

@ -28,6 +28,17 @@ How to test updates:
* Enable [dom0 testing repositories].
* Enable [TemplateVM testing repositories].
Every new update is first uploaded to the `security-testing` repository if it is a security update or `current-testing` if it is a normal update.
The update remains in `security-testing` or `current-testing` for a minimum of one week.
On occasion, an exception is made for a particularly critical security update, which is immediately pushed to the `current` stable repository.
In general, however, security updates remain in `security-testing` for two weeks before migrating to `current`.
Normal updates generally remain in `current-testing` until they have been sufficiently tested by the community, which can take weeks or even months, depending on the amount of feedback received (see [Providing Feedback]).
"Sufficient testing" is, in practice, a fluid term that is up to the developers' judgment. In general, it means either that no negative feedback and at least one piece of positive feedback has been received, or that the package has been in `current-testing` for long enough, depending on the component and the complexity of the changes.
A limitation of the current testing setup is that it is only possible to migrate the *most recent version* of a package from `current-testing` to `current`.
This means that, if a newer version of a package is uploaded to `current-testing`, it will no longer be possible to migrate any older versions of that same package from `current-testing` to `current`, even if one of those older versions has been deemed stable enough.
While this limitation can be inconvenient, the benefits outweigh the costs, since it greatly simplifies the testing and reporting process.
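As an illustration (a sketch; the repository names and package are examples and may differ between releases), testing packages can be pulled explicitly once the testing repositories are enabled:

```shell
# In dom0 -- install/update a specific package from current-testing:
sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing qubes-core-dom0

# In a Fedora-based TemplateVM -- upgrade from the VM testing repository:
sudo dnf upgrade --enablerepo=qubes-vm-r4.0-current-testing
```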
Providing Feedback
------------------
If you're testing new releases or updates, we would be grateful for your feedback.
@ -38,13 +49,15 @@ We welcome any kind of feedback on any package in any testing repository.
Even a simple <span class="fa fa-thumbs-up" title="Thumbs Up"></span> or <span class="fa fa-thumbs-down" title="Thumbs Down"></span> on the package's associated issue would help us to decide whether the package is ready to be migrated to a stable repository.
If you [report a bug] in a package that is in a testing repository, please reference the appropriate issue in [updates-status].
[contribute]: /doc/contributing/
[qubes-builder]: /doc/qubes-builder/
[Version Scheme]: /doc/version-scheme/
[Release Checklist]: /doc/releases/todo/
[dom0 testing repositories]: /doc/software-update-dom0/#testing-repositories
[TemplateVM testing repositories]: /doc/software-update-vm/#testing-repositories
[TemplateVM testing repositories]: /doc/software-update-domu/#testing-repositories
[automated build process]: https://github.com/QubesOS/qubes-infrastructure/blob/master/README.md
[updates-status]: https://github.com/QubesOS/updates-status/issues
[report a bug]: /doc/reporting-bugs/
[Providing Feedback]: #providing-feedback
@ -15,6 +15,9 @@ We aim for these vendors to be as diverse as possible in terms of geography, cos
Note, however, that we certify only that a particular hardware *configuration* is *supported* by Qubes.
We take no responsibility for our partners' manufacturing or shipping processes, nor can we control whether physical hardware is modified (whether maliciously or otherwise) *en route* to the user.
There are also other hardware models on which we have tested Qubes OS.
See [Hardware Testing] for details.
## Qubes-certified Laptop: Insurgo PrivacyBeast X230
@ -84,6 +87,7 @@ While we are willing to troubleshoot simple issues, we will need to charge a con
If you are interested in having your hardware certified, please [contact us].
[Hardware Testing]: /doc/hardware-testing/
[stateless laptop]: https://blog.invisiblethings.org/2015/12/23/state_harmful.html
[System Requirements]: /doc/system-requirements/
[Hardware Compatibility List]: /hcl/
@ -0,0 +1,44 @@
---
layout: doc
title: Hardware Testing
permalink: /doc/hardware-testing/
---
# Hardware Testing
The Qubes developers test Qubes OS on certain hardware models.
The tested hardware described on this page differs from [Qubes Certified Hardware] in a few key ways:
- Qubes Certified Hardware has to meet more demanding standards than hardware that is merely tested.
- All Qubes Certified Hardware is tested, but not all tested hardware is certified.
- A specific certified configuration is guaranteed to be supported on specific versions of Qubes OS, whereas hardware testing provides no guarantees.
In general, you can think of tested hardware as "unofficially recommended" hardware:
- [Qubes Certified Hardware] --- Qubes developer certified, officially recommended
- Hardware Testing (this page) --- Qubes developer tested, unofficially recommended
- [Hardware Compatibility List (HCL)] --- community test results, neither recommended nor disrecommended
## Tested Models
These hardware models have been (and continue to be) tested and work well with Qubes OS:
- Lenovo ThinkPad X1 Carbon Gen 5
- Lenovo ThinkPad x230
Note: The Lenovo X and T series are similar enough to assume similar compatibility of the matching model from the other series.
## Desired Models
The Qubes devs would like to test these models, but the hardware is not available to us.
If anyone is willing to lend or donate these models to us, we would be happy to test them:
- Lenovo Thinkpad T or X series with Intel 8th Gen CPU and integrated graphics
- Dell Latitude with Intel 8th Gen CPU and integrated graphics
Note: The Lenovo X and T series are similar enough to assume similar compatibility of the matching model from the other series.
[Qubes Certified Hardware]: /doc/certified-hardware/
[Hardware Compatibility List (HCL)]: /hcl/
@ -29,6 +29,7 @@ redirect_from: /compatible-hardware/
<li><a href="/doc/hcl/#generating-and-submitting-new-reports">How do I Submit a Report?</a></li>
<li><a href="/doc/system-requirements/">Qubes OS System Requirements</a></li>
<li><a href="/doc/certified-hardware/">Certified Hardware</a></li>
<li><a href="/doc/hardware-testing/">Hardware Testing</a></li>
</ul>
</div>
<div class="col-lg-9 col-md-9">
@ -29,6 +29,8 @@ If using the list to make a purchasing decision, we recommend that you choose ha
- the best achievable Qubes security level (green columns in HVM, IOMMU, TPM)
- and general machine compatibility (green columns in Qubes version, dom0 kernel, remarks).
Also see [Certified Hardware] and [Hardware Testing].
Generating and Submitting New Reports
-------------------------------------
@ -44,3 +46,8 @@ Please consider sending the **HCL Support Files** `.cpio.gz` file as well. To ge
**Please note:**
The **HCL Support Files** may contain numerous hardware details, including serial numbers. If, for privacy or security reasons, you do not wish to make this information public, please **do not** send the `.cpio.gz` file to the public mailing list.
[Certified Hardware]: /doc/certified-hardware/
[Hardware Testing]: /doc/hardware-testing/
@ -47,7 +47,7 @@ redirect_from:
* [Intel VT-x] with [EPT] or [AMD-V] with [RVI]
* [Intel VT-d] or [AMD-Vi (aka AMD IOMMU)]
* 4 GB RAM
* 32 GB disk space
* 32 GiB disk space
### Recommended ###
@ -65,13 +65,14 @@ redirect_from:
* Please see the [Hardware Compatibility List] for a compilation of hardware reports generated and submitted by users across various Qubes versions.
(For more information about the HCL itself, see [here][hcl-doc].)
* See the [Certified Hardware] page.
* See the [Hardware Testing] page.
## Important Notes ##
* Qubes **can** be installed on systems which do not meet the recommended requirements.
Such systems will still offer significant security improvements over traditional operating systems, since things like GUI isolation and kernel protection do not require special hardware.
* Qubes **can** be installed on a USB flash drive or external disk, and testing has shown that this works very well. A fast USB 3.0 flash drive is recommended for this.
(As a reminder, its capacity must be at least 32 GB.)
(As a reminder, its capacity must be at least 32 GiB.)
Simply plug the flash drive into the computer before booting into the Qubes installer from a separate installation medium, choose the flash drive as the target installation disk, and proceed with the installation normally.
After Qubes has been installed on the flash drive, it can then be plugged into other computers in order to boot into Qubes.
In addition to the convenience of having a portable copy of Qubes, this allows users to test for hardware compatibility on multiple machines (e.g., at a brick-and-mortar computer
@ -85,6 +86,7 @@ redirect_from:
[nvidia]: /doc/install-nvidia-driver/
[hardware certification requirements for Qubes 4.x]: /news/2016/07/21/new-hw-certification-for-q4/
[Certified Hardware]: /doc/certified-hardware/
[Hardware Testing]: /doc/hardware-testing/
[Hardware Compatibility List]: /hcl/
[hcl-doc]: /doc/hcl/

@ -1,115 +0,0 @@
---
layout: doc
title: Debian Minimal Template
permalink: /doc/templates/debian-minimal/
---
Debian - minimal
================
The template weighs about 200 MB compressed (0.75 GB on disk) and has only the most vital packages installed, including a minimal X and xterm installation.
The minimal template, however, can be easily extended to fit your requirements.
The sections below contain instructions on cloning the template and provide some examples for commonly desired use cases.
Note that use of the minimal template requires some familiarity with the command line and basics of Qubes.
Installation
------------
The Debian minimal template can be installed with the following command:
~~~
[user@dom0 ~]$ sudo qubes-dom0-update --enablerepo=qubes-templates-itl-testing qubes-template-debian-9-minimal
~~~
The download may take a while depending on your connection speed.
Duplication and first steps
---------------------------
It is highly recommended that you clone the original template, and make any changes in the clone instead of the original template.
The following command clones the template.
(Replace `your-new-clone` with your desired name.)
~~~
[user@dom0 ~]$ qvm-clone debian-9-minimal your-new-clone
~~~
You must start the template in order to customize it.
Customization
-------------
Customizing the template for specific use cases normally only requires installing additional packages.
The following table provides an overview of which packages are needed for which purpose.
As you would expect, the required packages can be installed in the running template with any apt-based command.
For example (replace `packages` with a space-delimited list of packages to be installed):
~~~
[user@your-new-clone ~]$ sudo apt install packages
~~~
Qubes 4.0
---------
In Qubes R4.0, the minimal template is not configured for passwordless root.
To update it or install packages in it, run the following from a dom0 terminal window:
~~~
[user@dom0 ~]$ qvm-run -u root debian-9-minimal xterm
~~~
to open a root terminal in the template, from which you can use apt tools without sudo.
You will have to do this every time you want root access if you choose not to enable passwordless root.
If you want the usual Qubes `sudo ...` commands, open the root terminal using the above command, and in the root xterm window enter:
~~~
bash-4.4# apt install qubes-core-agent-passwordless-root
~~~
Optionally, check that this worked: from the GUI, open the minimal template's xterm and give the command:
~~~
[user@debian-9-minimal ~]$ sudo -l
~~~
which should give you output that includes the NOPASSWD keyword.
### Package table for Qubes 4.0
Use case | Description | Required steps
--- | --- | ---
**Standard utilities** | If you need the commonly used utilities | Install the following packages: `pciutils` `vim-minimal` `less` `psmisc` `gnome-keyring`
**Networking** | If you want networking | Install qubes-core-agent-networking
**Audio** | If you want sound from your VM... | Install `pulseaudio-qubes`
**FirewallVM** | You can use the minimal template as a template for a [FirewallVM](/doc/firewall/), like `sys-firewall` | Install `qubes-core-agent-networking`, and `nftables`. Also install `qubes-core-agent-dom0-updates` if you want to use a qube based on the template as an updateVM (normally sys-firewall).
**NetVM** | You can use this template as the basis for a NetVM such as `sys-net` | Install the following packages: `qubes-core-agent-networking`, `qubes-core-agent-network-manager`, and `nftables`.
**NetVM (extra firmware)** | If your network devices need extra packages for a network VM | Use the `lspci` command to identify the devices, then find the package that provides the necessary firmware and install it.
**Network utilities** | If you need utilities for debugging and analyzing network connections | Install the following packages: `tcpdump` `telnet` `nmap` `nmap-ncat`
**USB** | If you want to use this template as the basis for a [USB](/doc/usb/) qube such as `sys-usb` | Install `qubes-usb-proxy`. To use USB mouse or keyboard install `qubes-input-proxy-sender`.
**VPN** | You can use this template as basis for a [VPN](/doc/vpn/) qube | You may need to install network-manager VPN packages, depending on the VPN technology you'll be using. After creating a machine based on this template, follow the [VPN howto](/doc/vpn/#set-up-a-proxyvm-as-a-vpn-gateway-using-networkmanager) to configure it.
In Qubes 4.0, additional packages from the `qubes-core-agent` suite may be needed to make the customized minimal template work properly.
These packages are:
- `qubes-core-agent-nautilus`: This package provides integration with the Nautilus file manager (without it, items like "copy to VM/open in disposable VM" will not be shown in Nautilus).
- `qubes-core-agent-thunar`: This package provides integration with the thunar file manager (without it, items like "copy to VM/open in disposable VM" will not be shown in thunar).
- `qubes-core-agent-dom0-updates`: Script required to handle `dom0` updates. Any template on which the qube responsible for 'dom0' updates (e.g. `sys-firewall`) is based must contain this package.
- `qubes-menus`: Defines menu layout.
- `qubes-desktop-linux-common`: Contains icons and scripts to improve desktop experience.
Also, there are packages to provide additional services:
- `qubes-gpg-split`: For implementing split GPG.
- `qubes-u2f`: For implementing secure forwarding of U2F messages.
- `qubes-pdf-converter`: For implementing safe conversion of PDFs.
- `qubes-img-converter`: For implementing safe conversion of images.
- `qubes-snapd-helper`: If you want to use snaps in qubes.
- `qubes-thunderbird`: Additional tools for use in thunderbird.
- `qubes-app-shutdown-idle`: If you want qubes to automatically shutdown when idle.
- `qubes-mgmt-*`: If you want to use Salt management on the template and qubes.

Documentation on all of these can be found in the [docs](/doc).
You could, of course, use qubes-vm-recommended to automatically install many of these, but in that case you are well on the way to a standard Debian template.
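For instance, a few of the integration packages mentioned above could be added to the cloned minimal template in one go (a sketch; pick only the packages you actually need):

```shell
[user@your-new-clone ~]$ sudo apt install qubes-core-agent-nautilus qubes-menus qubes-desktop-linux-common
```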
@ -1,135 +0,0 @@
---
layout: doc
title: Upgrading the Debian 8 Template to Debian 9
permalink: /doc/template/debian/upgrade-8-to-9/
redirect_from:
- /doc/debian-template-upgrade-8/
- /en/doc/debian-template-upgrade-8/
- /doc/DebianTemplateUpgrade8/
- /wiki/DebianTemplateUpgrade8/
---
Upgrading the Debian 8 Template
===============================
Please note that if you installed packages from one of the testing repositories you must make sure that the repository is enabled in `/etc/apt/sources.list.d/qubes-r3.list` before attempting the upgrade.
Otherwise, your upgrade will [break](https://github.com/QubesOS/qubes-issues/issues/2418).
Summary: Upgrading a Debian 8 Template to Debian 9
--------------------------------------------------
[user@dom0 ~]$ qvm-clone debian-8 debian-9
[user@dom0 ~]$ qvm-run -a debian-9 gnome-terminal
[user@debian-9 ~]$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
[user@debian-9 ~]$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/qubes-r3.list
[user@debian-9 ~]$ sudo apt-get update && sudo apt-get dist-upgrade -y
[user@debian-9 ~]$ sudo apt-get autoremove
(Shut down TemplateVM by any normal means.)
[user@dom0 ~]$ qvm-trim-template debian-9
(Done.)
Detailed: Upgrading the Standard Debian 8 Template to Debian 9
--------------------------------------------------------------
These instructions will show you how to upgrade the standard Debian 8
TemplateVM to Debian 9. The same general procedure may be used to upgrade
any template based on the standard Debian 8 template.
1. Ensure the existing template is not running.
[user@dom0 ~]$ qvm-shutdown debian-8
2. Clone the existing template and start a terminal in the new template.
[user@dom0 ~]$ qvm-clone debian-8 debian-9
[user@dom0 ~]$ qvm-run -a debian-9 gnome-terminal
3. Update your apt repositories to use stretch instead of jessie
(This can be done manually with a text editor, but sed can be used to
automatically update the files.)
[user@debian-9 ~]$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
[user@debian-9 ~]$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/qubes-r3.list
4. Update the package lists and upgrade to Debian 9. During the process,
it will likely prompt you to overwrite two files, `qubes-r3.list` and
`pulse/client.conf`. `qubes-r3.list` can be overwritten, while
`pulse/client.conf` must be left as the currently installed version.
[user@debian-9 ~]$ sudo apt-get update && sudo apt-get dist-upgrade -y
5. Remove unnecessary packages that were previously installed
[user@debian-9 ~]$ sudo apt-get autoremove
6. Shutdown the new TemplateVM via dom0 command line or Qubes VM Manager;
[user@dom0 ~]$ qvm-shutdown debian-9
7. Trim the new template (see **Compacting the Upgraded Template** for details
and other options).
[user@dom0 ~]$ qvm-trim-template debian-9
8. (Recommended) [Switch everything that was set to the old template to the new
template.](/doc/templates/#how-to-switch-templates)
9. (Optional) Remove the old default template.
[user@dom0 ~]$ sudo yum remove qubes-template-debian-8
Compacting the Upgraded Template
--------------------------------
Neither `fstrim` nor the `discard` mount option works on the TemplateVM's root
filesystem, so when a file is removed in the template, space is not freed in
dom0. This means that the template will use about twice as much space as is
really necessary after upgrading.
If you have at least `qubes-core-dom0-2.1.68` installed or are on Qubes R3.0,
you can use the `qvm-trim-template` tool:
[user@dom0 ~]$ qvm-trim-template debian-9
If you do not have `qubes-core-dom0-2.1.68` or are on an older Qubes version, you can
compact the `root.img` manually. To do this, you will need about 15 GB (the
TemplateVM's max size plus the space actually used there) of free space in dom0.
1. Start the template and fill all the free space with zeros, for example
with:
[user@debian-9 ~]$ dd if=/dev/zero of=/var/tmp/zero
2. Wait for the "No space left on device" error. Then:
[user@debian-9 ~]$ rm -f /var/tmp/zero
3. Shut down the template and all VMs based on it. Then:
[user@dom0 ~]$ cd /var/lib/qubes/vm-templates/debian-9
[user@dom0 ~]$ cp --sparse=always root.img root.img.new
[user@dom0 ~]$ mv root.img.new root.img
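Why the sparse copy helps: `cp --sparse=always` detects runs of zero bytes and stores them as holes, so the zero-filled space is not allocated on disk. A scratch demonstration (file names under `/tmp` are illustrative; exact savings depend on the filesystem):

```shell
# Create a 4 MiB file of literal zero bytes, then make a sparse copy of it.
dd if=/dev/zero of=/tmp/root.img.demo bs=1M count=4 status=none
cp --sparse=always /tmp/root.img.demo /tmp/root.img.demo.sparse
# Same apparent size, but the sparse copy allocates (almost) no blocks.
du -k /tmp/root.img.demo /tmp/root.img.demo.sparse
```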
Additional Information
----------------------
Debian Stretch packages were first made available in the Qubes R3.1 repositories.
If sound is not working, you may need to enable the Qubes testing repository to get the testing version of qubes-gui-agent.
This can be done by editing the /etc/apt/sources.list.d/qubes-r3.list file and uncommenting the Qubes Updates Candidates repo.
User-initiated updates/upgrades may not run when a TemplateVM first starts.
This is due to a new Debian config setting that attempts to update automatically; it should be disabled with:
`sudo systemctl disable apt-daily.{service,timer}`.
Relevant Discussions
--------------------
* [Stretch Template Installation](https://groups.google.com/forum/#!topicsearchin/qubes-devel/debian$20stretch/qubes-devel/4rdayBF_UTc)
* [Stretch availability in 3.2](https://groups.google.com/forum/#!topicsearchin/qubes-devel/debian$20stretch/qubes-devel/cekPfBqQMOI)
* [Fixing sound in Debian Stretch](https://groups.google.com/forum/#!topic/qubes-users/JddCE54GFiU)
* [User apt commands blocked on startup](https://github.com/QubesOS/qubes-issues/issues/2621)


@@ -1,114 +1,164 @@
---
layout: doc
title: Upgrading the Debian Templates
title: Upgrading Debian TemplateVMs
permalink: /doc/template/debian/upgrade/
redirect_from:
- /doc/template/debian/upgrade-8-to-9/
- /doc/debian-template-upgrade-8/
- /en/doc/debian-template-upgrade-8/
- /doc/DebianTemplateUpgrade8/
- /wiki/DebianTemplateUpgrade8/
---
Upgrading Debian Templates
===============================
# Upgrading Debian TemplateVMs
In general, upgrading a Debian template follows the same process as [upgrading a native Debian system][upgrade].
You should consult the release notes for the target version. For Debian 10, see [here][release].
This page provides instructions for performing an in-place upgrade of an installed [Debian TemplateVM].
If you wish to install a new, unmodified Debian TemplateVM instead of upgrading a template that is already installed in your system, please see the [Debian TemplateVM] page instead.
Please note that if you installed packages from one of the testing repositories you must make sure that the repository is enabled in `/etc/apt/sources.list.d/qubes-r4.list` before attempting the upgrade.
Otherwise, your upgrade will [break](https://github.com/QubesOS/qubes-issues/issues/2418).
By default, Qubes uses code names in the apt sources files, although the templates are referred to by release number.
Check the code names for the templates, and ensure you are aware of any changes you have made in the repository definitions.
In this example, we are upgrading from debian-9 (stretch) to debian-10 (buster).
In general, upgrading a Debian TemplateVM follows the same process as [upgrading a native Debian system][upgrade].
Summary: Upgrading a Debian Template
--------------------------------------------------
## Summary instructions for Debian TemplateVMs
1. Clone the existing template.
2. Open a terminal in the new template.
3. Edit the sources.list files to refer to the new version.
4. Perform the upgrade:
```
sudo apt update
sudo apt upgrade
sudo apt dist-upgrade
```
5. Compact the template
6. Shutdown and start using the new template.
**Note:** The prompt on each line indicates where each command should be entered: `dom0`, `debian-<old>`, or `debian-<new>`, where `<old>` is the Debian version number *from* which you are upgrading, and `<new>` is the Debian version number *to* which you are upgrading.
Detailed instructions:
--------------------------------------------------------------
[user@dom0 ~]$ qvm-clone debian-<old> debian-<new>
[user@dom0 ~]$ qvm-run -a debian-<new> gnome-terminal
[user@debian-<new> ~]$ sudo sed -i 's/<old-name>/<new-name>/g' /etc/apt/sources.list
[user@debian-<new> ~]$ sudo sed -i 's/<old-name>/<new-name>/g' /etc/apt/sources.list.d/qubes-r4.list
[user@debian-<new> ~]$ sudo apt update
[user@debian-<new> ~]$ sudo apt upgrade
[user@debian-<new> ~]$ sudo apt dist-upgrade
[user@dom0 ~]$ qvm-shutdown debian-<new>
These instructions show you how to upgrade the standard Debian 9 TemplateVM to Debian 10.
**Recommended:** [Switch everything that was set to the old template to the new template.][switch]
## Detailed instructions for Debian TemplateVMs
These instructions will show you how to upgrade Debian TemplateVMs.
The same general procedure may be used to upgrade any template based on the standard Debian TemplateVM.
**Note:** The prompt on each line indicates where each command should be entered: `dom0`, `debian-<old>`, or `debian-<new>`, where `<old>` is the Debian version number *from* which you are upgrading, and `<new>` is the Debian version number *to* which you are upgrading.
1. Ensure the existing template is not running.
[user@dom0 ~]$ qvm-shutdown debian-9
[user@dom0 ~]$ qvm-shutdown debian-<old>
2. Clone the existing template and start a terminal in the new template.
[user@dom0 ~]$ qvm-clone debian-9 debian-10
[user@dom0 ~]$ qvm-run -a debian-10 gnome-terminal
[user@dom0 ~]$ qvm-clone debian-<old> debian-<new>
[user@dom0 ~]$ qvm-run -a debian-<new> gnome-terminal
3. Update your apt repositories to use buster instead of stretch
(This can be done manually with a text editor, but sed can be used to automatically update the files.)
3. Update your `apt` repositories to use the new release's code name instead of the old release's code name.
(This can be done manually with a text editor, but `sed` can be used to automatically update the files.)
```
$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list
$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/qubes-r4.list
```
[user@debian-<new> ~]$ sudo sed -i 's/<old-name>/<new-name>/g' /etc/apt/sources.list
[user@debian-<new> ~]$ sudo sed -i 's/<old-name>/<new-name>/g' /etc/apt/sources.list.d/qubes-r4.list
4. Update the package lists and upgrade to Debian 10.
During the process, it may prompt you to overwrite the file `qubes-r4.list`.
You should overwrite this file.
```
$ sudo apt update
$ sudo apt upgrade
$ sudo apt dist-upgrade
```
5. (optional) Remove unnecessary packages that were previously installed
4. Update the package lists and upgrade.
During the process, it may prompt you to overwrite the file `qubes-r4.list`.
You should overwrite this file.
`sudo apt-get autoremove`
[user@debian-<new> ~]$ sudo apt update
[user@debian-<new> ~]$ sudo apt upgrade
[user@debian-<new> ~]$ sudo apt dist-upgrade
6. Compact the template.
5. (Optional) Remove unnecessary packages that were previously installed.
7. Shutdown the new TemplateVM via dom0 command line or the Qube Manager.
[user@debian-<new> ~]$ sudo apt-get autoremove
8. (Recommended) [Switch everything that was set to the old template to the new template.][switch]
6. (Optional) Clean cached packages from `/var/cache/apt`.
9. (Optional) Change the default template to use the new template.
[user@debian-<new> ~]$ sudo apt-get clean
10. (Optional) Remove the old template using dom0 command line or the Qube Manager.
7. (Optional) Trim the new template.
(This should [no longer be necessary][template-notes], but it does not hurt.
Some users have [reported][5055] that it makes a difference.)
[user@debian-<new> ~]$ sudo fstrim -av
[user@dom0 ~]$ qvm-shutdown debian-<new>
[user@dom0 ~]$ qvm-start debian-<new>
[user@debian-<new> ~]$ sudo fstrim -av
8. Shut down the new TemplateVM.
[user@dom0 ~]$ qvm-shutdown debian-<new>
9. (Recommended) [Switch everything that was set to the old template to the new template.][switch]
10. (Optional) Make the new template the global default.
[user@dom0 ~]$ qubes-prefs --set debian-<new>
11. (Optional) Remove the old template.
(Make sure to type the name of the old template, not the new one.)
[user@dom0 ~]$ sudo dnf remove qubes-template-debian-<old>
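If you are unsure about the `sed` substitution in step 3, it can be rehearsed on a scratch copy first (the sample file below is made up; the real files live under `/etc/apt/`):

```shell
# Rehearse the codename substitution on a throwaway sample file.
printf 'deb https://deb.debian.org/debian stretch main\n' > /tmp/sources.list.sample
sed -i 's/stretch/buster/g' /tmp/sources.list.sample
cat /tmp/sources.list.sample
```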
Compacting the Upgraded Template
--------------------------------
## StandaloneVMs
1. Open a terminal in the template and run:
```
$ sudo fstrim -av
$ sudo shutdown -h now
```
2. Restart the template and run step 1 again.
This ensures that changes in the upgrade process are not stored in a difference file.
The procedure for upgrading a Debian [StandaloneVM] is the same as for a TemplateVM.
Additional Information
----------------------
User-initiated updates/upgrades may not run when a templateVM first starts.
This is due to a Debian config setting that attempts to update the system automatically.
You should disable this by opening a terminal in the template and running:
```
$ sudo systemctl disable apt-daily.{service,timer}
```
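The `{service,timer}` part of that command is ordinary bash brace expansion: the shell expands the single argument into the two unit names before `systemctl` runs. A quick check (using an explicit `bash -c` so it works regardless of your login shell):

```shell
# Brace expansion turns one argument into two unit names.
out=$(bash -c 'echo apt-daily.{service,timer}')
echo "$out"   # -> apt-daily.service apt-daily.timer
```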
## Release-specific notes
Relevant Discussions
--------------------
* [User apt commands blocked on startup][2621]
This section contains notes about upgrading to specific releases.
### Debian 10 ("Buster")
Please see [Debian's Buster upgrade instructions][buster].
### Debian 9 ("Stretch")
* The upgrade process may prompt you to overwrite two files: `qubes-r4.list` and `pulse/client.conf`.
`qubes-r4.list` can be overwritten, but `pulse/client.conf` must be left as the currently-installed version.
* If sound is not working, you may need to enable the Qubes testing repository to get the testing version of `qubes-gui-agent`.
This can be done by editing the `/etc/apt/sources.list.d/qubes-r4.list` file and uncommenting the `Qubes Updates Candidates` repo.
* User-initiated updates/upgrades may not run when a templateVM first starts.
This is due to a new Debian config setting that attempts to update automatically; it should be disabled with `sudo systemctl disable apt-daily.{service,timer}`.
Relevant discussions:
* [Stretch Template Installation](https://groups.google.com/forum/#!topicsearchin/qubes-devel/debian$20stretch/qubes-devel/4rdayBF_UTc)
* [Stretch availability in 3.2](https://groups.google.com/forum/#!topicsearchin/qubes-devel/debian$20stretch/qubes-devel/cekPfBqQMOI)
* [Fixing sound in Debian Stretch](https://groups.google.com/forum/#!topic/qubes-users/JddCE54GFiU)
* [User apt commands blocked on startup](https://github.com/QubesOS/qubes-issues/issues/2621)
Also see [Debian's Stretch upgrade instructions][stretch].
### Debian 8 ("Jessie")
Please see [Debian's Jessie upgrade instructions][jessie].
### End-of-life (EOL) releases
We strongly recommend against using any Debian release that has reached [end-of-life (EOL)].
## Additional information
* Please note that, if you installed packages from one of the testing repositories, you must make sure that the repository is enabled in `/etc/apt/sources.list.d/qubes-r4.list` before attempting the upgrade.
Otherwise, your upgrade will [break](https://github.com/QubesOS/qubes-issues/issues/2418).
* By default, Qubes uses code names in the `apt` sources files, although the templates are referred to by release number.
Check the code names for the templates, and ensure you are aware of any changes you have made in the repository definitions.
[Debian TemplateVM]: /doc/templates/debian/
[upgrade]: https://wiki.debian.org/DebianUpgrade
[2621]: https://github.com/QubesOS/qubes-issues/issues/2621
[switch]: /doc/templates/#how-to-switch-templates
[release]: https://www.debian.org/releases/buster/amd64/release-notes/ch-upgrading.en.html
[switch]: /doc/templates/#how-to-switch-templates
[switch]: /doc/templates/#switching
[jessie]: https://www.debian.org/releases/jessie/amd64/release-notes/ch-upgrading.en.html
[stretch]: https://www.debian.org/releases/stretch/amd64/release-notes/ch-upgrading.en.html
[buster]: https://www.debian.org/releases/buster/amd64/release-notes/ch-upgrading.en.html
[end-of-life (EOL)]: https://wiki.debian.org/DebianReleases#Production_Releases
[StandaloneVM]: /doc/standalone-and-hvm/
[template-notes]: /doc/templates/#important-notes
[5055]: https://github.com/QubesOS/qubes-issues/issues/5055


@@ -1,6 +1,6 @@
---
layout: doc
title: Debian Template
title: The Debian TemplateVM
permalink: /doc/templates/debian/
redirect_from:
- /doc/debian/
@@ -9,45 +9,62 @@ redirect_from:
- /wiki/Templates/Debian/
---
Debian template(s)
===============
# The Debian TemplateVM
If you would like to use the Debian Linux distribution in your qubes, you can install one of the available Debian templates.
Updates for these templates are provided by ITL and are signed by this key:
pub 4096R/47FD92FA 2014-07-27
Key fingerprint = 2D43 E932 54EE EA7C B31B 6A77 5E58 18AB 47FD 92FA
uid Qubes OS Debian Packages Signing Key
The key is already installed when you install a (signed) template package. You
can also obtain the key from the [git
repository](https://github.com/QubesOS/qubes-core-agent-linux/blob/master/misc/qubes-archive-keyring.gpg),
which is also integrity-protected using signed git tags.
If you want a debian-minimal template, it can be built using [Qubes-builder](https://www.qubes-os.org/doc/qubes-builder/) by selecting a +minimal flavour in setup, and then
make qubes-vm && make template
Installing
----------
Templates can be installed with the following command:
Debian 7 (wheezy) - obsolete/archive:
[user@dom0 ~]$ sudo qubes-dom0-update qubes-template-debian-7
Debian 8 (jessie) - oldoldstable:
[user@dom0 ~]$ sudo qubes-dom0-update qubes-template-debian-8
Debian 9 (stretch) - oldstable:
[user@dom0 ~]$ sudo qubes-dom0-update qubes-template-debian-9
The Debian [TemplateVM] is an officially [supported] TemplateVM in Qubes OS.
This page is about the standard (or "full") Debian TemplateVM.
For the minimal version, please see the [Minimal TemplateVMs] page.
There is also a [Qubes page on the Debian Wiki].
A Debian-10 template is currently available from the testing repository.
## Installing
To [install] a specific Debian TemplateVM that is not currently installed in your system, use the following command in dom0:
$ sudo qubes-dom0-update qubes-template-debian-XX
(Replace `XX` with the Debian version number of the template you wish to install.)
To reinstall a Debian TemplateVM that is already installed in your system, see [How to Reinstall a TemplateVM].
## After Installing
After installing a fresh Debian TemplateVM, we recommend performing the following steps:
1. [Update the TemplateVM].
2. [Switch any TemplateBasedVMs that are based on the old TemplateVM to the new one][switch].
3. If desired, [uninstall the old TemplateVM].
## Updating
Please see [Updating software in TemplateVMs].
## Upgrading
Please see [Upgrading Debian TemplateVMs].
## Release-specific notes
This section contains notes about specific Debian releases.
### Debian 10
Debian 10 templates are currently available from the testing repository.
Debian 10 (buster) - minimal:
[user@dom0 ~]$ sudo qubes-dom0-update --enablerepo=qubes-templates-itl-testing qubes-template-debian-10-minimal
Because this template was built *before* buster became stable, it cannot be updated without [manually accepting the change in status][5149].
Also, to install additional Qubes packages you will have to enable the qubes-testing repository.
Debian 10 (buster) - stable:
@@ -56,21 +73,8 @@ Debian 10 (buster) - stable:
Because this template was built *before* buster became stable, it cannot be updated without [manually accepting the change in status][5149].
Upgrading
---------
To upgrade your Debian TemplateVM, please consult the guide that corresponds to your situation:
* [Upgrading the Debian 8 Template to Debian 9](/doc/template/debian/upgrade-8-to-9/)
Known issues
------------
### Starting services
The Debian way (generally) is to start daemons if they are installed.
This means that if you install (say) ssh-server in a template, *all* the qubes that use that template will run an ssh server when they start. (They will, naturally, all have the same server key.) This may not be what you want.
@@ -106,7 +110,8 @@ The lesson is that you should carefully look at what is being installed to your
By default, templates in 4.0 only have a loopback interface.
Some packages will throw an error on installation in this situation. For example, Samba expects to be configured using a network interface post installation.
Some packages will throw an error on installation in this situation.
For example, Samba expects to be configured using a network interface post installation.
One solution is to add a dummy interface to allow the package to install correctly:
@@ -115,18 +120,17 @@ One solution is to add a dummy interface to allow the package to install correct
ip link set d0 up
Contributing
----------------
If you want to help in improving the template, feel free to [contribute](/wiki/ContributingHowto).
More information
----------------
* [Debian wiki](https://wiki.debian.org/Qubes)
[stretch]: /doc/template/debian/upgrade-8-to-9/
[TemplateVM]: /doc/templates/
[Minimal TemplateVMs]: /doc/templates/minimal/
[Qubes page on the Debian Wiki]: https://wiki.debian.org/Qubes
[end-of-life]: https://wiki.debian.org/DebianReleases#Production_Releases
[supported]: /doc/supported-versions/#templatevms
[How to Reinstall a TemplateVM]: /doc/reinstall-template/
[Update the TemplateVM]: /doc/software-update-vm/
[switch]: /doc/templates/#switching
[uninstall the old TemplateVM]: /doc/templates/#uninstalling
[Updating software in TemplateVMs]: /doc/software-update-domu/#updating-software-in-templatevms
[Upgrading Debian TemplateVMs]: /doc/template/debian/upgrade/
[5149]: https://github.com/QubesOS/qubes-issues/issues/5149
[install]: /doc/templates/#installing


@@ -1,134 +0,0 @@
---
layout: doc
title: The Fedora Minimal TemplateVM
permalink: /doc/templates/fedora-minimal/
redirect_from:
- /doc/fedora-minimal/
- /en/doc/templates/fedora-minimal/
- /doc/Templates/FedoraMinimal/
- /wiki/Templates/FedoraMinimal/
---
The Fedora Minimal TemplateVM
=============================
The Fedora Minimal TemplateVM (`fedora-minimal`) only weighs about 600 MB compressed (1.6 GB on disk) and has only the most vital packages installed, including a minimal X and xterm installation.
The sections below contain instructions for using the template and provide some examples for common use cases.
Important
---------
1. The Fedora Minimal template is intended only for advanced users.
If you encounter problems with the Fedora Minimal template, we recommend that you use the [default Fedora template] instead.
2. If something works with the default Fedora template but not the Fedora Minimal template, this is most likely due to user error (e.g., a missing package or misconfiguration) rather than a bug.
In such cases, you should write to [qubes-users] to ask for help rather than filing a bug report, then [contribute what you learn to the documentation][doc-guidelines].
3. The Fedora Minimal template is intentionally *minimal*.
[Do not ask for your favorite package to be added to the minimal template by default.][pref-default]
Installation
------------
The Fedora Minimal template can be installed with the following command (where `XX` is your desired version number):
~~~
[user@dom0 ~]$ sudo qubes-dom0-update qubes-template-fedora-XX-minimal
~~~
The download may take a while depending on your connection speed.
Customization
-------------
It is highly recommended to clone the original template and make any changes in the clone instead of the original template.
The following command clones the template.
Replace `XX` with your installed Fedora Minimal version number and `your-new-clone` with your desired clone name.
~~~
[user@dom0 ~]$ qvm-clone fedora-XX-minimal your-new-clone
~~~
You must start the clone in order to customize it.
Customizing the template for specific use cases normally only requires installing additional packages.
The following list provides an overview of which packages are needed for which purpose.
As usual, the required packages are to be installed in the running template with the following command (replace `packages` with a space-delimited list of packages to be installed):
~~~
[user@your-new-clone ~]$ sudo dnf install packages
~~~
- Commonly used utilities: `pciutils` `vim-minimal` `less` `psmisc` `gnome-keyring`.
- Audio: `pulseaudio-qubes`.
- [FirewallVM](/doc/firewall/), such as the template for `sys-firewall`: at least `qubes-core-agent-networking` and `iproute`, and also `qubes-core-agent-dom0-updates` if you want to use it as the `UpdateVM` (which is normally `sys-firewall`).
- NetVM, such as the template for `sys-net`: `qubes-core-agent-networking` `qubes-core-agent-network-manager` `NetworkManager-wifi` `network-manager-applet` `wireless-tools` `dejavu-sans-fonts` `notification-daemon` `gnome-keyring` `polkit` `@hardware-support`.
If your network devices need extra packages for the template to work as a network VM, use the `lspci` command to identify the devices, then run `dnf search firmware` (replace `firmware` with the appropriate device identifier) to find the needed packages and then install them.
If you need utilities for debugging and analyzing network connections, install `tcpdump` `telnet` `nmap` `nmap-ncat`.
- [USB qube](/doc/usb-qubes/), such as the template for `sys-usb`: `qubes-input-proxy-sender`.
- [VPN qube](/doc/vpn/): Use the `dnf search "NetworkManager VPN plugin"` command to look up the VPN packages you need, based on the VPN technology you'll be using, and install them.
Some GNOME related packages may be needed as well.
After creation of a machine based on this template, follow the [VPN instructions](/doc/vpn/#set-up-a-proxyvm-as-a-vpn-gateway-using-networkmanager) to configure it.
You may also wish to consider additional packages from the `qubes-core-agent` suite:
- `qubes-core-agent-qrexec`: Qubes qrexec agent. Installed by default.
- `qubes-core-agent-systemd`: Qubes unit files for SystemD init style. Installed by default.
- `qubes-core-agent-passwordless-root`, `polkit`: By default, the Fedora Minimal template doesn't have passwordless root. These two packages enable this feature.
- `qubes-core-agent-nautilus`: This package provides integration with the Nautilus file manager (without it things like "copy to VM/open in disposable VM" will not be shown in Nautilus).
- `qubes-core-agent-sysvinit`: Qubes unit files for SysV init style or upstart.
- `qubes-core-agent-networking`: Networking support. Required for general network access and particularly if the template is to be used for a `sys-net` or `sys-firewall` VM.
- `qubes-core-agent-network-manager`: Integration for NetworkManager. Useful if the template is to be used for a `sys-net` VM.
- `network-manager-applet`: Useful (together with `dejavu-sans-fonts` and `notification-daemon`) to have a system tray icon if the template is to be used for a `sys-net` VM.
- `qubes-core-agent-dom0-updates`: Script required to handle `dom0` updates. Any template on which the VM responsible for 'dom0' updates (e.g. `sys-firewall`) is based must contain this package.
- `qubes-usb-proxy`: Required if the template is to be used for a USB qube (`sys-usb`) or for any destination qube to which USB devices are to be attached (e.g `sys-net` if using USB network adapter).
- `pulseaudio-qubes`: Needed to have audio on the template VM.
See [here][customization] for further information on customizing `fedora-minimal`.
Passwordless root
-----------------
It is an intentional design choice for passwordless root to be optional.
Since the Fedora Minimal template is *minimal*, it is not configured for passwordless root by default.
To update or install packages to it, from a dom0 terminal window:
~~~
[user@dom0 ~]$ qvm-run -u root fedora-29-minimal xterm
~~~
to open a root terminal in the template, from which you can use `dnf` without `sudo`. You will have to do this every time if you choose not to enable passwordless root.
If you want the usual Qubes `sudo dnf ...` commands, open the root terminal just this once using the above command, and in the root xterm window enter
~~~
bash-4.4# dnf install qubes-core-agent-passwordless-root polkit
~~~
Optionally, check that this worked: from the GUI, open the minimal template's xterm and give the command
~~~
[user@fed-min-clone ~]$ sudo -l
~~~
which should give you output that includes the `NOPASSWD` keyword.
Logging
-------
The `rsyslog` logging service is not installed by default, as all logging is instead being handled by the `systemd` journal.
Users requiring the `rsyslog` service should install it manually.
To access the `journald` log, use the `journalctl` command.
[default Fedora template]: /doc/templates/fedora/
[qubes-users]: /support/#qubes-users
[doc-guidelines]: /doc/doc-guidelines/
[pref-default]: /faq/#could-you-please-make-my-preference-the-default
[customization]: /doc/fedora-minimal-template-customization/


@@ -1,208 +0,0 @@
---
layout: doc
title: Upgrading the Fedora 26 Template to Fedora 27
permalink: /doc/template/fedora/upgrade-26-to-27/
redirect_from:
- /doc/fedora-template-upgrade-26/
- /en/doc/fedora-template-upgrade-26/
- /doc/FedoraTemplateUpgrade26/
- /wiki/FedoraTemplateUpgrade26/
---
Upgrading the Fedora 26 Template to Fedora 27
=============================================
This page provides instructions for performing an in-place upgrade of an
installed Fedora 26 [TemplateVM] to Fedora 27. If you wish to install a new,
unmodified Fedora 27 template instead of upgrading a template that is already
installed in your system, please see the [Fedora TemplateVM] page instead.
These instructions can also be used to upgrade a Fedora 25 TemplateVM to
Fedora 27. Simply start by cloning `fedora-25` instead of `fedora-26` in the
instructions below.
Important information regarding RPM Fusion repos
------------------------------------------------
If your RPM Fusion repositories are **disabled** when you upgrade a TemplateVM from Fedora 26 to 27, all RPM Fusion packages and RPM Fusion repo definitions will be removed from that TemplateVM.
If your RPM Fusion repositories are **enabled** when upgrading, all RPM Fusion packages and repo definitions will be retained and updated as expected.
For most users, this behavior should not cause a problem, since a TemplateVM in which the RPM Fusion repos are disabled is probably a TemplateVM in which you never wish to use them.
However, if you wish to have the RPM Fusion repo definitions after upgrading in a TemplateVM in which they are currently disabled, you may wish to temporarily enable them prior to upgrading or manually create, copy, or download them after upgrading.
Instructions
------------
### Summary: Upgrading the Standard Fedora 26 Template to Fedora 27 ###
**Note:** The prompt on each line indicates where each command should be entered
(`@dom0` or `@fedora-27`).
[user@dom0 ~]$ qvm-clone fedora-26 fedora-27
[user@dom0 ~]$ truncate -s 5GB /var/tmp/template-upgrade-cache.img
[user@dom0 ~]$ qvm-run -a fedora-27 gnome-terminal
[user@dom0 ~]$ dev=$(sudo losetup -f --show /var/tmp/template-upgrade-cache.img)
[user@dom0 ~]$ qvm-block attach fedora-27 dom0:${dev##*/}
[user@fedora-27 ~]$ sudo mkfs.ext4 /dev/xvdi
[user@fedora-27 ~]$ sudo mount /dev/xvdi /mnt/removable
[user@fedora-27 ~]$ sudo dnf clean all
[user@fedora-27 ~]$ sudo dnf --releasever=27 --setopt=cachedir=/mnt/removable --best --allowerasing distro-sync
[user@fedora-27 ~]$ sudo fstrim -v /
(Shut down TemplateVM by any normal means.)
[user@dom0 ~]$ sudo losetup -d $dev
[user@dom0 ~]$ rm /var/tmp/template-upgrade-cache.img
(Optional cleanup: Switch everything over to the new template and delete the old
one. See instructions below for details.)
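Two side notes on the summary above, demonstrated on scratch files (paths and sizes are illustrative): `truncate -s` creates a sparse cache image whose apparent size costs no real disk space until it is filled, and `${dev##*/}` is plain shell parameter expansion that strips the directory part of the loop device path, leaving the name `qvm-block` expects.

```shell
# 1) A sparse cache image: large apparent size, (almost) no blocks allocated.
truncate -s 100M /tmp/template-upgrade-cache.demo
stat -c '%s bytes apparent, %b blocks allocated' /tmp/template-upgrade-cache.demo

# 2) ${dev##*/} removes the longest prefix matching '*/', i.e. the directory.
dev=/dev/loop0
echo "${dev##*/}"   # -> loop0
```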
### Detailed: Upgrading the Standard Fedora 26 Template to Fedora 27 ###
These instructions will show you how to upgrade the standard Fedora 26
TemplateVM to Fedora 27. The same general procedure may be used to upgrade any
template based on the standard Fedora 26 template.
**Note:** The command-line prompt on each line indicates where each command
should be entered (`@dom0` or `@fedora-27`).
1. Ensure the existing template is not running.
[user@dom0 ~]$ qvm-shutdown fedora-26
2. Clone the existing template and start a terminal in the new template.
[user@dom0 ~]$ qvm-clone fedora-26 fedora-27
[user@dom0 ~]$ qvm-run -a fedora-27 gnome-terminal
3. Attempt the upgrade process in the new template.
[user@fedora-27 ~]$ sudo dnf clean all
[user@fedora-27 ~]$ sudo dnf --releasever=27 distro-sync --best --allowerasing
**Note:** `dnf` might ask you to approve importing a new package signing
key. For example, you might see a prompt like this one:
warning: /var/cache/dnf/fedora-d02ca361e1b58501/packages/python2-babel-2.3.4-1.fc27.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f5282ee4: NOKEY
Importing GPG key 0xF5282EE4:
Userid : "Fedora (27) <fedora-27-primary@fedoraproject.org>"
Fingerprint: 860E 19B0 AFA8 00A1 7518 81A6 F55E 7430 F528 2EE4
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-27-x86_64
Is this ok [y/N]:
This key was already checked when it was installed (notice that the "From"
line refers to a location on your local disk), so you can safely say yes to
this prompt.
**Note:** If you encounter no errors, proceed to step 4. If you do encounter
errors, see the next two points first.
* If `dnf` reports that you do not have enough free disk space to proceed
with the upgrade process, create an empty file in dom0 to use as a cache
and attach it to the template as a virtual disk.
[user@dom0 ~]$ truncate -s 5GB /var/tmp/template-upgrade-cache.img
[user@dom0 ~]$ dev=$(sudo losetup -f --show /var/tmp/template-upgrade-cache.img)
[user@dom0 ~]$ qvm-block attach fedora-27 dom0:${dev##*/}
Then reattempt the upgrade process, but this time use the virtual disk
as a cache.
[user@fedora-27 ~]$ sudo mkfs.ext4 /dev/xvdi
[user@fedora-27 ~]$ sudo mount /dev/xvdi /mnt/removable
[user@fedora-27 ~]$ sudo dnf clean all
[user@fedora-27 ~]$ sudo dnf --releasever=27 --setopt=cachedir=/mnt/removable --best --allowerasing distro-sync
If this attempt is successful, proceed to step 4.
* `dnf` may complain:
At least X MB more space needed on the / filesystem.
In this case, one option is to [resize the TemplateVM's disk
image][resize-disk-image] before reattempting the upgrade process.
(See [Additional Information] below for other options.)
4. Trim the new template.
[user@fedora-27 ~]$ sudo fstrim -v /
5. Shut down the new TemplateVM (from the command-line or Qubes VM Manager).
[user@dom0 ~]$ qvm-shutdown fedora-27
6. Remove the cache file, if you created one.
[user@dom0 ~]$ sudo losetup -d $dev
[user@dom0 ~]$ rm /var/tmp/template-upgrade-cache.img
7. (Recommended) [Switch everything that was set to the old template to the new
template.][switching]
8. (Optional) Remove the old template. (Make sure to type `fedora-26`, not
`fedora-27`.)
[user@dom0 ~]$ sudo dnf remove qubes-template-fedora-26
### Upgrading StandaloneVMs ###
The procedure for upgrading a StandaloneVM from Fedora 26 to Fedora 27 is the
same as for a TemplateVM.
### Summary: Upgrading the Minimal Fedora 26 Template to Fedora 27 ###
**Note:** The prompt on each line indicates where each command should be entered
(`@dom0` or `@fedora-27`).
[user@dom0 ~]$ qvm-clone fedora-26-minimal fedora-27-minimal
[user@dom0 ~]$ qvm-run -u root -a fedora-27-minimal xterm
[root@fedora-27-minimal ~]# dnf clean all
[root@fedora-27-minimal ~]# dnf --releasever=27 --best --allowerasing distro-sync
[root@fedora-27-minimal ~]# fstrim -v /
(Shut down TemplateVM by any normal means.)
(If you encounter insufficient space issues, you may need to use the methods
described for the standard template above.)
Additional Information
----------------------
As mentioned above, you may encounter the following `dnf` error:
At least X MB more space needed on the / filesystem.
In this case, you have several options:
1. [Increase the TemplateVM's disk image size][resize-disk-image].
This is the solution mentioned in the main instructions above.
2. Delete files in order to free up space. One way to do this is by
   uninstalling packages. You may then reinstall them after you finish the
   upgrade process, if desired. However, you may end up having to increase
   the disk image size anyway (see previous option).
3. Do the upgrade in parts, e.g., by using package groups. (First upgrade
`@core` packages, then the rest.)
4. Do not perform an in-place upgrade. Instead, simply download and install a
new template package, then redo all desired template modifications.
With regard to the last option, here are some useful messages from the
mailing list which also apply to TemplateVM management and migration in
general:
* [Marek](https://groups.google.com/d/msg/qubes-users/mCXkxlACILQ/dS1jbLRP9n8J)
* [Jason M](https://groups.google.com/d/msg/qubes-users/mCXkxlACILQ/5PxDfI-RKAsJ)
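If you choose option 3 above (upgrading in parts via package groups), the sequence might look like the following sketch. This is only an illustration: it assumes your version of `dnf` accepts a group spec such as `@core` wherever a package spec is expected, so check `dnf help distro-sync` first.

    [user@fedora-27 ~]$ sudo dnf --releasever=27 --best --allowerasing distro-sync @core
    [user@fedora-27 ~]$ sudo dnf --releasever=27 --best --allowerasing distro-sync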
[TemplateVM]: /doc/templates/
[Fedora TemplateVM]: /doc/templates/fedora/
[resize-disk-image]: /doc/resize-disk-image/
[Additional Information]: #additional-information
[Compacting the Upgraded Template]: #compacting-the-upgraded-template
[switching]: /doc/templates/#how-to-switch-templates
[DispVM]: /doc/disposablevm/

---
layout: doc
title: Upgrading the Fedora 27 Template to Fedora 28
permalink: /doc/template/fedora/upgrade-27-to-28/
redirect_from:
- /doc/fedora-template-upgrade-27/
- /en/doc/fedora-template-upgrade-27/
- /doc/FedoraTemplateUpgrade27/
- /wiki/FedoraTemplateUpgrade27/
---
Upgrading the Fedora 27 Template to Fedora 28
=============================================
This page provides instructions for performing an in-place upgrade of an
installed Fedora 27 [TemplateVM] to Fedora 28. If you wish to install a new,
unmodified Fedora 28 template instead of upgrading a template that is already
installed in your system, please see the [Fedora TemplateVM] page instead.
These instructions can also be used to upgrade a Fedora 26 TemplateVM to
Fedora 28. Simply start by cloning `fedora-26` instead of `fedora-27` in the
instructions below.
Important information regarding RPM Fusion repos
------------------------------------------------
If your RPM Fusion repositories are **disabled** when you upgrade a TemplateVM from Fedora 27 to 28, all RPM Fusion packages and RPM Fusion repo definitions will be removed from that TemplateVM.
If your RPM Fusion repositories are **enabled** when upgrading, all RPM Fusion packages and repo definitions will be retained and updated as expected.
For most users, this behavior should not cause a problem, since a TemplateVM in which the RPM Fusion repos are disabled is probably a TemplateVM in which you never wish to use them.
However, if you wish to have the RPM Fusion repo definitions after upgrading in a TemplateVM in which they are currently disabled, you may wish to temporarily enable them prior to upgrading or manually create, copy, or download them after upgrading.
Workaround for `python2-xcffib` upgrade error
---------------------------------------------
When attempting to upgrade from Fedora 26 or 27 to Fedora 28, you may encounter an error similar to this:
Error: Transaction check error:
file /usr/lib/python2.7/site-packages/xcffib-0.5.1-py2.7.egg-info/PKG-INFO from install of python2-xcffib-0.5.1-5.fc28.noarch conflicts with file from package python-xcffib-0.5.1-1.fc26.noarch
file /usr/lib/python2.7/site-packages/xcffib/_ffi.pyc from install of python2-xcffib-0.5.1-5.fc28.noarch conflicts with file from package python-xcffib-0.5.1-1.fc26.noarch
file /usr/lib/python2.7/site-packages/xcffib/_ffi.pyo from install of python2-xcffib-0.5.1-5.fc28.noarch conflicts with file from package python-xcffib-0.5.1-1.fc26.noarch
file /usr/lib/python2.7/site-packages/xcffib/xinput.pyc from install of python2-xcffib-0.5.1-5.fc28.noarch conflicts with file from package python-xcffib-0.5.1-1.fc26.noarch
file /usr/lib/python2.7/site-packages/xcffib/xinput.pyo from install of python2-xcffib-0.5.1-5.fc28.noarch conflicts with file from package python-xcffib-0.5.1-1.fc26.noarch
To work around this error:
1. Upgrade while excluding the problematic packages by using `-x python2-xcffib -x qubes-gui-vm -x qubes-gui-agent`.
2. Upgrade `python2-xcffib` using `sudo dnf swap python-xcffib python2-xcffib`.
(This should automatically upgrade the other excluded packages too.)
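Concretely, the two workaround steps might look like this (shown here for an upgrade to Fedora 28):

    [user@fedora-28 ~]$ sudo dnf --releasever=28 distro-sync --best --allowerasing -x python2-xcffib -x qubes-gui-vm -x qubes-gui-agent
    [user@fedora-28 ~]$ sudo dnf swap python-xcffib python2-xcffib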
Instructions
------------
### Summary: Upgrading the Standard Fedora 27 Template to Fedora 28 ###
**Note:** The prompt on each line indicates where each command should be entered
(`@dom0` or `@fedora-28`).
[user@dom0 ~]$ qvm-clone fedora-27 fedora-28
[user@dom0 ~]$ truncate -s 5GB /var/tmp/template-upgrade-cache.img
[user@dom0 ~]$ qvm-run -a fedora-28 gnome-terminal
[user@dom0 ~]$ dev=$(sudo losetup -f --show /var/tmp/template-upgrade-cache.img)
[user@dom0 ~]$ qvm-block attach fedora-28 dom0:${dev##*/}
[user@fedora-28 ~]$ sudo mkfs.ext4 /dev/xvdi
[user@fedora-28 ~]$ sudo mount /dev/xvdi /mnt/removable
[user@fedora-28 ~]$ sudo dnf clean all
[user@fedora-28 ~]$ sudo dnf --releasever=28 --setopt=cachedir=/mnt/removable --best --allowerasing distro-sync
[user@fedora-28 ~]$ sudo fstrim -v /
(Shut down TemplateVM by any normal means.)
[user@dom0 ~]$ sudo losetup -d $dev
[user@dom0 ~]$ rm /var/tmp/template-upgrade-cache.img
(Optional cleanup: Switch everything over to the new template and delete the old
one. See instructions below for details.)
### Detailed: Upgrading the Standard Fedora 27 Template to Fedora 28 ###
These instructions will show you how to upgrade the standard Fedora 27
TemplateVM to Fedora 28. The same general procedure may be used to upgrade any
template based on the standard Fedora 27 template.
**Note:** The command-line prompt on each line indicates where each command
should be entered (`@dom0` or `@fedora-28`).
1. Ensure the existing template is not running.
[user@dom0 ~]$ qvm-shutdown fedora-27
2. Clone the existing template and start a terminal in the new template.
[user@dom0 ~]$ qvm-clone fedora-27 fedora-28
[user@dom0 ~]$ qvm-run -a fedora-28 gnome-terminal
3. Attempt the upgrade process in the new template.
[user@fedora-28 ~]$ sudo dnf clean all
[user@fedora-28 ~]$ sudo dnf --releasever=28 distro-sync --best --allowerasing
**Note:** `dnf` might ask you to approve importing a new package signing
key. For example, you might see a prompt like this one:
warning: /var/cache/dnf/fedora-d02ca361e1b58501/packages/python2-babel-2.3.4-1.fc28.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 9db62fb1: NOKEY
Importing GPG key 0x9DB62FB1:
Userid : "Fedora (28) <fedora-28-primary@fedoraproject.org>"
Fingerprint: 128C F232 A937 1991 C8A6 5695 E08E 7E62 9DB6 2FB1
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-x86_64
Is this ok [y/N]:
This key was already checked when it was installed (notice that the "From"
line refers to a location on your local disk), so you can safely say yes to
this prompt.
**Note:** If you encounter no errors, proceed to step 4. If you do encounter
errors, see the next two points first.
* If `dnf` reports that you do not have enough free disk space to proceed
with the upgrade process, create an empty file in dom0 to use as a cache
and attach it to the template as a virtual disk.
[user@dom0 ~]$ truncate -s 5GB /var/tmp/template-upgrade-cache.img
[user@dom0 ~]$ dev=$(sudo losetup -f --show /var/tmp/template-upgrade-cache.img)
[user@dom0 ~]$ qvm-block attach fedora-28 dom0:${dev##*/}
Then reattempt the upgrade process, but this time use the virtual disk
as a cache.
[user@fedora-28 ~]$ sudo mkfs.ext4 /dev/xvdi
[user@fedora-28 ~]$ sudo mount /dev/xvdi /mnt/removable
[user@fedora-28 ~]$ sudo dnf clean all
[user@fedora-28 ~]$ sudo dnf --releasever=28 --setopt=cachedir=/mnt/removable --best --allowerasing distro-sync
If this attempt is successful, proceed to step 4.
* `dnf` may complain:
At least X MB more space needed on the / filesystem.
In this case, one option is to [resize the TemplateVM's disk
image][resize-disk-image] before reattempting the upgrade process.
(See [Additional Information] below for other options.)
4. Check that you are on the correct (new) Fedora release.
[user@fedora-28 ~]$ cat /etc/fedora-release
5. Trim the new template.
[user@fedora-28 ~]$ sudo fstrim -v /
6. Shut down the new TemplateVM (from the command-line or Qubes VM Manager).
[user@dom0 ~]$ qvm-shutdown fedora-28
7. Remove the cache file, if you created one.
[user@dom0 ~]$ sudo losetup -d $dev
[user@dom0 ~]$ rm /var/tmp/template-upgrade-cache.img
8. (Recommended) [Switch everything that was set to the old template to the new
template.][switching]
9. (Optional) Remove the old template. (Make sure to type `fedora-27`, not
`fedora-28`.)
[user@dom0 ~]$ sudo dnf remove qubes-template-fedora-27
### Upgrading StandaloneVMs ###
The procedure for upgrading a StandaloneVM from Fedora 27 to Fedora 28 is the
same as for a TemplateVM.
### Summary: Upgrading the Minimal Fedora 27 Template to Fedora 28 ###
**Note:** The prompt on each line indicates where each command should be entered
(`@dom0` or `@fedora-28`).
[user@dom0 ~]$ qvm-clone fedora-27-minimal fedora-28-minimal
[user@dom0 ~]$ qvm-run -u root -a fedora-28-minimal xterm
[root@fedora-28-minimal ~]# dnf clean all
[root@fedora-28-minimal ~]# dnf --releasever=28 --best --allowerasing distro-sync
[root@fedora-28-minimal ~]# fstrim -v /
(Shut down TemplateVM by any normal means.)
(If you encounter insufficient space issues, you may need to use the methods
described for the standard template above.)
Additional Information
----------------------
As mentioned above, you may encounter the following `dnf` error:
At least X MB more space needed on the / filesystem.
In this case, you have several options:
1. [Increase the TemplateVM's disk image size][resize-disk-image].
This is the solution mentioned in the main instructions above.
2. Delete files in order to free up space. One way to do this is by
   uninstalling packages. You may then reinstall them after you finish the
   upgrade process, if desired. However, you may end up having to increase
   the disk image size anyway (see previous option).
3. Do the upgrade in parts, e.g., by using package groups. (First upgrade
`@core` packages, then the rest.)
4. Do not perform an in-place upgrade. Instead, simply download and install a
new template package, then redo all desired template modifications.
With regard to the last option, here are some useful messages from the
mailing list which also apply to TemplateVM management and migration in
general:
* [Marek](https://groups.google.com/d/msg/qubes-users/mCXkxlACILQ/dS1jbLRP9n8J)
* [Jason M](https://groups.google.com/d/msg/qubes-users/mCXkxlACILQ/5PxDfI-RKAsJ)
[TemplateVM]: /doc/templates/
[Fedora TemplateVM]: /doc/templates/fedora/
[resize-disk-image]: /doc/resize-disk-image/
[Additional Information]: #additional-information
[Compacting the Upgraded Template]: #compacting-the-upgraded-template
[switching]: /doc/templates/#how-to-switch-templates
[DispVM]: /doc/disposablevm/

---
layout: doc
title: Upgrading the Fedora 28 Template to Fedora 29
permalink: /doc/template/fedora/upgrade-28-to-29/
redirect_from:
- /doc/fedora-template-upgrade-28/
- /en/doc/fedora-template-upgrade-28/
- /doc/FedoraTemplateUpgrade28/
- /wiki/FedoraTemplateUpgrade28/
---
Upgrading the Fedora 28 Template to Fedora 29
=============================================
This page provides instructions for performing an in-place upgrade of an
installed Fedora 28 [TemplateVM] to Fedora 29. If you wish to install a new,
unmodified Fedora 29 template instead of upgrading a template that is already
installed in your system, please see the [Fedora TemplateVM] page instead.
These instructions can also be used to upgrade a Fedora 26 TemplateVM to
Fedora 29. Simply start by cloning `fedora-26` instead of `fedora-28` in the
instructions below.
Important information regarding RPM Fusion repos
------------------------------------------------
If your RPM Fusion repositories are **disabled** when you upgrade a TemplateVM from Fedora 28 to 29, all RPM Fusion packages and RPM Fusion repo definitions will be removed from that TemplateVM.
If your RPM Fusion repositories are **enabled** when upgrading, all RPM Fusion packages and repo definitions will be retained and updated as expected.
For most users, this behavior should not cause a problem, since a TemplateVM in which the RPM Fusion repos are disabled is probably a TemplateVM in which you never wish to use them.
However, if you wish to have the RPM Fusion repo definitions after upgrading in a TemplateVM in which they are currently disabled, you may wish to temporarily enable them prior to upgrading or manually create, copy, or download them after upgrading.
Instructions
------------
### Summary: Upgrading the Standard Fedora 28 Template to Fedora 29 ###
**Note:** The prompt on each line indicates where each command should be entered
(`@dom0` or `@fedora-29`).
[user@dom0 ~]$ qvm-clone fedora-28 fedora-29
[user@dom0 ~]$ truncate -s 5GB /var/tmp/template-upgrade-cache.img
[user@dom0 ~]$ qvm-run -a fedora-29 gnome-terminal
[user@dom0 ~]$ dev=$(sudo losetup -f --show /var/tmp/template-upgrade-cache.img)
[user@dom0 ~]$ qvm-block attach fedora-29 dom0:${dev##*/}
[user@fedora-29 ~]$ sudo mkfs.ext4 /dev/xvdi
[user@fedora-29 ~]$ sudo mount /dev/xvdi /mnt/removable
[user@fedora-29 ~]$ sudo dnf clean all
[user@fedora-29 ~]$ sudo dnf --releasever=29 --setopt=cachedir=/mnt/removable --best --allowerasing distro-sync
[user@fedora-29 ~]$ sudo fstrim -v /
(Shut down TemplateVM by any normal means.)
[user@dom0 ~]$ sudo losetup -d $dev
[user@dom0 ~]$ rm /var/tmp/template-upgrade-cache.img
(Optional cleanup: Switch everything over to the new template and delete the old
one. See instructions below for details.)
### Detailed: Upgrading the Standard Fedora 28 Template to Fedora 29 ###
These instructions will show you how to upgrade the standard Fedora 28
TemplateVM to Fedora 29. The same general procedure may be used to upgrade any
template based on the standard Fedora 28 template.
**Note:** The command-line prompt on each line indicates where each command
should be entered (`@dom0` or `@fedora-29`).
1. Ensure the existing template is not running.
[user@dom0 ~]$ qvm-shutdown fedora-28
2. Clone the existing template and start a terminal in the new template.
[user@dom0 ~]$ qvm-clone fedora-28 fedora-29
[user@dom0 ~]$ qvm-run -a fedora-29 gnome-terminal
3. Attempt the upgrade process in the new template.
[user@fedora-29 ~]$ sudo dnf clean all
[user@fedora-29 ~]$ sudo dnf --releasever=29 distro-sync --best --allowerasing
**Note:** `dnf` might ask you to approve importing a new package signing
key. For example, you might see a prompt like this one:
warning: /mnt/removable/updates-0b4cc238d1aa4ffe/packages/kernel-4.18.17-300.fc29.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 429476b4: NOKEY
Importing GPG key 0x429476B4:
Userid : "Fedora 29 (29) <fedora-29@fedoraproject.org>"
Fingerprint: 5A03 B4DD 8254 ECA0 2FDA 1637 A20A A56B 4294 76B4
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-29-x86_64
Is this ok [y/N]: y
This key was already checked when it was installed (notice that the "From"
line refers to a location on your local disk), so you can safely say yes to
this prompt.
**Note:** If you encounter no errors, proceed to step 4. If you do encounter
errors, see the next two points first.
* If `dnf` reports that you do not have enough free disk space to proceed
with the upgrade process, create an empty file in dom0 to use as a cache
and attach it to the template as a virtual disk.
[user@dom0 ~]$ truncate -s 5GB /var/tmp/template-upgrade-cache.img
[user@dom0 ~]$ dev=$(sudo losetup -f --show /var/tmp/template-upgrade-cache.img)
[user@dom0 ~]$ qvm-block attach fedora-29 dom0:${dev##*/}
Then reattempt the upgrade process, but this time use the virtual disk
as a cache.
[user@fedora-29 ~]$ sudo mkfs.ext4 /dev/xvdi
[user@fedora-29 ~]$ sudo mount /dev/xvdi /mnt/removable
[user@fedora-29 ~]$ sudo dnf clean all
[user@fedora-29 ~]$ sudo dnf --releasever=29 --setopt=cachedir=/mnt/removable --best --allowerasing distro-sync
If this attempt is successful, proceed to step 4.
* `dnf` may complain:
At least X MB more space needed on the / filesystem.
In this case, one option is to [resize the TemplateVM's disk
image][resize-disk-image] before reattempting the upgrade process.
(See [Additional Information] below for other options.)
4. Check that you are on the correct (new) fedora release.
[user@fedora-29 ~]$ cat /etc/fedora-release
5. Trim the new template.
[user@fedora-29 ~]$ sudo fstrim -v /
6. Shut down the new TemplateVM (from the command-line or Qubes VM Manager).
[user@dom0 ~]$ qvm-shutdown fedora-29
7. Remove the cache file, if you created one.
[user@dom0 ~]$ sudo losetup -d $dev
[user@dom0 ~]$ rm /var/tmp/template-upgrade-cache.img
8. (Recommended) [Switch everything that was set to the old template to the new
template.][switching]
9. (Optional) Remove the old template. (Make sure to type `fedora-28`, not
`fedora-29`.)
[user@dom0 ~]$ sudo dnf remove qubes-template-fedora-28
### Upgrading StandaloneVMs ###
The procedure for upgrading a StandaloneVM from Fedora 28 to Fedora 29 is the
same as for a TemplateVM.
### Summary: Upgrading the Minimal Fedora 28 Template to Fedora 29 ###
**Note:** The prompt on each line indicates where each command should be entered
(`@dom0` or `@fedora-29`).
[user@dom0 ~]$ qvm-clone fedora-28-minimal fedora-29-minimal
[user@dom0 ~]$ qvm-run -u root -a fedora-29-minimal xterm
[root@fedora-29-minimal ~]# dnf clean all
[root@fedora-29-minimal ~]# dnf --releasever=29 --best --allowerasing distro-sync
[root@fedora-29-minimal ~]# fstrim -v /
(Shut down TemplateVM by any normal means.)
(If you encounter insufficient space issues, you may need to use the methods
described for the standard template above.)
Additional Information
----------------------
As mentioned above, you may encounter the following `dnf` error:
At least X MB more space needed on the / filesystem.
In this case, you have several options:
1. [Increase the TemplateVM's disk image size][resize-disk-image].
This is the solution mentioned in the main instructions above.
2. Delete files in order to free up space. One way to do this is by
   uninstalling packages. You may then reinstall them after you finish the
   upgrade process, if desired. However, you may end up having to increase
   the disk image size anyway (see previous option).
3. Do the upgrade in parts, e.g., by using package groups. (First upgrade
`@core` packages, then the rest.)
4. Do not perform an in-place upgrade. Instead, simply download and install a
new template package, then redo all desired template modifications.
With regard to the last option, here are some useful messages from the
mailing list which also apply to TemplateVM management and migration in
general:
* [Marek](https://groups.google.com/d/msg/qubes-users/mCXkxlACILQ/dS1jbLRP9n8J)
* [Jason M](https://groups.google.com/d/msg/qubes-users/mCXkxlACILQ/5PxDfI-RKAsJ)
[TemplateVM]: /doc/templates/
[Fedora TemplateVM]: /doc/templates/fedora/
[resize-disk-image]: /doc/resize-disk-image/
[Additional Information]: #additional-information
[Compacting the Upgraded Template]: #compacting-the-upgraded-template
[switching]: /doc/templates/#how-to-switch-templates
[DispVM]: /doc/dispvm/

---
layout: doc
title: Upgrading Fedora TemplateVMs
permalink: /doc/template/fedora/upgrade/
redirect_from:
- /doc/template/fedora/upgrade-26-to-27/
- /doc/fedora-template-upgrade-26/
- /en/doc/fedora-template-upgrade-26/
- /doc/FedoraTemplateUpgrade26/
- /wiki/FedoraTemplateUpgrade26/
- /doc/template/fedora/upgrade-27-to-28/
- /doc/fedora-template-upgrade-27/
- /en/doc/fedora-template-upgrade-27/
- /doc/FedoraTemplateUpgrade27/
- /wiki/FedoraTemplateUpgrade27/
- /doc/template/fedora/upgrade-28-to-29/
- /doc/fedora-template-upgrade-28/
- /en/doc/fedora-template-upgrade-28/
- /doc/FedoraTemplateUpgrade28/
- /wiki/FedoraTemplateUpgrade28/
- /doc/template/fedora/upgrade-29-to-30/
---
# Upgrading Fedora TemplateVMs
This page provides instructions for performing an in-place upgrade of an installed [Fedora TemplateVM].
If you wish to install a new, unmodified Fedora TemplateVM instead of upgrading a template that is already installed in your system, please see the [Fedora TemplateVM] page instead.
## Summary instructions for standard Fedora TemplateVMs
**Note:** The prompt on each line indicates where each command should be entered: `dom0`, `fedora-<old>`, or `fedora-<new>`, where `<old>` is the Fedora version number *from* which you are upgrading, and `<new>` is the Fedora version number *to* which you are upgrading.
[user@dom0 ~]$ qvm-clone fedora-<old> fedora-<new>
[user@dom0 ~]$ truncate -s 5GB /var/tmp/template-upgrade-cache.img
[user@dom0 ~]$ qvm-run -a fedora-<new> gnome-terminal
[user@dom0 ~]$ dev=$(sudo losetup -f --show /var/tmp/template-upgrade-cache.img)
[user@dom0 ~]$ qvm-block attach fedora-<new> dom0:${dev##*/}
[user@fedora-<new> ~]$ sudo mkfs.ext4 /dev/xvdi
[user@fedora-<new> ~]$ sudo mount /dev/xvdi /mnt/removable
[user@fedora-<new> ~]$ sudo dnf clean all
[user@fedora-<new> ~]$ sudo dnf --releasever=<new> --setopt=cachedir=/mnt/removable --best --allowerasing distro-sync
[user@dom0 ~]$ qvm-shutdown fedora-<new>
[user@dom0 ~]$ sudo losetup -d $dev
[user@dom0 ~]$ rm /var/tmp/template-upgrade-cache.img
**Recommended:** [Switch everything that was set to the old template to the new template.][switch]
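A quick aside on the `dom0:${dev##*/}` argument in the summary above: the `qvm-block attach` line passes only the device name, without the `/dev/` prefix. `${dev##*/}` is standard POSIX parameter expansion that strips everything up to and including the last `/`. A minimal illustration:

```shell
# "##*/" removes the longest prefix matching "*/",
# i.e. the directory portion of the path.
dev=/dev/loop0
echo "${dev##*/}"    # prints: loop0
```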
## Detailed instructions for standard Fedora TemplateVMs
These instructions will show you how to upgrade the standard Fedora TemplateVM.
The same general procedure may be used to upgrade any template based on the standard Fedora TemplateVM.
**Note:** The prompt on each line indicates where each command should be entered: `dom0`, `fedora-<old>`, or `fedora-<new>`, where `<old>` is the Fedora version number *from* which you are upgrading, and `<new>` is the Fedora version number *to* which you are upgrading.
1. Ensure the existing template is not running.
[user@dom0 ~]$ qvm-shutdown fedora-<old>
2. Clone the existing template and start a terminal in the new template.
[user@dom0 ~]$ qvm-clone fedora-<old> fedora-<new>
[user@dom0 ~]$ qvm-run -a fedora-<new> gnome-terminal
3. Attempt the upgrade process in the new template.
[user@fedora-<new> ~]$ sudo dnf clean all
[user@fedora-<new> ~]$ sudo dnf --releasever=<new> distro-sync --best --allowerasing
**Note:** `dnf` might ask you to approve importing a new package signing key.
For example, you might see a prompt like this one:
warning: /mnt/removable/updates-0b4cc238d1aa4ffe/packages/example-package.fc<new>.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID XXXXXXXX: NOKEY
Importing GPG key 0xXXXXXXXX:
Userid : "Fedora <new> (<new>) <fedora-<new>@fedoraproject.org>"
Fingerprint: XXXX XXXX XXXX XXXX XXXX XXXX XXXX XXXX XXXX XXXX
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-<new>-x86_64
Is this ok [y/N]: y
This key was already checked when it was installed (notice that the "From" line refers to a location on your local disk), so you can safely say yes to this prompt.
**Note:** If you encounter no errors, proceed to step 4.
If you do encounter errors, see the next two points first.
* If `dnf` reports that you do not have enough free disk space to proceed
with the upgrade process, create an empty file in dom0 to use as a cache
and attach it to the template as a virtual disk.
[user@dom0 ~]$ truncate -s 5GB /var/tmp/template-upgrade-cache.img
[user@dom0 ~]$ dev=$(sudo losetup -f --show /var/tmp/template-upgrade-cache.img)
[user@dom0 ~]$ qvm-block attach fedora-<new> dom0:${dev##*/}
Then reattempt the upgrade process, but this time use the virtual disk as a cache.
[user@fedora-<new> ~]$ sudo mkfs.ext4 /dev/xvdi
[user@fedora-<new> ~]$ sudo mount /dev/xvdi /mnt/removable
[user@fedora-<new> ~]$ sudo dnf clean all
[user@fedora-<new> ~]$ sudo dnf --releasever=<new> --setopt=cachedir=/mnt/removable --best --allowerasing distro-sync
If this attempt is successful, proceed to step 4.
* `dnf` may complain:
At least X MB more space needed on the / filesystem.
In this case, one option is to [resize the TemplateVM's disk image][resize-disk-image] before reattempting the upgrade process.
(See [Additional Information] below for other options.)
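If you take the resizing route on Qubes 4.x, the root volume can be grown from dom0 with `qvm-volume`. This is a sketch with a hypothetical size; follow the linked instructions for the authoritative procedure on your release:

    [user@dom0 ~]$ qvm-volume resize fedora-<new>:root 15GiB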
4. Check that you are on the correct (new) Fedora release.
[user@fedora-<new> ~]$ cat /etc/fedora-release
5. (Optional) Trim the new template.
(This should [no longer be necessary][template-notes], but it does not hurt.
Some users have [reported][5055] that it makes a difference.)
[user@fedora-<new> ~]$ sudo fstrim -av
[user@dom0 ~]$ qvm-shutdown fedora-<new>
[user@dom0 ~]$ qvm-start fedora-<new>
[user@fedora-<new> ~]$ sudo fstrim -av
6. Shut down the new TemplateVM.
[user@dom0 ~]$ qvm-shutdown fedora-<new>
7. Remove the cache file, if you created one.
[user@dom0 ~]$ sudo losetup -d $dev
[user@dom0 ~]$ rm /var/tmp/template-upgrade-cache.img
8. (Recommended) [Switch everything that was set to the old template to the new template.][switch]
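If you have many qubes to switch, a dom0 loop along the following lines could reassign every qube based on the old template. This is a hypothetical sketch: it assumes the Qubes 4.x `qvm-ls --raw-data --fields NAME,TEMPLATE` pipe-separated output format, so verify that output on your system before relying on it.

    [user@dom0 ~]$ for vm in $(qvm-ls --raw-data --fields NAME,TEMPLATE | grep "|fedora-<old>$" | cut -d"|" -f1); do qvm-prefs "$vm" template fedora-<new>; done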
9. (Optional) Make the new template the global default.
[user@dom0 ~]$ qubes-prefs --set default_template fedora-<new>
10. (Optional) Remove the old template.
(Make sure to type the name of the old template, not the new one.)
[user@dom0 ~]$ sudo dnf remove qubes-template-fedora-<old>
## Summary instructions for Fedora Minimal TemplateVMs
**Note:** The prompt on each line indicates where each command should be entered: `dom0`, `fedora-<old>`, or `fedora-<new>`, where `<old>` is the Fedora version number *from* which you are upgrading, and `<new>` is the Fedora version number *to* which you are upgrading.
[user@dom0 ~]$ qvm-clone fedora-<old>-minimal fedora-<new>-minimal
[user@dom0 ~]$ qvm-run -u root -a fedora-<new>-minimal xterm
[root@fedora-<new>-minimal ~]# dnf clean all
[root@fedora-<new>-minimal ~]# dnf --releasever=<new> --best --allowerasing distro-sync
[root@fedora-<new>-minimal ~]# fstrim -v /
(Shut down TemplateVM by any normal means.)
(If you encounter insufficient space issues, you may need to use the methods described for the standard template above.)
## StandaloneVMs
The procedure for upgrading a Fedora [StandaloneVM] is the same as for a TemplateVM.
## Release-specific notes
This section contains notes about upgrading to specific releases.
### Fedora 30
If your RPM Fusion repositories are **disabled** when you upgrade a TemplateVM to 30, all RPM Fusion packages and RPM Fusion repo definitions will be removed from that TemplateVM.
If your RPM Fusion repositories are **enabled** when upgrading, all RPM Fusion packages and repo definitions will be retained and updated as expected.
For most users, this behavior should not cause a problem, since a TemplateVM in which the RPM Fusion repos are disabled is probably a TemplateVM in which you never wish to use them.
However, if you wish to have the RPM Fusion repo definitions after upgrading in a TemplateVM in which they are currently disabled, you may wish to temporarily enable them prior to upgrading or manually create, copy, or download them after upgrading.
### End-of-life (EOL) releases
We strongly recommend against using any Fedora release that has reached [end-of-life (EOL)].
## Additional information
As mentioned above, you may encounter the following `dnf` error:
At least X MB more space needed on the / filesystem.
In this case, you have several options:
1. [Increase the TemplateVM's disk image size][resize-disk-image].
This is the solution mentioned in the main instructions above.
2. Delete files in order to free up space. One way to do this is by uninstalling packages.
You may then reinstall them after you finish the upgrade process, if desired.
However, you may end up having to increase the disk image size anyway (see previous option).
3. Do the upgrade in parts, e.g., by using package groups.
(First upgrade `@core` packages, then the rest.)
4. Do not perform an in-place upgrade.
Instead, simply download and install a new template package, then redo all desired template modifications.
With regard to the last option, here are some useful messages from the mailing list, which also apply to TemplateVM management and migration in general:
[Marek](https://groups.google.com/d/msg/qubes-users/mCXkxlACILQ/dS1jbLRP9n8J) and
[Jason M](https://groups.google.com/d/msg/qubes-users/mCXkxlACILQ/5PxDfI-RKAsJ).
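If you choose option 3 (upgrading in parts via package groups), the sequence might look like the following sketch. This is only an illustration: it assumes your version of `dnf` accepts a group spec such as `@core` wherever a package spec is expected, so check `dnf help distro-sync` first.

    [user@fedora-<new> ~]$ sudo dnf --releasever=<new> --best --allowerasing distro-sync @core
    [user@fedora-<new> ~]$ sudo dnf --releasever=<new> --best --allowerasing distro-sync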
[Fedora TemplateVM]: /doc/templates/fedora/
[resize-disk-image]: /doc/resize-disk-image/
[Additional Information]: #additional-information
[switch]: /doc/templates/#switching
[DispVM]: /doc/dispvm/
[end-of-life (EOL)]: https://fedoraproject.org/wiki/End_of_life
[StandaloneVM]: /doc/standalone-and-hvm/
[template-notes]: /doc/templates/#important-notes
[5055]: https://github.com/QubesOS/qubes-issues/issues/5055

View File

@ -4,25 +4,13 @@ title: The Fedora TemplateVM
permalink: /doc/templates/fedora/
---
The Fedora TemplateVM
=====================
# The Fedora TemplateVM
The Fedora [TemplateVM] is the default TemplateVM in Qubes OS.
This page is about the standard (or "full") Fedora TemplateVM.
For the minimal and Xfce version, please see the [Fedora Minimal] and [Fedora Xfce] page.
The Fedora [TemplateVM] is the default TemplateVM in Qubes OS. This page is about the standard (or "full") Fedora TemplateVM. For the minimal and Xfce versions, please see the [Minimal TemplateVMs] and [Fedora Xfce] pages.
Installing
----------
## Installing
The Fedora TemplateVM comes preinstalled with Qubes OS.
However, there may be times when you wish to install a fresh copy from the Qubes repositories, e.g.:
* When a version of Fedora reaches EOL ([end-of-life]).
* When a new version of Fedora you wish to use becomes [supported] as a TemplateVM.
* When you suspect your Fedora TemplateVM has been compromised.
* When you have made modifications to the Fedora TemplateVM that you no longer want.
To install a specific Fedora TemplateVM that is not currently installed in your system, use the following command in dom0:
To [install] a specific Fedora TemplateVM that is not currently installed in your system, use the following command in dom0:
$ sudo qubes-dom0-update qubes-template-fedora-XX
@ -31,36 +19,44 @@ To install a specific Fedora TemplateVM that is not currently installed in your
To reinstall a Fedora TemplateVM that is already installed in your system, see [How to Reinstall a TemplateVM].
After Installing
----------------
## After Installing
After installing a fresh Fedora TemplateVM, we recommend performing the following steps:
1. [Update the TemplateVM].
2. [Switch any TemplateBasedVMs that are based on the old TemplateVM to the new one][switch-templates].
2. [Switch any TemplateBasedVMs that are based on the old TemplateVM to the new one][switch].
3. If desired, [uninstall the old TemplateVM].
Upgrading
---------
## Updating
To upgrade your Fedora TemplateVM, please consult the guide that corresponds to your situation:
Please see [Updating software in TemplateVMs].
* [Upgrading the Fedora 29 Template to Fedora 30](/doc/template/fedora/upgrade-29-to-30/)
* [Upgrading the Fedora 28 Template to Fedora 29](/doc/template/fedora/upgrade-28-to-29/)
* [Upgrading the Fedora 27 Template to Fedora 28](/doc/template/fedora/upgrade-27-to-28/)
* [Upgrading the Fedora 26 Template to Fedora 27](/doc/template/fedora/upgrade-26-to-27/)
## Upgrading
Please see [Upgrading Fedora TemplateVMs].
## Release-specific notes
This section contains notes about specific Fedora releases.
(There is currently no release-specific information documented.)
[TemplateVM]: /doc/templates/
[Fedora Minimal]: /doc/templates/fedora-minimal/
[Fedora Xfce]: /doc/templates/fedora-xfce/
[Minimal TemplateVMs]: /doc/templates/minimal/
[end-of-life]: https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle#Maintenance_Schedule
[supported]: /doc/supported-versions/#templatevms
[How to Reinstall a TemplateVM]: /doc/reinstall-template/
[Update the TemplateVM]: /doc/software-update-vm/
[switch-templates]: /doc/templates/#how-to-switch-templates
[uninstall the old TemplateVM]: /doc/templates/#how-to-uninstall
[switch]: /doc/templates/#switching
[uninstall the old TemplateVM]: /doc/templates/#uninstalling
[Updating software in TemplateVMs]: /doc/software-update-domu/#updating-software-in-templatevms
[Upgrading Fedora TemplateVMs]: /doc/template/fedora/upgrade/
[install]: /doc/templates/#installing

View File

@ -1,185 +0,0 @@
---
layout: doc
title: Upgrading the Fedora 29 TemplateVM to Fedora 30
permalink: /doc/template/fedora/upgrade-29-to-30/
---
Upgrading the Fedora 29 TemplateVM to Fedora 30
===============================================
This page provides instructions for performing an in-place upgrade of an installed Fedora 29 [TemplateVM] to Fedora 30.
If you wish to install a new, unmodified Fedora 30 template instead of upgrading a template that is already installed in your system, please see the [Fedora TemplateVM] page instead.
Important information regarding RPM Fusion repos
------------------------------------------------
If your RPM Fusion repositories are **disabled** when you upgrade a TemplateVM from Fedora 29 to 30, all RPM Fusion packages and RPM Fusion repo definitions will be removed from that TemplateVM.
If your RPM Fusion repositories are **enabled** when upgrading, all RPM Fusion packages and repo definitions will be retained and updated as expected.
For most users, this behavior should not cause a problem, since a TemplateVM in which the RPM Fusion repos are disabled is probably a TemplateVM in which you never wish to use them.
However, if you wish to have the RPM Fusion repo definitions after upgrading in a TemplateVM in which they are currently disabled, you may wish to temporarily enable them prior to upgrading or manually create, copy, or download them after upgrading.
Summary Instructions for Standard Fedora TemplateVMs
----------------------------------------------------
**Note:** The prompt on each line indicates where each command should be entered (`@dom0` or `@fedora-30`).
[user@dom0 ~]$ qvm-clone fedora-29 fedora-30
[user@dom0 ~]$ truncate -s 5GB /var/tmp/template-upgrade-cache.img
[user@dom0 ~]$ qvm-run -a fedora-30 gnome-terminal
[user@dom0 ~]$ dev=$(sudo losetup -f --show /var/tmp/template-upgrade-cache.img)
[user@dom0 ~]$ qvm-block attach fedora-30 dom0:${dev##*/}
[user@fedora-30 ~]$ sudo mkfs.ext4 /dev/xvdi
[user@fedora-30 ~]$ sudo mount /dev/xvdi /mnt/removable
[user@fedora-30 ~]$ sudo dnf clean all
[user@fedora-30 ~]$ sudo dnf --releasever=30 --setopt=cachedir=/mnt/removable --best --allowerasing distro-sync
[user@fedora-30 ~]$ sudo fstrim -v /
(Shut down TemplateVM by any normal means.)
[user@dom0 ~]$ sudo losetup -d $dev
[user@dom0 ~]$ rm /var/tmp/template-upgrade-cache.img
(Optional cleanup: Switch everything over to the new template and delete the old one.
See instructions below for details.)
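One detail in the summary worth unpacking: `${dev##*/}` is plain shell parameter expansion. It strips the longest prefix matching `*/`, so `qvm-block attach` receives just the device name (e.g. `loop0`) rather than the full `/dev/...` path. A standalone illustration:

```shell
# ${dev##*/} deletes the longest prefix matching "*/" (the directory
# part), leaving only the final path component.
dev=/dev/loop0
echo "${dev##*/}"   # prints: loop0
```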
Detailed Instructions for Standard Fedora TemplateVMs
-----------------------------------------------------
These instructions will show you how to upgrade the standard Fedora 29 TemplateVM to Fedora 30.
The same general procedure may be used to upgrade any template based on the standard Fedora 29 template.
**Note:** The command-line prompt on each line indicates where each command should be entered (`@dom0` or `@fedora-30`).
1. Ensure the existing template is not running.
[user@dom0 ~]$ qvm-shutdown fedora-29
2. Clone the existing template and start a terminal in the new template.
[user@dom0 ~]$ qvm-clone fedora-29 fedora-30
[user@dom0 ~]$ qvm-run -a fedora-30 gnome-terminal
3. Attempt the upgrade process in the new template.
[user@fedora-30 ~]$ sudo dnf clean all
[user@fedora-30 ~]$ sudo dnf --releasever=30 distro-sync --best --allowerasing
**Note:** `dnf` might ask you to approve importing a new package signing key.
For example, you might see a prompt like this one:
warning: /mnt/removable/updates-0b4cc238d1aa4ffe/packages/example-package.fc30.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID cfc659b9: NOKEY
Importing GPG key 0xCFC659B9:
Userid : "Fedora 30 (30) <fedora-30@fedoraproject.org>"
Fingerprint: F1D8 EC98 F241 AAF2 0DF6 9420 EF3C 111F CFC6 59B9
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-30-x86_64
Is this ok [y/N]: y
This key was already checked when it was installed (notice that the "From" line refers to a location on your local disk), so you can safely say yes to this prompt.
**Note:** If you encounter no errors, proceed to step 4.
If you do encounter errors, see the next two points first.
* If `dnf` reports that you do not have enough free disk space to proceed
with the upgrade process, create an empty file in dom0 to use as a cache
and attach it to the template as a virtual disk.
[user@dom0 ~]$ truncate -s 5GB /var/tmp/template-upgrade-cache.img
[user@dom0 ~]$ dev=$(sudo losetup -f --show /var/tmp/template-upgrade-cache.img)
[user@dom0 ~]$ qvm-block attach fedora-30 dom0:${dev##*/}
Then reattempt the upgrade process, but this time use the virtual disk as a cache.
[user@fedora-30 ~]$ sudo mkfs.ext4 /dev/xvdi
[user@fedora-30 ~]$ sudo mount /dev/xvdi /mnt/removable
[user@fedora-30 ~]$ sudo dnf clean all
[user@fedora-30 ~]$ sudo dnf --releasever=30 --setopt=cachedir=/mnt/removable --best --allowerasing distro-sync
If this attempt is successful, proceed to step 4.
* `dnf` may complain:
At least X MB more space needed on the / filesystem.
In this case, one option is to [resize the TemplateVM's disk image][resize-disk-image] before reattempting the upgrade process.
(See [Additional Information] below for other options.)
4. Check that you are on the correct (new) fedora release.
[user@fedora-30 ~]$ cat /etc/fedora-release
5. Trim the new template.
[user@fedora-30 ~]$ sudo fstrim -v /
6. Shut down the new TemplateVM (from the command line or the Qube Manager).
[user@dom0 ~]$ qvm-shutdown fedora-30
7. Remove the cache file, if you created one.
[user@dom0 ~]$ sudo losetup -d $dev
[user@dom0 ~]$ rm /var/tmp/template-upgrade-cache.img
8. (Recommended) [Switch everything that was set to the old template to the new template.][switching]
9. (Optional) Remove the old template. (Make sure to type `fedora-29`, not `fedora-30`.)
[user@dom0 ~]$ sudo dnf remove qubes-template-fedora-29
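A note on the cache file used in step 3: `truncate -s 5GB` creates a *sparse* file, so it has an apparent size of 5 GB but consumes almost no real disk space until `dnf` writes into it. A small self-contained demonstration (smaller size, in `/tmp`):

```shell
# Create a sparse file and compare apparent size with blocks actually
# allocated on disk (%s = size in bytes, %b = 512-byte blocks used).
truncate -s 100M /tmp/sparse-demo.img
stat -c 'apparent: %s bytes, allocated blocks: %b' /tmp/sparse-demo.img
rm /tmp/sparse-demo.img
```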
Summary Instructions for Fedora Minimal TemplateVMs
---------------------------------------------------
**Note:** The prompt on each line indicates where each command should be entered (`@dom0` or `@fedora-30`).
[user@dom0 ~]$ qvm-clone fedora-29-minimal fedora-30-minimal
[user@dom0 ~]$ qvm-run -u root -a fedora-30-minimal xterm
[root@fedora-30-minimal ~]# dnf clean all
[root@fedora-30-minimal ~]# dnf --releasever=30 --best --allowerasing distro-sync
[root@fedora-30-minimal ~]# fstrim -v /
(Shut down TemplateVM by any normal means.)
(If you encounter insufficient space issues, you may need to use the methods described for the standard template above.)
Upgrading StandaloneVMs
-----------------------
The procedure for upgrading a StandaloneVM from Fedora 29 to Fedora 30 is the same as for a TemplateVM.
Additional Information
----------------------
As mentioned above, you may encounter the following `dnf` error:
At least X MB more space needed on the / filesystem.
In this case, you have several options:
1. [Increase the TemplateVM's disk image size][resize-disk-image].
This is the solution mentioned in the main instructions above.
2. Delete files in order to free up space. One way to do this is by uninstalling packages.
You may then reinstall them after you finish the upgrade process, if desired.
However, you may end up having to increase the disk image size anyway (see previous option).
3. Do the upgrade in parts, e.g., by using package groups.
(First upgrade `@core` packages, then the rest.)
4. Do not perform an in-place upgrade.
Instead, simply download and install a new template package, then redo all desired template modifications.
Here are some useful mailing-list messages from
[Marek](https://groups.google.com/d/msg/qubes-users/mCXkxlACILQ/dS1jbLRP9n8J) and
[Jason M](https://groups.google.com/d/msg/qubes-users/mCXkxlACILQ/5PxDfI-RKAsJ) that also apply to TemplateVM management and migration in general.
[TemplateVM]: /doc/templates/
[Fedora TemplateVM]: /doc/templates/fedora/
[resize-disk-image]: /doc/resize-disk-image/
[Additional Information]: #additional-information
[Compacting the Upgraded Template]: #compacting-the-upgraded-template
[switching]: /doc/templates/#how-to-switch-templates
[DispVM]: /doc/dispvm/

View File

@ -0,0 +1,174 @@
---
layout: doc
title: Minimal TemplateVMs
permalink: /doc/templates/minimal/
redirect_from:
- /doc/templates/fedora-minimal/
- /doc/fedora-minimal/
- /en/doc/templates/fedora-minimal/
- /doc/Templates/FedoraMinimal/
- /wiki/Templates/FedoraMinimal/
- /doc/templates/debian-minimal/
---
# Minimal TemplateVMs
The Minimal [TemplateVMs] are lightweight versions of their standard TemplateVM counterparts.
They have only the most vital packages installed, including a minimal X and xterm installation.
The sections below contain instructions for using the template and provide some examples for common use cases.
There are currently two Minimal TemplateVMs corresponding to the standard [Fedora] and [Debian] TemplateVMs.
## Important
1. The Minimal TemplateVMs are intended only for advanced users.
If you encounter problems with the Minimal TemplateVMs, we recommend that you use their standard TemplateVM counterparts instead.
2. If something works with a standard TemplateVM but not the minimal version, this is most likely due to user error (e.g., a missing package or misconfiguration) rather than a bug.
In such cases, you should write to [qubes-users] to ask for help rather than filing a bug report, then [contribute what you learn to the documentation][doc-guidelines].
3. The Minimal TemplateVMs are intentionally *minimal*.
[Do not ask for your favorite package to be added to the minimal template by default.][pref-default]
## Installation
The Minimal TemplateVMs can be installed with the following command (where `X` is your desired distro and version number):
[user@dom0 ~]$ sudo qubes-dom0-update qubes-template-X-minimal
If your desired version is not found, it may still be in [testing].
You may wish to try again with the testing repository enabled:
[user@dom0 ~]$ sudo qubes-dom0-update --enablerepo=qubes-templates-itl-testing qubes-template-X-minimal
The download may take a while depending on your connection speed.
## Passwordless root
It is an intentional design choice for [Passwordless Root Access in VMs] to be optional in Minimal TemplateVMs.
Since the Minimal TemplateVMs are *minimal*, they are not configured for passwordless root by default.
To update or install packages, execute the following command in dom0 (where `X` is your distro and version number):
[user@dom0 ~]$ qvm-run -u root X-minimal xterm
This opens a root terminal in the Minimal TemplateVM, from which you can execute root commands without `sudo`.
You will have to do this every time if you choose not to enable passwordless root.
If you want to be able to use `sudo` inside a Minimal TemplateVM (or TemplateBasedVMs based on a Minimal TemplateVM), open a root terminal as just instructed, then install the `qubes-core-agent-passwordless-root` package.
Optionally, verify that passwordless root now works by opening a normal (non-root) xterm window in the Minimal TemplateVM, then issue the command `sudo -l`.
This should give you output that includes the `NOPASSWD` keyword.
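If you want to script that verification, grep the output for the keyword. The `sample` variable below stands in for real `sudo -l` output (its exact wording varies by system); inside the template you would pipe `sudo -l` directly into the same `grep`:

```shell
# Hypothetical sudo -l output after installing
# qubes-core-agent-passwordless-root; the NOPASSWD keyword is the signal.
sample='User user may run the following commands on this host:
    (ALL) NOPASSWD: ALL'
if printf '%s\n' "$sample" | grep -q NOPASSWD; then
  echo "passwordless root enabled"
fi
```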
## Customization
You may wish to clone the original template and make any changes in the clone instead of in the original template.
You must start the clone in order to customize it.
Customizing the template for specific use cases normally only requires installing additional packages.
## Distro-specific notes
The following sections provide information that is specific to a particular Minimal TemplateVM distro.
### Fedora
The following list provides an overview of which packages are needed for which purpose.
As usual, the required packages are to be installed in the running template with the following command (replace `packages` with a space-delimited list of packages to be installed):
[user@your-new-clone ~]$ sudo dnf install packages
- Commonly used utilities: `pciutils` `vim-minimal` `less` `psmisc` `gnome-keyring`.
- Audio: `pulseaudio-qubes`.
- [FirewallVM](/doc/firewall/), such as the template for `sys-firewall`: at least `qubes-core-agent-networking` and `iproute`, and also `qubes-core-agent-dom0-updates` if you want to use it as the `UpdateVM` (which is normally `sys-firewall`).
- NetVM, such as the template for `sys-net`: `qubes-core-agent-networking` `qubes-core-agent-network-manager` `NetworkManager-wifi` `network-manager-applet` `wireless-tools` `dejavu-sans-fonts` `notification-daemon` `gnome-keyring` `polkit` `@hardware-support`.
If your network devices need extra packages for the template to work as a network VM, use the `lspci` command to identify the devices, then run `dnf search firmware` (replace `firmware` with the appropriate device identifier) to find the needed packages and then install them.
If you need utilities for debugging and analyzing network connections, install `tcpdump` `telnet` `nmap` `nmap-ncat`.
- [USB qube](/doc/usb-qubes/), such as the template for `sys-usb`: `qubes-input-proxy-sender`.
- [VPN qube](/doc/vpn/): Use the `dnf search "NetworkManager VPN plugin"` command to look up the VPN packages you need, based on the VPN technology you'll be using, and install them.
Some GNOME related packages may be needed as well.
After creation of a machine based on this template, follow the [VPN instructions](/doc/vpn/#set-up-a-proxyvm-as-a-vpn-gateway-using-networkmanager) to configure it.
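For the firmware-hunting step above, the useful datum from `lspci -nn` is the `[vendor:device]` ID at the end of each line. The sample line below is hypothetical; on real hardware you would pipe the actual `lspci -nn` output through the same extraction:

```shell
# Pull the trailing [vendor:device] PCI ID out of a (hypothetical)
# lspci -nn line; the ID is what you search package databases for.
sample='02:00.0 Network controller [0280]: Intel Corporation Wireless 8265 [8086:24fd]'
id=$(printf '%s\n' "$sample" | grep -o '\[[0-9a-f]*:[0-9a-f]*\]$' | tr -d '[]')
echo "$id"   # prints: 8086:24fd
```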
You may also wish to consider additional packages from the `qubes-core-agent` suite:
- `qubes-core-agent-qrexec`: Qubes qrexec agent. Installed by default.
- `qubes-core-agent-systemd`: Qubes unit files for SystemD init style. Installed by default.
- `qubes-core-agent-passwordless-root`, `polkit`: By default, the Fedora Minimal template doesn't have passwordless root. These two packages enable this feature.
- `qubes-core-agent-nautilus`: This package provides integration with the Nautilus file manager (without it things like "copy to VM/open in disposable VM" will not be shown in Nautilus).
- `qubes-core-agent-sysvinit`: Qubes unit files for SysV init style or upstart.
- `qubes-core-agent-networking`: Networking support. Required for general network access and particularly if the template is to be used for a `sys-net` or `sys-firewall` VM.
- `qubes-core-agent-network-manager`: Integration for NetworkManager. Useful if the template is to be used for a `sys-net` VM.
- `network-manager-applet`: Useful (together with `dejavu-sans-fonts` and `notification-daemon`) to have a system tray icon if the template is to be used for a `sys-net` VM.
- `qubes-core-agent-dom0-updates`: Script required to handle `dom0` updates. Any template on which the VM responsible for `dom0` updates (e.g. `sys-firewall`) is based must contain this package.
- `qubes-usb-proxy`: Required if the template is to be used for a USB qube (`sys-usb`) or for any destination qube to which USB devices are to be attached (e.g. `sys-net` if using a USB network adapter).
- `pulseaudio-qubes`: Needed to have audio on the template VM.
See [here][customization] for further information on customizing `fedora-minimal`.
#### Logging
The `rsyslog` logging service is not installed by default, as all logging is handled by the `systemd` journal instead.
Users requiring the `rsyslog` service should install it manually.
To access the `journald` log, use the `journalctl` command.
### Debian
As you would expect, the required packages can be installed in the running template with any apt-based command.
For example (replace `packages` with a space-delimited list of packages to be installed):
[user@your-new-clone ~]$ sudo apt install packages
Use case | Description | Required steps
--- | --- | ---
**Standard utilities** | If you need the commonly used utilities | Install the following packages: `pciutils` `vim-minimal` `less` `psmisc` `gnome-keyring`
**Networking** | If you want networking | Install `qubes-core-agent-networking`
**Audio** | If you want sound from your VM | Install `pulseaudio-qubes`
**FirewallVM** | You can use the minimal template as a template for a [FirewallVM](/doc/firewall/), like `sys-firewall` | Install `qubes-core-agent-networking` and `nftables`. Also install `qubes-core-agent-dom0-updates` if you want to use a qube based on the template as an UpdateVM (normally `sys-firewall`).
**NetVM** | You can use this template as the basis for a NetVM such as `sys-net` | Install the following packages: `qubes-core-agent-networking`, `qubes-core-agent-network-manager`, and `nftables`.
**NetVM (extra firmware)** | If your network devices need extra packages for a network VM | Use the `lspci` command to identify the devices, then find the package that provides the necessary firmware and install it.
**Network utilities** | If you need utilities for debugging and analyzing network connections | Install the following packages: `tcpdump` `telnet` `nmap` `nmap-ncat`
**USB** | If you want to use this template as the basis for a [USB](/doc/usb/) qube such as `sys-usb` | Install `qubes-usb-proxy`. To use a USB mouse or keyboard, install `qubes-input-proxy-sender`.
**VPN** | You can use this template as the basis for a [VPN](/doc/vpn/) qube | You may need to install network-manager VPN packages, depending on the VPN technology you'll be using. After creating a machine based on this template, follow the [VPN howto](/doc/vpn/#set-up-a-proxyvm-as-a-vpn-gateway-using-networkmanager) to configure it.
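Every row in the table boils down to one space-delimited `apt install` line. A tiny illustrative helper (the function is hypothetical, not part of Qubes; the package names come from the table above):

```shell
# Assemble a single apt install command from a list of packages.
build_install_cmd() {
  printf 'sudo apt install -y %s\n' "$*"
}
build_install_cmd qubes-core-agent-networking nftables
# prints: sudo apt install -y qubes-core-agent-networking nftables
```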
In Qubes 4.0, additional packages from the `qubes-core-agent` suite may be needed to make the customized minimal template work properly.
These packages are:
- `qubes-core-agent-nautilus`: This package provides integration with the Nautilus file manager (without it, items like "copy to VM/open in disposable VM" will not be shown in Nautilus).
- `qubes-core-agent-thunar`: This package provides integration with the thunar file manager (without it, items like "copy to VM/open in disposable VM" will not be shown in thunar).
- `qubes-core-agent-dom0-updates`: Script required to handle `dom0` updates. Any template on which the qube responsible for 'dom0' updates (e.g. `sys-firewall`) is based must contain this package.
- `qubes-menus`: Defines menu layout.
- `qubes-desktop-linux-common`: Contains icons and scripts to improve desktop experience.
Also, there are packages to provide additional services:
- `qubes-gpg-split`: For implementing split GPG.
- `qubes-u2f`: For implementing secure forwarding of U2F messages.
- `qubes-pdf-converter`: For implementing safe conversion of PDFs.
- `qubes-img-converter`: For implementing safe conversion of images.
- `qubes-snapd-helper`: If you want to use snaps in qubes.
- `qubes-thunderbird`: Additional tools for use in thunderbird.
- `qubes-app-shutdown-idle`: If you want qubes to automatically shutdown when idle.
- `qubes-mgmt-\*`: If you want to use salt management on the template and qubes.
Documentation on all of these can be found in the [docs](/doc).
You could, of course, use `qubes-vm-recommended` to automatically install many of these, but in that case you are well on the way to a standard Debian template.
[TemplateVMs]: /doc/templates/
[Fedora]: /doc/templates/fedora/
[Debian]: /doc/templates/debian/
[qubes-users]: /support/#qubes-users
[doc-guidelines]: /doc/doc-guidelines/
[pref-default]: /faq/#could-you-please-make-my-preference-the-default
[testing]: /doc/testing/
[customization]: /doc/fedora-minimal-template-customization/
[Passwordless Root Access in VMs]: /doc/vm-sudo/

View File

@ -11,17 +11,10 @@ How to Reinstall a TemplateVM
If you suspect your [TemplateVM] is broken, misconfigured, or compromised, you can reinstall any TemplateVM that was installed from the Qubes repository.
The procedure varies by Qubes version; see the appropriate section below.
Automatic Method
----------------
**Warning:** Due to a known security issue ([QSB #050]), we currently recommend against using the automatic reinstallation method.
Instead, we recommend using the manual method described in the next section.
An announcement will be made, and this notice will be updated, when packages fixing the automatic method are available.
First, copy any files that you wish to keep from the TemplateVM's `/home` and `/rw` folders to a safe storage location.
Then, in a dom0 terminal, run:
@ -89,5 +82,4 @@ If you want to reinstall more than one TemplateVM, repeat these instructions for
[TemplateVM]: /doc/templates/
[QSB #050]: /news/2019/07/24/qsb-050/

View File

@ -1,29 +1,58 @@
---
layout: doc
title: HVM
permalink: /doc/hvm/
title: StandaloneVMs and HVMs
permalink: /doc/standalone-and-hvm/
redirect_from:
- /doc/hvm/
- /doc/hvm-create/
- /en/doc/hvm-create/
- /doc/HvmCreate/
- /wiki/HvmCreate/
---
HVM
===
# StandaloneVMs and HVMs
A **Hardware-assisted Virtual Machine (HVM)**, also known as a **Fully-Virtualized Virtual Machine**, utilizes the virtualization extensions of the host CPU.
These are typically contrasted with **Paravirtualized (PV)** VMs.
A [StandaloneVM](/doc/glossary/#standalonevm) is a type of VM in Qubes that is created by cloning a [TemplateVM](/doc/templates/).
Unlike TemplateVMs, however, StandaloneVMs do not supply their root filesystems to other VMs.
Examples of situations in which StandaloneVMs can be useful include:
HVMs allow you to create qubes based on any OS for which you have an installation ISO.
So you can easily have qubes running Windows, *BSD, or any Linux distribution. You can also use HVMs to run "live" distros.
* VMs used for development (dev environments often require a lot of specific packages and tools)
* VMs used for installing untrusted packages.
Normally, you install digitally signed software from Red Hat/Fedora repositories, and it's reasonable to assume that such software has non-malicious *installation* scripts (rpm pre/post scripts).
However, when you would like to install packages from less trusted sources, or unsigned packages, then using a dedicated (untrusted) StandaloneVM might be a better way.
By default, every Qubes VM runs in **PVH** mode (which has security advantages over both PV and HVM) except for those with attached PCI devices, which run in HVM mode.
Meanwhile, a [Hardware-assisted Virtual Machine (HVM)](/doc/glossary/#hvm), also known as a "Fully-Virtualized Virtual Machine," utilizes the virtualization extensions of the host CPU.
These are typically contrasted with [Paravirtualized (PV)](/doc/glossary/#pv) VMs.
HVMs allow you to create qubes based on any OS for which you have an installation ISO, so you can easily have qubes running Windows, *BSD, or any Linux distribution.
You can also use HVMs to run "live" distros.
By default, every Qubes VM runs in [PVH](/doc/glossary/#pvhvm) mode (which has security advantages over both PV and HVM) except for those with attached PCI devices, which run in HVM mode.
See [here](https://blog.invisiblethings.org/2017/07/31/qubes-40-rc1.html) for a discussion of the switch from PV to HVM and [here](/news/2018/01/11/qsb-37/) for the announcement about the change to using PVH as default.
The StandaloneVM/TemplateVM distinction and the HVM/PV/PVH distinctions are orthogonal.
The former is about root filesystem inheritance, whereas the latter is about the virtualization mode.
In practice, however, it is most common for StandaloneVMs to be HVMs and for HVMs to be StandaloneVMs.
In fact, this is so common that [StandaloneHVMs](/doc/glossary/#standalonehvm) are typically just called "HVMs."
Hence, this page covers both topics.
Creating an HVM qube
----------------------
## Creating a StandaloneVM
You can create a StandaloneVM in the Qube Manager by selecting the "Type" of "Standalone qube copied from a template" or "Empty standalone qube (install your own OS)."
Alternatively, from the dom0 command line:
```
qvm-create --class StandaloneVM --label <label> --property virt_mode=hvm <vmname>
```
(Note: Technically, `virt_mode=hvm` is not necessary for every StandaloneVM.
However, it makes sense if you want to use a kernel from within the VM.)
## Creating an HVM
Using the GUI:
In Qube Manager, select "Create new qube" from the Qube menu, or select the "Create a new qube" button.
@ -49,8 +78,7 @@ libvirt.libvirtError: invalid argument: could not find capabilities for arch=x86
Make sure that you give the new qube adequate memory to install and run.
Installing an OS in an HVM qube
---------------------------------
## Installing an OS in an HVM
You will have to boot the qube with the installation media "attached" to it. You may either use the GUI or use command line instructions.
At the command line you can do this in three ways:
@ -76,8 +104,7 @@ You may have to restart the qube several times in order to complete installation
Several invocations of `qvm-start` command (as shown above) might be needed.
Setting up networking for HVM domains
-------------------------------------
## Setting up networking for HVMs
Just like standard paravirtualized AppVMs, the HVM qubes get fixed IP addresses centrally assigned by Qubes.
Normally Qubes agent scripts (or services on Windows) running within each AppVM are responsible for setting up networking within the VM according to the configuration created by Qubes (through [keys](/doc/vm-interface/#qubesdb) exposed by dom0 to the VM).
@ -99,13 +126,12 @@ The DNS IP addresses are `10.139.1.1` and `10.139.1.2`.
There is [opt-in support](/doc/networking/#ipv6) for IPv6 forwarding.
Using Template-based HVM domains
--------------------------------
## Using TemplateBasedHVMs
Qubes allows HVM VMs to share a common root filesystem from a select Template VM, just as for Linux AppVMs.
Qubes allows HVMs to share a common root filesystem from a select TemplateVM (see [TemplateHVM](/doc/glossary/#templatehvm) and [TemplateBasedHVM](/doc/glossary/#templatebasedhvm)).
This mode can be used for any HVM (e.g. FreeBSD running in a HVM).
In order to create a HVM TemplateVM you use the following command, suitably adapted:
In order to create a TemplateHVM you use the following command, suitably adapted:
~~~
qvm-create --class TemplateVM <qube> --property virt_mode=HVM --property kernel='' -l green
@ -120,8 +146,7 @@ If you use this Template as it is, then any HVMs that use it will effectively be
Please see [this page](/doc/windows-appvms/) for specific advice on installing and using Windows-based Templates.
Cloning HVM domains
-------------------
## Cloning HVMs
Just like normal AppVMs, the HVM domains can also be cloned either using the command-line `qvm-clone` or via the Qube Manager's 'Clone VM' option in the right-click menu.
@ -222,9 +247,7 @@ timezone : localtime
~~~
Assigning PCI devices to HVM domains
------------------------------------
## Assigning PCI devices to HVMs
HVM domains (including Windows VMs) can be [assigned PCI devices](/doc/assigning-devices/) just like normal AppVMs.
E.g. one can assign one of the USB controllers to the Windows VM and should be able to use various devices that require Windows software, such as phones, electronic devices that are configured via FTDI, etc.
@ -236,10 +259,9 @@ This is illustrated on the screenshot below:
![r2b1-win7-usb-disable.png](/attachment/wiki/HvmCreate/r2b1-win7-usb-disable.png)
Converting VirtualBox VM to HVM
-------------------------------
## Converting VirtualBox VMs to Qubes HVMs
You can convert any VirtualBox VMs to an HVM using this method.
You can convert any VirtualBox VM to a Qubes HVM using this method.
For example, Microsoft provides [free 90 day evaluation VirtualBox VMs for browser testing](https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/).
@ -271,7 +293,7 @@ Convert vmdk to raw:
qemu-img convert -O raw *.vmdk win10.raw
~~~
Copy the root image file to a temporary location in Dom0:
Copy the root image file to a temporary location, typing this in a Dom0 terminal:
~~~
qvm-run --pass-io untrusted 'cat "/media/user/externalhd/win10.raw"' > /home/user/win10-root.img
@ -280,7 +302,7 @@ qvm-run --pass-io untrusted 'cat "/media/user/externalhd/win10.raw"' > /home/use
Create a new HVM in Dom0 with the root image we just copied to Dom0 (change the amount of RAM in GB as you wish):
~~~
qvm-create --hvm win10 --label red --mem=4096 --root-move-from /home/user/win10-root.img
qvm-create --property=virt_mode=hvm --property=memory=4096 --property=kernel='' --label red --standalone --root-move-from /home/user/win10-root.img win10
~~~
Start win10 VM:
@ -310,10 +332,10 @@ qemu-img -h | tail -n1
~~~
Further reading
---------------
## Further reading
Other documents related to HVM:
- [Windows VMs](/doc/windows-vm/)
- [LinuxHVMTips](/doc/linux-hvm-tips/)

View File

@ -1,6 +1,6 @@
---
layout: doc
title: Templates
title: TemplateVMs
permalink: /doc/templates/
redirect_from:
- /doc/template/
@ -11,48 +11,68 @@ redirect_from:
# TemplateVMs
Every TemplateBasedVM in Qubes is, as the name implies, based on some TemplateVM.
The TemplateVM is where all the software available to TemplateBasedVMs is installed.
The default template is based on Fedora, but there are additional templates based on other Linux distributions.
In [Getting Started], we covered the distinction in Qubes OS between where you *install* your software and where you *run* your software.
Your software is installed in [TemplateVMs] (or "templates" for short).
Each TemplateVM shares its root filesystem (i.e., all of its programs and system files) with other qubes called [TemplateBasedVMs].
TemplateBasedVMs are where you run your software and store your data.
The TemplateVM system has significant benefits:
* **Security:** Each qube has read-only access to the TemplateVM on which it's based, so if a qube is compromised, it cannot infect its TemplateVM or any of the other qubes based on that TemplateVM.
* **Storage:** Each qube based on a TemplateVM uses only the disk space required to store its own data (i.e., your files in its home directory), which dramatically saves on disk space.
* **Speed:** It is extremely fast to create new TemplateBasedVMs, since the root filesystem already exists in the TemplateVM.
* **Updates:** Updates are naturally centralized, since updating a TemplateVM means that all qubes based on it will automatically use those updates after they're restarted.
An important side effect of this system is that any software installed in a TemplateBasedVM (rather than in the TemplateVM on which it is based) will disappear after the TemplateBasedVM reboots (see [Inheritance and Persistence]).
For this reason, we recommend installing most of your software in TemplateVMs, not TemplateBasedVMs.
The default TemplateVM in Qubes is based on Fedora, but there are additional templates based on other Linux distributions.
There are also templates available with or without certain software preinstalled.
The concept of TemplateVMs is initially described [here](/getting-started/#appvms-qubes-and-templatevms).
The technical details of this implementation are described in the developer documentation [here](/doc/template-implementation/).
You may find it useful to have multiple TemplateVMs installed in order to provide:
Some templates are available in ready-to-use binary form, but some of them are available only as source code, which can be built using the [Qubes Builder](/doc/qubes-builder/).
In particular, some template "flavors" are available in source code form only.
Take a look at the [Qubes Builder documentation](/doc/qubes-builder/) for instructions on how to compile them.
* Different security levels (e.g., more or less trusted software installed)
* Different environments (e.g., Fedora, Debian, Whonix)
* Different tools (e.g., office, media, development, hardware drivers)
## Official templates
## Official
These are the official Qubes OS Project templates.
We build and release updates for these templates.
We guarantee that the binary updates are compiled from exactly the same source code as we publish.
* [Fedora](/doc/templates/fedora/) (default base template)
* [Fedora - Minimal](/doc/templates/fedora-minimal)
* [Fedora - Xfce](/doc/templates/fedora-xfce)
* [Debian](/doc/templates/debian/)
* [Fedora] (default)
* Fedora [Minimal]
* Fedora [Xfce]
* [Debian]
* Debian [Minimal]
## Community
## Community templates
These templates are supported by the Qubes community.
Some of them are available in ready-to-use binary package form (built by the Qubes developers), while others are available only in source code form.
In all cases, the Qubes OS Project does not provide updates for these templates.
However, such updates may be provided by the template maintainer.
These templates are supported by the Qubes community. Some of them are available in ready-to-use binary package form (built by the Qubes developers), while others are available only in source code form. In all cases, the Qubes OS Project does not provide updates for these templates. However, such updates may be provided by the template maintainer.
By installing these templates, you are trusting not only the Qubes developers and the distribution maintainers, but also the template maintainer.
In addition, these templates may be somewhat less stable, since the Qubes developers do not test them.
By installing these templates, you are trusting not only the Qubes developers and the distribution maintainers, but also the template maintainer. In addition, these templates may be somewhat less stable, since the Qubes developers do not test them.
* [Whonix]
* [Ubuntu]
* [Arch Linux]
* [CentOS](/doc/templates/centos/)
* [Whonix](/doc/templates/whonix/)
* [Ubuntu](/doc/templates/ubuntu/)
* [Archlinux](/doc/templates/archlinux/)
* [CentOS](/doc/templates/centos/)
## Installing
Certain TemplateVMs come preinstalled with Qubes OS.
However, there may be times when you wish to install a fresh TemplateVM from the Qubes repositories, e.g.:
## How to install, uninstall, reinstall, and switch
* When a TemplateVM version you're using reaches [end-of-life][supported].
* When a new version of a TemplateVM that you wish to use becomes [supported].
* When you suspect your TemplateVM has been compromised.
* When you have made modifications to your TemplateVM that you no longer want.
### How to install
Please refer to each TemplateVM's installation instructions below.
Please refer to each TemplateVM's installation instructions.
Usually, the installation method is to execute the following type of command in dom0:
$ sudo qubes-dom0-update qubes-template-<name>
@ -60,7 +80,24 @@ Usually, the installation method is to execute the following type of command in
(where `qubes-template-<name>` is the name of your TemplateVM package)
### How to uninstall
## After Installing
After installing a fresh TemplateVM, we recommend performing the following steps:
1. [Update the TemplateVM].
2. [Switch any TemplateBasedVMs that are based on the old TemplateVM to the new one][switch].
3. If desired, [uninstall the old TemplateVM].
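As a rough sketch of these steps from a dom0 terminal (the template and qube names here are hypothetical; substitute your own):

```shell
# 1. Update the freshly installed template (hypothetically "fedora-30"):
qvm-run -a fedora-30 'sudo dnf -y update'   # use apt-get in Debian-based templates
# 2. Switch an existing TemplateBasedVM (hypothetically "work") to the new template:
qvm-prefs work template fedora-30
# 3. Optionally remove the old template package (hypothetically "fedora-29"):
sudo dnf remove qubes-template-fedora-29
```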
## Updating
Updating TemplateVMs is an important part of [Updating Qubes OS].
Please see [Updating software in TemplateVMs].
## Uninstalling
To uninstall a TemplateVM, execute the following type of command in dom0:
@ -68,19 +105,19 @@ To uninstall a TemplateVM, execute the following type of command in dom0:
(where `qubes-template-<name>` is the name of your TemplateVM package)
If this doesn't work, you can [remove it manually](/doc/remove-vm-manually/).
If this doesn't work, please see [How to Remove VMs Manually].
If the Applications Menu entry doesn't go away after you uninstall a TemplateVM, execute the following type of command in dom0:
$ rm ~/.local/share/applications/<template-vm-name>
### How to reinstall
## Reinstalling
To reinstall a currently installed TemplateVM, see [here](/doc/reinstall-template/).
Please see [How to Reinstall a TemplateVM].
### How to switch templates
## Switching
When you install a new template or upgrade a clone of a template, it is recommended that you switch everything that was set to the old template to the new template:
@ -92,15 +129,19 @@ When you install a new template or upgrade a clone of a template, it is recommen
Applications Menu --> System Tools --> Qubes Template Manager
3. Base the [DisposableVM Template](/doc/glossary/#disposablevm-template) on the new template.
3. Base the [DisposableVM Template] on the new template.
[user@dom0 ~]$ qvm-create -l red -t new-template new-template-dvm
[user@dom0 ~]$ qvm-prefs new-template-dvm template_for_dispvms True
[user@dom0 ~]$ qvm-features new-template-dvm appmenus-dispvm 1
[user@dom0 ~]$ qubes-prefs default-dispvm new-template-dvm
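The per-qube part of this switch can also be done entirely from the command line; a minimal sketch, assuming a TemplateBasedVM hypothetically named `work` and a new template named `new-template`:

```shell
# In a dom0 terminal: the qube must be shut down before its template can change.
qvm-shutdown --wait work
qvm-prefs work template new-template
```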
## Advanced
## Inheritance and Persistence
The following sections cover advanced topics pertaining to TemplateVMs.
### Inheritance and Persistence
Whenever a TemplateBasedVM is created, the contents of the `/home` directory of its parent TemplateVM are *not* copied to the child TemplateBasedVM's `/home`.
The child TemplateBasedVM's `/home` is always independent from its parent TemplateVM's `/home`, which means that any subsequent changes to the parent TemplateVM's `/home` will not affect the child TemplateBasedVM's `/home`.
@ -114,24 +155,110 @@ No changes in any other directories in TemplateBasedVMs persist in this manner.
|TemplateBasedVM (3) | `/etc/skel` to `/home`, `/usr/local.orig` to `/usr/local` | `/rw` (includes `/home`, `/usr/local` and `bind-dirs`)
|DisposableVM | `/rw` (includes `/home`, `/usr/local` and `bind-dirs`) | Nothing
(1) Upon creation
(2) Following shutdown
(3) Including [DisposableVM Templates](/doc/glossary/#disposablevm-template)
(1) Upon creation
(2) Following shutdown
(3) Including [DisposableVM Template]s
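The persistence rules above can be checked empirically; a minimal sketch, run inside a TemplateBasedVM (the file names are arbitrary):

```shell
# Inside a TemplateBasedVM: /home, /usr/local, and bind-dirs are persistent...
echo persists > ~/still-here.txt         # survives a VM restart
sudo touch /usr/local/also-persistent    # survives a VM restart
# ...while the rest of the root filesystem is reverted on shutdown:
sudo touch /opt/discarded-on-shutdown    # gone after a VM restart
```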
## Updating TemplateVMs
### Trusting your TemplateVMs
Templates are not automatically updated when [updating dom0](/doc/software-update-dom0/).
This is by design, since doing so would cause all user modifications to templates to be lost.
Instead, you should update your templates [from within each template](/doc/software-update-vm/).
Since a TemplateVM supplies the root filesystem to the AppVMs in which you actually do your work, the TemplateVM must be considered as trusted as the most trusted AppVM based on it.
In other words, if your TemplateVM is compromised, e.g. because you installed an application whose *installer's scripts* were malicious, then *all* AppVMs based on this template will inherit the compromise.
When you create a StandaloneVM from a TemplateVM, the StandaloneVM is independent from the TemplateVM.
It will not be updated when the TemplateVM is updated.
Rather, it must be updated individually from inside the StandaloneVM.
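A minimal sketch of updating such a qube from a dom0 terminal (assuming a Debian-based StandaloneVM hypothetically named `builder`):

```shell
# Updates run inside the StandaloneVM itself, not in its original TemplateVM:
qvm-run --pass-io builder 'sudo apt-get update && sudo apt-get -y upgrade'
```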
There are several ways to deal with this problem:
* Only install packages from trusted sources -- e.g. from the pre-configured Fedora repositories.
All those packages are signed by Fedora, and we expect that at least the package's installation scripts are not malicious.
This is enforced by default (at the [firewall VM level](/doc/firewall/)), by not allowing any networking connectivity in the default template VM, except for access to the Fedora repos.
* Use *standalone VMs* (see below) for installation of untrusted software packages.
* Use multiple templates (see below) for different classes of domains, e.g. a less trusted template, used for creation of less trusted AppVMs, would get various packages from less trusted vendors, while the template used for more trusted AppVMs will only get packages from the standard Fedora repos.
Some popular questions:
> So, why should we actually trust Fedora repos -- it also contains large amount of third-party software that might be buggy, right?
As far as the template's compromise is concerned, it doesn't really matter whether `/usr/bin/firefox` is buggy and can be exploited, or not.
What matters is whether its *installation* scripts (such as %post in the rpm.spec) are benign or not.
A TemplateVM should be used only for installing packages, and nothing more, so it should never get a chance to actually run `/usr/bin/firefox` and become infected by it, in case the latter was compromised.
Also, some of your more trusted AppVMs would have networking restrictions enforced by the [firewall VM](/doc/firewall/), and again they should not fear this proverbial `/usr/bin/firefox` being potentially buggy and easy to compromise.
> But why trust Fedora?
Because we chose to use Fedora as a vendor for the Qubes OS foundation (e.g. for Dom0 packages and for AppVM packages).
We also chose to trust several other vendors, such as Xen.org, kernel.org, and a few others whose software we use in Dom0.
We had to trust *somebody* as we are unable to write all the software from scratch ourselves.
But there is a big difference in trusting all Fedora packages to be non-malicious (in terms of installation scripts) vs. trusting all those packages are non-buggy and non-exploitable.
We certainly do not assume the latter.
> So, are the template VMs as trusted as Dom0?
Not quite.
Dom0 compromise is absolutely fatal, and it leads to Game Over<sup>TM</sup>.
However, a compromise of a template affects only a subset of all your AppVMs (in case you use more than one template, or also some standalone VMs).
Also, if your AppVMs are network disconnected, even though their filesystems might get compromised due to the corresponding template compromise, it would still be difficult for the attacker to actually leak out the data stolen in an AppVM.
Not impossible (due to the existence of covert channels between VMs on the x86 architecture), but difficult and slow.
## Important Notes
### Note on treating TemplateBasedVMs' root filesystem non-persistence as a security feature
* `qvm-trim-template` is not necessary. All VMs are created in a thin pool
and trimming is handled automatically. No user action is required.
Any TemplateBasedVM has a root filesystem that is non-persistent across VM reboots.
In other words, whatever changes the VM (or the malware running in it) makes to its root filesystem are automatically discarded whenever the VM is restarted.
This might seem like an excellent anti-malware mechanism to be used inside the VM.
However, one should be careful with treating this property as a reliable way to keep the VM malware-free.
This is because the non-persistence, in the case of normal VMs, applies only to the root filesystem and not to the user filesystem (on which the `/home`, `/rw`, and `/usr/local` are stored) for obvious reasons.
It is possible that malware, especially malware specifically written to target Qubes-based VMs, could install its hooks inside the user home directory files only.
Examples of obvious places for such hooks could be: `.bashrc`, the Firefox profile directory which contains the extensions, or some PDF or DOC documents that are expected to be opened by the user frequently (assuming the malware found an exploitable bug in the PDF or DOC reader), and surely many other places, all in the user's home directory.
One advantage of the non-persistent rootfs, though, is that the malware is still inactive before the user's filesystem gets mounted and "processed" by system/applications, which might theoretically allow for some scanning programs (or a skilled user) to reliably scan for signs of infections of the AppVM.
But, of course, the problem of finding malware hooks in general is hard, so this would work likely only for some special cases (e.g. an AppVM which doesn't use Firefox, as otherwise it would be hard to scan the Firefox profile directory reliably to find malware hooks there).
Also note that the user filesystem's metadata might be maliciously modified by malware in order to exploit a hypothetical bug in the AppVM kernel whenever it mounts the malformed filesystem.
However, such exploits will automatically stop working (and so the infection might be cleared automatically) once the hypothetical bug is patched and the update applied (via a template update), which is an exceptional feature of Qubes OS.
Also note that DisposableVMs do not have a persistent user filesystem, and so they start up completely "clean" every time.
(In this context, "clean" means the same as their template's filesystem, of course.)
### Important Notes
* `qvm-trim-template` is no longer necessary or available in Qubes 4.0 and higher.
All VMs are created in a thin pool and trimming is handled automatically.
No user action is required.
See [Disk Trim] for more information.
* RPM-installed templates are "system managed" and therefore cannot be backed up using Qubes' built-in backup function.
In order to ensure the preservation of your custom settings and the availability of a "known-good" backup template, you may wish to clone the default system template and use your clone as the default template for your AppVMs.
* Some templates are available in ready-to-use binary form, but some of them are available only as source code, which can be built using the [Qubes Builder].
In particular, some template "flavors" are available in source code form only.
For the technical details of the template system, please see [TemplateVM Implementation].
Take a look at the [Qubes Builder] documentation for instructions on how to compile them.
[Getting Started]: /getting-started/
[TemplateVMs]: /doc/glossary/#templatevm
[TemplateBasedVMs]: /doc/glossary/#templatebasedvm
[Fedora]: /doc/templates/fedora/
[Minimal]: /doc/templates/minimal/
[Xfce]: /doc/templates/fedora-xfce
[Debian]: /doc/templates/debian/
[Whonix]: /doc/templates/whonix/
[Ubuntu]: /doc/templates/ubuntu/
[Arch Linux]: /doc/templates/archlinux/
[CentOS]: /doc/templates/centos/
[Qubes Builder]: /doc/qubes-builder/
[TemplateVM Implementation]: /doc/template-implementation/
[How to Remove VMs Manually]: /doc/remove-vm-manually/
[DisposableVM Template]: /doc/glossary/#disposablevm-template
[Updating Qubes OS]: /doc/updating-qubes-os/
[Disk Trim]: /doc/disk-trim/
[Inheritance and Persistence]: #inheritance-and-persistence
[supported]: /doc/supported-versions/
[Update the TemplateVM]: #updating
[switch]: #switching
[uninstall the old TemplateVM]: #uninstalling
[Updating software in TemplateVMs]: /doc/software-update-domu/#updating-software-in-templatevms
[How to Reinstall a TemplateVM]: /doc/reinstall-template/

View File

@ -7,9 +7,9 @@ permalink: /doc/windows/
Windows VMs in Qubes OS
=======================
Like any other unmodified OSes, Windows can be installed in Qubes as an [HVM](/doc/hvm/) domain.
Like any other unmodified OSes, Windows can be installed in Qubes as an [HVM](/doc/standalone-and-hvm/) domain.
Qubes Windows Tools are then usually installed to provide integration with the rest of the Qubes system; they also include Xen's paravirtualized (PV) drivers to increase performance compared to qemu emulated devices. Alternatively, only Xen's PV drivers can be installed if integration with Qubes isn't required or if the tools aren't supported on a give version of Windows. In the latter case, one would have to [enable inter-VM networking](https://www.qubes-os.org/doc/firewall/#enabling-networking-between-two-qubes) to be able to exchange files with HVMs.
Qubes Windows Tools are then usually installed to provide integration with the rest of the Qubes system; they also include Xen's paravirtualized (PV) drivers to increase performance compared to QEMU-emulated devices.
Alternatively, only Xen's PV drivers can be installed if integration with Qubes isn't required or if the tools aren't supported on a given version of Windows.
In the latter case, one would have to [enable inter-VM networking](/doc/firewall/#enabling-networking-between-two-qubes) to be able to exchange files with HVMs.
For more information about Windows VMs in Qubes OS, please see the specific guides below:

View File

@ -61,7 +61,7 @@ By default, most domUs lack direct hardware access.
TemplateVM
----------
Template Virtual Machine.
[Template Virtual Machine](/doc/templates/).
Any [VM](#vm) that supplies its root filesystem to another VM.
TemplateVMs are intended for installing and updating software applications, but not for running them.
@ -76,7 +76,7 @@ Any [VM](#vm) that depends on a [TemplateVM](#templatevm) for its root filesyste
Standalone(VM)
--------------
Standalone (Virtual Machine).
[Standalone (Virtual Machine)](/doc/standalone-and-hvm/).
In general terms, a [VM](#vm) is described as **standalone** if and only if it does not depend on any other VM for its root filesystem.
(In other words, a VM is standalone if and only if it is not a TemplateBasedVM.)
More specifically, a **StandaloneVM** is a type of VM in Qubes that is created by cloning a TemplateVM.
@ -87,7 +87,8 @@ AppVM
-----
Application Virtual Machine.
A [VM](#vm) that is intended for running software applications.
Typically a TemplateBasedVM, but may be a StandaloneVM. Never a TemplateVM.
Typically a TemplateBasedVM, but may be a StandaloneVM.
Never a TemplateVM.
NetVM
-----
@ -113,7 +114,8 @@ A FirewallVM called `sys-firewall` is created by default in most Qubes installat
DisposableVM
------------
[Disposable Virtual Machine]. A temporary [AppVM](#appvm) based on a [DisposableVM Template](#disposablevm-template) that can quickly be created, used, and destroyed.
[Disposable Virtual Machine](/doc/disposablevm/).
A temporary [AppVM](#appvm) based on a [DisposableVM Template](#disposablevm-template) that can quickly be created, used, and destroyed.
DispVM
------
@ -133,7 +135,7 @@ Rather, DisposableVM Templates are complementary to TemplateVMs insofar as Dispo
There are two main kinds of DisposableVM Templates:
* **Dedicated** DisposableVM Templates are intended neither for installing nor running software.
Rather, they are intended for *customizing* or *configuring* software that has already been installed on the TemplateVM on which the DisposableVM Template is based (see [DisposableVM Customization]).
Rather, they are intended for *customizing* or *configuring* software that has already been installed on the TemplateVM on which the DisposableVM Template is based (see [DisposableVM Customization](/doc/disposablevm-customization/)).
This software is then intended to be run (in its customized state) in DisposableVMs that are based on the DisposableVM Template.
* **Non-dedicated** DisposableVM Templates are typically [AppVMs](#appvm) on which DisposableVMs are based.
For example, an AppVM could be used to generate and store trusted data.
@ -148,7 +150,7 @@ However, paravirtualized VMs require a PV-enabled kernel and PV drivers, so the
HVM
---
Hardware-assisted Virtual Machine.
[Hardware-assisted Virtual Machine](/doc/standalone-and-hvm/).
Any fully virtualized, or hardware-assisted, [VM](#vm) utilizing the virtualization extensions of the host CPU.
Although HVMs are typically slower than paravirtualized VMs due to the required emulation, HVMs allow the user to create domains based on any operating system.
@ -193,6 +195,3 @@ QWT
----
An abbreviation of Qubes [Windows Tools](#windows-tools).
[Disposable Virtual Machine]: /doc/disposablevm/
[DisposableVM Customization]: /doc/disposablevm-customization/

View File

@ -23,9 +23,9 @@ Dom0
DomU
----
* [qrexec-client-vm](https://raw.githubusercontent.com/QubesOS/qubes-core-agent-linux/master/doc/vm-tools/qrexec-client-vm.rst)
* [qvm-copy-to-vm](https://raw.githubusercontent.com/QubesOS/qubes-core-agent-linux/master/doc/vm-tools/qvm-copy-to-vm.rst)
* [qvm-open-in-dvm](https://raw.githubusercontent.com/QubesOS/qubes-core-agent-linux/master/doc/vm-tools/qvm-open-in-dvm.rst)
* [qvm-open-in-vm](https://raw.githubusercontent.com/QubesOS/qubes-core-agent-linux/master/doc/vm-tools/qvm-open-in-vm.rst)
* [qvm-run-vm](https://raw.githubusercontent.com/QubesOS/qubes-core-agent-linux/master/doc/vm-tools/qvm-run-vm.rst)
* [qrexec-client-vm](https://github.com/QubesOS/qubes-core-qrexec/blob/master/agent/qrexec-client-vm.rst)
* [qvm-copy-to-vm](https://github.com/QubesOS/qubes-core-agent-linux/blob/master/doc/vm-tools/qvm-copy-to-vm.rst)
* [qvm-open-in-dvm](https://github.com/QubesOS/qubes-core-agent-linux/blob/master/doc/vm-tools/qvm-open-in-dvm.rst)
* [qvm-open-in-vm](https://github.com/QubesOS/qubes-core-agent-linux/blob/master/doc/vm-tools/qvm-open-in-vm.rst)
* [qvm-run-vm](https://github.com/QubesOS/qubes-core-agent-linux/blob/master/doc/vm-tools/qvm-run-vm.rst)

View File

@ -35,14 +35,23 @@ In Dom0 install anti-evil-maid:
sudo qubes-dom0-update anti-evil-maid
~~~
More information regarding configuration in the [README](https://github.com/QubesOS/qubes-antievilmaid/blob/master/anti-evil-maid/README) file.
For more information, see the [qubes-antievilmaid](https://github.com/QubesOS/qubes-antievilmaid) repository, which includes a `README`.
Security Considerations
-----------------------
[Qubes security guidelines](/doc/security-guidelines/) dictate that USB devices should never be attached directly to dom0, since this can result in the entire system being compromised. However, in its default configuration, installing and using AEM requires attaching a USB drive (i.e., [mass storage device](https://en.wikipedia.org/wiki/USB_mass_storage_device_class)) directly to dom0. (The other option is to install AEM to an internal disk. However, this carries significant security implications, as explained [here](https://blog.invisiblethings.org/2011/09/07/anti-evil-maid.html).) This presents us with a classic security trade-off: each Qubes user must make a choice between protecting dom0 from a potentially malicious USB drive, on the one hand, and protecting the system from Evil Maid attacks, on the other hand. Given the practical feasibility of attacks like [BadUSB](https://srlabs.de/badusb/) and revelations regarding pervasive government hardware backdoors, this is no longer a straightforward decision. New, factory-sealed USB drives cannot simply be assumed to be "clean" (e.g., to have non-malicious microcontroller firmware). Therefore, it is up to each individual Qubes user to evaluate the relative risk of each attack vector against his or her security model.
[Qubes security guidelines](/doc/security-guidelines/) dictate that USB devices should never be attached directly to dom0, since this can result in the entire system being compromised.
However, in its default configuration, installing and using AEM requires attaching a USB drive (i.e., [mass storage device](https://en.wikipedia.org/wiki/USB_mass_storage_device_class)) directly to dom0.
(The other option is to install AEM to an internal disk.
However, this carries significant security implications, as explained [here](https://blog.invisiblethings.org/2011/09/07/anti-evil-maid.html).)
This presents us with a classic security trade-off: each Qubes user must make a choice between protecting dom0 from a potentially malicious USB drive, on the one hand, and protecting the system from Evil Maid attacks, on the other hand.
Given the practical feasibility of attacks like [BadUSB](https://srlabs.de/badusb/) and revelations regarding pervasive government hardware backdoors, this is no longer a straightforward decision.
New, factory-sealed USB drives cannot simply be assumed to be "clean" (e.g., to have non-malicious microcontroller firmware).
Therefore, it is up to each individual Qubes user to evaluate the relative risk of each attack vector against his or her security model.
For example, a user who frequently travels with a Qubes laptop holding sensitive data may be at a much higher risk of Evil Maid attacks than a home user with a stationary Qubes desktop. If the frequent traveler judges her risk of an Evil Maid attack to be higher than the risk of a malicious USB device, she might reasonably opt to install and use AEM. On the other hand, the home user might deem the probability of an Evil Maid attack occurring in her own home to be so low that there is a higher probability that any USB drive she purchases is already compromised, in which case she might reasonably opt never to attach any USB devices directly to dom0. (In either case, users can--and should--secure dom0 against further USB-related attacks through the use of a [USBVM](/doc/security-guidelines/#creating-and-using-a-usbvm).)
For example, a user who frequently travels with a Qubes laptop holding sensitive data may be at a much higher risk of Evil Maid attacks than a home user with a stationary Qubes desktop.
If the frequent traveler judges her risk of an Evil Maid attack to be higher than the risk of a malicious USB device, she might reasonably opt to install and use AEM.
On the other hand, the home user might deem the probability of an Evil Maid attack occurring in her own home to be so low that there is a higher probability that any USB drive she purchases is already compromised, in which case she might reasonably opt never to attach any USB devices directly to dom0.
(In either case, users can--and should--secure dom0 against further USB-related attacks through the use of a [USBVM](/doc/security-guidelines/#creating-and-using-a-usbvm).)
For more information, please see [this discussion thread](https://groups.google.com/d/msg/qubes-devel/EBc4to5IBdg/n1hfsHSfbqsJ).
@ -51,3 +60,4 @@ Known issues
- USB 3.0 isn't supported yet
- [AEM is not compatible with having an SSD cache](https://groups.google.com/d/msgid/qubes-users/70021590-fb3a-4f95-9ce5-4b340530ddbf%40petaramesh.org)

View File

@ -16,11 +16,23 @@ The Role of the Firewall
**[Firewalling in Qubes](/doc/firewall/) is not intended to be a leak-prevention mechanism.**
There are several reasons for this, which will be explained below. However, the main reason is that Qubes cannot prevent an attacker who has compromised one AppVM with restrictive firewall rules from leaking data via cooperative covert channels through another compromised AppVM with nonrestrictive firewall rules.
There are several reasons for this, which will be explained below.
However, the main reason is that Qubes cannot prevent an attacker who has compromised one AppVM with restrictive firewall rules from leaking data via cooperative covert channels through another compromised AppVM with nonrestrictive firewall rules.
For example, suppose you have an `email` AppVM. You have set the firewall rules for `email` such that it can communicate only with your email server. Now suppose that an attacker sends you a GPG-encrypted message which exploits a hypothetical bug in the GnuPG process. There are now multiple ways the attacker could proceed to leak data (such as confidential email messages) from `email`. The most obvious way is by simply emailing the data to himself. Another possibility is that the attacker has also compromised another one of your AppVMs, such as your `netvm`, which is normally assumed to be untrusted and has unrestricted access to the network. In this case, the attacker might move data from `email` to the `netvm` via a covert channel, such as the CPU cache. Such covert channels have been described and even implemented in some "lab environments" and might allow for bandwidths of even a few tens of bits/sec. It is unclear whether such channels could be implemented in a real world system, where multiple VMs are running at the same time, each handling tens or hundreds of processes, all using the same cache memory, but it is worth keeping in mind. Of course, this would require special malware written specifically to attack Qubes OS, and perhaps even a specific Qubes OS version and/or configuration. Nevertheless, it might be possible.
For example, suppose you have an `email` AppVM.
You have set the firewall rules for `email` such that it can communicate only with your email server.
Now suppose that an attacker sends you a GPG-encrypted message which exploits a hypothetical bug in the GnuPG process.
There are now multiple ways the attacker could proceed to leak data (such as confidential email messages) from `email`.
The most obvious way is by simply emailing the data to himself.
Another possibility is that the attacker has also compromised another one of your AppVMs, such as your `netvm`, which is normally assumed to be untrusted and has unrestricted access to the network.
In this case, the attacker might move data from `email` to the `netvm` via a covert channel, such as the CPU cache.
Such covert channels have been described and even implemented in some "lab environments" and might allow for bandwidths of even a few tens of bits/sec.
It is unclear whether such channels could be implemented in a real world system, where multiple VMs are running at the same time, each handling tens or hundreds of processes, all using the same cache memory, but it is worth keeping in mind.
Of course, this would require special malware written specifically to attack Qubes OS, and perhaps even a specific Qubes OS version and/or configuration.
Nevertheless, it might be possible.
Note that physically air-gapped machines are not necessarily immune to this problem. Covert channels can potentially take many forms (e.g., sneakernet thumb drive, bluetooth, or even microphone and speakers).
Note that physically air-gapped machines are not necessarily immune to this problem.
Covert channels can potentially take many forms (e.g., sneakernet thumb drive, bluetooth, or even microphone and speakers).
For a further discussion of covert channels, see [this thread](https://groups.google.com/d/topic/qubes-users/AqZV65yZLuU/discussion) and [#817](https://github.com/QubesOS/qubes-issues/issues/817).
@ -31,10 +43,15 @@ In order to understand and attempt to prevent data leaks in Qubes, we must disti
1. **Intentional leaks.** Malicious software which actively tries to leak data out of an AppVM, perhaps via cooperative covert channels established with other malicious software in another AppVM, or on some server via networking, if networking (even limited) is allowed for the AppVM.
2. **Intentional sniffing.** Malicious software trying to use side channels to, e.g., actively guess some key material used in another VM by some non-malicious software there (e.g., non-leak-proof GPG accidentally leaking out bits of the private key by generating some timing patterns when using this key for some crypto operation). Such attacks have been described in the academic literature, but it is doubtful that they would succeed in practice in a moderately busy general purpose system like Qubes OS where the attacker normally has no way to trigger the target crypto operation explicitly and it is normally required that the attacker trigger many such operations.
2. **Intentional sniffing.** Malicious software trying to use side channels to, e.g., actively guess some key material used in another VM by some non-malicious software there (e.g., non-leak-proof GPG accidentally leaking out bits of the private key by generating some timing patterns when using this key for some crypto operation).
Such attacks have been described in the academic literature, but it is doubtful that they would succeed in practice in a moderately busy general purpose system like Qubes OS where the attacker normally has no way to trigger the target crypto operation explicitly and it is normally required that the attacker trigger many such operations.
3. **Unintentional leaks.** Non-malicious software which is either buggy or doesn't maintain the privacy of user data, whether by design or accident. For example, software which automatically sends error reports to a remote server, where these reports contain details about the system which the user did not want to share.
3. **Unintentional leaks.** Non-malicious software which is either buggy or doesn't maintain the privacy of user data, whether by design or accident.
For example, software which automatically sends error reports to a remote server, where these reports contain details about the system which the user did not want to share.
Both Qubes firewall and an empty NetVM (i.e., setting the NetVM of an AppVM to "none") can fully protect against leaks of type 3. However, neither Qubes firewall nor an empty NetVM are guaranteed to protect against leaks of types 1 and 2. There are few effective, practical policy measures available to end-users today to stop the leaks of type 1. It is likely that the only way to fully protect against leaks of type 2 is to either pause or shut down all other VMs while performing sensitive operations in the target VM(s) (such as key generation).
Both Qubes firewall and an empty NetVM (i.e., setting the NetVM of an AppVM to "none") can fully protect against leaks of type 3.
However, neither Qubes firewall nor an empty NetVM are guaranteed to protect against leaks of types 1 and 2.
There are few effective, practical policy measures available to end-users today to stop the leaks of type 1.
It is likely that the only way to fully protect against leaks of type 2 is to either pause or shut down all other VMs while performing sensitive operations in the target VM(s) (such as key generation).
For further discussion, see [this thread](https://groups.google.com/d/topic/qubes-users/t0cmNfuVduw/discussion).

View File

@ -6,7 +6,8 @@ permalink: /doc/device-handling-security/
# Device Handling Security #
Any additional ability a VM gains is additional attack surface. It's a good idea to always attach the minimum entity required in a VM.
Any additional ability a VM gains is additional attack surface.
It's a good idea to always attach the minimum entity required in a VM.
For example, attaching a full USB-device offers [more attack surface than attaching a single block device][USB security], while
attaching a full block device (e.g. `sda`) again offers more attack surface than attaching a single partition (e.g. `sda1`), since the targetVM doesn't have to parse the partition-table.
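This least-privilege rule can be applied directly with the standard Qubes device tools from a dom0 terminal; the qube names and device identifiers (`sda1`, `2-5`) below are placeholders:

```shell
# Attach only the single partition a qube actually needs...
qvm-block attach work sys-usb:sda1

# ...rather than the whole drive or the whole USB device,
# which exposes far more driver and parsing surface:
#   qvm-block attach work sys-usb:sda
#   qvm-usb attach work sys-usb:2-5

# Detach when done:
qvm-block detach work sys-usb:sda1
```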
@ -15,22 +16,38 @@ attaching a full block device (e.g. `sda`) again offers more attack surface than
## PCI Security ##
Attaching a PCI device to a qube has serious security implications. It exposes the device driver running in the qube to an external device (and sourceVM, which contains the device - e.g. `sys-usb`). In many cases a malicious device can choose what driver will be loaded (for example by manipulating device metadata like vendor and product identifiers) - even if the intended driver is sufficiently secure, the device may try to attack a different, less secure driver.
Furthermore that VM has full control of the device and may be able to exploit bugs or malicious implementation of the hardware, as well as plain security problems the hardware may pose. (For example, if you attach a USB controller, all the security implications of USB passthrough apply as well.)
Attaching a PCI device to a qube has serious security implications.
It exposes the device driver running in the qube to an external device (and sourceVM, which contains the device - e.g. `sys-usb`).
In many cases a malicious device can choose what driver will be loaded (for example by manipulating device metadata like vendor and product identifiers) - even if the intended driver is sufficiently secure, the device may try to attack a different, less secure driver.
Furthermore that VM has full control of the device and may be able to exploit bugs or malicious implementation of the hardware, as well as plain security problems the hardware may pose.
(For example, if you attach a USB controller, all the security implications of USB passthrough apply as well.)
By default, Qubes requires any PCI device to be resettable from the outside (i.e. via the hypervisor), which completely reinitialises the device. This ensures that any device that was attached to a compromised VM, even if that VM was able to use bugs in the PCI device to inject malicious code, can be trusted again. (Or at least as trusted as it was when Qubes booted.)
By default, Qubes requires any PCI device to be resettable from the outside (i.e. via the hypervisor), which completely reinitialises the device.
This ensures that any device that was attached to a compromised VM, even if that VM was able to use bugs in the PCI device to inject malicious code, can be trusted again.
(Or at least as trusted as it was when Qubes booted.)
Some devices do not implement a reset option. In these cases, Qubes by default does not allow attaching the device to any VM. If you decide to override this precaution, beware that the device may only be trusted when attached to the first VM. Afterwards, it should be **considered tainted** until the whole system is shut down. Even without malicious intent, usage data may be leaked.
Some devices do not implement a reset option.
In these cases, Qubes by default does not allow attaching the device to any VM.
If you decide to override this precaution, beware that the device may only be trusted when attached to the first VM.
Afterwards, it should be **considered tainted** until the whole system is shut down.
Even without malicious intent, usage data may be leaked.
In case device reset is disabled for any reason, detaching the device should be considered a risk. Ideally, devices for which the `no-strict-reset` option is set are attached once to a VM which isn't shut down until the system is shut down.
In case device reset is disabled for any reason, detaching the device should be considered a risk.
Ideally, devices for which the `no-strict-reset` option is set are attached once to a VM which isn't shut down until the system is shut down.
Additionally, Qubes restricts the config-space a VM may use to communicate with a PCI device. Only whitelisted registers are accessible. However, some devices or applications require full PCI access. In these cases, the whole config-space may be allowed. you're potentially weakening the device isolation, especially if your system is not equipped with a VT-d Interrupt Remapping unit. This increases the VM's ability to run a [side channel attack] and vulnerability to the same.
Additionally, Qubes restricts the config-space a VM may use to communicate with a PCI device.
Only whitelisted registers are accessible.
However, some devices or applications require full PCI access.
In these cases, the whole config-space may be allowed.
If you do, you're potentially weakening the device isolation, especially if your system is not equipped with a VT-d Interrupt Remapping unit.
This increases the VM's ability to run a [side channel attack] and vulnerability to the same.
See [Xen PCI Passthrough: PV guests and PCI quirks] and [Software Attacks on Intel VT-d] \(page 7) for more details.
## USB Security ##
The connection of an untrusted USB device to dom0 is a security risk since the device can attack an arbitrary USB driver (which are included in the linux kernel), exploit bugs during partition-table-parsing or simply pretend to be a keyboard. There are many ready-to-use implementations of such attacks, e.g. a [USB Rubber Ducky][rubber duck].
The connection of an untrusted USB device to dom0 is a security risk, since the device can attack an arbitrary USB driver (many of which are included in the Linux kernel), exploit bugs during partition-table parsing, or simply pretend to be a keyboard.
There are many ready-to-use implementations of such attacks, e.g. a [USB Rubber Ducky][rubber duck].
The whole USB stack is put to work to parse the data presented by the USB device in order to determine if it is a USB mass storage device, to read its configuration, etc.
This happens even if the drive is then assigned and mounted in another qube.

View File

@ -16,9 +16,8 @@ The Qubes Firewall
Understanding firewalling in Qubes
----------------------------------
Every qube in Qubes is connected to the network via a FirewallVM, which is used to
enforce network-level policies. By default there is one default FirewallVM, but
the user is free to create more, if needed.
Every qube in Qubes is connected to the network via a FirewallVM, which is used to enforce network-level policies.
By default there is one default FirewallVM, but the user is free to create more, if needed.
For more information, see the following:
@ -29,24 +28,19 @@ For more information, see the following:
How to edit rules
-----------------
In order to edit rules for a given qube, select it in the Qubes
Manager and press the "firewall" button:
In order to edit rules for a given qube, select it in the Qubes Manager and press the "firewall" button:
![r2b1-manager-firewall.png](/attachment/wiki/QubesFirewall/r2b1-manager-firewall.png)
*R4.0 note:* ICMP and DNS are no longer accessible in the GUI, but can be changed
via `qvm-firewall` described below. Connections to Updates Proxy are no longer made
over network so can not be allowed or blocked with firewall rules
(see [R4.0 Updates proxy](https://www.qubes-os.org/doc/software-update-vm/) for more detail.
*R4.0 note:* ICMP and DNS are no longer accessible in the GUI, but can be changed via `qvm-firewall` described below.
Connections to Updates Proxy are no longer made over the network, so they cannot be allowed or blocked with firewall rules (see [R4.0 Updates proxy](https://www.qubes-os.org/doc/software-update-vm/) for more detail).
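For instance, ICMP and DNS rules can still be managed from a dom0 terminal with `qvm-firewall`; the qube name `work` is a placeholder, and exact option names may vary between releases:

```shell
# List the current rules of a qube
qvm-firewall work list

# Allow DNS queries and ICMP (ping), then drop everything else
qvm-firewall work add accept specialtarget=dns
qvm-firewall work add accept proto=icmp
qvm-firewall work add drop
```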
Note that if you specify a rule by DNS name it will be resolved to IP(s)
*at the moment of applying the rules*, and not on the fly for each new
connection. This means it will not work for servers using load balancing. More
on this in the message quoted below.
Note that if you specify a rule by DNS name it will be resolved to IP(s) *at the moment of applying the rules*, and not on the fly for each new connection.
This means it will not work for servers using load balancing.
More on this in the message quoted below.
Alternatively, one can use the `qvm-firewall` command from Dom0 to edit the
firewall rules by hand. The firewall rules for each VM are saved in an XML file
in that VM's directory in dom0:
Alternatively, one can use the `qvm-firewall` command from Dom0 to edit the firewall rules by hand.
The firewall rules for each VM are saved in an XML file in that VM's directory in dom0:
/var/lib/qubes/appvms/<vm-name>/firewall.xml
@ -55,31 +49,22 @@ This equates to somewhere between 35 and 39 rules.
If this limit is exceeded, the qube will not start.
The limit was removed in R4.0.
It is possible to work around this limit by enforcing the rules on the qube itself
by putting appropriate rules in `/rw/config`.
It is possible to work around this limit by enforcing the rules on the qube itself by putting appropriate rules in `/rw/config`.
See [Where to put firewall rules](#where-to-put-firewall-rules).
In complex cases, it might be appropriate to load a ruleset using `iptables-restore`
called from `/rw/config/rc.local`.
In complex cases, it might be appropriate to load a ruleset using `iptables-restore` called from `/rw/config/rc.local`.
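A minimal sketch of that approach; the rules file path and name are illustrative:

```shell
# /rw/config/rc.local -- runs at qube start-up.
# Load a full ruleset saved earlier, e.g. with:
#   sudo iptables-save > /rw/config/iptables.rules
iptables-restore < /rw/config/iptables.rules
```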
Reconnecting VMs after a NetVM reboot
-------------------------------------
Normally Qubes doesn't let the user stop a NetVM if there are other qubes
running which use it as their own NetVM. But in case the NetVM stops for
whatever reason (e.g. it crashes, or the user forces its shutdown via qvm-kill
via terminal in Dom0), Qubes R4.0 will often automatically repair the
connection. If it does not, then there is an easy way to restore the connection to
the NetVM by issuing:
Normally Qubes doesn't let the user stop a NetVM if there are other qubes running which use it as their own NetVM.
But in case the NetVM stops for whatever reason (e.g. it crashes, or the user forces its shutdown via qvm-kill via terminal in Dom0), Qubes R4.0 will often automatically repair the connection.
If it does not, then there is an easy way to restore the connection to the NetVM by issuing:
` qvm-prefs <vm> netvm <netvm> `
Normally qubes do not connect directly to the actual NetVM which has networking
devices, but rather to the default sys-firewall first, and in most cases it would
be the NetVM that will crash, e.g. in response to S3 sleep/restore or other
issues with WiFi drivers. In that case it is only necessary to issue the above
command once, for the sys-firewall (this assumes default VM-naming used by the
default Qubes installation):
Normally qubes do not connect directly to the actual NetVM which has networking devices, but rather to the default sys-firewall first, and in most cases it would be the NetVM that will crash, e.g. in response to S3 sleep/restore or other issues with WiFi drivers.
In that case it is only necessary to issue the above command once, for the sys-firewall (this assumes default VM-naming used by the default Qubes installation):
` qvm-prefs sys-firewall netvm sys-net `
@ -107,18 +92,14 @@ Enabling networking between two qubes
-------------------------------------
Normally any networking traffic between qubes is prohibited for security reasons.
However, in special situations, one might want to selectively allow specific qubes
to establish networking connectivity between each other. For example,
this might be useful in some development work, when one wants to test
networking code, or to allow file exchange between HVM domains (which do not
have Qubes tools installed) via SMB/scp/NFS protocols.
However, in special situations, one might want to selectively allow specific qubes to establish networking connectivity between each other.
For example, this might be useful in some development work, when one wants to test networking code, or to allow file exchange between HVM domains (which do not have Qubes tools installed) via SMB/scp/NFS protocols.
In order to allow networking between qubes A and B follow these steps:
* Make sure both A and B are connected to the same firewall vm (by default all
VMs use the same firewall VM).
* Note the Qubes IP addresses assigned to both qubes. This can be done using the
`qvm-ls -n` command, or via the Qubes Manager preferences pane for each qube.
* Make sure both A and B are connected to the same firewall vm (by default all VMs use the same firewall VM).
* Note the Qubes IP addresses assigned to both qubes.
This can be done using the `qvm-ls -n` command, or via the Qubes Manager preferences pane for each qube.
* Start both qubes, and also open a terminal in the firewall VM
* In the firewall VM's terminal enter the following iptables rule:
@ -132,19 +113,12 @@ sudo iptables -I FORWARD 2 -s <IP address of A> -d <IP address of B> -j ACCEPT
sudo iptables -I INPUT -s <IP address of A> -j ACCEPT
~~~
* Now you should be able to reach B from A -- test it using e.g. ping
issued from A. Note however, that this doesn't allow you to reach A from
B -- for this you would need two more rules, with A and B swapped.
* If everything works as expected, then the above iptables rules should be
written into firewallVM's `qubes-firewall-user-script` script which is run
on every firewall update, and A and B's `rc.local` script which is run when
the qube is launched. The `qubes-firewall-user-script` is necessary because Qubes
orders every firewallVM to update all the rules whenever a new connected qube is
started. If we didn't enter our rules into this "hook" script, then shortly
our custom rules would disappear and inter-VM networking would stop working.
Here's an example how to update the script (note that, by default, there is no
script file present, so we will probably be creating it, unless we had some other
custom rules defined earlier in this firewallVM):
* Now you should be able to reach B from A -- test it using e.g. ping issued from A.
Note however, that this doesn't allow you to reach A from B -- for this you would need two more rules, with A and B swapped.
* If everything works as expected, then the above iptables rules should be written into firewallVM's `qubes-firewall-user-script` script which is run on every firewall update, and A and B's `rc.local` script which is run when the qube is launched.
The `qubes-firewall-user-script` is necessary because Qubes orders every firewallVM to update all the rules whenever a new connected qube is started.
If we didn't enter our rules into this "hook" script, then shortly our custom rules would disappear and inter-VM networking would stop working.
Here's an example of how to update the script (note that, by default, there is no script file present, so we will probably be creating it, unless we had some other custom rules defined earlier in this firewallVM):
~~~
[user@sys-firewall ~]$ sudo bash
@ -160,14 +134,96 @@ sudo iptables -I INPUT -s <IP address of A> -j ACCEPT
[root@B user]# chmod +x /rw/config/rc.local
~~~
Opening a single TCP port to another network-isolated qube
----------------------------------------------------------
In the case where a specific TCP port needs to be exposed from one qube to another, it is not necessary to enable networking between them; one can use the qubes RPC service `qubes.ConnectTCP` instead.
**1. Simple port binding**
Consider the following example. The `mytcp-service` qube has a TCP service running on port `444`, and the `untrusted` qube needs to access this service.
* In dom0, add the following to `/etc/qubes-rpc/policy/qubes.ConnectTCP`:
~~~
untrusted @default allow,target=mytcp-service
~~~
* In untrusted, use the Qubes tool `qvm-connect-tcp`:
~~~
[user@untrusted #]$ qvm-connect-tcp 444:@default:444
~~~
> Note: The syntax is the same as for an SSH tunnel. The first `444` corresponds to the local port destination in `untrusted`, `@default` to the remote machine, and the second `444` to the remote machine's port.
The service of `mytcp-service` running on port `444` is now accessible in `untrusted` as `localhost:444`.
Here `@default` is used to hide the destination qube, which is specified in the Qubes RPC policy by `target=mytcp-service`. An equivalent call is to use the tool as follows:
~~~
[user@untrusted #]$ qvm-connect-tcp ::444
~~~
which means using the same local port in `untrusted` as the remote port; the unspecified destination qube defaults to `@default` in the `qrexec` call.
**2. Binding remote port on another local port**
Consider now the case where someone prefers to specify the destination qube and use another port in `untrusted`, for example `10444`. Instead of the previous case, add
~~~
untrusted mytcp-service allow
~~~
in `/etc/qubes-rpc/policy/qubes.ConnectTCP` and, in untrusted, use the tool as follows:
~~~
[user@untrusted #]$ qvm-connect-tcp 10444:mytcp-service:444
~~~
The service of `mytcp-service` running on port `444` is now accessible in `untrusted` as `localhost:10444`.
**3. Binding to different qubes using RPC policies**
One can go further than the previous examples by redirecting different ports to different qubes. For example, let's assume that another qube `mytcp-service-bis` with a TCP service is running on port `445`. If someone wants `untrusted` to be able to reach this service, but port `445` is reserved for `mytcp-service-bis`, then, in dom0, add the following to `/etc/qubes-rpc/policy/qubes.ConnectTCP+445`:
~~~
untrusted @default allow,target=mytcp-service-bis
~~~
In that case, calling `qvm-connect-tcp` as in the previous examples will still bind TCP port `444` of `mytcp-service` to `untrusted`, but now, calling it with port `445`
~~~
[user@untrusted #]$ qvm-connect-tcp ::445
~~~
will restrict the binding to only the corresponding TCP port of `mytcp-service-bis`.
**4. Permanent port binding**
To create a permanent port binding between two qubes, `systemd` can be used. We reuse the setup of the first example. In `/rw/config` (or any place you find suitable) of the qube `untrusted`, create `my-tcp-service.socket` with content:
~~~
[Unit]
Description=my-tcp-service
[Socket]
ListenStream=127.0.0.1:444
Accept=true
[Install]
WantedBy=sockets.target
~~~
and `my-tcp-service@.service` with content:
~~~
[Unit]
Description=my-tcp-service
[Service]
ExecStart=qrexec-client-vm '' qubes.ConnectTCP+444
StandardInput=socket
StandardOutput=inherit
~~~
In `/rw/config/rc.local`, append the lines:
~~~
cp -r /rw/config/my-tcp-service.socket /rw/config/my-tcp-service@.service /lib/systemd/system/
systemctl daemon-reload
systemctl start my-tcp-service.socket
~~~
When the qube `untrusted` has started (after a first reboot), you can directly access the service of `mytcp-service` running on port `444` as `localhost:444`.
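A quick way to sanity-check the binding from inside `untrusted`, assuming the unit names used above:

```shell
# Is the socket unit active and listening?
systemctl status my-tcp-service.socket

# Is anything bound to local port 444?
ss -tln | grep ':444'
```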
Port forwarding to a qube from the outside world
------------------------------------------------
In order to allow a service present in a qube to be exposed to the outside world
in the default setup (where the qube has sys-firewall as network VM, which in
turn has sys-net as network VM)
the following needs to be done:
In order to allow a service present in a qube to be exposed to the outside world in the default setup (where the qube has sys-firewall as network VM, which in turn has sys-net as network VM) the following needs to be done:
* In the sys-net VM:
* Route packets from the outside world to the sys-firewall VM
@ -178,9 +234,7 @@ the following needs to be done:
* In the qube:
* Allow packets through the qube firewall to reach the service
As an example we can take the use case of a web server listening on port 443
that we want to expose on our physical interface eth0, but only to our local
network 192.168.x.0/24.
As an example we can take the use case of a web server listening on port 443 that we want to expose on our physical interface eth0, but only to our local network 192.168.x.0/24.
> Note: To have all interfaces available and configured, make sure the 3 qubes are up and running
@ -188,26 +242,22 @@ network 192.168.x.0/24.
**1. Route packets from the outside world to the FirewallVM**
From a Terminal window in sys-net VM, take note of the 'Interface name' and
'IP address' on which you want to expose your service (i.e. ens5, 192.168.x.x)
From a Terminal window in the sys-net VM, take note of the 'Interface name' and 'IP address' on which you want to expose your service (e.g. ens5, 192.168.x.x)
` ifconfig | grep -i cast `
> Note: The vifx.0 interface is the one connected to your sys-firewall VM, so it is _not_ an outside world interface.
From a Terminal window in sys-firewall VM, take note of the 'IP address' for
interface Eth0 (10.137.1.x or 10.137.0.x in Qubes R4)
From a Terminal window in sys-firewall VM, take note of the 'IP address' for interface Eth0 (10.137.1.x or 10.137.0.x in Qubes R4)
` ifconfig | grep -i cast `
Back into the sys-net VM's Terminal, code a natting firewall rule to route
traffic on the outside interface for the service to the sys-firewall VM
Back into the sys-net VM's Terminal, code a natting firewall rule to route traffic on the outside interface for the service to the sys-firewall VM
` iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -d 192.168.x.x -j DNAT --to-destination 10.137.1.x `
Code the appropriate new filtering firewall rule to allow new connections for
the service
Code the appropriate new filtering firewall rule to allow new connections for the service
` iptables -I FORWARD 2 -i eth0 -d 10.137.1.x -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT `
@ -218,8 +268,7 @@ the service
`nft add rule ip qubes-firewall forward meta iifname eth0 ip daddr 10.137.0.x tcp dport 443 ct state new counter accept`
Verify you are cutting through the sys-net VM firewall by looking at its
counters (column 2)
Verify you are cutting through the sys-net VM firewall by looking at its counters (column 2)
` iptables -t nat -L -v -n `
@ -233,8 +282,7 @@ Send a test packet by trying to connect to the service from an external device
` telnet 192.168.x.x 443 `
Once you have confirmed that the counters increase, store these command in
`/rw/config/rc.local` so they get set on sys-net start-up
Once you have confirmed that the counters increase, store these commands in `/rw/config/rc.local` so they get set on sys-net start-up
` sudo nano /rw/config/rc.local `
@ -246,19 +294,19 @@ Once you have confirmed that the counters increase, store these command in
# My service routing
# Create a new firewall natting chain for my service
if iptables -t nat -N MY-HTTPS; then
if iptables -w -t nat -N MY-HTTPS; then
# Add a natting rule if it did not exist (to avoid clutter if the script is executed multiple times)
iptables -t nat -A MY-HTTPS -j DNAT --to-destination 10.137.1.x
iptables -w -t nat -A MY-HTTPS -j DNAT --to-destination 10.137.1.x
fi
# If no prerouting rule exist for my service
if ! iptables -t nat -n -L PREROUTING | grep --quiet MY-HTTPS; then
if ! iptables -w -t nat -n -L PREROUTING | grep --quiet MY-HTTPS; then
# add a natting rule for the traffic (same reason)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -d 192.168.0.x -j MY-HTTPS
iptables -w -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -d 192.168.0.x -j MY-HTTPS
fi
@ -266,18 +314,18 @@ fi
# My service filtering
# Create a new firewall filtering chain for my service
if iptables -N MY-HTTPS; then
if iptables -w -N MY-HTTPS; then
# Add a filtering rule if it did not exist (to avoid clutter if the script is executed multiple times)
iptables -A MY-HTTPS -s 192.168.x.0/24 -j ACCEPT
iptables -w -A MY-HTTPS -s 192.168.x.0/24 -j ACCEPT
fi
# If no forward rule exist for my service
if ! iptables -n -L FORWARD | grep --quiet MY-HTTPS; then
if ! iptables -w -n -L FORWARD | grep --quiet MY-HTTPS; then
# add a forward rule for the traffic (same reason)
iptables -I FORWARD 2 -d 10.137.1.x -p tcp --dport 443 -m conntrack --ctstate NEW -j MY-HTTPS
iptables -w -I FORWARD 2 -d 10.137.1.x -p tcp --dport 443 -m conntrack --ctstate NEW -j MY-HTTPS
fi
~~~
@ -303,18 +351,15 @@ Finally make this file executable, so it runs at each boot
**2. Route packets from the FirewallVM to the VM**
From a Terminal window in the VM where the service to be exposed is running, take note of the 'IP address' for
interface Eth0 (i.e. 10.137.2.y, 10.137.0.y in Qubes R4)
From a Terminal window in the VM where the service to be exposed is running, take note of the 'IP address' for interface Eth0 (e.g. 10.137.2.y, 10.137.0.y in Qubes R4)
` ifconfig | grep -i cast `
Back into the sys-firewall VM's Terminal, code a natting firewall rule to route
traffic on its outside interface for the service to the qube
Back into the sys-firewall VM's Terminal, code a natting firewall rule to route traffic on its outside interface for the service to the qube
` iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -d 10.137.1.x -j DNAT --to-destination 10.137.2.y `
Code the appropriate new filtering firewall rule to allow new connections for
the service
Code the appropriate new filtering firewall rule to allow new connections for the service
` iptables -I FORWARD 2 -i eth0 -s 192.168.x.0/24 -d 10.137.2.y -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT `
@ -337,19 +382,19 @@ Once you have confirmed that the counters increase, store these command in `/rw/
# My service routing
# Create a new firewall natting chain for my service
if iptables -t nat -N MY-HTTPS; then
if iptables -w -t nat -N MY-HTTPS; then
# Add a natting rule if it did not exist (to avoid clutter if the script is executed multiple times)
iptables -t nat -A MY-HTTPS -j DNAT --to-destination 10.137.2.y
iptables -w -t nat -A MY-HTTPS -j DNAT --to-destination 10.137.2.y
fi
# If no prerouting rule exist for my service
if ! iptables -t nat -n -L PREROUTING | grep --quiet MY-HTTPS; then
if ! iptables -w -t nat -n -L PREROUTING | grep --quiet MY-HTTPS; then
# add a natting rule for the traffic (same reason)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -d 10.137.1.x -j MY-HTTPS
iptables -w -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -d 10.137.1.x -j MY-HTTPS
fi
@ -357,18 +402,18 @@ fi
# My service filtering
# Create a new firewall filtering chain for my service
if iptables -N MY-HTTPS; then
if iptables -w -N MY-HTTPS; then
# Add a filtering rule if it did not exist (to avoid clutter if the script is executed multiple times)
iptables -A MY-HTTPS -s 192.168.x.0/24 -j ACCEPT
iptables -w -A MY-HTTPS -s 192.168.x.0/24 -j ACCEPT
fi
# If no forward rule exist for my service
if ! iptables -n -L FORWARD | grep --quiet MY-HTTPS; then
if ! iptables -w -n -L FORWARD | grep --quiet MY-HTTPS; then
# add a forward rule for the traffic (same reason)
iptables -I FORWARD 4 -d 10.137.2.y -p tcp --dport 443 -m conntrack --ctstate NEW -j MY-HTTPS
iptables -w -I FORWARD 4 -d 10.137.2.y -p tcp --dport 443 -m conntrack --ctstate NEW -j MY-HTTPS
fi
@ -392,8 +437,8 @@ sudo chmod +x /rw/config/qubes-firewall-user-script
**3. Allow packets into the qube to reach the service**
Here no routing is required, only filtering. Proceed in the same way as above
but store the filtering rule in the `/rw/config/rc.local` script.
Here no routing is required, only filtering.
Proceed in the same way as above but store the filtering rule in the `/rw/config/rc.local` script.
` sudo nano /rw/config/rc.local `
@ -402,33 +447,34 @@ but store the filtering rule in the `/rw/config/rc.local` script.
# My service filtering
# Create a new firewall filtering chain for my service
if iptables -N MY-HTTPS; then
if iptables -w -N MY-HTTPS; then
# Add a filtering rule if it did not exist (to avoid clutter if the script is executed multiple times)
iptables -A MY-HTTPS -j ACCEPT
iptables -w -A MY-HTTPS -j ACCEPT
fi
# If no forward rule exist for my service
if ! iptables -n -L FORWARD | grep --quiet MY-HTTPS; then
# If no input rule exists for my service
if ! iptables -w -n -L INPUT | grep --quiet MY-HTTPS; then
# add an input rule for the traffic (same reason)
iptables -I INPUT 5 -d 10.137.2.x -p tcp --dport 443 -m conntrack --ctstate NEW -j MY-HTTPS
iptables -w -I INPUT 5 -d 10.137.2.x -p tcp --dport 443 -m conntrack --ctstate NEW -j MY-HTTPS
fi
~~~
This time testing should allow connectivity to the service as long as the
service is up :-)
This time testing should allow connectivity to the service as long as the service is up :-)
Where to put firewall rules
---------------------------
Implicit in the above example [scripts](/doc/config-files/), but worth
calling attention to: for all qubes *except* AppVMs supplying networking,
iptables commands should be added to the `/rw/config/rc.local` script. For
AppVMs supplying networking (`sys-firewall` inclusive),
iptables commands should be added to
`/rw/config/qubes-firewall-user-script`.
Implicit in the above example [scripts](/doc/config-files/), but worth calling attention to: for all qubes *except* AppVMs supplying networking, iptables commands should be added to the `/rw/config/rc.local` script.
For AppVMs supplying networking (`sys-firewall` inclusive), iptables commands should be added to `/rw/config/qubes-firewall-user-script`.
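Note that both scripts must be executable in order to be run at qube startup; if in doubt, set the bit explicitly (the qube name in the first prompt is a placeholder):

```
[user@myqube ~]$ sudo chmod +x /rw/config/rc.local
[user@sys-firewall ~]$ sudo chmod +x /rw/config/qubes-firewall-user-script
```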
Firewall troubleshooting
------------------------
Firewall logs are stored in the systemd journal of the qube the firewall is running in (probably `sys-firewall`).
You can view them by running `sudo journalctl -u qubes-firewall.service` in the relevant qube.
Sometimes these logs can contain useful information about errors that are preventing the firewall from behaving as you would expect.
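For example, to watch new firewall messages as they arrive while you exercise your rules, you can use the standard `journalctl` follow mode:

```
[user@sys-firewall ~]$ sudo journalctl -u qubes-firewall.service -f
```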


@ -18,17 +18,10 @@ redirect_from:
## What is Split GPG and why should I use it instead of the standard GPG? ##
Split GPG implements a concept similar to having a smart card with your
private GPG keys, except that the role of the "smart card" plays another Qubes
AppVM. This way one, not-so-trusted domain, e.g. the one where Thunderbird is
running, can delegate all crypto operations, such as encryption/decryption
and signing to another, more trusted, network-isolated, domain. This way
the compromise of your domain where Thunderbird or another client app is
running -- arguably a not-so-unthinkable scenario -- does not allow the
attacker to automatically also steal all your keys. (We should make a rather
obvious comment here that the so-often-used passphrases on private keys are
pretty meaningless because the attacker can easily set up a simple backdoor
which would wait until the user enters the passphrase and steal the key then.)
Split GPG implements a concept similar to having a smart card with your private GPG keys, except that the role of the "smart card" is played by another Qubes AppVM.
This way one, not-so-trusted domain, e.g. the one where Thunderbird is running, can delegate all crypto operations, such as encryption/decryption and signing, to another, more trusted, network-isolated, domain.
This way the compromise of your domain where Thunderbird or another client app is running -- arguably a not-so-unthinkable scenario -- does not allow the attacker to automatically also steal all your keys.
(We should make a rather obvious comment here that the so-often-used passphrases on private keys are pretty meaningless because the attacker can easily set up a simple backdoor which would wait until the user enters the passphrase and steal the key then.)
The diagram below presents the big picture of Split GPG architecture.
@ -36,79 +29,57 @@ The diagram below presents the big picture of Split GPG architecture.
### Advantages of Split GPG vs. traditional GPG with a smart card ###
It is often thought that the use of smart cards for private key storage
guarantees ultimate safety. While this might be true (unless the attacker
can find a usually-very-expensive-and-requiring-physical-presence way to
extract the key from the smart card) but only with regards to the safety of
the private key itself. However, there is usually nothing that could stop
the attacker from requesting the smart card to perform decryption of all the
user documents the attacker has found or need to decrypt. In other words,
while protecting the user's private key is an important task, we should not
forget that ultimately it is the user data that are to be protected and that
the smart card chip has no way of knowing the requests to decrypt documents
are now coming from the attacker's script and not from the user sitting in
front of the monitor. (Similarly the smart card doesn't make the process
of digitally signing a document or a transaction in any way more secure --
the user cannot know what the chip is really signing. Unfortunately this
problem of signing reliability is not solvable by Split GPG)
It is often thought that the use of smart cards for private key storage guarantees ultimate safety.
This might be true (unless the attacker can find a usually-very-expensive-and-requiring-physical-presence way to extract the key from the smart card), but only with regards to the safety of the private key itself.
However, there is usually nothing that could stop the attacker from requesting the smart card to perform decryption of all the user documents the attacker has found or needs to decrypt.
In other words, while protecting the user's private key is an important task, we should not forget that ultimately it is the user data that are to be protected and that the smart card chip has no way of knowing the requests to decrypt documents are now coming from the attacker's script and not from the user sitting in front of the monitor.
(Similarly, the smart card doesn't make the process of digitally signing a document or a transaction in any way more secure -- the user cannot know what the chip is really signing.
Unfortunately, this problem of signing reliability is not solvable by Split GPG.)
With Qubes Split GPG this problem is drastically minimized, because each time
the key is to be used the user is asked for consent (with a definable time
out, 5 minutes by default), plus is always notified each time the key is used
via a tray notification from the domain where GPG backend is running. This
way it would be easy to spot unexpected requests to decrypt documents.
With Qubes Split GPG this problem is drastically minimized, because each time the key is to be used the user is asked for consent (with a definable timeout, 5 minutes by default), and is always notified each time the key is used via a tray notification from the domain where the GPG backend is running.
This way it would be easy to spot unexpected requests to decrypt documents.
![r2-split-gpg-1.png](/attachment/wiki/SplitGpg/r2-split-gpg-1.png)
![r2-split-gpg-3.png](/attachment/wiki/SplitGpg/r2-split-gpg-3.png)
### Current limitations ###
- Current implementation requires importing of public keys to the vault
domain. This opens up an avenue to attack the gpg running in the backend domain
via a hypothetical bug in public key importing code. See ticket [#474] for more
details and plans how to get around this problem, as well as the section on
[using split GPG with subkeys] below.
- Current implementation requires importing of public keys to the vault domain.
This opens up an avenue to attack the gpg running in the backend domain via a hypothetical bug in public key importing code.
See ticket [#474] for more details and plans how to get around this problem, as well as the section on [using Split GPG with subkeys] below.
- It doesn't solve the problem of allowing the user to know what is to be
signed before the operation gets approved. Perhaps the GPG backend domain
could start a DisposableVM and have the to-be-signed document displayed
there? To Be Determined.
- It doesn't solve the problem of allowing the user to know what is to be signed before the operation gets approved.
Perhaps the GPG backend domain could start a DisposableVM and have the to-be-signed document displayed there? To Be Determined.
- The Split GPG client will fail to sign or encrypt if the private key in the
GnuPG backend is protected by a passphrase. It will give an `Inappropriate ioctl
for device` error. Do not set passphrases for the private keys in the GPG
backend domain. Doing so won't provide any extra security anyway, as explained
[above][intro] and [below][using split GPG with subkeys]. If you are generating
a new key pair, or if you have a private key that already has a passphrase, you
can use `gpg2 --edit-key <key_id>` then `passwd` to set an empty passphrase.
Note that `pinentry` might show an error when you try to set an empty
passphrase, but it will still make the change. (See [this StackExchange
answer][se-pinentry] for more information.)
- The Split GPG client will fail to sign or encrypt if the private key in the GnuPG backend is protected by a passphrase.
It will give an `Inappropriate ioctl for device` error.
Do not set passphrases for the private keys in the GPG backend domain.
Doing so won't provide any extra security anyway, as explained [above][intro] and [below][using Split GPG with subkeys].
If you are generating a new key pair, or if you have a private key that already has a passphrase, you can use `gpg2 --edit-key <key_id>` then `passwd` to set an empty passphrase.
Note that `pinentry` might show an error when you try to set an empty passphrase, but it will still make the change.
(See [this StackExchange answer][se-pinentry] for more information.)
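For reference, a hypothetical session removing the passphrase from the example key id used later in this guide (run in the GPG backend domain):

```
[user@work-gpg ~]$ gpg2 --edit-key DD160C74
gpg> passwd
(enter the old passphrase, then leave the new passphrase empty)
gpg> save
```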
## Configuring Split GPG ##
In dom0, make sure the `qubes-gpg-split-dom0` package is installed.
[user@dom0 ~]$ sudo qubes-dom0-update qubes-gpg-split-dom0
If using templates based on Debian or Whonix, make sure you have the `qubes-gpg-split`
package installed.
If using templates based on Debian or Whonix, make sure you have the `qubes-gpg-split` package installed.
[user@debian-8 ~]$ sudo apt install qubes-gpg-split
For Fedora:
[user@fedora-25 ~]$ sudo dnf install qubes-gpg-split
Start with creating a dedicated AppVM for storing your keys (the GPG backend
domain). It is recommended that this domain be network disconnected (set its
netvm to `none`) and only used for this one purpose. In later examples this
AppVM is named `work-gpg`, but of course it might have any other name.
Start with creating a dedicated AppVM for storing your keys (the GPG backend domain).
It is recommended that this domain be network disconnected (set its netvm to `none`) and only used for this one purpose.
In later examples this AppVM is named `work-gpg`, but of course it might have any other name.
### Setting up the GPG backend domain ###
Make sure that gpg is installed there, and there are some private keys in the
keyring, e.g.:
Make sure that gpg is installed there, and there are some private keys in the keyring, e.g.:
[user@work-gpg ~]$ gpg -K
/home/user/.gnupg/secring.gpg
@ -118,26 +89,23 @@ keyring, e.g.:
ssb 4096R/30498E2A 2012-11-15
(...)
This is pretty much all that is required.
However, you might want to modify the default timeout: this tells the backend for how long the user's approval for key access should be valid.
(The default is 5 minutes.)
You can change this via the `QUBES_GPG_AUTOACCEPT` variable.
This is pretty much all that is required.
However, you might want to modify the default timeout: this tells the backend for how long the user's approval for key access should be valid.
(The default is 5 minutes.) You can change this via the `QUBES_GPG_AUTOACCEPT` variable.
You can override it e.g. in `~/.profile`:
[user@work-gpg ~]$ echo "export QUBES_GPG_AUTOACCEPT=86400" >> ~/.profile
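The timeout is expressed in seconds, so the `86400` above keeps an approval valid for one full day; a quick sketch of the arithmetic (pick whatever value suits your threat model):

```shell
# QUBES_GPG_AUTOACCEPT is a number of seconds; 86400 = 24 hours
seconds_per_day=$((24 * 60 * 60))
echo "$seconds_per_day"   # prints 86400
```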
Please note that at one time, this parameter was set in `~/.bash_profile`.
This will no longer work.
If you have the parameter set in `~/.bash_profile`, you *must* update your configuration.
Please be aware of the caveat regarding passphrase-protected keys in the
[Current limitations][current-limitations] section.
Please be aware of the caveat regarding passphrase-protected keys in the [Current limitations][current-limitations] section.
### Configuring the client apps to use Split GPG backend ###
Normally it should be enough to set the `QUBES_GPG_DOMAIN` to the GPG backend
domain name and use `qubes-gpg-client` in place of `gpg`, e.g.:
Normally it should be enough to set the `QUBES_GPG_DOMAIN` to the GPG backend domain name and use `qubes-gpg-client` in place of `gpg`, e.g.:
[user@work ~]$ export QUBES_GPG_DOMAIN=work-gpg
[user@work ~]$ gpg -K
@ -148,57 +116,46 @@ domain name and use `qubes-gpg-client` in place of `gpg`, e.g.:
uid Qubes OS Security Team <security@qubes-os.org>
ssb 4096R/30498E2A 2012-11-15
(...)
[user@work ~]$ qubes-gpg-client secret_message.txt.asc
[user@work ~]$ qubes-gpg-client secret_message.txt.asc
(...)
Note that running normal `gpg -K` in the demo above shows no private keys
stored in this AppVM.
Note that running normal `gpg -K` in the demo above shows no private keys stored in this AppVM.
A note on `gpg` and `gpg2`:
Throughout this guide, we refer to `gpg`, but note that Split-GPG uses `gpg2`
under the hood for compatibility with programs like Enigmail (which now supports
only `gpg2`). If you encounter trouble while trying to set up Split-GPG, make
sure you're using `gpg2` for your configuration and testing, since keyring data
may differ between the two installations.
Throughout this guide, we refer to `gpg`, but note that Split GPG uses `gpg2` under the hood for compatibility with programs like Enigmail (which now supports only `gpg2`).
If you encounter trouble while trying to set up Split GPG, make sure you're using `gpg2` for your configuration and testing, since keyring data may differ between the two installations.
### Using Thunderbird + Enigmail with Split GPG ###
### Advanced Configuration ###
However, when using Thunderbird with Enigmail extension it is
not enough, because Thunderbird doesn't preserve the environment
variables. Instead it is recommended to use a simple script provided by
`/usr/bin/qubes-gpg-client-wrapper` file by pointing Enigmail to use this
script instead of the standard GnuPG binary:
![tb-enigmail-split-gpg-settings-2.png](/attachment/wiki/SplitGpg/tb-enigmail-split-gpg-settings-2.png)
The script also sets the `QUBES_GPG_DOMAIN` variable automatically based on
the content of the file `/rw/config/gpg-split-domain`, which should be set to
the name of the GPG backend VM. This file survives the AppVM reboot, of course.
The `qubes-gpg-client-wrapper` script sets the `QUBES_GPG_DOMAIN` variable automatically based on the content of the file `/rw/config/gpg-split-domain`, which should be set to the name of the GPG backend VM.
This file survives the AppVM reboot, of course.
[user@work ~]$ sudo bash
[root@work ~]$ echo "work-gpg" > /rw/config/gpg-split-domain
#### Qubes 4.0 Specifics ####
Split GPG's default qrexec policy requires the user to enter the name of the AppVM containing GPG keys on each invocation.
To improve usability for applications like Thunderbird with Enigmail, in `dom0` place the following line at the top of the file `/etc/qubes-rpc/policy/qubes.Gpg`:
New qrexec policies in Qubes R4.0 by default require the user to enter the name
of the domain containing GPG keys each time it is accessed. To improve usability
for Thunderbird+Enigmail, in `dom0` place the following line at the top of the file
`/etc/qubes-rpc/policy/qubes.Gpg`:
work-email work-gpg allow
```
work-email work-gpg allow
```
where `work-email` is the Thunderbird+Enigmail AppVM and `work-gpg` contains
your GPG keys.
where `work-email` is the Thunderbird + Enigmail AppVM and `work-gpg` contains your GPG keys.
You may also edit the qrexec policy file for Split GPG in order to tell Qubes your default GPG VM (qrexec prompts will appear with the GPG VM preselected as the target, instead of the user needing to type a name in manually).
To do this, append `,default_target=<vmname>` to `ask` in `/etc/qubes-rpc/policy/qubes.Gpg`.
For the examples given on this page:
@anyvm @anyvm ask,default_target=work-gpg
Note that, because this makes it easier to accept Split GPG's qrexec authorization prompts, it may decrease security if the user is not careful in reviewing presented prompts.
This may also be inadvisable if there are multiple AppVMs with Split GPG set up.
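Putting the two tweaks together, a hypothetical `/etc/qubes-rpc/policy/qubes.Gpg` for the example qubes on this page might read:

```
work-email work-gpg allow
@anyvm @anyvm ask,default_target=work-gpg
```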
### Using Thunderbird + Enigmail with Split GPG ###
It is recommended to set up and use `/usr/bin/qubes-gpg-client-wrapper`, as discussed above, by pointing Enigmail at this script instead of the standard GnuPG binary:
![tb-enigmail-split-gpg-settings-2.png](/attachment/wiki/SplitGpg/tb-enigmail-split-gpg-settings-2.png)
## Using Git with Split GPG ##
Git can be configured to used with Split-GPG, something useful if you would
like to contribute to the Qubes OS Project as every commit is required to be
signed. The most basic `~/.gitconfig` file to with working Split-GPG looks
something like this.
Git can be configured to be used with Split GPG, which is useful if you would like to contribute to the Qubes OS Project, as every commit is required to be signed.
The most basic `~/.gitconfig` file for working Split GPG looks something like this:
[user]
name = YOUR NAME
@ -208,39 +165,34 @@ something like this.
[gpg]
program = qubes-gpg-client-wrapper
Your key id is the public id of your signing key, which can be found by running
`qubes-gpg-client -k`. In this instance, the key id is DD160C74.
Your key id is the public id of your signing key, which can be found by running `qubes-gpg-client -k`.
In this instance, the key id is DD160C74.
[user@work ~]$ qubes-gpg-client -k
/home/user/.gnupg/pubring.kbx
-----------------------------
-----------------------------
pub rsa4096/DD160C74 2016-04-26
uid Qubes User
To sign commits, you now add the "-S" flag to your commit command, which should
prompt for Split-GPG usage. If you would like automatically sign all commits,
you can add the following snippet to `~/.gitconfig`.
To sign commits, you now add the "-S" flag to your commit command, which should prompt for Split GPG usage.
If you would like to automatically sign all commits, you can add the following snippet to `~/.gitconfig`.
[commit]
gpgsign = true
Lastly, if you would like to add aliases to sign and verify tags using the
conventions the Qubes OS Project recommends, you can add the following snippet
to `~/.gitconfig`.
Lastly, if you would like to add aliases to sign and verify tags using the conventions the Qubes OS Project recommends, you can add the following snippet to `~/.gitconfig`.
[alias]
stag = "!id=`git rev-parse --verify HEAD`; git tag -s user_${id:0:8} -m \"Tag for commit $id\""
vtag = !git tag -v `git describe`
Replace `user` with your short, unique nickname. Now you can use `git stag` to
add a signed tag to a commit and `git vtag` to verify the most recent tag that
is reachable from a commit.
Replace `user` with your short, unique nickname.
Now you can use `git stag` to add a signed tag to a commit and `git vtag` to verify the most recent tag that is reachable from a commit.
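To see how the `stag` alias derives the tag name, here is a small sketch with a made-up commit id (the real alias takes the id from `git rev-parse --verify HEAD`; the `cut` call is equivalent to the `${id:0:8}` expansion in the alias):

```shell
# first 8 characters of the commit id, prefixed with the nickname
id=0123456789abcdef0123456789abcdef01234567   # made-up commit id
user=user                                     # your short, unique nickname
tag="${user}_$(printf %s "$id" | cut -c1-8)"
echo "$tag"   # prints user_01234567
```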
## Importing public keys ##
Use `qubes-gpg-import-key` in the client AppVM to import the key into the
GPG backend VM. Of course a (safe, unspoofable) user consent dialog box is
displayed to accept this.
Use `qubes-gpg-import-key` in the client AppVM to import the key into the GPG backend VM.
Of course a (safe, unspoofable) user consent dialog box is displayed to accept this.
[user@work ~]$ export QUBES_GPG_DOMAIN=work-gpg
[user@work ~]$ qubes-gpg-import-key ~/Downloads/marmarek.asc
@ -252,17 +204,13 @@ displayed to accept this.
## Advanced: Using Split GPG with Subkeys ##
Users with particularly high security requirements may wish to use Split
GPG with [subkeys]. However, this setup
comes at a significant cost: It will be impossible to sign other people's keys
with the master secret key without breaking this security model. Nonetheless,
if signing others' keys is not required, then Split GPG with subkeys offers
unparalleled security for one's master secret key.
Users with particularly high security requirements may wish to use Split GPG with [subkeys].
However, this setup comes at a significant cost: It will be impossible to sign other people's keys with the master secret key without breaking this security model.
Nonetheless, if signing others' keys is not required, then Split GPG with subkeys offers unparalleled security for one's master secret key.
### Setup Description ###
In this example, the following keys are stored in the following locations
(see below for definitions of these terms):
In this example, the following keys are stored in the following locations (see below for definitions of these terms):
| PGP Key(s) | VM Name |
| ---------- | ------------ |
@ -274,123 +222,79 @@ In this example, the following keys are stored in the following locations
* `sec` (master secret key)
Depending on your needs, you may wish to create this as a **certify-only
(C)** key, i.e., a key which is capable only of signing (a.k.a.,
"certifying") other keys. This key may be created *without* an expiration
date. This is for two reasons. First, the master secret key is never to
leave the `vault` VM, so it is extremely unlikely ever to be obtained by
an adversary (see below). Second, an adversary who *does* manage to obtain
the master secret key either possesses the passphrase to unlock the key
(if one is used) or does not. An adversary who *does* possess the passphrase
can simply use it to legally extend the expiration date of the key
(or remove it entirely). An adversary who does *not* possess the passphrase
cannot use the key at all. In either case, an expiration date provides no
additional benefit.
Depending on your needs, you may wish to create this as a **certify-only (C)** key, i.e., a key which is capable only of signing (a.k.a., "certifying") other keys.
This key may be created *without* an expiration date.
This is for two reasons.
First, the master secret key is never to leave the `vault` VM, so it is extremely unlikely ever to be obtained by an adversary (see below).
Second, an adversary who *does* manage to obtain the master secret key either possesses the passphrase to unlock the key (if one is used) or does not.
An adversary who *does* possess the passphrase can simply use it to legally extend the expiration date of the key (or remove it entirely).
An adversary who does *not* possess the passphrase cannot use the key at all.
In either case, an expiration date provides no additional benefit.
By the same token, however, having a passphrase on the key is of little
value. An adversary who is capable of stealing the key from your `vault`
would almost certainly also be capable of stealing the passphrase as
you enter it. An adversary who obtains the passphrase can then use it
in order to change or remove the passphrase from the key. Therefore,
using a passphrase at all should be considered optional. It is, however,
recommended that a **revocation certificate** be created and safely stored
in multiple locations so that the master keypair can be revoked in the
(exceedingly unlikely) event that it is ever compromised.
By the same token, however, having a passphrase on the key is of little value.
An adversary who is capable of stealing the key from your `vault` would almost certainly also be capable of stealing the passphrase as you enter it.
An adversary who obtains the passphrase can then use it in order to change or remove the passphrase from the key.
Therefore, using a passphrase at all should be considered optional.
It is, however, recommended that a **revocation certificate** be created and safely stored in multiple locations so that the master keypair can be revoked in the (exceedingly unlikely) event that it is ever compromised.
* `ssb` (secret subkey)
Depending on your needs, you may wish to create two different subkeys: one
for **signing (S)** and one for **encryption (E)**. You may also wish to
give these subkeys reasonable expiration dates (e.g., one year). Once these
keys expire, it is up to you whether to *renew* these keys by extending the
expiration dates or to create *new* subkeys when the existing set expires.
Depending on your needs, you may wish to create two different subkeys: one for **signing (S)** and one for **encryption (E)**.
You may also wish to give these subkeys reasonable expiration dates (e.g., one year).
Once these keys expire, it is up to you whether to *renew* these keys by extending the expiration dates or to create *new* subkeys when the existing set expires.
On the one hand, an adversary who obtains any existing encryption subkey
(for example) will be able to use it in order to decrypt all emails (for
example) which were encrypted to that subkey. If the same subkey were to
continue to be used--and its expiration date continually extended--only
that one key would need to be stolen (e.g., as a result of the `work-gpg`
VM being compromised; see below) in order to decrypt *all* of the user's
emails. If, on the other hand, each encryption subkey is used for at most
approximately one year, then an adversary who obtains the secret subkey will
be capable of decrypting at most approximately one year's worth of emails.
On the one hand, an adversary who obtains any existing encryption subkey (for example) will be able to use it in order to decrypt all emails (for example) which were encrypted to that subkey.
If the same subkey were to continue to be used--and its expiration date continually extended--only that one key would need to be stolen (e.g., as a result of the `work-gpg` VM being compromised; see below) in order to decrypt *all* of the user's emails.
If, on the other hand, each encryption subkey is used for at most approximately one year, then an adversary who obtains the secret subkey will be capable of decrypting at most approximately one year's worth of emails.
On the other hand, creating a new signing subkey each year without
renewing (i.e., extending the expiration dates of) existing signing
subkeys would mean that all of your old signatures would eventually
read as "EXPIRED" whenever someone attempts to verify them. This can be
problematic, since there is no consensus on how expired signatures should
be handled. Generally, digital signatures are intended to last forever,
so this is a strong reason against regularly retiring one's signing subkeys.
On the other hand, creating a new signing subkey each year without renewing (i.e., extending the expiration dates of) existing signing subkeys would mean that all of your old signatures would eventually read as "EXPIRED" whenever someone attempts to verify them.
This can be problematic, since there is no consensus on how expired signatures should be handled.
Generally, digital signatures are intended to last forever, so this is a strong reason against regularly retiring one's signing subkeys.
* `pub` (public key)
This is the complement of the master secret key. It can be uploaded to
keyservers (or otherwise publicly distributed) and may be signed by others.
This is the complement of the master secret key.
It can be uploaded to keyservers (or otherwise publicly distributed) and may be signed by others.
* `vault`
This is a network-isolated VM. The initial master keypair and subkeys are
generated in this VM. The master secret key *never* leaves this VM under
*any* circumstances. No files or text is *ever* [copied] or [pasted] into
this VM under *any* circumstances.
This is a network-isolated VM.
The initial master keypair and subkeys are generated in this VM.
The master secret key *never* leaves this VM under *any* circumstances.
No files or text is *ever* [copied] or [pasted] into this VM under *any* circumstances.
* `work-gpg`
This is a network-isolated VM. This VM is used *only* as the GPG backend for
`work-email`. The secret subkeys (but *not* the master secret key) are
[copied] from the `vault` VM to this VM. Files from less trusted VMs are
*never* [copied] into this VM under *any* circumstances.
This is a network-isolated VM.
This VM is used *only* as the GPG backend for `work-email`.
The secret subkeys (but *not* the master secret key) are [copied] from the `vault` VM to this VM.
Files from less trusted VMs are *never* [copied] into this VM under *any* circumstances.
* `work-email`
This VM has access to the mail server. It accesses the `work-gpg` VM via
the Split GPG protocol. The public key may be stored in this VM so that
it can be attached to emails and for other such purposes.
This VM has access to the mail server.
It accesses the `work-gpg` VM via the Split GPG protocol.
The public key may be stored in this VM so that it can be attached to emails and for other such purposes.
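One possible sketch of moving the subkeys (assuming the example key id used elsewhere in this guide; `--export-secret-subkeys` exports the secret subkeys while leaving the master secret key out):

```
[user@vault ~]$ gpg2 --export-secret-subkeys --armor DD160C74 > subkeys.asc
[user@vault ~]$ qvm-copy-to-vm work-gpg subkeys.asc
```

In `work-gpg`, import the transferred file with `gpg2 --import` and check with `gpg2 -K` that the master key is shown as an offline stub (`sec#`) rather than a usable secret key.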
### Security Benefits ###
In the standard Split GPG setup, there are at least two ways in
which the `work-gpg` VM might be compromised. First, an attacker
who is capable of exploiting a hypothetical bug in `work-email`'s
[MUA] could gain control of
the `work-email` VM and send a malformed request which exploits a hypothetical
bug in the GPG backend (running in the `work-gpg` VM), giving the attacker
control of the `work-gpg` VM. Second, a malicious public key file which is
imported into the `work-gpg` VM might exploit a hypothetical bug in the GPG
backend which is running there, again giving the attacker control of the
`work-gpg` VM. In either case, such an attacker might then be able to leak
both the master secret key and its passphrase (if any is used, it would
regularly be input in the work-gpg VM and therefore easily obtained by an
attacker who controls this VM) back to the `work-email` VM or to another VM
(e.g., the `netvm`, which is always untrusted by default) via the Split GPG
protocol or other [covert channels]. Once the master secret
key is in the `work-email` VM, the attacker could simply email it to himself
(or to the world).
In the standard Split GPG setup, there are at least two ways in which the `work-gpg` VM might be compromised.
First, an attacker who is capable of exploiting a hypothetical bug in `work-email`'s [MUA] could gain control of the `work-email` VM and send a malformed request which exploits a hypothetical bug in the GPG backend (running in the `work-gpg` VM), giving the attacker control of the `work-gpg` VM.
Second, a malicious public key file which is imported into the `work-gpg` VM might exploit a hypothetical bug in the GPG backend which is running there, again giving the attacker control of the `work-gpg` VM.
In either case, such an attacker might then be able to leak both the master secret key and its passphrase (if any is used, it would regularly be input in the work-gpg VM and therefore easily obtained by an attacker who controls this VM) back to the `work-email` VM or to another VM (e.g., the `netvm`, which is always untrusted by default) via the Split GPG protocol or other [covert channels].
Once the master secret key is in the `work-email` VM, the attacker could simply email it to himself (or to the world).
In the alternative setup described in this section (i.e., the subkey setup), even an attacker who manages to gain access to the `work-gpg` VM will not be able to obtain the user's master secret key since it is simply not there.
Rather, the master secret key remains in the `vault` VM, which is extremely unlikely to be compromised, since nothing is ever copied or transferred into it.<sup>\*</sup>
The attacker might nonetheless be able to leak the secret subkeys from the `work-gpg` VM in the manner described above, but even if this is successful, the secure master secret key can simply be used to revoke the compromised subkeys and to issue new subkeys in their place.
(This is significantly less devastating than having to create a new *master* keypair.)
<sup>\*</sup>In order to gain access to the `vault` VM, the attacker would require the use of, e.g., a general Xen VM escape exploit or a [signed, compromised package which is already installed in the TemplateVM][trusting-templates] upon which the `vault` VM is based.
### Subkey Tutorials and Discussions ###
(Note: Although the tutorials below were not written with Qubes Split GPG in mind, they can be adapted with a few commonsense adjustments.
As always, exercise caution and use your good judgment.)
- ["OpenPGP in Qubes OS" on the qubes-users mailing list][openpgp-in-qubes-os]
- ["Creating the Perfect GPG Keypair" by Alex Cabal][cabal]
@ -399,7 +303,7 @@ exercise caution and use your good judgment.)
[#474]: https://github.com/QubesOS/qubes-issues/issues/474
[using split GPG with subkeys]: #advanced-using-split-gpg-with-subkeys
[using Split GPG with subkeys]: #advanced-using-split-gpg-with-subkeys
[intro]: #what-is-split-gpg-and-why-should-i-use-it-instead-of-the-standard-gpg
[se-pinentry]: https://unix.stackexchange.com/a/379373
[subkeys]: https://wiki.debian.org/Subkeys
@ -407,10 +311,9 @@ exercise caution and use your good judgment.)
[pasted]: /doc/copy-paste#on-copypaste-security
[MUA]: https://en.wikipedia.org/wiki/Mail_user_agent
[covert channels]: /doc/data-leaks
[trusting-templates]: /doc/templates/#trusting-your-templatevms
[openpgp-in-qubes-os]: https://groups.google.com/d/topic/qubes-users/Kwfuern-R2U/discussion
[cabal]: https://alexcabal.com/creating-the-perfect-gpg-keypair/
[luck]: https://gist.github.com/abeluck/3383449
[apapadop]: https://apapadop.wordpress.com/2013/08/21/using-gnupg-with-qubesos/
[current-limitations]: #current-limitations
@ -6,115 +6,68 @@ permalink: /doc/u2f-proxy/
# The Qubes U2F Proxy
The [Qubes U2F Proxy] is a secure proxy intended to make use of U2F two-factor authentication devices with web browsers without exposing the browser to the full USB stack, not unlike the [USB keyboard and mouse proxies][USB] implemented in Qubes.
## What is U2F?
[U2F], which stands for "Universal 2nd Factor", is a framework for authentication using hardware devices (U2F tokens) as "second factors", i.e. *what you have* as opposed to *what you know*, like a passphrase.
This additional control provides [good protection][krebs] in cases in which the passphrase is stolen (e.g. by phishing or keylogging).
While passphrase compromise may not be obvious to the user, a physical device that cannot be duplicated must be stolen to be used outside of the owner's control.
Nonetheless, it is important to note at the outset that U2F cannot guarantee security when the host system is compromised (e.g. a malware-infected operating system under an adversary's control).
The U2F specification defines protocols for multiple layers from USB to the browser API, and the whole stack is intended to be used with web applications (most commonly websites) in browsers.
In most cases, tokens are USB dongles.
The protocol is very simple, allowing the devices to store very little state inside (so the tokens may be reasonably cheap) while simultaneously authenticating a virtually unlimited number of services (so each person needs only one token, not one token per application).
The user interface is usually limited to a single LED and a button that is pressed to confirm each transaction, so the devices themselves are also easy to use.
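The "very little state, unlimited services" property is typically achieved by deriving (or wrapping) each service's key from a single on-device master secret instead of storing one key per service. Below is a minimal sketch of that idea in Python; the secret, names, and inputs are invented for illustration, and real U2F key handles additionally involve key wrapping and per-registration nonces.

```python
import hashlib
import hmac

# Illustrative only: a device-unique secret burned in at manufacture
MASTER_SECRET = b"device-unique-secret"

def derive_service_key(master_secret: bytes, app_id: str) -> bytes:
    """Recompute a per-service key on demand instead of storing one per service."""
    return hmac.new(master_secret, app_id.encode(), hashlib.sha256).digest()

# The token can serve any number of services while storing only MASTER_SECRET:
k1 = derive_service_key(MASTER_SECRET, "https://example.com")
k2 = derive_service_key(MASTER_SECRET, "https://bank.example")
assert k1 != k2                                                # keys are per-service
assert derive_service_key(MASTER_SECRET, "https://example.com") == k1  # and reproducible
```

Because the per-service key is recomputed from the master secret on each request, the token's storage cost stays constant no matter how many services are enrolled.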
Currently, the most common form of two-step authentication consists of a numeric code that the user manually types into a web application.
These codes are typically generated by an app on the user's smartphone or sent via SMS.
By now, it is well-known that this form of two-step authentication is vulnerable to phishing and man-in-the-middle attacks due to the fact that the application requesting the two-step authentication code is typically not itself authenticated by the user.
(In other words, users can accidentally give their codes to attackers because they do not always know who is really requesting the code.)
In the U2F model, by contrast, the browser ensures that the token receives valid information about the web application requesting authentication, so the token knows which application it is authenticating (for details, see [here][u2f-details]).
Nonetheless, [some attacks are still possible][wired] even with U2F (more on this below).
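The browser-side check described above can be pictured as a guard that refuses to forward a signing request whose requesting origin does not match the application the key was enrolled for. This is only a toy model; the table, names, and key-handle scheme are invented, and real U2F performs appId/facet validation in the browser.

```python
# key handle -> application identity recorded at enrollment (invented example data)
registered = {"key-handle-1": "https://bank.example"}

def forward_to_token(key_handle: str, requesting_origin: str) -> bool:
    """Only forward a signing request if the origin matches the enrolled application."""
    return registered.get(key_handle) == requesting_origin

assert forward_to_token("key-handle-1", "https://bank.example")      # legitimate site
assert not forward_to_token("key-handle-1", "https://phish.example") # phishing site refused
```

A WebUSB-style bypass works precisely by talking to the token without going through this guard, which is what the Qubes policy layer described below is designed to block.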
## The Qubes approach to U2F
In a conventional setup, web browsers and the USB stack (to which the U2F token is connected) are all running in the same monolithic OS.
Since the U2F model assumes that the browser is trustworthy, any browser in the OS is able to access any key stored on the U2F token.
The user has no way to know which keys have been accessed by which browsers for which services.
If any of the browsers are compromised, it should be assumed that all of the token's keys have been compromised.
(This problem can be mitigated, however, if the U2F device has a special display to show the user what's being authenticated.)
Moreover, since the USB stack is in the same monolithic OS, the system is vulnerable to attacks like [BadUSB].
In Qubes OS, by contrast, it is possible to securely compartmentalise the browser in one qube and the USB stack in another so that they are always kept separate from each other.
The Qubes U2F Proxy then allows the token connected to the USB stack in one qube to communicate with the browser in a separate qube.
We operate under the assumption that the USB stack is untrusted from the point of view of the browser and also that the browser is not to be trusted blindly by the token.
Therefore, the token is never in the same qube as the browser.
Our proxy forwards only the data necessary to actually perform the authentication, leaving all unnecessary data out, so it won't become a vector of attack.
This is depicted in the diagram below (click for full size).
[![Qubes U2F Proxy diagram](/attachment/wiki/posts/u2f.svg)](/attachment/wiki/posts/u2f.svg)
The Qubes U2F Proxy has two parts: the frontend and the backend.
The frontend runs in the same qube as the browser and presents a fake USB-like HID device using `uhid`.
The backend runs in `sys-usb` and behaves like a browser.
This is done using the `u2flib_host` reference library.
All of our code was written in Python.
The standard [qrexec] policy is responsible for directing calls to the appropriate domains.
The `vault` qube with a dashed line in the bottom portion of the diagram depicts future work in which we plan to implement the Qubes U2F Proxy with a software token in an isolated qube rather than a physical hardware token.
This is similar to the manner in which [Split GPG] allows us to emulate the smart card model without physical smart cards.
One very important assumption of U2F is that the browser verifies every request sent to the U2F token --- in particular, that the web application sending an authentication request matches the application that would be authenticated by answering that request (in order to prevent, e.g., a phishing site from sending an authentication request for your bank's site).
With the WebUSB feature in Chrome, however, a malicious website can [bypass][wired] this safeguard by connecting directly to the token instead of using the browser's U2F API.
The Qubes U2F Proxy also prevents this class of attacks by implementing an additional verification layer.
This verification layer allows you to enforce, for example, that the web browser in your `twitter` qube can only access the U2F key associated with `https://twitter.com`.
This means that if anything in your `twitter` qube were compromised --- the browser or even the OS itself --- it would still not be able to access the U2F keys on your token for any other websites or services, like your email and bank accounts.
This is another significant security advantage over monolithic systems.
(For details and instructions, see the [Advanced usage] section below.)
For even more protection, you can combine this with the [Qubes firewall] to ensure, for example, that the browser in your `banking` qube accesses only one website (your bank's website).
By configuring the Qubes firewall to prevent your `banking` qube from accessing any other websites, you reduce the risk of another website compromising the browser in an attempt to bypass U2F authentication.
## Installation
These instructions assume that there is a `sys-usb` qube that holds the USB stack, which is the default configuration in most Qubes OS installations.
In dom0:
@ -135,59 +88,41 @@ In Debian TemplateVMs:
$ sudo apt install qubes-u2f
```
Repeat `qvm-service --enable` (or do this in VM settings -> Services in the Qube Manager) for all qubes that should have the proxy enabled.
As usual with software updates, shut down the templates after installation, then restart `sys-usb` and all qubes that use the proxy.
After that, you may use your U2F token (but see [Browser support] below).
## Advanced usage: per-qube key access
If you are using Qubes 4.0, you can further compartmentalise your U2F keys by restricting each qube's access to specific keys.
For example, you could make it so that your `twitter` qube (and, therefore, all web browsers in your `twitter` qube) can access only the key on your U2F token for `https://twitter.com`, regardless of whether any of the web browsers in your `twitter` qube or the `twitter` qube itself are compromised.
If your `twitter` qube makes an authentication request for your bank website, it will be denied at the Qubes policy level.
To enable this, create a file in dom0 named `/etc/qubes-rpc/policy/policy.RegisterArgument+u2f.Authenticate` with the following content:
```
sys-usb @anyvm allow,target=dom0
```
Next, empty the contents of `/etc/qubes-rpc/policy/u2f.Authenticate` so that it is a blank file.
Do not delete the file itself.
(If you do, the default file will be recreated the next time you update, so it will no longer be empty.)
Finally, follow your web application's instructions to enroll your token and use it as usual.
(This enrollment process depends on the web application and is in no way specific to Qubes U2F.)
The default model is to allow a qube to access all and only the keys that were enrolled by that qube.
For example, if your `banking` qube enrolls your banking key, and your `twitter` qube enrolls your Twitter key, then your `banking` qube will have access to your banking key but not your Twitter key, and your `twitter` qube will have access to your Twitter key but not your banking key.
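The enrollment model above can be pictured as a set of (qube, service) pairs consulted on every authentication request. This is only a conceptual sketch of the access rule, not how the qrexec argument policy is actually implemented, and the qube and site names are examples.

```python
# (qube, service) pairs recorded when each qube enrolls its own key
enrollments = {
    ("banking", "https://bank.example"),
    ("twitter", "https://twitter.com"),
}

def allowed(qube: str, site: str) -> bool:
    # A qube may use exactly the keys it enrolled itself
    return (qube, site) in enrollments

assert allowed("banking", "https://bank.example")
assert not allowed("twitter", "https://bank.example")  # denied at the policy level
```

Under this rule, compromising one qube exposes only the keys that qube enrolled, never the keys enrolled by any other qube.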
## TemplateVM and browser support
The large number of possible combinations of TemplateVM (Fedora 27, 28; Debian 8, 9) and browser (multiple Google Chrome versions, multiple Chromium versions, multiple Firefox versions) made it impractical for us to test every combination that users are likely to attempt with the Qubes U2F Proxy.
In some cases, you may be the first person to try a particular combination.
Consequently (and as with any new feature), users will inevitably encounter bugs.
We ask for your patience and understanding in this regard.
As always, please [report any bugs you encounter].
Please note that, in Firefox before Quantum (e.g. Firefox 52 in Debian 9), you have to install the [U2F Support Add-on][ff-u2f-addon].
In Firefox post-Quantum you may have to enable the `security.webauth.u2f` flag in `about:config`.
Chrome and Chromium do not require any special browser extensions.
[Qubes U2F Proxy]: https://github.com/QubesOS/qubes-app-u2f
@ -85,7 +85,9 @@ Below is a complete list of configuration made according to the above statement,
- NetworkManager configuration from normal user (nm-applet)
- updates installation (gpk-update-viewer)
- user can use pkexec just like sudo.
Note: the above is needed mostly because the Qubes user GUI session isn't treated by PolicyKit/logind as a "local" session, due to the way in which the X server and session are started.
Perhaps we will address this issue in the future, but this is really low priority.
Patches are welcome anyway.
3. Empty root password
- used for access to 'root' account from text console (xl console) - the only way to access the VM when GUI isn't working
@ -94,17 +96,16 @@ Below is a complete list of configuration made according to the above statement,
Replacing passwordless root access with Dom0 user prompt
--------------------------------------------------------
While ITL supports the statement above, some Qubes users may wish to enable user/root isolation in VMs anyway.
We do not support it in any of our packages, but of course nothing is preventing the user from modifying his or her own system.
A list of steps to do so is provided here **without any guarantee of safety, accuracy, or completeness.
Proceed at your own risk.
Do not rely on this for extra security.**
1. Adding Dom0 "VMAuth" service:
[root@dom0 /]# echo "/usr/bin/echo 1" >/etc/qubes-rpc/qubes.VMAuth
[root@dom0 /]# echo "@anyvm dom0 ask,default_target=dom0" \
>/etc/qubes-rpc/policy/qubes.VMAuth
(Note: any VMs you would like still to have passwordless root access (e.g. TemplateVMs) can be specified in the second file with "\<vmname\> dom0 allow")
@ -116,7 +117,8 @@ this for extra security.**
auth requisite pam_deny.so
auth required pam_permit.so
- Require authentication for sudo.
Replace the first line of /etc/sudoers.d/qubes with:
user ALL=(ALL) ALL
@ -132,7 +134,8 @@ this for extra security.**
auth requisite pam_deny.so
auth required pam_permit.so
- Require authentication for sudo.
Replace the first line of /etc/sudoers.d/qubes with:
user ALL=(ALL) ALL
@ -156,4 +159,5 @@ this for extra security.**
Dom0 passwordless root access
-----------------------------
There is also passwordless user->root access in dom0.
As stated in the comment in the sudo configuration there (a different one than in the VMs), there is really no point in user/root isolation, because all the user data (and the VM management interface) is already accessible from the dom0 user level, so there is nothing more to gain from the dom0 root account.
@ -10,33 +10,28 @@ redirect_from:
Using YubiKey for Qubes authentication
=====================================
You can use a YubiKey to enhance Qubes user authentication, for example to mitigate the risk of snooping the password.
This can also slightly improve security when you have [USB keyboard](/doc/device-handling-security/#security-warning-on-usb-input-devices).
There are (at least) two possible configurations: using OTP mode and using challenge-response mode.
OTP mode
--------
This can be configured using the [app-linux-yubikey](https://github.com/adubois/qubes-app-linux-yubikey) package.
This package does not support sharing the same key slot with other applications (it will deny further authentications if you try).
Contrary to the instructions there, there is currently no binary package in the Qubes repository, so you need to compile it yourself.
This might change in the future.
Challenge-response mode
----------------------
In this mode, your YubiKey will generate a response based on the secret key and a random challenge (instead of a counter).
This means that it isn't possible to generate a response in advance even if someone gets access to your YubiKey.
This makes it reasonably safe to use the same YubiKey for other services (also in challenge-response mode).
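Conceptually, the exchange is just HMAC-SHA1 over a fresh random challenge, which is why a response observed once is useless against the next challenge. The following is a self-contained sketch of both sides; the key sizes and function names are illustrative, not what the Qubes or Yubico tools use internally.

```python
import hashlib
import hmac
import secrets

def token_respond(secret: bytes, challenge: bytes) -> bytes:
    # What the YubiKey computes in HMAC-SHA1 challenge-response mode
    return hmac.new(secret, challenge, hashlib.sha1).digest()

def verifier_check(secret: bytes, challenge: bytes, response: bytes) -> bool:
    # What the authenticating side (here, dom0) checks
    expected = token_respond(secret, challenge)
    return hmac.compare_digest(expected, response)

secret = secrets.token_bytes(20)     # shared when the YubiKey slot is programmed
challenge = secrets.token_bytes(32)  # fresh random challenge per login attempt
response = token_respond(secret, challenge)
assert verifier_check(secret, challenge, response)
assert not verifier_check(secret, secrets.token_bytes(32), response)  # stale response fails
```

The final assertion is the point of the design: reusing a captured response against a newly drawn challenge fails, so responses cannot be precomputed or replayed.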
Same as in the OTP case, you will need to set up your YubiKey, choose a separate password (other than your login password!) and apply the configuration.
To use this mode you need to:
@ -50,13 +45,10 @@ To use this mode you need to:
sudo apt-get install yubikey-personalization yubikey-personalization-gui
Shut down your TemplateVM.
Then reboot your USB VM (so changes inside the TemplateVM take effect in your template-based USB VM), or install the packages inside your USB VM if you would like to avoid rebooting it.
2. Configure your YubiKey for challenge-response `HMAC-SHA1` mode, for example [following this tutorial](https://www.yubico.com/products/services-software/personalization-tools/challenge-response/).
On Debian, you can run the graphical user interface `yubikey-personalization-gui` from the command line.
@ -91,17 +83,18 @@ To use this mode you need to:
echo -n "$password" | openssl dgst -sha1
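The `openssl dgst -sha1` command above simply computes the SHA-1 digest of the chosen password. If you want to double-check the value, the same digest can be computed in Python (the password shown is an example only):

```python
import hashlib

password = "password"  # example only -- use your real, separate YubiKey password
digest = hashlib.sha1(password.encode()).hexdigest()
print(digest)  # for "password": 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
```

Note that `openssl dgst` prefixes its output with `(stdin)= `; only the hex digest itself is the value you need.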
7. Edit `/etc/pam.d/login` in dom0.
Add this line at the beginning:
auth include yubikey
8. Edit `/etc/pam.d/xscreensaver` (or the appropriate file if you are using another screen locker program) in dom0.
Add this line at the beginning:
auth include yubikey
9. Edit `/etc/pam.d/lightdm` (or appropriate file if you are using other display manager) in dom0.
Add this line at the beginning:
auth include yubikey
@ -116,46 +109,44 @@ When you want to unlock your screen...
When everything is ok, your screen will be unlocked.
In any case you can still use your login password, but do it in a secure location where no one can snoop your password.
### Mandatory YubiKey Login
Edit `/etc/pam.d/yubikey` (or the appropriate file if you are using another screen locker program) and remove `default=ignore` so the line looks like this.
auth [success=done] pam_exec.so expose_authtok quiet /usr/bin/yk-auth
Locking the screen when YubiKey is removed
------------------------------------------
You can set up your system to automatically lock the screen when you unplug your YubiKey.
This will require creating a simple qrexec service which will expose the ability to lock the screen to your USB VM, and then adding a udev hook to actually call that service.
In dom0:
1. First configure the qrexec service.
Create `/etc/qubes-rpc/custom.LockScreen` with a simple command to lock the screen.
In the case of xscreensaver (used in Xfce) it would be:
DISPLAY=:0 xscreensaver-command -lock
2. Allow your USB VM to call that service.
Assuming that it's named `sys-usb` it would require creating `/etc/qubes-rpc/policy/custom.LockScreen` with:
sys-usb dom0 allow
In your USB VM:
3. Create a udev hook.
Store it in `/rw/config` to have it persist across VM restarts.
For example name the file `/rw/config/yubikey.rules`.
Add the following line:
ACTION=="remove", SUBSYSTEM=="usb", ENV{ID_SECURITY_TOKEN}=="1", RUN+="/usr/bin/qrexec-client-vm dom0 custom.LockScreen"
4. Ensure that the udev hook is placed in the right place after VM restart.
Append to `/rw/config/rc.local`:
ln -s /rw/config/yubikey.rules /etc/udev/rules.d/
udevadm control --reload