This commit is contained in:
Brennan Novak 2015-12-02 19:14:39 +01:00
commit 84c1a0c722
183 changed files with 3418 additions and 1819 deletions

View File

@ -1,6 +1,7 @@
Qubes OS Documentation
======================
https://www.qubes-os.org/doc/
Canonical URL: https://www.qubes-os.org/doc/
All [Qubes OS Project][qubes] documentation pages are stored as plain text
files in this dedicated repository. By cloning and regularly pulling from
@ -44,5 +45,5 @@ making contributions, please observe the following style conventions:
[qubes]: https://github.com/QubesOS
[gh-fork]: https://guides.github.com/activities/forking/
[gh-pull]: https://help.github.com/articles/using-pull-requests/
[patch]: /doc/SourceCode/#sending-a-patch
[lists]: https://www.qubes-os.org/doc/QubesLists/
[patch]: https://www.qubes-os.org/doc/source-code/#sending-a-patch
[lists]: https://www.qubes-os.org/doc/mailing-lists/

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Getting Started
permalink: /en/doc/getting-started/
permalink: /doc/getting-started/
redirect_from:
- /en/doc/getting-started/
- /doc/GettingStarted/
- /wiki/GettingStarted/
---
@ -29,14 +30,14 @@ Each domain, apart from having a distinct name, is also assigned a **label**, wh
![snapshot12.png](/attachment/wiki/GettingStarted/snapshot12.png)
In addition to AppVMs and TemplateVMs, there's one special domain called "dom0," which is where the Desktop Manager runs. This is where you log in to the system. Dom0 is more trusted than any other domain (including TemplateVMs and black-labeled domains). If dom0 were ever compromised, it would be Game Over<sup>TM</sup>. (The entire system would effectively be compromised.) Due to its overarching importance, dom0 has no network connectivity and is used only for running the Window and Desktop Managers. Dom0 shouldn't be used for anything else. In particular, [you should never run user applications in dom0](/en/doc/security-guidelines/#dom0-precautions). (That's what your AppVMs are for!)
In addition to AppVMs and TemplateVMs, there's one special domain called "dom0," which is where the Desktop Manager runs. This is where you log in to the system. Dom0 is more trusted than any other domain (including TemplateVMs and black-labeled domains). If dom0 were ever compromised, it would be Game Over<sup>TM</sup>. (The entire system would effectively be compromised.) Due to its overarching importance, dom0 has no network connectivity and is used only for running the Window and Desktop Managers. Dom0 shouldn't be used for anything else. In particular, [you should never run user applications in dom0](/doc/security-guidelines/#dom0-precautions). (That's what your AppVMs are for!)
Qubes VM Manager and Command Line Tools
---------------------------------------
All aspects of the Qubes system can be controlled using command line tools run under a dom0 console. To open a console window in dom0, either go to Start-\>System Tools-\>Konsole or press Alt-F2 and type `konsole`.
Various command line tools are described as part of this guide, and the whole reference can be found [here](/en/doc/dom0-tools/).
Various command line tools are described as part of this guide, and the whole reference can be found [here](/doc/dom0-tools/).
![r2b1-dom0-konsole.png](/attachment/wiki/GettingStarted/r2b1-dom0-konsole.png)
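To give a flavour of what these tools look like, here is a tiny illustrative sketch (the VM name `personal` is just an example; the reference linked above is authoritative):

~~~
qvm-ls                       # list all VMs and their current state
qvm-run -a personal firefox  # start (if needed) the "personal" AppVM and run Firefox in it
~~~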
@ -79,7 +80,7 @@ How Many Domains Do I Need?
That's a great question, but there's no one-size-fits-all answer. It depends on the structure of your digital life, and this is at least a little different for everyone. If you plan on using your system for work, then it also depends on what kind of job you do.
It's a good idea to start out with the three domains created automatically by the installer: work, personal, and untrusted. Then, if and when you start to feel that some activity just doesn't fit into any of your existing domains, you can easily create a new domain for it. You'll also be able to easily copy any files you need to the newly created domain, as explained [here](/en/doc/copying-files/).
It's a good idea to start out with the three domains created automatically by the installer: work, personal, and untrusted. Then, if and when you start to feel that some activity just doesn't fit into any of your existing domains, you can easily create a new domain for it. You'll also be able to easily copy any files you need to the newly created domain, as explained [here](/doc/copying-files/).
More paranoid people might find it worthwhile to read [this article](http://theinvisiblethings.blogspot.com/2011/03/partitioning-my-digital-life-into.html), which describes how one of the Qubes authors partitions her digital life into security domains.
@ -116,4 +117,4 @@ In order for the changes to take effect, restart the AppVM(s).
* * * * *
Now that you're familiar with the basics, please have a look at the rest of the [documentation](/en/doc/).
Now that you're familiar with the basics, please have a look at the rest of the [documentation](/doc/).

basics/intro.md (new file, 188 lines)
View File

@ -0,0 +1,188 @@
---
layout: doc
title: Introduction
permalink: /intro/
redirect_from:
- /en/intro/
- /doc/SimpleIntro/
- /wiki/SimpleIntro/
---
A Simple Introduction to Qubes
==============================
This is a short, non-technical introduction to Qubes intended for a popular
audience. (If you just want to quickly gain a basic understanding of what
Qubes is all about, you're in the right place!)
What is Qubes?
--------------
Qubes is a security-oriented operating system (OS). The OS is the software
which runs all the other programs on a computer. Some examples of popular
OSes are Microsoft Windows, Mac OS X, Android, and iOS. Qubes is free and
open-source software (FOSS). This means that everyone is free to use, copy,
and change the software in any way. It also means that the source code is
openly available so others can contribute to and audit it.
Why is OS security important?
-----------------------------
Most people use an operating system like Windows or OS X on their desktop
and laptop computers. These OSes are popular because they tend to be easy
to use and usually come pre-installed on the computers people buy. However,
they present problems when it comes to security. For example, you might
open an innocent-looking email attachment or website, not realizing that
you're actually allowing malware (malicious software) to run on your
computer. Depending on what kind of malware it is, it might do anything
from showing you unwanted advertisements to logging your keystrokes to
taking over your entire computer. This could jeopardize all the information
stored on or accessed by this computer, such as health records, confidential
communications, or thoughts written in a private journal. Malware can also
interfere with the activities you perform with your computer. For example,
if you use your computer to conduct financial transactions, the malware
might allow its creator to make fraudulent transactions in your name.
Aren't antivirus programs and firewalls enough?
-----------------------------------------------
Unfortunately, conventional security approaches like antivirus programs
and (software and/or hardware) firewalls are no longer enough to keep out
sophisticated attackers. For example, nowadays it's common for malware
creators to check to see if their malware is recognized by any popular
antivirus programs. If it's recognized, they scramble their code until it's
no longer recognizable by the antivirus programs, then send it out. The
best antivirus programs will subsequently get updated once the antivirus
programmers discover the new threat, but this usually occurs at least a
few days after the new attacks start to appear in the wild. By then, it's
typically too late for those who have already been compromised. In addition,
bugs are inevitably discovered in the common software we all use (such as
our web browsers), and no antivirus program or firewall can prevent all of
these bugs from being exploited.
How does Qubes provide security?
--------------------------------
Qubes takes an approach called **security by compartmentalization**, which
allows you to compartmentalize the various parts of your digital life into
securely isolated virtual machines (VMs). A VM is basically a simulated
computer with its own OS which runs as software on your physical computer. You
can think of a VM as a *computer within a computer*.
This approach allows you to keep the different things you do on your computer
securely separated from each other in isolated VMs so that one VM getting
compromised won't affect the others. For example, you might have one VM for
visiting untrusted websites and a different VM for doing online banking. This
way, if your untrusted browsing VM gets compromised by a malware-laden
website, your online banking activities won't be at risk. Similarly, if
you're concerned about malicious email attachments, Qubes can make it so
that every attachment gets opened in its own single-use, "disposable" VM. In
this way, Qubes allows you to do everything on the same physical computer
without having to worry about a single successful cyberattack taking down
your entire digital life in one fell swoop.
How does Qubes compare to using a "live CD" OS?
-----------------------------------------------
Booting your computer from a live CD (or DVD) when you need to perform
sensitive activities can certainly be more secure than simply using your main
OS, but this method still preserves many of the risks of conventional OSes. For
example, popular live OSes (such as [Tails] and other Linux distributions)
are still **monolithic** in the sense that all software is still running in
the same OS. This means, once again, that if your session is compromised,
then all the data and activities performed within that same session are also
potentially compromised.
How does Qubes compare to running VMs in a conventional OS?
---------------------------------------------------------
Not all virtual machine software is equal when it comes to security. You may
have used or heard of VMs in relation to software like VirtualBox or VMware
Workstation. These are known as "Type 2" or "hosted" hypervisors. (The
**hypervisor** is the software, firmware, or hardware that creates and
runs virtual machines.) These programs are popular because they're designed
primarily to be easy to use and run under popular OSes like Windows (which
is called the **host** OS, since it "hosts" the VMs). However, the fact
that Type 2 hypervisors run under the host OS means that they're really
only as secure as the host OS itself. If the host OS is ever compromised,
then any VMs it hosts are also effectively compromised.
By contrast, Qubes uses a "Type 1" or "bare metal" hypervisor called
[Xen]. Instead of running inside an OS, Type 1 hypervisors run directly on the
"bare metal" of the hardware. This means that an attacker must be capable of
subverting the hypervisor itself in order to compromise the entire system,
which is vastly more difficult.
Qubes makes it so that multiple VMs running under a Type 1 hypervisor can be
securely used as an integrated OS. For example, it puts all of your application
windows on the same desktop with special colored borders indicating the
trust levels of their respective VMs. It also allows for things like secure
copy/paste operations between VMs, securely copying and transferring files
between VMs, and secure networking between VMs and the Internet.
How does Qubes compare to using a separate physical machine?
------------------------------------------------------------
Using a separate physical computer for sensitive activities can certainly be
more secure than using one computer with a conventional OS for everything,
but there are still risks to consider. Briefly, here are some of the main
pros and cons of this approach relative to Qubes:
Pros:
* Physical separation doesn't rely on a hypervisor. (It's very unlikely
that an attacker will break out of Qubes' hypervisor, but if she were to
manage to do so, she could potentially gain control over the entire system.)
* Physical separation can be a natural complement to physical security. (For
example, you might find it natural to lock your secure laptop in a safe
when you take your unsecure laptop out with you.)
Cons:
* Physical separation can be cumbersome and expensive, since we may have to
obtain and set up a separate physical machine for each security level we
need.
* There's generally no secure way to transfer data between physically
separate computers running conventional OSes. (Qubes has a secure inter-VM
file transfer system to handle this.)
* Physically separate computers running conventional OSes are still
independently vulnerable to most conventional attacks due to their monolithic
nature.
* Malware which can bridge air gaps has existed for several years now and
is becoming increasingly common.
(For more on this topic, please see the paper
[Software compartmentalization vs. physical separation][paper-compart].)
More information
----------------
This page is just a brief sketch of what Qubes is all about, and many
technical details have been omitted here for the sake of presentation.
* If you're a current or potential Qubes user, you may want to check out the
[documentation][doc] and the [FAQ][user-faq].
* If you're a developer, there's dedicated [documentation][system-doc]
and an [FAQ][devel-faq] just for you.
* Ready to give Qubes a try? Head on over to the [downloads] page.
* Once you've installed Qubes, here's a guide on [getting started].
[Tails]: https://tails.boum.org/
[Xen]: http://www.xenproject.org
[paper-compart]: http://www.invisiblethingslab.com/resources/2014/Software_compartmentalization_vs_physical_separation.pdf
[doc]: /doc/
[user-faq]: /doc/user-faq/
[system-doc]: /doc/system-doc/
[devel-faq]: /doc/devel-faq/
[downloads]: /downloads/
[getting started]: /doc/getting-started/

View File

@ -1,8 +1,11 @@
---
layout: doc
title: Mailing Lists
permalink: /en/doc/mailing-lists/
permalink: /doc/mailing-lists/
redirect_from:
- /en/doc/mailing-lists/
- /en/doc/qubes-lists/
- /doc/qubes-lists/
- /doc/QubesLists/
- /wiki/QubesLists/
---
@ -47,8 +50,8 @@ This list is for helping users solve various daily problems with Qubes OS. Examp
Please try searching both the Qubes website and the archives of the mailing lists before sending a question. In addition, please make sure that you have read and understood the following basic documentation prior to posting to the list:
- [Installation guides, System Requirements, and HCL](/doc/QubesDownloads/) \<-- for problems related to Qubes OS installation
- [Qubes User FAQ](/en/doc/user-faq/)
- [Qubes User Guides](/en/doc/) \<-- for questions about how to use Qubes OS
- [Qubes User FAQ](/doc/user-faq/)
- [Qubes User Guides](/doc/) \<-- for questions about how to use Qubes OS
### How to Subscribe and Post

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Users' FAQ
permalink: /en/doc/user-faq/
permalink: /doc/user-faq/
redirect_from:
- /en/doc/user-faq/
- /doc/UserFaq/
- /wiki/UserFaq/
---
@ -141,7 +142,7 @@ Create an issue in [qubes-issues](https://github.com/QubesOS/qubes-issues/issues
Installation & Hardware Compatibility
-------------------------------------
(See also: [System Requirements](/en/doc/system-requirements/) and [Hardware Compatibility List](/hcl/).)
(See also: [System Requirements](/doc/system-requirements/) and [Hardware Compatibility List](/hcl/).)
### How much disk space does each AppVM require?
@ -194,7 +195,7 @@ In your TemplateVMs, open a terminal and run `sudo yum upgrade`.
### How do I run a Windows HVM in non-seamless mode (i.e., as a single window)?
Enable "debug mode" in the AppVM's settings, either by checking the box labelled "Run in debug mode" in the Qubes VM Manager AppVM settings menu or by running the [qvm-prefs command](/en/doc/dom0-tools/qvm-prefs/).)
Enable "debug mode" in the AppVM's settings, either by checking the box labelled "Run in debug mode" in the Qubes VM Manager AppVM settings menu or by running the [qvm-prefs command](/doc/dom0-tools/qvm-prefs/).)
### I created a usbVM and assigned USB controllers to it. Now the usbVM won't boot.

View File

@ -1,8 +1,10 @@
---
layout: doc
title: Emergency Backup Recovery - format version 2
permalink: /en/doc/backup-emergency-restore-v2/
redirect_from: /doc/BackupEmergencyRestoreV2/
permalink: /doc/backup-emergency-restore-v2/
redirect_from:
- /en/doc/backup-emergency-restore-v2/
- /doc/BackupEmergencyRestoreV2/
---
Emergency Backup Recovery without Qubes - format version 2

View File

@ -1,8 +1,10 @@
---
layout: doc
title: Emergency Backup Recovery - format version 3
permalink: /en/doc/backup-emergency-restore-v3/
redirect_from: /doc/BackupEmergencyRestoreV3/
permalink: /doc/backup-emergency-restore-v3/
redirect_from:
- /en/doc/backup-emergency-restore-v3/
- /doc/BackupEmergencyRestoreV3/
---
Emergency Backup Recovery without Qubes - format version 3
@ -53,7 +55,7 @@ The Qubes backup system has been designed with emergency disaster recovery in mi
compressed=True
compression-filter=gzip
**Note:** If you see `version=2` here, go to [Emergency Backup Recovery - format version 2](/doc/BackupEmergencyRestoreV2/) instead.
**Note:** If you see `version=2` here, go to [Emergency Backup Recovery - format version 2](/doc/backup-emergency-restore-v2/) instead.
4. Verify the integrity of the `private.img` file which houses your data.

View File

@ -0,0 +1,104 @@
---
layout: doc
title: Backup, Restoration, and Migration
permalink: /doc/backup-restore/
redirect_from:
- /en/doc/backup-restore/
- /doc/BackupRestore/
- /wiki/BackupRestore/
---
Qubes Backup, Restoration, and Migration
========================================
**Caution:** The Qubes backup system currently relies on a [weak key derivation scheme](https://github.com/QubesOS/qubes-issues/issues/971). It is *strongly recommended* that users select a *high-entropy* passphrase for use with Qubes backups.
* [Creating a Backup](#creating-a-backup)
* [Restoring from a Backup](#restoring-from-a-backup)
* [Emergency Backup Recovery without Qubes](#emergency-backup-recovery-without-qubes)
* [Migrating Between Two Physical Machines](#migrating-between-two-physical-machines)
* [Notes](#notes)
With Qubes, it's easy to back up and restore your whole system, as well as to migrate between two physical machines.
As of Qubes R2B3, these functions are integrated into the Qubes VM Manager GUI. There are also two command-line tools available which perform the same functions: [qvm-backup](/doc/dom0-tools/qvm-backup/) and [qvm-backup-restore](/doc/dom0-tools/qvm-backup-restore/).
Creating a Backup
-----------------
1. In **Qubes VM Manager**, click **System** on the menu bar, then click **Backup VMs** in the dropdown list. This brings up the **Qubes Backup VMs** window.
2. Move the AppVMs which you desire to back up to the right-hand **Selected** column. AppVMs in the left-hand **Available** column will not be backed up.
**Note:** An AppVM must be shut down in order to be backed up. Currently running AppVMs appear in red.
Once you have selected all desired AppVMs, click **Next**.
3. Select the destination for the backup:
- If you wish to send your backup to a [USB mass storage device](/doc/stick-mounting/), select the device in the dropdown box next to **Device**. (This feature was removed in R3; instead, select the appropriate **Target AppVM** and mount the stick with one click in the file selection dialog.)
- If you wish to send your backup to a (currently running) AppVM, select the AppVM in the dropdown box next to **Target AppVM**.
You must also specify a directory on the device or in the AppVM, or a command to be executed in the AppVM as a destination for your backup. For example, if you wish to send your backup to the `~/backups` folder in the target AppVM, you would simply type `backups` in this field. This destination directory must already exist. If it does not exist, you must create it manually prior to backing up.
By specifying the appropriate directory as the destination in an AppVM, it is possible to send the backup directly to, e.g., a USB mass storage device attached to the AppVM. Likewise, it is possible to enter any command as a backup target by specifying the command as the destination in the AppVM. This can be used to send your backup directly to, e.g., a remote server using SSH.
At this point, you must also choose whether to encrypt your backup by checking or unchecking the **Encrypt backup** box.
**Note:** It is strongly recommended that you opt to encrypt all backups which will be sent to untrusted destinations!
**Note:** The supplied passphrase is used for **both** encryption/decryption and integrity verification. If you decide not to encrypt your backup (by unchecking the **Encrypt backup** box), the passphrase you supply will be used **only** for integrity verification. If you supply a passphrase but do not check the **Encrypt backup** box, your backup will **not** be encrypted!
4. When you are ready, click **Next**. Qubes will proceed to create your backup. Once the progress bar has completed, you may click **Finish**.
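The same backup can be created from the command line with `qvm-backup` (a minimal sketch; the VM name and destination path below are examples only, and the available options vary by release, so check `qvm-backup --help`):

~~~
qvm-shutdown --wait work                  # VMs must be shut down before being backed up
qvm-backup /mnt/removable/qubes-backups   # create the backup in the given directory
~~~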
Restoring from a Backup
-----------------------
1. In **Qubes VM Manager**, click **System** on the menu bar, then click **Restore VMs from backup** in the dropdown list. This brings up the **Qubes Restore VMs** window.
2. Select the source location of the backup to be restored:
- If your backup is located on a [USB mass storage device](/doc/stick-mounting/), select the device in the dropdown box next to **Device**.
- If your backup is located in a (currently running) AppVM, select the AppVM in the dropdown box next to **AppVM**.
You must also specify the directory in which the backup resides (or a command to be executed in an AppVM). If you followed the instructions in the previous section, "Creating a Backup," then your backup is most likely in the location you chose as the destination in step 3. For example, if you had chosen the `~/backups` directory of an AppVM as your destination in step 3, you would now select the same AppVM and again type `backups` into the **Backup directory** field.
**Note:** After you have typed the directory location of the backup in the **Backup directory** field, click the ellipsis button `...` to the right of the field.
3. There are three options you may select when restoring from a backup:
1. **ignore missing**: If any of the AppVMs in your backup depended upon a NetVM, ProxyVM, or TemplateVM which is not present in (i.e., "missing from") the current system, checking this box will ignore the fact that they are missing and restore the AppVMs anyway.
2. **ignore username mismatch**: This option applies only to the restoration of dom0's home directory. If your backup was created on a Qubes system which had a different dom0 username than the dom0 username of the current system, then checking this box will ignore the mismatch between the two usernames and proceed to restore the home directory anyway.
3. **skip dom0**: If this box is checked, dom0's home directory will not be restored from your backup.
4. If your backup is encrypted, you must check the **Encrypted backup** box. If a passphrase was supplied during the creation of your backup (regardless of whether it is encrypted), then you must supply it here.
**Note:** The passphrase which was supplied when the backup was created was used for **both** encryption/decryption and integrity verification. If the backup was not encrypted, the supplied passphrase is used only for integrity verification.
**Note:** An AppVM cannot be restored from a backup if an AppVM with the same name already exists on the current system. You must first remove or change the name of any AppVM with the same name in order to restore such an AppVM.
5. When you are ready, click **Next**. Qubes will proceed to restore from your backup. Once the progress bar has completed, you may click **Finish**.
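Restoring can likewise be done from the command line with `qvm-backup-restore` (again only a sketch; the path is an example, and the flags corresponding to the GUI options above differ between releases, so consult `qvm-backup-restore --help`):

~~~
qvm-backup-restore /mnt/removable/qubes-backups   # restore VMs from the backup in this directory
~~~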
Emergency Backup Recovery without Qubes
---------------------------------------
The Qubes backup system has been designed with emergency disaster recovery in mind. No special Qubes-specific tools are required to access data backed up by Qubes. In the event a Qubes system is unavailable, you can access your data on any GNU/Linux system with the following procedure.
For emergency restoration of a backup created on Qubes R2 or newer, take a look [here](/doc/backup-emergency-restore-v3/). For backups created on an earlier Qubes version, take a look [here](/doc/backup-emergency-restore-v2/).
Migrating Between Two Physical Machines
---------------------------------------
In order to migrate your Qubes system from one physical machine to another, simply follow the backup procedure on the old machine, [install Qubes](/doc/downloads/) on the new machine, and follow the restoration procedure on the new machine. All of your settings and data will be preserved!
Notes
-----
* The Qubes backup system relies on `openssl enc`, which is known to use a very weak key derivation scheme. The Qubes backup system also uses the same passphrase for authentication and for encryption, which is problematic from a security perspective. Users are advised to use a very high entropy passphrase for Qubes backups. For a full discussion, see [this thread](https://groups.google.com/d/msg/qubes-devel/CZ7WRwLXcnk/u_rZPoVxL5IJ).
* For the technical details of the backup system, please refer to [this thread](https://groups.google.com/d/topic/qubes-devel/TQr_QcXIVww/discussion).
* If working with symlinks, note the issues described in [this thread](https://groups.google.com/d/topic/qubes-users/EITd1kBHD30/discussion).

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Copy and Paste
permalink: /en/doc/copy-paste/
permalink: /doc/copy-paste/
redirect_from:
- /en/doc/copy-paste/
- /doc/CopyPaste/
- /wiki/CopyPaste/
---
@ -46,7 +47,7 @@ You may now paste the log contents to any VM as you normally would (i.e., Ctrl-S
For data other than logs, there are two options:
1. [Copy it as a file.](/en/doc/copy-to-dom0/)
1. [Copy it as a file.](/doc/copy-to-dom0/)
2. Paste the data to `/var/run/qubes/qubes-clipboard.bin`, then write "dom0" to `/var/run/qubes/qubes-clipboard.bin.source`. Then use Ctrl-Shift-V to paste the data to the desired VM.
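For example, option 2 might look like this in a dom0 terminal (a small sketch using the paths named above; the text being copied is arbitrary):

~~~
echo "some text from dom0" > /var/run/qubes/qubes-clipboard.bin
echo -n dom0 > /var/run/qubes/qubes-clipboard.bin.source
# Now focus a window in the destination VM, press Ctrl-Shift-V, then paste normally (Ctrl-V).
~~~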
Clipboard automatic policy enforcement
@ -73,3 +74,5 @@ The copy/paste shortcuts are configurable in:
~~~
/etc/qubes/guid.conf
~~~
VMs need to be restarted in order for changes in `/etc/qubes/guid.conf` to take effect.

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Copying to and from dom0
permalink: /en/doc/copy-to-dom0/
permalink: /doc/copy-to-dom0/
redirect_from:
- /en/doc/copy-to-dom0/
- /doc/CopyToDomZero/
- /wiki/CopyToDomZero/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Copying Files between Domains
permalink: /en/doc/copying-files/
permalink: /doc/copying-files/
redirect_from:
- /en/doc/copying-files/
- /doc/CopyingFiles/
- /wiki/CopyingFiles/
---
@ -28,7 +29,7 @@ GUI
CLI
---
[qvm-copy-to-vm](/en/doc/vm-tools/qvm-copy-to-vm/)
[qvm-copy-to-vm](/doc/vm-tools/qvm-copy-to-vm/)
On inter-domain file copy security
----------------------------------

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Disposable VMs
permalink: /en/doc/dispvm/
permalink: /doc/dispvm/
redirect_from:
- /en/doc/dispvm/
- /doc/DisposableVms/
- /wiki/DisposableVMs/
---
@ -73,7 +74,7 @@ Customizing Disposable VMs
---------------------------------------------------------
You can change the template used to generate the Disposable VM, and change settings used in the Disposable VM savefile. These changes will be reflected in every new Disposable VM.
Full instructions are [here](/en/doc/disp-vm-customization/)
Full instructions are [here](/doc/disp-vm-customization/)
Disposable VMs and Local Forensics

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Full Screen Mode
permalink: /en/doc/full-screen-mode/
permalink: /doc/full-screen-mode/
redirect_from:
- /en/doc/full-screen-mode/
- /doc/FullScreenMode/
- /wiki/FullScreenMode/
---
@ -18,7 +19,7 @@ Normally Qubes GUI virtualization daemon restricts the VM from "owning" the full
Why is full screen mode potentially dangerous?
----------------------------------------------
If one allowed one of the VMs to "own" the full screen, e.g. to show a movie on a full screen, it might not be possible for the user to further realize if the applications/VM really "released" the full screen, or does it keep emulating the whole desktop and pretends to be the trusted Window Manager, drawing shapes on the screen that look e.g. like other windows, belonging to other domains (e.g. to trick the user into entering a secret passphrase into a window that looks like belonging to some trusted domain).
If one allowed one of the VMs to "own" the full screen, e.g. to show a movie on a full screen, it might not be possible for the user to know if the applications/VM really "released" the full screen, or if it has started emulating the whole desktop and is pretending to be the trusted Window Manager, drawing shapes on the screen that look e.g. like other windows, belonging to other domains (e.g. to trick the user into entering a secret passphrase into a window that looks like belonging to some trusted domain).
Secure use of full screen mode
------------------------------
@ -40,7 +41,7 @@ VM: {
};
~~~
The string 'personal' above is exemplary and should be replaced by the actual name of the VM for which you want to enable this functionality.
The string 'personal' above is an example only and should be replaced by the actual name of the VM for which you want to enable this functionality.
One can also enable this functionality for all the VMs globally in the same file, by modifying the 'global' section:

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Managing AppVm Shortcuts
permalink: /en/doc/managing-appvm-shortcuts/
permalink: /doc/managing-appvm-shortcuts/
redirect_from:
- /en/doc/managing-appvm-shortcuts/
- /doc/ManagingAppVmShortcuts/
- /wiki/ManagingAppVmShortcuts/
---
@ -31,6 +32,6 @@ List of installed applications for each AppVM is stored in its template's `/var/
The actual command lines for the menu shortcuts involve the `qvm-run` command, which starts a process in another domain. Example: `qvm-run -q --tray -a w7s 'cmd.exe /c "C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Accessories\\Calculator.lnk"'` or `qvm-run -q --tray -a untrusted 'firefox %u'`
`qvm-sync-appmenus` works by invoking *GetAppMenus* [Qubes service](/en/doc/qrexec/) in the target domain. This service enumerates installed applications and sends formatted info back to the dom0 script (`/usr/libexec/qubes-appmenus/qubes-receive-appmenus`) which creates .desktop files in the AppVM/TemplateVM directory.
`qvm-sync-appmenus` works by invoking *GetAppMenus* [Qubes service](/doc/qrexec/) in the target domain. This service enumerates installed applications and sends formatted info back to the dom0 script (`/usr/libexec/qubes-appmenus/qubes-receive-appmenus`) which creates .desktop files in the AppVM/TemplateVM directory.
For Linux VMs the service script is in `/etc/qubes-rpc/qubes.GetAppMenus`. In Windows it's a PowerShell script located in `c:\Program Files\Invisible Things Lab\Qubes OS Windows Tools\qubes-rpc-services\get-appmenus.ps1` by default.
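To trigger this synchronization manually, run the tool in dom0 against the VM in question (the VM name below is only an example):

~~~
qvm-sync-appmenus work-template   # re-enumerate applications installed in "work-template"
~~~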

View File

@ -0,0 +1,22 @@
---
layout: doc
title: Recording Optical Discs
permalink: /doc/recording-optical-discs/
redirect_from: /en/doc/recording-optical-discs/
---
Recording Optical Discs
=======================
Passthrough recording (a.k.a., "burning") is not supported by Xen. Currently,
the only options for recording optical discs (e.g., CDs, DVDs, BRDs) in Qubes
are:
1. Use a USB optical drive.
2. Attach a SATA optical drive to a secondary SATA controller, then assign this
secondary SATA controller to a VM.
3. Use a SATA optical drive attached to dom0.
(**Caution:** This option may violate the Qubes security model if it entails
transferring untrusted data (e.g., an ISO) to dom0 in order to record it on
an optical disc.)

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Updating software in dom0
permalink: /en/doc/software-update-dom0/
permalink: /doc/software-update-dom0/
redirect_from:
- /en/doc/software-update-dom0/
- /doc/SoftwareUpdateDom0/
- /wiki/SoftwareUpdateDom0/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Installing and updating software in VMs
permalink: /en/doc/software-update-vm/
permalink: /doc/software-update-vm/
redirect_from:
- /en/doc/software-update-vm/
- /doc/SoftwareUpdateVM/
- /wiki/SoftwareUpdateVM/
---
@ -41,7 +42,7 @@ As the template VM is used for creating filesystems for other AppVMs, where you
There are several ways to deal with this problem:
- Only install packages from trusted sources -- e.g. from the pre-configured Fedora repositories. All those packages are signed by Fedora, and as we expect that at least the package's installation scripts are not malicious. This is enforced by default (at the [firewall VM level](/en/doc/qubes-firewall/)), by not allowing any networking connectivity in the default template VM, except for access to the Fedora repos.
- Only install packages from trusted sources -- e.g. from the pre-configured Fedora repositories. All those packages are signed by Fedora, and we expect that at least the packages' installation scripts are not malicious. This is enforced by default (at the [firewall VM level](/doc/qubes-firewall/)) by not allowing any network connectivity in the default template VM, except for access to the Fedora repos.
- Use *standalone VMs* (see below) for installation of untrusted software packages.
@ -51,7 +52,7 @@ Some popular questions:
- So, why should we actually trust the Fedora repos -- they also contain a large amount of 3rd-party software that might be buggy, right?
As long as template's compromise is considered, it doesn't really matter whether /usr/bin/firefox is buggy and can be exploited, or not. What matters is whether its *installation* scripts (such as %post in the rpm.spec) are benign or not. Template VM should be used only for installation of packages, and nothing more, so it should never get a chance to actually run the /usr/bin/firefox and got infected from it, in case it was compromised. Also, some of your more trusted AppVMs, would have networking restrictions enforced by the [firewall VM](/en/doc/qubes-firewall/), and again they should not fear this proverbial /usr/bin/firefox being potentially buggy and easy to compromise.
As far as the template's compromise is concerned, it doesn't really matter whether /usr/bin/firefox is buggy and can be exploited or not. What matters is whether its *installation* scripts (such as %post in the rpm.spec) are benign. The Template VM should be used only for installing packages, and nothing more, so it should never get a chance to actually run /usr/bin/firefox and get infected by it, in case it is compromised. Also, some of your more trusted AppVMs would have networking restrictions enforced by the [firewall VM](/doc/qubes-firewall/), so again they need not fear this proverbial /usr/bin/firefox being potentially buggy and easy to compromise.
- But why trust Fedora?

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Mounting USB Sticks
permalink: /en/doc/stick-mounting/
permalink: /doc/stick-mounting/
redirect_from:
- /en/doc/stick-mounting/
- /doc/StickMounting/
- /wiki/StickMounting/
---
@ -12,7 +13,7 @@ How to Mount USB Sticks to AppVMs
(**Note:** In the present context, the term "USB stick" denotes any [USB mass storage device](https://en.wikipedia.org/wiki/USB_mass_storage_device_class). In addition to smaller flash memory sticks, this includes things like USB external hard drives.)
Qubes supports the ability to attach a USB stick (or just one or more of its partitions) to any AppVM easily, no matter which VM actually handles the USB controller. (The USB controller may be assigned on the **Devices** tab of an AppVM's settings page in Qubes VM Manager or by using the [qvm-pci command](/en/doc/assigning-devices/).)
Qubes supports the ability to attach a USB stick (or just one or more of its partitions) to any AppVM easily, no matter which VM actually handles the USB controller. (The USB controller may be assigned on the **Devices** tab of an AppVM's settings page in Qubes VM Manager or by using the [qvm-pci command](/doc/assigning-devices/).)
As of Qubes R2 Beta 3, USB stick mounting has been integrated into the Qubes VM Manager GUI. Simply insert your USB stick, right-click the desired AppVM in the Qubes VM Manager list, click **Attach/detach block devices**, and select your desired action and device. This, however, only works for the whole device.
If you would like to attach individual partitions you must use the command-line tool (shown below). The reason for this is that when attaching a single partition, it used to be that Nautilus file manager would not see it and automatically mount it (see [this ticket](https://github.com/QubesOS/qubes-issues/issues/623)). This problem, however, seems to be resolved (see [this issue comment](https://github.com/QubesOS/qubes-issues/issues/1072#issuecomment-124270051)).

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Assigning Devices
permalink: /en/doc/assigning-devices/
permalink: /doc/assigning-devices/
redirect_from:
- /en/doc/assigning-devices/
- /doc/AssigningDevices/
- /wiki/AssigningDevices/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Config Files
permalink: /en/doc/config-files/
permalink: /doc/config-files/
redirect_from:
- /en/doc/config-files/
- /doc/ConfigFiles/
- "/doc/UserDoc/ConfigFiles/"
- "/wiki/UserDoc/ConfigFiles/"

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Disk TRIM
permalink: /en/doc/disk-trim/
permalink: /doc/disk-trim/
redirect_from:
- /en/doc/disk-trim/
- /doc/DiskTRIM/
- /wiki/DiskTRIM/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: External Audio
permalink: /en/doc/external-audio/
permalink: /doc/external-audio/
redirect_from:
- /en/doc/external-audio/
- /doc/ExternalAudio/
- /wiki/ExternalAudio/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: External Device Mount Point
permalink: /en/doc/external-device-mount-point/
permalink: /doc/external-device-mount-point/
redirect_from:
- /en/doc/external-device-mount-point/
- /doc/ExternalDeviceMountPoint/
- /wiki/ExternalDeviceMountPoint/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Fetchmail
permalink: /en/doc/fetchmail/
permalink: /doc/fetchmail/
redirect_from:
- /en/doc/fetchmail/
- /doc/Fetchmail/
- /wiki/Fetchmail/
---
@ -10,7 +11,7 @@ redirect_from:
Fetchmail
=========
Fetchmail is standalone MRA (Mail Retrieval Agent) aka "IMAP/POP3 client". Its sole purpose is to fetch your messages and store it locally or feed to local MTA (Message Transfer Agent). It cannot "read" messages — for that, use a MUA like Thunderbird or [Mutt](/en/doc/mutt/).
Fetchmail is a standalone MRA (Mail Retrieval Agent), aka an "IMAP/POP3 client". Its sole purpose is to fetch your messages and store them locally or feed them to a local MTA (Message Transfer Agent). It cannot "read" messages; for that, use an MUA like Thunderbird or [Mutt](/doc/mutt/).
Installation
------------
@ -22,7 +23,7 @@ Configuration
Assuming you have more than one account (a safe assumption these days), you need to spawn multiple fetchmail instances, one for each IMAP/POP3 server (though one instance can watch over several accounts on one server). The easiest way is to create a template systemd unit and start it several times. Fedora does not supply one, so we have to write it ourselves.
**NOTE:** this assumes you use [Postfix](/en/doc/postfix/) as your local MTA.
**NOTE:** this assumes you use [Postfix](/doc/postfix/) as your local MTA.
In TemplateVM create `/etc/systemd/system/fetchmail@.service`:

View File

@ -0,0 +1,310 @@
---
layout: doc
title: Managing VM kernel
permalink: /doc/managing-vm-kernel/
redirect_from:
- /en/doc/managing-vm-kernel/
---
VM kernel managed by dom0
-------------------------
By default, VM kernels are provided by dom0. This means that:

1. You can select the kernel version in VM settings;
2. You can modify kernel options in VM settings;
3. You can **not** modify any of the above from inside the VM;
4. Installing additional kernel modules is cumbersome.

To select which kernel a given VM will use, you can either use Qubes Manager (VM settings, Advanced tab) or the `qvm-prefs` tool:
~~~
[user@dom0 ~]$ qvm-prefs my-appvm -s kernel
Missing kernel version argument!
Possible values:
1) default
2) none (kernels subdir in VM)
3) <kernel version>, one of:
- 3.18.16-3
- 3.18.17-4
- 3.19.fc20
- 3.18.10-2
[user@dom0 ~]$ qvm-prefs my-appvm -s kernel 3.18.17-4
[user@dom0 ~]$ qvm-prefs my-appvm -s kernel default
~~~
To check or change the default kernel, you can either go to "Global settings" in Qubes Manager or use the `qubes-prefs` tool:
~~~
[user@dom0 ~]$ qubes-prefs
clockvm : sys-net
default-fw-netvm : sys-net
default-kernel : 3.18.17-4
default-netvm : sys-firewall
default-template : fedora-21
updatevm : sys-firewall
[user@dom0 ~]$ qubes-prefs -s default-kernel 3.19.fc20
~~~
Installing a different kernel using the Qubes kernel package
============================================================

VM kernels are packaged by the Qubes team in the `kernel-qubes-vm` package. Generally, the system will keep the 3 newest available versions. You can list them with the `rpm` command:
~~~
[user@dom0 ~]$ rpm -qa 'kernel-qubes-vm*'
kernel-qubes-vm-3.18.10-2.pvops.qubes.x86_64
kernel-qubes-vm-3.18.16-3.pvops.qubes.x86_64
kernel-qubes-vm-3.18.17-4.pvops.qubes.x86_64
~~~
If you want a more recent version, you can check the `qubes-dom0-unstable` repository. As the name suggests, keep in
mind that those packages may be less stable than the default ones.

Checking the available versions in the `qubes-dom0-unstable` repository:
~~~
[user@dom0 ~]$ sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable --action=list kernel-qubes-vm
Using sys-firewall as UpdateVM to download updates for Dom0; this may take some time...
Running command on VM: 'sys-firewall'...
Loaded plugins: langpacks, post-transaction-actions, yum-qubes-hooks
Installed Packages
kernel-qubes-vm.x86_64 1000:3.18.10-2.pvops.qubes installed
kernel-qubes-vm.x86_64 1000:3.18.16-3.pvops.qubes installed
kernel-qubes-vm.x86_64 1000:3.18.17-4.pvops.qubes installed
Available Packages
kernel-qubes-vm.x86_64 1000:4.1.12-6.pvops.qubes qubes-dom0-unstable
No packages downloaded
Installed Packages
kernel-qubes-vm.x86_64 1000:3.18.10-2.pvops.qubes @anaconda/R3.0
kernel-qubes-vm.x86_64 1000:3.18.16-3.pvops.qubes @/kernel-qubes-vm-3.18.16-3.pvops.qubes.x86_64
kernel-qubes-vm.x86_64 1000:3.18.17-4.pvops.qubes @qubes-dom0-cached
~~~
Installing the new version from the `qubes-dom0-unstable` repository:
~~~
[user@dom0 ~]$ sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable kernel-qubes-vm
Using sys-firewall as UpdateVM to download updates for Dom0; this may take some time...
Running command on VM: 'sys-firewall'...
Loaded plugins: langpacks, post-transaction-actions, yum-qubes-hooks
Resolving Dependencies
(...)
===========================================================================================
Package Arch Version Repository Size
===========================================================================================
Installing:
kernel-qubes-vm x86_64 1000:4.1.12-6.pvops.qubes qubes-dom0-cached 40 M
Removing:
kernel-qubes-vm x86_64 1000:3.18.10-2.pvops.qubes @anaconda/R3.0 134 M
Transaction Summary
===========================================================================================
Install 1 Package
Remove 1 Package
Total download size: 40 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction (shutdown inhibited)
Installing : 1000:kernel-qubes-vm-4.1.12-6.pvops.qubes.x86_64 1/2
mke2fs 1.42.12 (29-Aug-2014)
This kernel version is used by at least one VM, cannot remove
error: %preun(kernel-qubes-vm-1000:3.18.10-2.pvops.qubes.x86_64) scriptlet failed, exit status 1
Error in PREUN scriptlet in rpm package 1000:kernel-qubes-vm-3.18.10-2.pvops.qubes.x86_64
Verifying : 1000:kernel-qubes-vm-4.1.12-6.pvops.qubes.x86_64 1/2
Verifying : 1000:kernel-qubes-vm-3.18.10-2.pvops.qubes.x86_64 2/2
Installed:
kernel-qubes-vm.x86_64 1000:4.1.12-6.pvops.qubes
Failed:
kernel-qubes-vm.x86_64 1000:3.18.10-2.pvops.qubes
Complete!
[marmarek@dom0 ~]$
~~~
In the above example, yum tries to remove the 3.18.10-2.pvops.qubes kernel (to keep only 3 versions installed), but since some VM still uses it, the removal fails. The installation of the new package is unaffected by this.

The newly installed package is set as the default VM kernel.
Installing a different VM kernel based on the dom0 kernel
=========================================================
It is possible to package a kernel installed in dom0 as a VM kernel. This makes it
possible to use a VM kernel which is not packaged by the Qubes team. This includes:

* using a Fedora kernel package
* using a manually compiled kernel

To prepare such a VM kernel, you need to install the `qubes-kernel-vm-support`
package in dom0 and also have the matching kernel headers installed (the `kernel-devel`
package in the case of a Fedora kernel). You can install what is required using `qubes-dom0-update`:
~~~
[user@dom0 ~]$ sudo qubes-dom0-update qubes-kernel-vm-support kernel-devel
Using sys-firewall as UpdateVM to download updates for Dom0; this may take some time...
Running command on VM: 'sys-firewall'...
Loaded plugins: langpacks, post-transaction-actions, yum-qubes-hooks
Package 1000:kernel-devel-4.1.9-6.pvops.qubes.x86_64 already installed and latest version
Resolving Dependencies
(...)
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
qubes-kernel-vm-support x86_64 3.1.2-1.fc20 qubes-dom0-cached 9.2 k
Transaction Summary
================================================================================
Install 1 Package
Total download size: 9.2 k
Installed size: 13 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction (shutdown inhibited)
Installing : qubes-kernel-vm-support-3.1.2-1.fc20.x86_64 1/1
Creating symlink /var/lib/dkms/u2mfn/3.1.2/source ->
/usr/src/u2mfn-3.1.2
DKMS: add completed.
Verifying : qubes-kernel-vm-support-3.1.2-1.fc20.x86_64 1/1
Installed:
qubes-kernel-vm-support.x86_64 0:3.1.2-1.fc20
Complete!
~~~
Then you can call the `qubes-prepare-vm-kernel` tool to actually package the
kernel. The first parameter is the kernel version (exactly as seen by the kernel);
the second (optional) parameter is a short name to be shown in Qubes Manager and
the `qvm-prefs` tool.
~~~
[user@dom0 ~]$ sudo qubes-prepare-vm-kernel 4.1.9-6.pvops.qubes.x86_64 4.1.qubes
--> Building files for 4.1.9-6.pvops.qubes.x86_64 in /var/lib/qubes/vm-kernels/4.1.qubes
---> Recompiling kernel module (u2mfn)
---> Generating modules.img
mke2fs 1.42.12 (29-Aug-2014)
---> Generating initramfs
--> Done.
~~~
Using a kernel installed in the VM
================================
**This option is available only in Qubes R3.1 or newer**
It is possible to use a kernel installed in the VM (in most cases, in a TemplateVM).
This is made possible by PV GRUB2: GRUB2 running inside the VM. To make it happen, you need to:

1. Install PV GRUB2 in dom0 (the package is named `grub2-xen`).
2. Install a kernel in the VM. As with all VM software installation, this needs to be done in the TemplateVM (or StandaloneVM, if you are using one).
3. Set the VM kernel to the `pvgrub2` value (see the example below). You can use `pvgrub2` in selected VMs only, not necessarily all of them, even when their template has a kernel installed; you can still use a dom0-provided kernel for other VMs.

**WARNING: When using a kernel from within the VM, the `kernelopts` parameter is ignored.**
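For step 3, the kernel can be switched with `qvm-prefs`, mirroring the earlier examples (the VM name is an example only):

~~~
[user@dom0 ~]$ qvm-prefs my-appvm -s kernel pvgrub2
~~~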
### Installing PV GRUB2
Simply execute:
~~~
sudo qubes-dom0-update grub2-xen
~~~
### Installing kernel in Fedora VM
In a Fedora-based VM, you need to install the `qubes-kernel-vm-support` package. This
package includes the additional kernel module and the initramfs addition
required to start a Qubes VM (for details, see the
[template implementation](/doc/template-implementation/)). Additionally, you
need some GRUB tools to create its configuration. Note: you don't need the actual
GRUB bootloader, as it is provided by dom0, but having one installed shouldn't hurt either.
~~~
sudo yum install qubes-kernel-vm-support grub2-tools
~~~
Then install whatever kernel you want. If you are using a distribution kernel
package (the `kernel` package), the initramfs and kernel modules should be handled
automatically. If you are using a manually built kernel, you need to handle this
on your own; take a look at the `dkms` and `dracut` documentation.

Once the kernel is installed, you need to create a GRUB configuration.
You may want to adjust some settings in `/etc/default/grub` first, for example lowering
`GRUB_TIMEOUT` to speed up VM startup. Then generate the actual configuration.

In Fedora this can be done using the `grub2-mkconfig` tool:
~~~
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
~~~
You can safely ignore this error message:
~~~
grub2-probe: error: cannot find a GRUB drive for /dev/mapper/dmroot. Check your device.map
~~~
Then shut down the VM. From now on, you can set `pvgrub2` as the VM's kernel and it will
boot the kernel configured within the VM.
### Installing kernel in Debian VM
In a Debian-based VM, you need to install the `qubes-kernel-vm-support` package. This
package includes the additional kernel module and the initramfs addition
required to start a Qubes VM (for details, see the
[template implementation](/doc/template-implementation/)). Additionally, you
need some GRUB tools to create its configuration. Note: you don't need the actual
GRUB bootloader, as it is provided by dom0, but having one installed shouldn't hurt either.
~~~
sudo apt-get update
sudo apt-get install qubes-kernel-vm-support grub2-common
~~~
Ignore warnings about `version '...' has bad syntax`.
Then install whatever kernel you want. If you are using a distribution kernel
package (the `linux-image-amd64` package), the initramfs and kernel modules should be
handled automatically. If you are using a manually built kernel, you need to
handle this on your own; take a look at the `dkms` and `initramfs-tools` documentation.

Once the kernel is installed, you need to create a GRUB configuration.
You may want to adjust some settings in `/etc/default/grub` first, for example lowering
`GRUB_TIMEOUT` to speed up VM startup. Then generate the actual configuration.

In Debian this can be done using the `update-grub2` tool:
~~~
sudo mkdir /boot/grub
sudo update-grub2
~~~
You can safely ignore this error message:
~~~
grub2-probe: error: cannot find a GRUB drive for /dev/mapper/dmroot. Check your device.map
~~~
Then shut down the VM. From now on, you can set `pvgrub2` as the VM's kernel and it will
boot the kernel configured within the VM.
### Troubleshooting
In case of problems, you can access the VM console (using `sudo xl console VMNAME` in dom0) to reach the
GRUB menu. You need to do this just after starting the VM (before `GRUB_TIMEOUT`
expires), for example from a separate dom0 terminal window.

In any case, you can later inspect the VM logs, especially the VM console log (`guest-VMNAME.log`).

You can always set the kernel back to a dom0-provided value to fix a broken VM kernel installation.

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Mutt
permalink: /en/doc/mutt/
permalink: /doc/mutt/
redirect_from:
- /en/doc/mutt/
- /doc/Mutt/
- /wiki/Mutt/
---
@ -15,8 +16,8 @@ Mutt is a fast, standards-compliant, efficient MUA (Mail User Agent). In some ar
Mutt lacks a true MTA (Message Transfer Agent, aka "SMTP client") and MRA (Mail
Retrieval Agent, aka "IMAP/POP3 client"); only some basic provisions are
built in. In principle it is only a mail reader and composer. You may install a
true MTA such as [Postfix](/en/doc/postfix/) or Exim and MRA such as
[Fetchmail](/en/doc/fetchmail/). Alternatively you can synchronize your mailbox
true MTA such as [Postfix](/doc/postfix/) or Exim and MRA such as
[Fetchmail](/doc/fetchmail/). Alternatively you can synchronize your mailbox
using [OfflineIMAP](https://github.com/OfflineIMAP/offlineimap) and just stick
to the integrated SMTP support. You can even use the integrated IMAP client, but it is
not very convenient.
@ -29,7 +30,7 @@ Installation
Configuration
-------------
Mutt generally works out of the box. This configuration guide discusses only Qubes-specific setup. In this example we will have one TemplateVM and several AppVMs. It also takes advantage of [SplitGPG](/en/doc/split-gpg/), which is assumed to be already working.
Mutt generally works out of the box. This configuration guide discusses only Qubes-specific setup. In this example we will have one TemplateVM and several AppVMs. It also takes advantage of [SplitGPG](/doc/split-gpg/), which is assumed to be already working.
**NOTE:** this requires `qubes-gpg-split >= 2.0.9`. Versions 2.0.8 and earlier contain a bug which causes this setup to hang in certain situations and prevents listing keys.

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Network Bridge Support
permalink: /en/doc/network-bridge-support/
permalink: /doc/network-bridge-support/
redirect_from:
- /en/doc/network-bridge-support/
- /doc/NetworkBridgeSupport/
- /wiki/NetworkBridgeSupport/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Network Printer
permalink: /en/doc/network-printer/
permalink: /doc/network-printer/
redirect_from:
- /en/doc/network-printer/
- /doc/NetworkPrinter/
- /wiki/NetworkPrinter/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Postfix
permalink: /en/doc/postfix/
permalink: /doc/postfix/
redirect_from:
- /en/doc/postfix/
- /doc/Postfix/
- /wiki/Postfix/
---
@ -10,7 +11,7 @@ redirect_from:
Postfix
=======
Postfix is full featured MTA (Message Transfer Agent). Here we will configure it in smarthost mode as part of common [Mutt](/en/doc/mutt/)+Postfix+[Fetchmail](/en/doc/fetchmail/) stack.
Postfix is a full-featured MTA (Message Transfer Agent). Here we will configure it in smarthost mode as part of the common [Mutt](/doc/mutt/)+Postfix+[Fetchmail](/doc/fetchmail/) stack.
Installation
------------

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Resize Disk Image
permalink: /en/doc/resize-disk-image/
permalink: /doc/resize-disk-image/
redirect_from:
- /en/doc/resize-disk-image/
- /doc/ResizeDiskImage/
- /wiki/ResizeDiskImage/
---
@ -17,7 +18,7 @@ There are several disk images which can be easily extended.
1048576 MB is the maximum size which can be assigned to a VM's private storage through qubes-manager.
To grow the private disk image of a AppVM beyond this limit [qubes-grow-private](/en/doc/dom0-tools/qvm-grow-private/) can be used:
To grow the private disk image of an AppVM beyond this limit, [qvm-grow-private](/doc/dom0-tools/qvm-grow-private/) can be used:
~~~
qvm-grow-private <vm-name> <size>
@ -25,12 +26,6 @@ qvm-grow-private <vm-name> <size>
Note: Size is the target size (e.g. 4096MB or 16GB, ...), not the size to add to the existing disk.

Note 2: If, once the VM is started, the disk has not been enlarged, you can issue the following in the VM's terminal:
~~~
sudo resize2fs /dev/xvdb
~~~
### Shrinking private disk image (Linux VM)
**This operation is dangerous, which is why it isn't available in the standard Qubes tools. If you have enough disk space, it is safer to create a new VM with a smaller disk and move the data.**

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Resize Root Disk Image
permalink: /en/doc/resize-root-disk-image/
permalink: /doc/resize-root-disk-image/
redirect_from:
- /en/doc/resize-root-disk-image/
- /doc/ResizeRootDiskImage/
- /wiki/ResizeRootDiskImage/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Rxvt
permalink: /en/doc/rxvt/
permalink: /doc/rxvt/
redirect_from:
- /en/doc/rxvt/
- /doc/Rxvt/
- /wiki/Rxvt/
---

configuration/salt/index.md (new file, 171 lines)
View File

@ -0,0 +1,171 @@
---
layout: doc
title: Management stack
permalink: /doc/salt/
---
# Management infrastructure
Since the Qubes R3.1 release we have included the `salt` (also called SaltStack)
management engine in dom0 by default, with some states already configured. Salt
allows administrators to easily configure their systems. In this guide we will
show how it is set up and how you can modify it for your own purposes.

In its current form the **API is provisional** and subject to change between
*minor* releases.
## Understanding `salt`
This document is not meant to be comprehensive salt documentation; however,
before writing anything, you should have at least *some* understanding of
basic salt-related vocabulary. For more exhaustive documentation, visit the
[official site][salt-doc], though we must warn you that it is not easy to read
if you are just starting to work with salt.
Salt has a client-server architecture, where the server (called the *master*)
manages its clients (called *minions*). In a typical setup, the administrator
interacts only with the master and keeps the configuration there. In Qubes OS,
however, we don't have a master: there is only one minion, which resides in
`dom0` and manages domains from there. This mode is also supported by salt.
Salt is a management engine that enforces a particular state of the system on which a minion runs. A *state* is an end effect *declaratively* expressed by the administrator. This is the most important concept in the whole package. All configuration (i.e. the states) is written in YAML.
A *pillar* is a data backend declared by the administrator. When states become repetitive, instead of pure YAML they can be written with the help of a template engine (preferably jinja2), which can use data structures specified in pillars.
A *formula* is a ready-to-use, packaged solution that combines states and pillars, possibly with some file templates and other auxiliary files. There are many of these made by helpful people all over the Internet.
A *grain* is a piece of data that is also available in templates, but whose value is not directly specified by the administrator. For example, the distribution (like `"Debian"` or `"Gentoo"`) is the value of the grain `"os"`. Grains also contain other info about the kernel, hardware, etc.
A *module* is a Python extension to Salt that is responsible for actually enforcing the state in a particular area. It exposes some *imperative* functions to the administrator. For example, there is a `system` module that has a `system.halt` function which, when issued, will immediately halt the computer. There is another function, `state.highstate`, which will synchronise the state of the system with the administrator's will.
## Salt configuration, Qubes OS layout
All Salt configuration lives in the `/srv/` directory, as usual. The main directory is `/srv/salt/`, where all state files reside. States are contained in `*.sls` files. However, the states that are part of the standard Qubes distribution are mostly templates, and the configuration is done in pillars from formulas.
The formulas are in `/srv/formulas`, including the stock formula for domains in `/srv/formulas/dom0/virtual-machines-formula/qvm`, which is used by firstboot.
Because we use some code that is not found in older versions of salt, there is
a tool called `qubesctl` that should be run instead of `salt-call --local`. It
accepts all arguments of the vanilla tool.
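For example, since `qubesctl` just passes its arguments through to `salt-call --local`, you can call ordinary Salt module functions with it. A quick, read-only check of a grain value might look like this:

~~~
qubesctl grains.item os
~~~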
## Writing your own configuration
Let's start with a quick example:
my new and shiny vm:
qvm.present:
- name: salt-test # can be omitted when same as ID
- template: fedora-21
- label: yellow
- mem: 2000
- vcpus: 4
- flags:
- proxy
It uses the Qubes-specific `qvm.present` state, which ensures that the domain is created. The name will be `salt-test` (and not `my new and shiny vm`); the rest are domain properties, the same as in `qvm-prefs`. The `proxy` flag informs Salt that the domain should be a ProxyVM.
This should be put in `/srv/salt/my-new-vm.sls` or another `.sls` file. A separate `*.top` file should also be written:
base:
dom0:
- my-new-vm
The third line should contain the name of the previous file, without `.sls`. Because we use a custom extension to manage top files (instead of just enabling them all), to enable a particular top file you should issue the command:
qubesctl top.enable my-new-vm
To list all enabled tops:
qubesctl top.enabled
And to disable one:
qubesctl top.disable my-new-vm
To actually apply the state:
qubesctl state.highstate
## All Qubes-specific states
### qvm.present
As in the example above, it creates the domain and sets its properties.
### qvm.prefs
You can set the properties of an existing domain:
my preferences:
qvm.prefs:
- name: salt-test2
- netvm: sys-firewall
Note that `name:` is a matcher, i.e. it says that the domain whose properties will be manipulated is called `salt-test2`. This implies that you currently cannot rename domains this way.
### qvm.service
services in my domain:
qvm.service:
- name: salt-test3
- enable:
- service1
- service2
- disable:
- service3
- service4
- default:
- service5
This enables, disables, or sets to their default state the listed services, just as `qvm-service` does; one quick way to check the result is shown below.
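To verify the effect you can list the VM's services from dom0 with the `qvm-service` tool (a quick check; this assumes the usual `qvm-service` syntax, where invoking it with just a VM name lists the services):

~~~
qvm-service salt-test3
~~~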
### qvm.running
Ensures the domain is running:
domain is running:
qvm.running:
- name: salt-test4
## Further reading
* [Salt documentation][salt-doc]
* [Qubes specific modules][salt-qvm-doc]
* [Formula for default Qubes VMs][salt-virtual-machines-doc] ([and actual states][salt-virtual-machines-states])
[salt-doc]: https://docs.saltstack.com/en/latest/
[salt-qvm-doc]: https://github.com/QubesOS/qubes-mgmt-salt-dom0-qvm/blob/master/README.rst
[salt-virtual-machines-doc]: https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/blob/master/README.rst
[salt-virtual-machines-states]: https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/tree/master/qvm

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Secondary Storage
permalink: /en/doc/secondary-storage/
permalink: /doc/secondary-storage/
redirect_from:
- /en/doc/secondary-storage/
- /doc/SecondaryStorage/
- /wiki/SecondaryStorage/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: ZFS
permalink: /en/doc/zfs/
permalink: /doc/zfs/
redirect_from:
- /en/doc/zfs/
- /doc/ZFS/
- /wiki/ZFS/
---

View File

@ -1,11 +1,12 @@
---
layout: doc
title: DispVM Customization
permalink: /en/doc/dispvm-customization/
permalink: /doc/dispvm-customization/
redirect_from:
- /en/doc/dispvm-customization/
- /doc/DispVMCustomization/
- "/doc/UserDoc/DispVMCustomization/"
- "/wiki/UserDoc/DispVMCustomization/"
- /doc/UserDoc/DispVMCustomization/
- /wiki/UserDoc/DispVMCustomization/
---
Changing the template used as a basis for Disposable VM

View File

@ -1,7 +1,8 @@
---
layout: doc
title: Fedora Minimal Template Customization
permalink: /en/doc/fedora-minimal-template-customization/
permalink: /doc/fedora-minimal-template-customization/
redirect_from: /en/doc/fedora-minimal-template-customization/
---
FEDORA Packages Recommendations

44
customization/kde.md Normal file
View File

@ -0,0 +1,44 @@
---
layout: doc
title: KDE
permalink: /doc/kde/
redirect_from: /en/doc/kde/
---
Using KDE in dom0
=================
Window Management
-----------------
You can set each window's position and size like this:
~~~
Right click title bar --> More actions --> Special window settings...
Window matching tab
Window class (application): Exact Match: <vm_name>
Window title: Substring Match: <partial or full program name>
Size & Position tab
[x] Position: Apply Initially: x,y
[x] Size: Apply Initially: x,y
~~~
You can also use `kstart` to control virtual desktop placement like this:
~~~
kstart --desktop 3 --windowclass <vm_name> -q --tray -a <vm_name> '<run_program_command>'
~~~
(Replace "3" with whichever virtual desktop you want the window to be
on.)
This can be useful for creating a simple shell script which will set up your
workspace the way you like.
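For instance, a hypothetical dom0 script along these lines could arrange two VMs' windows on different desktops (VM names, desktop numbers, and commands are placeholders; programs in the AppVMs are assumed to be started via `qvm-run`):

~~~
#!/bin/sh
# Place windows of the "work" VM on desktop 2 and of the "personal" VM on desktop 3
kstart --desktop 2 --windowclass work -q --tray -a work 'qvm-run work firefox'
kstart --desktop 3 --windowclass personal -q --tray -a personal 'qvm-run personal thunderbird'
~~~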
Mailing List Threads
--------------------
* [Nalu's KDE customization thread](https://groups.google.com/d/topic/qubes-users/KhfzF19NG1s/discussion)

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Language Localization
permalink: /en/doc/language-localization/
permalink: /doc/language-localization/
redirect_from:
- /en/doc/language-localization/
- /doc/LanguageLocalization/
- /wiki/LanguageLocalization/
---

View File

@ -1,7 +1,8 @@
---
layout: doc
title: Windows Template Customization
permalink: /en/doc/windows-template-customization/
permalink: /doc/windows-template-customization/
redirect_from: /en/doc/windows-template-customization/
---
Disable/Uninstall unnecessary features/services

View File

@ -1,8 +1,9 @@
---
layout: doc
title: XFCE
permalink: /en/doc/xfce/
permalink: /doc/xfce/
redirect_from:
- /en/doc/xfce/
- /doc/XFCE/
- "/doc/UserDoc/XFCE/"
- "/wiki/UserDoc/XFCE/"

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Building Archlinux Template
permalink: /en/doc/building-archlinux-template/
permalink: /doc/building-archlinux-template/
redirect_from:
- /en/doc/building-archlinux-template/
- /doc/BuildingArchlinuxTemplate/
- /wiki/BuildingArchlinuxTemplate/
---
@ -15,7 +16,7 @@ The archlinux VM is now almost working as a NetVM. Based on qubes-builder code,
Download qubes-builder git code
-------------------------------
Prefer marmarek git repository as it is the most recent one
Prefer the [marmarek git repository](https://github.com/marmarek/qubes-builder-archlinux) as it is the most recent one.
Change your builder.conf
------------------------

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Building Non-Fedora Template
permalink: /en/doc/building-non-fedora-template/
permalink: /doc/building-non-fedora-template/
redirect_from:
- /en/doc/building-non-fedora-template/
- /doc/BuildingNonFedoraTemplate/
- /wiki/BuildingNonFedoraTemplate/
---
@ -138,7 +139,7 @@ As soon as you manage to make qrexec and qubes-gui-agent working, it should be s
Several XEN libraries are required for Qubes to work correctly. In fact, you need to make the xenstore commands work before anything else. For this, the Qubes git can be used, as several patches have been selected by the Qubes developers that could impact the activity inside a VM. Start by retrieving a recent git and identify how you can build a package from it: `git clone git://git.qubes-os.org/marmarek/xen`
Find the .spec file in the git repository (this is the file being used to build rpm packages), and try to adapt it to your OS in order to build a package similar to the target 'xen-vm'. For example, a PKGBUILD has been created for [ArchLinux](/en/doc/templates/archlinux/) and can be found on [http://aur.archlinux.org/packages/qu/qubes-vm-xen/PKGBUILD](http://aur.archlinux.org/packages/qu/qubes-vm-xen/PKGBUILD).
Find the .spec file in the git repository (this is the file being used to build rpm packages), and try to adapt it to your OS in order to build a package similar to the target 'xen-vm'. For example, a PKGBUILD has been created for [ArchLinux](/doc/templates/archlinux/) and can be found on [http://aur.archlinux.org/packages/qu/qubes-vm-xen/PKGBUILD](http://aur.archlinux.org/packages/qu/qubes-vm-xen/PKGBUILD).
Don't be afraid of the complexity of the PKGBUILD; most of the code is almost a copy/paste of the required sources and patches found in the .spec file provided in the git repository.

View File

@ -0,0 +1,546 @@
---
layout: doc
title: Development Workflow
permalink: /doc/development-workflow/
redirect_from:
- /en/doc/development-workflow/
- /doc/DevelopmentWorkflow/
- /wiki/DevelopmentWorkflow/
---
Development Workflow
====================
A workflow for developing Qubes OS.
First things first, set up [QubesBuilder](/doc/qubes-builder/). This guide assumes you're using qubes-builder to build Qubes.
Repositories and committing Code
--------------------------------
Qubes is split into a bunch of git repos. These are all contained in the `qubes-src` directory under qubes-builder. Each subdirectory there is a separate component, stored in its own git repository.
The best way to write and contribute code is to create a git repo somewhere (e.g., GitHub) for the component you are interested in editing (e.g., `qubes-manager`, `core-agent-linux`, etc.). To integrate your repo with the rest of Qubes, cd to the component directory and add your repository as a remote in git.
**Example:**
~~~
$ cd qubes-builder/qubes-src/qubes-manager
$ git remote add abel git@github.com:abeluck/qubes-manager.git
~~~
You can then proceed to easily develop in your own branches, pull in new
commits from the dev branches, merge them, and eventually push to your own repo
on github.
When you are ready to submit your changes to Qubes to be merged, push your changes, then create a signed git tag (using `git tag -s`). Finally, send a message to the Qubes mailing list describing the changes and including a link to your repository. You can also create a pull request on GitHub. Don't forget to include the public PGP key you use to sign your tags.
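A minimal sketch of that flow, reusing the `abel` remote from the example above (branch and tag names are placeholders):

~~~
git push abel my-feature                    # publish your branch
git tag -s my-feature-v1 -m "my feature"    # sign the tag with your PGP key
git push abel my-feature-v1                 # publish the signed tag
~~~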
### Kernel-specific notes
#### Prepare fresh version of kernel sources, with Qubes-specific patches applied
In qubes-builder/qubes-src/linux-kernel:
~~~
make prep
~~~
The resulting tree will be in kernel-\<VERSION\>/linux-\<VERSION\>:
~~~
ls -ltrd kernel*/linux*
~~~
~~~
drwxr-xr-x 23 user user 4096 Nov 5 09:50 kernel-3.4.18/linux-3.4.18
drwxr-xr-x 6 user user 4096 Nov 21 20:48 kernel-3.4.18/linux-obj
~~~
#### Go to the kernel tree and update the version
In qubes-builder/qubes-src/linux-kernel:
~~~
cd kernel-3.4.18/linux-3.4.18
~~~
#### Changing the config
In kernel-3.4.18/linux-3.4.18:
~~~
cp ../../config .config
make oldconfig
~~~
Now change the configuration. For example, in kernel-3.4.18/linux-3.4.18:
~~~
make menuconfig
~~~
Copy the modified config back into the kernel tree:
~~~
cp .config ../../../config
~~~
#### Patching the code
TODO: describe the workflow for patching the code, below are some random notes, not working well
~~~
ln -s ../../patches.xen
export QUILT_PATCHES=patches.xen
export QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
export QUILT_SERIES=../../series-pvops.conf
quilt new patches.xen/pvops-3.4-0101-usb-xen-pvusb-driver-bugfix.patch
quilt add drivers/usb/host/Kconfig drivers/usb/host/Makefile \
drivers/usb/host/xen-usbback/* drivers/usb/host/xen-usbfront.c \
include/xen/interface/io/usbif.h
*edit something*
quilt refresh
cd ../..
vi series.conf
~~~
#### Building RPMS
TODO: Is this step generic for all subsystems?
Now is a good moment to make sure you have changed the kernel release name in the `rel` file. For example, if you change it to '1debug20121116c', the resulting RPMs will be named 'kernel-3.4.18-1debug20121116c.pvops.qubes.x86\_64.rpm'. This will help distinguish between different versions of the same package.
You might want to take a moment here to review your changes (git diff, git status) and commit them locally.
To actually build RPMS, in qubes-builder:
~~~
make linux-kernel
~~~
RPMS will appear in qubes-src/linux-kernel/pkgs/fc20/x86\_64:
~~~
-rw-rw-r-- 1 user user 42996126 Nov 17 04:08 kernel-3.4.18-1debug20121116c.pvops.qubes.x86_64.rpm
-rw-rw-r-- 1 user user 43001450 Nov 17 05:36 kernel-3.4.18-1debug20121117a.pvops.qubes.x86_64.rpm
-rw-rw-r-- 1 user user 8940138 Nov 17 04:08 kernel-devel-3.4.18-1debug20121116c.pvops.qubes.x86_64.rpm
-rw-rw-r-- 1 user user 8937818 Nov 17 05:36 kernel-devel-3.4.18-1debug20121117a.pvops.qubes.x86_64.rpm
-rw-rw-r-- 1 user user 54490741 Nov 17 04:08 kernel-qubes-vm-3.4.18-1debug20121116c.pvops.qubes.x86_64.rpm
-rw-rw-r-- 1 user user 54502117 Nov 17 05:37 kernel-qubes-vm-3.4.18-1debug20121117a.pvops.qubes.x86_64.rpm
~~~
### Useful [QubesBuilder](/doc/qubes-builder/) commands
1. `make check` - will check whether all the code has been committed into the repositories and whether all repositories are tagged with a signed tag.
2. `make show-vtags` - show the version of each component (based on git tags) - mostly useful just before building an ISO. **Note:** this will not show the version for components containing changes since the last version tag.
3. `make push` - push changes from **all** repositories to the git server. You must set proper remotes (see above) for all repositories first.
4. `make prepare-merge` - fetch changes from remote repositories (which can be specified on the command line via the GIT\_SUBDIR or GIT\_REMOTE variables), (optionally) verify tags, and show the changes. This does not merge the changes - they are left for review as the FETCH\_HEAD ref. You can merge them using `git merge FETCH_HEAD` (in each repo directory), or `make do-merge` to merge all of them; see the example below.
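For example, reviewing and then merging the fetched changes for a single component could look like this (the component name is just an example):

~~~
cd qubes-src/core-agent-linux
git log -p HEAD..FETCH_HEAD   # review the incoming commits
git merge FETCH_HEAD
~~~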
Copying Code to dom0
--------------------
When developing it is convenient to be able to rapidly test changes. Assuming
you're developing Qubes on Qubes, you should be working in a special VM for
Qubes and occasionally you will want to transfer code or rpms back to dom0 for
testing.
Here are some handy scripts Marek has shared to facilitate this.
You may also like to run your [test environment on a separate machine](/doc/test-bench/).
### Syncing dom0 files
TODO: edit this script to be more generic
~~~
#!/bin/sh
set -x
set -e
QUBES_PY_DIR=/usr/lib64/python2.6/site-packages/qubes
QUBES_PY=$QUBES_PY_DIR/qubes.py
QUBESUTILS_PY=$QUBES_PY_DIR/qubesutils.py
qvm-run -p qubes-devel 'cd qubes-builder/qubes-src/core/dom0; tar c qmemman/qmemman*.py qvm-core/*.py qvm-tools/* misc/vm-template-hvm.conf misc/qubes-start.desktop ../misc/block-snapshot aux-tools ../qrexec' |tar xv
cp $QUBES_PY qubes.py.bak$$
cp $QUBESUTILS_PY qubesutils.py.bak$$
cp /etc/xen/scripts/block-snapshot block-snapshot.bak$$
sudo cp qvm-core/qubes.py $QUBES_PY
sudo cp qvm-core/qubesutils.py $QUBESUTILS_PY
sudo cp qvm-core/guihelpers.py $QUBES_PY_DIR/
sudo cp qmemman/qmemman*.py $QUBES_PY_DIR/
sudo cp misc/vm-template-hvm.conf /usr/share/qubes/
sudo cp misc/qubes-start.desktop /usr/share/qubes/
sudo cp misc/block-snapshot /etc/xen/scripts/
sudo cp aux-tools/qubes-dom0-updates.cron /etc/cron.daily/
# FIXME(Abel Luck): I hope to
~~~
### Apply qvm-tools
TODO: make it more generic
~~~
#!/bin/sh
BAK=qvm-tools.bak$$
mkdir -p $BAK
cp -a /usr/bin/qvm-* /usr/bin/qubes-* $BAK/
sudo cp qvm-tools/qvm-* qvm-tools/qubes-* /usr/bin/
~~~
### Copy from dom0 to an appvm
~~~
#/bin/sh
#
# usage ./cp-domain <vm_name> <file_to_copy>
#
domain=$1
file=$2
fname=`basename $file`
qvm-run $domain 'mkdir /home/user/incoming/dom0 -p'
cat $file| qvm-run --pass-io $domain "cat > /home/user/incoming/dom0/$fname"
~~~
## Git connection between VMs
Sometimes it's useful to transfer git commits between VMs. You can use `git format-patch` for that and simply copy the files (see the sketch below). But you can also set up a custom qrexec service for it.
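A minimal sketch of the `git format-patch` variant (the target VM name is a placeholder):

~~~
git format-patch -3 -o /tmp/patches            # export the last three commits as patches
qvm-copy-to-vm target-vm /tmp/patches/*.patch
# then, in the target VM: git am ~/QubesIncoming/<source-vm>/*.patch
~~~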
The example below assumes that you use a `builder-RX` directory in the target VM to store sources in the qubes-builder layout (where `X` is some number). Make sure that all the scripts are executable.
Service file (save in `/usr/local/etc/qubes-rpc/local.Git` in target VM):
~~~
#!/bin/sh
exec 2>/tmp/log2
read service rel repo
echo "Params: $service $rel $repo" >&2
# Adjust regexps if needed
echo "$repo" | grep -q '^[A-Za-z0-9-]\+$' || exit 1
echo "$rel" | grep -q '^[0-9.]\+$' || exit 1
path="/home/user/builder-R$rel/qubes-src/$repo"
if [ "$repo" = "builder" ]; then
path="/home/user/builder-R$rel"
fi
case $service in
git-receive-pack|git-upload-pack)
echo "starting $service $path" >&2
exec $service $path
;;
*)
echo "Unsupported service: $service" >&2
;;
esac
~~~
Client script (save in `~/bin/git-qrexec` in source VM):
~~~
#!/bin/sh
VMNAME=$1
(echo $GIT_EXT_SERVICE $2 $3; exec cat) | qrexec-client-vm $VMNAME local.Git
~~~
You will also need to set up a qrexec policy in dom0 (`/etc/qubes-rpc/policy/local.Git`).
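A permissive example policy (VM names are placeholders; you may prefer `ask` instead of `allow`):

~~~
source-dev-vm testbuilder allow
$anyvm $anyvm deny
~~~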
Usage:
~~~
[user@source core-agent-linux]$ git remote add testbuilder "ext::git-qrexec testbuilder 3 core-agent-linux"
[user@source core-agent-linux]$ git push testbuilder master
~~~
You can create a `~/bin/add-remote` script to ease adding remotes:
~~~
#!/bin/sh
[ -n "$1" ] || exit 1
if [ "$1" = "tb" ]; then
git remote add $1 "ext::git-qrexec testbuilder 3 `basename $PWD`"
exit $?
fi
git remote add $1 git@github.com:$1/qubes-`basename $PWD`
~~~
It should be executed from the component's top-level directory. This script takes one argument - the remote name. If it is `tb`, it creates a qrexec-based git remote pointing to the `testbuilder` VM. Otherwise it creates a remote pointing at the GitHub account of the same name. In either case it points at the repository matching the current directory name.
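Assuming `~/bin` is on your `PATH`, usage then looks like this:

~~~
[user@source core-agent-linux]$ add-remote tb
[user@source core-agent-linux]$ git push tb master
~~~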
## Sending packages to a different VM
Other useful scripts can be used to set up a local package repository hosted in some VM. This way you can keep your development VM behind a firewall, while having the option to expose a yum/apt repository to the local network (to have the packages installed on a test machine).
To achieve this goal, a dummy repository can be created which, instead of populating metadata locally, will upload the packages to some other VM and trigger a repository update there (using qrexec). You can use the `unstable` repository flavor, because it has no release-management rules bundled (unlike current and current-testing).
### RPM packages - yum repo
In the source VM, grab the [linux-yum] repository (below it is assumed you've cloned it into the `~/repo-yum-upload` directory) and replace the `update_repo.sh` script with:
~~~
#!/bin/sh
VMNAME=repo-vm
set -e
qvm-copy-to-vm $VMNAME $1
# remove only files, leave directory structure
find -type f -name '*.rpm' -delete
# trigger repo update
qrexec-client-vm $VMNAME local.UpdateYum
~~~
In the target VM, set up the actual yum repository (also based on [linux-yum], this time without modifications). You will also need to set up a GPG key for signing packages (it is possible to force yum to install unsigned packages, but that isn't possible for the `qubes-dom0-update` tool). Fill `~/.rpmmacros` with the key description:
~~~
%_gpg_name Test packages signing key
~~~
Then set up the `local.UpdateYum` qrexec service (`/usr/local/etc/qubes-rpc/local.UpdateYum`):
~~~
#!/bin/sh
if [ -z "$QREXEC_REMOTE_DOMAIN" ]; then
exit 1
fi
real_repository=/home/user/linux-yum
incoming=/home/user/QubesIncoming/$QREXEC_REMOTE_DOMAIN
find $incoming -name '*.rpm' |xargs rpm -K |grep -iv pgp |cut -f1 -d: |xargs -r setsid -w rpm --addsign 2>&1
rsync -lr --remove-source-files $incoming/ $real_repository
cd $real_repository
export SKIP_REPO_CHECK=1
if [ -d $incoming/r3.1 ]; then
./update_repo-unstable.sh r3.1
fi
if [ -d $incoming/r3.0 ]; then
./update_repo-unstable.sh r3.0
fi
if [ -d $incoming/r2 ]; then
./update_repo-unstable.sh r2
fi
find $incoming -type d -empty -delete
exit 0
~~~
Of course you will also need to set up a qrexec policy in dom0 (`/etc/qubes-rpc/policy/local.UpdateYum`), analogous to the one for `local.Git` above.
If you want to access the repository from the network, you need to set up an HTTP server serving it, and configure the system to let other machines actually reach this HTTP server. You can do this, for example, using [port forwarding][port-forwarding] or by setting up a Tor hidden service. Configuration details of those services are outside the scope of this page.
Usage: set up `builder.conf` in the source VM to use your dummy-uploader repository:
~~~
LINUX_REPO_BASEDIR = ../../repo-yum-upload/r3.1
~~~
Then use `make update-repo-unstable` to upload the packages. You can also specify selected components on the command line, then build them and upload them to the repository:
~~~
make COMPONENTS="core-agent-linux gui-agent-linux linux-utils" qubes update-repo-unstable
~~~
On the test machine, add a yum repository (`/etc/yum.repos.d`) pointing at the HTTP server you just configured. For example:
~~~
[local-test]
name=Test
baseurl=http://local-test.lan/linux-yum/r$releasever/unstable/dom0/fc20
~~~
Remember to also import the GPG public key using `rpm --import`.
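For example (the key file path is just a placeholder for wherever you exported the public part of your signing key):

~~~
sudo rpm --import /path/to/test-packages-signing-key.asc
~~~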
### Deb packages - Apt repo
The steps are mostly the same as in the case of the yum repo. Only the details differ:
- use [linux-deb] instead of [linux-yum] as a base - both in the source and the target VM
- use a different `update_repo.sh` script in the source VM (below)
- use the `local.UpdateApt` qrexec service in the target VM (code below)
- in the target VM, additionally place the `update-local-repo.sh` script in the repository dir (code below)
`update_repo.sh` script:
~~~
#!/bin/sh
set -e
current_release=$1
VMNAME=repo-vm
qvm-copy-to-vm $VMNAME $1
find $current_release -type f -name '*.deb' -delete
rm -f $current_release/vm/db/*
qrexec-client-vm $VMNAME local.UpdateApt
~~~
`local.UpdateApt` service code (`/usr/local/etc/qubes-rpc/local.UpdateApt` in repo-serving VM):
~~~
#!/bin/sh
if [ -z "$QREXEC_REMOTE_DOMAIN" ]; then
exit 1
fi
incoming=/home/user/QubesIncoming/$QREXEC_REMOTE_DOMAIN
rsync -lr --remove-source-files $incoming/ /home/user/linux-deb/
cd /home/user/linux-deb
export SKIP_REPO_CHECK=1
if [ -d $incoming/r3.1 ]; then
for dist in `ls r3.1/vm/dists`; do
./update-local-repo.sh r3.1/vm $dist
done
fi
if [ -d $incoming/r3.0 ]; then
for dist in `ls r3.0/vm/dists`; do
./update-local-repo.sh r3.0/vm $dist
done
fi
if [ -d $incoming/r2 ]; then
for dist in `ls r2/vm/dists`; do
./update-local-repo.sh r2/vm $dist
done
fi
find $incoming -type d -empty -delete
exit 0
~~~
`update-local-repo.sh`:
~~~
#!/bin/sh
set -e
# Set this to your local repository signing key
SIGN_KEY=01ABCDEF
[ -z "$1" ] && { echo "Usage: $0 <repo> <dist>"; exit 1; }
REPO_DIR=$1
DIST=$2
if [ "$DIST" = "wheezy-unstable" ]; then
DIST_TAG=deb7
elif [ "$DIST" = "jessie-unstable" ]; then
DIST_TAG=deb8
elif [ "$DIST" = "stretch-unstable" ]; then
DIST_TAG=deb9
fi
pushd $REPO_DIR
mkdir -p dists/$DIST/main/binary-amd64
dpkg-scanpackages --multiversion --arch "*$DIST_TAG*" . > dists/$DIST/main/binary-amd64/Packages
gzip -9c dists/$DIST/main/binary-amd64/Packages > dists/$DIST/main/binary-amd64/Packages.gz
cat > dists/$DIST/Release <<EOF
Label: Test repo
Suite: $DIST
Codename: $DIST
Date: `date -R`
Architectures: amd64
Components: main
SHA1:
EOF
function calc_sha1() {
f=dists/$DIST/$1
echo -n " "
echo -n `sha1sum $f|cut -d' ' -f 1` ""
echo -n `stat -c %s $f` ""
echo $1
}
calc_sha1 main/binary-amd64/Packages >> dists/$DIST/Release
rm -f $DIST/Release.gpg
rm -f $DIST/InRelease
gpg -abs -u "$SIGN_KEY" \
< dists/$DIST/Release > dists/$DIST/Release.gpg
gpg -a -s --clearsign -u "$SIGN_KEY" \
< dists/$DIST/Release > dists/$DIST/InRelease
popd
if [ `id -u` -eq 0 ]; then
chown -R --reference=$REPO_DIR $REPO_DIR
fi
~~~
Usage: add this line to `/etc/apt/sources.list` on the test machine (adjust host and path):
~~~
deb http://local-test.lan/linux-deb/r3.1 jessie-unstable main
~~~
[port-forwarding]: /doc/qubes-firewall/#tocAnchor-1-1-5
[linux-yum]: https://github.com/QubesOS/qubes-linux-yum
[linux-deb]: https://github.com/QubesOS/qubes-linux-deb

View File

@ -1,8 +1,9 @@
---
layout: doc
title: KDE dom0
permalink: /en/doc/kde-dom0/
permalink: /doc/kde-dom0/
redirect_from:
- /en/doc/kde-dom0/
- /doc/KdeDom0/
- /wiki/KdeDom0/
---
@ -10,7 +11,7 @@ redirect_from:
Qubes-customized KDE packages for Dom0
======================================
The Qubes kde-dom0 project (see [Source Code](/en/doc/source-code/)) contains the source code needed for building the customized KDE packages for use in Qubes Dom0 (the user desktop). The packages are based on Fedora 12 KDE packages, but are heavily slimmed down (Qubes doesn't need lots of KDE functionality in Dom0, such as most of the KDE apps). In the near future those KDE packages will also get some Qubes specific extensions, such as coloured titlebars/frames nicely integrated into the KDE Window Manager. And, of course, custom themes, e.g. for KDM :)
The Qubes kde-dom0 project (see [Source Code](/doc/source-code/)) contains the source code needed for building the customized KDE packages for use in Qubes Dom0 (the user desktop). The packages are based on Fedora 12 KDE packages, but are heavily slimmed down (Qubes doesn't need lots of KDE functionality in Dom0, such as most of the KDE apps). In the near future those KDE packages will also get some Qubes specific extensions, such as coloured titlebars/frames nicely integrated into the KDE Window Manager. And, of course, custom themes, e.g. for KDM :)
Getting the sources
-------------------

View File

@ -1,24 +1,25 @@
---
layout: doc
title: Qubes Builder Details
permalink: /en/doc/qubes-builder-details/
permalink: /doc/qubes-builder-details/
redirect_from:
- /en/doc/qubes-builder-details/
- /doc/QubesBuilderDetails/
- /wiki/QubesBuilderDetails/
---
[QubesBuilder](/en/doc/qubes-builder/) "API"
[QubesBuilder](/doc/qubes-builder/) "API"
========================================
Components Makefile.builder file
--------------------------------
[QubesBuilder](/en/doc/qubes-builder/) expects that each component have *Makefile.builder* file in its root directory. This file specifies what should be done to build the package. As name suggests, this is normal makefile, which is included by builder as its configuration. Its main purpose is to set some variables. Generally all available variables/settings are described as comments at the beginning of Makefile.\* in [QubesBuilder](/en/doc/qubes-builder/).
[QubesBuilder](/doc/qubes-builder/) expects each component to have a *Makefile.builder* file in its root directory. This file specifies what should be done to build the package. As the name suggests, it is a normal makefile, which is included by the builder as its configuration. Its main purpose is to set some variables. Generally, all available variables/settings are described as comments at the beginning of the Makefile.\* files in [QubesBuilder](/doc/qubes-builder/).
Variables for Linux build:
- `RPM_SPEC_FILES` List (space separated) of spec files for RPM package build. Path should be relative to component root directory. [QubesBuilder](/en/doc/qubes-builder/) will install all BuildRequires (in chroot environment) before the build. In most Qubes components all spec files are kept in *rpm\_spec* directory. This is mainly used for Fedora packages build.
- `ARCH_BUILD_DIRS` List (space separated) of directories with PKGBUILD files for Archlinux package build. Similar to RPM build, [QubesBuilder](/en/doc/qubes-builder/) will install all makedepends, then build the package.
- `RPM_SPEC_FILES` List (space separated) of spec files for the RPM package build. Paths should be relative to the component root directory. [QubesBuilder](/doc/qubes-builder/) will install all BuildRequires (in a chroot environment) before the build. In most Qubes components all spec files are kept in the *rpm\_spec* directory. This is mainly used for building Fedora packages.
- `ARCH_BUILD_DIRS` List (space separated) of directories with PKGBUILD files for the Archlinux package build. Similarly to the RPM build, [QubesBuilder](/doc/qubes-builder/) will install all makedepends, then build the package.
Most components use the *archlinux* directory for this purpose, so it's good to keep this convention (a minimal example is sketched below).
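As an illustration only, a minimal Makefile.builder for a hypothetical Linux-only component could look like this (the spec file name is a placeholder):

~~~
# Makefile.builder - included by qubes-builder as this component's build configuration
RPM_SPEC_FILES := rpm_spec/my-component.spec
ARCH_BUILD_DIRS := archlinux
~~~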
@ -37,8 +38,8 @@ Variables for Windows build:
- `SIGNTOOL` Path to signtool
- `WIN_PACKAGE_CMD` Command used to produce installation package (msi or msm). Default value is *wix.bat*, similar to above - use *true* if you don't want this command.
- `WIN_OUTPUT_HEADERS` Directory (relative to `WIN_SOURCE_SUBDIRS` element) with public headers of the package - for use in other components.
- `WIN_OUTPUT_LIBS` Directory (relative to `WIN_SOURCE_SUBDIRS` element) with libraries (both DLL and implib) of the package - for use in other components. Note that [QubesBuilder](/en/doc/qubes-builder/) will copy files specified as *\$(WIN\_OUTPUT\_LIBS)/\*/\** to match WDK directory layout (*\<specified directory\>/\<arch directory\>/\<actual libraries\>*), so you in mingw build you need to place libraries in some additional subdirectory.
- `WIN_BUILD_DEPS` List of components required to build this one. [QubesBuilder](/en/doc/qubes-builder/) will copy files specified with `WIN_OUTPUT_HEADERS` and `WIN_OUTPUT_LIBS` of those components to some directory and provide its path with `QUBES_INCLUDES` and `QUBES_LIBS` variables. Use those variables in your build scripts (*sources* or *Makefile* - depending on selected compiler). You can assume that the variables are always set and directories always exists, even if empty.
- `WIN_OUTPUT_LIBS` Directory (relative to `WIN_SOURCE_SUBDIRS` element) with libraries (both DLL and implib) of the package - for use in other components. Note that [QubesBuilder](/doc/qubes-builder/) will copy files specified as *\$(WIN\_OUTPUT\_LIBS)/\*/\** to match the WDK directory layout (*\<specified directory\>/\<arch directory\>/\<actual libraries\>*), so in a mingw build you need to place libraries in some additional subdirectory.
- `WIN_BUILD_DEPS` List of components required to build this one. [QubesBuilder](/doc/qubes-builder/) will copy files specified with `WIN_OUTPUT_HEADERS` and `WIN_OUTPUT_LIBS` of those components to some directory and provide its path with the `QUBES_INCLUDES` and `QUBES_LIBS` variables. Use those variables in your build scripts (*sources* or *Makefile* - depending on the selected compiler). You can assume that the variables are always set and the directories always exist, even if empty.
builder.conf settings
---------------------

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Qubes Builder
permalink: /en/doc/qubes-builder/
permalink: /doc/qubes-builder/
redirect_from:
- /en/doc/qubes-builder/
- /doc/QubesBuilder/
- /wiki/QubesBuilder/
---
@ -89,9 +90,9 @@ You can also build selected component separately. Eg. to compile only gui virtua
make gui-daemon
You can get the full list from `make help`. For advanced use and preparing sources
for use with [QubesBuilder](/en/doc/qubes-builder/) take a look at [doc directory
for use with [QubesBuilder](/doc/qubes-builder/) take a look at [doc directory
in QubesBuilder](https://github.com/marmarek/qubes-builder/tree/master/doc) or
[QubesBuilderDetails](/en/doc/qubes-builder-details/) page.
[QubesBuilderDetails](/doc/qubes-builder-details/) page.
Making customized build
-----------------------
@ -127,7 +128,7 @@ If you want to somehow modify sources, you can also do it, here are some basic s
Code verification keys management
=================================
[QubesBuilder](/en/doc/qubes-builder/) by default verifies signed tags on every downloaded code. Public keys used for that are stored in `keyrings/git`. By default Qubes developers' keys are imported automatically, but if you need some additional keys (for example your own), you can add them using:
[QubesBuilder](/doc/qubes-builder/) by default verifies signed tags on every downloaded code. Public keys used for that are stored in `keyrings/git`. By default Qubes developers' keys are imported automatically, but if you need some additional keys (for example your own), you can add them using:
GNUPGHOME=$PWD/keyrings/git gpg --import /path/to/key.asc
GNUPGHOME=$PWD/keyrings/git gpg --edit-key ID_OF_JUST_IMPORTED_KEY

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Qubes R3 Building
permalink: /en/doc/qubes-r3-building/
permalink: /doc/qubes-r3-building/
redirect_from:
- /en/doc/qubes-r3-building/
- /doc/QubesR3Building/
- /wiki/QubesR3Building/
---
@ -10,7 +11,7 @@ redirect_from:
Building Qubes OS 3.0 ISO
=========================
Ensure your system is rpm-based and that you have necessary dependencies installed (see [QubesBuilder](/en/doc/qubes-builder/) for more info):
Ensure your system is rpm-based and that you have necessary dependencies installed (see [QubesBuilder](/doc/qubes-builder/) for more info):
~~~
sudo yum install git createrepo rpm-build make wget rpmdevtools pandoc

View File

@ -1,8 +1,9 @@
---
layout: doc
title: USBVM
permalink: /en/doc/usbvm/
permalink: /doc/usbvm/
redirect_from:
- /en/doc/usbvm/
- /doc/USBVM/
- /wiki/USBVM/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Coding Style
permalink: /en/doc/coding-style/
permalink: /doc/coding-style/
redirect_from:
- /en/doc/coding-style/
- /doc/CodingStyle/
- /wiki/CodingStyle/
- /trac/wiki/CodingStyle/

View File

@ -1,11 +1,11 @@
---
layout: doc
title: Contributing
permalink: /en/doc/contributing/
permalink: /doc/contributing/
redirect_from:
- /doc/contributing/
- "/doc/ContributingHowto/"
- "/wiki/ContributingHowto/"
- /en/doc/contributing/
- /doc/ContributingHowto/
- /wiki/ContributingHowto/
---
How can I contribute to the Qubes Project?
@ -15,16 +15,16 @@ Ok, so you think Qubes Project is cool and you would like to contribute? You are
First you should decide what you are interested in (and good at). The Qubes project would welcome contributions in various areas:
- Testing and [bug reporting](/en/doc/reporting-bugs/)
- Testing and [bug reporting](/doc/reporting-bugs/)
- Code audit (e.g. gui-daemon)
- New features
- Artwork (plymouth themes, KDM themes, installer themes, wallpapers, etc)
Perhaps the best starting point is to have a look at the [issues](https://github.com/QubesOS/qubes-issues/issues) to see what are the most urgent tasks to do.
Before you engage in some longer activity, e.g. implementing a new feature, it's always good to contact us first (preferably via the [qubes-devel](/en/doc/qubes-lists/) list), to avoid a situation when two or more independent people would work on the same feature at the same time, doubling each others work. When you contact us and devote to a particular task, we will create a ticket for this task with info who is working on this feature and what is the expected date of some early code to be posted.
Before you engage in some longer activity, e.g. implementing a new feature, it's always good to contact us first (preferably via the [qubes-devel](/doc/mailing-lists/) list), to avoid a situation where two or more people work on the same feature at the same time, duplicating each other's work. When you contact us and commit to a particular task, we will create a ticket for it with info about who is working on the feature and the expected date for some early code to be posted.
When you are ready to start some work, read how to [access Qubes sources and send patches](/en/doc/source-code/).
When you are ready to start some work, read how to [access Qubes sources and send patches](/doc/source-code/).
You can also contribute in areas other than coding and testing, e.g. by providing mirrors for Qubes rpm repositories, providing feedback about what features you would like to see in Qubes, or perhaps even preparing some cool YouTube videos that demonstrate Qubes features. You are always encouraged to discuss your ideas on qubes-devel.

View File

@ -1,8 +1,10 @@
---
layout: doc
title: Automated Tests
permalink: /en/doc/automated-tests/
redirect_from: /doc/AutomatedTests/
permalink: /doc/automated-tests/
redirect_from:
- /en/doc/automated-tests/
- /doc/AutomatedTests/
---
Automatic tests
@ -104,7 +106,7 @@ Example test run:
After you add a new unit test to [core-admin/tests](https://github.com/QubesOS/qubes-core-admin/tree/master/tests), you have to make sure of two things:
1. The test will be added to the RPM file created by [QubesBuilder](/en/doc/qubes-builder/)
1. The test will be added to the RPM file created by [QubesBuilder](/doc/qubes-builder/)
For this you need to edit [core-admin/tests/Makefile](https://github.com/QubesOS/qubes-core-admin/tree/master/tests/Makefile)
2. The test will be loaded by [core-admin/tests/\_\_init\_\_.py](https://github.com/QubesOS/qubes-core-admin/tree/master/tests/__init__.py)

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Profiling
permalink: /en/doc/profiling/
permalink: /doc/profiling/
redirect_from:
- /en/doc/profiling/
- /doc/Profiling/
- /wiki/Profiling/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Test Bench
permalink: /en/doc/test-bench/
permalink: /doc/test-bench/
redirect_from:
- /en/doc/test-bench/
- /doc/TestBench/
- /wiki/TestBench/
---
@ -12,7 +13,7 @@ Test bench for Dom0
This guide shows how to set up a simple test bench that automatically tests the code you're about to push. It is written especially for the `core3` branch of the `core-admin.git` repo, but some of the ideas are universal.
We will set up a spare machine (bare metal, not a virtual) that will be hosting our experimental Dom0. We will communicate with it via Ethernet and SSH. This tutorial assumes you are familiar with [QubesBuilder](/en/doc/qubes-builder/) and you have it set up and running flawlessly.
We will set up a spare machine (bare metal, not a virtual one) that will host our experimental Dom0. We will communicate with it via Ethernet and SSH. This tutorial assumes you are familiar with [QubesBuilder](/doc/qubes-builder/) and have it set up and running flawlessly.
Setting up the machine
----------------------

View File

@ -1,11 +1,12 @@
---
layout: doc
title: VM Configuration Interface
permalink: /en/doc/vm-interface/
permalink: /doc/vm-interface/
redirect_from:
- /en/doc/vm-interface/
- /doc/VMInterface/
- "/doc/SystemDoc/VMInterface/"
- "/wiki/SystemDoc/VMInterface/"
- /doc/SystemDoc/VMInterface/
- /wiki/SystemDoc/VMInterface/
---
VM Configuration Interface
@ -83,16 +84,16 @@ Other Qrexec services installed by default:
- `qubes.DetachPciDevice` - service called in reaction to `qvm-pci -d` call on
running VM. The service receives one word - BDF of device to detach. When the
service call ends, the device will be detached
- `qubes.Filecopy` - receive some files from other VM. Files sent in [qfile format](/en/doc/qfilecopy/)
- `qubes.Filecopy` - receive some files from other VM. Files sent in [qfile format](/doc/qfilecopy/)
- `qubes.OpenInVM` - open a file in the called VM. The service receives a single file on stdin (in
[qfile format](/en/doc/qfilecopy/). After a file viewer/editor is terminated, if
[qfile format](/doc/qfilecopy/)). After a file viewer/editor is terminated, if
the file was modified, can be sent back (just raw content, without any
headers); otherwise service should just terminate without sending anything.
This service is used by both `qvm-open-in-vm` and `qvm-open-in-dvm` tools. When
called in DispVM, service termination will trigger DispVM cleanup.
- `qubes.Restore` - retrieve Qubes backup. The service receives backup location
entered by the user (one line, terminated by '\n'), then should output backup
archive in [qfile format](/en/doc/qfilecopy/) (core-agent-linux component contains
archive in [qfile format](/doc/qfilecopy/) (core-agent-linux component contains
`tar2qfile` utility to do the conversion).
- `qubes.SelectDirectory`, `qubes.SelectFile` - services which should show
file/directory selection dialog and return (to stdout) a single line
@ -118,7 +119,7 @@ abstraction. This will change in the future. Those tools are:
- `nm-online -x` - called before `qubes.SyncNtpClock` service call by `qvm-sync-clock` tool
- `resize2fs` - called to resize filesystem on /rw partition by `qvm-grow-private` tool
- `gpk-update-viewer` - called by Qubes Manager to display available updates in a TemplateVM
- `systemctl start qubes-update-check.timer` (and similarly stop) - called when enabling/disabling updates checking in given VM (`qubes-update-check` [qvm-service](/en/doc/qubes-service/))
- `systemctl start qubes-update-check.timer` (and similarly stop) - called when enabling/disabling updates checking in given VM (`qubes-update-check` [qvm-service](/doc/qubes-service/))
Additionally automatic tests extensively calls various commands directly in VMs. We do not plan to change that.

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Windows Debugging
permalink: /en/doc/windows-debugging/
permalink: /doc/windows-debugging/
redirect_from:
- /en/doc/windows-debugging/
- /doc/WindowsDebugging/
- /wiki/WindowsDebugging/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Developer Books
permalink: /en/doc/devel-books/
permalink: /doc/devel-books/
redirect_from:
- /en/doc/devel-books/
- /doc/DevelBooks/
- /wiki/DevelBooks/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Developers' FAQ
permalink: /en/doc/devel-faq/
permalink: /doc/devel-faq/
redirect_from:
- /en/doc/devel-faq/
- /doc/DevelFaq/
- /wiki/DevelFaq/
---
@ -10,38 +11,37 @@ redirect_from:
Qubes Developers' FAQ
=====================
1. [Q: Why does dom0 need to be 64-bit?](#q-why-does-dom0-need-to-be-64-bit)
2. [Q: Why do you use KDE in Dom0? What is the roadmap for Gnome support?](#q-why-do-you-use-kde-in-dom0-what-is-the-roadmap-for-gnome-support)
3. [Q: What is the recommended build environment?](#q-what-is-the-recommended-build-environment)
4. [Q: How to build Qubes from sources?](#q-how-to-build-qubes-from-sources)
5. [Q: How do I submit a patch?](#q-how-do-i-submit-a-patch)
### Q: Why does dom0 need to be 64-bit?
Q: Why does dom0 need to be 64-bit?
-----------------------------------
Since 2013 [Xen has not supported 32-bit x86 architecture](http://wiki.xenproject.org/wiki/Xen_Project_Release_Features) and Intel VT-d, which Qubes uses to isolate devices and drivers, is available on Intel 64-bit processors only.
In addition, it is often more difficult to exploit a bug on x64 Linux than on x86 Linux (e.g. ASLR is sometimes harder to get around). While we designed Qubes with the emphasis on limiting any potential attack vectors in the first place, we still realize that some of the code running in Dom0, e.g. our GUI daemon or xen-store daemon, even though it is very simple code, might contain some bugs. Plus, we currently haven't implemented a separate storage domain, so the disk backends are also in Dom0 and are "reachable" from the VMs, which adds to the potential attack surface. So, faced with a choice between a 32-bit and a 64-bit OS for Dom0, it was almost a no-brainer, as the 64-bit option provides some (perhaps little, but still) more protection against some classes of attacks, and at the same time does not have any disadvantages (except that it requires a 64-bit processor, but all systems on which it makes sense to run Qubes, e.g. those with at least 3-4GB of memory, have 64-bit CPUs anyway).
### Q: Why do you use KDE in Dom0? What is the roadmap for Gnome support?
Q: Why do you use KDE in Dom0? What is the roadmap for Gnome support?
---------------------------------------------------------------------
There are a few things that are KDE-specific, but generally it should not be a big problem to also add Gnome support to Qubes (in Dom0 of course). Those KDE-specific things are:
- Qubes requires KDM (KDE Login Manager), rather than GDM, for the very simple reason that GDM doesn't obey standards and starts `/usr/bin/Xorg` instead of `/usr/bin/X`. This is important for Qubes, because we need to load a special "X wrapper" (to make it possible to use Linux usermode shared memory to access Xen shared memory pages in our App Viewers -- see the sources [here](https://github.com/QubesOS/qubes-gui-daemon/tree/master/shmoverride)). So, Qubes makes `/usr/bin/X` a symlink to the Qubes X Wrapper, which, in turn, executes `/usr/bin/Xorg`. This works well with KDM (and would probably also work with other X login managers), but not with GDM. If somebody succeeded in making GDM execute `/usr/bin/X` instead of `/usr/bin/Xorg`, we would love to hear about it!
- We maintain a special [repository](/en/doc/kde-dom0/) for building packages specifically for Qubes Dom0.
- We maintain a special [repository](/doc/kde-dom0/) for building packages specifically for Qubes Dom0.
- We've patched the KDE's Window Manager (specifically [one of the decoration plugins](https://github.com/QubesOS/qubes-desktop-linux-kde/tree/master/plastik-for-qubes)) to draw window decorations in the color of the specific AppVM's label.
If you're interested in porting GNOME for Qubes Dom0 use, let us know -- we will most likely welcome patches in this area.
### Q: What is the recommended build environment?
Q: What is the recommended build environment?
---------------------------------------------
Any rpm-based, 64-bit distribution; Fedora is preferred.
### Q: How to build Qubes from sources?
Q: How to build Qubes from sources?
-----------------------------------
See [the instruction](/en/doc/qubes-builder/)
See [the instructions](/doc/qubes-builder/).
### Q: How do I submit a patch?
Q: How do I submit a patch?
---------------------------
See [Qubes Source Code Repositories](/en/doc/source-code/).
See [Qubes Source Code Repositories](/doc/source-code/).

View File

@ -1,11 +1,11 @@
---
layout: doc
title: Documentation Guidelines
permalink: /en/doc/doc-guidelines/
permalink: /doc/doc-guidelines/
redirect_from:
- /doc/doc-guidelines/
- "/wiki/DocStyle/"
- "/doc/DocStyle/"
- /en/doc/doc-guidelines/
- /wiki/DocStyle/
- /doc/DocStyle/
---
Guidelines for Documentation Contributors

View File

@ -1,8 +1,9 @@
---
layout: doc
title: GUI
permalink: /en/doc/gui/
permalink: /doc/gui/
redirect_from:
- /en/doc/gui/
- /en/doc/gui-docs/
- /doc/GUIdocs/
- /wiki/GUIdocs/
@ -11,24 +12,24 @@ redirect_from:
Qubes GUI protocol
==================
qubes\_gui and qubes\_guid processes
qubes_gui and qubes_guid processes
------------------------------------
All AppVM X applications connect to the local (running in the AppVM) Xorg server, which uses the following "hardware" drivers:
- *dummy\_drv* - video driver, that paints onto a framebuffer located in RAM, not connected to real hardware
- *qubes\_drv* - it provides a virtual keyboard and mouse (in fact, more, see below)
- *dummy_drv* - video driver, that paints onto a framebuffer located in RAM, not connected to real hardware
- *qubes_drv* - it provides a virtual keyboard and mouse (in fact, more, see below)
For each AppVM, there is a pair of *qubes\_gui* (running in AppVM) and *qubes\_guid* (running in dom0) processes, connected over vchan. Main responsibilities of *qubes\_gui* are:
For each AppVM, there is a pair of *qubes_gui* (running in AppVM) and *qubes_guid* (running in dom0) processes, connected over vchan. Main responsibilities of *qubes_gui* are:
- call XCompositeRedirectSubwindows on the root window, so that each window has its own composition buffer
- instruct the local Xorg server to notify it about window creation, configuration and damage events; pass information on these events to dom0
- receive information about keyboard and mouse events from dom0, tell *qubes\_drv* to fake appropriate events
- receive information about keyboard and mouse events from dom0, tell *qubes_drv* to fake appropriate events
- receive information about window size/position change, apply them to the local window
Main responsibilities of *qubes\_guid* are:
Main responsibilities of *qubes_guid* are:
- create a window in dom0 whenever an information on window creation in AppVM is received from *qubes\_gui*
- create a window in dom0 whenever an information on window creation in AppVM is received from *qubes_gui*
- whenever the local window receives XEvent, pass information on it to AppVM (particularly, mouse and keyboard data)
- whenever AppVM signals damage event, tell local Xorg server to repaint a given window fragment
- receive information about window size/position change, apply them to the local window
@ -38,19 +39,19 @@ Note that keyboard and mouse events are passed to AppVM only if a window belongi
Window content updates implementation
-------------------------------------
Typical remote desktop applications, like *vnc*, pass the information on all changed window content in-band (say, over tcp). As the channel has limited throughput, this impacts video performance. In case of Qubes, *qubes\_gui* does not transfer all changed pixels via vchan. Instead, for each window, upon its creation or size change, *qubes\_gui*
Typical remote desktop applications, like *vnc*, pass the information on all changed window content in-band (say, over tcp). As the channel has limited throughput, this impacts video performance. In case of Qubes, *qubes_gui* does not transfer all changed pixels via vchan. Instead, for each window, upon its creation or size change, *qubes_gui*
- asks *qubes\_drv* driver for the list of physical memory frames that hold the composition buffer of a window
- pass this information via `MFNDUMP` message to *qubes\_guid* in dom0
- asks *qubes_drv* driver for the list of physical memory frames that hold the composition buffer of a window
- pass this information via `MFNDUMP` message to *qubes_guid* in dom0
Now, *qubes\_guid* has to tell dom0 Xorg server about the location of the buffer. There is no supported way (e.g. Xorg extension) to do this zero-copy style. The following method is used in Qubes:
Now, *qubes_guid* has to tell dom0 Xorg server about the location of the buffer. There is no supported way (e.g. Xorg extension) to do this zero-copy style. The following method is used in Qubes:
- in dom0, the Xorg server is started with *LD\_PRELOAD*-ed library named *shmoverride.so*. This library hooks all function calls related to shared memory.
- *qubes\_guid* creates a shared memory segment, and then tells Xorg to attach it via *MIT-SHM* extension
- when Xorg tries to attach the segment (via glibc *shmat*) *shmoverride.so* intercepts this call and instead maps AppVM memory via *xc\_map\_foreign\_range*
- in dom0, the Xorg server is started with *LD_PRELOAD*-ed library named *shmoverride.so*. This library hooks all function calls related to shared memory.
- *qubes_guid* creates a shared memory segment, and then tells Xorg to attach it via *MIT-SHM* extension
- when Xorg tries to attach the segment (via glibc *shmat*) *shmoverride.so* intercepts this call and instead maps AppVM memory via *xc_map_foreign_range*
- since then, we can use MIT-SHM functions, e.g. *XShmPutImage* to draw onto a dom0 window. *XShmPutImage* will paint with DRAM speed; actually, many drivers use DMA for this.
The important detail is that *xc\_map\_foreign\_range* verifies that a given mfn range actually belongs to a given domain id (and the latter is provided by trusted *qubes\_guid*). Therefore, rogue AppVM cannot gain anything by passing crafted mnfs in the `MFNDUMP` message.
The important detail is that *xc_map_foreign_range* verifies that a given mfn range actually belongs to a given domain id (and the latter is provided by the trusted *qubes_guid*). Therefore, a rogue AppVM cannot gain anything by passing crafted mfns in the `MFNDUMP` message.
To sum up, this solution has the following benefits:
@ -65,7 +66,7 @@ Security markers on dom0 windows
It is important that user knows which AppVM a given window belongs to. This prevents an attack when a rogue AppVM paints a window pretending to belong to other AppVM or dom0, and tries to steal e.g. passwords.
In Qubes, the custom window decorator is used, that paints a colourful frame (the colour is determined during AppVM creation) around decorated windows. Additionally, window title always starts with **[name of the AppVM]**. If a window has a *override\_redirect* attribute, meaning that it should not be treated by a window manager (typical case is menu windows), *qubes\_guid* draws a two-pixel colourful frame around it manually.
In Qubes, a custom window decorator is used that paints a colourful frame (the colour is determined during AppVM creation) around decorated windows. Additionally, the window title always starts with **[name of the AppVM]**. If a window has the *override_redirect* attribute, meaning that it should not be treated by a window manager (a typical case is menu windows), *qubes_guid* draws a two-pixel colourful frame around it manually.
Clipboard sharing implementation
--------------------------------
@ -73,12 +74,12 @@ Clipboard sharing implementation
Certainly, it would be insecure to allow AppVM to read/write clipboard of other AppVMs unconditionally. Therefore, the following mechanism is used:
- there is a "qubes clipboard" in dom0 - its contents is stored in a regular file in dom0.
- if user wants to copy local AppVM clipboard to qubes clipboard, she must focus on any window belonging to this AppVM, and press **Ctrl-Shift-C**. This combination is trapped by *qubes-guid*, and `CLIPBOARD_REQ` message is sent to AppVM. *qubes-gui* responds with *CLIPBOARD\_DATA* message followed by clipboard contents.
- user focuses on other AppVM window, presses **Ctrl-Shift-V**. This combination is trapped by *qubes-guid*, and `CLIPBOARD_DATA` message followed by qubes clipboard contents is sent to AppVM; *qubes\_gui* copies data to the the local clipboard, and then user can paste its contents to local applications normally.
- if user wants to copy local AppVM clipboard to qubes clipboard, she must focus on any window belonging to this AppVM, and press **Ctrl-Shift-C**. This combination is trapped by *qubes-guid*, and `CLIPBOARD_REQ` message is sent to AppVM. *qubes-gui* responds with *CLIPBOARD_DATA* message followed by clipboard contents.
- user focuses on another AppVM window, presses **Ctrl-Shift-V**. This combination is trapped by *qubes-guid*, and the `CLIPBOARD_DATA` message followed by the qubes clipboard contents is sent to the AppVM; *qubes_gui* copies the data to the local clipboard, and then the user can paste its contents into local applications normally.
This way, user can quickly copy clipboards between AppVMs. This action is fully controlled by the user, it cannot be triggered/forced by any AppVM.
*qubes\_gui* and *qubes\_guid* code notes
*qubes_gui* and *qubes_guid* code notes
-----------------------------------------
Both applications are structured similarly. They use the *select* function to wait for either of the two event sources:
@ -86,18 +87,18 @@ Both applications are structures similarly. They use *select* function to wait f
- messages from the local X server
- messages from the vchan connecting to the remote party
The XEvents are handled by *handle\_xevent\_eventname* function, messages are handled by *handle\_messagename* function. One should be very careful when altering the actual *select* loop - e.g. both XEvents and vchan messages are buffered, meaning that *select* will not wake for each message.
The XEvents are handled by *handle_xevent_eventname* function, messages are handled by *handle_messagename* function. One should be very careful when altering the actual *select* loop - e.g. both XEvents and vchan messages are buffered, meaning that *select* will not wake for each message.
If one changes the number/order/signature of messages, one should increase the *QUBES\_GUID\_PROTOCOL\_VERSION* constant in *messages.h* include file.
If one changes the number/order/signature of messages, one should increase the *QUBES_GUID_PROTOCOL_VERSION* constant in *messages.h* include file.
*qubes\_guid* writes debugging information to */var/log/qubes/qubes.domain\_id.log* file; *qubes\_gui* writes debugging information to */var/log/qubes/gui\_agent.log*. Include these files when reporting a bug.
*qubes_guid* writes debugging information to */var/log/qubes/qubes.domain_id.log* file; *qubes_gui* writes debugging information to */var/log/qubes/gui_agent.log*. Include these files when reporting a bug.
AppVM -> dom0 messages
-----------------------
Proper handling of the below messages is security-critical. Observe that, apart from two messages (`CLIPBOARD` and `MFNDUMP`), all of them have a fixed size, so the parsing code can be small.
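This fixed-size property is what keeps the parsing code simple: for each message type there is exactly one acceptable body length, and anything else coming from an AppVM should be rejected. The sketch below illustrates that kind of check; the message type constants and the assumption of a header carrying a type and a length are purely illustrative, not the actual *qubes_guid* message layout.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative type values -- NOT the real protocol constants. */
    enum { MSG_MAP = 1, MSG_CONFIGURE, MSG_CLIPBOARD_DATA };

    struct msg_map_info  { uint32_t transient_for, override_redirect; };
    struct msg_configure { uint32_t x, y, width, height, override_redirect; };

    /* The only body length accepted for a fixed-size message,
     * or -1 for the variable-length ones (CLIPBOARD, MFNDUMP). */
    static int expected_len(uint32_t type)
    {
        switch (type) {
        case MSG_MAP:            return sizeof(struct msg_map_info);
        case MSG_CONFIGURE:      return sizeof(struct msg_configure);
        case MSG_CLIPBOARD_DATA: return -1;  /* length limit checked separately */
        default:                 return -2;  /* unknown type: reject            */
        }
    }

    /* The length supplied by an AppVM is untrusted; anything unexpected
     * should abort handling of that domain's connection. */
    static int validate(uint32_t type, uint32_t untrusted_len)
    {
        int want = expected_len(type);
        if (want == -2)
            return 0;
        if (want >= 0 && untrusted_len != (uint32_t)want)
            return 0;
        return 1;
    }

    int main(void)
    {
        printf("MSG_MAP, len 8:  %s\n", validate(MSG_MAP, 8)  ? "ok" : "reject");
        printf("MSG_MAP, len 64: %s\n", validate(MSG_MAP, 64) ? "ok" : "reject");
        return 0;
    }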
The *override_redirect* window attribute is explained at [Override Redirect Flag](http://tronche.com/gui/x/xlib/window/attributes/override-redirect.html). The *transient_for* attribute is explained at [Transient_for attribute](http://tronche.com/gui/x/icccm/sec-4.html#WM_TRANSIENT_FOR).
Window manager hints and flags are described at [http://standards.freedesktop.org/wm-spec/latest/](http://standards.freedesktop.org/wm-spec/latest/), especially part about `_NET_WM_STATE`.
@ -118,10 +119,11 @@ struct msghdr {
The header is followed by message-specific data.
~~~
|Message name|Structure after header|Action|
|:-----------|:---------------------|:-----|
|MSG_CLIPBOARD_DATA|amorphic blob (length determined by the "window" field)|Store the received clipboard content (not parsing in any way)|
|MSG_CREATE|` struct msg_create { `
` uint32_t x; `
` uint32_t y; `
` uint32_t width; `
@ -129,20 +131,20 @@ The header is followed by message-specific data.
` uint32_t parent; `
` uint32_t override_redirect; `
` }; `|Create a window with given parameters|
|MSG_DESTROY|None|Destroy a window|
|MSG_MAP|` struct msg_map_info { `
` uint32_t transient_for; `
` uint32_t override_redirect; `
` }; `|Map a window with given parameters|
|MSG_UNMAP|None|Unmap a window|
|MSG_CONFIGURE|` struct msg_configure { `
` uint32_t x; `
` uint32_t y; `
` uint32_t width; `
` uint32_t height; `
` uint32_t override_redirect; `
` }; `|Change window position/size/type|
|MSG_MFNDUMP|` struct shm_cmd { `
` uint32_t shmid; `
` uint32_t width; `
` uint32_t height; `
@ -152,18 +154,18 @@ The header is followed by message-specific data.
` uint32_t domid; `
` uint32_t mfns[0]; `
` }; `|Retrieve the array of mfns that constitute the composition buffer of a remote window.
The "num\_mfn" 32bit integers follow the shm\_cmd structure; "off" is the offset of the composite buffer start in the first frame; "shmid" and "domid" parameters are just placeholders (to be filled by *qubes\_guid*), so that we can use the same structure when talking to *shmoverride.so*|
|MSG\_SHMIMAGE|` struct msg_shmimage { `
The "num_mfn" 32bit integers follow the shm_cmd structure; "off" is the offset of the composite buffer start in the first frame; "shmid" and "domid" parameters are just placeholders (to be filled by *qubes_guid*), so that we can use the same structure when talking to *shmoverride.so*|
|MSG_SHMIMAGE|` struct msg_shmimage { `
` uint32_t x; `
` uint32_t y;`
` uint32_t width;`
` uint32_t height;`
` }; `|Repaint the given window fragment|
|MSG_WMNAME|` struct msg_wmname { `
` char data[128]; `
` } ; `|Set the window name; only printable characters are allowed|
|MSG_DOCK|None|Dock the window in the tray|
|MSG_WINDOW_HINTS|` struct msg_window_hints { `
` uint32_t flags; `
` uint32_t min_width; `
` uint32_t min_height; `
@ -174,12 +176,13 @@ The header is followed by message-specific data.
` uint32_t base_width; `
` uint32_t base_height; `
` }; `|Size hints for window manager|
|MSG_WINDOW_FLAGS|` struct msg_window_flags { `
` uint32_t flags_set; `
` uint32_t flags_unset;`
` }; `|Change window state request; the fields contain bitmasks of which flags are requested to be set and which to be unset|
~~~
Dom0 -> AppVM messages
-----------------------
Proper handling of the below messages is NOT security-critical.
@ -196,41 +199,42 @@ struct msghdr {
The header is followed by message-specific data.
` KEYPRESS, BUTTON, MOTION, FOCUS ` messages pass information extracted from dom0 XEvent; see appropriate event documentation.
~~~
|Message name|Structure after header|Action|
|:-----------|:---------------------|:-----|
|MSG_KEYPRESS|` struct msg_keypress { `
` uint32_t type; `
` uint32_t x; `
` uint32_t y; `
` uint32_t state; `
` uint32_t keycode; `
` }; `|Tell *qubes_drv* driver to generate a keypress|
|MSG_BUTTON|` struct msg_button { `
` uint32_t type; `
` uint32_t x; `
` uint32_t y; `
` uint32_t state; `
` uint32_t button; `
` }; `|Tell *qubes_drv* driver to generate mouseclick|
|MSG_MOTION|` struct msg_motion { `
` uint32_t x; `
` uint32_t y; `
` uint32_t state; `
` uint32_t is_hint; `
` }; `|Tell *qubes_drv* driver to generate motion event|
|MSG_CONFIGURE|` struct msg_configure { `
` uint32_t x; `
` uint32_t y; `
` uint32_t width; `
` uint32_t height; `
` uint32_t override_redirect; `
` }; `|Change window position/size/type|
|MSG_MAP|` struct msg_map_info { `
` uint32_t transient_for; `
` uint32_t override_redirect; `
` }; `|Map a window with given parameters|
|MSG_CLOSE|None|send wmDeleteMessage to the window|
|MSG_CROSSING|` struct msg_crossing { `
` uint32_t type; `
` uint32_t x; `
` uint32_t y; `
@ -239,18 +243,18 @@ The header is followed by message-specific data.
` uint32_t detail; `
` uint32_t focus; `
` }; `|Notify window about enter/leave event|
|MSG_FOCUS|` struct msg_focus { `
` uint32_t type; `
` uint32_t mode; `
` uint32_t detail; `
` }; `|Raise a window, XSetInputFocus|
|MSG_CLIPBOARD_REQ|None|Retrieve the local clipboard, pass contents to gui-daemon|
|MSG_CLIPBOARD_DATA|amorphic blob|Insert the received data into local clipboard|
|MSG_EXECUTE|Obsolete|Obsolete, unused|
|MSG_KEYMAP_NOTIFY|` unsigned char remote_keys[32]; `|Synchronize the keyboard state (key pressed/released) with dom0|
|MSG_WINDOW_FLAGS|` struct msg_window_flags { `
` uint32_t flags_set; `
` uint32_t flags_unset;`
` }; `|Window state change confirmation|
~~~
@ -1,8 +1,9 @@
---
layout: doc
title: Qubes Architecture
permalink: /en/doc/qubes-architecture/
permalink: /doc/qubes-architecture/
redirect_from:
- /en/doc/qubes-architecture/
- /doc/QubesArchitecture/
- /wiki/QubesArchitecture/
---
@ -1,8 +1,9 @@
---
layout: doc
title: Qubes Networking
permalink: /en/doc/qubes-net/
permalink: /doc/qubes-net/
redirect_from:
- /en/doc/qubes-net/
- /doc/QubesNet/
- /wiki/QubesNet/
---
@ -1,8 +1,9 @@
---
layout: doc
title: Security-critical Code
permalink: /en/doc/security-critical-code/
permalink: /doc/security-critical-code/
redirect_from:
- /en/doc/security-critical-code/
- /doc/SecurityCriticalCode/
- /wiki/SecurityCriticalCode/
- /trac/wiki/SecurityCriticalCode/
@ -13,7 +14,7 @@ Security-Critical Code in Qubes OS
Below is a list of security-critical (AKA trusted) code in Qubes OS. A successful attack against any of these components might allow an attacker to compromise the security of Qubes OS. This code can be thought of as the Trusted Computing Base (TCB) of Qubes OS. The goal of the project has been to keep the amount of this trusted code to an absolute minimum. The size of the current TCB is on the order of hundreds of thousands of lines of C code, which is several orders of magnitude less than in other OSes, such as Windows, Linux, or Mac OS, where it is on the order of tens of millions of lines of C code.
For more information about the security goals of Qubes OS, see [this page](/en/doc/security-goals/).
For more information about the security goals of Qubes OS, see [this page](/doc/security-goals/).
Security-Critical Qubes-Specific Components
-------------------------------------------
@ -1,8 +1,9 @@
---
layout: doc
title: Template Implementation
permalink: /en/doc/template-implementation/
permalink: /doc/template-implementation/
redirect_from:
- /en/doc/template-implementation/
- /doc/TemplateImplementation/
- /wiki/TemplateImplementation/
---
@ -1,11 +1,11 @@
---
layout: doc
title: License
permalink: /en/doc/license/
permalink: /doc/license/
redirect_from:
- /doc/license/
- "/doc/QubesLicensing/"
- "/wiki/QubesLicensing/"
- /en/doc/license/
- /doc/QubesLicensing/
- /wiki/QubesLicensing/
---
Qubes OS License
@ -1,8 +1,9 @@
---
layout: doc
title: Qubes Research
permalink: /en/doc/qubes-research/
permalink: /doc/qubes-research/
redirect_from:
- /en/doc/qubes-research/
- /doc/QubesResearch/
- /wiki/QubesResearch/
---
@ -1,11 +1,11 @@
---
layout: doc
title: Reporting Bugs
permalink: /en/doc/reporting-bugs/
permalink: /doc/reporting-bugs/
redirect_from:
- /doc/reporting-bugs/
- "/doc/BugReportingGuide/"
- "/wiki/BugReportingGuide/"
- /en/doc/reporting-bugs/
- /doc/BugReportingGuide/
- /wiki/BugReportingGuide/
---
Bug Reporting Guide
@ -1,8 +1,9 @@
---
layout: doc
title: Dom0 Secure Updates
permalink: /en/doc/dom0-secure-updates/
permalink: /doc/dom0-secure-updates/
redirect_from:
- /en/doc/dom0-secure-updates/
- /doc/Dom0SecureUpdates/
- /wiki/Dom0SecureUpdates/
---
@ -1,8 +1,9 @@
---
layout: doc
title: DVMimpl
permalink: /en/doc/dvm-impl/
permalink: /doc/dvm-impl/
redirect_from:
- /en/doc/dvm-impl/
- /doc/DVMimpl/
- /wiki/DVMimpl/
---
@ -1,8 +1,9 @@
---
layout: doc
title: Qfilecopy
permalink: /en/doc/qfilecopy/
permalink: /doc/qfilecopy/
redirect_from:
- /en/doc/qfilecopy/
- /doc/Qfilecopy/
- /wiki/Qfilecopy/
---
@ -1,8 +1,9 @@
---
layout: doc
title: Qfileexchgd
permalink: /en/doc/qfileexchgd/
permalink: /doc/qfileexchgd/
redirect_from:
- /en/doc/qfileexchgd/
- /doc/Qfileexchgd/
- /wiki/Qfileexchgd/
---
@ -10,7 +11,7 @@ redirect_from:
**This mechanism is obsolete as of Qubes Beta 1!**
==================================================
Please see this [page](/en/doc/qfilecopy/) instead.
Please see this [page](/doc/qfilecopy/) instead.
qfilexchgd, the Qubes file exchange daemon
==========================================
@ -1,8 +1,9 @@
---
layout: doc
title: Qmemman
permalink: /en/doc/qmemman/
permalink: /doc/qmemman/
redirect_from:
- /en/doc/qmemman/
- /doc/Qmemman/
- /wiki/Qmemman/
---
@ -0,0 +1,330 @@
---
layout: doc
title: Qrexec2
permalink: /doc/qrexec2/
redirect_from:
- /doc/qrexec2-implementation/
- /en/doc/qrexec2-implementation/
- /doc/Qrexec2Implementation/
- /wiki/Qrexec2Implementation/
---
# Command execution in VMs #
(*This page is about qrexec v2. For qrexec v3, see
[here](/doc/qrexec3/).*)
Qubes **qrexec** is a framework for implementing inter-VM (including Dom0-VM)
services. It offers a mechanism to start programs in VMs, redirect their
stdin/stdout, and a policy framework to control it all.
## Qrexec basics ##
During each domain creation a process named `qrexec-daemon` is started in
dom0, and a process named `qrexec-agent` is started in the VM. They are
connected over `vchan` channel.
Typically, the first thing that a `qrexec-client` instance does is to send
a request to `qrexec-agent` to start a process in the VM. From then on,
the stdin/stdout/stderr from this remote process will be passed to the
`qrexec-client` process.
E.g., to start a primitive shell in a VM, type the following in a Dom0 console:
[user@dom0 ~]$ /usr/lib/qubes/qrexec-client -d <vm name> user:bash
The string before the first colon specifies which user to run the command as.
Adding `-e` on the `qrexec-client` command line results in mere command
execution (no data passing), and `qrexec-client` exits immediately after
sending the execution request.
There is also the `-l <local program>` flag, which directs `qrexec-client`
to pass stdin/stdout of the remote program not to its stdin/stdout, but to
the (spawned for this purpose) `<local program>`.
The `qvm-run` command is heavily based on `qrexec-client`. It also takes care
of additional activities (e.g., starting the domain, if it is not up yet, and
starting the GUI daemon). Thus, it is usually more convenient to use `qvm-run`.
There can be an almost arbitrary number of `qrexec-client` processes for a domain
(i.e., `qrexec-client` processes connected to the same `qrexec-daemon`);
their data is multiplexed independently.
There is a similar command line utility available inside Linux AppVMs (note
the `-vm` suffix): `qrexec-client-vm`, which will be described in subsequent
sections.
## Qubes RPC services ##
Apart from simple Dom0->VM command execution, as discussed above, it is
also useful to have more advanced infrastructure for controlled inter-VM
RPC/services. This might be used for simple things like inter-VM file
copy operations, as well as more complex tasks like starting a DispVM
and requesting it to perform certain operations on the file(s) handed to it.
Instead of implementing complex RPC-like mechanisms for inter-VM communication,
Qubes takes a much simpler and more pragmatic approach and aims to provide only
simple *pipes* between the VMs, plus the ability to request *pre-defined* programs
(servers) to be started on the other end of such pipes, and a centralized
policy (enforced by the `qrexec-policy` process running in dom0) which says
which VMs can request which services from which VMs.
Thanks to the framework and automatic stdin/stdout redirection, RPC programs
are very simple; both the client and server just use their stdin/stdout to pass
data. The framework does all the inner work to connect these file descriptors
to each other via `qrexec-daemon` and `qrexec-agent`. Additionally, DispVMs
are tightly integrated; RPC to a DispVM is a simple matter of using a magic
`$dispvm` keyword as the target VM name.
All services in Qubes are identified by a single string, which by convention
takes the form `qubes.ServiceName`. Each VM can provide handlers for each of
the known services by providing a file in its `/etc/qubes-rpc/` directory with
the same name as the service it is supposed to handle. This file will then
be executed by the qrexec service, if the dom0 policy allows the service to
be requested (see below). Typically, a file in `/etc/qubes-rpc/` contains
just one line, which is the path to the specific binary that acts as a server
for the incoming request; however, it may also be the actual executable
itself. The qrexec framework takes care of connecting the stdin/stdout
of the server process with the corresponding stdin/stdout of the requesting
process in the requesting VM (see the example Hello World service described below).
## Qubes RPC administration ##
Besides each VM needing to provide explicit programs to serve each supported
service, the inter-VM service RPC is also governed by a central policy in Dom0.
In dom0, there is a set of files in the `/etc/qubes-rpc/policy/` directory,
whose names describe the available RPC actions; their content is the RPC
access policy database. Some examples of the default services in Qubes are:
qubes.Filecopy
qubes.OpenInVM
qubes.ReceiveUpdates
qubes.SyncAppMenus
qubes.VMShell
qubes.ClipboardPaste
qubes.Gpg
qubes.NotifyUpdates
qubes.PdfConvert
These files contain lines with the following format:
srcvm destvm (allow|deny|ask)[,user=user_to_run_as][,target=VM_to_redirect_to]
You can specify `srcvm` and `destvm` by name, or by one of the `$anyvm`,
`$dispvm`, or `dom0` reserved keywords (note that the string `dom0` does not match
the `$anyvm` pattern; all other names do). Only the `$anyvm` keyword makes sense
in the `srcvm` field (service calls from dom0 are currently always allowed, and
`$dispvm` means "a new VM created for this particular request," so it is never
the source of a request). Currently, there is no way to specify the source VM by
type, but this is planned for Qubes R3.
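For illustration, a hypothetical policy file for a custom service might look like this (the VM names are examples only; lines are evaluated top-down and the first matching line wins):

    untrusted   $anyvm    deny
    work        vault     allow,user=user
    $anyvm      $dispvm   allow
    $anyvm      $anyvm    ask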
Whenever an RPC request for a service named "XYZ" is received, the first line
in `/etc/qubes-rpc/policy/XYZ` that matches the actual `srcvm`/`destvm` pair is
consulted to determine whether to allow the RPC, what user account the program
should run under in the target VM, and what VM to redirect the execution to. If
the policy file does not exist, the user is prompted to create one *manually*;
if there is still no policy file after prompting, the action is denied.
On the target VM, the file `/etc/qubes-rpc/XYZ` must exist, containing the file
name of the program that will be invoked.
### Requesting VM-VM (and VM-Dom0) services execution ###
In a src VM, one should invoke the qrexec client via the following command:
/usr/lib/qubes/qrexec-client-vm <target vm name> <service name> <local program path> [local program arguments]
Note that only stdin/stdout is passed between the RPC server and client --
notably, no command-line arguments are passed.
The source VM name can be accessed in the server process via the
`QREXEC_REMOTE_DOMAIN` environment variable. (Note that the source VM has *no*
control over the name provided in this variable--the name of the VM is
provided by dom0, and so is trusted.)
By default, the stderr of the client and server is logged to the respective
`/var/log/qubes/qrexec.XID` files, in each of the VMs.
Be very careful when coding and adding a new RPC service! Any vulnerability
in an RPC server can be fatal to the security of the target VM!
If requesting VM-VM (and VM-Dom0) services execution *without cmdline helper*,
connect directly to `/var/run/qubes/qrexec-agent-fdpass` socket as described
[below](#all-the-pieces-together-at-work).
### Revoking "Yes to All" authorization ###
Qubes RPC policy supports an "ask" action, which prompts the user whether
a given RPC call should be allowed. It is the default for services such
as inter-VM file copy. A prompt window launches in dom0 and gives the user
the option to click "Yes to All", which allows the action and adds a new entry
to the policy file that unconditionally allows further calls for the given
(service, srcVM, dstVM) tuple.
In order to remove such authorization, issue this command from a Dom0 terminal
(example below for `qubes.Filecopy` service):
sudo nano /etc/qubes-rpc/policy/qubes.Filecopy
and then remove any line(s) ending in "allow" (before the first `##` comment)
which are the "Yes to All" results.
A user might also want to set their own policies in this section. This may
mostly serve to prevent the user from mistakenly copying files or text from
a trusted to an untrusted domain, or vice versa.
### Qubes RPC "Hello World" service ###
We will show the necessary files to create a simple RPC call that adds two
integers on the target VM and returns the result to the invoking VM.
* Client code on source VM (`/usr/bin/our_test_add_client`)
#!/bin/sh
echo $1 $2 # pass data to rpc server
exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other rpc endpoint
* Server code on target VM (`/usr/bin/our_test_add_server`)
#!/bin/sh
read arg1 arg2 # read from stdin, which is received from the rpc client
echo $(($arg1+$arg2)) # print to stdout - so, pass to the rpc client
* Policy file in dom0 (`/etc/qubes-rpc/policy/test.Add`)
$anyvm $anyvm ask
* Server path definition on target VM (`/etc/qubes-rpc/test.Add`)
/usr/bin/our_test_add_server
* To test this service, run the following in the source VM:
/usr/lib/qubes/qrexec-client-vm <target VM> test.Add /usr/bin/our_test_add_client 1 2
and we should get "3" as answer, provided dom0 policy allows the call to pass
through, which would happen after we click "Yes" in the popup that should
appear after the invocation of this command. If we changed the policy from
"ask" to "allow", then no popup should be presented, and the call will always
be allowed.
**Note:** For a real world example of writing a qrexec service, see this
[blog post](http://theinvisiblethings.blogspot.com/2013/02/converting-untrusted-pdfs-into-trusted.html).
### More high-level RPCs? ###
As previously noted, Qubes aims to provide mechanisms that are very simple
and thus have a very small attack surface. This is the reason why the inter-VM
RPC framework is very primitive and doesn't include any serialization,
argument passing, etc. We should remember, however, that
users/app developers are always free to run more high-level RPC protocols on
top of qrexec. Care should be taken in that case to consider the potential
attack surface that is exposed to untrusted or less trusted VMs.
# Qubes RPC internals #
(*This is about the implementation of qrexec v2. For the implementation of
qrexec v3, see [here](/doc/qrexec3/#qubes-rpc-internals). Note that the user
API in v3 is backward compatible: qrexec apps written for Qubes R2 should
run without modification on Qubes R3.*)
## Dom0 tools implementation ##
Players:
* `/usr/lib/qubes/qrexec-daemon`: started by mgmt stack (qubes.py) when a
VM is started.
* `/usr/lib/qubes/qrexec-policy`: internal program used to evaluate the
policy file and make the second half of the connection.
* `/usr/lib/qubes/qrexec-client`: raw command line tool that talks to the
daemon via unix socket (`/var/run/qubes/qrexec.XID`)
**Note:** None of the above tools are designed to be used by users.
## Linux VMs implementation ##
Players:
* `/usr/lib/qubes/qrexec-agent`: started by VM bootup scripts, a daemon.
* `/usr/lib/qubes/qubes-rpc-multiplexer`: executes the actual service program,
as specified in VM's `/etc/qubes-rpc/qubes.XYZ`.
* `/usr/lib/qubes/qrexec-client-vm`: raw command line tool that talks to
the agent.
**Note:** None of the above tools are designed to be used by
users. `qrexec-client-vm` is designed to be wrapped up by Qubes apps.
## Windows VMs implementation ##
`%QUBES_DIR%` is the installation path (`c:\Program Files\Invisible Things
Lab\Qubes OS Windows Tools` by default).
* `%QUBES_DIR%\bin\qrexec-agent.exe`: runs as a system service. Responsible
both for raw command execution and interpreting RPC service requests.
* `%QUBES_DIR%\qubes-rpc`: directory with `qubes.XYZ` files that contain
commands for executing RPC services. Binaries for the services are contained
in `%QUBES_DIR%\qubes-rpc-services`.
* `%QUBES_DIR%\bin\qrexec-client-vm`: raw command line tool that talks to
the agent.
**Note:** None of the above tools are designed to be used by
users. `qrexec-client-vm` is designed to be wrapped up by Qubes apps.
## All the pieces together at work ##
**Note:** This section is not needed to use qrexec for writing Qubes
apps. Also note that the [qrexec framework implementation in Qubes R3](/doc/qrexec3/)
differs significantly from what is described in this section.
The VM-VM channels in Qubes R2 are made via "gluing" two VM-Dom0 and Dom0-VM
vchan connections:
![qrexec2-internals.png](/attachment/wiki/Qrexec2Implementation/qrexec2-internals.png)
Note that Dom0 never examines the actual data flowing in either of the two
vchan connections.
When a user in a source VM executes `qrexec-client-vm` utility, the following
steps are taken:
* `qrexec-client-vm` connects to `qrexec-agent`'s
`/var/run/qubes/qrexec-agent-fdpass` unix socket 3 times. It reads 4 bytes from
each of them, which is the fd number of the accepted socket in the agent. These
3 integers, in text form and concatenated, form the "connection identifier" (CID)
* `qrexec-client-vm` writes to `/var/run/qubes/qrexec-agent` fifo a blob,
consisting of target vmname, rpc action, and CID
* `qrexec-client-vm` executes the rpc client, passing the above mentioned
unix sockets as process stdin/stdout, and optionally stderr (if the
`PASS_LOCAL_STDERR` env variable is set)
* `qrexec-agent` passes the blob to `qrexec-daemon`, via
`MSG_AGENT_TO_SERVER_TRIGGER_CONNECT_EXISTING` message over vchan
* `qrexec-daemon` executes `qrexec-policy`, passing source vmname, target
vmname, rpc action, and CID as cmdline arguments
* `qrexec-policy` evaluates the policy file. If successful, it creates a pair of
`qrexec-client` processes, whose stdin/stdout are cross-connected.
* The first `qrexec-client` connects to the src VM, using the `-c ClientID`
parameter, which results in not creating a new process, but connecting to
the existing process file descriptors (these are the fds of unix socket
created in step 1).
* The second `qrexec-client` connects to the target VM, and executes
`qubes-rpc-multiplexer` command there with the rpc action as the cmdline
argument. Finally, `qubes-rpc-multiplexer` executes the correct rpc server
on the target.
* In the above step, if the target VM is `$dispvm`, the DispVM is created
via the `qfile-daemon-dvm` program. The latter waits for the `qrexec-client`
process to exit, and then destroys the DispVM.
*TODO: Protocol description ("wire-level" spec)*
@ -0,0 +1,449 @@
---
layout: doc
title: Qrexec3
permalink: /doc/qrexec3/
redirect_from:
- /en/doc/qrexec3/
- /doc/Qrexec3/
- /wiki/Qrexec3/
- /doc/qrexec/
- /en/doc/qrexec/
- /doc/Qrexec/
- /wiki/Qrexec/
- /doc/qrexec3-implementation/
- /en/doc/qrexec3-implementation/
- /doc/Qrexec3Implementation/
- /wiki/Qrexec3Implementation/
---
# Command execution in VMs #
(*This page is about qrexec v3. For qrexec v2, see
[here](/doc/qrexec2/).*)
The **qrexec** framework is used by core Qubes components to implement
communication between domains. Qubes domains are isolated by design, but
there is a need for a mechanism to allow the administrative domain (dom0) to
force command execution in another domain (VM). For instance, when the user
selects an application from the KDE menu, it should be started in the selected
VM. Also, it is often useful to be able to pass stdin/stdout/stderr from an
application running in a VM to dom0 (and the other way around). In specific
circumstances, Qubes allows VMs to be initiators of such communications (so,
for example, a VM can notify dom0 that there are updates available for it).
## Qrexec basics ##
Qrexec is built on top of vchan (a library providing data links between
VMs). During domain creation a process named `qrexec-daemon` is started
in dom0, and a process named `qrexec-agent` is started in the VM. They are
connected over **vchan** channel. `qrexec-daemon` listens for connections
from dom0 utility named `qrexec-client`. Typically, the first thing that a
`qrexec-client` instance does is to send a request to `qrexec-daemon` to
start a process (let's name it `VMprocess`) with a given command line in
a specified VM (`someVM`). `qrexec-daemon` assigns unique vchan connection
details and sends them both to `qrexec-client` (in dom0) and `qrexec-agent`
(in `someVM`). `qrexec-client` starts a vchan server which `qrexec-agent`
connects to. Since then, stdin/stdout/stderr from the VMprocess is passed
via vchan between `qrexec-agent` and the `qrexec-client` process.
So, for example, executing in dom0:
qrexec-client -d someVM user:bash
allows one to work with the remote shell. The string before the first
colon specifies which user to run the command as. Adding `-e` on the
`qrexec-client` command line results in mere command execution (no data
passing), and `qrexec-client` exits immediately after sending the execution
request and receiving status code from `qrexec-agent` (whether the process
creation succeeded). There is also the `-l local_program` flag -- with it,
`qrexec-client` passes stdin/stdout of the remote process to the (spawned
for this purpose) `local_program`, not to its own stdin/stdout.
The `qvm-run` command is heavily based on `qrexec-client`. It also takes care
of additional activities, e.g. starting the domain if it is not up yet and
starting the GUI daemon. Thus, it is usually more convenient to use `qvm-run`.
There can be an almost arbitrary number of `qrexec-client` processes for a
domain (i.e., connected to the same `qrexec-daemon`, hence the same domain) --
their data is multiplexed independently. The number of available vchan channels
is the limiting factor here; it depends on the underlying hypervisor.
## Qubes RPC services ##
Some tasks (like inter-VM file copy) share the same rpc-like structure:
a process in one VM (say, the file sender) needs to invoke and send data to or
receive data from some process in another VM (say, the file receiver). Thus, the
Qubes RPC framework was created, facilitating such actions.
Obviously, inter-VM communication must be tightly controlled to prevent one
VM from taking control over another, possibly more privileged, VM. Therefore,
the design decision was made to pass all control communication via dom0,
which can enforce proper authorization. It is then natural to reuse the
already-existing qrexec framework.
Also, note that bare qrexec provides `VM <-> dom0` connectivity, but the
command execution is always initiated by dom0. There are cases when a VM needs
to invoke and send data to a command in dom0 (e.g. to pass information on
newly installed `.desktop` files). Thus, the framework allows dom0 to be
the rpc target as well.
Thanks to the framework, RPC programs are very simple -- both rpc client
and server just use their stdin/stdout to pass data. The framework does all
the inner work to connect these processes to each other via `qrexec-daemon`
and `qrexec-agent`. Additionally, disposable VMs are tightly integrated --
rpc to a DisposableVM is identical to rpc to a normal domain, all one needs
is to pass `$dispvm` as the remote domain name.
## Qubes RPC administration ##
(*TODO: fix for non-linux dom0*)
In dom0, there is a bunch of files in `/etc/qubes-rpc/policy` directory,
whose names describe the available rpc actions. Their content is the rpc
access policy database. Currently defined actions are:
qubes.Filecopy
qubes.OpenInVM
qubes.ReceiveUpdates
qubes.SyncAppMenus
qubes.VMShell
qubes.ClipboardPaste
qubes.Gpg
qubes.NotifyUpdates
qubes.PdfConvert
These files contain lines with the following format:
srcvm destvm (allow|deny|ask)[,user=user_to_run_as][,target=VM_to_redirect_to]
You can specify srcvm and destvm by name, or by one of the `$anyvm`, `$dispvm`,
or `dom0` reserved keywords (note that the string `dom0` does not match the
`$anyvm` pattern; all other names do). Only the `$anyvm` keyword makes sense in
the srcvm field (service calls from dom0 are currently always allowed; `$dispvm`
means "a new VM created for this particular request," so it is never the
source of a request). Currently there is no way to specify the source VM by
type. Whenever an rpc request for action X is received, the first line in
`/etc/qubes-rpc/policy/X` that matches srcvm/destvm is consulted to determine
whether to allow the rpc, what user account the program should run under in the
target VM, and what VM to redirect the execution to. If the policy file does
not exist, the user is prompted to create one; if there is still no policy file
after prompting, the action is denied.
In the target VM, the `/etc/qubes-rpc/RPC_ACTION_NAME` must exist, containing
the file name of the program that will be invoked.
In the src VM, one should invoke the client via:
/usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME rpc_client_path client arguments
Note that only stdin/stdout is passed between the rpc server and client --
notably, no command line arguments are passed. The source VM name is specified
by the `QREXEC_REMOTE_DOMAIN` environment variable. By default, the stderr of
the client and server is logged to the respective `/var/log/qubes/qrexec.XID` files.
Be very careful when coding and adding a new rpc service. Unless the
offered functionality equals full control over the target (as is the case
with, e.g., the `qubes.VMShell` action), any vulnerability in an rpc server can
be fatal to Qubes security. On the other hand, this mechanism allows one to
delegate the processing of untrusted input to less privileged (or throwaway)
AppVMs, so wise use of it increases security.
### Revoking "Yes to All" authorization ###
Qubes RPC policy supports "ask" action. This will prompt the user whether given
RPC call should be allowed. That prompt window has also "Yes to All" option,
which will allow the action and add new entry to the policy file, which will
unconditionally allow further calls for given service-srcVM-dstVM tuple.
In order to remove such authorization, issue this command from a dom0 terminal
(for `qubes.Filecopy` service):
sudo nano /etc/qubes-rpc/policy/qubes.Filecopy
and then remove the first line(s) (before the first `##` comment) which are
the "Yes to All" results.
### Qubes RPC example ###
We will show the necessary files to create an rpc call that adds two integers
on the target and returns the result to the invoker.
* rpc client code (`/usr/bin/our_test_add_client`):
#!/bin/sh
echo $1 $2 # pass data to rpc server
exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other rpc endpoint
* rpc server code (`/usr/bin/our_test_add_server`)
#!/bin/sh
read arg1 arg2 # read from stdin, which is received from the rpc client
echo $(($arg1+$arg2)) # print to stdout - so, pass to the rpc client
* policy file in dom0 (`/etc/qubes-rpc/policy/test.Add`)
$anyvm $anyvm ask
* server path definition (`/etc/qubes-rpc/test.Add`)
/usr/bin/our_test_add_server
* invoke rpc via
/usr/lib/qubes/qrexec-client-vm target_vm test.Add /usr/bin/our_test_add_client 1 2
and we should get "3" as answer, after dom0 allows it.
**Note:** For a real world example of writing a qrexec service, see this
[blog post](http://theinvisiblethings.blogspot.com/2013/02/converting-untrusted-pdfs-into-trusted.html).
# Qubes RPC internals #
(*This is about the implementation of qrexec v3. For the implementation of
qrexec v2, see [here](/doc/qrexec2/#qubes-rpc-internals).*)
The qrexec framework consists of a number of processes communicating with each
other using a common IPC protocol (described in detail below). Components
residing in the same domain use pipes as the underlying transport medium,
while components in separate domains use a vchan link.
## Dom0 tools implementation ##
* `/usr/lib/qubes/qrexec-daemon`: One instance is required for every active
domain. Responsible for:
* Handling execution and service requests from **dom0** (source:
`qrexec-client`).
* Handling service requests from the associated domain (source:
`qrexec-client-vm`, then `qrexec-agent`).
* Command line: `qrexec-daemon domain-id domain-name [default user]`
* `domain-id`: Numeric Qubes ID assigned to the associated domain.
* `domain-name`: Associated domain name.
* `default user`: Optional. If passed, `qrexec-daemon` uses this user as
default for all execution requests that don't specify one.
* `/usr/lib/qubes/qrexec-policy`: Internal program used to evaluate the
RPC policy and decide whether an RPC call should be allowed.
* `/usr/lib/qubes/qrexec-client`: Used to pass execution and service requests
to `qrexec-daemon`. Command line parameters:
* `-d target-domain-name`: Specifies the target for the execution/service
request.
* `-l local-program`: Optional. If present, `local-program` is executed
and its stdout/stdin are used when sending/receiving data to/from the
remote peer.
* `-e`: Optional. If present, stdout/stdin are not connected to the remote
peer. Only process creation status code is received.
* `-c <request-id,src-domain-name,src-domain-id>`: used for connecting
a VM-VM service request by `qrexec-policy`. Details described below in
the service example.
* `cmdline`: Command line to pass to `qrexec-daemon` as the
execution/service request. Service request format is described below in
the service example.
**Note:** None of the above tools are designed to be used by users directly.
## VM tools implementation ##
* `qrexec-agent`: One instance runs in each active domain. Responsible for:
* Handling service requests from `qrexec-client-vm` and passing them to
connected `qrexec-daemon` in dom0.
* Executing associated `qrexec-daemon` execution/service requests.
* Command line parameters: none.
* `qrexec-client-vm`: Runs in an active domain. Used to pass service requests
to `qrexec-agent`.
* Command line: `qrexec-client-vm target-domain-name service-name local-program [local program arguments]`
* `target-domain-name`: Target domain for the service request. Source is
the current domain.
* `service-name`: Requested service name.
* `local-program`: `local-program` is executed locally and its stdin/stdout
are connected to the remote service endpoint.
## Qrexec protocol details ##
Qrexec protocol is message-based. All messages share a common header followed
by an optional data packet.
/* uniform for all peers, data type depends on message type */
struct msg_header {
uint32_t type; /* message type */
uint32_t len; /* data length */
};
When two peers establish connection, the server sends `MSG_HELLO` followed by
`peer_info` struct:
struct peer_info {
uint32_t version; /* qrexec protocol version */
};
The client should then reply with its own `MSG_HELLO` and `peer_info`. If
the protocol versions don't match, the connection is closed.
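A minimal sketch of this handshake over a plain file descriptor is shown below; it is only an illustration of the message layout described above - the constant values are made up, and the real implementation exchanges these messages over vchan.

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>

    #define MSG_HELLO        100   /* illustrative value, not the real constant */
    #define PROTOCOL_VERSION   2   /* illustrative */

    struct msg_header { uint32_t type; uint32_t len; };
    struct peer_info  { uint32_t version; };

    /* Announce our protocol version to the peer. */
    static int send_hello(int fd)
    {
        struct msg_header hdr  = { MSG_HELLO, sizeof(struct peer_info) };
        struct peer_info  info = { PROTOCOL_VERSION };
        if (write(fd, &hdr, sizeof hdr) != sizeof hdr)       return -1;
        if (write(fd, &info, sizeof info) != sizeof info)    return -1;
        return 0;
    }

    /* Read the peer's MSG_HELLO and verify that the versions match. */
    static int recv_hello(int fd)
    {
        struct msg_header hdr;
        struct peer_info  info;
        if (read(fd, &hdr, sizeof hdr) != sizeof hdr)        return -1;
        if (hdr.type != MSG_HELLO || hdr.len != sizeof info) return -1;
        if (read(fd, &info, sizeof info) != sizeof info)     return -1;
        return info.version == PROTOCOL_VERSION ? 0 : -1;    /* mismatch: close */
    }

    int main(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
            return 1;
        /* One end plays the server, the other the client, then vice versa. */
        if (send_hello(sv[0]) || recv_hello(sv[1]) ||
            send_hello(sv[1]) || recv_hello(sv[0])) {
            fprintf(stderr, "handshake failed\n");
            return 1;
        }
        puts("handshake ok");
        return 0;
    }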
(*TODO: fallback for backwards compatibility, don't do handshake in the
same domain?*)
Details of all possible use cases and the messages involved are described below.
### dom0: request execution of `some_command` in domX and pass stdin/stdout ###
- **dom0**: `qrexec-client` is invoked in **dom0** as follows:
`qrexec-client -d domX [-l local_program] user:some_command`
- `user` may be substituted with the literal `DEFAULT`. In that case,
default Qubes user will be used to execute `some_command`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable
to **domX**.
- **dom0**: If `local_program` is set, `qrexec-client` executes it and uses
that child's stdin/stdout in place of its own when exchanging data with
`qrexec-agent` later.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_HELLO` header followed by `peer_info`
to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by
`peer_info` to `qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by
`exec_params` to `qrexec-daemon`.
/* variable size */
struct exec_params {
uint32_t connect_domain; /* target domain id */
uint32_t connect_port; /* target vchan port for i/o exchange */
char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
In this case, `connect_domain` and `connect_port` are set to 0.
- **dom0**: `qrexec-daemon` replies to `qrexec-client` with
`MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline`
field. `connect_domain` is set to Qubes ID of **domX** and `connect_port`
is set to a vchan port allocated by `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header followed
by `exec_params` to the associated **domX** `qrexec-agent` over
vchan. `connect_domain` is set to 0 (**dom0**), `connect_port` is the same
as sent to `qrexec-client`. `cmdline` is unchanged except that the literal
`DEFAULT` is replaced with actual user name, if present.
- **dom0**: `qrexec-client` disconnects from `qrexec-daemon`.
- **dom0**: `qrexec-client` starts a vchan server using the details received
from `qrexec-daemon` and waits for connection from **domX**'s `qrexec-agent`.
- **domX**: `qrexec-agent` receives `MSG_EXEC_CMDLINE` header followed by
`exec_params` from `qrexec-daemon` over vchan.
- **domX**: `qrexec-agent` connects to `qrexec-client` over vchan using the
details from `exec_params`.
- **domX**: `qrexec-agent` executes `some_command` in **domX** and connects
the child's stdin/stdout to the data vchan. If the process creation fails,
`qrexec-agent` sends `MSG_DATA_EXIT_CODE` to `qrexec-client` followed by
the status code (**int**) and disconnects from the data vchan.
- Data read from `some_command`'s stdout is sent to the data vchan using
`MSG_DATA_STDOUT` by `qrexec-agent`. `qrexec-client` passes data received as
`MSG_DATA_STDOUT` to its own stdout (or to `local_program`'s stdin if used).
- `qrexec-client` sends data read from local stdin (or `local_program`'s
stdout if used) to `qrexec-agent` over the data vchan using
`MSG_DATA_STDIN`. `qrexec-agent` passes data received as `MSG_DATA_STDIN`
to `some_command`'s stdin.
- `MSG_DATA_STDOUT` or `MSG_DATA_STDIN` with the data `len` field set to 0 in
`msg_header` is an EOF marker. A peer receiving such a message should close the
associated input/output pipe (see the sketch after this list).
- When `some_command` terminates, **domX**'s `qrexec-agent` sends
`MSG_DATA_EXIT_CODE` header to `qrexec-client` followed by the exit code
(**int**). `qrexec-agent` then disconnects from the data vchan.
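The data-passing convention above (chunks of `MSG_DATA_STDOUT` terminated by a zero-length EOF marker) can be sketched as follows. The message type value is made up for illustration, and a plain file descriptor stands in for the data vchan; this is not the actual agent code.

    #include <stdint.h>
    #include <unistd.h>

    #define MSG_DATA_STDOUT 101   /* illustrative value, not the real constant */

    struct msg_header { uint32_t type; uint32_t len; };

    /* Forward one chunk of the child's stdout; len == 0 is the EOF marker
     * telling the peer to close the associated pipe. */
    int send_stdout_chunk(int chan_fd, const char *buf, uint32_t len)
    {
        struct msg_header hdr = { MSG_DATA_STDOUT, len };
        if (write(chan_fd, &hdr, sizeof hdr) != sizeof hdr)
            return -1;
        if (len && write(chan_fd, buf, len) != (ssize_t)len)
            return -1;
        return 0;
    }

    /* Pump the local process's stdout into the channel until it closes,
     * then send the zero-length EOF marker. */
    int pump_stdout(int child_stdout_fd, int chan_fd)
    {
        char buf[4096];
        ssize_t n;
        while ((n = read(child_stdout_fd, buf, sizeof buf)) > 0)
            if (send_stdout_chunk(chan_fd, buf, (uint32_t)n))
                return -1;
        return send_stdout_chunk(chan_fd, NULL, 0);   /* EOF marker */
    }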
### domY: invoke execution of qubes service `qubes.SomeRpc` in domX and pass stdin/stdout ###
- **domY**: `qrexec-client-vm` is invoked as follows:
`qrexec-client-vm domX qubes.SomeRpc local_program [params]`
- **domY**: `qrexec-client-vm` connects to `qrexec-agent` (via local
socket/named pipe).
- **domY**: `qrexec-client-vm` sends `trigger_service_params` data to
`qrexec-agent` (without filling the `request_id` field):
struct trigger_service_params {
char service_name[64];
char target_domain[32];
struct service_params request_id; /* service request id */
};
struct service_params {
char ident[32];
};
- **domY**: `qrexec-agent` allocates a locally-unique (for this domain)
`request_id` (let's say `13`) and fills it in the `trigger_service_params`
struct received from `qrexec-client-vm`.
- **domY**: `qrexec-agent` sends `MSG_TRIGGER_SERVICE` header followed by
`trigger_service_params` to `qrexec-daemon` in **dom0** via vchan.
- **dom0**: **domY**'s `qrexec-daemon` executes `qrexec-policy`: `qrexec-policy
domY_id domY domX qubes.SomeRpc 13`.
- **dom0**: `qrexec-policy` evaluates if the RPC should be allowed or
denied. If the action is allowed it returns `0`, if the action is denied it
returns `1`.
- **dom0**: **domY**'s `qrexec-daemon` checks the exit code of `qrexec-policy`.
- If `qrexec-policy` returned **not** `0`: **domY**'s `qrexec-daemon`
sends `MSG_SERVICE_REFUSED` header followed by `service_params` to
**domY**'s `qrexec-agent`. `service_params.ident` is identical to the one
received. **domY**'s `qrexec-agent` disconnects its `qrexec-client-vm`
and RPC processing is finished.
- If `qrexec-policy` returned `0`, RPC processing continues.
- **dom0**: if `qrexec-policy` allowed the RPC, it executes `qrexec-client
-d domX -c 13,domY,domY_id user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable
to **domX**.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_HELLO` header followed by
`peer_info` to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by
`peer_info` to **domX**'s`qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by
`exec_params` to **domX**'s`qrexec-daemon`
/* variable size */
struct exec_params {
uint32_t connect_domain; /* target domain id */
uint32_t connect_port; /* target vchan port for i/o exchange */
char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
In this case, `connect_domain` is set to id of **domY** (from the `-c`
parameter) and `connect_port` is set to 0. `cmdline` field contains the
RPC to execute, in this case `user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: **domX**'s `qrexec-daemon` replies to `qrexec-client` with
`MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline`
field. `connect_domain` is set to Qubes ID of **domX** and `connect_port`
is set to a vchan port allocated by **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header
followed by `exec_params` to **domX**'s `qrexec-agent`. `connect_domain`
and `connect_port` fields are the same as in the step above. `cmdline` is
set to the one received from `qrexec-client`, in this case `user:QUBESRPC
qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` disconnects from **domX**'s `qrexec-daemon`
after receiving connection details.
- **dom0**: `qrexec-client` connects to **domY**'s `qrexec-daemon` and
exchanges `MSG_HELLO` as usual.
- **dom0**: `qrexec-client` sends `MSG_SERVICE_CONNECT` header followed by
`exec_params` to **domY**'s `qrexec-daemon`. `connect_domain` is set to ID
of **domX** (received from **domX**'s `qrexec-daemon`) and `connect_port` is
the one received as well. `cmdline` is set to request ID (`13` in this case).
- **dom0**: **domY**'s `qrexec-daemon` sends `MSG_SERVICE_CONNECT` header
followed by `exec_params` to **domY**'s `qrexec-agent`. Data fields are
unchanged from the step above.
- **domY**: `qrexec-agent` starts a vchan server on the port received in
the step above. It acts as a `qrexec-client` in this case because this is
a VM-VM connection.
- **domX**: `qrexec-agent` connects to the vchan server of **domY**'s
`qrexec-agent` (connection details were received before from **domX**'s
`qrexec-daemon`).
- After that, connection follows the flow of the previous example (dom0-VM).
@ -1,8 +1,9 @@
---
layout: doc
title: Source Code
permalink: /en/doc/source-code/
permalink: /doc/source-code/
redirect_from:
- /en/doc/source-code/
- /doc/SourceCode/
- /wiki/SourceCode/
---
52
developers/system-doc.md Normal file
@ -0,0 +1,52 @@
---
layout: doc
title: System Documentation
permalink: /doc/system-doc/
redirect_from:
- /en/doc/system-doc/
- /doc/SystemDoc/
- /wiki/SystemDoc/
---
System Documentation for Developers
===================================
Fundamentals
------------
* [Qubes OS Architecture Overview](/doc/qubes-architecture/)
* [Qubes OS Architecture Spec v0.3 [PDF]](/attachment/wiki/QubesArchitecture/arch-spec-0.3.pdf)
(The original 2009 document that started this all...)
* [Security-critical elements of Qubes OS](/doc/security-critical-code/)
* [Qrexec: command execution in VMs](/doc/qrexec3/)
* [Qubes GUI virtualization protocol](/doc/gui/)
* [Networking in Qubes](/doc/qubes-net/)
* [Implementation of template sharing and updating](/doc/template-implementation/)
Services
--------
* [Inter-domain file copying](/doc/qfilecopy/) (deprecates [`qfileexchgd`](/doc/qfileexchgd/))
* [Dynamic memory management in Qubes](/doc/qmemman/)
* [Implementation of DisposableVMs](/doc/dvm-impl/)
* [Article about disposable VMs](http://theinvisiblethings.blogspot.com/2010/06/disposable-vms.html)
* [Dom0 secure update mechanism](/doc/dom0-secure-updates/)
* VM secure update mechanism (forthcoming)
Debugging
---------
* [Profiling python code](/doc/profiling/)
* [Test environment in separate machine for automatic tests](/doc/test-bench/)
* [Automated tests](/doc/automated-tests/)
* [VM-dom0 internal configuration interface](/doc/vm-interface/)
* [Debugging Windows VMs](/doc/windows-debugging/)
Building
--------
* [Building Qubes](/doc/qubes-builder/) (["API" Details](/doc/qubes-builder-details/))
* [Development Workflow](/doc/development-workflow/)
* [KDE Dom0 packages for Qubes](/doc/kde-dom0/)
* [Building Qubes OS 3.0 ISO](/doc/qubes-r3-building/)
* [Building USB passthrough support (experimental)](/doc/usbvm/)
* [Building a TemplateVM based on a new OS (ArchLinux example)](/doc/building-non-fedora-template/)
* [Building the Archlinux Template](/doc/building-archlinux-template/)
@ -1,8 +1,9 @@
---
layout: doc
title: Version Scheme
permalink: /en/doc/version-scheme/
permalink: /doc/version-scheme/
redirect_from:
- /en/doc/version-scheme/
- /doc/VersionScheme/
- /wiki/VersionScheme/
---
171
doc.md Normal file
@ -0,0 +1,171 @@
---
layout: default
title: Qubes OS Documentation
permalink: /doc/
redirect_from:
- /en/doc/
- /doc/UserDoc/
- /wiki/UserDoc/
- /doc/QubesDocs/
- /wiki/QubesDocs/
---
Qubes OS Documentation
======================
The Basics
----------
* [A Simple Introduction to Qubes](/intro/)
* [Getting Started](/doc/getting-started/)
* [Users' FAQ](/doc/user-faq/)
* [Mailing Lists](/doc/mailing-lists/)
* [Further reading: How is Qubes different from...?](http://blog.invisiblethings.org/2012/09/12/how-is-qubes-os-different-from.html)
* [Further reading: Why Qubes is more than a collection of VMs](http://www.invisiblethingslab.com/resources/2014/Software_compartmentalization_vs_physical_separation.pdf)
Choosing Your Hardware
----------------------
* [System Requirements](/doc/system-requirements/)
* [Hardware Compatibility List (HCL)](/hcl)
* [Qubes-Certified Laptops](/doc/certified-laptops/)
Installing Qubes
----------------
* [Use Qubes without installing: Qubes Live USB (alpha)](https://groups.google.com/d/msg/qubes-users/IQdCEpkooto/iyMh3LuzCAAJ)
* [How to Install Qubes](/doc/installation-guide/)
* [Qubes Downloads](/downloads/)
* [Why and How to Verify Signatures](/doc/verifying-signatures/)
* [Security Considerations when Installing](/doc/install-security/)
Common Tasks
------------
* [Copying and Pasting Text Between Domains](/doc/copy-paste/)
* [Copying and Moving Files Between Domains](/doc/copying-files/)
* [Copying Files to and from dom0](/doc/copy-to-dom0/)
* [Updating Software in dom0](/doc/software-update-dom0/)
* [Updating and Installing Software in VMs](/doc/software-update-vm/)
* [Backup, Restoration, and Migration](/doc/backup-restore/)
* [Disposable VMs](/doc/dispvm/)
* [Mounting USB Drives to AppVMs](/doc/stick-mounting/)
* [Recording Optical Discs](/doc/recording-optical-discs/)
* [Managing Application Shortcuts](/doc/managing-appvm-shortcuts/)
* [Enabling Fullscreen Mode](/doc/full-screen-mode/)
Managing Operating Systems within Qubes
---------------------------------------
* [TemplateVMs](/doc/templates/)
* [Templates: Fedora - minimal](/doc/templates/fedora-minimal/)
* [Templates: Debian](/doc/templates/debian/)
* [Templates: Archlinux](/doc/templates/archlinux/)
* [Templates: Ubuntu](/doc/templates/ubuntu/)
* [Templates: Whonix](/doc/templates/whonix/)
* [Installing and Using Windows-based AppVMs (Qubes R2 Beta 3 and later)](/doc/windows-appvms/)
* [Creating and Using HVM and Windows Domains (Qubes R2+)](/doc/hvm-create/)
* [Advanced options and troubleshooting of Qubes Tools for Windows (R3)](/doc/windows-tools-3/)
* [Advanced options and troubleshooting of Qubes Tools for Windows (R2)](/doc/windows-tools-2/)
* [Uninstalling Qubes Tools for Windows 2.x](/doc/uninstalling-windows-tools-2/)
* [Upgrading the Fedora 21 Template](/doc/fedora-template-upgrade-21/)
* [Upgrading the Fedora 20 Template](/doc/fedora-template-upgrade-20/)
* [Upgrading the Fedora 18 Template](/doc/fedora-template-upgrade-18/)
* [Tips for Using Linux in an HVM](/doc/linux-hvm-tips/)
* [Creating NetBSD VM](https://groups.google.com/group/qubes-devel/msg/4015c8900a813985)
Security Guides
---------------
* [Qubes OS Project Security Information](/security/)
* [Security Guidelines](/doc/security-guidelines/)
* [Understanding Qubes Firewall](/doc/qubes-firewall/)
* [Understanding and Preventing Data Leaks](/doc/data-leaks/)
* [Installing Anti Evil Maid](/doc/anti-evil-maid/)
* [Using Multi-factor Authentication with Qubes](/doc/multifactor-authentication/)
* [Using GPG more securely in Qubes: Split GPG](/doc/split-gpg/)
* [Configuring YubiKey for user authentication](/doc/yubi-key/)
* [Note regarding password-less root access in VM](/doc/vm-sudo/)
Privacy Guides
--------------
* [Whonix for privacy & anonymization](/en/doc/privacy/whonix/)
* [Install Whonix in Qubes](/en/doc/privacy/install-whonix/)
* [Updating Whonix in Qubes](/en/doc/privacy/updating-whonix/)
* [Customizing Whonix](/en/doc/privacy/customizing-whonix/)
* [Uninstall Whonix from Qubes](/en/doc/privacy/uninstall-whonix/)
* [How to Install a Transparent Tor ProxyVM (TorVM)](/en/doc/privacy/torvm/)
* [How to set up a ProxyVM as a VPN Gateway](/en/doc/privacy/vpn/)
Configuration Guides
--------------------
* [Configuration Files](/doc/config-files/)
* [How to Install a Transparent Tor ProxyVM (TorVM)](/doc/torvm/)
* [How to set up a ProxyVM as a VPN Gateway](/doc/vpn/)
* [Storing AppVMs on Secondary Drives](/doc/secondary-storage/)
* [Where are my external storage devices mounted?](/doc/external-device-mount-point/)
* [Resizing AppVM and HVM Disk Images](/doc/resize-disk-image/)
* [Extending `root.img` Size](/doc/resize-root-disk-image/)
* [Installing ZFS in Qubes](/doc/zfs/)
* [Mutt Guide](/doc/mutt/)
* [Postfix Guide](/doc/postfix/)
* [Fetchmail Guide](/doc/fetchmail/)
* [Creating Custom NetVMs and ProxyVMs](http://theinvisiblethings.blogspot.com/2011/09/playing-with-qubes-networking-for-fun.html)
* [How to make proxy for individual tcp connection from networkless VM](https://groups.google.com/group/qubes-devel/msg/4ca950ab6d7cd11a)
* [HTTP filtering proxy in Qubes firewall VM](https://groups.google.com/group/qubes-devel/browse_thread/thread/5252bc3f6ed4b43e/d881deb5afaa2a6c#39c95d63fccca12b)
* [Adding Bridge Support to the NetVM (EXPERIMENTAL)](/doc/network-bridge-support/)
* [Assigning PCI Devices to AppVMs](/doc/assigning-devices/)
* [Enabling TRIM for SSD disks](/doc/disk-trim/)
* [Configuring a Network Printer](/doc/network-printer/)
* [Using External Audio Devices](/doc/external-audio/)
* [Booting with GRUB2 and GPT](https://groups.google.com/group/qubes-devel/browse_thread/thread/e4ac093cabd37d2b/d5090c20d92c4128#d5090c20d92c4128)
* [Rxvt Guide](/doc/rxvt/)
* [Managing VM kernel](/doc/managing-vm-kernel/)
* [Salt management stack](/doc/salt/)
Customization Guides
--------------------
* [DispVM Customization](/doc/dispvm-customization/)
* [Customizing Fedora minimal templates](/doc/fedora-minimal-template-customization)
* [Customizing Windows 7 templates](/doc/windows-template-customization)
* [Using KDE in dom0](/doc/kde/)
* [Installing XFCE in dom0](/doc/xfce/)
* [Language Localization](/doc/language-localization/)
Troubleshooting
---------------
* [Home directory is out of disk space error](/doc/out-of-memory/)
* [Installing on system with new AMD GPU (missing firmware problem)](https://groups.google.com/group/qubes-devel/browse_thread/thread/e27a57b0eda62f76)
* [How to install an Nvidia driver in dom0](/doc/install-nvidia-driver/)
* [Solving problems with Macbook Air 2012](https://groups.google.com/group/qubes-devel/browse_thread/thread/b8b0d819d2a4fc39/d50a72449107ab21#8a9268c09d105e69)
* [Getting Sony Vaio Z laptop to work with Qubes](/doc/sony-vaio-tinkering/)
* [Getting Lenovo 450 to work with Qubes](/doc/lenovo450-tinkering/)
Reference Pages
---------------
* [Dom0 Command-Line Tools](/doc/dom0-tools/)
* [DomU Command-Line Tools](/doc/vm-tools/)
* [Glossary of Qubes Terminology](/doc/glossary/)
* [Qubes Service Framework](/doc/qubes-service/)
* [Command Execution in VMs (and Qubes RPC)](/doc/qrexec/)
For Developers
--------------
* [System Documentation](/doc/system-doc/)
* [Developers' FAQ](/doc/devel-faq/)
* [How to Contribute to the Qubes OS Project](/doc/contributing/)
* [Reporting Security Issues](/security/)
* [Reporting Bugs](/doc/reporting-bugs/)
* [Source Code](/doc/source-code/)
* [Qubes OS Version Scheme](/doc/version-scheme/)
* [Coding Guidelines](/doc/coding-style/)
* [Documentation Guidelines](/doc/doc-guidelines/)
* [Books for Developers](/doc/devel-books/)
* [Research Papers](/doc/qubes-research/)
* [Qubes OS License](/doc/license/)
@ -1,80 +0,0 @@
---
layout: doc
title: Introduction
permalink: /en/intro/
redirect_from:
- /intro/
- "/doc/SimpleIntro/"
- "/wiki/SimpleIntro/"
---
A Simple Introduction to Qubes
==============================
This is a short, non-technical introduction to Qubes intended for a popular audience. (If you just want to quickly gain a basic understanding of what Qubes is all about, you're in the right place!)
What is Qubes?
--------------
Qubes is a security-oriented operating system (OS). The OS is the software which runs all the other programs on a computer. Some examples of popular OSes are Microsoft Windows, Mac OS X, Android, and iOS. Qubes is free and open-source software (FOSS). This means that everyone is free to use, copy, and change the software in any way. It also means that the source code is openly available so others can contribute to and audit it.
Why is OS security important?
-----------------------------
Most people use an operating system like Windows or OS X on their desktop and laptop computers. These OSes are popular because they tend to be easy to use and usually come pre-installed on the computers people buy. However, they present problems when it comes to security. For example, you might open an innocent-looking email attachment or website, not realizing that you're actually allowing malware (malicious software) to run on your computer. Depending on what kind of malware it is, it might do anything from showing you unwanted advertisements to logging your keystrokes to taking over your entire computer. This could jeopardize all the information stored on or accessed by this computer, such as health records, confidential communications, or thoughts written in a private journal. Malware can also interfere with the activities you perform with your computer. For example, if you use your computer to conduct financial transactions, the malware might allow its creator to make fraudulent transactions in your name.
Aren't antivirus programs and firewalls enough?
-----------------------------------------------
Unfortunately, conventional security approaches like antivirus programs and (software and/or hardware) firewalls are no longer enough to keep out sophisticated attackers. For example, nowadays it's common for malware creators to check to see if their malware is recognized by any popular antivirus programs. If it's recognized, they scramble their code until it's no longer recognizable by the antivirus programs, then send it out. The best antivirus programs will subsequently get updated once the antivirus programmers discover the new threat, but this usually occurs at least a few days after the new attacks start to appear in the wild. By then, it's typically too late for those who have already been compromised. In addition, bugs are inevitably discovered in the common software we all use (such as our web browsers), and no antivirus program or firewall can prevent all of these bugs from being exploited.
How does Qubes provide security?
--------------------------------
Qubes allows you to separate the various parts of your digital life into securely isolated virtual machines (VMs). A VM is basically a simulated computer with its own OS which runs as software on your physical computer. You can think of a VM as a *computer within a computer*. This allows you to have, for example, one VM for visiting untrusted websites and a different VM for doing online banking. This way, if your untrusted browsing VM gets compromised by a malware-laden website, your online banking activities won't be at risk. Similarly, if you're concerned about risky email attachments, Qubes can make it so that every attachment gets opened in its own single-use, "disposable" VM.
In general, Qubes takes an approach called **security by isolation**, which in this context means keeping the things you do on your computer securely isolated in different VMs so that one VM getting compromised won't affect the others. This allows you to do everything on a single physical computer without having to worry about one successful cyberattack taking down your entire digital life in one fell swoop.
How does Qubes compare to using a "live CD" OS?
-----------------------------------------------
Booting your computer from a live CD (or DVD) when you need to perform sensitive activities can certainly be more secure than simply using your main OS, but this method still preserves many of the risks of conventional OSes. For example, popular live OSes (such as [Tails](https://tails.boum.org/) and other Linux distributions) are still **monolithic** in the sense that all software is still running in the same OS. This means, once again, that if your session is compromised, then all the data and activities performed within that same session are also potentially compromised.
How does Qubes compare to running VMs in a conventional OS?
---------------------------------------------------------
Not all virtual machine software is equal when it comes to security. You may have used or heard of VMs in relation to software like VirtualBox or VMware Workstation. These are known as "Type 2" or "hosted" hypervisors. (The **hypervisor** is the software, firmware, or hardware that creates and runs virtual machines.) These programs are popular because they're designed primarily to be easy to use and run under popular OSes like Windows (which is called the **host** OS, since it "hosts" the VMs). However, the fact that Type 2 hypervisors run under the host OS means that they're really only as secure as the host OS itself. If the host OS is ever compromised, then any VMs it hosts are also effectively compromised.
By contrast, Qubes uses a "Type 1" or "bare metal" hypervisor called [Xen](http://www.xenproject.org). Instead of running inside an OS, Type 1 hypervisors run directly on the "bare metal" of the hardware. This means that an attacker must be capable of subverting the hypervisor itself in order to compromise the entire system, which is vastly more difficult.
Qubes makes it so that multiple VMs running under a Type 1 hypervisor can be securely used as an integrated OS. For example, it puts all of your application windows on the same desktop with special colored borders indicating the trust levels of their respective VMs. It also allows for things like secure copy/paste operations between VMs, securely copying and transferring files between VMs, and secure networking between VMs and the Internet.
How does Qubes compare to using a separate physical machine?
------------------------------------------------------------
Using a separate physical computer for sensitive activities can certainly be more secure than using one computer with a conventional OS for everything, but there are still risks to consider. Briefly, here are some of the main pros and cons of this approach relative to Qubes:
Pros:
- Physical separation doesn't rely on a hypervisor. (It's very unlikely that an attacker will break out of Qubes' hypervisor, but if she were to manage to do so, she could potentially gain control over the entire system.)
- Physical separation can be a natural complement to physical security. (For example, you might find it natural to lock your secure laptop in a safe when you take your unsecure laptop out with you.)
Cons:
- Physical separation can be cumbersome and expensive, since we may have to obtain and set up a separate physical machine for each security level we need.
- There's generally no secure way to transfer data between physically separate computers running conventional OSes. (Qubes has a secure inter-VM file transfer system to handle this.)
- Physically separate computers running conventional OSes are still independently vulnerable to most conventional attacks due to their monolithic nature.
- Malware which can bridge air gaps has existed for several years now and is becoming increasingly common.
(For more on this topic, please see the paper [Software compartmentalization vs. physical separation](http://www.invisiblethingslab.com/resources/2014/Software_compartmentalization_vs_physical_separation.pdf).)
More information
----------------
This page is just a brief sketch of what Qubes is all about, and many technical details have been omitted here for the sake of presentation.
- If you're a current or potential Qubes user, you may want to check out the [documentation](/en/doc/) and the [FAQ](/en/doc/user-faq/).
- If you're a developer, there's dedicated [documentation](/en/doc/system-doc/) and an [FAQ](/en/doc/devel-faq/) just for you.
- Ready to give Qubes a try? Head on over to the [downloads page](/downloads/).
- Once you've installed Qubes, here's a guide on [getting started](/en/doc/getting-started/).

View File

@ -1,96 +0,0 @@
---
layout: doc
title: Backup, Restoration, and Migration
permalink: /en/doc/backup-restore/
redirect_from:
- /doc/BackupRestore/
- /wiki/BackupRestore/
---
Qubes Backup, Restoration, and Migration
========================================
1. [Qubes Backup, Restoration, and Migration](#QubesBackupRestorationandMigration)
1. [Creating a Backup](#CreatingaBackup)
2. [Restoring from a Backup](#RestoringfromaBackup)
3. [Emergency Backup Recovery without Qubes](#EmergencyBackupRecoverywithoutQubes)
4. [Migrating Between Two Physical Machines](#MigratingBetweenTwoPhysicalMachines)
5. [Notes](#Notes)
With Qubes, it's easy to back up and restore your whole system, as well as to migrate between two physical machines.
As of Qubes R2B3, these functions are integrated into the Qubes VM Manager GUI. There are also two command-line tools available which perform the same functions: [qvm-backup](/en/doc/dom0-tools/qvm-backup/) and [qvm-backup-restore](/en/doc/dom0-tools/qvm-backup-restore/).
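For example, a minimal command-line sketch (the destination path and the timestamped backup name are purely illustrative; both tools prompt for a passphrase and accept further options described on their respective pages):

~~~
[user@dom0 ~]$ qvm-backup /mnt/removable/backups
[user@dom0 ~]$ qvm-backup-restore /mnt/removable/backups/qubes-backup-2015-12-02T120000
~~~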
Creating a Backup
-----------------
1. In **Qubes VM Manager**, click **System** on the menu bar, then click **Backup VMs** in the dropdown list. This brings up the **Qubes Backup VMs** window.
1. Move the AppVMs which you desire to back up to the right-hand **Selected** column. AppVMs in the left-hand **Available** column will not be backed up.
**Note:** An AppVM must be shut down in order to be backed up. Currently running AppVMs appear in red.
Once you have selected all desired AppVMs, click **Next**.
1. Select the destination for the backup:
    - If you wish to send your backup to a [USB mass storage device](/en/doc/stick-mounting/), select the device in the dropdown box next to **Device**. (This feature was removed in R3; instead, select the appropriate **Target AppVM** and mount the stick with one click in the file selection dialog.)
- If you wish to send your backup to a (currently running) AppVM, select the AppVM in the dropdown box next to **Target AppVM**.
You must also specify a directory on the device or in the AppVM, or a command to be executed in the AppVM as a destination for your backup. For example, if you wish to send your backup to the `~/backups` folder in the target AppVM, you would simply type `backups` in this field. This destination directory must already exist. If it does not exist, you must create it manually prior to backing up.
By specifying the appropriate directory as the destination in an AppVM, it is possible to send the backup directly to, e.g., a USB mass storage device attached to the AppVM. Likewise, it is possible to enter any command as a backup target by specifying the command as the destination in the AppVM. This can be used to send your backup directly to, e.g., a remote server using SSH.
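For instance, a hedged sketch of such a command (the host name and remote path are hypothetical; the command only needs to read the backup stream from its stdin):

~~~
ssh user@backup.example.com "cat > /home/user/qubes-backup-$(date +%F)"
~~~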
At this point, you must also choose whether to encrypt your backup by checking or unchecking the **Encrypt backup** box.
**Note:** It is strongly recommended that you opt to encrypt all backups which will be sent to untrusted destinations!
**Note:** The supplied passphrase is used for **both** encryption/decryption and integrity verification. If you decide not to encrypt your backup (by unchecking the **Encrypt backup** box), the passphrase you supply will be used **only** for integrity verification. If you supply a passphrase but do not check the **Encrypt backup** box, your backup will **not** be encrypted!
1. When you are ready, click **Next**. Qubes will proceed to create your backup. Once the progress bar has completed, you may click **Finish**.
Restoring from a Backup
-----------------------
1. In **Qubes VM Manager**, click **System** on the menu bar, then click **Restore VMs from backup** in the dropdown list. This brings up the **Qubes Restore VMs** window.
1. Select the source location of the backup to be restored:
- If your backup is located on a [USB mass storage device](/en/doc/stick-mounting/), select the device in the dropdown box next to **Device**.
- If your backup is located in a (currently running) AppVM, select the AppVM in the dropdown box next to **AppVM**.
You must also specify the directory in which the backup resides (or a command to be executed in an AppVM). If you followed the instructions in the previous section, "Creating a Backup," then your backup is most likely in the location you chose as the destination in step 3. For example, if you had chosen the `~/backups` directory of an AppVM as your destination in step 3, you would now select the same AppVM and again type `backups` into the **Backup directory** field.
**Note:** After you have typed the directory location of the backup in the **Backup directory** field, click the ellipsis button `...` to the right of the field.
1. There are three options you may select when restoring from a backup:
1. **ignore missing**: If any of the AppVMs in your backup depended upon a NetVM, ProxyVM, or TemplateVM which is not present in (i.e., "missing from") the current system, checking this box will ignore the fact that they are missing and restore the AppVMs anyway.
2. **ignore username mismatch**: This option applies only to the restoration of dom0's home directory. If your backup was created on a Qubes system which had a different dom0 username than the dom0 username of the current system, then checking this box will ignore the mismatch between the two usernames and proceed to restore the home directory anyway.
3. **skip dom0**: If this box is checked, dom0's home directory will not be restored from your backup.
1. If your backup is encrypted, you must check the **Encrypted backup** box. If a passphrase was supplied during the creation of your backup (regardless of whether it is encrypted), then you must supply it here.
**Note:** The passphrase which was supplied when the backup was created was used for **both** encryption/decryption and integrity verification. If the backup was not encrypted, the supplied passphrase is used only for integrity verification.
**Note:** An AppVM cannot be restored from a backup if an AppVM with the same name already exists on the current system. You must first remove or change the name of any AppVM with the same name in order to restore such an AppVM.
1. When you are ready, click **Next**. Qubes will proceed to restore from your backup. Once the progress bar has completed, you may click **Finish**.
Emergency Backup Recovery without Qubes
---------------------------------------
The Qubes backup system has been designed with emergency disaster recovery in mind. No special Qubes-specific tools are required to access data backed up by Qubes. In the event a Qubes system is unavailable, you can access your data on any GNU/Linux system with the following procedure.
For emergency restoration of backups created on Qubes R2 or newer, take a look [here](/doc/BackupEmergencyRestoreV3/). For backups created on earlier Qubes versions, take a look [here](/doc/BackupEmergencyRestoreV2/).
Migrating Between Two Physical Machines
---------------------------------------
In order to migrate your Qubes system from one physical machine to another, simply follow the backup procedure on the old machine, [install Qubes](/doc/QubesDownloads/) on the new machine, and follow the restoration procedure on the new machine. All of your settings and data will be preserved!
Notes
-----
- For the technical details of the backup system, please refer to [this thread](https://groups.google.com/d/topic/qubes-devel/TQr_QcXIVww/discussion).
- If working with symlinks, note the issues described in [this thread](https://groups.google.com/d/topic/qubes-users/EITd1kBHD30/discussion).

View File

@ -1,204 +0,0 @@
---
layout: doc
title: Development Workflow
permalink: /en/doc/development-workflow/
redirect_from:
- /doc/DevelopmentWorkflow/
- /wiki/DevelopmentWorkflow/
---
Development Workflow
====================
A workflow for developing Qubes OS.
First things first, set up [QubesBuilder](/en/doc/qubes-builder/). This guide assumes you're using qubes-builder to build Qubes.
Repositories and committing Code
--------------------------------
Qubes is split into a bunch of git repos. These are all contained in the `qubes-src` directory under qubes-builder.
FIXME(ypid): Not on github?
The best way to write and contribute code is to create a git repo somewhere (e.g., github) for the repo you are interested in editing (e.g., `qubes-manager`, `core`, etc.). To integrate your repo with the rest of Qubes, cd to the repo directory and add your repository as a remote in git:
**Example:**
~~~
$ cd qubes-builder/qubes-src/qubes-manager
$ git remote add abel git@github.com:abeluck/qubes-manager.git
~~~
You can then proceed to easily develop in your own branches, pull in new commits from the dev branches, merge them, and eventually push to your own repo on github.
When you are ready to submit your changes to Qubes to be merged, push your changes, then create a signed git tag (using `git tag -s`). Finally, send a letter to the Qubes listserv describing the changes and including the link to your repository. Don't forget to include the public PGP key you use to sign your tags.
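A minimal sketch of those last steps, reusing the `abel` remote from the example above (the branch and tag names are purely illustrative):

~~~
$ git push abel my-feature                   # publish your branch
$ git tag -s my-feature-v1 -m "my changes"   # create a tag signed with your PGP key
$ git push abel my-feature-v1                # publish the signed tag
~~~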
### Kernel-specific notes
#### Prepare fresh version of kernel sources, with Qubes-specific patches applied
In qubes-builder/qubes-src/kernel:
~~~
make prep
~~~
The resulting tree will be in kernel-\<VERSION\>/linux-\<VERSION\>:
~~~
ls -ltrd kernel*/linux*
~~~
~~~
drwxr-xr-x 23 user user 4096 Nov 5 09:50 kernel-3.4.18/linux-3.4.18
drwxr-xr-x 6 user user 4096 Nov 21 20:48 kernel-3.4.18/linux-obj
~~~
#### Go to the kernel tree and update the version
In qubes-builder/qubes-src/kernel:
~~~
cd kernel-3.4.18/linux-3.4.18
~~~
#### Changing the config
In kernel-3.4.18/linux-3.4.18:
~~~
cp ../../config-pvops .config
make oldconfig
~~~
Now change the configuration. For example, in kernel-3.4.18/linux-3.4.18:
~~~
make menuconfig
~~~
Copy the modified config back into the kernel tree:
~~~
cp .config ../../../config-pvops
~~~
#### Patching the code
TODO: describe the workflow for patching the code; below are some rough notes that do not yet work well
~~~
# symlink the Qubes patch directory into the kernel tree and point quilt at it
ln -s ../../patches.xen
export QUILT_PATCHES=patches.xen
export QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
export QUILT_SERIES=../../series-pvops.conf

# start a new patch and register the files it will modify
quilt new patches.xen/pvops-3.4-0101-usb-xen-pvusb-driver-bugfix.patch
quilt add drivers/usb/host/Kconfig drivers/usb/host/Makefile \
          drivers/usb/host/xen-usbback/* drivers/usb/host/xen-usbfront.c \
          include/xen/interface/io/usbif.h

# edit the files registered above, then record the changes in the patch
quilt refresh

# make sure the new patch is listed in the series file used by the build
cd ../..
vi series-pvops.conf
~~~
#### Building RPMS
TODO: Is this step generic for all subsystems?
Now is a good moment to make sure you have changed the kernel release name in the rel-pvops file. For example, if you change it to '1debug20121116c' the resulting RPMs will be named 'kernel-3.4.18-1debug20121116c.pvops.qubes.x86\_64.rpm'. This will help distinguish between different versions of the same package.
You might want to take a moment here to review your changes (git diff, git status) and commit them locally.
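For instance (the commit message is just an example):

~~~
git status                                    # see which files changed
git diff                                      # review the actual changes
git commit -a -m "usb: pvusb driver bugfix"   # commit locally
~~~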
To actually build RPMS, in qubes-src/kernel:
~~~
make rpms
~~~
RPMS will appear in qubes-src/kernel/rpm/x86\_64:
~~~
-rw-rw-r-- 1 user user 42996126 Nov 17 04:08 kernel-3.4.18-1debug20121116c.pvops.qubes.x86_64.rpm
-rw-rw-r-- 1 user user 43001450 Nov 17 05:36 kernel-3.4.18-1debug20121117a.pvops.qubes.x86_64.rpm
-rw-rw-r-- 1 user user 8940138 Nov 17 04:08 kernel-devel-3.4.18-1debug20121116c.pvops.qubes.x86_64.rpm
-rw-rw-r-- 1 user user 8937818 Nov 17 05:36 kernel-devel-3.4.18-1debug20121117a.pvops.qubes.x86_64.rpm
-rw-rw-r-- 1 user user 54490741 Nov 17 04:08 kernel-qubes-vm-3.4.18-1debug20121116c.pvops.qubes.x86_64.rpm
-rw-rw-r-- 1 user user 54502117 Nov 17 05:37 kernel-qubes-vm-3.4.18-1debug20121117a.pvops.qubes.x86_64.rpm
~~~
### Useful [QubesBuilder](/en/doc/qubes-builder/) commands
1. *make check* - checks whether all the code has been committed to the repositories and whether every repository is tagged with a signed tag.
2. *make show-vtags* - shows the version of each component (based on git tags) - mostly useful just before building an ISO. **Note:** this will not show a version for components containing changes since the last version tag.
3. *make push* - pushes changes from **all** repositories to the git server. You must set proper remotes (see above) for all repositories first.
4. *make prepare-merge* - fetches changes from remote repositories (which can be specified on the command line via the GIT\_SUBDIR or GIT\_REMOTE vars), (optionally) verifies tags, and shows the changes. This does not merge the changes - they are left for review as the FETCH\_HEAD ref. You can merge them using "git merge FETCH\_HEAD" (in each repo directory).
Copying Code to dom0
--------------------
When developing, it is convenient to be able to rapidly test changes. Assuming you're developing Qubes on Qubes, you should be working in a special VM dedicated to Qubes development, and occasionally you will want to transfer code or RPMs back to dom0 for testing.
Here are some handy scripts Marek has shared to facilitate this.
You may also like to run your [test environment on separate machine](/en/doc/test-bench/).
### Syncing dom0 files
TODO: edit this script to be more generic
~~~
#!/bin/sh
set -x
set -e
QUBES_PY_DIR=/usr/lib64/python2.6/site-packages/qubes
QUBES_PY=$QUBES_PY_DIR/qubes.py
QUBESUTILS_PY=$QUBES_PY_DIR/qubesutils.py
qvm-run -p qubes-devel 'cd qubes-builder/qubes-src/core/dom0; tar c qmemman/qmemman*.py qvm-core/*.py qvm-tools/* misc/vm-template-hvm.conf misc/qubes-start.desktop ../misc/block-snapshot aux-tools ../qrexec' |tar xv
cp $QUBES_PY qubes.py.bak$$
cp $QUBESUTILS_PY qubesutils.py.bak$$
cp /etc/xen/scripts/block-snapshot block-snapshot.bak$$
sudo cp qvm-core/qubes.py $QUBES_PY
sudo cp qvm-core/qubesutils.py $QUBESUTILS_PY
sudo cp qvm-core/guihelpers.py $QUBES_PY_DIR/
sudo cp qmemman/qmemman*.py $QUBES_PY_DIR/
sudo cp misc/vm-template-hvm.conf /usr/share/qubes/
sudo cp misc/qubes-start.desktop /usr/share/qubes/
sudo cp misc/block-snapshot /etc/xen/scripts/
sudo cp aux-tools/qubes-dom0-updates.cron /etc/cron.daily/
# FIXME(Abel Luck): I hope to
~~~
### Apply qvm-tools
TODO: make it more generic
~~~
#!/bin/sh
BAK=qvm-tools.bak$$
mkdir -p $BAK
cp -a /usr/bin/qvm-* /usr/bin/qubes-* $BAK/
sudo cp qvm-tools/qvm-* qvm-tools/qubes-* /usr/bin/
~~~
### Copy from dom0 to an appvm
~~~
#!/bin/sh
#
# usage: ./cp-domain <vm_name> <file_to_copy>
#
domain=$1
file=$2
fname=$(basename "$file")
# create the destination directory in the target VM, then stream the file into it
qvm-run "$domain" 'mkdir -p /home/user/incoming/dom0'
cat "$file" | qvm-run --pass-io "$domain" "cat > /home/user/incoming/dom0/$fname"
~~~
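Example usage (the VM name and file path are hypothetical):

~~~
./cp-domain qubes-devel rpm/x86_64/kernel-3.4.18-1debug20121116c.pvops.qubes.x86_64.rpm
~~~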

View File

@ -1,88 +0,0 @@
---
layout: doc
title: Installation ISO Building
permalink: /en/doc/installation-iso-building/
redirect_from:
- /doc/InstallationIsoBuilding/
- /wiki/InstallationIsoBuilding/
---
How to build Qubes installation ISO
===================================
Qubes uses [Fedora Unity Revisor](http://revisor.fedoraunity.org/) to build the installation ISO.
You may want to get familiar with [Revisor documentation](http://revisor.fedoraunity.org/documentation).
Build installer packages
------------------------
Get [Qubes Installer repository](http://git.qubes-os.org/?p=smoku/installer) and build its packages:
~~~
cd installer
make rpms
~~~
Packages will be in `rpm/noarch` and `rpm/x86_64`.
Install Revisor
---------------
Next install the freshly built revisor and anaconda:
~~~
yum install rpm/noarch/revisor*.rpm
yum install rpm/x86_64/anaconda*.rpm
~~~
Review configuration files
--------------------------
All configuration files for Qubes Revisor are kept in the `conf/` directory:
- `conf/qubes-install.conf` - Main Revisor configuration file. This configures Revisor to build Qubes Installation image based on Fedora 13. All other configuration files and working directories are pointed here.
- `conf/qubes-x86_64.conf` - This file describes all repositories needed to build Qubes for x86\_64 architecture.
- `conf/qubes-kickstart.cfg` - Fedora Kickstart formatted file describing which packages should land in the ISO `/Packages` repository. This describes basically what will be available for installation. The packages list built using this file will be further filtered by the comps file.
- `conf/comps-qubes.xml` - Repository Comps file for ISO `/Packages` repository, describing packages and package groups of the installer repository. Package groups are used to select which of the packages are mandatory to install, which are optional and which are to be just available on the ISO but not installed by default (not used on Qubes).
Create/Update local repository
------------------------------
Revisor fetches all RPM packages from YUM repositories. We currently use 5 repositories:
- `yum/installer` (installer-related rpms)
- `yum/qubes-dom0` (all the Qubes stuff)
- `yum/dom0-updates` (for select 3rd party packages, e.g. Xorg)
- `yum/fedora13-repo` (local fedora 13 repo, copy from DVD)
- remote fedora repo for extra packages (usually deps for qubes-dom0)
You need to manually copy the Fedora 13 installation DVD contents (`Packages/` and `repodata/` directories) into `build/fedora13-repo`.
Also, you need to copy all the qubes dom0 rpms into `build/yum/qubes-dom0/rpm` and run the `yum/update_repo.sh` script afterwards.
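A rough sketch of these two copy steps, assuming the installation DVD is mounted at `/mnt/dvd` and your freshly built dom0 RPMs sit in some local directory (both paths are hypothetical, and the exact invocation of `update_repo.sh` may differ):

~~~
# local Fedora 13 repo, copied from the DVD
cp -r /mnt/dvd/Packages /mnt/dvd/repodata build/fedora13-repo/

# Qubes dom0 RPMs, followed by regenerating the repository metadata
cp /path/to/qubes-dom0-rpms/*.rpm build/yum/qubes-dom0/rpm/
yum/update_repo.sh
~~~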
In order to fill the `build/yum/installer` repo one can just use `make update-repo`.
The `build/yum/dom0-updates` repo is to be used for select RPMs that should be used instead of those from the Fedora (local and remote) repos.
Update your local repos:
~~~
make update-repo
~~~
Build ISO
---------
Now you're finally ready to build the ISO image:
~~~
make iso
~~~
and wait...
You may add `-d 1` (or `-d 99` if you're a masochist) in the Makefile at the end of the revisor command to get (a ton of) debugging information.

View File

@ -1,164 +0,0 @@
---
layout: doc
title: Qrexec
permalink: /en/doc/qrexec/
redirect_from:
- /doc/Qrexec/
- /wiki/Qrexec/
---
Command execution in VM (and Qubes RPC)
=======================================
Qubes **qrexec** is a framework for implementing inter-VM (incl. Dom0-VM) services. It offers a mechanism to start programs in VMs, redirect their stdin/stdout, and a policy framework to control this all.
Basic Dom0-VM command execution
-------------------------------
During each domain's creation, a process named `qrexec-daemon` is started in dom0, and a process named `qrexec-agent` is started in the VM. They are connected over a `vchan` channel.
Typically, the first thing that a `qrexec-client` instance does is to send a request to `qrexec-agent` to start a process in the VM. From then on, the stdin/stdout/stderr from this remote process will be passed to the `qrexec-client` process.
E.g., to start a primitive shell in a VM, type the following in a Dom0 console:
~~~
[user@dom0 ~]$ /usr/lib/qubes/qrexec-client -d <vm name> user:bash
~~~
The string before the first colon specifies which user to run the command as.
Adding `-e` on the `qrexec-client` command line results in mere command execution (no data passing), and `qrexec-client` exits immediately after sending the execution request.
There is also the `-l <local program>` flag, which directs `qrexec-client` to pass stdin/stdout of the remote program not to its stdin/stdout, but to the (spawned for this purpose) `<local program>`.
The `qvm-run` command is heavily based on `qrexec-client`. It also takes care of additional activities (e.g., starting the domain, if it is not up yet, and starting the GUI daemon). Thus, it is usually more convenient to use `qvm-run`.
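For example (a hedged sketch; `work` is a hypothetical VM name):

~~~
# mere execution request, no data passing (-e), as described above
[user@dom0 ~]$ /usr/lib/qubes/qrexec-client -d work -e 'user:touch /tmp/hello'

# the same thing, done the more convenient way
[user@dom0 ~]$ qvm-run work 'touch /tmp/hello'
~~~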
There can be almost arbitrary number of `qrexec-client` processes for a domain (i.e., `qrexec-client` processes connected to the same `qrexec-daemon`); their data is multiplexed independently.
There is a similar command-line utility available inside Linux AppVMs (note the `-vm` suffix): `qrexec-client-vm`. It will be described in subsequent sections.
Qubes Inter-VM Services (Qubes RPC)
-----------------------------------
Apart from simple Dom0-\>VM command executions, as discussed above, it is also useful to have more advanced infrastructure for controlled inter-VM RPC/services. This might be used for simple things like inter-VM file copy operations, as well as more complex tasks like starting a DispVM and requesting it to perform certain operations on the file(s) handed to it.
Instead of implementing complex RPC-like mechanisms for inter-VM communication, Qubes takes a much simpler and pragmatic approach and aims to only provide simple *pipes* between the VMs, plus ability to request *pre-defined* programs (servers) to be started on the other end of such pipes, and a centralized policy (enforced by the `qrexec-policy` process running in dom0) which says which VMs can request what services from what VMs.
Thanks to the framework and automatic stdin/stdout redirection, RPC programs are very simple; both the client and server just use their stdin/stdout to pass data. The framework does all the inner work to connect these file descriptors to each other via `qrexec-daemon` and `qrexec-agent`. Additionally, DispVMs are tightly integrated; RPC to a DispVM is a simple matter of using a magic `$dispvm` keyword as the target VM name.
All services in Qubes are identified by a single string, which by convention takes a form of `qubes.ServiceName`. Each VM can provide handlers for each of the known services by providing a file in `/etc/qubes-rpc/` directory with the same name as the service it is supposed to handle. This file will then be executed by the qrexec service, if the dom0 policy allowed the service to be requested (see below). Typically, the files in `/etc/qubes-rpc/` contain just one line, which is a path to the specific binary that acts as a server for the incoming request, however they might also be the actual executable themselves. Qrexec framework is careful about connecting the stdin/stdout of the server process with the corresponding stdin/stdout of the requesting process in the requesting VM (see example Hello World service described below).
Qubes Services (RPC) policy
---------------------------
Besides each VM needing to provide explicit programs to serve each supported service, the inter-VM service RPC is also governed by a central policy in Dom0.
In dom0, there is a bunch of files in `/etc/qubes-rpc/policy/` directory, whose names describe the available RPC actions; their content is the RPC access policy database. Some example of the default services in Qubes are:
- qubes.Filecopy
- qubes.OpenInVM
- qubes.ReceiveUpdates
- qubes.SyncAppMenus
- qubes.VMShell
- qubes.ClipboardPaste
- qubes.Gpg
- qubes.NotifyUpdates
- qubes.PdfConvert
These files contain lines with the following format:
~~~
srcvm destvm (allow|deny|ask)[,user=user_to_run_as][,target=VM_to_redirect_to]
~~~
You can specify `srcvm` and `destvm` by name, or by one of `$anyvm`, `$dispvm`, `dom0` reserved keywords (note string `dom0` does not match the `$anyvm` pattern; all other names do). Only `$anyvm` keyword makes sense in the `srcvm` field (service calls from dom0 are currently always allowed, `$dispvm` means "new VM created for this particular request" - so it is never a source of request). Currently, there is no way to specify source VM by type, but this is planned for Qubes R3.
Whenever an RPC request for a service named "XYZ" is received, the first line in `/etc/qubes-rpc/policy/XYZ` that matches the actual `srcvm`/`destvm` is consulted to determine whether to allow the RPC, what user account the program should run under in the target VM, and what VM to redirect the execution to. If the policy file does not exist, the user is prompted to create one *manually*; if there is still no policy file after prompting, the action is denied.
On the target VM, the `/etc/qubes-rpc/XYZ` must exist, containing the file name of the program that will be invoked.
Requesting VM-VM (and VM-Dom0) services execution
-------------------------------------------------
In a src VM, one should invoke the qrexec client via the following command:
~~~
/usr/lib/qubes/qrexec-client-vm <target vm name> <service name> <local program path> [local program arguments]
~~~
Note that only stdin/stdout is passed between the RPC server and client - notably, no command-line arguments are passed.
The source VM name can be accessed in the server process via QREXEC\_REMOTE\_DOMAIN environment variable. (Note the source VM has *no* control over the name provided in this variable--the name of the VM is provided by dom0, and so is trusted.)
By default, stderr of the client and server is logged to the respective `/var/log/qubes/qrexec.XID` files in each of the VMs.
Be very careful when coding and adding a new RPC service! Any vulnerability in an RPC server can be fatal to the security of the target VM!
Requesting VM-VM (and VM-Dom0) services execution (without cmdline helper)
--------------------------------------------------------------------------
Connect directly to `/var/run/qubes/qrexec-agent-fdpass` socket as described [here](/doc/Qrexec2Implementation#Allthepiecestogetheratwork).
### Revoking "Yes to All" authorization
Qubes RPC policy supports an "ask" action, that will prompt the user whether a given RPC call should be allowed. It is set as default for services such as inter-VM file copy. A prompt window launches in dom0, that gives the user option to click "Yes to All", which allows the action and adds a new entry to the policy file, which will unconditionally allow further calls for given (service, srcVM, dstVM) tuple.
In order to remove such authorization, issue this command from a Dom0 terminal (example below for qubes.Filecopy service):
~~~
sudo nano /etc/qubes-rpc/policy/qubes.Filecopy
~~~
and then remove any line(s) ending in "allow" (before the first \#\# comment) which are the "Yes to All" results.
A user might also want to set their own policies in this file. This may mostly serve to prevent the user from mistakenly copying files or text from a trusted to an untrusted domain, or vice versa.
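For illustration, after clicking "Yes to All" for a copy from a hypothetical `work` VM to a hypothetical `vault` VM, the policy file might look roughly like this (the exact contents of the shipped default file may differ):

~~~
work vault allow
$anyvm $anyvm ask
~~~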
### Qubes RPC "Hello World" service
We will show the necessary files to create a simple RPC call that adds two integers on the target VM and returns the result to the invoking VM.
- Client code on source VM (`/usr/bin/our_test_add_client`)
~~~
#!/bin/sh
echo $1 $2 # pass data to rpc server
exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other rpc endpoint
~~~
- Server code on target VM (`/usr/bin/our_test_add_server`)
~~~
#!/bin/sh
read arg1 arg2 # read from stdin, which is received from the rpc client
echo $(($arg1+$arg2)) # print to stdout - so, pass to the rpc client
~~~
- Policy file in dom0 (`/etc/qubes-rpc/policy/test.Add`)
~~~
$anyvm $anyvm ask
~~~
- Server path definition on target VM (`/etc/qubes-rpc/test.Add`)
~~~
/usr/bin/our_test_add_server
~~~
- To test this service, run the following in the source VM:
~~~
/usr/lib/qubes/qrexec-client-vm <target VM> test.Add /usr/bin/our_test_add_client 1 2
~~~
and we should get "3" as answer, provided dom0 policy allows the call to pass through, which would happen after we click "Yes" in the popup that should appear after the invocation of this command. If we changed the policy from "ask" to "allow", then no popup should be presented, and the call will always be allowed.
More high-level RPCs?
---------------------
As previously noted, Qubes aims to provide mechanisms that are very simple and thus have a very small attack surface. This is the reason why the inter-VM RPC framework is very primitive and doesn't include any serialization, argument passing, or the like. We should remember, however, that users/app developers are always free to run more high-level RPC protocols on top of qrexec. Care should be taken, however, to consider the potential attack surfaces that are exposed to untrusted or less trusted VMs in that case.
Qubes RPC internals
-------------------
The internal implementation of qrexec in Qubes R2 is described [here](/en/doc/qrexec2-implementation/), and in Qubes R3 [here](/en/doc/qrexec3-implementation/).

View File

@ -1,74 +0,0 @@
---
layout: doc
title: Qrexec2Implementation
permalink: /en/doc/qrexec2-implementation/
redirect_from:
- /doc/Qrexec2Implementation/
- /wiki/Qrexec2Implementation/
---
Implementation of qrexec in Qubes R2
====================================
This page describes the implementation of the [qrexec framework](/en/doc/qrexec/) in Qubes OS R2. Note that the implementation has changed significantly in Qubes R3 (see [Qrexec3Implementation](/en/doc/qrexec3-implementation/)), although the user API remained backwards compatible (i.e. qrexec apps written for Qubes R2 should run without modifications on Qubes R3).
Dom0 tools implementation
-------------------------
Players:
- `/usr/lib/qubes/qrexec-daemon` \<- started by mgmt stack (qubes.py) when a VM is started.
- `/usr/lib/qubes/qrexec-policy` \<- internal program used to evaluate the policy file and making the 2nd half of the connection.
- `/usr/lib/qubes/qrexec-client` \<- raw command line tool that talks to the daemon via unix socket (`/var/run/qubes/qrexec.XID`)
Note: none of the above tools are designed to be used by users.
Linux VMs implementation
------------------------
Players:
- `/usr/lib/qubes/qrexec-agent` \<- started by VM bootup scripts, a daemon.
- `/usr/lib/qubes/qubes-rpc-multiplexer` \<- executes the actual service program, as specified in VM's `/etc/qubes-rpc/qubes.XYZ`.
- `/usr/lib/qubes/qrexec-client-vm` \<- raw command line tool that talks to the agent.
Note: none of the above tools are designed to be used by users. `qrexec-client-vm` is designed to be wrapped up by Qubes apps.
Windows VMs implementation
------------------------
%QUBES\_DIR% is the installation path (`c:\Program Files\Invisible Things Lab\Qubes OS Windows Tools` by default).
- `%QUBES_DIR%\bin\qrexec-agent.exe` \<- runs as a system service. Responsible both for raw command execution and interpreting RPC service requests.
- `%QUBES_DIR%\qubes-rpc` \<- directory with `qubes.XYZ` files that contain commands for executing RPC services. Binaries for the services are contained in `%QUBES_DIR%\qubes-rpc-services`.
- `%QUBES_DIR%\bin\qrexec-client-vm` \<- raw command line tool that talks to the agent.
Note: none of the above tools are designed to be used by users. `qrexec-client-vm` is designed to be wrapped up by Qubes apps.
All the pieces together at work
-------------------------------
Note: this section is not needed to use qrexec for writing Qubes apps. Also note that the qrexec framework implementation in Qubes R3 differs significantly from what is described in this section.
The VM-VM channels in Qubes R2 are made by "gluing" together two vchan connections, VM-Dom0 and Dom0-VM:
![qrexec2-internals.png](/attachment/wiki/Qrexec2Implementation/qrexec2-internals.png)
Note: Dom0 never examines the actual data flowing in either of the two vchan connections.
When a user in a source VM executes `qrexec-client-vm` utility, the following steps are taken:
- `qrexec-client-vm` connects to `qrexec-agent`'s `/var/run/qubes/qrexec-agent-fdpass` unix socket 3 times. It reads 4 bytes from each of them, which are the fd numbers of the accepted sockets in the agent. These 3 integers, concatenated as text, form the "connection identifier" (CID)
- `qrexec-client-vm` writes a blob to the `/var/run/qubes/qrexec-agent` fifo, consisting of the target VM name, the RPC action, and the CID
- `qrexec-client-vm` executes the RPC client, passing the above-mentioned unix sockets as the process's stdin/stdout, and optionally stderr (if the PASS\_LOCAL\_STDERR env variable is set)
- `qrexec-agent` passes the blob to `qrexec-daemon`, via MSG\_AGENT\_TO\_SERVER\_TRIGGER\_CONNECT\_EXISTING message over vchan
- `qrexec-daemon` executes `qrexec-policy`, passing source vmname, target vmname, rpc action, and CID as cmdline arguments
- `qrexec-policy` evaluates the policy file. If successful, it creates a pair of `qrexec-client` processes, whose stdin/stdout are cross-connected.
- The first `qrexec-client` connects to the src VM, using the `-c ClientID` parameter, which results in not creating a new process, but connecting to the existing process file descriptors (these are the fds of unix socket created in step 1).
- The second `qrexec-client` connects to the target VM, and executes `qubes-rpc-multiplexer` command there with the rpc action as the cmdline argument. Finally, `qubes-rpc-multiplexer` executes the correct rpc server on the target.
- In the above step, if the target VM is `$dispvm`, the DispVM is created via the `qfile-daemon-dvm` program. The latter waits for the `qrexec-client` process to exit, and then destroys the DispVM.
Protocol description ("wire-level" spec)
----------------------------------------
TODO

View File

@ -1,184 +0,0 @@
---
layout: doc
title: Qrexec3Implementation
permalink: /en/doc/qrexec3-implementation/
redirect_from:
- /doc/Qrexec3Implementation/
- /wiki/Qrexec3Implementation/
---
Implementation of qrexec in Qubes R3
====================================
This page describes implementation of the [qrexec framework](/en/doc/qrexec/) in Qubes OS R3.
The qrexec framework consists of a number of processes communicating with each other using a common IPC protocol (described in detail below). Components residing in the same domain use pipes as the underlying transport medium, while components in separate domains use a vchan link.
Dom0 tools implementation
-------------------------
- `/usr/lib/qubes/qrexec-daemon` \<- one instance is required for every active domain. Responsible for:
- Handling execution and service requests from **dom0** (source: `qrexec-client`).
- Handling service requests from the associated domain (source: `qrexec-client-vm`, then `qrexec-agent`).
> Command line: `qrexec-daemon domain-id domain-name [default user]`
> *domain-id*: numeric Qubes ID assigned to the associated domain.
> *domain-name*: associated domain name.
> *default user*: optional. If passed, `qrexec-daemon` uses this user as default for all execution requests that don't specify one.
- `/usr/lib/qubes/qrexec-policy` \<- internal program used to evaluate the RPC policy and decide whether an RPC call should be allowed.
- `/usr/lib/qubes/qrexec-client` \<- used to pass execution and service requests to `qrexec-daemon`. Command line parameters:
> `-d target-domain-name` Specifies the target for the execution/service request.
> `-l local-program` Optional. If present, `local-program` is executed and its stdout/stdin are used when sending/receiving data to/from the remote peer.
> `-e` Optional. If present, stdout/stdin are not connected to the remote peer. Only process creation status code is received.
> `-c <request-id,src-domain-name,src-domain-id>` Used for connecting a VM-VM service request by `qrexec-policy`. Details described below in the service example.
> `cmdline` Command line to pass to `qrexec-daemon` as the execution/service request. Service request format is described below in the service example.
Note: none of the above tools are designed to be used by users directly.
VM tools implementation
-----------------------
- `qrexec-agent` \<- one instance runs in each active domain. Responsible for:
- Handling service requests from `qrexec-client-vm` and passing them to connected `qrexec-daemon` in **dom0**.
- Executing associated `qrexec-daemon` execution/service requests.
> Command line parameters: none.
- `qrexec-client-vm` \<- runs in an active domain. Used to pass service requests to `qrexec-agent`.
> Command line: `qrexec-client-vm target-domain-name service-name local-program [local program arguments]`
> `target-domain-name` Target domain for the service request. Source is the current domain.
> `service-name` Requested service name.
> `local-program` **local-program** is executed locally and its stdin/stdout are connected to the remote service endpoint.
Qrexec protocol details
-----------------------
Qrexec protocol is message-based. All messages share a common header followed by an optional data packet.
~~~
/* uniform for all peers, data type depends on message type */
struct msg_header {
uint32_t type; /* message type */
uint32_t len; /* data length */
};
~~~
When two peers establish connection, the server sends `MSG_HELLO` followed by `peer_info` struct:
~~~
struct peer_info {
uint32_t version; /* qrexec protocol version */
};
~~~
The client then should reply with its own `MSG_HELLO` and `peer_info`. If protocol versions don't match, the connection is closed. TODO: fallback for backwards compatibility, don't do handshake in the same domain?.
Details of all possible use cases and the messages involved are described below.
### dom0: request execution of some\_command in domX and pass stdin/stdout
- **dom0**: `qrexec-client` is invoked in **dom0** as follows:
> `qrexec-client -d domX [-l local_program] user:some_command`
> `user` may be substituted with the literal `DEFAULT`. In that case, default Qubes user will be used to execute `some_command`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable to **domX**.
- **dom0**: If `local_program` is set, `qrexec-client` executes it and uses that child's stdin/stdout in place of its own when exchanging data with `qrexec-agent` later.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_HELLO` header followed by `peer_info` to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by `peer_info` to `qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to `qrexec-daemon`
~~~
/* variable size */
struct exec_params {
uint32_t connect_domain; /* target domain id */
uint32_t connect_port; /* target vchan port for i/o exchange */
char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
~~~
In this case, `connect_domain` and `connect_port` are set to 0.
- **dom0**: `qrexec-daemon` replies to `qrexec-client` with `MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline` field. `connect_domain` is set to Qubes ID of **domX** and `connect_port` is set to a vchan port allocated by `qrexec-daemon`.
- **dom0**: `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to the associated **domX** `qrexec-agent` over vchan. `connect_domain` is set to 0 (**dom0**), `connect_port` is the same as sent to `qrexec-client`. `cmdline` is unchanged except that the literal `DEFAULT` is replaced with actual user name, if present.
- **dom0**: `qrexec-client` disconnects from `qrexec-daemon`.
- **dom0**: `qrexec-client` starts a vchan server using the details received from `qrexec-daemon` and waits for connection from **domX**'s `qrexec-agent`.
- **domX**: `qrexec-agent` receives `MSG_EXEC_CMDLINE` header followed by `exec_params` from `qrexec-daemon` over vchan.
- **domX**: `qrexec-agent` connects to `qrexec-client` over vchan using the details from `exec_params`.
- **domX**: `qrexec-agent` executes `some_command` in **domX** and connects the child's stdin/stdout to the data vchan. If the process creation fails, `qrexec-agent` sends `MSG_DATA_EXIT_CODE` to `qrexec-client` followed by the status code (**int**) and disconnects from the data vchan.
- Data read from `some_command`'s stdout is sent to the data vchan using `MSG_DATA_STDOUT` by `qrexec-agent`. `qrexec-client` passes data received as `MSG_DATA_STDOUT` to its own stdout (or to `local_program`'s stdin if used).
- `qrexec-client` sends data read from local stdin (or `local_program`'s stdout if used) to `qrexec-agent` over the data vchan using `MSG_DATA_STDIN`. `qrexec-agent` passes data received as `MSG_DATA_STDIN` to `some_command`'s stdin.
- `MSG_DATA_STDOUT` or `MSG_DATA_STDIN` with data `len` field set to 0 in `msg_header` is an EOF marker. Peer receiving such message should close the associated input/output pipe.
- When `some_command` terminates, **domX**'s `qrexec-agent` sends `MSG_DATA_EXIT_CODE` header to `qrexec-client` followed by the exit code (**int**). `qrexec-agent` then disconnects from the data vchan.
### domY: invoke execution of qubes service qubes.SomeRpc in domX and pass stdin/stdout
- **domY**: `qrexec-client-vm` is invoked as follows:
> `qrexec-client-vm domX qubes.SomeRpc local_program [params]`
- **domY**: `qrexec-client-vm` connects to `qrexec-agent` (via local socket/named pipe).
- **domY**: `qrexec-client-vm` sends `trigger_service_params` data to `qrexec-agent` (without filling the `request_id` field):
~~~
struct trigger_service_params {
char service_name[64];
char target_domain[32];
struct service_params request_id; /* service request id */
};
struct service_params {
char ident[32];
};
~~~
- **domY**: `qrexec-agent` allocates a locally-unique (for this domain) `request_id` (let's say `13`) and fills it in the `trigger_service_params` struct received from `qrexec-client-vm`.
- **domY**: `qrexec-agent` sends `MSG_TRIGGER_SERVICE` header followed by `trigger_service_params` to `qrexec-daemon` in **dom0** via vchan.
- **dom0**: **domY**'s `qrexec-daemon` executes `qrexec-policy`: `qrexec-policy domY_id domY domX qubes.SomeRpc 13`.
- **dom0**: `qrexec-policy` evaluates if the RPC should be allowed or denied. If the action is allowed it returns `0`, if the action is denied it returns `1`.
- **dom0**: **domY**'s `qrexec-daemon` checks the exit code of `qrexec-policy`.
- If `qrexec-policy` returned **not** `0`: **domY**'s `qrexec-daemon` sends `MSG_SERVICE_REFUSED` header followed by `service_params` to **domY**'s `qrexec-agent`. `service_params.ident` is identical to the one received. **domY**'s `qrexec-agent` disconnects its `qrexec-client-vm` and RPC processing is finished.
- If `qrexec-policy` returned `0`, RPC processing continues.
- **dom0**: if `qrexec-policy` allowed the RPC, it executes `qrexec-client -d domX -c 13,domY,domY_id user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` sets `QREXEC_REMOTE_DOMAIN` environment variable to **domX**.
- **dom0**: `qrexec-client` connects to **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_HELLO` header followed by `peer_info` to `qrexec-client`.
- **dom0**: `qrexec-client` replies with `MSG_HELLO` header followed by `peer_info` to **domX**'s`qrexec-daemon`.
- **dom0**: `qrexec-client` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to **domX**'s`qrexec-daemon`
~~~
/* variable size */
struct exec_params {
uint32_t connect_domain; /* target domain id */
uint32_t connect_port; /* target vchan port for i/o exchange */
char cmdline[0]; /* command line to execute, size = msg_header.len - sizeof(struct exec_params) */
};
~~~
In this case, `connect_domain` is set to id of **domY** (from the `-c` parameter) and `connect_port` is set to 0. `cmdline` field contains the RPC to execute, in this case `user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: **domX**'s `qrexec-daemon` replies to `qrexec-client` with `MSG_EXEC_CMDLINE` header followed by `exec_params`, but with empty `cmdline` field. `connect_domain` is set to Qubes ID of **domX** and `connect_port` is set to a vchan port allocated by **domX**'s `qrexec-daemon`.
- **dom0**: **domX**'s `qrexec-daemon` sends `MSG_EXEC_CMDLINE` header followed by `exec_params` to **domX**'s `qrexec-agent`. `connect_domain` and `connect_port` fields are the same as in the step above. `cmdline` is set to the one received from `qrexec-client`, in this case `user:QUBESRPC qubes.SomeRpc domY`.
- **dom0**: `qrexec-client` disconnects from **domX**'s `qrexec-daemon` after receiving connection details.
- **dom0**: `qrexec-client` connects to **domY**'s `qrexec-daemon` and exchanges MSG\_HELLO as usual.
- **dom0**: `qrexec-client` sends `MSG_SERVICE_CONNECT` header followed by `exec_params` to **domY**'s `qrexec-daemon`. `connect_domain` is set to ID of **domX** (received from **domX**'s `qrexec-daemon`) and `connect_port` is the one received as well. `cmdline` is set to request ID (`13` in this case).
- **dom0**: **domY**'s `qrexec-daemon` sends `MSG_SERVICE_CONNECT` header followed by `exec_params` to **domY**'s `qrexec-agent`. Data fields are unchanged from the step above.
- **domY**: `qrexec-agent` starts a vchan server on the port received in the step above. It acts as a `qrexec-client` in this case because this is a VM-VM connection.
- **domX**: `qrexec-agent` connects to the vchan server of **domY**'s `qrexec-agent` (connection details were received before from **domX**'s `qrexec-daemon`).
- After that, connection follows the flow of the previous example (dom0-VM).

View File

@ -1,129 +0,0 @@
---
layout: doc
title: Qrexec3
permalink: /en/doc/qrexec3/
redirect_from:
- /doc/Qrexec3/
- /wiki/Qrexec3/
---
Command execution in VM (and Qubes RPC)
=======================================
*[Note: this document describes Qrexec v3 (Odyssey)]*
The qrexec framework is used by core Qubes components to implement communication between domains. Qubes domains are isolated by design, but there is a need for a mechanism to allow the administrative domain (dom0) to force command execution in another domain (VM). For instance, when a user selects an application from the KDE menu, it should be started in the selected VM. It is also often useful to be able to pass stdin/stdout/stderr from an application running in a VM to dom0 (and the other way around). In specific circumstances Qubes allows VMs to be initiators of such communication (so, for example, a VM can notify dom0 that there are updates available for it).
Qrexec basics
-------------
Qrexec is built on top of vchan (a library providing data links between VMs). During domain creation a process named *qrexec-daemon* is started in dom0, and a process named *qrexec-agent* is started in the VM. They are connected over *vchan* channel. *qrexec-daemon* listens for connections from dom0 utility named *qrexec-client*. Typically, the first thing that a *qrexec-client* instance does is to send a request to *qrexec-daemon* to start a process (let's name it VMprocess) with a given command line in a specified VM (someVM). *qrexec-daemon* assigns unique vchan connection details and sends them both to *qrexec-client* (in dom0) and *qrexec-agent* (in someVM). *qrexec-client* starts a vchan server which *qrexec-agent* connects to. Since then, stdin/stdout/stderr from the VMprocess is passed via vchan between *qrexec-agent* and the *qrexec-client* process.
So, for example, executing in dom0
`qrexec-client -d someVM user:bash`
allows one to work with the remote shell. The string before the first colon specifies what user to run the command as. Adding `-e` on the *qrexec-client* command line results in mere command execution (no data passing), and *qrexec-client* exits immediately after sending the execution request and receiving the status code from *qrexec-agent* (i.e. whether the process creation succeeded). There is also the `-l local_program` flag -- with it, *qrexec-client* passes stdin/stdout of the remote process to the (spawned for this purpose) *local\_program*, not to its own stdin/stdout.
The `qvm-run` command is heavily based on *qrexec-client*. It also takes care of additional activities, e.g. starting the domain if it is not up yet and starting the GUI daemon. Thus, it is usually more convenient to use `qvm-run`.
There can be an almost arbitrary number of *qrexec-client* processes for a domain (i.e., connected to the same *qrexec-daemon*, so the same domain) -- their data is multiplexed independently. The number of available vchan channels is the limiting factor here; it depends on the underlying hypervisor.
Qubes RPC services
------------------
Some tasks (like inter-VM file copy) share the same rpc-like structure: a process in one VM (say, the file sender) needs to invoke and send/receive data to some process in another VM (say, the file receiver). Thus, the Qubes RPC framework was created, facilitating such actions.
Obviously, inter-VM communication must be tightly controlled to prevent one VM from taking control over another, possibly more privileged, VM. Therefore the design decision was made to pass all control communication via dom0, which can enforce proper authorization. Then, it is natural to reuse the already-existing qrexec framework.
Also, note that bare qrexec provides VM\<-\>dom0 connectivity, but the command execution is always initiated by dom0. There are cases when a VM needs to invoke and send data to a command in dom0 (e.g. to pass information on newly installed .desktop files). Thus, the framework allows dom0 to be the rpc target as well.
Thanks to the framework, RPC programs are very simple -- both rpc client and server just use their stdin/stdout to pass data. The framework does all the inner work to connect these processes to each other via *qrexec-daemon* and *qrexec-agent*. Additionally, disposable VMs are tightly integrated -- rpc to a disposableVM is identical to rpc to a normal domain, all one needs is to pass "\$dispvm" as the remote domain name.
Qubes RPC administration
------------------------
[TODO: fix for non-linux dom0]
In dom0, there is a bunch of files in */etc/qubes-rpc/policy* directory, whose names describe the available rpc actions; their content is the rpc access policy database. Currently defined actions are:
- qubes.Filecopy
- qubes.OpenInVM
- qubes.ReceiveUpdates
- qubes.SyncAppMenus
- qubes.VMShell
- qubes.ClipboardPaste
- qubes.Gpg
- qubes.NotifyUpdates
- qubes.PdfConvert
These files contain lines with the following format:
srcvm destvm (allow|deny|ask)[,user=user\_to\_run\_as][,target=VM\_to\_redirect\_to]
You can specify srcvm and destvm by name, or by one of the reserved keywords "\$anyvm", "\$dispvm", or "dom0" (note that the string "dom0" does not match the \$anyvm pattern; all other names do). Only the "\$anyvm" keyword makes sense in the srcvm field (service calls from dom0 are currently always allowed, and "\$dispvm" means "a new VM created for this particular request", so it is never the source of a request). Currently there is no way to specify the source VM by type. Whenever an rpc request for action X is received, the first line in /etc/qubes-rpc/policy/X that matches the srcvm/destvm pair is consulted to determine whether to allow the rpc, what user account the program should run under in the target VM, and what VM to redirect the execution to. If the policy file does not exist, the user is prompted to create one; if there is still no policy file after prompting, the action is denied.
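As an example, a policy file such as */etc/qubes-rpc/policy/qubes.Filecopy* might contain lines like these (the VM names are hypothetical; only the line format follows the description above):
~~~
work vault deny
work $anyvm ask
$anyvm $anyvm ask
~~~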
In the target VM, the file */etc/qubes-rpc/RPC\_ACTION\_NAME* must exist, containing the file name of the program that will be invoked.
In the source VM, one should invoke the client via
`/usr/lib/qubes/qrexec-client-vm target_vm_name RPC_ACTION_NAME rpc_client_path client arguments`
Note that only stdin/stdout is passed between the rpc server and client -- notably, no command line arguments are passed. The source VM name is available in the QREXEC\_REMOTE\_DOMAIN environment variable. By default, stderr of the client and server is logged to the respective /var/log/qubes/qrexec.XID files.
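A minimal, purely illustrative rpc server showing these conventions (stdin/stdout for data, QREXEC\_REMOTE\_DOMAIN for the caller's name, stderr for logging) might look like this:
~~~
#!/bin/sh
# hypothetical server: note which VM called us (stderr ends up in /var/log/qubes/qrexec.XID)
echo "rpc call from $QREXEC_REMOTE_DOMAIN" >&2
# stdin comes from the rpc client; stdout is sent back to it
cat
~~~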
Be very careful when coding and adding a new rpc service. Unless the offered functionality equals full control over the target (as is the case with, e.g., the qubes.VMShell action), any vulnerability in an rpc server can be fatal to Qubes security. On the other hand, this mechanism allows delegating the processing of untrusted input to less privileged (or throwaway) AppVMs, so wise use of it increases security.
### Revoking "Yes to All" authorization
Qubes RPC policy supports the "ask" action, which prompts the user to decide whether a given RPC call should be allowed. The prompt window also has a "Yes to All" option, which allows the action and adds a new entry to the policy file that unconditionally allows further calls for the given service-srcVM-dstVM tuple.
In order to remove such an authorization, issue this command from a dom0 terminal (here for the qubes.Filecopy service):
`sudo nano /etc/qubes-rpc/policy/qubes.Filecopy`
and then remove the first line(s) (before the first \#\# comment), which are the "Yes to All" results.
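After a few "Yes to All" clicks, the policy file might therefore look roughly like the hypothetical example below, with the auto-added allow lines sitting above the first \#\# comment (VM names and comment text are illustrative):
~~~
work personal allow
untrusted work allow
## lines below are the policy originally shipped with the package
$anyvm $anyvm ask
~~~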
### Qubes RPC example
We will show the files necessary to create an rpc call that adds two integers on the target and returns the result to the invoker.
- rpc client code (*/usr/bin/our\_test\_add\_client*)
~~~
#!/bin/sh
echo $1 $2 # pass data to rpc server
exec cat >&$SAVED_FD_1 # print result to the original stdout, not to the other rpc endpoint
~~~
- rpc server code (*/usr/bin/our\_test\_add\_server*)
~~~
#!/bin/sh
read arg1 arg2 # read from stdin, which is received from the rpc client
echo $(($arg1+$arg2)) # print to stdout - so, pass to the rpc client
~~~
- policy file in dom0 (*/etc/qubes-rpc/policy/test.Add*)
~~~
$anyvm $anyvm ask
~~~
- server path definition (*/etc/qubes-rpc/test.Add*)
~~~
/usr/bin/our_test_add_server
~~~
- invoke rpc via
~~~
/usr/lib/qubes/qrexec-client-vm target_vm test.Add /usr/bin/our_test_add_client 1 2
~~~
and we should get "3" as the answer, once dom0 allows the call.
Qubes RPC internals
-------------------
See [QrexecProtocol](/doc/QrexecProtocol/).

View File

@ -1,53 +0,0 @@
---
layout: doc
title: System Documentation
permalink: /en/doc/system-doc/
redirect_from:
- /doc/SystemDoc/
- /wiki/SystemDoc/
---
System Documentation for Developers
===================================
Fundamentals
------------
* [Qubes OS Architecture Overview](/en/doc/qubes-architecture/)
* [Qubes OS Architecture Spec v0.3 [PDF]](/attachment/wiki/QubesArchitecture/arch-spec-0.3.pdf) (The original 2009 document that started this all...)
* [Security-critical elements of Qubes OS](/en/doc/security-critical-code/)
* Qubes RPC: [`qrexec` v2](/en/doc/qrexec/) ([R2 implementation](/en/doc/qrexec2-implementation/))
* Qubes RPC: [`qrexec` v3](/en/doc/qrexec3/) ([R3 implementation](/en/doc/qrexec3-implementation/)) (Odyssey)
* [Example for writing a `qrexec` service in Qubes OS (blog post)](http://theinvisiblethings.blogspot.com/2013/02/converting-untrusted-pdfs-into-trusted.html)
* [Qubes GUI virtualization protocol](/en/doc/gui/)
* [Networking in Qubes](/en/doc/qubes-net/)
* [Implementation of template sharing and updating](/en/doc/template-implementation/)
Services
--------
* [Inter-domain file copying](/en/doc/qfilecopy/) (deprecates [`qfileexchgd`](/en/doc/qfileexchgd/))
* [Dynamic memory management in Qubes](/en/doc/qmemman/)
* [Implementation of DisposableVMs](/en/doc/dvm-impl/)
* [Article about disposable VMs](http://theinvisiblethings.blogspot.com/2010/06/disposable-vms.html)
* [Dom0 secure update mechanism](/en/doc/dom0-secure-updates/)
* VM secure update mechanism (forthcoming)
Debugging
---------
* [Profiling python code](/en/doc/profiling/)
* [Test environment in separate machine for automatic tests](/en/doc/test-bench/)
* [Automated tests](/en/doc/automated-tests/)
* [VM-dom0 internal configuration interface](/en/doc/vm-interface/)
* [Debugging Windows VMs](/en/doc/windows-debugging/)
Building
--------
* [Building Qubes](/en/doc/qubes-builder/) (["API" Details](/en/doc/qubes-builder-details/))
* [Development Workflow](/en/doc/development-workflow/)
* [KDE Dom0 packages for Qubes](/en/doc/kde-dom0/)
* [How to build Qubes installation ISO](/en/doc/installation-iso-building/)
* [Building Qubes OS 3.0 ISO](/en/doc/qubes-r3-building/)
* [Building USB passthrough support (experimental)](/en/doc/usbvm/)
* [Building a TemplateVM based on a new OS (ArchLinux example)](/en/doc/building-non-fedora-template/)
* [Building the Archlinux Template](/en/doc/building-archlinux-template/)

164
en/doc.md
View File

@ -1,164 +0,0 @@
---
layout: doc
title: Qubes OS Documentation
permalink: /en/doc/
redirect_from:
- /doc/
- "/doc/UserDoc/"
- "/wiki/UserDoc/"
- "/doc/QubesDocs/"
- "/wiki/QubesDocs/"
---
Qubes OS Documentation
======================
The Basics
----------
* [A Simple Introduction to Qubes](/en/intro/)
* [Getting Started](/en/doc/getting-started/)
* [Users' FAQ](/en/doc/user-faq/)
* [Mailing Lists](/en/doc/mailing-lists/)
* [Further reading: How is Qubes different from...?](http://blog.invisiblethings.org/2012/09/12/how-is-qubes-os-different-from.html)
* [Further reading: Why Qubes is more than a collection of VMs](http://www.invisiblethingslab.com/resources/2014/Software_compartmentalization_vs_physical_separation.pdf)
Choosing Your Hardware
----------------------
* [System Requirements](/en/doc/system-requirements/)
* [Hardware Compatibility List (HCL)](/hcl)
* Qubes Certified Laptops ([coming soon!](https://twitter.com/Puri_sm/status/644963433293717504))
Installing Qubes
----------------
* [Use Qubes without installing: Qubes Live USB (alpha)](https://groups.google.com/d/msg/qubes-users/IQdCEpkooto/iyMh3LuzCAAJ)
* [How to Install Qubes](/en/doc/installation-guide/)
* [Qubes Downloads](/downloads/)
* [Why and How to Verify Signatures](/en/doc/verifying-signatures/)
* [Security Considerations when Installing](/en/doc/install-security/)
Common Tasks
------------
* [Copying and Pasting Text Between Domains](/en/doc/copy-paste/)
* [Copying and Moving Files Between Domains](/en/doc/copying-files/)
* [Copying Files to and from dom0](/en/doc/copy-to-dom0/)
* [Mounting USB Drives to AppVMs](/en/doc/stick-mounting/)
* [Updating Software in dom0](/en/doc/software-update-dom0/)
* [Updating and Installing Software in VMs](/en/doc/software-update-vm/)
* [Backup, Restoration, and Migration](/en/doc/backup-restore/)
* [Disposable VMs](/en/doc/dispvm/)
* [Managing Application Shortcuts](/en/doc/managing-appvm-shortcuts/)
* [Enabling Fullscreen Mode](/en/doc/full-screen-mode/)
Managing Operating Systems within Qubes
---------------------------------------
* [TemplateVMs](/en/doc/templates/)
* [Templates: Fedora - minimal](/en/doc/templates/fedora-minimal/)
* [Templates: Debian](/en/doc/templates/debian/)
* [Templates: Archlinux](/en/doc/templates/archlinux/)
* [Templates: Ubuntu](/en/doc/templates/ubuntu/)
* [Templates: Whonix](/en/doc/templates/whonix/)
* [Installing and Using Windows-based AppVMs (Qubes R2 Beta 3 and later)](/en/doc/windows-appvms/)
* [Creating and Using HVM and Windows Domains (Qubes R2+)](/en/doc/hvm-create/)
* [Advanced options and troubleshooting of Qubes Tools for Windows (R3)](/en/doc/windows-tools-3/)
* [Advanced options and troubleshooting of Qubes Tools for Windows (R2)](/en/doc/windows-tools-2/)
* [Uninstalling Qubes Tools for Windows 2.x](/en/doc/uninstalling-windows-tools-2/)
* [Upgrading the Fedora 20 Template](/en/doc/fedora-template-upgrade-20/)
* [Upgrading the Fedora 18 Template](/en/doc/fedora-template-upgrade-18/)
* [Tips for Using Linux in an HVM](/en/doc/linux-hvm-tips/)
* [Creating NetBSD VM](https://groups.google.com/group/qubes-devel/msg/4015c8900a813985)
Security Guides
---------------
* [Qubes OS Project Security Information](/en/security/)
* [Security Guidelines](/en/doc/security-guidelines/)
* [Understanding Qubes Firewall](/en/doc/qubes-firewall/)
* [Understanding and Preventing Data Leaks](/en/doc/data-leaks/)
* [Installing Anti Evil Maid](/en/doc/anti-evil-maid/)
* [Using Multi-factor Authentication with Qubes](/en/doc/multifactor-authentication/)
* [Using GPG more securely in Qubes: Split GPG](/en/doc/split-gpg/)
* [Configuring YubiKey for user authentication](/en/doc/yubi-key/)
* [Note regarding password-less root access in VM](/en/doc/vm-sudo/)
Privacy Guides
--------------
* [Whonix for privacy & anonymization](/en/doc/privacy/whonix/)
* [Install Whonix in Qubes](/en/doc/privacy/install-whonix/)
* [Updating Whonix in Qubes](/en/doc/privacy/updating-whonix/)
* [Customizing Whonix](/en/doc/privacy/customizing-whonix/)
* [Uninstall Whonix from Qubes](/en/doc/privacy/uninstall-whonix/)
* [How to Install a Transparent Tor ProxyVM (TorVM)](/en/doc/privacy/torvm/)
* [How to set up a ProxyVM as a VPN Gateway](/en/doc/privacy/vpn/)
Configuration Guides
--------------------
* [Configuration Files](/en/doc/config-files/)
* [Storing AppVMs on Secondary Drives](/en/doc/secondary-storage/)
* [Where are my external storage devices mounted?](/en/doc/external-device-mount-point/)
* [Resize Disk Image (AppVM and HVM)](/en/doc/resize-disk-image/)
* [Resize Root Disk Image (StandaloneVM and TemplateVM)](/en/doc/resize-root-disk-image/)
* [Installing ZFS in Qubes](/en/doc/zfs/)
* [Mutt Guide](/en/doc/mutt/)
* [Postfix Guide](/en/doc/postfix/)
* [Fetchmail Guide](/en/doc/fetchmail/)
* [Creating Custom NetVMs and ProxyVMs](http://theinvisiblethings.blogspot.com/2011/09/playing-with-qubes-networking-for-fun.html)
* [How to make proxy for individual tcp connection from networkless VM](https://groups.google.com/group/qubes-devel/msg/4ca950ab6d7cd11a)
* [HTTP filtering proxy in Qubes firewall VM](https://groups.google.com/group/qubes-devel/browse_thread/thread/5252bc3f6ed4b43e/d881deb5afaa2a6c#39c95d63fccca12b)
* [Adding Bridge Support to the NetVM (EXPERIMENTAL)](/en/doc/network-bridge-support/)
* [Assigning PCI Devices to AppVMs](/en/doc/assigning-devices/)
* [Enabling TRIM for SSD disks](/en/doc/disk-trim/)
* [Configuring a Network Printer](/en/doc/network-printer/)
* [Using External Audio Devices](/en/doc/external-audio/)
* [Booting with GRUB2 and GPT](https://groups.google.com/group/qubes-devel/browse_thread/thread/e4ac093cabd37d2b/d5090c20d92c4128#d5090c20d92c4128)
* [Rxvt Guide](/en/doc/rxvt/)
Customization Guides
--------------------
* [DispVM Customization](/en/doc/dispvm-customization/)
* [XFCE Installation in dom0](/en/doc/xfce/)
* [Customizing Fedora minimal templates](/en/doc/fedora-minimal-template-customization)
* [Customizing Windows 7 templates](/en/doc/windows-template-customization)
* [Customizing the GUI experience with KDE](https://groups.google.com/d/topic/qubes-users/KhfzF19NG1s/discussion)
* [Language Localization](/en/doc/language-localization/)
Troubleshooting
---------------
* [Home directory is out of disk space error](/en/doc/out-of-memory/)
* [Installing on system with new AMD GPU (missing firmware problem)](https://groups.google.com/group/qubes-devel/browse_thread/thread/e27a57b0eda62f76)
* [How to install an Nvidia driver in dom0](/en/doc/install-nvidia-driver/)
* [Solving problems with Macbook Air 2012](https://groups.google.com/group/qubes-devel/browse_thread/thread/b8b0d819d2a4fc39/d50a72449107ab21#8a9268c09d105e69)
* [Getting Sony Vaio Z laptop to work with Qubes](/en/doc/sony-vaio-tinkering/)
* [Getting Lenovo 450 to work with Qubes](/en/doc/lenovo450-tinkering/)
Reference Pages
---------------
* [Dom0 Command-Line Tools](/en/doc/dom0-tools/)
* [DomU Command-Line Tools](/en/doc/vm-tools/)
* [Glossary of Qubes Terminology](/en/doc/glossary/)
* [Qubes Service Framework](/en/doc/qubes-service/)
* [Command Execution in VMs (and Qubes RPC)](/en/doc/qrexec/)
For Developers
--------------
* [System Documentation](/en/doc/system-doc/)
* [Developers' FAQ](/en/doc/devel-faq/)
* [How to Contribute to the Qubes OS Project](/en/doc/contributing/)
* [Reporting Security Issues](/en/security/)
* [Reporting Bugs](/en/doc/reporting-bugs/)
* [Source Code](/en/doc/source-code/)
* [Qubes OS Version Scheme](/en/doc/version-scheme/)
* [Coding Guidelines](/en/doc/coding-style/)
* [Documentation Guidelines](/en/doc/doc-guidelines/)
* [Books for Developers](/en/doc/devel-books/)
* [Research Papers](/en/doc/qubes-research/)
* [Qubes OS License](/en/doc/license/)

View File

@ -1,36 +0,0 @@
---
layout: doc
title: Dom0 Tools
permalink: /en/doc/dom0-tools/
redirect_from:
- /doc/DomZeroTools/
- /wiki/DomZeroTools/
---
QVM-tools:
- [qubes-dom0-update](/en/doc/dom0-tools/qubes-dom0-update/)
- [qubes-prefs](/en/doc/dom0-tools/qubes-prefs/)
- [qvm-add-appvm](/en/doc/dom0-tools/qvm-add-appvm/)
- [qvm-add-template](/en/doc/dom0-tools/qvm-add-template/)
- [qvm-backup-restore](/en/doc/dom0-tools/qvm-backup-restore/)
- [qvm-backup](/en/doc/dom0-tools/qvm-backup/)
- [qvm-block](/en/doc/dom0-tools/qvm-block/)
- [qvm-clone](/en/doc/dom0-tools/qvm-clone/)
- [qvm-create-default-dvm](/en/doc/dom0-tools/qvm-create-default-dvm/)
- [qvm-create](/en/doc/dom0-tools/qvm-create/)
- [qvm-firewall](/en/doc/dom0-tools/qvm-firewall/)
- [qvm-grow-private](/en/doc/dom0-tools/qvm-grow-private/)
- [qvm-ls](/en/doc/dom0-tools/qvm-ls/)
- [qvm-kill](/en/doc/dom0-tools/qvm-kill/)
- [qvm-pci](/en/doc/dom0-tools/qvm-pci/)
- [qvm-prefs](/en/doc/dom0-tools/qvm-prefs/)
- [qvm-remove](/en/doc/dom0-tools/qvm-remove/)
- [qvm-revert-template-changes](/en/doc/dom0-tools/qvm-revert-template-changes/)
- [qvm-run](/en/doc/dom0-tools/qvm-run/)
- [qvm-service](/en/doc/dom0-tools/qvm-service/)
- [qvm-shutdown](/en/doc/dom0-tools/qvm-shutdown/)
- [qvm-start](/en/doc/dom0-tools/qvm-start/)
- [qvm-sync-appmenus](/en/doc/dom0-tools/qvm-sync-appmenus/)
- [qvm-template-commit](/en/doc/dom0-tools/qvm-template-commit/)

View File

@ -1,16 +0,0 @@
---
layout: doc
title: VM Tools
permalink: /en/doc/vm-tools/
redirect_from:
- /doc/VmTools/
- /wiki/VmTools/
---
VM tools:
- [qvm-copy-to-vm](/en/doc/vm-tools/qvm-copy-to-vm/)
- [qvm-open-in-dvm](/en/doc/vm-tools/qvm-open-in-dvm/)
- [qvm-open-in-vm](/en/doc/vm-tools/qvm-open-in-vm/)
- [qvm-run](/en/doc/vm-tools/qvm-run/)

View File

@ -0,0 +1,37 @@
---
layout: doc
title: Qubes-Certified Laptops
permalink: /doc/certified-laptops/
---
*NOTE: This is just a mockup*
Qubes-certified Laptops
============================================
Qubes-certified laptops are laptops that have been tested by the Qubes developers to ensure compatibility with all of Qubes' features.
We aim to partner with a few select computer makers to ensure that Qubes is compatible with them, so that new users have a clear path towards getting started with Qubes if they desire. We aim for these makers to be as diverse as possible in terms of geography, cost, and availability.
Purism Librem 13
----------------------------
[![image of Librem 13](https://puri.sm/wp-content/uploads/2015/07/librem-13-700x490-20150721.png)](https://puri.sm/librem-13/)
For users who now seek to buy a Librem 13, there will be an option to have Qubes pre-installed. This will include all the necessary tweaks for maximum compatibility with Qubes.
In addition, the Qubes team will receive a small portion of the revenue from any Librem 13 sale that comes with Qubes pre-installed.
For existing Librem 13 users, please follow these instructions to ensure maximum compatibility with Qubes:
1. In `dom0`, open a terminal and remove unsupported X drivers:
sudo yum remove xorg-x11-drv-{trident,i740,apm,i128,cirrus,tdfx,qxl,rendition,sis,glint,siliconmotion,r128,s3virge,mga,mach64,savage}
2. Update X drivers:
sudo qubes-dom0-update --releasever=3.1 --enablerepo=qubes-dom0-current-testing --action=update 'xorg-x11-drv-*'
3. Enable newer kernel:
sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable kernel

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Hardware Compatibility List (HCL)
permalink: /en/doc/hcl/
permalink: /doc/hcl/
redirect_from:
- /en/doc/hcl/
- /doc/HCL/
- /wiki/HCL/
- /wiki/HCLR1/
@ -16,7 +17,7 @@ The [HCL](/hcl) is a compilation of reports generated and submitted by users acr
**Note:**
Except in the case of developer-reported entries, the Qubes team has not independently verified the accuracy of these reports.
Please first consult the data sheets (CPU, chipset, motherboard) prior to buying new hardware for Qubes.
Meet the [SystemRequirements](/en/doc/system-requirements/) and search particular for support of:
Meet the [SystemRequirements](/doc/system-requirements/) and search in particular for support of:
- HVM ("AMD virtualization (AMD-V)", "Intel virtualization (VT-x)", "VIA virtualization (VIA VT)")
- IOMMU ("AMD I/O Virtualization Technology (AMD-Vi)", "Intel Virtualization Technology for Directed I/O (VT-d)")
@ -36,7 +37,7 @@ In order to generate a HCL report in Qubes, simply open a terminal in dom0 (KDE:
(Note: If you are working with a new Qubes installation, you may need to update your system in order to download this script.)
You are encouraged to submit your HCL report for the benefit of further Qubes development and other users.
If you would like to submit your HCL report, please send the **HCL Info** `.txt` file to [\`qubes-users@googlegroups.com\`](/en/doc/qubes-lists/) with the subject `HCL - <your machine model name>`.
If you would like to submit your HCL report, please send the **HCL Info** `.txt` file to [\`qubes-users@googlegroups.com\`](/doc/mailing-lists/) with the subject `HCL - <your machine model name>`.
Please include any useful information about any Qubes features you may have tested (see the legend below), as well as general machine compatibility (video, networking, sleep, etc.).
If you have problems with your hardware, try a different kernel in the [Troubleshooting menu](/doc/InstallationGuideR2rc1/#troubleshooting-problems-with-the-installer).
Please consider sending the **HCL Support Files** `.cpio.gz` file as well.

View File

@ -1,8 +1,9 @@
---
layout: doc
title: System Requirements
permalink: /en/doc/system-requirements/
permalink: /doc/system-requirements/
redirect_from:
- /en/doc/system-requirements/
- /doc/SystemRequirements/
- /wiki/SystemRequirements/
---
@ -23,7 +24,7 @@ Recommended
- Fast SSD (strongly recommended)
- Intel GPU (strongly preferred)
- Nvidia GPUs may require significant [troubleshooting](/en/doc/install-nvidia-driver/).
- Nvidia GPUs may require significant [troubleshooting](/doc/install-nvidia-driver/).
- ATI GPUs have not been formally tested (but see the [Hardware Compatibility List](/hcl/)).
- Intel VT-x or AMD-v technology (required for running HVM domains, such as Windows-based AppVMs)
- Intel VT-d or AMD IOMMU technology (required for effective isolation of network VMs)

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Installation Security
permalink: /en/doc/install-security/
permalink: /doc/install-security/
redirect_from:
- /en/doc/install-security/
- /doc/InstallSecurity/
- /wiki/InstallSecurity/
---
@ -57,16 +58,18 @@ Cons:
* Fixed capacity. (If the size of the ISO is larger than your disc, it will be
inconvenient.)
* Passthrough burning is not supported by Xen. (This mainly applies if you're
upgrading from a previous version of Qubes.) Currently, the only options for
burning optical discs in Qubes are:
* Passthrough recording (a.k.a., "burning") is not supported by Xen. (This
mainly applies if you're upgrading from a previous version of Qubes.)
Currently, the only options for recording optical discs (e.g., CDs, DVDs,
BRDs) in Qubes are:
1. Use a USB optical drive.
2. Attach a SATA optical drive to a secondary SATA controller, then assign
this secondary SATA controller to an AppVM.
3. Use a SATA optical drive attached to dom0.
(Option 3 violates the Qubes security model since it entails transferring
an untrusted ISO to dom0 in order to burn it to disc, which leaves only
the other two options.)
(Option 3 violates the Qubes security model since it entails transferring an
untrusted ISO to dom0 in order to burn it to disc, which leaves only the
other two options.)
[verify]: https://www.qubes-os.org/doc/VerifyingSignatures/
[trusting-trust]: http://www.acm.org/classics/sep95/

View File

@ -1,33 +1,18 @@
---
layout: doc
title: Installation Guide
permalink: /en/doc/installation-guide/
permalink: /doc/installation-guide/
redirect_from:
- /en/doc/installation-guide/
- /doc/InstallationGuide/
- /wiki/InstallationGuide/
redirect_from:
-
- /doc/InstallationGuideR1/
redirect_from:
-
- /doc/InstallationGuideR2B1/
redirect_from:
-
- /doc/InstallationGuideR2B2/
redirect_from:
-
- /doc/InstallationGuideR2B3/
redirect_from:
-
- /doc/InstallationGuideR2rc1/
redirect_from:
-
- /doc/InstallationGuideR2rc2/
redirect_from:
-
- /doc/InstallationGuideR3.0rc1/
redirect_from:
-
- /doc/InstallationGuideR3.0rc2/
---
@ -51,7 +36,7 @@ Note: We don't recommend installing Qubes in a virtual machine! It will likely n
Download installer ISO
----------------------
See [this page](/doc/QubesDownloads/) for ISO downloads. Remember, we have absolutely no control over those servers, and so you should be assuming that they might be compromised, or just be serving a compromised ISOs because their operators decided so, for whatever reason. Always verify the digital signature on the downloaded ISO. See this [page](/en/doc/verifying-signatures/) for more info about how to download and verify our GPG keys, and then verify the downloaded ISO:
See [this page](/doc/QubesDownloads/) for ISO downloads. Remember, we have absolutely no control over those servers, so you should assume that they might be compromised, or simply be serving a compromised ISO because their operators decided to, for whatever reason. Always verify the digital signature on the downloaded ISO. See this [page](/doc/verifying-signatures/) for more info about how to download and verify our GPG keys, and then verify the downloaded ISO:
gpg -v Qubes-R2-x86_64-DVD.iso.asc
@ -88,9 +73,9 @@ See [release notes](/doc/releases/) of appropriate version.
Getting Help
------------
- **User manuals are [here](/en/doc/).** (Strongly recommended!)
- **User manuals are [here](/doc/).** (Strongly recommended!)
- Developers documentation (normally not needed by users) is [here](/en/doc/system-doc/)
- Developers documentation (normally not needed by users) is [here](/doc/system-doc/)
- If you don't find answer in the sources given above, write to the *qubes-users* mailing list (you don't need to be subscribed to the list, just send email to the address given below):
- [https://groups.google.com/group/qubes-users](https://groups.google.com/group/qubes-users)

View File

@ -1,12 +1,11 @@
---
layout: doc
title: Upgrading to R2
permalink: /en/doc/upgrade-to-r2/
permalink: /doc/upgrade-to-r2/
redirect_from:
- /en/doc/upgrade-to-r2/
- /doc/UpgradeToR2/
- /doc/UpgradeToR2rc1/
redirect_from:
-
- /wiki/UpgradeToR2rc1/
---
@ -15,7 +14,7 @@ Upgrading Qubes R2 Beta 3 to R2
Current Qubes R2 Beta 3 (R2B3) systems can be upgraded in-place to the latest R2 (R2) release by following the procedure below.
**Before attempting either an in-place upgrade or a clean installation, we strongly recommend that users back up the system by using the built-in [backup tool](/en/doc/backup-restore/).**
**Before attempting either an in-place upgrade or a clean installation, we strongly recommend that users back up the system by using the built-in [backup tool](/doc/backup-restore/).**
Upgrade Template and Standalone VM(s)
-------------------------------------
@ -26,7 +25,7 @@ Upgrade Template and Standalone VM(s)
While technically it is possible to use the old Fedora 18 template on R2, it is strongly recommended to upgrade all Template VMs and Standalone VMs, because Fedora 18 no longer receives security updates.
By default, in Qubes R2, there is only one Template VM, however users are free to create more Template VMs for special purposes, as well as Standalone VMs. If more than one template and/or Standalone VMs are used, then it is recommended to upgrade/replace all of them. More information on using multiple Template VMs, as well as Standalone VMs, can be found [here](/en/doc/software-update-vm/).
By default, in Qubes R2, there is only one Template VM, however users are free to create more Template VMs for special purposes, as well as Standalone VMs. If more than one template and/or Standalone VMs are used, then it is recommended to upgrade/replace all of them. More information on using multiple Template VMs, as well as Standalone VMs, can be found [here](/doc/software-update-vm/).
Upgrading dom0
--------------

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Upgrading to R2B1
permalink: /en/doc/upgrade-to-r2b1/
permalink: /doc/upgrade-to-r2b1/
redirect_from:
- /en/doc/upgrade-to-r2b1/
- /doc/UpgradeToR2B1/
- /wiki/UpgradeToR2B1/
---

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Upgrading to R2B2
permalink: /en/doc/upgrade-to-r2b2/
permalink: /doc/upgrade-to-r2b2/
redirect_from:
- /en/doc/upgrade-to-r2b2/
- /doc/UpgradeToR2B2/
- /wiki/UpgradeToR2B2/
---
@ -10,7 +11,7 @@ redirect_from:
Upgrading Qubes R1 to R2 (beta2)
================================
Existing users of Qubes R1 (but not R1 betas!) can upgrade their systems to the latest R2 beta release by following the procedure below. As usual, it is advisable to backup the system before proceeding with the upgrade. While it is possible to upgrade the system **it is strongly recommended to reinstall it**. You will preserve all your data and settings thanks to [backup and restore tools](/en/doc/backup-restore/).
Existing users of Qubes R1 (but not R1 betas!) can upgrade their systems to the latest R2 beta release by following the procedure below. As usual, it is advisable to back up the system before proceeding with the upgrade. While it is possible to upgrade the system, **it is strongly recommended to reinstall it**. You will preserve all your data and settings thanks to [backup and restore tools](/doc/backup-restore/).
Upgrade all Template and Standalone VM(s)
-----------------------------------------

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Upgrading to R2B3
permalink: /en/doc/upgrade-to-r2b3/
permalink: /doc/upgrade-to-r2b3/
redirect_from:
- /en/doc/upgrade-to-r2b3/
- /doc/UpgradeToR2B3/
- /wiki/UpgradeToR2B3/
---
@ -10,16 +11,16 @@ redirect_from:
Upgrading Qubes R2 Beta 2 to R2 Beta 3
======================================
Current Qubes R2 Beta 2 (R2B2) systems can be upgraded in-place to the latest R2 Beta 3 (R2B3) release by following the procedure below. However, upgrading in-place is riskier than performing a clean installation, since there are more things which can go wrong. For this reason, **we strongly recommended that users perform a [clean installation](/en/doc/installation-guide/) of Qubes R2 Beta 3**.
Current Qubes R2 Beta 2 (R2B2) systems can be upgraded in-place to the latest R2 Beta 3 (R2B3) release by following the procedure below. However, upgrading in-place is riskier than performing a clean installation, since there are more things which can go wrong. For this reason, **we strongly recommend that users perform a [clean installation](/doc/installation-guide/) of Qubes R2 Beta 3**.
**Before attempting either an in-place upgrade or a clean installation, we strongly recommend that users back up the system by using the built-in [backup tool](/en/doc/backup-restore/).**
**Before attempting either an in-place upgrade or a clean installation, we strongly recommend that users back up the system by using the built-in [backup tool](/doc/backup-restore/).**
Experienced users may be comfortable accepting the risks of upgrading in-place. Such users may wish to first attempt an in-place upgrade. If nothing goes wrong, then some time and effort will have been saved. If something does go wrong, then the user can simply perform a clean installation, and no significant loss will have occurred (as long as the user [backed up](/en/doc/backup-restore/) correctly!).
Experienced users may be comfortable accepting the risks of upgrading in-place. Such users may wish to first attempt an in-place upgrade. If nothing goes wrong, then some time and effort will have been saved. If something does go wrong, then the user can simply perform a clean installation, and no significant loss will have occurred (as long as the user [backed up](/doc/backup-restore/) correctly!).
Upgrade all Template and Standalone VM(s)
-----------------------------------------
By default, in Qubes R2, there is only one Template VM, however users are free to create more Template VMs for special purposes, as well as Standalone VMs. More information on using multiple Template VMs, as well as Standalone VMs, can be found [here](/en/doc/software-update-vm/). The steps described in this section should be repeated in *all* user's Template and Standalone VMs.
By default, in Qubes R2, there is only one Template VM, however users are free to create more Template VMs for special purposes, as well as Standalone VMs. More information on using multiple Template VMs, as well as Standalone VMs, can be found [here](/doc/software-update-vm/). The steps described in this section should be repeated in *all* of the user's Template and Standalone VMs.
It is critical to complete this step **before** proceeding to the dom0 upgrade. Otherwise you will most likely end up with an unusable system.

View File

@ -1,8 +1,9 @@
---
layout: doc
title: Upgrading to R3.0
permalink: /en/doc/upgrade-to-r3.0/
permalink: /doc/upgrade-to-r3.0/
redirect_from:
- /en/doc/upgrade-to-r3.0/
- /doc/UpgradeToR3.0/
- /doc/UpgradeToR3.0rc1/
---
@ -12,16 +13,16 @@ Upgrading Qubes R2 to R3.0
**These instructions are highly experimental; the official way to upgrade from R2 is to back up the data and reinstall the system. Use at your own risk!**
Current Qubes R3.0 (R3.0) systems can be upgraded in-place to the latest R3.0 by following the procedure below. However, upgrading in-place is riskier than performing a clean installation, since there are more things which can go wrong. For this reason, **we strongly recommended that users perform a [clean installation](/en/doc/installation-guide/) of Qubes R3.0**.
Current Qubes R2 systems can be upgraded in-place to the latest R3.0 release by following the procedure below. However, upgrading in-place is riskier than performing a clean installation, since there are more things which can go wrong. For this reason, **we strongly recommend that users perform a [clean installation](/doc/installation-guide/) of Qubes R3.0**.
**Before attempting either an in-place upgrade or a clean installation, we strongly recommend that users back up the system by using the built-in [backup tool](/en/doc/backup-restore/).**
**Before attempting either an in-place upgrade or a clean installation, we strongly recommend that users back up the system by using the built-in [backup tool](/doc/backup-restore/).**
Experienced users may be comfortable accepting the risks of upgrading in-place. Such users may wish to first attempt an in-place upgrade. If nothing goes wrong, then some time and effort will have been saved. If something does go wrong, then the user can simply perform a clean installation, and no significant loss will have occurred (as long as the user [backed up](/en/doc/backup-restore/) correctly!).
Experienced users may be comfortable accepting the risks of upgrading in-place. Such users may wish to first attempt an in-place upgrade. If nothing goes wrong, then some time and effort will have been saved. If something does go wrong, then the user can simply perform a clean installation, and no significant loss will have occurred (as long as the user [backed up](/doc/backup-restore/) correctly!).
Upgrade all Template and Standalone VM(s)
-----------------------------------------
By default, in Qubes R2, there is only one Template VM, however users are free to create more Template VMs for special purposes, as well as Standalone VMs. More information on using multiple Template VMs, as well as Standalone VMs, can be found [here](/en/doc/software-update-vm/). The steps described in this section should be repeated in **all** user's Template and Standalone VMs.
By default, in Qubes R2, there is only one Template VM, however users are free to create more Template VMs for special purposes, as well as Standalone VMs. More information on using multiple Template VMs, as well as Standalone VMs, can be found [here](/doc/software-update-vm/). The steps described in this section should be repeated in **all** of the user's Template and Standalone VMs.
It is critical to complete this step **before** proceeding to the dom0 upgrade. Otherwise you will most likely end up with an unusable system.

Some files were not shown because too many files have changed in this diff.