resolved merge conflicts

Commit 8f7fb84a1f by GammaSQ, 2019-01-05 13:36:34 +01:00
105 changed files with 4814 additions and 2016 deletions

View file

@ -0,0 +1,36 @@
---
layout: doc
title: Changing your Time Zone
permalink: /doc/change-time-zone/
---
# Changing your Time Zone #
## Qubes 4.0 ##
### Command line ###
If you use the i3 window manager, or if you would prefer to change the system's
time zone in a terminal, you can issue the `timedatectl` command with the option
`set-timezone`.
For example, to set the system's time zone to Berlin, Germany, type in a dom0
terminal:

~~~
$ sudo timedatectl set-timezone 'Europe/Berlin'
~~~
You can list the available time zones with the option `list-timezones` and show
the current settings of the system clock and time zone with the option `status`.
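For instance, a quick way to find the exact name of your time zone is to filter the list (the grep pattern here is just an illustration):

~~~
$ timedatectl list-timezones | grep -i berlin
Europe/Berlin
~~~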
Example output of `timedatectl status` on a system with the time zone set to
Europe/Berlin:
~~~
[user@dom0 ~]$ timedatectl status
      Local time: Sun 2018-10-14 06:20:00 CEST
  Universal time: Sun 2018-10-14 04:20:00 UTC
        RTC time: Sun 2018-10-14 04:20:00
       Time zone: Europe/Berlin (CEST, +0200)
 Network time on: no
NTP synchronized: no
 RTC in local TZ: no
~~~

View file

@ -9,8 +9,11 @@ redirect_from:
- "/wiki/UserDoc/ConfigFiles/"
---
Configuration Files
===================
Qubes-specific VM config files
------------------------------
These files are placed in /rw, which survives a VM restart.
That way, they can be used to customize a single VM instead of all VMs based on the same template.
@ -28,7 +31,7 @@ The scripts here all run as root.
- `/rw/config/qubes-ip-change-hook` - script runs in NetVM after every external IP change and on "hardware" link status change.
- (R4.0 only) in ProxyVMs (or AppVMs with `qubes-firewall` service enabled), scripts placed in the following directories will be executed in the listed order followed by `qubes-firewall-user-script` after each firewall update.
Good place to write your own custom firewall rules.
~~~
@ -37,7 +40,7 @@ The scripts here all run as root.
/rw/config/qubes-firewall-user-script
~~~
- (R3.2 only) `/rw/config/qubes-firewall-user-script` - script runs in ProxyVM (or AppVM with `qubes-firewall` service enabled) after each firewall update.
Good place to write your own custom firewall rules; see the sketch after this list.
- `/rw/config/suspend-module-blacklist` - list of modules (one per line) to be unloaded before system goes to sleep.
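As a sketch, a minimal `/rw/config/qubes-firewall-user-script` might look like this (the address is a hypothetical placeholder):

~~~
#!/bin/sh
# Drop any traffic forwarded through this ProxyVM to one particular host
iptables -I FORWARD -d 192.0.2.10 -j DROP
~~~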
@ -48,8 +51,9 @@ Note that scripts need to be executable (chmod +x) to be used.
Also, take a look at [bind-dirs](/doc/bind-dirs) for instructions on how to easily modify arbitrary system files in an AppVM and have those changes persist.
GUI and audio configuration in dom0
-----------------------------------
The GUI configuration file `/etc/qubes/guid.conf` is one of a few files not managed by qubes-prefs or the Qubes Manager tool.
Sample config (included in default installation):
@ -97,3 +101,4 @@ Currently supported settings:
- `audio_low_latency` - force low-latency audio mode (about 40ms compared to 200-500ms by default).
Note that this will cause much higher CPU usage in dom0.

View file

@ -26,6 +26,8 @@ If you run `fstrim --all` inside a TemplateVM, in a worst case the `discard` can
If discards are not supported at any one of those layers, it will not make it to the underlying physical device.
There are some security implications to permitting TRIM (read for example [this article](https://asalor.blogspot.com/2011/08/trim-dm-crypt-problems.html)), but in most cases they are not exploitable.
Conversely, TRIM can improve security against local forensics when using SSDs, because with TRIM enabled deleting data (usually) results in the actual data being erased quickly, rather than remaining in unallocated space indefinitely.
However, deletion is not guaranteed and can fail to happen without warning for a variety of reasons.
Configuration
@ -94,3 +96,17 @@ To enable TRIM support in dom0 with LUKS you need to:
5. To verify if discards are enabled you may use `dmsetup table` (confirm the line for your device mentions "discards") or just run `fstrim -av` (you should see a `/` followed by the number of bytes trimmed).
Swap Space
----------
By default TRIM is not enabled for swap.
To enable it add the `discard` flag to the options for the swap entry in `/etc/fstab`.
This may or may not actually improve performance.
If you only want the security against local forensics benefit of TRIM, you can use the `discard=once` option instead to perform the TRIM operation only once at boot.
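As a sketch, the resulting swap entry in `/etc/fstab` might then look like this (the device path follows the dmesg example below; yours may differ):

~~~
/dev/mapper/qubes_dom0-swap  none  swap  defaults,discard  0 0
~~~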
To verify that TRIM is enabled, check `dmesg` for what flags were enabled when the swap space was activated.
You should see something like the following:
~~~
Adding 32391164k swap on /dev/mapper/qubes_dom0-swap.  Priority:-2 extents:1 across:32391164k SSDscFS
~~~
The `s` indicates that the entire swap device will be trimmed at boot, and `c` indicates that individual pages are trimmed after they are no longer being used.

View file

@ -0,0 +1,37 @@
---
layout: doc
title: GUI Configuration and Troubleshooting
permalink: /doc/gui-configuration/
---
GUI Configuration and Troubleshooting
=====================================
Video RAM adjustment for high-resolution displays
-------------------------------------------------
**Problem:** You have a 4K external display, and when you connect it, you can't click on anything but a small area in the upper-right corner.
When a qube starts, a fixed amount of RAM is allocated to the graphics buffer called video RAM.
This buffer needs to be at least as big as the whole desktop, accounting for all displays that are or will be connected to the machine.
By default, it is as much as needed for the current display and an additional full HD (FHD) display (1920×1080 8 bit/channel RGBA).
This logic fails when the machine has a primary display in FHD resolution and, after some qubes have started, a 4K display is connected.
The buffer is too small, and internal desktop resize fails.
**Solution:** Increase the minimum size of the video RAM buffer.
```sh
qvm-features dom0 gui-videoram-min $(($WIDTH * $HEIGHT * 4 / 1024))
qvm-features dom0 gui-videoram-overhead 0
```
Where `$WIDTH`×`$HEIGHT` is the maximum desktop size that you anticipate needing.
For example, if you expect to use a 1080p display and a 4K display side-by-side, that is `(1920 + 3840) × 2160 × 4 / 1024 = 48600`, or slightly more than 48 MiB per qube.
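For that example, the concrete invocation would be:

```sh
qvm-features dom0 gui-videoram-min 48600
qvm-features dom0 gui-videoram-overhead 0
```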
After making these adjustments, the qubes need to be restarted.
The amount of memory allocated per qube is the maximum of:
- `gui-videoram-min`
- current display + `gui-videoram-overhead`
Default overhead is about 8 MiB, which is enough for a 1080p display (see above).
So, the `gui-videoram-overhead` zeroing is not strictly necessary; it only avoids allocating memory that will not be used.

View file

@ -16,8 +16,7 @@ By default, VMs kernels are provided by dom0. This means that:
3. You can **not** modify any of the above from inside a VM;
4. Installing additional kernel modules is cumbersome.
*Note* In the examples below, although the specific version numbers might be old, the commands have been verified on R3.2 and R4.0 with debian-9 and fedora-26 templates.
To select which kernel a given VM will use, you can either use Qubes Manager (VM settings, advanced tab), or the `qvm-prefs` tool:
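For example, a sketch of the latter using the R4.0 syntax (VM name and version are placeholders):

~~~
qvm-prefs <vmname> kernel <kernel-version>
~~~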
@ -208,7 +207,102 @@ mke2fs 1.42.12 (29-Aug-2014)
--> Done.
~~~
Using kernel installed in the VM (R4.0)
--------------------------------
Both debian-9 and fedora-26 templates already have grub and related tools preinstalled, so if you want to use one of the distribution kernels, all you need to do is clone either template to a new one, then:
~~~
qvm-prefs <clonetemplatename> virt_mode hvm
qvm-prefs <clonetemplatename> kernel ''
~~~
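For example, assuming a hypothetical clone named `fedora-26-vmkernel`:

~~~
qvm-clone fedora-26 fedora-26-vmkernel
qvm-prefs fedora-26-vmkernel virt_mode hvm
qvm-prefs fedora-26-vmkernel kernel ''
~~~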
If you'd like to use a different kernel than default, continue reading.
### Installing kernel in Fedora VM (R4.0)
Install whatever kernel you want.
You need to also ensure you have the `kernel-devel` package for the same kernel version installed.
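For a distribution kernel, that is typically:

~~~
sudo dnf install kernel kernel-devel
~~~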
If you are using a distribution kernel package (`kernel` package), the initramfs and kernel modules may be handled automatically.
If you are using a manually built kernel, you need to handle this on your own.
Take a look at the `dkms` documentation; in particular, the `dkms autoinstall` command may be useful.
If you did not see the `kernel` install rebuild your initramfs, or are using a manually built kernel, you will need to rebuild it yourself.
Replace the version numbers in the example below with the ones appropriate to the kernel you are installing:
~~~
sudo dracut -f /boot/initramfs-4.15.14-200.fc26.x86_64.img 4.15.14-200.fc26.x86_64
~~~
Once the kernel is installed, you need to create a GRUB configuration.
You may want to adjust some settings in `/etc/default/grub`; for example, lower `GRUB_TIMEOUT` to speed up VM startup.
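For instance, the relevant line in `/etc/default/grub` might be changed to:

~~~
GRUB_TIMEOUT=0
~~~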
Then, you need to generate the actual configuration:
In Fedora it can be done using the `grub2-mkconfig` tool:
~~~
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
~~~
You can safely ignore this error message:
~~~
grub2-probe: error: cannot find a GRUB drive for /dev/mapper/dmroot. Check your device.map
~~~
Then shut down the VM.
**Note:** You may also use `PV` mode instead of `HVM` but this is not recommended for security purposes.
If you require `PV` mode, install `grub2-xen` in dom0 and change the template's kernel to `pvgrub2`.
Booting to a kernel inside the template is not supported under `PVH`.
### Installing kernel in Debian VM (R4.0)
Install whatever kernel you want, making sure to include the headers.
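For a distribution kernel, that is typically:

~~~
sudo apt install linux-image-amd64 linux-headers-amd64
~~~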
If you are using a distribution kernel package (`linux-image-amd64` package), the initramfs and kernel modules should be handled automatically.
If not, or you are building the kernel manually, do this using `dkms` and `initramfs-tools`:
~~~
sudo dkms autoinstall -k <kernel-version>   # replace <kernel-version> with the actual kernel version
sudo update-initramfs -u
~~~
The output should look like this:
~~~
$ sudo dkms autoinstall -k 3.16.0-4-amd64

u2mfn:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/3.16.0-4-amd64/updates/dkms/
depmod....

DKMS: install completed.

$ sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-3.16.0-4-amd64
~~~
When the kernel is installed, you need to create a GRUB configuration.
You may want to adjust some settings in `/etc/default/grub`; for example, lower `GRUB_TIMEOUT` to speed up VM startup.
Then, you need to generate the actual configuration with the `update-grub2` tool:
~~~
sudo mkdir /boot/grub
sudo update-grub2
~~~
You can safely ignore this error message:
~~~
grub2-probe: error: cannot find a GRUB drive for /dev/mapper/dmroot. Check your device.map
~~~
Then shut down the VM.
**Note:** You may also use `PV` mode instead of `HVM` but this is not recommended for security purposes.
If you require `PV` mode, install `grub2-xen` in dom0 and change the template's kernel to `pvgrub2`.
Booting to a kernel inside the template is not supported under `PVH`.
Using kernel installed in the VM (R3.2)
--------------------------------
**This option is available only in Qubes R3.1 or newer**
@ -226,7 +320,7 @@ To make it happen, at a high level you need to:
**WARNING: When using a kernel from within a VM, the `kernelopts` parameter is ignored.**
### Installing PV GRUB2 (R3.2)
Simply execute:
@ -234,7 +328,7 @@ Simply execute:
sudo qubes-dom0-update grub2-xen
~~~
### Installing kernel in Fedora VM (R3.2)
In a Fedora based VM, you need to install the `qubes-kernel-vm-support` package.
This package includes the additional kernel module and initramfs addition required to start a Qubes VM (for details see [template implementation](/doc/template-implementation/)).
@ -251,10 +345,11 @@ You need to also ensure you have the `kernel-devel` package for the same kernel
If you are using a distribution kernel package (`kernel` package), the initramfs and kernel modules may be handled automatically.
If you are using a manually built kernel, you need to handle this on your own.
Take a look at the `dkms` documentation; in particular, the `dkms autoinstall` command may be useful.
If you did not see the `kernel` install rebuild your initramfs, or are using a manually built kernel, you will need to rebuild it yourself.
Replace the version numbers in the example below with the ones appropriate to the kernel you are installing:
~~~
sudo dracut -f /boot/initramfs-4.15.14-200.fc26.x86_64.img 4.15.14-200.fc26.x86_64
~~~
Once the kernel is installed, you need to create a GRUB configuration.
@ -280,7 +375,7 @@ This can take a while to complete- longer than your `qrexec_timeout` setting, wh
To confirm this is the case, see [Troubleshooting](/doc/managing-vm-kernel/#troubleshooting) below or just wait for five minutes and shutdown the VM.
It should respond normally on future boots.
### Installing kernel in Debian VM (R3.2)
In a Debian based VM, you need to install the `qubes-kernel-vm-support` package.
This package includes the additional kernel module and initramfs addition required to start a Qubes VM (for details see [template implementation](/doc/template-implementation/)).

View file

@ -30,7 +30,7 @@ As a result, installation of such third-party RPMs in a default template VM expo
(Again, it's not buggy or malicious drivers that we fear here, but rather malicious installation scripts for those drivers).
In order to mitigate this risk, one might consider creating a custom template (i.e. clone the original template) and then install the third-party, unverified drivers there.
Such template might then be made a DVM template for [DisposableVM creation](/doc/disposablevm/), which should allow one to print any document by right-clicking on it, choosing "Open in DisposableVM" and print from there.
This would allow printing documents from more trusted AppVMs (based on a trusted default template that is not poisoned by third-party printer drivers).
However, one should be aware that most (all?) network printing protocols are insecure, unencrypted protocols.

View file

@ -16,6 +16,7 @@ Resize Disk Image
There are several disk images which can be easily extended, but pay attention to the overall consumed space of your sparse/thin disk images.
See also [OS Specific Follow-up Instructions](/doc/resize-disk-image/#os-specific-follow-up-instructions) at the end of this page.
Since a TemplateBasedVM [inherits its system filesystem from the Template on which it is based](/getting-started/#appvms-qubes-and-templatevms), it is not possible to resize the system disk for a TemplateBasedVM.
### Template disk image (R4.0)

View file

@ -240,6 +240,24 @@ This way dom0 doesn't directly interact with potentially malicious target VMs;
and in the case of a compromised Salt VM, because they are temporary, the
compromise cannot spread from one VM to another.
In Qubes 3.2, this temporary VM is based on the default template.
Beginning with Qubes 4.0 and after [QSB #45], we implemented two changes:
1. Added the `management_dispvm` VM property, which specifies the DVM
Template that should be used for management, such as Salt
configuration. TemplateBasedVMs inherit this property from their
parent TemplateVMs. If the value is not set explicitly, the default
is taken from the global `management_dispvm` property. The
VM-specific property is set with the `qvm-prefs` command, while the
global property is set with the `qubes-prefs` command.
2. Created the `default-mgmt-dvm` DVM Template, which is hidden from
the menu (to avoid accidental use), has networking disabled, and has
a black label (the same as TemplateVMs). This VM is set as the global
`management_dispvm`. Keep in mind that this DVM template has full control
over the VMs it's used to manage.
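For example, a sketch of inspecting the global default and overriding it for a single VM (the VM name `work` is illustrative):

```
qubes-prefs management_dispvm
qvm-prefs work management_dispvm default-mgmt-dvm
```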
## Writing Your Own Configurations
Let's start with a quick example:
@ -353,11 +371,107 @@ Ensures the specified domain is running:
qvm.running:
- name: salt-test4
## Virtual Machine Formulae
You can use these formulae to download, install, and configure VMs in Qubes.
These formulae use pillar data to define default VM names and configuration details.
The default settings can be overridden in the pillar data located in:
```
/srv/pillar/base/qvm/init.sls
```
In dom0, you can apply a single state with `sudo qubesctl state.sls STATE_NAME`.
For example, `sudo qubesctl state.sls qvm.personal` will create a `personal` VM (if it does not already exist) with all its dependencies (TemplateVM, `sys-firewall`, and `sys-net`).
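If you instead want to apply the full highstate to dom0 and all VMs, you can run the following (this targets every VM, so it can take a while):

```
sudo qubesctl --all state.highstate
```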
### Available states
#### `qvm.sys-net`
System NetVM
#### `qvm.sys-usb`
System UsbVM
#### `qvm.sys-net-with-usb`
System UsbVM bundled into NetVM. Do not enable together with `qvm.sys-usb`.
#### `qvm.usb-keyboard`
Enable USB keyboard together with USBVM, including for early system boot (for the LUKS passphrase).
This state implicitly creates a USBVM (`qvm.sys-usb` state), if not already done.
#### `qvm.sys-firewall`
System firewall ProxyVM
#### `qvm.sys-whonix`
Whonix gateway ProxyVM
#### `qvm.personal`
Personal AppVM
#### `qvm.work`
Work AppVM
#### `qvm.untrusted`
Untrusted AppVM
#### `qvm.vault`
Vault AppVM with no NetVM enabled.
#### `qvm.default-dispvm`
Default DisposableVM template - fedora-26-dvm AppVM
#### `qvm.anon-whonix`
Whonix workstation AppVM.
#### `qvm.whonix-ws-dvm`
Whonix workstation AppVM for Whonix DisposableVMs.
#### `qvm.updates-via-whonix`
Set up UpdatesProxy to route all template updates through Tor (`sys-whonix` here).
#### `qvm.template-fedora-21`
Fedora-21 TemplateVM
#### `qvm.template-fedora-21-minimal`
Fedora-21 minimal TemplateVM
#### `qvm.template-debian-7`
Debian 7 (wheezy) TemplateVM
#### `qvm.template-debian-8`
Debian 8 (jessie) TemplateVM
#### `qvm.template-whonix-gw`
Whonix Gateway TemplateVM
#### `qvm.template-whonix-ws`
Whonix Workstation TemplateVM
## The `qubes` Pillar Module
Additional pillar data is available to ease targeting configurations (for example all templates).
**Note:** This list is subject to change in future releases.
### `qubes:type`
@ -423,11 +537,10 @@ The solution is to shut down the updateVM between each install:
* [Top files][salt-doc-top]
* [Jinja templates][jinja]
* [Qubes specific modules][salt-qvm-doc]
* [Formulas for default Qubes VMs][salt-virtual-machines-states]
[salt-doc]: https://docs.saltstack.com/en/latest/
[salt-qvm-doc]: https://github.com/QubesOS/qubes-mgmt-salt-dom0-qvm/blob/master/README.rst
[salt-virtual-machines-states]: https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/tree/master/qvm
[salt-doc-states]: https://docs.saltstack.com/en/latest/ref/states/all/
[salt-doc-states-file]: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html
@ -440,3 +553,4 @@ The solution is to shut down the updateVM between each install:
[jinja]: http://jinja.pocoo.org/
[jinja-tmp]: http://jinja.pocoo.org/docs/2.9/templates/
[jinja-call-salt-functions]: https://docs.saltstack.com/en/getstarted/config/jinja.html#get-data-using-salt
[QSB #45]: /news/2018/12/03/qsb-45/

View file

@ -20,7 +20,7 @@ Qubes 4.0 is more flexible than earlier versions about placing different VMs on
For example, you can keep templates on one disk and AppVMs on another, without messy symlinks.
These steps assume you have already created a separate [volume group](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_admin#VG_create) and [thin pool](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/thinly_provisioned_volume_creation) (not thin volume) for your HDD.
See also [this example](https://www.linux.com/blog/how-full-encrypt-your-linux-system-lvm-luks) if you would like to create an encrypted LVM pool (but note that you can use a single logical volume if preferred, and that you should use the `-T` option on `lvcreate` to specify that it is thin). You can find the commands for this example applied to Qubes at the bottom of this R4.0 section.
First, collect some information in a dom0 terminal:
@ -50,6 +50,45 @@ For example:
In theory, you can still use file-based disk images (the "file" pool driver), but it lacks some features; for example, you won't be able to make backups without shutting down the qube.
#### Example HDD setup ####
Assuming the secondary hard disk is at `/dev/sdb` (it will be completely erased), you can set it up for encryption by typing the following in a dom0 terminal (use the same passphrase as the main Qubes disk to avoid a second password prompt at boot):

~~~
sudo cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/sdb
sudo blkid /dev/sdb
~~~

Note the device's UUID (in this example "b209..."); we will use it as the LUKS name for auto-mounting at boot. Open crypttab for editing:

~~~
sudo nano /etc/crypttab
~~~

And add this line (changing both "b209..." occurrences to your device's UUID from blkid):

~~~
luks-b20975aa-8318-433d-8508-6c23982c6cde UUID=b20975aa-8318-433d-8508-6c23982c6cde none
~~~

Reboot the computer so the new LUKS device appears at `/dev/mapper/luks-b209...`; we can then create its pool by typing the following in a dom0 terminal (substitute the "b209..." UUIDs with yours).

First, create the physical volume:

~~~
sudo pvcreate /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde
~~~

Then create the LVM volume group; we will use, for example, "qubes" as the `<vg_name>`:

~~~
sudo vgcreate qubes /dev/mapper/luks-b20975aa-8318-433d-8508-6c23982c6cde
~~~

And then use "poolhd0" as the `<thin_pool_name>` (LVM thin pool name):

~~~
sudo lvcreate -T -n poolhd0 -l +100%FREE qubes
~~~

Finally, tell Qubes to add a new pool on the just-created thin pool:

~~~
qvm-pool --add poolhd0_qubes lvm_thin -o volume_group=qubes,thin_pool=poolhd0,revisions_to_keep=2
~~~

By default, VMs will be created on the main Qubes disk (i.e. a small SSD). To create them on this secondary HDD instead, do the following in a dom0 terminal:

~~~
qvm-create -P poolhd0_qubes --label red untrusted-hdd
~~~
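To confirm that the new pool is known to Qubes, you can list the pools (using the R4.0 `qvm-pool` flag; the output should include `poolhd0_qubes`):

~~~
qvm-pool -l
~~~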
### R3.2 ###
In dom0:

View file

@ -50,11 +50,43 @@ Set up a ProxyVM as a VPN gateway using NetworkManager
3. Set up your VPN as described in the NetworkManager documentation linked above.
4. (Optional) Make your VPN start automatically.
Edit `/rw/config/rc.local` and add these lines:
```bash
# Automatically connect to the VPN once Internet is up
while ! ping -c 1 -W 1 1.1.1.1; do
sleep 1
done
PWDFILE="/rw/config/NM-system-connections/secrets/passwd-file.txt"
nmcli connection up file-vpn-conn passwd-file $PWDFILE
```
You can find the actual "file-vpn-conn" in `/rw/config/NM-system-connections/`.
Create directory `/rw/config/NM-system-connections/secrets/` (You can put your `*.crt` and `*.pem` files here too).
Create a new file `/rw/config/NM-system-connections/secrets/passwd-file.txt`:
```
vpn.secrets.password:XXXXXXXXXXXXXX
```
And replace "XXXXXXXXXXXXXX" with the actual password.
The contents of `passwd-file.txt` may differ depending on your VPN settings. See the [documentation for `nmcli up`](https://www.mankier.com/1/nmcli#up).
5. (Optional) Make the network fail-close for the AppVMs if the connection to the VPN breaks.
Edit `/rw/config/qubes-firewall-user-script` and add these lines:
```bash
# Block forwarding of connections through upstream network device
# (in case the vpn tunnel breaks)
iptables -I FORWARD -o eth0 -j DROP
iptables -I FORWARD -i eth0 -j DROP
```
6. Configure your AppVMs to use the new VM as a NetVM.
![Settings-NetVM.png](/attachment/wiki/VPN/Settings-NetVM.png)
7. Optionally, you can install some [custom icons](https://github.com/Zrubi/qubes-artwork-proxy-vpn) for your VPN
Set up a ProxyVM as a VPN gateway using iptables and CLI scripts
@ -212,6 +244,8 @@ It has been tested with Fedora 23 and Debian 8 templates.
# (in case the vpn tunnel breaks):
iptables -I FORWARD -o eth0 -j DROP
iptables -I FORWARD -i eth0 -j DROP
ip6tables -I FORWARD -o eth0 -j DROP
ip6tables -I FORWARD -i eth0 -j DROP
# Block all outgoing traffic
iptables -P OUTPUT DROP

View file

@ -18,7 +18,7 @@ Beware: Dragons might eat your precious data!
Install ZFS in Dom0
===================
Install DKMS style packages for Fedora <sup>(defunct in 0.6.2 due to spl/issues/284)</sup>
----------------------------------------------------------------------------------------------------
Fetch and install the repository for DKMS style packages for your Dom0 Fedora version [http://zfsonlinux.org/fedora.html](http://zfsonlinux.org/fedora.html):
@ -37,7 +37,7 @@ Install DKMS style packages from git-repository
Build and install your DKMS or KMOD packages as described in [http://zfsonlinux.org/generic-rpm.html](http://zfsonlinux.org/generic-rpm.html).
### Prerequisite steps in AppVM <sup>(i.e. disp1)</sup>
Check out the repositories for SPL and ZFS: