Fix conflicts introduced from new changes in master branch

Miguel Jacq 2017-11-01 14:00:00 +11:00
commit 408aef3941
No known key found for this signature in database
GPG key ID: EEA4341C6D97A0B6
76 changed files with 4053 additions and 762 deletions


@ -4,7 +4,7 @@ title: HTTP Filtering Proxy
permalink: /doc/config/http-filtering-proxy/
---
How to run an HTTP filtering proxy in a FirewallVM
=================================================
Introduction


@ -35,7 +35,7 @@ and run `newaliases`.
This is the only thing to do in the TemplateVM, as MTA configuration is AppVM-specific, so we will keep it in `/usr/local` (i.e., `/rw/usrlocal`) in each AppVM.
Now shutdown TemplateVM, start AppVM. Create directory `/usr/local/etc/postfix` and copy `/etc/postfix/master.cf` and `/etc/postfix/postfix-files` there.
### Makefile
@ -152,4 +152,4 @@ mount --bind /usr/local/etc/postfix /etc/postfix
systemctl --no-block start postfix
~~~
Make sure `/rw/config/rc.local` is executable (i.e., `chmod a+x /rw/config/rc.local`). Reboot your AppVM and you are done.


@ -73,11 +73,11 @@ Done.
### Template disk image
If you want to install a lot of software in your TemplateVM, you may need to increase the amount of disk space your TemplateVM can use. See also additional information and caveats about [resizing the root disk image].
1. Make sure that all the VMs based on this template are shut down (including netvms etc).
2. Sanity check: verify that none of the loop devices are pointing at this template root.img. Run this in dom0: `sudo losetup -a`
3. Resize the root.img file. Run this in dom0: `truncate -s <desired size> <path to root.img>` (the root.img path can be obtained from `qvm-prefs`).
4. If any netvm/proxyvm used by this template is itself based on this template, temporarily set the template's netvm to none.
5. Start the template.
6. Execute `sudo resize2fs /dev/mapper/dmroot` in the template.
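The numbered steps above amount to growing the image file and then growing the filesystem inside it. As a quick sanity check of the `truncate` step, you can observe its effect on a scratch file first (the temporary file below is only an illustration, not a real root.img):

```shell
# Illustration on a scratch file: truncate grows the file sparsely, so it
# completes instantly and consumes no real disk space until data is written.
img="$(mktemp)"
truncate -s 10G "$img"      # grow the file to 10 GiB
stat -c '%s' "$img"         # logical size is now 10737418240 bytes
du -k "$img" | cut -f1      # blocks actually used stay near zero (sparse file)
rm -f "$img"
```

The filesystem inside the image does not grow automatically, which is why the `resize2fs` step inside the template is still required afterwards.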
@ -126,3 +126,6 @@ zpool online -e poolname ada0
You will see that there is unallocated free space at the end of your primary disk.
You can use standard linux tools like fdisk and mkfs to make this space available.
[resizing the root disk image]: https://www.qubes-os.org/doc/resize-root-disk-image/


@ -0,0 +1,58 @@
---
layout: doc
title: RPC Policies
permalink: /doc/rpc-policy/
---
RPC Policies
============
This document explains the basics of RPC policies in Qubes.
For more information, see [Qrexec: command execution in VMs][qrexec3].
Here's an example of an RPC policy file in dom0:
```
[user@dom0 ~]$ cat /etc/qubes-rpc/policy/qubes.FileCopy
(...)
$tag:work $tag:work allow
$tag:work $anyvm deny
$anyvm $tag:work deny
$anyvm $anyvm ask
```
It has three columns (from left to right): source, destination, and permission.
Each row is a rule.
For example, the first row says that we're **allowed** (third column) to copy a file (since this is the policy file for `qubes.FileCopy`) **from** (first column) any VM tagged with "work" **to** (second column) any VM tagged with "work".
In other words, all the VMs tagged with "work" are allowed to copy files to each other without any prompts.
(If the third column were "ask" instead of "allow", there would be prompts.
I.e., we would be **asked** to approve the action, instead of it always being **allowed**.)
Now, the whole policy file is parsed from top to bottom.
As soon as a rule is found that matches the action being evaluated, parsing stops.
We can see what this means by looking at the second row.
It says that we're **denied** from attempting to copy a file **from** any VM tagged with "work" **to** any VM whatsoever.
(That's what the `$anyvm` keyword means -- literally any VM in the system).
But, wait a minute, didn't we just say (in the first row) that all the VMs tagged with work are **allowed** to copy files to each other?
That's exactly right.
The first and second rows contradict each other, but that's intentional.
Since we know that parsing goes from top to bottom (and stops at the first match), we intentionally put the first row above the second row so that it would take precedence.
This is how we create a policy that says: "VMs tagged with 'work' are allowed to copy files to each other but not to any *other* VMs (i.e., not to VMs that *aren't* tagged with 'work')."
The third row says that we're **denied** from copying files **from** any VM in the system **to** any VM tagged with "work".
Again, since parsing goes from top to bottom, this doesn't mean that no files can ever be copied from *any* VM to a VM tagged with "work".
Rather, it means that only VMs that match an earlier rule can do so (in this case, only VMs tagged with "work").
The fourth and final row says that we're **asked** (i.e., prompted) to copy files **from** any VM in the system **to** any VM in the system.
(This rule was already in the policy file by default.
We added the first three.)
Note that it wouldn't make sense to add any rules after this one, since every possible pair of VMs will match the `$anyvm $anyvm` pattern.
Therefore, parsing will always stop at this rule, and no rules below it will ever be evaluated.
All together, the three rules we added say that all VMs tagged with "work" are allowed to copy files to each other; however, they're denied from copying files to other VMs (without the "work" tag), and other VMs (without the "work" tag) are denied from copying files to them.
The fourth rule means that the user gets prompted for any situation not already covered.
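The top-to-bottom, first-match evaluation described above can be modeled in a few lines of shell. This is only a toy illustration, not the real qrexec parser; for simplicity, the source and destination are passed directly as `$tag:work`-style tokens instead of being resolved from VM names and tags:

```shell
# Toy model of first-match policy evaluation (not the real qrexec parser).
policy='$tag:work $tag:work allow
$tag:work $anyvm deny
$anyvm $tag:work deny
$anyvm $anyvm ask'

decide() {  # decide <source> <destination> -> prints the verdict
  printf '%s\n' "$policy" | awk -v src="$1" -v dst="$2" '
    function m(x, pat) { return pat == "$anyvm" || x == pat }
    m(src, $1) && m(dst, $2) { print $3; exit }  # parsing stops at first match
  '
}

decide '$tag:work' '$tag:work'  # allow (first rule wins)
decide '$tag:work' 'personal'   # deny  (second rule)
decide 'personal' '$tag:work'   # deny  (third rule)
decide 'personal' 'untrusted'   # ask   (final catch-all)
```

Note how reordering the rules would change the verdicts, which is exactly why the "work-to-work allow" rule must sit above the two deny rules.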
Further details about how this system works can be found in [Qrexec: command execution in VMs][qrexec3].
[qrexec3]: /doc/qrexec3/


@ -3,86 +3,96 @@ layout: doc
title: Management stack
permalink: /doc/salt/
---
# Management Infrastructure
Since the Qubes R3.1 release we have included the Salt (also called SaltStack)
management engine in dom0 as default (with some states already configured).
Salt allows administrators to easily configure their systems.
In this guide we will show how it is set up and how you can modify it for your
own purpose.
In the current form the **API is provisional** and subject to change between
*minor* releases.
## Understanding Salt
This document is not meant to be comprehensive Salt documentation; however,
before writing anything it is required you have at least *some* understanding of
basic Salt-related vocabulary.
For more exhaustive documentation, visit [official site][salt-doc], though we
must warn you that it is not easy to read if you just start working with Salt
and know nothing.
### The Salt Architecture
Salt follows a client-server model, where the server (called the *master*)
manages its clients (called *minions*).
In typical situations, it is intended that the administrator interacts only
with the master and keeps the configurations there.
In Qubes, we don't have a master.
Instead we have one minion which resides in `dom0` and manages domains from
there.
This setup is also supported by Salt.
Salt is a management engine (similar to Ansible, Puppet, and Chef) that
enforces a particular state of a minion system.
A *state* is an end effect *declaratively* expressed by the administrator.
This is the most important concept in the entire engine.
All configurations (i.e., the states) are written in YAML.
A *pillar* is a data back-end declared by the administrator.
When states become repetitive, instead of pure YAML they can be written using a
template engine (preferably Jinja2), which can use data structures specified in
pillars.
A *formula* is a ready to use, packaged solution that combines a state and a
pillar (possibly with some file templates and other auxiliary files).
There are many formulas made by helpful people all over the Internet.
A *grain* is some data that is also available in templates, but its value is not
directly specified by the administrator.
For example, the distribution (e.g., `"Debian"` or `"Gentoo"`) is a value of
the grain `"os"`. It also contains other information about the kernel,
hardware, etc.
A *module* is a Python extension to Salt that is responsible for actually
enforcing the state in a particular area.
It exposes some *imperative* functions for the administrator.
For example, there is a `system` module that has a `system.halt` function that,
when issued, will immediately halt a domain.
There is another function called `state.highstate` which will synchronize the
state of the system with the administrator's configuration/desires.
### Configuration
#### States
The smallest unit of configuration is a state.
A state is written in YAML and looks like this:
stateid:
  cmd.run: # this is the execution module; in this case it will execute a command in the shell
    - name: echo 'hello world' # this is a parameter of the state
The stateid has to be unique throughout all states running for a minion and can
be used to order the execution of the referenced states.
`cmd.run` is an execution module.
It executes a command on behalf of the administrator.
`name: echo 'hello world'` is a parameter for the execution module `cmd.run`.
The module used defines which parameters can be passed to it.
There is a list of [officially available states][salt-doc-states].
There are many very useful states:
* For [managing files][salt-doc-states-file]: Use this to create files or
directories and change them (append lines, replace text, set their content etc.)
* For [installing and uninstalling][salt-doc-states-pkg] packages.
* For [executing shell commands][salt-doc-states-cmd].
With these three states you can define most of the configuration of a VM.
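For instance, a single hypothetical state file could combine all three (the package, file path, and command below are invented for illustration):

```
vim-installed:
  pkg.installed:
    - name: vim

motd-file:
  file.managed:
    - name: /etc/motd
    - contents: managed by Salt

greeting:
  cmd.run:
    - name: echo 'hello world'
```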
You can also [order the execution][salt-doc-states-order] of your states:
D:
cmd.run:
@ -104,20 +114,20 @@ You also can [order the execution][salt-doc-states-order] of your states:
- order: 1
The order of execution will be `A, B, C, D`.
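One complete way to get the order `A, B, C, D` is to combine `order` with `require`, sketched here with the state IDs from the text and invented `echo` commands:

```
A:
  cmd.run:
    - name: echo A
    - order: 1
B:
  cmd.run:
    - name: echo B
    - require:
      - cmd: A
C:
  cmd.run:
    - name: echo C
    - require:
      - cmd: B
D:
  cmd.run:
    - name: echo D
    - require:
      - cmd: C
```

`A` runs first because of `order: 1`, and each `require` forces the next state to wait for its predecessor.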
The official documentation has more details on the
[require][salt-doc-states-req] and [order][salt-doc-states-ord] arguments.
#### State Files
When configuring a system you will write one or more state files (`*.sls`) and
put (or symlink) them into the main Salt directory `/srv/salt/`.
Each state file contains one or more states and should describe some unit of
configuration (e.g., a state file `mail.sls` could set up a VM for e-mail).
#### Top Files
After you have several state files, you need something to assign them to a VM.
This is done by `*.top` files ([official documentation][salt-doc-top]).
Their structure looks like this:
environment:
@ -125,11 +135,11 @@ Their structure looks like this:
- statefile1
- folder2.statefile2
In most cases, the environment will be called `base`.
The `target_matching_clause` will be used to select your minions (VMs).
It can be either the name of a VM or a regular expression.
If you are using a regular expression, you need to give Salt a hint that you
are doing so:
environment:
^app-(work|(?!mail).*)$:
@ -137,99 +147,104 @@ so:
- statefile
For each target you can write a list of state files.
Each line is a path to a state file (without the `.sls` extension) relative to
the main directory.
Each `/` is exchanged with a `.`, so you can't reference files or directories
with a `.` in their name.
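For example, a reference `folder2.statefile2` resolves to `/srv/salt/folder2/statefile2.sls`; the mapping is a plain dot-to-slash substitution:

```shell
# Illustration of how a dotted top-file reference maps to a state file path
ref="folder2.statefile2"
echo "/srv/salt/$(echo "$ref" | tr '.' '/').sls"
# prints: /srv/salt/folder2/statefile2.sls
```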
### Enabling Top Files and Applying the States
Now, because we use custom extensions to manage top files (instead of just
enabling them all), to enable a particular top file you should issue the
command:
$ qubesctl top.enable my-new-vm
To list all enabled top files:
$ qubesctl top.enabled
And to disable one:
$ qubesctl top.disable my-new-vm
To apply the states to dom0 and all VMs:
$ qubesctl --all state.highstate
(More information on the `qubesctl` command can be found further down.)
### Template Files
You will sometimes find yourself writing repetitive states.
To solve this, there is the ability to template files or states.
This is most commonly done with [Jinja][jinja].
Jinja is similar to Python and in many cases behaves in a similar fashion, but
there are sometimes differences when, for example, you set some variable inside
a loop: the variable outside will not get changed.
Instead, to get this behavior, you would use a `do` statement.
So you should take a look at the [Jinja API documentation][jinja-tmp].
Documentation about using Jinja to directly call Salt functions and get data
about your system can be found in the official
[Salt documentation][jinja-call-salt-functions].
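As a minimal sketch, a Jinja `for` loop can stamp out one state per package (the package list is invented; the `{% raw %}` guards are only needed because these docs are rendered by Jekyll):

```
{% raw %}
{% for pkg in ['vim', 'git', 'mc'] %}
{{ pkg }}-installed:
  pkg.installed:
    - name: {{ pkg }}
{% endfor %}
{% endraw %}
```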
## Salt Configuration, Qubes OS Layout
All Salt configuration files are in the `/srv/` directory, as usual.
The main directory is `/srv/salt/` where all state files reside.
States are contained in `*.sls` files.
However, the states that are part of the standard Qubes distribution are mostly
templates and the configuration is done in pillars from formulas.
The formulas are in `/srv/formulas`, including stock formulas for domains in
`/srv/formulas/dom0/virtual-machines-formula/qvm`, which are used by firstboot.
Because we use some code that is not found in older versions of Salt, there is
a tool called `qubesctl` that should be run instead of `salt-call --local`.
It accepts all the same arguments as the vanilla tool.
## Configuring a VM's System from Dom0
Starting with Qubes R3.2, Salt in Qubes can be used to configure VMs from dom0.
Simply set the VM name as the target minion name in the top file.
You can also use the `qubes` pillar module to select VMs with a particular
property (see below).
If you do so, then you need to pass additional arguments to the `qubesctl` tool:
usage: qubesctl [-h] [--show-output] [--force-color] [--skip-dom0]
                [--targets TARGETS | --templates | --app | --all]
                ...

positional arguments:
  command            Salt command to execute (e.g., state.highstate)

optional arguments:
  -h, --help         show this help message and exit
  --show-output      Show output of management commands
  --force-color      Force color output, allow control characters from VM,
                     UNSAFE
  --skip-dom0        Skip dom0 configuration (VM creation etc)
  --targets TARGETS  Comma separated list of VMs to target
  --templates        Target all templates
  --app              Target all AppVMs
  --all              Target all non-disposable VMs (TemplateVMs and AppVMs)
To apply a state to all templates, call `qubesctl --templates state.highstate`.
The actual configuration is applied using `salt-ssh` (running over `qrexec`
instead of `ssh`), which means you don't need to install anything special in a
VM you want to manage.
Additionally, for each target VM, `salt-ssh` is started from a temporary VM.
This way dom0 doesn't directly interact with a potentially malicious target VM,
and because each Salt VM is temporary, a compromise of one cannot spread from
one target VM to another.
## Writing Your Own Configurations
Let's start with a quick example:
my new and shiny VM:
  qvm.present:
    - name: salt-test # can be omitted when same as ID
    - template: fedora-21
@ -239,75 +254,82 @@ Let's start with a quick example:
    - flags:
      - proxy
It uses the Qubes-specific `qvm.present` state, which ensures that the domain is
present (if not, it creates it).
* The `name` flag informs Salt that the domain should be named `salt-test` (not
`my new and shiny VM`).
* The `template` flag informs Salt which template should be used for the domain.
* The `label` flag informs Salt what color the domain should be.
* The `mem` flag informs Salt how much RAM should be allocated to the domain.
* The `vcpus` flag informs Salt how many virtual CPUs should be allocated to the
  domain.
* The `proxy` flag informs Salt that the domain should be a ProxyVM.
As you will notice, the options are the same as (or very similar to) those used
in `qvm-prefs`.
This should be put in `/srv/salt/my-new-vm.sls` or another `.sls` file.
A separate `*.top` file should be also written:
base:
  dom0:
    - my-new-vm
**Note** The third line should contain the name of the previous state file,
without the `.sls` extension.
To enable the particular top file you should issue the command:
$ qubesctl top.enable my-new-vm
To apply the state:
$ qubesctl state.highstate
### Example of Configuring a VM's System from Dom0
Let's make sure that the `mc` package is installed in all templates.
Similar to the previous example, you need to create a state file
(`/srv/salt/mc-everywhere.sls`):
mc:
  pkg.installed: []
Then the appropriate top file (`/srv/salt/mc-everywhere.top`):
base:
  qubes:type:template:
    - match: pillar
    - mc-everywhere
Now you need to enable the top file:
$ qubesctl top.enable mc-everywhere
And apply the configuration:
$ qubesctl --all state.highstate
## All Qubes-specific States
### `qvm.present`
As in the example above, it creates a domain and sets its properties.
### `qvm.prefs`
You can set properties of an existing domain:
my preferences:
  qvm.prefs:
    - name: salt-test2
    - netvm: sys-firewall
***Note*** The `name:` option will not change the name of a domain; it is only
used to match an existing domain to which the configuration is applied.
### `qvm.service`
services in my domain:
  qvm.service:
@ -321,83 +343,79 @@ domains this way.
    - default:
      - service5
This enables, disables, or sets to default, services as in `qvm-service`.
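A fuller sketch of a `qvm.service` state showing all three actions might look like this (the VM and service names are placeholders):

```
services in my domain:
  qvm.service:
    - name: salt-test3
    - enable:
      - service1
      - service2
    - disable:
      - service3
      - service4
    - default:
      - service5
```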
### `qvm.running`
Ensures the specified domain is running:
domain is running:
  qvm.running:
    - name: salt-test4
## The `qubes` Pillar Module
Additional pillar data is available to ease targeting configurations (for
example all templates).
***Note*** This list may be subject to change in future releases.
### `qubes:type`
VM type. Possible values:
- `admin` - Administration domain (`dom0`)
- `template` - Template VM
- `standalone` - Standalone VM
- `app` - Template based AppVM
### `qubes:template`
Template name on which a given VM is based (if any).
### `qubes:netvm`
The VM which provides network access to the given VM.
## Debugging
The output for each VM is logged in `/var/log/qubes/mgmt-VM_NAME.log`.
If the log does not contain useful information:
1. Run `sudo qubesctl --skip-dom0 --target=VM_NAME state.highstate`
2. When your VM is being started (yellow), press Ctrl-Z to suspend `qubesctl`.
3. Open a terminal in `disp-mgmt-VM_NAME`.
4. Look at `/etc/qubes-rpc/qubes.SaltLinuxVM` - this is what is executed in the
   management VM.
5. Get the last two lines:
$ export PATH="/usr/lib/qubes-vm-connector/ssh-wrapper:$PATH"
$ salt-ssh "$target_vm" $salt_command
Adjust `$target_vm` (VM_NAME) and `$salt_command` (state.highstate).
6. Execute them, fix problems, repeat.
## Known Pitfalls
### Using fedora-24-minimal
The fedora-24-minimal package is missing the `sudo` package.
You can install it via:
$ qvm-run -p -u root fedora-24-minimal-template 'dnf install -y sudo'
The `-p` will cause the execution to wait until the package is installed.
Having the `-p` flag is important when using a state with `cmd.run`.
### Disk Quota Exceeded (When Installing Templates)
If you install multiple templates you may encounter this error.
The solution is to shut down the updateVM between each install:
install template and shutdown updateVM:
  cmd.run:
    - name: sudo qubes-dom0-update -y fedora-24; qvm-shutdown {% raw %}{{ salt.cmd.run('qubes-prefs updatevm') }}{% endraw %}
## Further Reading
* [Salt documentation][salt-doc]
* [Salt states][salt-doc-states] ([files][salt-doc-states-file], [commands][salt-doc-states-cmd],
@ -405,7 +423,7 @@ E.g.:
* [Top files][salt-doc-top]
* [Jinja templates][jinja]
* [Qubes specific modules][salt-qvm-doc]
* [Formulas for default Qubes VMs][salt-virtual-machines-doc] ([and actual states][salt-virtual-machines-states])
[salt-doc]: https://docs.saltstack.com/en/latest/
[salt-qvm-doc]: https://github.com/QubesOS/qubes-mgmt-salt-dom0-qvm/blob/master/README.rst