80 columns and simplified my last part and good script

Alex Dubois 2016-02-07 19:09:25 +00:00
parent 88afdd9c3c
commit df2d26ebbd


Understanding Qubes networking and firewall
===========================================
Understanding firewalling in Qubes
----------------------------------
Every VM in Qubes is connected to the network via a FirewallVM, which is used to
enforce network-level policies. By default there is one default Firewall VM, but
the user is free to create more, if needed.
For more information, see the following:
How to edit rules
-----------------
In order to edit rules for a given domain, select this domain in the Qubes
Manager and press the "firewall" button:
![r2b1-manager-firewall.png](/attachment/wiki/QubesFirewall/r2b1-manager-firewall.png)
Note that if you specify a rule by DNS name it will be resolved to IP(s)
*at the moment of applying the rules*, and not on the fly for each new
connection. This means it will not work for servers using load balancing. More
on this in the message quoted below.
Alternatively, one can use the `qvm-firewall` command from Dom0 to edit the firewall rules by hand:
Alternatively, one can use the `qvm-firewall` command from Dom0 to edit the
firewall rules by hand:
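For example, one might list a VM's current rules and then allow HTTPS to a
single host. The VM name `work` and the address below are illustrative, and the
exact flag syntax varies between Qubes releases, so check `qvm-firewall --help`
on your system:

~~~
# List the current firewall rules of the VM named "work"
qvm-firewall work -l

# Allow HTTPS traffic to one host (address, protocol, port);
# the flag syntax may differ between Qubes releases
qvm-firewall work -a 203.0.113.10 tcp 443
~~~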
Reconnecting VMs after a NetVM reboot
----------------------------------------
Normally Qubes doesn't let the user stop a NetVM if there are other VMs
running which use it as their own NetVM. But in case the NetVM stops for
whatever reason (e.g. it crashes, or the user forces its shutdown via qvm-kill
in a Dom0 terminal), then there is an easy way to restore the connection to
the NetVM by issuing:
~~~
qvm-prefs <vm> -s netvm <netvm>
~~~
Normally VMs do not connect directly to the actual NetVM which has networking
devices, but rather to the default FirewallVM first, and in most cases it would
be the NetVM that would crash, e.g. in response to S3 sleep/restore or other
issues with WiFi drivers. In that case it is necessary to just issue the above
command once, for the FirewallVM (this assumes default VM-naming used by the
default Qubes installation):
~~~
qvm-prefs firewallvm -s netvm netvm
~~~
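Putting it together, a NetVM crash can typically be recovered from a Dom0
terminal in two steps (assuming the default `netvm` and `firewallvm` names used
above):

~~~
# 1. Start the crashed NetVM again
qvm-start netvm

# 2. Re-attach the FirewallVM to it, restoring connectivity
#    for all VMs behind the FirewallVM
qvm-prefs firewallvm -s netvm netvm
~~~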
Enabling networking between two VMs
--------------------------------------
Normally any networking traffic between VMs is prohibited for security reasons.
However, in special situations, one might want to selectively allow specific VMs
to establish networking connectivity with each other. For example, this can be
useful in some development work, when one wants to test networking code, or to
allow file exchange between HVM domains (which do not have Qubes tools
installed) via SMB/scp/NFS protocols.
In order to allow networking between VM A and B, follow these steps:
* Make sure both A and B are connected to the same firewall VM (by default all
  VMs use the same firewall VM).
* Note the Qubes IP addresses assigned to both VMs. This can be done using the
  `qvm-ls -n` command, or via the Qubes Manager preferences pane for each VM.
* Start both VMs, and also open a terminal in the firewall VM.
* In the firewall VM's terminal, enter the following iptables rule:
~~~
sudo iptables -I FORWARD 2 -s <IP address of A> -d <IP address of B> -j ACCEPT
~~~
* Now you should be able to reach VM B from A -- test it using e.g. ping
  issued from VM A. Note however, that this doesn't allow you to reach A from
  B -- for this you would need another rule, with A and B addresses swapped.
* If everything works as expected, then the above iptables rule(s) should be
  written into the firewall VM's `qubes_firewall_user_script` script, which is
  run on every firewall update. This is necessary because Qubes orders every
  firewall VM to update all the rules whenever a new VM is started in the
  system. If we didn't enter our rules into this "hook" script, our custom
  rules would shortly disappear and inter-VM networking would stop working.
  Here's an example of how to update the script (note that, by default, there
  is no script file present, so we will likely be creating it, unless some
  other custom rules were defined earlier in this firewallvm):
~~~
[user@firewallvm ~]$ sudo bash
[root@firewallvm user]# echo "iptables -I FORWARD 2 -s <IP address of A> -d <IP address of B> -j ACCEPT" >> /rw/config/qubes_firewall_user_script
[root@firewallvm user]# chmod +x /rw/config/qubes_firewall_user_script
~~~
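As noted above, the rule is one-directional; to also reach A from B, a second
rule with the addresses swapped is needed (a sketch, using the same
placeholders as above):

~~~
# Allow B -> A as well (the mirror image of the rule above)
sudo iptables -I FORWARD 2 -s <IP address of B> -d <IP address of A> -j ACCEPT
~~~

If this direction should also survive firewall updates, append it to
`/rw/config/qubes_firewall_user_script` in the same way as the first rule.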
Port forwarding to a VM from the outside world
--------------------------------------------------
In order to allow a service present in a VM to be exposed to the outside world
in the default setup (where the VM has the FirewallVM as its network VM, which
in turn has the NetVM as its network VM), the following needs to be done:
* In the sys-net VM:
* Route packets from the outside world to the sys-firewall VM
* Allow packets through the sys-net VM firewall
* In the sys-firewall VM:
* Route packets from the sys-net VM to the VM
* Allow packets through the sys-firewall VM firewall
* In the VM:
* Allow packets in the VM firewall to reach the service
As an example we can take the use case of a web server listening on port 443
that we want to expose on our physical interface eth0, but only to our local
network 192.168.x.0/24.
**1. Route packets from the outside world to the FirewallVM**
From a Terminal window in sys-net VM, take note of the 'Interface name' and
'IP address' on which you want to expose your service (e.g. eth0, 192.168.x.x):
~~~
ifconfig | grep -i cast
~~~
> Note: The vifx.0 interface is the one connected to your sys-firewall VM so it
is _not_ an outside world interface...
From a Terminal window in sys-firewall VM, take note of the 'IP address' for
interface eth0 (i.e. 10.137.1.x):
~~~
ifconfig | grep -i cast
~~~

Back in the sys-net VM's Terminal, code a natting firewall rule to route
traffic on the outside interface for the service to the sys-firewall VM:

~~~
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -d 192.168.x.x -j DNAT --to-destination 10.137.1.x
~~~
Code the appropriate new filtering firewall rule to allow new connections for
the service:

~~~
iptables -I FORWARD 2 -i eth0 -d 10.137.1.x -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
~~~

> Note: If you want to expose the service on multiple interfaces, repeat the
steps described in part 1 for each interface.

Verify you are cutting through the sys-net VM firewall by looking at its
counters (column 2):

~~~
iptables -t nat -L -v -n
iptables -L -v -n
~~~

Try to connect to the service from an external device:

~~~
telnet 192.168.x.x 443
~~~

Once you have confirmed that the counters increase, store these commands in
`/rw/config/rc.local`:
~~~
sudo nano /rw/config/rc.local
~~~
~~~
#!/bin/sh
####################
# My service routing

# Create a new natting chain for my service
if iptables -t nat -N MY-HTTPS; then
    # The chain did not exist yet: add the natting rule
    # (avoids clutter if this script is executed multiple times)
    iptables -t nat -A MY-HTTPS -j DNAT --to-destination 10.137.1.x
fi

# If no prerouting rule exists for my service
if ! iptables -t nat -n -L PREROUTING | grep --quiet MY-HTTPS; then
    # add one for the traffic (same reason)
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -d 192.168.x.x -j MY-HTTPS
fi

######################
# My service filtering

# Create a new filtering chain for my service
if iptables -N MY-HTTPS; then
    # The chain did not exist yet: add the filtering rule
    iptables -A MY-HTTPS -s 192.168.x.0/24 -j ACCEPT
fi

# If no forward rule exists for my service
if ! iptables -n -L FORWARD | grep --quiet MY-HTTPS; then
    # send new connections for the service through the filtering chain
    iptables -I FORWARD 2 -d 10.137.1.x -p tcp --dport 443 -m conntrack --ctstate NEW -j MY-HTTPS
fi
~~~
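The placeholder-heavy rules above are easy to mistype. As a sketch (the
`build_dnat_rule` helper below is hypothetical, not part of Qubes), the DNAT
rule can be assembled from named parameters and printed for review before
pasting it into the sys-net VM terminal:

~~~
#!/bin/sh
# Hypothetical helper: assemble the PREROUTING DNAT rule from named
# parameters, so each placeholder appears exactly once
build_dnat_rule() {
    iface="$1"; port="$2"; ext_ip="$3"; dest_ip="$4"
    printf 'iptables -t nat -A PREROUTING -i %s -p tcp --dport %s -d %s -j DNAT --to-destination %s\n' \
        "$iface" "$port" "$ext_ip" "$dest_ip"
}

# Print the rule for eth0:443 -> sys-firewall for review
build_dnat_rule eth0 443 192.168.0.10 10.137.1.10
~~~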
Finally, make this file executable, so it runs at each boot:
~~~
sudo chmod +x /rw/config/rc.local
~~~
**2. Allow packets to be routed from the FirewallVM to the VM**
From a Terminal window in the VM, take note of the 'IP address' for
interface eth0 (i.e. 10.137.2.y):
~~~
ifconfig | grep -i cast
~~~
Back in the sys-firewall VM's Terminal, code a natting firewall rule to route
traffic on its outside interface for the service to the VM:
~~~
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -d 10.137.1.x -j DNAT --to-destination 10.137.2.y
~~~
Code the appropriate new filtering firewall rule to allow new connections for
the service:
~~~
iptables -I FORWARD 2 -i eth0 -s 192.168.0.0/24 -d 10.137.2.y -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
~~~
> Note: If you do not wish to limit the IP addresses connecting to the service,
remove the `-s 192.168.0.0/24` match.
Verify you are cutting through the sys-firewall VM in the same way as above.
Once you have confirmed that the counters increase, store these commands in
`/rw/config/qubes_firewall_user_script`:
~~~
#!/bin/sh
####################
# My service routing

# Create a new natting chain for my service
if iptables -t nat -N MY-HTTPS; then
    # The chain did not exist yet: add the natting rule
    # (avoids clutter if this script is executed multiple times)
    iptables -t nat -A MY-HTTPS -j DNAT --to-destination 10.137.2.y
fi

# If no prerouting rule exists for my service
if ! iptables -t nat -n -L PREROUTING | grep --quiet MY-HTTPS; then
    # add one for the traffic (same reason)
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -d 10.137.1.x -j MY-HTTPS
fi

######################
# My service filtering

# Create a new filtering chain for my service
if iptables -N MY-HTTPS; then
    # The chain did not exist yet: add the filtering rule
    iptables -A MY-HTTPS -s 192.168.x.0/24 -j ACCEPT
fi

# If no forward rule exists for my service
if ! iptables -n -L FORWARD | grep --quiet MY-HTTPS; then
    # send new connections for the service through the filtering chain
    iptables -I FORWARD 4 -d 10.137.2.y -p tcp --dport 443 -m conntrack --ctstate NEW -j MY-HTTPS
fi
~~~
Finally, make this file executable (so it runs at every Firewall VM update):
~~~
sudo chmod +x /rw/config/qubes_firewall_user_script
~~~
**3. Allow packets into the VM to reach the service**
Here no routing is required, only filtering. Proceed in the same way as above
but store the filtering rule in the `/rw/config/rc.local` script.
~~~
#!/bin/sh
######################
# My service filtering

# Create a new filtering chain for my service
if iptables -N MY-HTTPS; then
    # The chain did not exist yet: add the filtering rule
    # (avoids clutter if this script is executed multiple times)
    iptables -A MY-HTTPS -s 192.168.x.0/24 -j ACCEPT
fi

# If no input rule exists for my service
if ! iptables -n -L INPUT | grep --quiet MY-HTTPS; then
    # send new connections for the service through the filtering chain
    iptables -I INPUT 5 -d 10.137.2.y -p tcp --dport 443 -m conntrack --ctstate NEW -j MY-HTTPS
fi
~~~
This time testing should allow connectivity to the service as long as the
service is up :-)
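A final end-to-end check can be run from a device on the local network (the
addresses are the example placeholders used throughout):

~~~
# From an external machine on 192.168.x.0/24: the TCP handshake
# should now complete against the exposed service
telnet 192.168.x.x 443

# or, for an HTTPS service:
curl -vk https://192.168.x.x/
~~~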