Every VM in Qubes is connected to the network via a FirewallVM, which is used to enforce network-level policies. By default there is one FirewallVM, but the user is free to create more, if needed.
Note that if you specify a rule by DNS name, it will be resolved to IP(s) *at the moment of applying the rules*, and not on the fly for each new connection. This means such rules will not work reliably for servers using DNS-based load balancing. More on this in the message quoted below.
Normally Qubes doesn't let the user stop a NetVM if there are other VMs running which use it as their own NetVM. But in case the NetVM stops for whatever reason (e.g. it crashes, or the user forces its shutdown via `qvm-kill` in a dom0 terminal), then there is an easy way to restore the connection to the NetVM by issuing:
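A sketch of the dom0 command, assuming the `qvm-prefs -s` syntax of older releases (on Qubes 4.x the property is set with `qvm-prefs <vm> netvm <netvm>` instead):

```
qvm-prefs <vm> -s netvm <netvm>
```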
Normally VMs do not connect directly to the actual NetVM, which has the networking devices, but rather to the default FirewallVM first. In most cases it is the NetVM that crashes, e.g. in response to S3 sleep/restore or other issues with WiFi drivers. In that case it is enough to issue the above command once, for the FirewallVM (this assumes the default VM naming used by the default Qubes installation):
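For example, with the default VM names of older releases (a sketch; on Qubes 4.x with its default names the equivalent would be `qvm-prefs sys-firewall netvm sys-net`):

```
qvm-prefs firewallvm -s netvm netvm
```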
Normally any networking traffic between VMs is prohibited for security reasons. However, in special situations one might want to selectively allow specific VMs to establish networking connectivity between each other. For example, this can be useful in some development work, when one wants to test networking code, or to allow file exchange between HVM domains (which do not have Qubes tools installed) via SMB/scp/NFS protocols.
- Make sure both A and B are connected to the same FirewallVM (by default all VMs use the same FirewallVM).
- Note the Qubes IP addresses assigned to both VMs. This can be done using the `qvm-ls -n` command, or via the Qubes Manager preferences pane for each VM.
- Start both VMs, and also open a terminal in the firewall VM. There, enter an iptables rule allowing traffic from A to B (a sketch is given after this list).
- Now you should be able to reach VM B from A -- test it using e.g. a ping issued from VM A. Note, however, that this doesn't allow you to reach A from B -- for this you would need another rule, with the A and B addresses swapped.
- If everything works as expected, then the above iptables rule(s) should be written into the firewall VM's `qubes_firewall_user_script` script, which is run on every firewall update. This is necessary because Qubes orders every firewall VM to update all the rules whenever a new VM is started in the system. If we didn't enter our rules into this "hook" script, our custom rules would soon disappear and inter-VM networking would stop working. An example of how to update the script is given below (note that, by default, there is no script file present, so we will likely be creating it, unless we have defined some other custom rules earlier in this FirewallVM).
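First, the forwarding rule itself, referenced in the steps above -- a minimal sketch to be entered in the firewall VM's terminal, where 10.137.2.25 and 10.137.2.6 are hypothetical addresses of A and B (substitute the ones reported by `qvm-ls -n`):

```
# Insert at position 2 so the rule lands above the default blocking rules
sudo iptables -I FORWARD 2 -s 10.137.2.25 -d 10.137.2.6 -j ACCEPT
```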
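And an example of persisting that rule in the hook script, using the same hypothetical addresses (the path assumes R3.x naming; on Qubes 4.x the script is `/rw/config/qubes-firewall-user-script`):

```
sudo bash
# Append the rule to the hook script re-applied on every firewall update
echo "iptables -I FORWARD 2 -s 10.137.2.25 -d 10.137.2.6 -j ACCEPT" >> /rw/config/qubes_firewall_user_script
chmod +x /rw/config/qubes_firewall_user_script
# Apply the rule immediately, without waiting for the next firewall update
/rw/config/qubes_firewall_user_script
```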
In order to allow a service present in a VM to be exposed to the outside world in the default setup (where the VM has the FirewallVM as its network VM, which in turn has the NetVM as its network VM), the following needs to be done:
As an example we can take the use case of a web server listening on port 443 that we want to expose on our physical interface eth0, but only to our local network 192.168.0.0/24.
**1. Allow packets for the exposed service to be routed from the outside world to the FirewallVM**
In a dom0 terminal (System Tools / Terminal), take note of the FirewallVM's IP address, to which packets will be routed, using `qvm-ls -n`.
In a NetVM terminal, take note of the name and IP address of the interface on which you want to expose your service (e.g. eth0, 192.168.0.10), using `ifconfig | grep -i cast`.
> Note: The vifx.0 interface is the one connected to your FirewallVM, so it is not an outside-world interface.
Still in the NetVM terminal, add the appropriate NAT firewall rule to intercept traffic arriving on the inbound interface for the service, and DNAT the destination IP address to that of the FirewallVM, so the traffic is routed there:
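A minimal sketch of such rules, assuming eth0 and 192.168.0.10 as noted above, and 10.137.1.10 as a hypothetical FirewallVM address (use the one reported by `qvm-ls -n`):

```
# Rewrite the destination of HTTPS traffic from the local network to the FirewallVM
sudo iptables -t nat -A PREROUTING -i eth0 -s 192.168.0.0/24 -d 192.168.0.10 -p tcp --dport 443 -j DNAT --to-destination 10.137.1.10

# Let the redirected connections pass the NetVM's own FORWARD chain
sudo iptables -I FORWARD 2 -i eth0 -s 192.168.0.0/24 -d 10.137.1.10 -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
```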
**2. Allow packets to be routed from the FirewallVM to the VM**

In the FirewallVM terminal, add the appropriate NAT firewall rule to intercept the forwarded traffic and DNAT the destination IP address to that of the VM, so the traffic is routed there:
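A corresponding sketch for the FirewallVM, assuming 10.137.1.10 is its own address (as above) and 10.137.2.20 is a hypothetical address of the VM running the service:

```
# Rewrite the destination of the forwarded traffic to the target VM
sudo iptables -t nat -A PREROUTING -i eth0 -d 10.137.1.10 -p tcp --dport 443 -j DNAT --to-destination 10.137.2.20

# Allow the forwarded connections through the FirewallVM's FORWARD chain
sudo iptables -I FORWARD 2 -i eth0 -d 10.137.2.20 -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
```

As with the inter-VM rules above, rules entered this way in the FirewallVM are lost on the next firewall update; to make them persistent, add them to the `qubes_firewall_user_script` hook described earlier.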