If the firewall is restarted while AppVMs are connected, qubesd tries to
reconnect them before starting the GUI agent. However, the firewall was
waiting for the GUI agent to connect before handling the connections.
This led to a 10s delay on restart for each client VM.
Reported by xaki23.
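The fix is essentially a change of start-up order. Below is a minimal
sketch using Lwt, where `connect_gui` and `serve_clients` are
hypothetical stand-ins for the real GUI-agent handshake and
client-network handling, not the unikernel's actual functions:

```ocaml
(* Sketch only: stand-in functions, not the firewall's real code. *)
let connect_gui () = Lwt_unix.sleep 10.0  (* pretend GUI-agent handshake *)
let serve_clients () = Lwt_io.printl "handling client vif connections"

let start () =
  Lwt.async connect_gui;   (* connect to the GUI agent in the background *)
  serve_clients ()         (* accept qubesd's reconnections immediately *)

let () = Lwt_main.run (start ())
```

With the old ordering, the client handling only ran after the GUI
handshake resolved, which is where the 10s-per-client delay came from.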
We previously assumed that Qubes would always give clients IP addresses
on a particular network. However, Qubes is not required to do this, and
in fact it uses a different network for disposable VMs.
With this change:
- We no longer reject clients with unknown IP addresses
- The `Unknown_client` classification is gone; we have no way to tell
the difference between a client that isn't connected and an external
address.
- We now consider every client to be on a point-to-point link and do not
answer ARP requests on behalf of other clients. Clients should assume
their netmask is 255.255.255.255 (and ignore /qubes-netmask).
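To illustrate the point-to-point handling, here is a sketch using the
ipaddr library; the helper names are made up and do not reflect the
firewall's actual module layout:

```ocaml
(* 255.255.255.255 -- clients should ignore /qubes-netmask. *)
let client_prefix ip = Ipaddr.V4.Prefix.make 32 ip

(* No proxy-ARP: reply only when the query is for our own address,
   never on behalf of other clients. *)
let should_answer_arp ~our_ip ~queried_ip =
  Ipaddr.V4.compare queried_ip our_ip = 0
```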
This is a partial fix for #9. It allows disposable VMs to connect to
the firewall, but for some reason they don't process any frames we send
them (we get their ARP requests, but they don't get our replies).
Taking eth0 down in the disp VM, then bringing it back up (and
re-adding the routes), allows it to work.
- Unpin bootvar and use `register ~argv:no_argv` instead (see the config.ml sketch after this list).
- Use new name for uplink device ("0", not "tap0").
- Don't configure logging - mirage does that for us now.
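A sketch of the corresponding config.ml shape, assuming the Mirage
configuration DSL of that era; the job name and unikernel module are
placeholders, and the real file also declares its packages and devices:

```ocaml
open Mirage

let main = foreign "Unikernel.Main" job

let () =
  (* No bootvar pin: argument handling is left to mirage itself. *)
  register ~argv:no_argv "qubes-firewall" [ main ]
```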
We don't need the GUI anyway. Error was:
Fatal error: exception Failure("End-of-file from GUId in dom0")
Raised at file "pervasives.ml", line 30, characters 22-33
Called from file "src/core/lwt.ml", line 754, characters 44-47
Mirage exiting with status 2
Do_exit called!
Added an explicit NAT target, allowing NAT even within the client
network and making it clear that NAT is used externally.
Changed `Redirect_to_netvm` to `NAT_to`, which allows specifying any
target host.
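For illustration only, the shape of the rule change might look like
this (a hypothetical sketch; the real action type lives in the
firewall's rules module and may differ):

```ocaml
type action =
  | Accept
  | Drop
  | NAT                     (* explicit NAT, usable even within the client net *)
  | NAT_to of Ipaddr.V4.t   (* NAT to any specified target host *)
```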