From 875fc70ebf5a95fb5b1270066be710ea15856ff8 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Rafa=C5=82=20Wojdy=C5=82a?=
Date: Sun, 10 Mar 2024 19:38:22 +0100
Subject: [PATCH] Update windows debugging instructions
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: Rafał Wojdyła
---
 developer/debugging/windows-debugging.md | 291 +++++------------------
 1 file changed, 59 insertions(+), 232 deletions(-)

diff --git a/developer/debugging/windows-debugging.md b/developer/debugging/windows-debugging.md
index 87f22b5a..aec81298 100644
--- a/developer/debugging/windows-debugging.md
+++ b/developer/debugging/windows-debugging.md
@@ -10,253 +10,80 @@ ref: 50
 title: Windows debugging
 ---
 
-Debugging Windows code can be tricky in a virtualized environment. The guide below assumes Xen hypervisor and Windows 7 VMs.
+Debugging Windows code can be tricky in a virtualized environment. The guide below assumes Qubes 4.1 and Windows 7 or later VMs.
 
 User-mode debugging is usually straightforward if it can be done on one machine. Just duplicate your normal debugging environment in the VM.
 
-Things get complicated if you need to perform kernel debugging or troubleshoot problems that only manifest on system boot, user logoff or similar. For that you need two Windows VMs: the *host* and the *target*. The *host* will contain [WinDbg](https://msdn.microsoft.com/en-us/library/windows/hardware/ff551063(v=vs.85).aspx) installation, your source code and private symbols. The *target* will run the code being debugged. Both will be linked by virtual serial ports.
+Things get complicated if you need to perform kernel debugging or troubleshoot problems that only manifest on system boot, user logoff or similar. For that you need two Windows VMs: the *host* and the *target*. The *host* will contain the debugger, your source code and private symbols. The *target* will run the code being debugged.
+We will use kernel debugging over the network, which is supported from Windows 7 onwards. The main caveat is that the Windows kernel supports only specific network adapters for this, and the default one in Qubes won't work.
 
-- First, you need to prepare separate copies of both *target* and *host* VM configuration files with some changes. Copy the files from **/var/lib/qubes/appvms/vmname/vmname.conf** to some convenient location, let's call them **host.conf** and **target.conf**.
-- In both copied files add the following line at the end: `serial = 'pty'`. This will make Xen connect VM's serial ports to dom0's ptys.
-- From now on you need to start both VMs like this: `qvm-start --custom-config=/your/edited/host.conf host`
-- To connect both VM serial ports together you will either need [socat](http://www.dest-unreach.org/socat/) or a custom utility described later.
-- To determine which dom0 pty corresponds to VM's serial port you need to read xenstore, example script below:
+## Important note
 
-```bash
-#!/bin/sh
+- Do not install Xen network PV drivers in the target VM. Network kernel debugging needs a specific type of NIC or it won't work; the network PV drivers interfere with that.
 
-id1=$(xl domid "$1-dm")
-tty1=$(xenstore-read /local/domain/${id1}/device/console/3/tty)
-echo $tty1
-```
+- If you have kernel debugging active when the Xen PV drivers are being installed, make sure to disable it before rebooting (`bcdedit /set debug off`); the OS won't boot otherwise. You can re-enable debugging after the reboot. I'm not sure what the exact cause is. I know that busparams for the debugging NIC change when PV drivers are installed (see later), but even changing that accordingly in the debug settings doesn't help -- so it's best to disable debug for this one reboot.
 
-Pass it a running VM name and it will output the corresponding pty name.
+## Modifying the NIC of the target VM
 
-- To connect both ptys you can use [socat](http://www.dest-unreach.org/socat/) like that:
+You will need to create a custom libvirt config for the target VM. See [the documentation](https://dev.qubes-os.org/projects/core-admin/en/latest/libvirt.html) for an overview of how libvirt templates work in Qubes. The following assumes the target VM is named `target-vm`.
 
-```bash
-#!/bin/sh
+- Edit `/usr/share/qubes/templates/libvirt/xen.xml` to prepare our custom config to override just the NIC part of the global template:
+  - add `{{ '{% block network %}' }}` before `{{ '{% if vm.netvm %}' }}`
+  - add `{{ '{% endblock %}' }}` after the matching `{{ '{% endif %}' }}`
+- Copy `/usr/share/qubes/templates/libvirt/devices/net.xml` to `/etc/qubes/templates/libvirt/xen/by-name/target-vm.xml`.
+- Add `<model type='e1000'/>` to the `<interface>` section.
+- Enclose everything within `{{ '{% block network %}' }}` + `{{ '{% endblock %}' }}`.
+- Add `{{ "{% extends 'libvirt/xen.xml' %}" }}` at the start.
+- The final `target-vm.xml` should look something like this:
 
-id1=$(xl domid "$1-dm")
-id2=$(xl domid "$2-dm")
-tty1=$(xenstore-read /local/domain/${id1}/device/console/3/tty)
-tty2=$(xenstore-read /local/domain/${id2}/device/console/3/tty)
-socat $tty1,raw $tty2,raw
-```
+~~~
+{% raw %}
+{% extends 'libvirt/xen.xml' %}
+{% block network %}
+
+
+
+
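
As a companion sketch to the instructions in the patch above: once the target VM has a supported NIC, enabling network kernel debugging on the *target* and attaching from the *host* generally looks like the commands below. The IP address, port, key, and busparams values are placeholders for illustration, not values from the patch; check Device Manager on the target for the actual PCI location of the debug NIC.

~~~
# On the *target* VM, in an elevated command prompt
# (host IP, port, and key below are placeholder values):
bcdedit /debug on
bcdedit /dbgsettings net hostip:10.137.0.10 port:50000 key:1.2.3.4

# If the debugger NIC is not picked up automatically, point debugging
# at its PCI bus.device.function (placeholder value shown):
bcdedit /set "{dbgsettings}" busparams 0.6.0

# On the *host* VM, attach WinDbg using the same port and key:
windbg -k net:port=50000,key=1.2.3.4
~~~

Note that, per the "Important note" in the patch, the busparams value changes after the Xen PV drivers are installed, so it may need to be re-checked then.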