Author: Gavin <email@example.com>
Usually, the guest OS runs on top of QEMU, which is regarded as a normal Linux process running on top of the host OS. As the following figure shows, multiple guest OS and QEMU processes can coexist simultaneously. Meanwhile, the host OS runs beneath them to control all resources such as CPU, memory, and I/O.
Figure 1. QEMU and Guest/Host OS
At the early stage of investigating KVM and QEMU, I thought KVM was fully capable of emulating the whole system to support the upper guest OS. That turned out to be totally wrong as I understood the KVM and QEMU architecture better. The fact is that KVM helps QEMU achieve better efficiency and performance. Furthermore, QEMU and the host OS together supply the integrated running environment for the upper guest OS.
First of all, the guest OS is unaware that it is running on top of QEMU. That is to say, the guest OS thinks it is running on exclusive physical RAM, which usually starts from address 0x0. On the other hand, QEMU is one of the processes running on top of the host OS, and the address space QEMU sees is virtual address space. Meanwhile, KVM runs in the kernel space of the host OS, and it must be aware of the physical address ranges backing the upper guest OS. The situation looks a little bit complicated. In order to simplify it for better understanding, I'd like to split the complexity into pieces.
Guest OS: It maintains its own page table so that a Guest Virtual Address (GVA) can be translated into a Guest Physical Address (GPA), which is only meaningful to the guest OS itself. In the Power architecture, which is maintained by IBM together with other organizations such as Freescale, the translation from GVA to GPA is done through the Hash Page Table (HPT). The physical address of the HPT is tracked by a dedicated special purpose register (SPR), SDR1.
QEMU: First, it is one of the user processes of the host OS. From that standpoint, QEMU only owns virtual address space, the Host Virtual Address (HVA) space, which the host OS translates to the Host Physical Address (HPA) space through QEMU's page table.
Host OS/KVM: Finally, the guest OS will occupy the CPU for some period to fulfil its requests, including computation, memory access, and I/O. To support that, the host OS has to translate GPA to HPA. That is the most difficult part of address translation across the virtualized system.
To help understand the situation more clearly, the following figure shows how the different address types are translated from top to bottom. In more detail, a GVA is translated to a GPA by the guest OS's Hash Page Table (HPT), and the GPA is finally translated to an HPA by the host OS.
Figure 2. Address Translation