Overview of running a virtual machine

Let us first get an intuitive sense of how a guest virtual machine is launched with QEMU.

On the command line we run QEMU's system-mode executable; its parameters declare the number of virtual CPUs and the amount of memory, and point at a disk image that already has a guest installed. QEMU then brings up the virtual machine's main window. An example launch command:

qemu-system-x86_64 --enable-kvm -cpu host \
    -smp cores=4,threads=2,sockets=4 \
    -m 16384 -k en-us -hda /pps/guohongwei/vm_test/ubuntu.img -monitor stdio

The figure below shows the QEMU virtual machine window that pops up on the author's Mac mini:

QEMU's core initialization flow
Before the guest system can run, QEMU, acting as a full-system emulator, must present the guest with emulated CPUs, main memory, and I/O devices, so that the guest behaves as if it were running on real hardware and its code needs no modification.

As shown in the overview, the user specifies the virtual CPU resources the guest needs (the number of CPU cores, the number of sockets, the number of hyper-threads per core, whether NUMA is enabled, and so on) and the virtual memory resources; the exact parameters are documented in ${QEMU}/qemu-options.hx. QEMU's main thread is created and carries out system initialization, and during that initialization a separate POSIX thread is created for each virtual CPU. Whenever a VCPU thread is scheduled onto a physical CPU, that VCPU's complete register set is loaded onto the physical CPU, and a VMLAUNCH or VMRESUME instruction switches it into non-root mode, where it executes until its time slice expires or some other interrupt triggers a VM exit; the VCPU then returns to root mode for exception handling.
As shown in the figure below, when the user runs QEMU's system-mode executable, the main thread starts from the main function in ${QEMU}/vl.c. The following concentrates on the initialization work QEMU performs before the guest system boots:

1. Handling command-line arguments:
Entering the main function in vl.c, we first hit a long for(;;) loop that parses the arguments passed in on the command line and performs the corresponding setup: how many VCPUs to create, whether to enable NUMA, how much virtual memory to allocate, and so on.

2. Selecting the virtualization scheme:

The configure_accelerator() function selects which virtualization solution to use.
/* excerpt from ${QEMU}/vl.c; each entry is { option name, display name, available(), init(), &allowed flag } */
accel_list[] = {
    { "tcg", "tcg", tcg_available, tcg_init, &tcg_allowed },
    { "xen", "Xen", xen_available, xen_init, &xen_allowed },
    { "kvm", "KVM", kvm_available, kvm_init, &kvm_allowed },
    { "qtest", "QTest", qtest_available, qtest_init, &qtest_allowed },
};
The accel_list[] array declares the system-emulation schemes QEMU can use. The "tcg" mode uses no hardware virtualization assistance at all: it relies on binary translation, where a module called TCG translates target-platform instruction code into instructions the host machine can execute. "xen" and "kvm" are two mainstream open-source virtualization solutions. This article focuses on KVM, the hardware-assisted virtualization solution.
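For context, configure_accelerator() essentially walks this table, looking for the accelerator requested on the command line and calling its init() function. The sketch below is a simplified paraphrase of the QEMU 1.4-era logic rather than a verbatim copy; the field names (opt_name, name, available, init, allowed) follow the anonymous struct that accel_list is declared with in vl.c, and ARRAY_SIZE is QEMU's element-count macro.

/* Simplified paraphrase of configure_accelerator() in ${QEMU}/vl.c (QEMU ~1.4).
 * The real function also handles a comma-separated "-machine accel=..." list
 * and falls back to TCG when the requested accelerator fails to initialize. */
static int configure_accelerator(void)
{
    const char *p = "kvm";               /* accelerator name chosen on the command line */
    bool accel_initialised = false;
    int i;

    for (i = 0; i < ARRAY_SIZE(accel_list); i++) {
        if (strcmp(accel_list[i].opt_name, p) != 0) {
            continue;
        }
        if (!accel_list[i].available()) {
            printf("%s not supported for this target\n", accel_list[i].name);
            break;
        }
        *(accel_list[i].allowed) = true;
        if (accel_list[i].init() == 0) {  /* e.g. kvm_init(): open /dev/kvm, create the VM */
            accel_initialised = true;
        } else {
            *(accel_list[i].allowed) = false;
        }
        break;
    }
    return accel_initialised;
}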

3. Initializing the memory layout:
In newer versions of QEMU (1.4), the cpu_exec_init_all() function is only responsible for registering the two top-level memory regions, one for main memory and one for I/O memory, and for registering the memory_listener.
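What that top-level registration looks like, loosely paraphrased from the 1.4-era memory_map_init() in ${QEMU}/exec.c (exact signatures differ between QEMU versions, so treat this as a sketch rather than the literal source):

/* Loose sketch of the 1.4-era memory_map_init(): create the two top-level
 * MemoryRegions and build address spaces on them. Memory listeners (including
 * KVM's) are later registered against these address spaces so that changes in
 * the guest memory map are propagated to the kernel. */
static MemoryRegion *system_memory;
static MemoryRegion *system_io;

static void memory_map_init(void)
{
    system_memory = g_malloc(sizeof(*system_memory));
    memory_region_init(system_memory, "system", INT64_MAX);  /* root of guest RAM and MMIO */
    address_space_init(&address_space_memory, system_memory);

    system_io = g_malloc(sizeof(*system_io));
    memory_region_init(system_io, "io", 65536);              /* x86 port I/O space */
    address_space_init(&address_space_io, system_io);
}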

4. Initializing the virtual guest hardware:
With QEMU's own initialization complete, the core initialization of the guest system begins: based on the command-line arguments, QEMU creates the guest's virtual CPU, memory, and I/O resources. The central step is machine->init(&args); for the x86 target platform the function actually invoked is pc_init1, which is analyzed in detail below.

4.1 VCPU initialization: pc_cpus_init()

void pc_cpus_init(const char *cpu_model)
{
    int i;

    /* init CPUs: fall back to a generic model if none was given on the command line */
    if (cpu_model == NULL) {
#ifdef TARGET_X86_64
        cpu_model = "qemu64";
#else
        cpu_model = "qemu32";
#endif
    }

    /* one virtual CPU per -smp CPU; each cpu_x86_init() eventually spawns a VCPU thread */
    for (i = 0; i < smp_cpus; i++) {
        if (!cpu_x86_init(cpu_model)) {
            fprintf(stderr, "Unable to find x86 CPU definition\n");
            exit(1);
        }
    }
}

The call chain under pc_init1 is shown in the figure below. For each VCPU to be created (the count smp_cpus comes from the command line), cpu_x86_init is executed; after several levels of calls, qemu_kvm_start_vcpu creates a VCPU thread. The new VCPU thread runs the qemu_kvm_cpu_thread_fn function which, again after several levels of calls, crosses into kernel mode through the kvm_vcpu_ioctl system call, where KVM performs the actual VCPU creation, including setting up the VMCS and the other core data structures required for execution in non-root mode.
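To make the threading structure concrete, here is a minimal stand-alone sketch of the same idea: one POSIX thread per VCPU, each thread asking KVM to create its vCPU and then driving it. The file-descriptor plumbing (vm_fd) and helper names are illustrative, not QEMU's actual ones.

/* Minimal sketch (not QEMU's code): one pthread per VCPU; each thread
 * creates its vCPU through KVM and then drives it with KVM_RUN. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

struct vcpu_arg {
    int vm_fd;      /* fd returned by KVM_CREATE_VM (illustrative plumbing) */
    int index;      /* vCPU index, 0 .. smp_cpus-1 */
};

static void *vcpu_thread_fn(void *opaque)
{
    struct vcpu_arg *arg = opaque;

    /* the vCPU itself is created in the kernel, via an ioctl on the VM fd */
    int vcpu_fd = ioctl(arg->vm_fd, KVM_CREATE_VCPU, arg->index);
    if (vcpu_fd < 0) {
        perror("KVM_CREATE_VCPU");
        exit(1);
    }

    for (;;) {
        ioctl(vcpu_fd, KVM_RUN, 0);   /* enter non-root mode until the next VM exit */
        /* ... inspect the exit reason and handle it (see the run-loop sketch later) ... */
    }
    return NULL;
}

void start_vcpus(int vm_fd, int smp_cpus)
{
    for (int i = 0; i < smp_cpus; i++) {
        struct vcpu_arg *arg = malloc(sizeof(*arg));
        pthread_t tid;

        arg->vm_fd = vm_fd;
        arg->index = i;
        pthread_create(&tid, NULL, vcpu_thread_fn, arg);
    }
}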

4.2 pc_memory_init: initializing the main memory space
Using the mmap system call, a contiguous region of the QEMU main thread's virtual address space is reserved for mapping the guest's physical memory. Subregions are then added step by step into QEMU's memory-management structures: a memory_region for memory below 4 GB, one for memory above 4 GB, and one for the BIOS. Finally bochs_bios_init is called to initialize the BIOS.
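The essential host-side mechanism can be shown in isolation: allocate host virtual memory with mmap, then tell KVM which guest-physical range it backs using the KVM_SET_USER_MEMORY_REGION ioctl. This is a simplified sketch; QEMU itself reaches the same ioctl indirectly through its MemoryRegion/RAMBlock layers and memory listeners, and map_guest_ram is an illustrative helper name.

/* Simplified sketch: back guest "physical" RAM with anonymous host memory
 * and register it with KVM. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int map_guest_ram(int vm_fd, uint64_t guest_phys_addr, uint64_t size, uint32_t slot)
{
    /* host virtual memory that will serve as the guest's physical memory */
    void *host_mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (host_mem == MAP_FAILED) {
        return -1;
    }

    struct kvm_userspace_memory_region region = {
        .slot            = slot,
        .guest_phys_addr = guest_phys_addr,           /* e.g. 0 for the below-4G region */
        .memory_size     = size,
        .userspace_addr  = (uint64_t)(uintptr_t)host_mem,
    };
    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}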

4.3 i440fx_init: initializing the bridge chip and bus structure
4.4 pc_cmos_init: initializing the CMOS and clocks

Guest system execution flow

After all the VCPUs have been created, their threads are not scheduled for execution right away. Only when the main function in vl.c has finished all of its initialization work and calls resume_all_vcpus(), which sets each VCPU's stop and stopped flags to false, do the VCPUs start real work the next time their threads are scheduled onto physical CPUs.
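Paraphrased from the 1.4-era ${QEMU}/cpus.c, the resume path looks roughly like this (the details of how the CPU list is walked differ between QEMU versions, so treat this as a sketch):

/* Rough paraphrase of the resume path in ${QEMU}/cpus.c: clear every vCPU's
 * stop/stopped flags and kick its thread so it re-enters the guest. */
void cpu_resume(CPUState *cpu)
{
    cpu->stop = false;
    cpu->stopped = false;
    qemu_cpu_kick(cpu);        /* wake the vCPU thread so it calls KVM_RUN again */
}

void resume_all_vcpus(void)
{
    CPUState *cpu;

    for (cpu = first_cpu; cpu != NULL; cpu = cpu->next_cpu) {
        cpu_resume(cpu);
    }
}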

Because the pc_memory_init stage has already initialized the BIOS and created a mapping from the "bios.bin" file into memory, a VCPU that is already in non-root mode fetches its first instruction at guest physical address 0xFFFF0 (a linear offset into the region where "bios.bin" is mapped) and begins executing guest system code.

From this point on, the VCPU threads are scheduled in turn by the host system, while the QEMU main thread sits in a loop receiving I/O emulation requests coming back from the guest. Whenever a VCPU thread is scheduled onto a physical CPU it switches into non-root mode and executes guest code; when an exception occurs that requires an exit (an EPT page fault, an I/O instruction to handle, and so on), the exit reason is saved into the VMCS and the CPU switches back to root mode, where KVM catches the exit, looks up the exit reason in the VMCS, performs the corresponding handling, and then switches back to non-root mode. This cycle repeats for as long as the guest runs.
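Seen from user space, the inner loop of each vCPU thread is a run-and-handle cycle over the kvm_run structure that the kernel shares with QEMU. A minimal sketch (illustrative, not QEMU's actual dispatch code):

/* Minimal user-space view of the run loop: KVM_RUN blocks while the vCPU
 * executes in non-root mode; when a VM exit needs user-space help, the
 * ioctl returns and kvm_run->exit_reason says why. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

void vcpu_run_loop(int kvm_fd, int vcpu_fd)
{
    /* the kernel exposes per-vCPU state to user space through this mapping */
    int mmap_size = ioctl(kvm_fd, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu_fd, 0);

    for (;;) {
        ioctl(vcpu_fd, KVM_RUN, 0);          /* guest runs until the next VM exit */

        switch (run->exit_reason) {
        case KVM_EXIT_IO:                    /* port I/O: emulate the device */
            /* run->io describes the direction, port, size and data offset */
            break;
        case KVM_EXIT_MMIO:                  /* memory-mapped I/O emulation */
            break;
        case KVM_EXIT_HLT:                   /* the guest executed HLT */
            return;
        default:
            fprintf(stderr, "unhandled exit reason %u\n", run->exit_reason);
            return;
        }
    }
}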

Intel Virtualisation: How VT-x, KVM and QEMU Work Together

VT-x is the name of Intel's CPU virtualisation technology. KVM is the component of the Linux kernel which makes use of VT-x. And QEMU is a user-space application which allows users to create virtual machines. QEMU makes use of KVM to achieve efficient virtualisation. In this article we will talk about how these three technologies work together. Don't expect an in-depth exposition about all aspects here, although in future, I might follow this up with more focused posts about some specific parts.

Something About Virtualisation First

Let's first touch upon some theory before going into the main discussion. Related to virtualisation is the concept of emulation – in simple words, faking the hardware. When you use QEMU or VMWare to create a virtual machine that has an ARM processor, but your host machine has an x86 processor, then QEMU or VMWare emulates or fakes an ARM processor. When we talk about virtualisation we mean hardware-assisted virtualisation, where the VM's processor matches the host computer's processor. Often conflated with virtualisation is the quite distinct concept of containerisation. Containerisation is mostly a software concept: it builds on top of operating system abstractions like process identifiers, file systems, and memory consumption limits. In this post we won't discuss containers any further.

A typical VM setup looks like the diagram below:

At the lowest level is hardware which supports virtualisation. Above it sits the hypervisor, or virtual machine monitor (VMM). In the case of KVM, this is actually the Linux kernel with the KVM modules loaded into it. In other words, KVM is a set of kernel modules that, when loaded into the Linux kernel, turn the kernel into a hypervisor. Above the hypervisor, and in user space, sit the virtualisation applications that end users directly interact with – QEMU, VMWare etc. These applications then create virtual machines which run their own operating systems, with cooperation from the hypervisor.

Finally, there is the "full" vs. "para" virtualisation dichotomy. Full virtualisation is when the OS running inside a VM is exactly the same as the one that would run on real hardware. Paravirtualisation is when the OS inside the VM is aware that it is being virtualised and thus runs in a slightly modified way compared to how it would on real hardware.

VT-x

VT-x is the CPU virtualisation technology for the Intel 64 and IA-32 architectures. For Intel's Itanium, there is VT-i. For I/O virtualisation there is VT-d. AMD also has its own virtualisation technology, called AMD-V. We will only concern ourselves with VT-x.

Under VT-x a CPU operates in one of two modes: root and non-root. These modes are orthogonal to real, protected, long, etc. mode, and also orthogonal to the privilege rings (0-3); they form a new "plane", so to speak. The hypervisor runs in root mode and VMs run in non-root mode. When in non-root mode, CPU-bound code mostly executes in the same way as it would in root mode, which means that a VM's CPU-bound operations run mostly at native speed. However, it doesn't have full freedom.

Privileged instructions form a subset of all available instructions on a CPU. These are instructions that can only be executed if the CPU is in a higher-privileged state, e.g. current privilege level (CPL) 0 (where CPL 3 is the least privileged). A subset of these privileged instructions are what we can call "global state-changing" instructions – those which affect the overall state of the CPU. Examples are instructions which modify the clock or interrupt registers, or write to control registers in a way that would change the operation of root mode. This smaller subset of sensitive instructions is what non-root mode can't execute.

VMX and VMCS

Virtual Machine Extensions (VMX) are instructions that were added to facilitate VT-x. Let’s look at some of them to gain a better understanding of how VT-x works.

VMXON: Before this instruction is executed, there is no concept of root vs non-root modes. The CPU operates as if there was no virtualisation. VMXON must be executed in order to enter virtualisation. Immediately after VMXON, the CPU is in root mode.

VMXOFF: Converse of VMXON, VMXOFF exits virtualisation.

VMLAUNCH: Creates an instance of a VM and enters non-root mode. We will explain what we mean by “instance of VM” in a short while, when covering VMCS. For now think of it as a particular VM created inside QEMU or VMWare.

VMRESUME: Enters non-root mode for an existing VM instance.

When a VM attempts to execute an instruction that is prohibited in non-root mode, CPU immediately switches to root mode in a trap-like way. This is called a VM exit.

Let’s synthesise the above information. CPU starts in a normal mode, executes VMXON to start virtualisation in root mode, executes VMLAUNCH to create and enter non-root mode for a VM instance, VM instance runs its own code as if running natively until it attempts something that is prohibited, that causes a VM exit and a switch to root mode. Recall that the software running in root mode is hypervisor. Hypervisor takes action to deal with the reason for VM exit and then executes VMRESUME to re-enter non-root mode for that VM instance, which lets the VM instance resume its operation. This interaction between root and non-root mode is the essence of hardware virtualisation support.

Of course the above description leaves some gaps. For example, how does hypervisor know why VM exit happened? And what makes one VM instance different from another? This is where VMCS comes in. VMCS stands for Virtual Machine Control Structure. It is basically a 4KiB part of physical memory which contains information needed for the above process to work. This information includes reasons for VM exit as well as information unique to each VM instance so that when CPU is in non-root mode, it is the VMCS which determines which instance of VM it is running.

As you may know, in QEMU or VMWare we can decide how many CPUs a particular VM will have. Each such CPU is called a virtual CPU or vCPU. For each vCPU there is one VMCS. This means that the VMCS stores information at CPU-level granularity, not VM level. To read and write a particular VMCS, the VMREAD and VMWRITE instructions are used. They effectively require root mode, so only the hypervisor can modify a VMCS. A non-root VM can perform VMWRITE, but only against a "shadow" VMCS rather than the actual one – something that doesn't concern us immediately.

There are also instructions that operate on whole VMCS instances rather than on individual fields within a VMCS. These are used when switching between vCPUs, where a vCPU could belong to any VM instance. VMPTRLD is used to load the address of a VMCS and VMPTRST is used to store this address to a specified memory location. There can be many VMCS instances, but only one is marked as current and active at any point. VMPTRLD marks a particular VMCS as active. Then, when VMRESUME is executed, the non-root mode VM uses that active VMCS instance to know which particular VM and vCPU it is executing as.

Here it’s worth noting that all the VMX instructions above require CPL level 0, so they can only be executed from inside the Linux kernel (or other OS kernel).

VMCS basically stores two types of information:

  1. Context info which contains things like CPU register values to save and restore during transitions between root and non-root.
  2. Control info which determines behaviour of the VM inside non-root mode.

More specifically, VMCS is divided into six parts.

  1. Guest-state stores vCPU state on VM exit. On VMRESUME, vCPU state is restored from here.
  2. Host-state stores host CPU state on VMLAUNCH and VMRESUME. On VM exit, host CPU state is restored from here.
  3. VM execution control fields determine the behaviour of VM in non-root mode. For example hypervisor can set a bit in a VM execution control field such that whenever VM attempts to execute RDTSC instruction to read timestamp counter, the VM exits back to hypervisor.
  4. VM exit control fields determine the behaviour of VM exits. For example, when a bit in VM exit control part is set then debug register DR7 is saved whenever there is a VM exit.
  5. VM entry control fields determine the behaviour of VM entries. This is counterpart of VM exit control fields. A symmetric example is that setting a bit inside this field will cause the VM to always load DR7 debug register on VM entry.
  6. VM exit information fields tell hypervisor why the exit happened and provide additional information.
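On the hypervisor side, part 6 is what gets consulted first after every exit. The toy, self-contained sketch below illustrates that dispatch; the VMCS field encoding and the exit-reason numbers are the architectural values from the Intel SDM, while vmcs_read32() here is only a stand-in for the real VMREAD instruction (Linux KVM has a helper of the same name in arch/x86/kvm/vmx.c):

/* Toy illustration of dispatching on the VM-exit information fields (part 6).
 * The field encoding and exit-reason numbers are architectural; the handler
 * bodies are placeholders. */
#include <stdint.h>
#include <stdio.h>

#define VM_EXIT_REASON              0x4402u  /* VMCS field encoding (Intel SDM) */
#define EXIT_REASON_CPUID           10
#define EXIT_REASON_IO_INSTRUCTION  30
#define EXIT_REASON_EPT_VIOLATION   48

/* stand-in for the real VMREAD instruction */
static uint32_t vmcs_read32(uint32_t field) { (void)field; return EXIT_REASON_CPUID; }

static void handle_exit(void)
{
    uint32_t exit_reason = vmcs_read32(VM_EXIT_REASON);

    switch (exit_reason) {
    case EXIT_REASON_CPUID:
        puts("emulate CPUID, advance the guest RIP, VMRESUME");
        break;
    case EXIT_REASON_IO_INSTRUCTION:
        puts("hand the I/O request up to user space (QEMU), then VMRESUME");
        break;
    case EXIT_REASON_EPT_VIOLATION:
        puts("fix up the EPT mapping, then VMRESUME");
        break;
    default:
        printf("unhandled exit reason %u\n", exit_reason);
        break;
    }
}

int main(void) { handle_exit(); return 0; }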

There are other aspects of hardware virtualisation support that we will conveniently gloss over in this post. Virtual-to-physical address translation inside a VM is done using a VT-x feature called Extended Page Tables (EPT). The Translation Lookaside Buffer (TLB) caches virtual-to-physical mappings in order to save page table lookups, and its semantics also change to accommodate virtual machines. The Advanced Programmable Interrupt Controller (APIC) on a real machine is responsible for managing interrupts; in a VM this too is virtualised, and there are virtual interrupts which can be controlled by one of the control fields in the VMCS. I/O is a major part of any machine's operation. Virtualising I/O is not covered by VT-x; it is usually emulated in user space or accelerated by VT-d.

KVM

Kernel-based Virtual Machine (KVM) is a set of Linux kernel modules that, when loaded, turn the Linux kernel into a hypervisor. Linux continues its normal operation as an OS but also provides hypervisor facilities to user space. The KVM modules can be grouped into two types: the core module and machine-specific modules. kvm.ko is the core module and is always needed. Depending on the host machine's CPU, a machine-specific module such as kvm-intel.ko or kvm-amd.ko is needed as well. As you can guess, kvm-intel.ko uses the functionality we described above in the VT-x section. It is KVM which executes VMLAUNCH/VMRESUME, sets up VMCSs, deals with VM exits, and so on. Let's also mention that AMD's virtualisation technology AMD-V has its own instructions, called Secure Virtual Machine (SVM). Under `arch/x86/kvm/` you will find files named `svm.c` and `vmx.c`; these contain the code that deals with the virtualisation facilities of AMD and Intel respectively.

KVM interacts with user space – in our case QEMU – in two ways: through device file `/dev/kvm` and through memory mapped pages. Memory mapped pages are used for bulk transfer of data between QEMU and KVM. More specifically, there are two memory mapped pages per vCPU and they are used for high volume data transfer between QEMU and the VM in kernel.

`/dev/kvm` is the main API exposed by KVM. It supports a set of `ioctl`s which allow QEMU to manage VMs and interact with them. The lowest unit of virtualisation in KVM is a vCPU. Everything builds on top of it. The `/dev/kvm` API is a three-level hierarchy.

  1. System Level: Calls to this API manipulate the global state of the whole KVM subsystem. This, among other things, is used to create VMs.
  2. VM Level: Calls to this API deal with a specific VM. vCPUs are created through calls to this API.
  3. vCPU Level: This is the lowest-granularity API and deals with a specific vCPU. Since QEMU dedicates one thread to each vCPU (see the QEMU section below), calls to this API are done in the same thread that was used to create the vCPU.

After creating a vCPU, QEMU continues interacting with it using these ioctls and the memory-mapped pages.
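The three levels map directly onto file descriptors: a system-level fd obtained by opening /dev/kvm, a VM-level fd returned by KVM_CREATE_VM, and a vCPU-level fd returned by KVM_CREATE_VCPU. A compressed, illustrative sketch (error handling omitted; this is not QEMU's code):

/* Illustrative walk through the three-level /dev/kvm ioctl hierarchy.
 * Error handling is omitted for brevity. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void)
{
    /* 1. System level: global KVM state */
    int kvm_fd = open("/dev/kvm", O_RDWR);
    int api    = ioctl(kvm_fd, KVM_GET_API_VERSION, 0);     /* expected to be 12 */

    /* 2. VM level: create a VM, then configure its memory and devices */
    int vm_fd  = ioctl(kvm_fd, KVM_CREATE_VM, 0);
    /* ... KVM_SET_USER_MEMORY_REGION, KVM_CREATE_IRQCHIP, ... */

    /* 3. vCPU level: create a vCPU and map its shared kvm_run page(s) */
    int vcpu_fd = ioctl(vm_fd, KVM_CREATE_VCPU, 0);
    int sz      = ioctl(kvm_fd, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu_fd, 0);
    (void)api; (void)run;

    /* from here on, a per-vCPU thread would set registers and loop on KVM_RUN */
    return 0;
}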

QEMU

Quick Emulator (QEMU) is the only user-space component we are considering in our VT-x/KVM/QEMU stack. With QEMU one can run a virtual machine with an ARM or MIPS core on an Intel host. How is this possible? Basically QEMU has two modes: emulator and virtualiser. As an emulator, it can fake the hardware, so it can make itself look like a MIPS machine to the software running inside its VM. It does that through binary translation. QEMU comes with the Tiny Code Generator (TCG), which can be thought of as a sort of high-level language VM, like the JVM. It takes, for instance, MIPS code and converts it to an intermediate bytecode, which then gets executed on the host hardware.

The other mode of QEMU – as a virtualiser – is what achieves the type of virtualisation that we are discussing here. As virtualiser it gets help from KVM. It talks to KVM using ioctl’s as described above.

QEMU creates one process for every VM. For each vCPU, QEMU creates a thread. These are regular threads and they get scheduled by the OS like any other thread. As these threads get run time, QEMU creates the impression of multiple CPUs for the software running inside its VM. Given QEMU's roots in emulation, it can emulate I/O, which is something that KVM may not fully support – take the example of a VM with a particular serial port on a host that doesn't have one. Now, when the software inside the VM performs I/O, the VM exits to KVM. KVM looks at the reason and passes control to QEMU along with a pointer to information about the I/O request. QEMU emulates the I/O device for that request – thus fulfilling it for the software inside the VM – and passes control back to KVM. KVM then executes a VMRESUME to let that VM proceed.

In the end, let us summarise the overall picture in a diagram:
