KVM Memory Management Overview


This was written in February 2010, during the era of qemu-kvm 0.12.


The qemu/kvm process runs mostly like a normal Linux program. It allocates its memory with normal malloc() or mmap() calls. If a guest is going to have 1GB of physical memory, qemu/kvm will effectively do a malloc(1<<30), allocating 1GB of host virtual space. However, just like a normal program doing a malloc(), there is no actual physical memory allocated at the time of the malloc(). It will not be actually allocated until the first time it is touched.
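
The exact allocation call depends on the qemu version and configuration (plain malloc(), posix_memalign(), hugetlbfs-backed mmap(), and so on), but the idea can be shown with a minimal stand-alone sketch; guest RAM here is just an anonymous, demand-paged mapping in the host process:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define GUEST_RAM_SIZE (1UL << 30)   /* 1GB of guest "physical" memory */

    int main(void)
    {
        /* Reserve 1GB of host virtual address space. No host physical
         * pages are allocated yet; they are faulted in on first touch. */
        void *guest_ram = mmap(NULL, GUEST_RAM_SIZE, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (guest_ram == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Guest physical address 0x0 corresponds to the first byte of this
         * mapping; touching it allocates the first real host page. */
        memset(guest_ram, 0, 4096);

        munmap(guest_ram, GUEST_RAM_SIZE);
        return 0;
    }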

Once the guest is running, it sees that malloc()'d memory area as being its physical memory. If the guest's kernel were to access what it sees as physical address 0x0, it would see the first page of that malloc() done by the qemu/kvm process.


It used to be that every time a KVM guest changed its page tables, the host had to be involved. The host would validate that the entries the guest put in its page tables were valid and did not reference any memory the guest was not allowed to access. It did this with two mechanisms.

One was that the actual set of page tables used by the virtualization hardware is kept separate from the page tables that the guest *thinks* are being used. The guest first makes a change in its page tables. Later, the host notices this change, verifies it, and then makes a real page table which is accessed by the hardware. The guest software is not allowed to directly manipulate the page tables accessed by the hardware. This concept is called shadow page tables, and it is a very common technique in virtualization.

The second part was that the VMX/AMD-V extensions allowed the host to trap whenever the guest tried to set the register pointing to the base page table (CR3).
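
Conceptually, the host's job on each trap is to re-derive a hardware-visible table from the guest's table, rejecting anything the guest should not map. The sketch below is a deliberately simplified, single-level model of that idea; it is not KVM code, and helpers such as host_page_allowed() and gpa_to_hpa() are invented for illustration:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define PT_ENTRIES 512

    /* Conceptual model only: page tables are flattened to one level here. */
    typedef struct { uint64_t entries[PT_ENTRIES]; } page_table;

    /* Hypothetical policy check: may the guest map this page at all? */
    static bool host_page_allowed(uint64_t gpa) { return gpa < (1UL << 30); }

    /* Hypothetical translation from guest-physical to host-physical. */
    static uint64_t gpa_to_hpa(uint64_t gpa) { return gpa + 0x40000000UL; }

    /*
     * Called when the host traps a guest CR3 load (or detects a guest
     * page-table write): rebuild the shadow table the hardware will use.
     */
    static void sync_shadow(const page_table *guest_pt, page_table *shadow_pt)
    {
        for (size_t i = 0; i < PT_ENTRIES; i++) {
            uint64_t gpte = guest_pt->entries[i];

            if (!(gpte & 1) || !host_page_allowed(gpte & ~0xfffULL)) {
                shadow_pt->entries[i] = 0;   /* not present or not allowed */
                continue;
            }
            /* Point the shadow entry at the real host page, keep the flags. */
            shadow_pt->entries[i] = gpa_to_hpa(gpte & ~0xfffULL) | (gpte & 0xfff);
        }
    }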

This technique works fine. But, it has some serious performance implications. A single access to guest memory can take up to 25 host memory accesses to complete, which gets very costly. See this paper: http://developer.amd.com/assets/NPT-WP-1%201-final-TM.pdf for more information. The basic problem is that every access to memory must go through both the page tables of the guest and then the page tables of the host. The two-dimensional part comes in because the page tables of the guest must *themselves* go through the page tables of the host.
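
One way to arrive at a figure of that size is a worst-case accounting which assumes four-level paging on both the guest and host sides (an illustration, not taken from the paper): the guest walk touches four guest page-table entries plus the data itself, and each of those five guest-physical references must itself pass through four levels of host translation before the access:

    #include <stdio.h>

    int main(void)
    {
        int guest_levels = 4, host_levels = 4;

        /* 4 guest page-table entries + the data access = 5 guest-physical
         * references; each one costs 4 host page-table lookups + the
         * access itself = 5 host memory accesses. */
        int total = (guest_levels + 1) * (host_levels + 1);

        printf("worst-case host memory accesses: %d\n", total);   /* 25 */
        return 0;
    }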

It can also be very costly for the host to verify and maintain the shadow page tables.


Both AMD and Intel sought solutions to these problems and came up with similar answers, called NPT (AMD) and EPT (Intel). They specify a set of structures recognized by the hardware which can quickly translate guest physical addresses to host physical addresses *without* going through the host page tables. This shortcut removes the costly two-dimensional page table walks.

The problem with this is that the host page tables are what we use to enforce things like process separation. If a page is to be unmapped from the host (when it is swapped out, for instance), then we *must* coordinate that change with these new hardware EPT/NPT structures.


The solution in software is something Linux calls mmu_notifiers. Since the qemu/kvm memory is normal Linux memory (from the host Linux kernel's perspective), the kernel may try to swap it, replace it, or even free it just like normal memory.

But, before the pages are actually given back to the host kernel for other use, kvm/qemu is notified of the host's intentions. KVM can then remove the page from the shadow page tables or the NPT/EPT structures. After it has done this, the host kernel is free to do what it wishes with the page.
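
In kernel terms, KVM registers a struct mmu_notifier against the qemu process's mm_struct and gets called back before the host touches the pages. The sketch below reflects roughly the 2.6.3x-era callback set (it has changed in later kernels), and the handler bodies are invented placeholders rather than KVM's real code:

    #include <linux/mm.h>
    #include <linux/mmu_notifier.h>

    /* Invented placeholder: drop the shadow/NPT/EPT entry for this address. */
    static void my_zap_translation(struct mm_struct *mm, unsigned long address)
    {
        /* ... look up and clear the spte / NPT / EPT entry, flush TLBs ... */
    }

    static void my_invalidate_page(struct mmu_notifier *mn,
                                   struct mm_struct *mm,
                                   unsigned long address)
    {
        /* The host is about to reuse the page backing 'address': unmap it
         * from the guest before the host kernel frees or swaps it. */
        my_zap_translation(mm, address);
    }

    static void my_change_pte(struct mmu_notifier *mn, struct mm_struct *mm,
                              unsigned long address, pte_t pte)
    {
        /* The host pte changed (e.g. COW or a KSM merge): the old guest
         * translation is stale, so drop it and let it be re-faulted. */
        my_zap_translation(mm, address);
    }

    static const struct mmu_notifier_ops my_mmu_ops = {
        .invalidate_page = my_invalidate_page,
        .change_pte      = my_change_pte,
    };

    static struct mmu_notifier my_notifier = { .ops = &my_mmu_ops };

    /* Register against the qemu process's address space (current->mm). */
    static int my_register(void)
    {
        return mmu_notifier_register(&my_notifier, current->mm);
    }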


A day in the life of a KVM guest physical page:


Fault-in path

  1. QEMU calls malloc() and allocates virtual space for the page, but no backing physical page
  2. The guest process touches what it thinks is a physical address, but this traps into the host since the memory is unallocated
  3. The host kernel sees a page fault in the malloc()'d area, calls do_page_fault(), and, if all goes well, allocates some memory to back it.
  4. The host kernel creates a pte_t to connect the malloc()'d virtual address to a host physical address, makes rmap entries, puts it on the LRU, etc...
  5. The mmu_notifier change_pte() hook may be called, which allows KVM to create an NPT/EPT entry (and possibly a shadow pte, or "spte") for the new page.
  6. Host returns from the page fault, and guest execution resumes (a user-space illustration of this demand-faulting behavior follows below)
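
The demand-faulting behavior in steps 1-3 can be observed from plain user space, without a guest at all; each first touch of a page in an anonymous mapping costs one minor fault, i.e. one trip through the host kernel's do_page_fault():

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/resource.h>

    static long minor_faults(void)
    {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return ru.ru_minflt;
    }

    int main(void)
    {
        size_t size = 64UL << 20;            /* 64MB stand-in for guest RAM */
        char *ram = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        long before, after;

        if (ram == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        before = minor_faults();
        for (size_t off = 0; off < size; off += 4096)
            ram[off] = 1;                    /* first write faults the page in */
        after = minor_faults();

        printf("minor faults taken: %ld (roughly %zu pages touched)\n",
               after - before, size / 4096);

        munmap(ram, size);
        return 0;
    }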

Swap-out path

Now, let's say the host is under memory pressure. The page from above has gone through the Linux LRU and has found itself on the inactive list. The kernel decides that it wants the page back:

  1. The host kernel uses rmap structures to find out in which VMA (vm_area_struct) the page is mapped.
  2. The host kernel looks up the mm_struct associated with that VMA, and walks down the Linux page tables to find the host hardware page table entry (pte_t) for the page.
  3. The host kernel swaps out the page and clears out the pte_t (let's assume that this page was only used in a single place). But, before freeing the page:
  4. The host kernel calls the mmu_notifier invalidate_page(). This looks up the page's entry in the NPT/EPT structures and removes it (see the sketch after this list for how that reverse lookup can work).
  5. Now, any subsequent guest access to the page will trap into the host (step 2 in the fault-in path above)
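
Step 4 glosses over how the notifier finds the right entry: the callback only receives a host (qemu) virtual address, which KVM must map back to a guest frame number through its memory-slot table before it can clear the spte / NPT / EPT entry. Below is a minimal user-space model of that reverse lookup, with an invented memslot struct rather than KVM's actual data structures:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    /* Simplified model of a KVM memory slot: a contiguous range of guest
     * physical memory backed by a contiguous range of qemu virtual memory. */
    struct memslot {
        uint64_t base_gfn;        /* first guest frame number in the slot   */
        uint64_t npages;          /* number of pages in the slot            */
        uint64_t userspace_addr;  /* qemu virtual address backing the slot  */
    };

    /* Map a host (qemu) virtual address back to a guest frame number, the
     * lookup an invalidate callback needs before it can find and clear the
     * corresponding spte / NPT / EPT entry. */
    static bool hva_to_gfn(const struct memslot *slots, int nslots,
                           uint64_t hva, uint64_t *gfn)
    {
        for (int i = 0; i < nslots; i++) {
            uint64_t start = slots[i].userspace_addr;
            uint64_t end = start + (slots[i].npages << PAGE_SHIFT);

            if (hva >= start && hva < end) {
                *gfn = slots[i].base_gfn + ((hva - start) >> PAGE_SHIFT);
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        /* One 1GB slot of guest RAM backed at an arbitrary qemu address. */
        struct memslot slots[] = {
            { .base_gfn = 0, .npages = 262144, .userspace_addr = 0x7f0000000000ULL },
        };
        uint64_t gfn;

        if (hva_to_gfn(slots, 1, 0x7f0000005000ULL, &gfn))
            printf("hva maps to gfn 0x%llx\n", (unsigned long long)gfn);
        return 0;
    }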



Memory Overcommit

Given all of the above, it should be apparent that, just like normal processes on Linux, the host memory allocated to the processes representing KVM guests may be overcommitted. One distro's discussion of this appears here.
