I have a physical machine with 24 GB RAM hosting a few VMs using libvirt-qemu.
When creating VMs, I assign a lot of memory and no swap, so the total assigned memory can exceed the host's physical memory, and swap is managed globally at the host level. I found this advice on the Internet and it makes sense to me.
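As a sketch of how much I've overcommitted, the assigned memory of all defined guests can be summed and compared with the host's physical RAM (guest names and the `Max memory` parsing are what `virsh dominfo` prints on my Debian systems; adjust if your output differs):

```shell
#!/bin/sh
# Sum the maximum memory assigned to every defined guest (KiB)
# and compare it against the host's physical RAM.
total_kib=0
for dom in $(virsh list --all --name); do
    mem_kib=$(virsh dominfo "$dom" | awk '/Max memory/ {print $3}')
    total_kib=$((total_kib + mem_kib))
done
echo "Total assigned to guests: $((total_kib / 1024 / 1024)) GiB"
# Physical RAM on the host, for comparison:
grep MemTotal /proc/meminfo
```

In my case this would report roughly 42 GiB assigned against 24 GB physical, which is the overcommit I describe above.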
I recently found out we have memory issues, and before adding physical memory to the machine I ran htop
on the host and in the guests, and there's something I don't quite understand.
Guests
Guest 1
- Total: 16G
- Used: 2.5G
- Used + Cache: 13G
Guest 2
- Total: 16G
- Used: 1.8G
- Used + Cache: 3.6G
Guest 3
- Total: 10G
- Used: 0.5G
- Used + Cache: 1G
... (ignoring a few smaller guests)
Host
- Total: 23.5G
- Used: 23.2G
- Used + Cache: 23.5G
- Swap total: 18.6G
- Used: 12.5G
List of processes on host (I only copied guest 1, 2 and 3 in numerical order):
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
2212 libvirt-q 20 0 21.3G 9.6G 3476 S 118. 41.0 1867h qemu-system-x86_64 -enable-kvm -name guest=guest_1 ...
2391 libvirt-q 20 0 21.2G 2455M 1020 S 4.0 10.2 56h49:10 qemu-system-x86_64 -enable-kvm -name guest=guest_2 ...
40694 libvirt-q 20 0 14.7G 7545M 1668 S 1.3 31.4 94h35:35 qemu-system-x86_64 -enable-kvm -name guest=guest_3 ...
...
What I'm trying to understand is how Guest 1 can currently be using only 2.5G while the corresponding qemu process uses 9.6G of physical RAM on the host.
All machines are Debian, if that matters. The host is Debian Stretch, and the guests are Stretch and Jessie.
Answer
What I'm trying to understand is how come Guest 1 currently uses 2.5G
but corresponding qemu processes uses 9.6G physical RAM on the host.
According to the data you posted above, your guest 1 is using 13G of the memory allocated to it (split between process allocations and page cache). From the host's point of view, cached pages count too: once the guest kernel has touched a page, qemu keeps it allocated, so the host cannot tell it apart from "used" memory. Your host shows only 9.6G resident for that process, so I presume some of that 13GB has been pushed out to the host's swap.
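One way to check this from the host (a sketch; PID 2212 is guest 1's qemu process, taken from your htop listing) is to read the per-process counters the kernel exposes in /proc:

```shell
# VmRSS  = resident memory of the qemu process (what htop shows as RES)
# VmSwap = portion of that process's memory currently swapped out
grep -E '^(VmRSS|VmSwap)' /proc/2212/status
```

If VmRSS plus VmSwap comes close to the guest's 13G, that would confirm the missing memory is sitting in host swap rather than having been released.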