Mount disconnects after reading 1 million files #1136

Closed · dakusan opened this issue Aug 26, 2024 · 20 comments

dakusan commented Aug 26, 2024

Describe the bug
I have a Linux host with an ext4 file system and a Windows 10 guest running in QEMU through virt-manager. I was trying to back up the drive with Backblaze, but the drive kept disconnecting while the backup scanned the file system. After running numerous tests, I found that the file system always disconnects after reading directories with a combined count of over 1 million files. By "disconnect", I mean that the drive shows as empty and sometimes says "cannot access" in Windows.

To Reproduce
As an example, let's say I have the following directories, each containing 300k files:

F1/1
F1/2
F1/3
F1/4
F2
F3

If I run: dir /s /b F1
It drops while scanning through F1/4 (so I reboot the VM and try the next test).

If I run: dir /s /b F2; dir /s /b F1
It drops while scanning through F1/3.

If I run: dir /s /b F2; dir /s /b F3; dir /s /b F1
It drops while scanning through F1/2.
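
For reference, a quick way to count how many entries a scan walks through (illustrative only; `find /c /v ""` simply counts the lines that dir prints, i.e. files plus directories):

dir /s /b F1 | find /c /v ""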

Expected behavior
Drive should not drop/disconnect.

Host:

  • Distro: Linux Mint 22
  • Kernel version: 6.8.0-40-generic 40-Ubuntu SMP PREEMPT_DYNAMIC x86_64 GNU/Linux
  • QEMU version: 8.2.2 (Debian 1:8.2.2+ds-0ubuntu1)
  • QEMU command line: I've removed a lot of the extraneous stuff added by virt-manager and tried to group the commands. The relevant device is tagged "PP4-f". Would it be helpful to include the dumpxml for the virsh machine?
-name guest=win10_backup,debug-threads=on -S -m size=16777216k -overcommit mem-lock=off -no-user-config -nodefaults -boot menu=on,strict=on \
-machine pc-q35-8.2,usb=off,vmport=off,dump-guest-core=off,memory-backend=pc.ram,hpet=off,acpi=on -accel kvm \
-cpu host,migratable=on,hv-time=on,hv-relaxed=on,hv-vapic=on,hv-spinlocks=0x1fff -smp 6,sockets=1,dies=1,cores=3,threads=2 \
-object {"qom-type":"memory-backend-memfd","id":"pc.ram","share":true,"x-use-canonical-path-for-ramblock-id":false,"size":17179869184} \
-chardev socket,id=charmonitor,fd=31,server=on,wait=off -mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-shutdown -global ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1 \
-blockdev {"driver":"file","filename":"/var/lib/libvirt/images/win10_backup.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"} \
-blockdev {"node-name":"libvirt-1-format","read-only":false,"discard":"unmap","driver":"qcow2","file":"libvirt-1-storage","backing":null} \
-device {"driver":"virtio-blk-pci","bus":"pci.5","addr":"0x0","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":2} \
-chardev socket,id=chr-vu-fs3,path=/var/lib/libvirt/qemu/domain-2-win10_backup/fs3-fs.sock \
-device {"driver":"vhost-user-fs-pci","id":"fs3","chardev":"chr-vu-fs3","queue-size":1024,"tag":"PP4-f","bus":"pci.10","addr":"0x0"} \
-global ICH9-LPC.noreboot=off -watchdog-action reset \
-device {"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.6","addr":"0x0"} \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
  • libvirt version: 10.0.0
  • libvirt XML file: ?

VM:

  • Windows version: Windows 10 Home 22H2 build 19045.4780
  • Which driver has a problem: virtiofs
  • Driver version or commit hash that was used to build the driver: ?

Additional context
In libvirtd.conf I added the following 2 lines:
log_filters="1:qemu 1:libvirt 4:object 4:json 4:event 1:util 1:virtiofs"
log_outputs="1:file:/var/log/libvirt/file.log"

With these I still did not see anything relevant in:

  • /var/log/libvirt/file.log
  • /var/log/libvirt/qemu/win10_backup.log
  • /var/log/libvirt/qemu/win10_backup-fs3-virtiofsd.log

I also looked at the WinFsp logs in the Windows Event Viewer and saw nothing relevant there. Not sure where else to try and look.

YanVugenfirer (Collaborator) commented:

@xiagao Please check

xiagao commented Sep 3, 2024

I couldn't reproduce this issue in my environment.
Host:
virtiofsd-1.11.1-1.el9.x86_64
qemu-kvm-9.0.0-3.el9.x86_64
edk2-ovmf-20240524-2.el9.noarch
seabios-bin-1.16.3-2.el9.noarch
kernel-5.14.0-494.el9.x86_64

Guest:
Win10-64bit, build 19041
virtio-win-prewhql-0.1-253
winfsp-2.0.23075.msi

Steps:

  1. prepare a shared dir on top of ext4 on the host
dd if=/dev/zero of=./ext4_image.img bs=1M count=102400
mkfs.ext4 ext4_image.img
mount -o loop ext4_image.img /root/avocado/data/avocado-vt/virtio_fs_test/
  2. start the virtiofsd daemon with the shared dir
/usr/libexec/virtiofsd --socket-path=/var/tmp/avocado-vt-vm1-fs-virtiofsd.sock -o source=/root/avocado/data/avocado-vt/virtio_fs_test/ -o cache=auto
  3. start the Win10 64-bit VM with qemu-kvm
     -m 16384,maxmem=20G \
      -object '{"size": 17179869184, "share": true, "id": "mem-mem1", "qom-type": "memory-backend-memfd", "x-use-canonical-path-for-ramblock-id":false}'  \
      -smp 24,maxcpus=24,cores=12,threads=1,dies=1,sockets=2  \
      -numa node,memdev=mem-mem1,nodeid=0  \
      -chardev socket,id=char_virtiofs_fs,path=/var/tmp/avocado-vt-vm1-fs-virtiofsd.sock \
      -device '{"id": "pcie-root-port-6", "port": 6, "driver": "pcie-root-port", "addr": "0x1.0x6", "bus": "pcie.0", "chassis": 7}' \
      -device '{"id": "vufs_virtiofs_fs", "chardev": "char_virtiofs_fs", "tag": "myfs", "queue-size": 1024, "driver": "vhost-user-fs-pci", "bus": "pcie-root-port-6", "addr": "0x0"}' \
      -device '{"driver": "virtio-balloon-pci", "id": "balloon0", "bus": "pcie-root-port-10", "addr": "0x0"}'

4. create folders on shared dir

for i in {1..3}; do mkdir F$i; done
cd F1 ; for i in {1..4}; do mkdir A$i; done
tree
.
├── F1
│   ├── A1
│   ├── A2
│   ├── A3
│   └── A4
├── F2
└── F3

5. create 300k files in their subdirectories.

for j in F2 F3; do for i in {1..307200}; do dd if=/dev/urandom of=./$j/file-$j-$i.txt bs=1k count=1; done; done
for j in A1 A2 A3 A4; do for i in {1..307200}; do dd if=/dev/urandom of=./F1/$j/file-$j-$i.txt bs=128 count=1; done; done

6. inside the VM, start the virtiofs service and get the shared volume (see the sketch below)
7. run `dir /s /b F2; dir /s /b F1` etc.; everything works, no disconnect, no drop.
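
For step 6, a minimal sketch of starting the service from an elevated prompt inside the guest, assuming the virtio-win guest tools registered it under the service name VirtioFsSvc (an assumption; check services.msc for the exact name on your system); the shared tag then shows up as a new drive:

sc.exe start VirtioFsSvc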


xiagao commented Sep 3, 2024

But after reading the millions of files, virtiofsd quit. There is indeed a problem.

Reading the shared dir worked. (screenshot: virtiofs-dir)

After reading the shared dir, virtiofs stopped working. (screenshot: virtiofs-dir2)

Checking the virtiofsd daemon process showed that it had quit.

dakusan (Author) commented Sep 4, 2024

I'm glad it's not just me. Thanks. Is there a different project I should be reporting this to? I thought it was most likely yours but I wasn't sure.

xiagao commented Sep 4, 2024

> I'm glad it's not just me. Thanks. Is there a different project I should be reporting this to? I thought it was most likely yours but I wasn't sure.

Thanks for reporting.
Do you mean the virtiofsd GitLab repo? I think keeping just this one issue is fine for now.
Besides, this issue is also being tracked internally. I will update you here if there is any news.

dakusan (Author) commented Sep 4, 2024

> I'm glad it's not just me. Thanks. Is there a different project I should be reporting this to? I thought it was most likely yours but I wasn't sure.

> Thanks for reporting. Do you mean the virtiofsd GitLab repo? I think keeping just this one issue is fine for now. Besides, this issue is also being tracked internally. I will update you here if there is any news.

Cool, thanks.

LRomandine commented:

I just started hitting this bug.

Interestingly, I was not encountering it on Ubuntu 22.04.4 as long as I used virtio-win-guest-tools 0.1.225-2, but with any newer version I would hit this bug (or a similar one), so I just stayed on that version. QEMU came bundled with virtiofsd, so I am unsure which version it was using.

Old install that worked:
Host Ubuntu 22.04.4
libvirt 8.0.0
qemu 6.2

After upgrading to Ubuntu 24.04.1 earlier this week, I had to install virtiofsd separately since QEMU no longer bundles it, and I started hitting this bug. I have tried every virtio-win-guest-tools release between 0.1.225-2 and 0.1.262-2 (latest as of posting). I have also tried manually upgrading virtiofsd from 1.10.0 (from the Ubuntu repo) to 1.11.1 (latest as of posting). I still hit the bug. virtiofsd is not crashing; it is the Windows client, since the problem is fixed by restarting the Windows service.

I set up a PowerShell script to check the mount every minute and restart the service if needed (a sketch of the idea is below). I noticed it would crash roughly every two hours as my backup software scanned the share, likely hitting the 1 million file mark since I have ~1.1 million files on that drive.
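
A minimal sketch of such a watchdog, assuming the share is mounted as Z: and the guest service is named VirtioFsSvc (both are assumptions; adjust to your setup), scheduled every minute via Task Scheduler:

# check-virtiofs.ps1: restart the virtiofs service if the share stops answering
if (-not (Test-Path 'Z:\')) {
    Restart-Service -Name 'VirtioFsSvc' -Force
}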

Tech details
Host Ubuntu 24.04.1
libvirt 10.0.0-2
qemu 8.2.2
virtiofsd 1.11.1

Guest
Windows 11 pro 23H2 (22631.4037)
virtio-win-guest-tools 0.1.262-2
libvirt XML file attached with a few entries redacted.
win11.txt

YanVugenfirer (Collaborator) commented:

I am wondering if anyone experiences this bug with Linux guests as well

LRomandine commented:

@YanVugenfirer I just fired up an Ubuntu 22.04 VM to test that. It works perfectly fine. I used a virtiofs config identical to my Windows VM's (copied the XML), on the same host. I can run the find command over and over again with zero issues.

root@admin-test:/Storage# find .|wc -l
1006990
root@admin-test:/Storage# find .|wc -l
1006990

YanVugenfirer (Collaborator) commented:

@LRomandine Can you please try running virtiofsd with --log-level debug?

And do a separate run with --log-level debug and --inode-file-handles=mandatory while running virtiofsd as root.

LRomandine commented Sep 10, 2024

One clarification

KVM + libvirt + QEMU seems to launch virtiofsd as root normally. This `ps -ef | grep virtiofsd` output is from running the VM whose XML I attached above. I am conducting these tests with virtiofsd version 1.11.1:

root        6726       1  0 12:46 ?        00:00:00 /usr/lib/qemu/virtiofsd --fd=31 -o source=/Storage,cache=none
root        6742    6726  0 12:46 ?        00:00:00 /usr/lib/qemu/virtiofsd --fd=31 -o source=/Storage,cache=none

First Test

I ran the following command as the same user my VMs run as (libvirt-qemu); the log file is attached.
Note: It took me about two hours to figure out that AppArmor was blocking the VM from reading the socket, so for anyone else attempting this and getting permission-denied errors, adjust AppArmor.

/usr/libexec/virtiofsd --log-level debug --shared-dir /Storage/virtiofs-test --socket-path /run/virtiofsd/virtiofsd.sock --sandbox none > /tmp/virtiofsd.log 2>&1

I adjusted my QEMU XML to the following:

<filesystem type="mount">
  <driver type="virtiofs" queue="1024"/>
  <source socket="/run/virtiofsd/virtiofsd.sock"/>
  <target dir="Storage"/>
  <alias name="fs0"/>
  <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
</filesystem>

This test crashed as expected (log file attached).

Second Test

I ran the following command as root; the log file is attached:

/usr/libexec/virtiofsd --log-level debug --shared-dir /Storage/virtiofs-test --socket-path /run/virtiofsd/virtiofsd.sock --socket-group kvm --inode-file-handles=mandatory > /tmp/virtiofsd.log 2>&1

This test did not crash after reading the 1.2 million files, so I had it read another 1.2 million. Zero issues (log file attached).

Third Test

Since the QEMU XML does not provide a way to pass arbitrary flags to virtiofsd (only specific supported ones), I created a small shell script, /usr/local/bin/virtiofsd.custom:

#!/bin/bash
/usr/libexec/virtiofsd --inode-file-handles=mandatory "${@}"
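
The wrapper also needs to be executable (as noted in a later comment):

chmod +x /usr/local/bin/virtiofsd.custom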

Then I set my QEMU XML to use the script (only the binary path changed; same VM as the XML I attached above):

<filesystem type="mount" accessmode="passthrough">
  <driver type="virtiofs" queue="1024"/>
  <binary path="/usr/local/bin/virtiofsd.custom">
    <cache mode="none"/>
  </binary>
  <source dir="/Storage"/>
  <target dir="Storage"/>
  <alias name="fs0"/>
  <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
</filesystem>

This has the intended effect of starting virtiofsd with the --inode-file-handles=mandatory parameter directly from QEMU:

root      539192  539191  0 15:40 ?        00:00:00 /usr/libexec/virtiofsd --inode-file-handles=mandatory --fd=37 -o source=/Storage,cache=none
root      539194  539192  0 15:40 ?        00:00:00 /usr/libexec/virtiofsd --inode-file-handles=mandatory --fd=37 -o source=/Storage,cache=none

This test also did not crash after reading 1.2 million files, so I ran it again and it stayed up, just like test 2.

I have switched my VM back from other software (Dokany) to virtiofs. I'm going to let it run and see if virtiofs crashes at all with --inode-file-handles=mandatory active. My underlying filesystem is ZFS, so I think file handles are preferred for performance anyway.

Update: 7 hours later and virtiofsd is stable, no crashes yet (it was crashing every 1.5-2.5 hours).

YanVugenfirer (Collaborator) commented:

https://issues.redhat.com/browse/RHEL-56957 - Additional discussions

germag commented Sep 11, 2024

From what can be deduced from the log, you are hitting the Linux open-files limit. If running as root is not a problem, the best solution is to use --inode-file-handles=mandatory (you could instead use --rlimit-nofile=N with N > 1000000, since by default virtiofsd only raises the limit to 1M, but using file handles is better).

If running as root is not possible:
1. run it with --cache=none (plus --allow-mmap if the guest wants to mmap() a file in the shared dir); this could impact performance
2. or increase the open-files hard limit on the host and run virtiofsd with --rlimit-nofile=<the new limit> (a rough sketch follows below)
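
A rough sketch of option 2, with illustrative values only (the user name and the 2000000 figure are examples, not recommendations):

# raise the hard limit for the user that runs virtiofsd, e.g. in /etc/security/limits.conf:
#   libvirt-qemu  hard  nofile  2000000
# then ask virtiofsd for a matching soft limit:
/usr/libexec/virtiofsd --shared-dir /Storage --socket-path /run/virtiofsd/virtiofsd.sock --rlimit-nofile=2000000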

I'm not familiar with Windows, but if you can configure the file metadata cache to be smaller (I have no idea if that is possible), the guest will release entries before reaching the limit.

LRomandine commented:

Perhaps as a "fix" the documentation for the windows guest drivers can be updated with the 1 million files caveat and include instructions for how to add --inode-file-handles=mandatory. Since this is a Windows specific bug because of how long it keeps files open.

I think it would be best to get the parameter included in the XML definition and then update the libvirt and guest drivers documentation.

dakusan (Author) commented Sep 11, 2024

Thanks so much for debugging this!

I wanted to confirm what you said. "--inode-file-handles=mandatory" cannot be used if the virtiofsd process is running as root, correct?

When I tried your suggestion using virtiofsd.custom (and I did chmod +x on it), my *-fs*-virtiofsd.log file shows the error `libvirt: error : cannot execute binary /usr/local/bin/virtiofsd.custom: Permission denied`. I did confirm that I can run /usr/local/bin/virtiofsd.custom in bash as root.

LRomandine commented Sep 11, 2024

@dakusan
I think you have it backwards: --inode-file-handles=mandatory can only be run as root.

I am betting your permission-denied error is due to AppArmor; I had a heck of a time figuring that out when I was running test 2 above.

Edit:
I turned AppArmor back on for my system and hit the same permission-denied issue. I tried figuring out AppArmor profiles but gave up and instead did:

mv /usr/libexec/virtiofsd /usr/libexec/virtiofsd.1.11.1
mv /usr/local/bin/virtiofsd.custom /usr/libexec/virtiofsd

Then I modified the script to call the renamed executable. Now AppArmor doesn't complain and it works as intended.

If anyone knows AppArmor profiles and can suggest an edit: I tried the following in /etc/apparmor.d/abstractions/libvirt-qemu, and many permutations of it, to no avail:

  /usr/local/bin/virtiofsd.custom rmux,
  ptrace (readby, tracedby) peer=/usr/local/bin/virtiofsd.custom,
  signal (receive) peer=/usr/local/bin/virtiofsd.custom,

dakusan (Author) commented Sep 12, 2024

To /etc/apparmor.d/usr.sbin.libvirtd I added `/usr/local/bin/virtiofsd.custom PUx,` after the existing `/usr/{lib,lib64,lib/qemu,libexec}/virtiofsd PUx,` line, then reloaded AppArmor, and that worked (see the sketch below). I would have liked to add it to /etc/apparmor.d/libvirtd instead, but that wasn't working.
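
For reference, the edit looks roughly like this (the profile lines are from the comments above; apparmor_parser -r is one common way to reload a single profile):

# /etc/apparmor.d/usr.sbin.libvirtd, next to the existing virtiofsd rule:
#   /usr/{lib,lib64,lib/qemu,libexec}/virtiofsd PUx,
#   /usr/local/bin/virtiofsd.custom PUx,
# then reload the profile:
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd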

LRomandine commented:

I can confirm that adding the line to /etc/apparmor.d/usr.sbin.libvirtd works for me, and I see the virtiofsd process has the correct options. I'm glad I wasn't the only one struggling with modifying the other files like:

/etc/apparmor.d/libvirt/*
/etc/apparmor.d/abstractions/libvirt-qemu

dakusan (Author) commented Sep 12, 2024

It's working perfectly now with 1M+ files, thanks so much! Should I perhaps suggest to the virsh/virt-manager people that they add this as an option?

germag commented Sep 12, 2024

> It's working perfectly now with 1M+ files, thanks so much! Should I perhaps suggest to the virsh/virt-manager people that they add this as an option?

They are aware that there are many virtiofsd options that libvirt does not support. IIRC they plan to add a general XML field (like "extra options" or something like that) so any option can be passed through; IMO that is the best approach.
