
possible memory-leak in glusterfs-client "glusterfs 10.1" #4381

Open
intelliIT opened this issue Jun 24, 2024 · 0 comments
Description of problem:

I already described here that I lose client-side mounted volumes from time to time. Since then I have set up monitoring, which showed that memory consumption on the affected hosts also rose before the mount disconnected. I currently run a watcher that checks gluster-volume access and restarts the host once the mount becomes inaccessible; the machine only provides a Docker runtime, and my container volumes break if Gluster breaks. I extended that script with a basic readout of glusterfs memory consumption, which showed glusterfs consuming up to 80% of (virtual) memory over a span of hours and then losing the volume mount.
I will try to capture a statedump the next time this occurs.

(Screenshot "leak2": glusterfs memory consumption rising over time on the affected host)
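For reference, the watcher is essentially the following sketch. The mount point, threshold, check interval, and the reboot/statedump handling are simplified assumptions for illustration, not the exact script running on the host:

```python
#!/usr/bin/env python3
# Simplified sketch of the watcher described above.
# MOUNT_POINT, MEM_THRESHOLD and CHECK_INTERVAL are assumptions, not the real values.
import os
import signal
import subprocess
import time

MOUNT_POINT = "/mnt/gluster"   # assumed gluster fuse mount point
MEM_THRESHOLD = 80.0           # percent of total memory, matching the observed ~80%
CHECK_INTERVAL = 60            # seconds between checks


def gluster_pid():
    """PID of the fuse client process, or None if it is not running."""
    out = subprocess.run(["pidof", "glusterfs"],
                         capture_output=True, text=True).stdout.split()
    return int(out[0]) if out else None


def virt_mem_percent(pid):
    """VmSize of the process as a percentage of MemTotal."""
    vmsize_kb = 0
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmSize:"):
                vmsize_kb = int(line.split()[1])
    with open("/proc/meminfo") as f:
        total_kb = int(f.readline().split()[1])  # MemTotal is the first line
    return 100.0 * vmsize_kb / total_kb


def mount_is_accessible():
    """Basic access check: listing the mount must succeed within 10 seconds."""
    try:
        subprocess.run(["ls", MOUNT_POINT], timeout=10, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return True
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
        return False


while True:
    pid = gluster_pid()
    if pid:
        mem = virt_mem_percent(pid)
        print(f"glusterfs pid={pid} vmem={mem:.1f}%", flush=True)
        if mem > MEM_THRESHOLD:
            # SIGUSR1 asks a gluster process to write a statedump
            # (by default under /var/run/gluster) before things fall over.
            os.kill(pid, signal.SIGUSR1)

    if not mount_is_accessible():
        subprocess.run(["systemctl", "reboot"])

    time.sleep(CHECK_INTERVAL)
```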

The exact command to reproduce the issue:
__

The full output of the command that failed:
__

Expected results:
__

- Provide logs present on the following locations of client and server nodes -

/var/log/glusterfs/*.log

[2024-06-24 11:17:40.388246 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2618:client4_0_lookup_cbk] 0-fast_volume_native-client-2: remote operation failed. [{path=/elk/elasticsearch/nodes/0/indices/VGjKFygPT-SNPu74CSxxxxx/_state/retention-leases-133657.st}, {gfid=00000000-0000-0000-0000-000000000000}, {errno=61}, {error=No data available}]
[2024-06-24 11:17:40.388291 +0000] W [MSGID: 109007] [dht-common.c:2758:dht_lookup_everywhere_cbk] 0-fast_volume_native-dht: multiple subvolumes (fast_volume_native-replicate-0 and fast_volume_native-replicate-0) have file /elk/elasticsearch/nodes/0/indices/xxxxxxPT-SNPu74CSmQqw/0/_state/retention-leases-133657.st (preferably rename the file in the backend,and do a fresh lookup)
[2024-06-24 11:17:47.184475 +0000] W [MSGID: 109011] [dht-layout.c:147:dht_layout_search] 0-fast_volume_native-dht: Failed to get hashed subvolume [{hash-value=0x1df590ea}]
[2024-06-24 11:17:47.289217 +0000] W [MSGID: 109011] [dht-layout.c:147:dht_layout_search] 0-fast_volume_native-dht: Failed to get hashed subvolume [{hash-value=0x3f138241}]
[2024-06-24 11:17:48.267913 +0000] W [MSGID: 109011] [dht-layout.c:147:dht_layout_search] 0-fast_volume_native-dht: Failed to get hashed subvolume [{hash-value=0x14b836e7}]
[2024-06-24 11:17:50.591765 +0000] W [MSGID: 109011] [dht-layout.c:147:dht_layout_search] 0-fast_volume_native-dht: Failed to get hashed subvolume [{hash-value=0x1367b8cd}]
[2024-06-24 11:18:18.642870 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2618:client4_0_lookup_cbk] 0-fast_volume_native-client-1: remote operation failed. [{path=/intelli-main/mongo/.mongodb/mongosh/65e19c5268a4e11b3074c644_log}, {gfid=00000000-0000-0000-0000-000000000000}, {errno=61}, {error=No data available}]
[2024-06-24 11:18:18.643672 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2618:client4_0_lookup_cbk] 0-fast_volume_native-client-1: remote operation failed. [{path=(null)}, {gfid=00000000-0000-0000-0000-000000000000}, {errno=61}, {error=No data available}]

- Is there any crash? Provide the backtrace and coredump -

Additional info:

- The operating system / glusterfs version:
Ubuntu 22.04.4 LTS; glusterfs-version=10.1
