[Bug]: High Memory Usage with SFTPGo Leading to OOM Kill During File Uploads #1735
Comments
0.19 is not an SFTPGo version. This report also lacks basic info such as the storage provider you are using and a standalone reproducer. If the problem occurs on every upload, it should be very easy to provide a standalone reproducer. A problem like this would be noticed by many users; we have several installations with millions of uploads per day and no memory issues. We are not sure what is happening in your case, and unfortunately we do not have the time or motivation to help you, but we think it is quite unlikely that this is a bug that occurs all the time.
I got confused with the image version of SFTPGo; it is v2.5.0, with an OpenShift PVC linked to Azure Files as the storage provider.
I had the same problem. In our case, the issue was related to an S3 backend used in a virtual folder without a specified root directory. If the "Root directory" is not specified for a virtual folder, all data goes to RAM first and is only copied to the S3 backend after a successful upload. It would be great if filesystem creation worked the same way for users and virtual folders. For example, if a user is created with S3 storage, the "Root directory" is automatically set to "/srv/sftpgo/data/%user%". However, for virtual folders with an S3 backend the "Root directory" is left empty, so all data goes to memory first instead of temporary storage.
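If the behavior described above applies, the workaround would be to set the mapped path explicitly when creating the folder. A sketch of a folder definition with an explicit local path, as it might be posted to SFTPGo's REST folder endpoint — the field names follow the SFTPGo folder object (`mapped_path`, `filesystem`), but the bucket, region, path, and `provider` code here are illustrative assumptions; check the OpenAPI schema for your version:

```json
{
  "name": "s3-folder",
  "mapped_path": "/srv/sftpgo/data/s3-folder",
  "filesystem": {
    "provider": 1,
    "s3config": {
      "bucket": "my-bucket",
      "region": "us-east-1"
    }
  }
}
```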
SFTPGo does not work as you describe. Are you writing this because you have examined the code, or because you assume so?
Bug description
We are experiencing an issue with SFTPGo where the memory usage continuously rises during file uploads. Memory usage gradually increases while uploading, and upon completion, there is a noticeable spike in memory usage. After this spike, the memory usage settles at a higher level than before the upload started and does not decrease over time. This cycle repeats with each file upload, eventually leading to the pod being killed by OOM (Out Of Memory).
Steps to reproduce
Expected behavior
Memory usage should increase during the upload process and should return to normal levels after the upload completes. Memory should not continuously increase after each upload, nor should it cause the pod to be OOM killed.
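If the growth turns out to be the Go runtime retaining freed memory rather than a leak, one thing worth trying (an assumption on my part — SFTPGo is a Go binary, so it should honor standard Go runtime environment variables) is a soft memory limit slightly below the pod limit, using the chart's existing `envVars` field:

```yaml
envVars:
  - name: GOMEMLIMIT    # soft Go runtime limit, kept below the 350Mi pod limit
    value: "300MiB"
```

`GOMEMLIMIT` requires Go 1.19+; it makes the GC work harder as the limit is approached instead of letting the heap grow into the pod limit.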
SFTPGo version
0.19.0
Data provider
postgresql
Installation method
Community Docker image
Configuration
sftpgo:
volumes:
- name: sftpgo-pvc
persistentVolumeClaim:
claimName: sftpgo-pvc
volumeMounts:
- name: sftpgo-pvc
mountPath: /mnt
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "10000"
resources:
requests:
cpu: 100m
memory: 350Mi
limits:
cpu: 400m
memory: 350Mi
config:
data_provider:
create_default_admin: true
driver: postgresql
name: sftpgo
host: POSTGRES_SERVER
port: 5432
username: POSTGRES_ADMIN
password: POSTGRES_ADMIN_USER
envVars:
- name: SFTPGO_DEFAULT_ADMIN_USERNAME
valueFrom:
secretKeyRef:
name: sftpgo-admin-creds
key: username
- name: SFTPGO_DEFAULT_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: sftpgo-admin-creds
key: password
autoscaling:
enabled: true
  minReplicas: 1
maxReplicas: 5
targetCPUUtilizationPercentage: ''
targetMemoryUtilizationPercentage: 80
Relevant log output
No response
What are you using SFTPGo for?
Small business (10-person firm using it for file exchange)
Additional info