
[Bug]: s3DefaultPageSize is wrong #1829

Open
2 tasks done
1141330133 opened this issue Dec 6, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@1141330133

⚠️ This issue respects the following points: ⚠️

  • This is a bug, not a question or a configuration issue.
  • This issue is not already reported on Github (I've searched it).

Bug description

https://github.com/drakkan/sftpgo/blob/main/internal/vfs/s3fs.go#L69
s3DefaultPageSize = int32(5000)

Why is it set to 5000 by default here, and why is it not loaded from configuration? I recommend lowering it to 1,000 or below.
In S3, the ListObjectsV2 API returns at most 1,000 objects per request. If max-keys exceeds 1,000, an error is returned: operation error S3: ListObjectsV2, https response error StatusCode: 400, api error InvalidRequest: The value of max-keys is not valid: 5000

Steps to reproduce

1. Call the S3 ListObjectsV2 API; it fails with an error.

Expected behavior

https://github.com/drakkan/sftpgo/blob/main/internal/vfs/s3fs.go#L69
Change to s3DefaultPageSize = int32(500)
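An alternative to hard-coding a smaller default would be clamping whatever page size is configured to the documented S3 per-request maximum. A minimal sketch of that idea; clampPageSize and s3MaxKeysLimit are hypothetical names for illustration, not existing SFTPGo identifiers:

```go
package main

import "fmt"

// s3MaxKeysLimit is the per-request cap documented for the S3
// ListObjectsV2 API: at most 1,000 objects per call.
const s3MaxKeysLimit = int32(1000)

// clampPageSize keeps a configured page size within the S3 limit,
// falling back to the limit itself for non-positive values, so a
// too-large setting degrades gracefully instead of triggering an
// InvalidRequest error on strict implementations.
func clampPageSize(requested int32) int32 {
	if requested <= 0 || requested > s3MaxKeysLimit {
		return s3MaxKeysLimit
	}
	return requested
}

func main() {
	fmt.Println(clampPageSize(5000)) // clamped to 1000
	fmt.Println(clampPageSize(500))  // kept at 500
}
```

With this approach the current 5000 default would still work against strict S3-compatible backends, because the request would never carry max-keys above 1,000.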

SFTPGo version

2.6.4

Data provider

sqlite

Installation method

Community Docker image

Configuration

config

Relevant log output

error retrieving directory entries: operation error S3: ListObjectsV2, https response error StatusCode: 400, api error InvalidRequest: The value of max-keys is not valid: 5000

What are you using SFTPGo for?

Enterprise

Additional info

No response

@1141330133 1141330133 added the bug Something isn't working label Dec 6, 2024
@drakkan
Owner

drakkan commented Dec 6, 2024

There is no such limitation in S3 and other S3-compatible implementations we have tested. You are probably using an S3-compatible implementation that has added this limitation, so this seems specific to your use case.
Please share the S3 implementation you are using. Thank you

@1141330133
Author

Sorry, we use a self-developed object storage that is compatible with the S3 protocol. Could this default value be made configurable, for better compatibility?

@1141330133
Author

1141330133 commented Dec 9, 2024

There is no such limitation in S3 and other S3-compatible implementations we have tested. You are probably using an S3-compatible implementation that has added this limitation, so this seems specific to your use case.
Please share the S3 implementation you are using. Thank you

ListObjectsV2
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
ListObjects
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html

It's written here: "Returns some or all (up to 1,000) of the objects in a bucket with each request."
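Note that the 1,000-object cap limits a single request, not the total listing: a client retrieves the full bucket by following continuation tokens across requests. A minimal sketch of that pagination loop, using a hypothetical listPage function that simulates the server side (not the AWS SDK):

```go
package main

import "fmt"

// listPage simulates one ListObjectsV2 call against a bucket holding
// `total` objects, starting at offset `start`. It honors the documented
// 1,000-key cap per request and reports whether more pages remain.
func listPage(total, start, maxKeys int) (keys, next int, truncated bool) {
	if maxKeys > 1000 {
		maxKeys = 1000 // the server never returns more than 1,000 keys
	}
	remaining := total - start
	if remaining <= maxKeys {
		return remaining, total, false
	}
	return maxKeys, start + maxKeys, true
}

func main() {
	// Retrieve 2,500 objects in pages of at most 1,000 each.
	total, start, fetched, calls := 2500, 0, 0, 0
	for {
		n, next, more := listPage(total, start, 1000)
		fetched += n
		calls++
		if !more {
			break
		}
		start = next // analogous to passing the ContinuationToken
	}
	fmt.Println(fetched, calls) // 2500 3
}
```

So a page size of 1,000 or below does not reduce how many entries a directory listing can ultimately return; it only changes the number of round trips.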
