Changed initialDelaySeconds to 120 and set CPU to 1 vCPU and memory to 4Gi #102
```diff
@@ -162,7 +162,7 @@
   annotations: {}
 
 
-resources: {}
+#resources: {}
 # We usually recommend not to specify default resources and to leave this as a conscious
 # choice for the user. This also increases chances charts run on environments with little
 # resources, such as Minikube. If you do want to specify resources, uncomment the following
@@ -173,6 +173,10 @@
 #  requests:
 #    cpu: 100m
 #    memory: 128Mi
+resources:
+  requests:
+    cpu: 1000m
+    memory: 4Gi
 
 # All settings inside the lambda values section are only applicable to the new v2 lambda provider
 lambda:
```
Comment on lines +176 to +179

The values here in the […] If these settings are really applicable for every EKS cluster, we could share them here directly, or we could put them on the docs pages. But actually, I am not sure why these are necessary in the first place. Could you please add some explanation of why it is necessary to limit the CPU and memory in EKS?

Empty defaults are too low: the container fails to come up, runs out of memory, and is killed. I think not setting the values at all leaves us with a LocalStack that won't run in most cases. Should we have an empty base default like this, or always force an override?

@alexrashed these are not limits, but minimum requirements. I'm not sure I like the idea of having example "supported" override files for different scenarios; I suspect there are only two, but there may be more: […]
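For reference, a minimal sketch of what such an override file could look like, using the values proposed in this PR; the filename eks-values.yaml comes from the discussion below, and the install command is an assumption, not part of the chart:

```yaml
# eks-values.yaml (sketch) -- resource *requests* (minimum requirements, not limits)
# for clusters where the empty chart defaults leave LocalStack unable to start.
resources:
  requests:
    cpu: 1000m
    memory: 4Gi
```

Applied at install time with something like `helm install localstack localstack/localstack -f eks-values.yaml`, the chart defaults can stay empty while EKS users opt in to the higher requests.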
Could you explain a bit why this would be necessary? 120 seconds is quite long. If we have to add initialDelaySeconds for a deployment on EKS, we should add it in a parameterized way with a much shorter default, because for the vast majority of users this setting will slow down their deployment by at least one minute. Once it has been added as a parameterized value, we could add it to the eks-values.yaml (as mentioned in the comment below).
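A sketch of the parameterization being asked for here; the value name livenessProbe.initialDelaySeconds, the 5-second default, and the probe endpoint/port are illustrative assumptions, not the chart's current API:

```yaml
# values.yaml (sketch) -- expose the delay as a chart value with a short default
livenessProbe:
  initialDelaySeconds: 5
```

```yaml
# templates/deployment.yaml (sketch) -- read the value instead of hard-coding 120
livenessProbe:
  httpGet:
    path: /_localstack/health   # assumed health endpoint
    port: 4566                  # assumed edge port
  initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
```

Users on slow-scheduling platforms could then raise the delay in their own override file without affecting everyone else's deployments.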
Needed this in EKS with Fargate, as it takes so long to launch a new pod. I'll move it to an eks-values.yaml override.
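Under the same assumed value names, the Fargate-specific delay from this PR would then move into the override file rather than the chart defaults; a sketch:

```yaml
# eks-values.yaml (sketch) -- Fargate can take much longer to schedule a new pod
livenessProbe:
  initialDelaySeconds: 120
```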
I agree with @alexrashed here, and the way this is constructed always overrides what can be specified in an override file (see helm-charts/charts/localstack/values.yaml, lines 97 to 102 at ce47b15). I do agree, however, that the initial delay shouldn't be 0 seconds, as the container always takes some time to start up; but the total of 30 seconds (3 failure periods × a 10-second period) is reasonably generous. When I try EKS + Fargate as you have been doing, @cabeaulac, I can still get a working pod in 30 seconds, provided the pod resources are higher than the default. It feels to me that a bit of initial delay (e.g. 5 seconds), during which failures to reach the health endpoint do not count towards a failing health check, makes sense.
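To make the timing concrete: with the probe settings described above, a pod gets initialDelaySeconds + failureThreshold × periodSeconds = 5 + 3 × 10 = 35 seconds before a failing liveness probe restarts it. A sketch with the suggested small delay (endpoint and port are assumptions):

```yaml
livenessProbe:
  httpGet:
    path: /_localstack/health   # assumed health endpoint
    port: 4566                  # assumed edge port
  initialDelaySeconds: 5        # probes do not start until this delay has passed
  periodSeconds: 10             # probe every 10 seconds
  failureThreshold: 3           # restart after 3 consecutive failures
```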