Changed initialDelaySeconds to 120 and set CPU to 1vcpu and memory to 4G #102

Closed · wants to merge 1 commit
1 change: 1 addition & 0 deletions charts/localstack/templates/deployment.yaml
@@ -60,6 +60,7 @@ spec:
   httpGet:
     path: /_localstack/health
     port: {{ .Values.service.edgeService.name }}
+  initialDelaySeconds: 120
Member:
Could you explain a bit why this would be necessary?
120 seconds is quite long. If we have to add initialDelaySeconds for a deployment on EKS, we should add it in a parameterized way with a much shorter default, because for the vast majority of users this setting would slow down their deployment by at least one minute.
Once it has been added as a parameterized value, we could set it in the eks-values.yaml (as mentioned in the comment below).
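A hedged sketch of what such a parameterization could look like; the readinessProbe.initialDelaySeconds key below is illustrative and is not part of the chart's current values:

  # values.yaml (illustrative key, not the chart's existing API)
  readinessProbe:
    initialDelaySeconds: 0   # keep the default low; raise it per environment via an override file

  # charts/localstack/templates/deployment.yaml (sketch)
  readinessProbe:
    httpGet:
      path: /_localstack/health
      port: {{ .Values.service.edgeService.name }}
    initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}

An EKS-specific override file could then raise only this value instead of hard-coding 120 seconds for everyone.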

Contributor Author:

Needed this in EKS with Fargate, as it takes a long time to launch a new pod. I'll move it to an eks-values.yaml override.

Contributor:

I agree with @alexrashed here; also, the way this is constructed always overrides whatever can be specified in an override file. See:

readinessProbe:
  initialDelaySeconds: 0
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3

However, I also agree that the initial delay shouldn't be 0 seconds, since the container always takes some time to start up; but the total of 30 seconds (3 failure periods x a 10-second period) is reasonably generous. When I try EKS + Fargate as you have been doing, @cabeaulac, I can still get a working pod within 30 seconds, provided the pod resources are higher than the default. It feels to me that a small initial delay (e.g. 5 seconds), during which failures to reach the health endpoint do not count towards a failing health check, makes sense.
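A hedged sketch of the kind of defaults that suggestion seems to point at; the numbers are illustrative, not an agreed change to the chart:

  readinessProbe:
    initialDelaySeconds: 5   # small grace period while the container boots; early failures don't count
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3      # up to 3 x 10 s of failed probes before the pod is marked unready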

resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if coalesce (and (.Values.mountDind.enabled) (.Values.mountDind.forceTLS)) .Values.enableStartupScripts .Values.persistence.enabled .Values.volumeMounts }}
6 changes: 5 additions & 1 deletion charts/localstack/values.yaml
@@ -162,7 +162,7 @@
annotations: {}


- resources: {}
+ #resources: {}

Check failure on line 165 in charts/localstack/values.yaml (GitHub Actions / lint-test): 165:2 [comments] missing starting space in comment
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
@@ -173,6 +173,10 @@
# requests:
# cpu: 100m
# memory: 128Mi
+ resources:
+   requests:
+     cpu: 1000m
+     memory: 4Gi
Comment on lines +176 to +179
Member:

The values here in values.yaml are defaults used for every single deployment, not only for those on EKS.
Your change sets them for everyone, whereas I would prefer to have this in the documentation, or in a shared values file specifically for EKS that can then be used when deploying to EKS.
For example:

helm install -f eks-values.yaml localstack/localstack

If these settings are really applicable for every EKS cluster, we could share them here directly, or we could put them on the docs pages.

But actually, I am not sure why these are necessary in the first place. Could you add some explanation of why it is necessary to limit the CPU and memory in EKS?
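A hedged sketch of what such a shared EKS values file could contain, reusing the figures from this PR; the file name eks-values.yaml follows the reviewer's example and does not exist in the chart yet:

  # eks-values.yaml (illustrative)
  resources:
    requests:
      cpu: 1000m
      memory: 4Gi

Applied only when targeting EKS, e.g. helm install -f eks-values.yaml localstack/localstack, this would leave the chart defaults untouched for local clusters.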

Contributor Author (@cabeaulac, Jan 5, 2024):

With empty defaults the container fails to come up: it runs out of memory and is killed. I think not setting the values at all leaves us with a LocalStack that won't run in most cases. Should we keep an empty base default like this, or always force an override?

Contributor:

@alexrashed these are not limits, but minimum requirements (requests). I don't know whether k3s respects resource requests like this, but EKS with Fargate certainly requires them.

I like the idea of having example "supported" override files for different scenarios; I suspect there are only two, though there may be more:

  • local use with k3d (the default with LocalStack), where it doesn't matter since the default nodes have the resources of the host, and
  • a "real" k8s cluster, e.g. EKS, where we have to bump up the values a bit.


# All settings inside the lambda values section are only applicable to the new v2 lambda provider
lambda: