[ECS] [Container OOM]: Containers OOM with Amazon Linux 2023 ECS AMI #240
Comments
Hey @rixwan-sharif + all others with 👍 Thanks for the info and for sharing your experience! We asked the AWS ECS team via a support case whether there is a known error or behavior like the one you described here, and they said no.
Did you also forward the problem as a support case? Thanks, Robert
Hello, I have transferred this issue to the ECS/EC2 AMI repo from containers-roadmap, since this sounds more like it could be a bug or change in behavior in the AMI rather than a feature request. @rixwan-sharif could you let us know which AL2023 AMI version you used? Was it the latest available? Could you also provide the task and container limits that you have in your task definition(s)? Two differences that come to mind that may be relevant are that the latest AL2023 AMI is using Docker 25.0 and cgroups v2, whereas the latest AL2 AMI is currently on Docker 20.10 and cgroups v1.
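A quick way to confirm which cgroup version a given container instance (or a container on it) is actually running is to check for the cgroup v2 unified-hierarchy marker file. This is a minimal sketch using standard Linux paths, not anything specific to the ECS AMI:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch: detect whether this host/container is on cgroup v2 (unified
// hierarchy) or cgroup v1. /sys/fs/cgroup/cgroup.controllers only exists on v2.
public class CgroupVersionCheck {
    public static void main(String[] args) {
        boolean v2 = Files.exists(Path.of("/sys/fs/cgroup/cgroup.controllers"));
        System.out.println(v2 ? "cgroup v2 (unified hierarchy)" : "cgroup v1 (legacy hierarchy)");
    }
}
```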
If you were not using the latest AL2023 AMI, one thing to note is that the Amazon Linux team released a systemd fix in late September 2023 for a bug in the cgroup OOM-kill behavior. (Search "systemd" in the release notes here: https://docs.aws.amazon.com/linux/al2023/release-notes/relnotes-2023.2.20230920.html)
If anyone has data to provide to the engineering team that can't be shared here, please feel free to email it to
Hi, this is the AMI version we are using:

- AMI version: al2023-ami-ecs-hvm-2023.0.20240409-kernel-6.1-x86_64

Task/container details:

- Base Docker image: adoptopenjdk/openjdk14:x86_64-debian-jdk-14.0.2_12
- Resources: CPU: 0.125
- Docker stats on Amazon Linux 2 AMI
- Docker stats on Amazon Linux 2023 AMI (increased the memory hard limit to 3 GB as the container was OOMing with 1 GB of memory)

And yes, we already opened a support case too (Case ID 171387184800518). This is what we got from support: "Upon further troubleshooting, we found that there seems to be an issue with the AL2023 AMI which our internal team is already working on," and below are the wordings shared by them:
Hi @sparrc, after switching to AL2023 from AL2 we faced a similar issue as well. We haven't gotten OOM kills yet, but memory consumption has nearly doubled, and it regularly increases as the application runs, which looks like a memory leak.
Also experiencing the same behavior with the latest AMI: going from AL2 to AL2023 results in a significant memory consumption increase that seems to just keep growing over time (this seems generalized, regardless of language/framework). This is especially troublesome since AWS is recommending AL2023 over AL2. If the ECS internal team is aware of this, is there somewhere we can track it? This thread doesn't really indicate anything is being done to investigate or fix it. It seems like this has been an issue for months and I would like to stay current on any progress or updates.
We have memory limits set at the TaskDefinition level and the JVM is allocating heap based on the total physical RAM on the host system now, which is blowing up our RAM usage. We were able to set memory limits on the individual ContainerDefinitions within the TaskDefinition and that seems to have fixed it for us.
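For illustration, a minimal sketch of setting a container-level hard and soft memory limit when registering a task definition, using the AWS SDK for Java v2. The family name, image, and values here are placeholders, not the reporter's actual configuration. The container-level hard limit is what becomes the container's cgroup memory limit, which is what a container-aware JVM can size its heap from:

```java
import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.ContainerDefinition;
import software.amazon.awssdk.services.ecs.model.RegisterTaskDefinitionRequest;

// Sketch: register a task definition with a container-level hard limit (memory)
// and soft limit (memoryReservation), in addition to any task-level limits.
public class RegisterTaskDef {
    public static void main(String[] args) {
        try (EcsClient ecs = EcsClient.create()) {
            ContainerDefinition app = ContainerDefinition.builder()
                    .name("app")
                    .image("my-registry/my-java-service:latest") // placeholder image
                    .memory(2048)            // hard limit in MiB; becomes the container's cgroup limit
                    .memoryReservation(1024) // soft limit in MiB, used for placement
                    .build();

            ecs.registerTaskDefinition(RegisterTaskDefinitionRequest.builder()
                    .family("my-java-service") // placeholder family name
                    .containerDefinitions(app)
                    .build());
        }
    }
}
```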
We ran into the same issues; our Java services were either freezing or hitting OOM errors after the upgrade to AL2023. It turned out we were running a JDK version which did not have support for cgroups v2, meaning the JDKs inside the containers were not aware of the memory limits imposed by the ECS tasks. Upgrading the JDK to a recent version with cgroups v2 support fixed the issue.
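To verify whether the JDK inside a container actually sees the cgroup limit, here is a small sketch that can be run inside the container (cgroup v2 path assumed). On a cgroups-v2-aware JDK the max heap should be derived from memory.max (by default roughly 25% of it), not from the host's physical RAM:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: compare the cgroup v2 memory limit with the max heap the JVM
// ergonomics actually picked. memory.max prints "max" when unlimited.
public class ContainerLimitCheck {
    public static void main(String[] args) throws Exception {
        Path memMax = Path.of("/sys/fs/cgroup/memory.max"); // cgroup v2 path
        String limit = Files.exists(memMax) ? Files.readString(memMax).trim() : "unknown (not cgroup v2?)";
        System.out.println("cgroup memory.max : " + limit);
        System.out.println("JVM max heap      : " + Runtime.getRuntime().maxMemory() + " bytes");
        System.out.println("JVM CPU count     : " + Runtime.getRuntime().availableProcessors());
    }
}
```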
@dduvnjak Interesting. I've had issues too; however, this is running Java 21.0.5, so it can't be the same as your root cause. Since I don't have good metrics before/after the change I can't really assume the memory usage is greater, so I'd just come to the naive conclusion that the memory killer is far more aggressive with the ECS AMI AL2023 and cgroups v2, and perhaps instantaneous blips over the max (which come down shortly after) are being punished more severely and instantly than before. I can't find good articles, but I was under the understanding that this was one of the design goals in cgroups v2 to improve isolation (requiring containers that go over their maxes to be punished more strictly). I suppose it's possible that even on a JDK with cgroups v2 support the ergonomics and the values it uses differ slightly, causing it to consume more, though... Strangely enough, https://docs.aws.amazon.com/linux/al2023/ug/ecs.html notes
Actually, I thought that killing everything in the cgroup/container was one of the intentions/goals for containers with cgroups v2... (but I can confirm that on AL2023 ECS AMIs it is just picking a process inside my container to kill, which leads to unpredictable behaviour)
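Whether the kernel kills the whole cgroup or just a single victim process is controlled by the cgroup v2 `memory.oom.group` knob, so one way to check the behaviour described above is to read it from inside the container. A minimal sketch, assuming the container sees its own cgroup root at /sys/fs/cgroup:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: check whether group OOM kill is enabled for this cgroup.
// memory.oom.group = 1 -> the kernel kills every process in the cgroup together on OOM;
// memory.oom.group = 0 -> the OOM killer picks a single victim process, matching the
// "one process inside the container gets killed" behaviour described above.
public class OomGroupCheck {
    public static void main(String[] args) throws Exception {
        Path oomGroup = Path.of("/sys/fs/cgroup/memory.oom.group");
        if (Files.exists(oomGroup)) {
            System.out.println("memory.oom.group = " + Files.readString(oomGroup).trim());
        } else {
            System.out.println("memory.oom.group not found (cgroup v1, or not this container's cgroup root)");
        }
    }
}
```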
I have an issue due to running Got a lot of OOM alerts all the time. Which is before migrating from
I wonder how the experience differs between Fargate ECS users and old-school EC2 ones. Mine is non-Fargate, for what it's worth.
We have the same issue with Fargate. AWS doesn't seem to care since the "solution" is to increase the (paid) memory via Fargate/EC2.
In my experience, Fargate itself is quite convenient for those who don't want to worry about the infrastructure, but it is costly compared to EC2. It does have Fargate Spot, which is very cheap; if your workload is fault tolerant, it is a good candidate. By the way, there are limitations when using Fargate, like not being able to adjust or use host capabilities.
Yes, I understand Fargate. I meant whether Fargate users experienced an increase in OOM kills for containers similar to what EC2 users saw with AL2023, not the trade-offs. This ticket was opened with respect to EC2 originally and is on the AMI repo, so it doesn't really relate to Fargate by definition. Actually, after looking more closely (since I don't use Fargate right now personally), only the AL2 platform version is supported for Fargate (?), so this problem (increase in memory usage or OOM kills going from AL2 to AL2023) cannot apply, by definition.
Same per aws/containers-roadmap#2285
Oh sorry, I misunderstood. I have used Fargate platform version 1.4.0 with prod workloads and have no experience with any OOM kills like this.
There's some commentary on cgroups v2 at https://github.com/cockroachdb/cockroach/issues/114774 that notes a few relevant things that might explain why it's necessary to set task-specific limits when relying on cgroups kernel stats to auto-configure software.
@chadlwilson fwiw, our ECS task containers have Cpu, Memory and MemoryReservation limits set, and the JVM is configured with
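For anyone wanting to confirm which heap-sizing values a running JVM actually resolved inside a container, here is a small sketch that queries standard HotSpot options via the diagnostic MXBean (the flag list is illustrative, not taken from this thread):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Sketch: print the heap-sizing flags the running JVM resolved, so you can see
// whether MaxRAMPercentage is being applied against the container's cgroup limit
// rather than host RAM. UseContainerSupport is a Linux-only flag, which is fine
// inside a Linux container.
public class JvmErgonomicsCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        for (String flag : new String[] {"MaxRAMPercentage", "InitialRAMPercentage",
                                         "MaxHeapSize", "UseContainerSupport"}) {
            System.out.println(flag + " = " + hotspot.getVMOption(flag).getValue());
        }
    }
}
```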
Tell us about your request
ECS containers are getting killed due to Out of Memory (OOM) with the new Amazon Linux 2023 ECS AMI.
Which service(s) is this request for?
ECS - with EC2 (Autoscaling and Capacity Provider setup)
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We are deploying our workloads on AWS ECS (EC2 based). We recently migrated our underlying cluster instances' AMI to Amazon Linux 2023 (previously Amazon Linux 2). After the migration, we are seeing a lot of "OOM Container Killed" errors for our services without any change on the service side.
Are you currently working around this issue?