Fatal error on setting memory permissions (Fatal error... Check failed: 12 == (*__errno_location ())) #55509
Comments
# Fatal error... Check failed: 12 == (*__errno_location ())
Interestingly, we just had all of our production services crash at once with the same error, running on AWS EC2 instances with Node v20 and v22. The other information provided by Gabba holds true for us, but it affected everything out of the blue after all of our AWS k8s nodes were restarted by AWS at the same time. We're still investigating the cause and a solution, but the timing of this issue being created just a few hours ago coincides suspiciously with our own incident, which makes me wonder what's going on here. We also run Java services and they were affected as well, so I don't believe this is an issue with Node.js or V8 itself. The image where our issue started:
The Java error was less verbose but looks like:
NodeJS Error
Facing the same issue with all the node pods.
I am also facing the same issue with all the node pods. I am on
Hi! This appears to be a duplicate of nodejs/help#4465. Is that not the case?
It could be considered a duplicate, but I think this issue is still valuable from a discoverability POV. I do not believe this is an issue with Node.js in any way, but rather an EKS image release that just hit AWS, since the Java services we run are also affected. I think the appropriate place for the ticket is aws/eks-distro#3370. That being said, it's really hard to say what the exact root cause is right now.
We have moved off Bottlerocket and onto AL2 in order to work around this. Our nodes were running the image:
Interlinking for future discoverability: bottlerocket-os/bottlerocket#4260 (comment)
Relevant excerpt from the strace log:
1740 mprotect(0x84c0000, 536870912, PROT_READ|PROT_WRITE|PROT_EXEC) = -1 EACCES (Permission denied)
1740 write(2, "\n\n#\n# Fatal error in , line 0\n# ", 32) = 32
1740 write(2, "Check failed: 12 == (*__errno_lo"..., 43) = 43
1740 write(2, "\n#\n#\n#\n#FailureMessage Object: 0"..., 45) = 45
1740 write(2, "\n", 1) = 1
1740 write(2, "----- Native stack trace -----\n\n", 32) = 32
1740 futex(0x7fde3fb9b1f0, FUTEX_WAKE_PRIVATE, 2147483647) = 0
1740 write(2, " 1: 0x107e621 [node]\n", 22) = 22
1740 write(2, " 2: 0x2aba423 V8_Fatal(char cons"..., 48) = 48
1740 write(2, " 3: 0x2ac5066 v8::base::OS::SetP"..., 104) = 104
1740 write(2, " 4: 0x14c1bfc v8::internal::Code"..., 97) = 97
1740 write(2, " 5: 0x155982f v8::internal::Heap"..., 73) = 73
1740 write(2, " 6: 0x149ac92 v8::internal::Isol"..., 142) = 142
1740 write(2, " 7: 0x19ee994 v8::internal::Snap"..., 80) = 80
1740 write(2, " 8: 0x1315af6 v8::Isolate::Initi"..., 93) = 93
1740 write(2, " 9: 0xed9a18 node::NewIsolate(v8"..., 163) = 163
1740 write(2, "10: 0x1043a6d node::NodeMainInst"..., 530) = 530
1740 write(2, "11: 0xf95806 node::Start(int, ch"..., 45) = 45
1740 write(2, "12: 0x7fde3f9bb24a [/lib/x86_64"..., 54) = 54
1740 write(2, "13: 0x7fde3f9bb305 __libc_start_"..., 71) = 71
1740 write(2, "14: 0xecff4e _start [node]\n", 27) = 27
1740 --- SIGTRAP {si_signo=SIGTRAP, si_code=SI_KERNEL, si_addr=NULL} ---
So clearly, the memory protection call did not go through. This is not a Node.js issue: some lower-level component is modifying the memory access/protection settings for the process, and Node.js cannot function under that kind of restriction.
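For anyone who wants to confirm the same failure on an affected host, a minimal sketch (assuming strace is available and node is on PATH) is to trace only the mprotect calls and look for the failing PROT_EXEC request shown above:

  # Trace only mprotect calls made by node and its children; on an affected
  # host, the large PROT_READ|PROT_WRITE|PROT_EXEC request made during V8
  # isolate setup shows up with a failing return value (EACCES/EPERM).
  strace -f -e trace=mprotect node -e 'console.log("started")' 2>&1 | grep -E 'PROT_EXEC|EACCES|EPERM'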
Yeah, this is almost certainly the MemoryDenyWriteExecute systemd setting, which enforces W^X for memory pages. That's good for random apps, but Node.js and probably most JIT environments aren't compatible with it out of the box. Running with
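If MemoryDenyWriteExecute is indeed what is biting a given host, it can be toggled per unit. A minimal sketch, assuming a systemd-managed host and a hypothetical unit name my-node-app.service (not taken from this thread):

  # Run node in a transient unit with W^X enforcement turned on; if the node
  # build on this host is affected, it should fail the same way as above.
  systemd-run --wait --pty -p MemoryDenyWriteExecute=yes node -e 'console.log("ok")'

  # For a real service, a drop-in can relax the setting for that unit only:
  sudo systemctl edit my-node-app.service
  #   [Service]
  #   MemoryDenyWriteExecute=no
  sudo systemctl restart my-node-app.service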
Hey everyone, this is a reminder that "me too" comments only add more noise to this already noisy issue. Please refrain from commenting unless you have something to add to the conversation.
@redyetidev - which comment are you referring to as a "me too" comment?
This is more of a general statement, I'm not directing it at anyone specific.
Then please refrain from making general statements without reason; it confused me.
Version
v23.0.0
Platform
Subsystem
No response
What steps will reproduce the bug?
By repeatedly running Node, e.g. by simply installing the dependencies of a project through the command
node $(which npm) install
in a while loop (see the script used for testing here). For instance:
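A minimal sketch of such a loop, assuming a POSIX shell (the actual test script linked above may differ):

  # Keep reinstalling the project's dependencies until a node invocation
  # fails, then report the failing exit status.
  while true; do
    node "$(which npm)" install
    status=$?
    if [ "$status" -ne 0 ]; then
      echo "node exited with status $status"
      break
    fi
  done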
How often does it reproduce? Is there a required condition?
Very often on specific platforms such as the one shown above.
What is the expected behavior? Why is that the expected behavior?
Node should not fail and crash.
What do you see instead?
At some point Node crashes, giving the following error, extracted from here:
Additional information
Similar issue nodejs/help#4465.
The output of strace can be found here.