Traces are not flushed to the server before the lambda terminates #787
Comments
I'm observing this behavior as well. There is a processor in the upstream. On a related note, I think it would be wise to leave the upstream support for these components in place, so users of the ADOT Collector for Lambda have the option to use them.
This issue is stale because it has been open 90 days with no activity. If you want to keep this issue open, please just leave a comment below and auto-close will be canceled.
Not stale, still relevant.
I believe, if I'm reading this right, no processors are allowed?
@Dr-Emann, I don't think that's the case. There are just no processors at this moment in the lambda layer.
+1 Yes please. It's been a run-around to find this discussion and why the batch/decouple processor is failing.
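To make the failure mode above concrete, here is a hypothetical collector configuration of the kind a user might ship with the Lambda layer (the endpoint and layout are illustrative, not taken from this issue). Since the thread suggests the Lambda layer builds exclude processor components, a pipeline referencing the batch (or the community decouple) processor would fail to load rather than fix the flush-on-freeze problem:

```yaml
# Hypothetical config for illustration only — not confirmed against the
# ADOT Lambda layer. The `batch` processor referenced below is reported
# in this thread to be unavailable in the layer build, so the collector
# would reject this pipeline at startup.
receivers:
  otlp:
    protocols:
      grpc:
processors:
  batch:            # not compiled into the Lambda layer, per this thread
exporters:
  otlp:
    endpoint: api.honeycomb.io:443   # placeholder backend endpoint
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]            # this reference is what fails
      exporters: [otlp]
```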
This issue is stale because it has been open 90 days with no activity. If you want to keep this issue open, please just leave a comment below and auto-close will be canceled.
Not stale.
This issue is stale because it has been open 90 days with no activity. If you want to keep this issue open, please just leave a comment below and auto-close will be canceled.
Still relevant.
Describe the bug
The created traces are not flushed to the tracing provider before the lambda is terminated.
If a lambda is executed only a single time, the traces are not flushed to the tracing provider before the lambda is frozen again.
If the same lambda is executed multiple times within a few seconds, the traces of the first invocations are transferred. The same behavior can be observed if a sleep command is included in the lambda after the last trace is closed.
This behavior leads me to the assumption that the lambda simply does not have enough time after the final span is closed to transfer the information to the server (Honeycomb in my case).
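The suspected mechanism can be sketched with a minimal simulation (this is deliberately not the OTel SDK): a batching exporter buffers spans and sends them later on a timer, but Lambda freezes the process as soon as the handler returns, so a buffered span is never delivered unless it is flushed explicitly first. The class and function names below are invented for illustration; in the real OpenTelemetry Node SDK the equivalent escape hatch is awaiting `forceFlush()` on the tracer provider before the handler returns.

```javascript
// Minimal simulation of the flush-on-freeze problem. A batching exporter
// holds spans in a buffer and normally exports them later (on a timer),
// but Lambda freezes the process right after the handler returns, so
// pending timers never fire and buffered spans are lost.
class BatchingExporter {
  constructor() {
    this.buffer = []; // spans waiting to be exported
    this.sent = [];   // spans actually delivered to the backend
  }
  addSpan(span) {
    this.buffer.push(span); // export would normally happen later, on a timer
  }
  flush() {
    // Drain the buffer synchronously, as an explicit flush would.
    this.sent.push(...this.buffer);
    this.buffer = [];
  }
}

function handler(exporter, { flushBeforeReturn }) {
  exporter.addSpan('invocation-span');
  if (flushBeforeReturn) exporter.flush(); // the workaround: flush before freeze
  // ...Lambda freezes the process here; pending export timers never run...
}

const lost = new BatchingExporter();
handler(lost, { flushBeforeReturn: false });
console.log(lost.sent.length);    // span is still buffered when frozen

const flushed = new BatchingExporter();
handler(flushed, { flushBeforeReturn: true });
console.log(flushed.sent.length); // span was delivered before return
```

This matches the observed symptoms: repeated invocations or a sleep give the background export a chance to run, while a single short invocation does not.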
Steps to reproduce
I created a minimal example project to reproduce the problem (https://github.com/sintexx/aws-otel-tracing-problem-example). The project can be started with 'npm run dev'. The single test lambda can be triggered by putting any message in the SQS queue.
The tracing provider used is Honeycomb (https://www.honeycomb.io/). To run the project with a different provider, the getMonitoringLayer function in sst.config.ts needs to be modified to replace the API token and endpoint.
What did you expect to see?
The trace is uploaded even for a single lambda invocation.
What did you see instead?
Only multiple invocations shortly after each other, or an artificial delay at the end of the lambda execution, result in a visible trace.
What version of collector/language SDK version did you use?
aws-otel-nodejs-amd64-ver-1-17-1:1
What language layer did you use?
Node.js
Additional context
The project uses SST (https://sst.dev/) and not CDK directly.