Child channel count was caught attempting to be increased over max concurrency #1803
Comments
Hi @lbourdages, thank you for being diligent and reporting this to us. I understand the issue is happening out of nowhere, but can you answer some questions to help us isolate the case so we can try to reproduce it on our side?
Hi! Here are a few minutes of logs. Note that I've scrubbed out the logs from our business logic, but they should not have any impact on the operation of the Netty internals anyway. The service had been running for about 25 hours at the time the log appeared. Several hours before, we had two instances of this:
That's all I can find.
We just experienced this too on version 2.3.3 of the amazon-kinesis-client, after upgrading from 2.2.5.
This happened four times over the last day. The occurrences don't seem to coincide with the other error, though. We're going to roll back to the old version now. Would appreciate any update on this.
@Bas83 thank you for reporting. Regarding the
I'm facing the same issue when writing a stream of records (batch API) to Kinesis using KinesisAsyncClient (v2.17.106/127). Setting particularly X, but also Y, too high will repeatedly fail the stream within 1 minute. Even setting numRetries unreasonably high does not prevent the issue. Initially I'm seeing this periodically:
soon after, followed by:
An interesting observation is that for
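For context, here is a minimal sketch (not the reporter's actual code) of the kind of setup described above: a KinesisAsyncClient backed by the SDK's Netty NIO HTTP client writing a small PutRecords batch. The region, stream name, and maxConcurrency value are placeholders, and maxConcurrency is only assumed to be one of the unnamed tunables ("X"/"Y") mentioned above.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordsRequest;
import software.amazon.awssdk.services.kinesis.model.PutRecordsRequestEntry;
import software.amazon.awssdk.services.kinesis.model.PutRecordsResponse;

public class PutRecordsSketch {
    public static void main(String[] args) {
        // Async client backed by the Netty NIO HTTP client; maxConcurrency caps
        // the number of concurrent requests handled by the HTTP client.
        KinesisAsyncClient kinesis = KinesisAsyncClient.builder()
                .region(Region.US_EAST_1)                 // placeholder region
                .httpClientBuilder(NettyNioAsyncHttpClient.builder()
                        .maxConcurrency(50))              // placeholder value for illustration
                .build();

        // A small PutRecords batch (the batch API mentioned above).
        List<PutRecordsRequestEntry> entries = List.of(
                PutRecordsRequestEntry.builder()
                        .partitionKey("pk-1")
                        .data(SdkBytes.fromUtf8String("payload-1"))
                        .build());

        CompletableFuture<PutRecordsResponse> future = kinesis.putRecords(
                PutRecordsRequest.builder()
                        .streamName("my-stream")          // placeholder stream name
                        .records(entries)
                        .build());

        future.whenComplete((resp, err) -> {
            if (err != null) {
                err.printStackTrace();
            } else {
                System.out.println("Failed records: " + resp.failedRecordCount());
            }
            kinesis.close();
        }).join();
    }
}
```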
@debora-ito I tried looking into the issue. The strange thing is, when using a custom build of
@mosche the custom build of
@debora-ito All I did was a
Digging further, this turns out to be a classpath issue with different transitive versions of netty. In my case Spark 2.4.7 uses
In my case, forcing any consistent version of netty solves the issue. The dependency diff below shows the change; a version-check sketch follows it.
io.netty:netty:3.9.9.Final
-io.netty:netty-all:4.1.47.Final
+io.netty:netty-all:4.1.72.Final
io.netty:netty-buffer:4.1.72.Final
io.netty:netty-codec:4.1.72.Final
+io.netty:netty-codec-dns:4.1.72.Final
+io.netty:netty-codec-haproxy:4.1.72.Final
io.netty:netty-codec-http:4.1.72.Final
io.netty:netty-codec-http2:4.1.72.Final
+io.netty:netty-codec-memcache:4.1.72.Final
+io.netty:netty-codec-mqtt:4.1.72.Final
+io.netty:netty-codec-redis:4.1.72.Final
+io.netty:netty-codec-smtp:4.1.72.Final
+io.netty:netty-codec-socks:4.1.72.Final
+io.netty:netty-codec-stomp:4.1.72.Final
+io.netty:netty-codec-xml:4.1.72.Final
io.netty:netty-common:4.1.72.Final
io.netty:netty-handler:4.1.72.Final
+io.netty:netty-handler-proxy:4.1.72.Final
io.netty:netty-resolver:4.1.72.Final
+io.netty:netty-resolver-dns:4.1.72.Final
+io.netty:netty-resolver-dns-classes-macos:4.1.72.Final
+io.netty:netty-resolver-dns-native-macos:4.1.72.Final
io.netty:netty-tcnative-classes:2.0.46.Final
io.netty:netty-transport:4.1.72.Final
io.netty:netty-transport-classes-epoll:4.1.72.Final
+io.netty:netty-transport-classes-kqueue:4.1.72.Final
+io.netty:netty-transport-native-epoll:4.1.72.Final
+io.netty:netty-transport-native-kqueue:4.1.72.Final
io.netty:netty-transport-native-unix-common:4.1.72.Final
+io.netty:netty-transport-rxtx:4.1.72.Final
+io.netty:netty-transport-sctp:4.1.72.Final
+io.netty:netty-transport-udt:4.1.72.Final
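As a quick sanity check for the classpath fix above, here is a small diagnostic sketch (an illustration, not from the original thread) that uses netty's io.netty.util.Version to print which netty artifacts and versions are actually resolved at runtime; mixed transitive versions show up immediately in its output.

```java
import java.util.Map;

import io.netty.util.Version;

public class NettyVersionCheck {
    public static void main(String[] args) {
        // Lists every netty artifact found on the classpath with its version,
        // which makes inconsistent transitive versions easy to spot.
        Map<String, Version> versions = Version.identify();
        versions.forEach((artifact, version) ->
                System.out.println(artifact + " -> " + version.artifactVersion()));
    }
}
```

Running this inside the affected application (for example on the Spark classpath) shows whether netty-all and the individual netty modules resolve to the same version.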
Looks like 4.1.69.Final is the first correct uber jar that doesn't contain classes itself ...
@debora-ito Also wondering, is com.typesafe.netty:netty-reactive-streams-http an intentional dependency of netty-nio-client?
Yes. See aws-sdk-java-v2/http-clients/netty-nio-client/pom.xml, lines 90 to 93 (at commit 76aef12).
@lbourdages @Bas83 we want to investigate this error further, but we need sample code so we can reproduce it on our side. Do you have self-contained repro code that reliably shows the error, using the latest version of the SDK?
I'm afraid we don't; it's been a while and we've been running on version 2.4.3 for some time now. The only problem we have with that version (or rather with 2.4.1 before it, but we haven't tried again since) is that updating to Java 17 (from 16) miraculously makes Kinesis skip part of the records while consuming.
Not a lot of movement around this issue lately, so I'm closing it. If anyone has repro code that reproduces the error reliably, please send it to us in a fresh new GitHub issue.
This issue is now closed. Comments on closed issues are hard for our team to see.
I found a WARN-level log line that told me to contact the Java SDK team, so here's the bug report.
Describe the bug
We run a Kinesis Client Library application. During what appeared to be normal operation, a log line was printed telling me to contact the Java SDK team.
Expected Behavior
N/A
Current Behavior
Here's the log in its entirety:
The logger name is "software.amazon.awssdk.http.nio.netty.internal.http2.MultiplexedChannelRecord"
Steps to Reproduce
It happened out of nowhere; I don't know how to reproduce it.
Possible Solution
N/A
Context
KCL GitHub: https://github.com/awslabs/amazon-kinesis-client
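For reference, here is a minimal sketch (assuming the amazon-kinesis-client dependency is on the classpath) of how a KCL application typically creates the KinesisAsyncClient whose Netty HTTP/2 channel handling emits the warning above; KinesisClientUtil is from the KCL, and the region is a placeholder.

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.kinesis.common.KinesisClientUtil;

public class KclClientSetup {
    public static void main(String[] args) {
        // KinesisClientUtil applies the KCL's recommended HTTP client settings
        // to the builder before building the async client.
        KinesisAsyncClient kinesisClient = KinesisClientUtil.createKinesisAsyncClient(
                KinesisAsyncClient.builder().region(Region.US_EAST_1)); // placeholder region

        System.out.println("Created: " + kinesisClient);
        kinesisClient.close();
    }
}
```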
Your Environment