Unrelated ingresses targeting the same service/port: annotations for session stickiness not respected #12399
Comments
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/assign
/remove-kind bug
Right now, it's not even known if you are using the controller released by this project, or if so, which version. An attempt to reproduce with 5 replicas of a pod using --image nginx:alpine, and even one single ingress with that annotation, fails to show the hash directive under the related server block. /triage needs-information
/kind support
Likely related to this: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash
I have updated it with further tests. We found this in our cluster using ingress-nginx v1.10; however, in minikube with the latest version the behaviour is the same. I have also updated the instructions to include how to install minikube and apply the sample ingresses which show the behaviour. Note that the order somehow matters, so if the result of
Thanks.
And I am not clear about the test for stickiness. I don't want to nitpick, but I do want to copy/paste from your reproduce procedure, and I don't see a step to install the echo-server, let alone the multi-replica spec needed to make stickiness meaningful.
An echoserver should be unnecessary, because the output of kubectl ingress-nginx backends already shows that stickiness is not configured (ordering or naming of the ingresses is important). We also have this on our production in v1.10, so I don't see why 1.11.x would help: the problem exists on 1.10 and on the latest dev. If you need an echoserver to show this, the code here should be able to be applied:
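For completeness, a minimal multi-replica backend of the kind discussed here could look like the sketch below. This is an illustrative reconstruction, not the manifest originally attached to this comment: the names, the image (nginx:alpine, as in the triage attempt above) and the replica count are assumptions, and any HTTP backend is enough for inspecting the generated configuration.

```yaml
# Illustrative multi-replica backend (assumed names; not the original manifest).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 5
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
    - name: http
      port: 80
      targetPort: 80
```

Together with two ingresses like the ones sketched in the reproduction steps further down (host1 without the annotation, host2 with it), this is enough to observe whether stickiness is actually configured.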
In the above, the expectation is that host1 has stickiness enabled and host2 does not. But because of how the stickiness is handled, either both will have stickiness enabled or both will have it disabled (I haven't figured out where exactly this is happening). This was an annoying bug for us because a seemingly unrelated internal ingress broke our public ingress stickiness.
We have a workaround now: we annotate all ingresses targeting the same service and same port with the same annotations. That is, if we want session stickiness we apply the annotation to every ingress in the namespace that targets that service and port, and they are all forced to be sticky. This is our workaround for now.
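A sketch of that workaround, with illustrative names: every ingress that points at the shared service and port carries the identical annotation, so the shared upstream ends up configured the same way no matter which ingress the controller processes.

```yaml
# Workaround sketch (metadata fragment only, illustrative): the same annotation
# is added to *each* ingress that targets the shared service/port.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
```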
I am not an expert on anything, so I try to reproduce a reported problem first, only by using the steps provided by the reporter. This makes sure that I am not injecting any wrong spec/config. I also wrote twice about the nginx.conf file because all annotations end up in nginx.conf as a directive. So I was first trying to see what directive is configured in nginx.conf if there is only one ingress for a given hostname and that one ingress is configured with the upstream annotation. But from your comments I am left guessing that maybe there is only one single server block for both ingresses, or maybe the nginx reconciling breaks when it needs to apply the relevant nginx directive to 2 server blocks because they both are the same server but only one ingress has the annotation (based on the code, the upstream name is generated using the 3 components, viz. namespace+servicename+serviceport). I am also wondering if the breaking changes introduced in v1.12.x-beta impacted this somehow, hence I asked you to test on controller v1.11.x. We can apply the bug label once we have the confirming smoking-gun data. Meanwhile, can you at least tell what nginx.conf directive this annotation configures? You can look at the controller where you have applied the workaround that you mentioned.
As requested, the nginx.conf, which I believe does not contain any details relevant to the problem. It is from the full example above, including the echoserver:
Re-confirmed with version
and updated the original to include this version.
I did some more testing and found out that the nginx.conf does not change even if there is only one ingress for one backend with the annotation. It looks like it's a Lua implementation. In that context, I think this is not limited to two ingresses with different hostnames but one common backend: this problem, I suspect, will happen for any Lua implementation where namespace+servicename+portname or namespace+servicename+portnumber is used to generate the upstream-name.
I think this is expected, and I think you need to experiment with changing the portname, or not using a portname but instead using two different port numbers.
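If I read the suggestion correctly, the idea is that giving each ingress a different service port produces a different namespace+servicename+port upstream name, so the two ingresses no longer share one upstream and the annotation on one can no longer leak to the other. A hedged sketch, assuming a service named echo (the port names and numbers are illustrative):

```yaml
# Illustrative only: expose the same pods on two different service ports so
# each ingress resolves to a distinct upstream (per the triplet described above).
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
    - name: http-a
      port: 80
      targetPort: 80
    - name: http-b
      port: 81
      targetPort: 80
```

The annotated ingress would then reference one port number and the unannotated ingress the other.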
This has nothing to do with the krew plugin. I used the krew plugin to show how the configuration breaks, rather than deploying an echoserver and relying on multiple refreshes. The behaviour happens with and without the krew plugin; it was just the fastest way to see it. You are correct that it's in the Lua implementation, which is what I identified in the opening of the issue. When you say
this is what I said
When you say
No, and this is the problem. If you annotate one ingress, then both get it... but only if the annotated ingress is (alphabetically?) second (?).
Not sure where you saw me write that this issue is related to the krew plugin. Someone else had also reported an issue related to the "triplicate", and I tested that issue and found that the use case is not supported. There is no handover of issues in OSS. So please wait for comments from others.
"Not sure where you saw me write that this issue is related to the krew plugin." You said I realized just now that you are reporting something that only happens with the krew plugin for kubectl emphasis mine. |
Ah, that was a reference to your issue description that says you saw the backends, and that command is only possible with the krew plugin, or so I assume. In any case, please wait for comments from others. Thanks for your help.
This smells like a bug. I wonder why we wouldn't have an e2e test for this.
What happened:
We have two ingresses on separate hostnames which target the same service and port. When we annotated the second ingress with
nginx.ingress.kubernetes.io/upstream-hash-by: $remote_addr
this was not added to the nginx configuration and session stickiness did not work.
What you expected to happen:
ingresses annotated with
nginx.ingress.kubernetes.io/upstream-hash-by: $remote_addr
are sticky to pods based on the remote client IP.
We were able to trace it down to the list provided by
kubectl ingress-nginx backends
which shows that the upstream hash setting is shared for both ingresses, because it sits in a single section labeled by the namespace-service-port triplicate.
NGINX Ingress controller version:
Kubernetes version: v1.29.9 and v1.30.0
Environment:
Kernel (e.g. uname -a): 6.6.54-flatcar
How to reproduce this issue:
Install minikube
Install the ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml
or v1.11
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/refs/heads/release-1.11/deploy/static/provider/baremetal/deploy.yaml
Create two ingresses
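The exact manifests are not included in this copy of the issue; the following is a minimal sketch consistent with the description (illustrative names and hosts, with the stickiness annotation only on the second, alphabetically later, ingress):

```yaml
# Illustrative reproduction manifests (assumed names/hosts, not the reporter's
# originals): two ingresses in the same namespace, same service and port,
# only the second one annotated for stickiness.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host1
spec:
  ingressClassName: nginx
  rules:
    - host: host1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host2
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
  ingressClassName: nginx
  rules:
    - host: host2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo
                port:
                  number: 80
```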
It is important that both ingresses in the above example are in the same namespace, and both point to the same service and same port, and that the first is alphabetically before the second.
The session stickiness annotation will not be applied, which can be verified with
kubectl ingress-nginx backends -n ingress-nginx
this should output:
whereas the expected output was:
Because these are two separate hostnames, I would expect the ingress controller to handle them separately, and that annotations on a seemingly unrelated ingress would not affect the other. Note that the annotation should only be on one of the ingresses, and the order matters. This is tested in our cluster on v1.29 with ingress-controller version v1.10 and is reproducible in minikube with v1.12.0-beta.