Can VPC-CNI with Security Group for Pods work with kube-proxy in IPVS mode? #2982
Comments
Yes, there is no known limitation that would prevent this from working. kube-proxy is the service proxy layer, and its mode (IPVS or iptables) shouldn't interfere with the Security Groups for Pods (SGPP) functionality. I will set up an IPVS cluster, verify, and share my output.
Hi @AhmadMS1988, I set up the cluster and switched kube-proxy to IPVS mode. I also created an Nginx service with a security group and tested in-cluster pod-to-pod connectivity to Nginx, which worked fine. Could you please provide more details on the specific networking test that is failing? TIA!
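For anyone who wants to reproduce that check, a test along these lines should work (a sketch only: the security group ID, names, and namespace below are placeholders, and Security Groups for Pods additionally assumes `ENABLE_POD_ENI=true` on the aws-node DaemonSet and a supported Nitro instance type):

```bash
# Attach a security group to the nginx pods before creating them
# (sg-0123456789abcdef0 is a placeholder security group ID).
cat <<'EOF' | kubectl apply -f -
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: nginx-sgp
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0
EOF

# Run nginx behind a ClusterIP Service.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80

# Test in-cluster pod-to-pod connectivity through the Service IP.
kubectl run client --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv --max-time 5 http://nginx.default.svc.cluster.local
```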
Hi all;
@AhmadMS1988 - I was able to verify SGP working with kube-proxy in IPVS mode. I followed these public docs and switched to IPVS mode, roughly as sketched below.
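For reference, switching kube-proxy to IPVS mode on EKS boils down to roughly the following (a sketch; `kube-proxy-config` and the `kube-proxy` DaemonSet are the EKS defaults at the time of writing, and the nodes need the ip_vs kernel modules loaded):

```bash
# Set mode: "ipvs" in the kube-proxy configuration
# (EKS keeps it in the kube-proxy-config ConfigMap in kube-system).
kubectl -n kube-system edit configmap kube-proxy-config

# Roll kube-proxy so the new mode takes effect.
kubectl -n kube-system rollout restart daemonset kube-proxy

# Confirm the IPVS proxier is in use.
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50 | grep -i ipvs

# On a node, Service virtual servers should now show up in IPVS.
sudo ipvsadm -Ln
```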
Hi @orsenthil |
Hi @AhmadMS1988, I did reproduce your issue. Yes, it indeed does not work. Here's why:
Thank you.
@AhmadMS1988 - Confirming the observation from @yash97. This seems problematic only with IPVS mode; that is, pods with SGP do not seem to connect to Service IPs when kube-proxy runs in IPVS mode.
@AhmadMS1988 - We are going to call this out as a limitation for now. We have had a known issue with pods with security groups not working in IPVS mode for some time now.
Documented the limitation in #3064 until we add support.
What happened:
We need to use kube-proxy in IPVS mode with the VPC CNI because of our increased number of pods, and also to achieve load balancing between requests using Kubernetes Services.
We noticed that for pods that have security groups and branch network interfaces, both ingress and egress traffic stops and never comes back until we switch back to iptables mode and refresh all nodes.
We need to know if VPC CNI supports both security groups for pods and kube-proxy in IPVS mode.
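To narrow this down, it may help to confirm that the affected pods actually received a branch ENI and that kube-proxy is running the IPVS proxier; something along these lines (a sketch; `<pod-name>` is a placeholder and the exec test assumes curl is available in the pod image):

```bash
# Pods with a branch ENI carry the vpc.amazonaws.com/pod-eni annotation.
kubectl get pod <pod-name> -o yaml | grep 'vpc.amazonaws.com/pod-eni'

# Check which proxier kube-proxy started with.
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=100 | grep -iE 'ipvs|iptables'

# From inside an affected pod, test reachability of a Service IP
# (the in-cluster API server Service is a convenient target).
kubectl exec -it <pod-name> -- curl -skv --max-time 5 https://kubernetes.default.svc.cluster.local
```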
Environment:
- Kubernetes version (use `kubectl version`):
  - Client Version: v1.30.2
  - Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  - Server Version: v1.30.1-eks-1de2ab1
- CNI Version: v1.18.2-eksbuild.1
- kube-proxy Version: v1.30.0-eksbuild.3
- OS (e.g. `cat /etc/os-release`):
  - Amazon Linux 2023.5.20240624 (v1.30.0-eks-036c24b)
  - Bottlerocket OS 1.20.3 (aws-k8s-1.30) (v1.30.0-eks-fff26e3)
- Kernel (e.g. `uname -a`):
  - Amazon Linux: 6.1.94-99.176.amzn2023.x86_64
  - Bottlerocket OS: 6.1.92
- Tested on both arm64 and amd64