
etcd-manager logs say that the cilium etcd member has joined the rest of the cluster correctly, but that is not correct: etcd is down and there is no data in the volume attached to the EC2 instance #16872

Open
nuved opened this issue Oct 2, 2024 · 1 comment


nuved commented Oct 2, 2024

/kind bug

1. What kops version are you running? The command kops version will display
this information.

1.28.4
I also tried upgrading the cluster to the latest stable version, 1.29.2, but there is no difference.

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

kubectl client: v1.31.0
Kubernetes server: v1.27.16
etcd: 3.5.9
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops --name=mycluster --state s3://my-cluster-sample rolling-update cluster --instance-group=master-1b --yes
5. What happened after the commands executed?

kubectl get pod -n kube-system  | grep cilium
cilium-2vh5m                                    1/1     Running   49 (6d23h ago)   7d18h
cilium-4w865                                    1/1     Running   4 (6d23h ago)    6d23h
cilium-8ccpq                                    1/1     Running   0                28m
cilium-c78fl                                    1/1     Running   56 (38h ago)     21d
cilium-fbdhl                                    1/1     Running   0                6d7h
cilium-gwkxp                                    1/1     Running   22 (6d23h ago)   7d1h
cilium-lv2nd                                    1/1     Running   0                6d23h
cilium-operator-7575d5dccc-cwsqf                1/1     Running   2 (4m52s ago)    33m
cilium-operator-7575d5dccc-kgnm6                1/1     Running   2 (5m28s ago)    37m
cilium-pqprl                                    1/1     Running   49 (6d23h ago)   20d
cilium-rntt6                                    1/1     Running   7 (6d23h ago)    6d23h
cilium-rvbs4                                    1/1     Running   14 (6d23h ago)   7d
etcd-manager-cilium-i-01f09427e9d4fcd64         1/1     Running   0                5d23h
etcd-manager-cilium-i-026c2be03509de051         1/1     Running   0                27m
etcd-manager-cilium-i-0b84a6dd15b799c58         1/1     Running   0                6d2h

The cluster state seems healthy: all etcd-manager pods, including the new one (etcd-manager-cilium-i-026c2be03509de051), appear to be running.

However, the new etcd-manager pod (etcd-manager-cilium-i-026c2be03509de051) reports in its logs that etcd has joined the rest of the cluster, and that is not true. The etcd server is not listening on any of its ports (4003, 2382, and 8083 are all down); only etcd-manager itself is listening, on port 3991. The volume that is attached to the machine and shared with the pod is empty as well.

The rest of the cluster, as expected, reports that it cannot connect to the new etcd member because it is not up and running. Listening sockets on the new node (note that the cilium etcd ports 4003, 2382, and 8083 are absent):

LISTEN 0      32768    10.141.18.9:3997       0.0.0.0:*    users:(("etcd-manager",pid=5691,fd=8))
LISTEN 0      32768    10.141.18.9:3996       0.0.0.0:*    users:(("etcd-manager",pid=5747,fd=8))
LISTEN 0      32768    10.141.18.9:3991       0.0.0.0:*    users:(("etcd-manager",pid=5644,fd=8))
LISTEN 0      32768              *:2381             *:*    users:(("etcd",pid=5789,fd=7))
LISTEN 0      32768              *:2380             *:*    users:(("etcd",pid=5811,fd=7))
LISTEN 0      32768              *:4001             *:*    users:(("etcd",pid=5811,fd=8))
LISTEN 0      32768              *:4002             *:*    users:(("etcd",pid=5789,fd=8))
LISTEN 0      32768              *:8081             *:*    users:(("etcd",pid=5811,fd=19))
LISTEN 0      32768              *:8082             *:*    users:(("etcd",pid=5789,fd=22))
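
For anyone reproducing this, a quick way to confirm whether the cilium etcd member on the node is actually serving is sketched below; port 4003 and the certificate paths are assumptions based on a default kops layout and should be adjusted to the actual environment.

# Is anything listening on the cilium etcd ports? (assumed to be 4003/2382/8083)
ss -tlnp | grep -E '4003|2382|8083' || echo "cilium etcd ports are not listening"

# If the client port is up, ask etcd itself. The certificate paths below are
# placeholders for the cilium etcd client credentials on a kops control-plane node.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:4003 \
  --cacert=<etcd-clients-ca.crt> \
  --cert=<etcd-client.crt> \
  --key=<etcd-client.key> \
  endpoint health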

6. What did you expect to happen?
I expected to be able to fix the issue by forcing this node to join the rest of the cluster, by setting ETCD_INITIAL_CLUSTER_STATE=existing as an environment variable for this etcd cluster in our kops cluster configuration. After re-creating the node I expected a healthy etcd cluster, but it does not work: etcd-manager claims everything is OK when it is not.
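
For reference, a rough sketch of how that override can be expressed in the cluster spec; spec.etcdClusters[].manager.env is taken from the kops etcd-manager documentation, so confirm that your kops version supports it before relying on it.

# Edit the cluster spec and add the env var under the cilium etcd cluster:
kops edit cluster --name=mycluster --state s3://my-cluster-sample
#   etcdClusters:
#   - name: cilium
#     manager:
#       env:
#       - name: ETCD_INITIAL_CLUSTER_STATE
#         value: existing

# Apply the change and roll the affected control-plane instance group.
kops update cluster --name=mycluster --state s3://my-cluster-sample --yes
kops rolling-update cluster --name=mycluster --state s3://my-cluster-sample --instance-group=master-1b --yes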

k8s-ci-robot added the kind/bug label Oct 2, 2024

nuved commented Oct 2, 2024

I would appreciate it if anyone could tell me how to fix this issue. Which approach is the right one?

- Re-joining the bad node and forcing it to join the rest of the cluster? That does not seem to work.
- Adding a new node as a new control-plane member and deleting the old one? Is that OK when using kops?
- I am planning to use etcd-manager-ctl, but I am not sure whether it is safe, the documentation is not clear, and I do not know what will actually happen after running the command (a rough sketch of the backup/restore flow is below).
- Do I just need to delete the cilium etcd pods one by one so that etcd-manager builds a new cluster for us, or do I need to bring all nodes down for a while first and make sure the volumes on all nodes are emptied?
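
A minimal sketch of the etcd-manager-ctl flow, assuming the default kops backup layout (s3://<state-store>/<cluster-name>/backups/etcd/<etcd-cluster-name>); the exact path and backup name need to be checked against the state store, and the restore command is only queued for etcd-manager to pick up rather than executed immediately.

# List the backups available for the cilium etcd cluster.
# The backup-store path is an assumption based on the default kops layout.
etcd-manager-ctl --backup-store=s3://my-cluster-sample/mycluster/backups/etcd/cilium list-backups

# Queue a restore of a chosen backup; etcd-manager acts on it asynchronously.
# <backup-name> is a placeholder for one of the names returned by list-backups.
etcd-manager-ctl --backup-store=s3://my-cluster-sample/mycluster/backups/etcd/cilium restore-backup <backup-name>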
