
node-problem-detector not able to detect kernel log events for a Kind cluster #859

Open
pravarag opened this issue Feb 7, 2024 · 6 comments
Labels: lifecycle/stale

Comments


pravarag commented Feb 7, 2024

I've been trying to run node-problem-detector on a local kind cluster with 3 nodes (1 master, 2 workers). After installing it as a DaemonSet, I can see three pods running, one on each node including the master. However, when I inject a kernel message as a test, I don't see any events generated, either in the npd pod logs or in the node's description.
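For context, one common way to exercise the kernel monitor is to write a message matching one of the default kernel-monitor.json rules into /dev/kmsg and then look for node events. The sketch below is illustrative only; the node container name kind-worker is an assumption, and /dev/kmsg in a kind node is the shared host kernel log.

# Writes to the shared host kernel log, so every kind node sees the same message.
docker exec kind-worker sh -c \
  'echo "kernel: BUG: unable to handle kernel NULL pointer dereference at TESTING" >> /dev/kmsg'

# Then check whether node-problem-detector turned it into an event or condition.
kubectl get events --field-selector involvedObject.kind=Node
kubectl describe node kind-worker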

@wangzhen127 (Member)

You may need to tune your DaemonSet YAML.
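For illustration, the parts of the DaemonSet that typically need tuning are the container command and the mount that exposes the kernel log. The fragment below is a sketch, not the upstream manifest; the image tag and paths are illustrative, and it assumes the default kernel-monitor.json, which reads from /dev/kmsg.

# spec.template.spec fragment; names and image tag are illustrative.
containers:
  - name: node-problem-detector
    image: registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.14
    command:
      - /node-problem-detector
      - --logtostderr
      - --config.system-log-monitor=/config/kernel-monitor.json
    volumeMounts:
      - name: kmsg
        mountPath: /dev/kmsg    # default kernel monitor log path
        readOnly: true
volumes:
  - name: kmsg
    hostPath:
      path: /dev/kmsg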

@BenTheElder (Member)

Note: kind clusters are sharing the host kernel with sketchy isolation.

What's the use case for NPD-on-kind?

@cmontemuino

> Note: kind clusters are sharing the host kernel with sketchy isolation.
>
> What's the use case for NPD-on-kind?

It's local testing and CI in my case.

@BenTheElder (Member)

For testing NPD, a fake problem source or a remote VM should be used; we shouldn't introduce real issues into the CI host's kernel, and if we don't, there won't be anything for NPD to detect.

For local development, you could use a VM, local-up-cluster.sh, or kubeadm init.

kind generally attempts to create a container that looks like a node, but it runs on a shared kernel, inside a container, which kubelet doesn't clearly support.

In general, kind works best for testing API interactions and node-to-node interactions, but not kernel, host, or resource-limit behavior, for now unfortunately.

@cmontemuino

Just in case it helps other people, the following configuration works pretty well with my KinD installation:

--config.system-log-monitor=/config/kernel-monitor.json,/config/systemd-monitor.json \
--config.custom-plugin-monitor=/config/iptables-mode-monitor.json,/config/network-problem-monitor.json,/config/kernel-monitor-counter.json,/config/systemd-monitor-counter.json

That helped me to quickly understand what's going on behind the scenes, and then deploy node-problem-detector in our clusters.
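As a quick sanity check that the monitors are active, the node conditions set by node-problem-detector can be listed; the node name below is an assumption, and with the default monitors you would expect condition types such as KernelDeadlock and ReadonlyFilesystem alongside the standard ones.

# Print the condition types and statuses reported on one node.
kubectl get node kind-worker \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'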

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 17, 2024.