CKA_Anisa

CKA course attended at Anisa.

To initiate a cluster, check out this repo.

Components

image

  • kubelet:
  1. interacts with the container runtime to deploy containers on the node.
  2. reports status of the node and the containers on that node to the API server.
  3. registers the node with the cluster
  4. runs health checks (probes) on containers
systemctl restart kubelet

config file: /var/lib/kubelet/config.yaml

  • Controller manager:
  1. monitors the nodes and containers
  • API server:
  1. every component talks to the API server for anything it wants to do or report
  • ETCD:
  1. Key-value DB that contains cluster/applications info.

image

/etc/kubernetes/manifests/etcd.yaml

  • kube-scheduler:
  1. schedules containers/pods on the nodes.
  • kube-proxy:
  1. manages networking between pods/hosts
  2. creates service endpoints

image

Container runtime engine

nerdctl and crictl are utilities to manage containerd.

runc creates namespaces and cgroups

/etc/kubernetes

*.conf files contain the access info on different roles like admin, kubelet, etc.

$HOME/.kube/config

We can add multiple clusters' info to this file and switch between users/clusters using kubectx or kubens.
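A minimal sketch of such a kubeconfig with two contexts (cluster names, servers, and cert paths are made-up examples):

apiVersion: v1
kind: Config
clusters:
  - name: kubernetes                 # example cluster
    cluster:
      server: https://172.16.0.10:6443
      certificate-authority: /etc/kubernetes/pki/ca.crt
  - name: staging                    # hypothetical second cluster
    cluster:
      server: https://10.0.0.10:6443
      certificate-authority-data: <base64-ca>
users:
  - name: kubernetes-admin
    user:
      client-certificate: /path/to/admin.crt
      client-key: /path/to/admin.key
contexts:
  - name: kubernetes-admin@kubernetes
    context:
      cluster: kubernetes
      user: kubernetes-admin
      namespace: default
current-context: kubernetes-admin@kubernetes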

CNI

Provides network interfaces for containers. Calico, Flannel, etc. provide the overlay network that assigns IPs to those containers.

kubectl

kubectl get node
kubectl get pod --namespace kube-system -o wide
kubectl get node --kubeconfig <path-to-kubeconfig>
kubeadm token list
kubeadm token create --print-join-command --ttl=12h
kubectl label node worker1 kubernetes.io/role=worker1
kubectl get all -A
kubectl get events
kubectl run nginx-name --image nginx:latest   # pod names must be DNS-compliant (no underscores)
kubectl get pod -o yaml
kubectl explain deployments.kind

nerdctl

install:

NERDCTL_VERSION=1.0.0 # see https://github.com/containerd/nerdctl/releases for the latest release
archType="amd64"
if test "$(uname -m)" = "aarch64"; then     archType="arm64"; fi
wget "https://github.com/containerd/nerdctl/releases/download/v${NERDCTL_VERSION}/nerdctl-full-${NERDCTL_VERSION}-linux-${archType}.tar.gz" -O /tmp/nerdctl.tar.gz
mkdir -p ~/.local/bin
tar -C ~/.local/bin/ -xzf /tmp/nerdctl.tar.gz --strip-components 1 bin/nerdctl
echo -e '\nexport PATH="${PATH}:${HOME}/.local/bin"' >> ~/.bashrc
export PATH="${PATH}:${HOME}/.local/bin"
nerdctl completion bash | tee /etc/bash_completion.d/nerdctl > /dev/null
nerdctl
nerdctl -n k8s.io ps
nerdctl -n k8s.io images
nerdctl -n k8s.io kill 84a0b9e88743

PODs

A pod can have multiple containers in it (e.g. helper containers):

Helper containers can be used to perform a wide range of tasks, such as:

  1. Preparing the environment for the main container(s) by downloading configuration files, secrets, and other data.
  2. Managing the lifecycle of the main container(s) by monitoring their health and restarting them if necessary.
  3. Collecting and sending logs, metrics, and other data to external systems.
  4. Running tools and utilities for debugging, profiling, or testing the main container(s).
kubectl  -n calico-system edit pod  calico-node-rrpxr

image
A pod by itself has no control over its state; e.g. if the node fails, the pod won't be relocated.

replicaset

image image image

kubectl get replicaset
kubectl get rs -o wide
kubectl replace -f rs-definition.yml
kubectl scale --replicas=3 -f rs-definition.yml
kubectl apply -f rs-definition.yml # no need of a kubectl replace later, do apply again

selector

If we already have a pod with the label type: front-end, that pod will become part of this ReplicaSet (see the sketch below).
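A minimal sketch of such an rs-definition.yml (the image, label values, and replica count are examples):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      type: front-end        # any existing pod with this label is counted
  template:
    metadata:
      labels:
        type: front-end      # pods created by the RS carry the same label
    spec:
      containers:
        - name: nginx-container
          image: nginx:latest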

deployment

A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features like rolling update/roll back options.

image

rolling/rollback updates (rollouts)

kubectl rollout status deployment/dp-name

kubectl rollout history deployment/dp-name

kubectl set image deployment/dp-name <container-name>=<new-image>   # better not to use it except for testing

The new version is deployed in a new ReplicaSet: image

kubectl rollout undo deployment/dp-name   # rolls back to the previous version; we can also use --to-revision
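The rollout behaviour can be tuned in the Deployment spec; a minimal sketch (the numbers are just examples):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dp-name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate      # the default; Recreate kills all pods first
    rollingUpdate:
      maxSurge: 1            # at most 1 extra pod during the update
      maxUnavailable: 1      # at most 1 pod may be unavailable
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx-container
          image: nginx:1.25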

namespace

By default, pods reach each other by short (service) name only within the same namespace; reaching another namespace requires the full DNS name. We can also define resource limitations and access control per namespace.

kubectl create ns dev

The kube-public namespace is readable by everyone, including unauthenticated users.

  • change default ns:
kubectl config set-context kubernetes-admin@kubernetes --namespace dev

ResourceQuota

Sets requests/limits on resources and object counts (pods, deployments, etc.) per namespace.

kubectl get resourcequotas
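A minimal sketch of a ResourceQuota for the dev namespace (the limits are arbitrary examples):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "10"               # max number of pods in the namespace
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi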

Services

Expose pods to the outside world or to other pods within the cluster, and load-balance the traffic across the pods.

image

NodePort: exposes the service on a port of every host, so the endpoint is each host's IP. image

image

kubectl get ep  nginx-service -o wide
kubectl get svc nginx-service -o wide
iptables-save | grep 10.104.199.182 # svc ip
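A minimal sketch of such a NodePort service (names and ports are examples; nodePort must fall in the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: myapp               # pods carrying this label become the endpoints
  ports:
    - port: 80               # service (cluster) port
      targetPort: 80         # container port
      nodePort: 30080        # exposed on every node's IP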

ClusterIP: exposes pods inside the cluster only. To access a service from another namespace:

curl <svc-name>.<ns-name>.svc.cluster.local

LoadBalancer: provisions an external load balancer (through the cloud provider) on top of the NodePort/ClusterIP behaviour.

scheduling

Manual scheduling through the pod definition (nodeName field), as sketched below: image
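A minimal sketch of manually placing a pod on a node via spec.nodeName (the node name is an example):

apiVersion: v1
kind: Pod
metadata:
  name: manual-nginx
spec:
  nodeName: worker01          # bypasses the scheduler entirely
  containers:
    - name: nginx-container
      image: nginx:latest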

Labels and selectors can also be used to influence placement.

annotations

taints and tolerations

image

Only pods that tolerate the blue taint can be scheduled on that node; no other pod will be placed there.

types of taints: image

kubectl taint nodes <node-name> app=blue:NoSchedule

toleration configuration: image
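A minimal sketch of a pod tolerating the app=blue:NoSchedule taint from above:

apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  tolerations:
    - key: "app"
      operator: "Equal"
      value: "blue"
      effect: "NoSchedule"
  containers:
    - name: nginx-container
      image: nginx:latest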

remove taint:

kubectl taint node worker01 app=nginx:NoSchedule-

Node selector

image

kubectl label nodes worker01 size=Large
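A minimal sketch of a pod that uses the label above via nodeSelector:

apiVersion: v1
kind: Pod
metadata:
  name: large-pod
spec:
  nodeSelector:
    size: Large               # must match the node label exactly
  containers:
    - name: nginx-container
      image: nginx:latest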

Node Affinity

Affinity meaning: image

Defines more detailed conditions (match expressions, required vs. preferred rules) than nodeSelector.

image
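A minimal sketch of requiredDuringSchedulingIgnoredDuringExecution node affinity (label key and values are examples):

apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: size
                operator: In        # also NotIn, Exists, ...
                values:
                  - Large
                  - Medium
  containers:
    - name: nginx-container
      image: nginx:latest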

Resource limit exceeding

  • CPU: K8s throttles the CPU for the pod (requests get queued).
  • Memory: K8s terminates the pod (OOMKilled: Out Of Memory).
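Requests and limits are set per container; a minimal sketch (values are examples):

apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
spec:
  containers:
    - name: nginx-container
      image: nginx:latest
      resources:
        requests:             # what the scheduler reserves
          cpu: 250m
          memory: 128Mi
        limits:               # hard ceiling; CPU is throttled, memory is OOM-killed
          cpu: 500m
          memory: 256Mi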

Daemonset

Just like a Swarm service in global mode: one pod on every node. kube-proxy is deployed this way.

kubectl -n kube-system get ds

StaticPod

Initial pods like etcd come up before the API server, so some pods are started by the kubelet directly from manifest files rather than handled by the API server. image

The kubelet always watches the pod manifest directory (staticPodPath, by default /etc/kubernetes/manifests) to know which static pods to run.

image

Their names are suffixed with the name of the node they are on.

Multiple schedulers

You can define your own scheduler, e.g. one that refuses to schedule a new pod once resource allocation exceeds 80%. image

image image
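A pod picks a non-default scheduler via spec.schedulerName; a minimal sketch (the scheduler name is a made-up example):

apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled-pod
spec:
  schedulerName: my-custom-scheduler   # must match the deployed scheduler's name
  containers:
    - name: nginx-container
      image: nginx:latest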

Monitoring

image

metrics server

There is a cAdvisor inside the kubelet that the metrics server connects to for collecting metrics.

metrics server data is in memory and does not get stored.

install: https://github.com/kubernetes-sigs/metrics-server

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

This fails because it doesn't trust our cluster's CA, so:

kubectl edit deployments.apps -n kube-system metrics-server
## add the CA or disable certificate verification (e.g. the --kubelet-insecure-tls flag)

usage :

kubectl top node
kubectl top pod

Logs and Events

kubectl logs -f pod-name  # logs generated by the pod
kubectl get events  # events recorded at the cluster level

ENV

image

image

configMap

image image image

ConfigMaps can also be mounted into pods as volumes.

image
This config is used in multi-container-pod.yml.
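A minimal sketch of a ConfigMap and a pod consuming it both as env vars and as a volume (names and keys are examples):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MODE: prod
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo-pod
spec:
  containers:
    - name: alpine-container
      image: alpine:latest
      command: ["sleep", "3600"]
      envFrom:
        - configMapRef:
            name: app-config       # every key becomes an env var
      volumeMounts:
        - name: config-vol
          mountPath: /etc/app      # every key becomes a file here
  volumes:
    - name: config-vol
      configMap:
        name: app-config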

secrets

values should be in base64:

echo -n "value" | base64
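A minimal sketch of a Secret and a pod referencing one key (the value is the base64 of the made-up string "value"):

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: dmFsdWU=            # echo -n "value" | base64
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo-pod
spec:
  containers:
    - name: alpine-container
      image: alpine:latest
      command: ["sleep", "3600"]
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_PASSWORD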

multi-container pod

They share the same network namespace (the containers can reach each other over localhost).

image

image

kubectl exec -it multi-alp-pod -- sh
Defaulted container "alpine-container-1" out of: alpine-container-1, alpine-container-2
/ # 

access one specific container:

root@manager1:~/CKA_Anisa/pods# kubectl exec -it multi-alp-pod -c alpine-container-2 -- sh

image

Three design patterns and use cases for this (sidecar, ambassador, adapter): https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/

init containers

Init containers run to completion before the main container comes up. image

kubectl logs multi-alp-pod --follow --container init-alpine
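A minimal sketch of a pod with an init container (the commands are just illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo-pod
spec:
  initContainers:
    - name: init-alpine
      image: alpine:latest
      command: ["sh", "-c", "echo preparing... && sleep 5"]   # must exit 0 before the main container starts
  containers:
    - name: alpine-container-1
      image: alpine:latest
      command: ["sleep", "3600"]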

Self-healing applications

image image image image image
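Self-healing is largely about restartPolicy plus liveness/readiness probes; a minimal sketch of a liveness probe, assuming an nginx container serving on port 80:

apiVersion: v1
kind: Pod
metadata:
  name: probed-nginx
spec:
  restartPolicy: Always            # restart the container if it dies
  containers:
    - name: nginx-container
      image: nginx:latest
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5     # wait before the first probe
        periodSeconds: 10          # probe interval
        failureThreshold: 3        # restart after 3 consecutive failures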

Cluster Maintenance

System Upgrade

MUST DO:

apt-mark hold kubelet kubeadm kubectl

drain nodes:

kubectl drain <node>

To keep existing pods on the node and only mark it as unschedulable:

kubectl cordon <node>

to reverse both cordon and drain:

kubectl uncordon <node>

Cluster Upgrade

K8s component versions should be kept in step with the kube-apiserver version: image

update a master:

kubeadm upgrade plan
apt-mark unhold kubelet kubeadm kubectl
apt update
apt install kubeadm=1.25.11-00 kubectl=1.25.11-00
kubeadm upgrade apply v1.25.11
apt install kubelet=1.25.11-00
apt-mark hold kubelet kubeadm kubectl

update a worker:

kubectl drain <node> # run on master
apt-mark unhold kubeadm kubectl kubelet
apt update
apt install kubeadm=1.25.11-00 kubectl=1.25.11-00
kubeadm upgrade node
apt install kubelet=1.25.11-00
apt-mark hold kubelet kubeadm kubectl
kubectl uncordon <node> # run on master

Backup & Restore

image

resources

Push resource definitions to a git repo: image
Export everything:

kubectl get all --all-namespaces -o yaml > all-deployed-services.yml
kubectl get deployments --all-namespaces -o yaml > all-deployments.yml

we could also use velero.

ETCD

Kubernetes docs

kube-apiserver should be down during a restore (it is a static pod; move its YAML file out of the manifests directory).

using etcdutl: image image
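A sketch of a snapshot backup with etcdctl and a restore with etcdutl (endpoints, cert paths, and directories are the usual kubeadm defaults; adjust as needed):

# backup
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# restore into a new data dir, then point etcd.yaml's --data-dir/hostPath at it
etcdutl snapshot restore /backup/etcd-snapshot.db --data-dir /var/lib/etcd-restore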

Security

Accounts

image

image image

TLS

image
Admin privilege CN and Organization: image
kube-apiserver: image

Check certificate expiration
kubeadm certs check-expiration

Access

csr generation:

openssl genrsa -out anisa.key 2048
openssl req -new -key anisa.key -out anisa.csr   # the CN you enter becomes the K8s username, O the group

A user delivers a CSR to the admin, and the admin signs it with the cluster's root CA (here through the Kubernetes CSR API).

cd ~/CKA_Anisa/Certs
cat anisa.csr | base64 -w 0 ## pass this as request field in csr.yml
kubectl apply -f csr.yml
kubectl get csr
kubectl describe csr anisa-csr
kubectl certificate -h
kubectl certificate approve anisa-csr
kubectl get csr anisa-csr -o yaml  # this gets you the certificate in base64
echo "base64 coded" | base64 -d
## OR
kubectl get csr anisa-csr -ojsonpath='{.status.certificate}' | base64 -d
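A minimal sketch of what csr.yml might look like (the name and expiration are examples; the request field is the base64 output from above):

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: anisa-csr
spec:
  request: <base64-encoded anisa.csr>      # cat anisa.csr | base64 -w 0
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400                 # optional, 1 day
  usages:
    - client auth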
create a kubeconfig file

image

cd ~/CKA_Anisa/Certs/
cat ./kubeConfig.yml
kubectl get pod --kubeconfig ./kubeConfig.yml
curl -k  https://172.16.0.10:6443/api --key anisa.key --cert anisa.crt
curl -k  https://172.16.0.10:6443/apis --key anisa.key --cert anisa.crt
curl -k  https://172.16.0.10:6443/api/v1/pods --key anisa.key --cert anisa.crt

switch between contexts using kubectx and kubens: https://github.com/ahmetb/kubectx

sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kctx
sudo ln -s /opt/kubectx/kubens /usr/local/bin/kns

API Groups

image

core API: image

image

kubectl proxy

Using this, you don't need to pass --key/--cert to curl; requests sent to the proxy are made with your kubectl credentials. image
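A quick sketch of using it (the port and paths are defaults/examples):

kubectl proxy --port=8001 &
curl http://localhost:8001/api
curl http://localhost:8001/api/v1/namespaces/default/pods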

user authorization

image

node authorization:

Group (O): system:nodes, CN: system:node:<node-name> image

ABAC: image

RBAC: image

Roles:

Get a list of resources:

kubectl api-resources

image

RoleBinding: image

kubectl apply -f developer
kubectl apply -f developer-roleBinding.yml
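A minimal sketch of what the developer Role and developer-roleBinding.yml might contain (names, namespace, and the user are assumptions):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: dev
rules:
  - apiGroups: [""]                # "" = the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: dev
subjects:
  - kind: User
    name: anisa                    # the CN from the signed certificate
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io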

check authorizations: image
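kubectl auth can-i is the usual way to check this (the user and namespace are examples):

kubectl auth can-i create pods --namespace dev
kubectl auth can-i delete deployments --as anisa --namespace dev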

Cluster Roles

image image

Image repository

The default registry is docker.io (Docker Hub).

nerdctl login equals docker login.

The better solution is to use secrets: image

if the registry's certificate is self-signed: image
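A sketch of creating a registry-credentials secret and referencing it from a pod (the registry URL, credentials, and names are placeholders):

kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>

apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: app-container
      image: registry.example.com/app:latest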

Security Contexts

Capabilities (add/drop) and runAsUser: image
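A minimal sketch of pod- and container-level securityContext (the UID and capabilities are examples):

apiVersion: v1
kind: Pod
metadata:
  name: secured-pod
spec:
  securityContext:
    runAsUser: 1000              # pod-level: applies to all containers
  containers:
    - name: alpine-container
      image: alpine:latest
      command: ["sleep", "3600"]
      securityContext:           # container-level overrides pod-level
        runAsUser: 2000
        capabilities:            # capabilities are container-level only
          add: ["NET_ADMIN"]
          drop: ["ALL"]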

Network Policy

  • ingress: input traffic to pods
  • egress: output traffic to pods

Set a policy so that a pod can only receive ingress traffic from, or send egress traffic to, specific pods/namespaces/IP blocks (see the sketch below).

image image image
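A minimal sketch allowing ingress to db pods only from api pods on port 3306 (labels and port are examples):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api
spec:
  podSelector:
    matchLabels:
      role: db                   # the policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: api          # only these pods may connect
      ports:
        - protocol: TCP
          port: 3306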

Container Storage Interface (CSI)

Connects to Ceph, GlusterFS, etc. image

Volumes and mounts

image

Persistent Volumes (PVs) and Persistent Volume claims

The Kubernetes object for a volume is a PV; we then use a PVC to request one. image image
In the access modes, "Many" means the volume can be mounted by multiple nodes, "Once" means by only one node. image

persistentVolumeReclaimPolicy: image

binding: image
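A minimal sketch of a hostPath PV and a PVC that binds to it (sizes, paths, and names are examples):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce              # one node at a time
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/pv-demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce              # must be compatible with the PV
  resources:
    requests:
      storage: 500Mi             # any PV of at least this size can bind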

Networking

Cluster

cni config files: /etc/cni/net.d/

Service

When a service is created with an advertised IP and port, kube-proxy creates firewall rules that forward requests for that endpoint to the corresponding pods of the service. The host then knows where to route the traffic based on the routes the CNI has advertised.

image

image

Interestingly, iptables can attach a match probability to a rule; this is how K8s load-balances between the pods of a service:

image

IPVS

kube-proxy's mode can be changed from iptables to IPVS.

It is recommended not to use iptables mode if you have over 1000 services.

image https://www.tigera.io/blog/comparing-kube-proxy-modes-iptables-or-ipvs/#:~:text=The%20difference%20in%20CPU%20usage,~8%25%20of%20a%20core
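With kubeadm, the mode lives in the kube-proxy ConfigMap; a sketch of switching it (assumes the IPVS kernel modules are loaded on the nodes):

kubectl -n kube-system edit configmap kube-proxy
# in config.conf set:
#   mode: "ipvs"
kubectl -n kube-system rollout restart daemonset kube-proxy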

coredns

https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/

kubectl get svc -n kube-system 
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   93d
metrics-server   ClusterIP   10.106.101.124   <none>        443/TCP                  70d

kubectl exec -it myapp-deployment-6fdd5f58cd-9l6lx -- cat /etc/resolv.conf
search dev.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5

Service Account

Like user accounts, but for non-human clients inside or outside the cluster: for example, a pod that needs to read secrets or list pods in another namespace, a CI/CD pipeline, etc.
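A minimal sketch of a ServiceAccount and a pod using it (names are examples); permissions are then granted with a RoleBinding whose subject kind is ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo-pod
  namespace: dev
spec:
  serviceAccountName: app-sa     # its token is mounted into the pod automatically
  containers:
    - name: alpine-container
      image: alpine:latest
      command: ["sleep", "3600"]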

statefulset

job

cronjob

calico

Troubleshooting:

curl -L https://github.com/projectcalico/calico/releases/latest/download/calicoctl-linux-amd64 -o /usr/bin/calicoctl
chmod +x /usr/bin/calicoctl
calicoctl node status

create a pod using yaml file

How to find apiVersion, kind, etc. (differs for each object/manifest type):

core group is the default and does not need to be mentioned in yaml.

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#pod-v1-core image

These are the four required top-level keys for a pod:

apiVersion: v1
kind: Pod   # always starts with capital letter
metadata:   # name of the pod, namespace, labels etc
  name: myapp-pod
  labels:
    app: myapp
    type: front-end

spec:
  containers:
    - name: nginx-container
      image: nginx:latest # by default pulled from Docker Hub