- Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
- The CNCF/Linux Foundation offers this performance-based exam, which targets developer-focused Kubernetes skills such as deploying and configuring applications, performing rollouts, and creating persistent volumes.
- Since the exam is performance-based rather than multiple choice, knowing the concepts alone is not enough; you need plenty of hands-on practice beforehand.
- This article helps you understand and practice K8s imperative commands to get ready for the exam.
K8s imperative commands for Pod
1. List pods from all namespaces.
kubectl get pod -A
2. Output the YAML of the pod without cluster-specific information.
kubectl get po nginx -o yaml --export
# note: --export was deprecated and removed in kubectl v1.18; on current versions use:
kubectl get po nginx -o yaml
3. List all the pods showing name and namespace with a JSONPath expression.
kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'metadata.namespace']}"
4. Delete the pod you just created without any delay (force delete).
kubectl delete po nginx --grace-period=0 --force
5. Create the nginx pod with version 1.17.4 and expose it on port 80.
kubectl run nginx --image=nginx:1.17.4 --restart=Never --port=80
6. Check the image version without using the describe command.
kubectl get po nginx -o jsonpath='{.spec.containers[].image}{"\n"}'
7. Create a busybox pod that runs the command ls, and check its logs.
kubectl run busybox --image=busybox --restart=Never -- ls
kubectl logs busybox
8. If the pod crashed, check the previous logs of the pod.
kubectl logs busybox -p
9. Create a busybox pod with command sleep 3600.
kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c "sleep 3600"
10. Check the connection to the nginx pod from the busybox pod.
kubectl get po nginx -o wide   # get the pod IP
kubectl exec -it busybox -- wget -O- <IP Address>
11. Create a busybox pod that echoes the message 'How are you' and is deleted immediately.
kubectl run busybox --image=busybox --restart=Never -it --rm -- echo "How are you"
12. List the nginx pod with custom columns POD_NAME and POD_STATUS.
kubectl get po -o=custom-columns="POD_NAME:.metadata.name,POD_STATUS:.status.containerStatuses[].state"
13. Run the command ls in the container busybox3 of the busybox pod.
kubectl exec busybox -c busybox3 -- ls
14. Show metrics of the above pod's containers and put them into file.log.
kubectl top pod busybox --containers
kubectl top pod busybox --containers > file.log
cat file.log
15. Create a pod with image nginx that will be deployed on a node with the label nodeName=nginxnode.
kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
Add the following under spec in pod.yaml:
spec:
  nodeSelector:
    nodeName: nginxnode
  containers:
  ...
16. Remove all the pods that we created so far.
kubectl delete po --all
17. List all events sorted by timestamp, put them into file.log, and verify.
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get events --sort-by=.metadata.creationTimestamp > file.log
cat file.log
18. How many pods exist in the default namespace?
kubectl get pod --no-headers | wc -l
19. Which namespace has the blue pod in it?
kubectl get pod -A | grep blue
20. Create a pod called httpd using the image httpd:alpine in the default namespace. Next, create a service of type ClusterIP with the same name (httpd). The target port for the service should be 80. Try to do this in as few steps as possible.
kubectl run httpd --image=httpd:alpine --port=80 --expose
21. Create a pod with the command touch /tmp/ready && sleep 1d.
export do="--dry-run=client -o yaml"
kubectl run pod6 --image=busybox:1.31.0 $do --command -- sh -c "touch /tmp/ready && sleep 1d"
22. Search for the pod with label my-happy-shop.
kubectl get pod -o yaml | grep my-happy-shop -A10
K8s imperative commands for Label / Selector / Annotations
23. Add label app=redis to redis deployment.
kubectl label deploy redis app=redis
24. Add label color=blue to node node01.
kubectl label node node01 color=blue
25. Get the count of all resources with label env=prod.
kubectl get all --selector env=prod | wc -l
# multiple selectors can be combined as well:
kubectl get all --selector env=prod,bu=finance,tier=frontend
26. Get pod details along with their labels.
kubectl get pod --show-labels
27. Get pods that have the label type=runner.
kubectl get pod -l type=runner
28. Add a new label protected=true to pods having label type=worker or type=runner.
# for label type=runner
kubectl label pod -l type=runner protected=true
# for label type=worker
kubectl label pod -l type=worker protected=true
29. Annotate pods having label protected=true with protected="do not delete this pod".
kubectl annotate pod -l protected=true protected="do not delete this pod"
30. Deploy a redis pod using | Pod Name: redis | Image: redis:alpine | Labels: tier=db
kubectl run redis --image=redis:alpine -l tier=db
31. Identify the pod which is part of the prod environment, the finance BU, and the frontend tier.
kubectl get pods --selector env=prod,bu=finance,tier=frontend
32. Get the pods with labels env=dev and env=prod without using --selector.
kubectl get pods -l 'env in (dev,prod)'
33. Change the label of the pod nginx-dev3 from env=Qe to env=uat.
kubectl label pod/nginx-dev3 env=uat --overwrite
34. Annotate the nginx-dev pods with name=webapp.
kubectl annotate pod nginx-dev{1..3} name=webapp
K8s imperative commands for Secrets & Environment Variables
35. Create TLS secret webhook-server-tls with the below details.
Certificate: /root/keys/webhook-server-tls.crt
Key: /root/keys/webhook-server-tls.key
kubectl create secret tls webhook-server-tls \
  --cert "/root/keys/webhook-server-tls.crt" \
  --key "/root/keys/webhook-server-tls.key"
36. Create a secret with the below details.
DB_Host=sql01
DB_User=root
DB_Password=password123
kubectl create secret generic db-secret --from-literal=DB_Host=sql01 --from-literal=DB_User=root --from-literal=DB_Password=password123
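To verify, you can decode one of the values back out of the secret (secret data is base64-encoded):
kubectl get secret db-secret -o jsonpath='{.data.DB_Password}' | base64 -d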
37. Set the below env variable for the webapp-color pod.
APP_COLOR=green
kubectl set env pod webapp-color APP_COLOR=green --dry-run=client -o yaml > pod37.yaml
kubectl apply -f pod37.yaml --force
Or create the pod directly with the below details.
Name: webapp-color
Image: kodekloud/webapp-color
Labels: name=webapp-color
Env: APP_COLOR=green
kubectl run webapp-color --image=kodekloud/webapp-color --labels="name=webapp-color" --env="APP_COLOR=green"
38. Set env variables for webapp-color from the below configmap.
configmap: webapp-config-map
kubectl set env pod webapp-color --from=configmap/webapp-config-map --dry-run=client -o yaml > pod38.yaml
kubectl apply -f pod38.yaml --force
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: webapp-color
  name: webapp-color
  namespace: default
spec:
  containers:
  - envFrom:
    - configMapRef:
        name: webapp-config-map
    image: kodekloud/webapp-color
    name: webapp-color
...
39. Set env variables for webapp-pod from the below secret.
secret: db-secret
kubectl set env pods webapp-pod --from=secret/db-secret --dry-run=client -o yaml > pod39.yaml
kubectl apply -f pod39.yaml --force
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: webapp-pod
  name: webapp-pod
spec:
  containers:
  - image: kodekloud/simple-webapp-mysql
    name: webapp-pod
    resources: {}
    envFrom:
    - secretRef:
        name: db-secret
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
40. Create pod webapp-color with the below label & env variable.
Labels: name=webapp-color
Env: APP_COLOR=green
kubectl run webapp-color --image=kodekloud/webapp-color --labels="name=webapp-color" --env="APP_COLOR=green"
K8s Imperative Commands For Replica & Deployment
41. Check the api-new-c32 deployment history and undo the last deployment.
kubectl -n neptune rollout history deploy api-new-c32
kubectl -n neptune rollout undo deploy api-new-c32
42. Scale the replica set new-replica-set to 3 replicas.
kubectl scale --replicas=3 rs/new-replica-set
43. Check the history of a specific revision of the deployment new-replica-set.
kubectl rollout history deployment new-replica-set --revision=7
44. Set image to myapp:2.0 for deployment myapp-deployment.
kubectl set image deployment/myapp-deployment myapp=myapp:2.0
The rolling-update parameters maxSurge and maxUnavailable are not flags of kubectl set image; they are configured in the deployment spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      name: webapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 2
To change the deployment strategy to Recreate, set:
  strategy:
    type: Recreate
45. Restart the web-moon deployment using a command.
kubectl rollout restart deploy web-moon
K8s Imperative Commands For Service
46. Expose the redis pod with the below service details.
Service Name: redis-service
Port: 6379
kubectl expose pod redis --port=6379 --type=ClusterIP --name=redis-service
47. Expose the deployment simple-webapp-deployment with the below service details.
Service Name: webapp-service
Service Type: NodePort
Port: 8080
NodePort: 30080
kubectl expose deploy simple-webapp-deployment --port=8080 --target-port=8000 --type=NodePort --name=webapp-service --dry-run=client -o yaml > svc47.yaml
Edit svc47.yaml and add the nodePort as below:
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
    nodePort: 30080
48. Create a ClusterIP Service named project-plt-6cc-svc. This Service should expose a single pod named project-plt-6cc-api of image nginx:1.17.3-alpine; create that pod as well. The pod should be identified by label project: plt-6cc-api. The Service should use TCP port redirection of 3333:80.
# create the pod
kubectl run project-plt-6cc-api --image=nginx:1.17.3-alpine -l project=plt-6cc-api
# create the service
kubectl expose pod project-plt-6cc-api --name project-plt-6cc-svc --port 3333 --target-port 80
49. Check the above service's connectivity using a temporary pod.
kubectl run tmp --restart=Never --rm --image=nginx:alpine -i -- curl http://project-plt-6cc-svc:3333
What DNS name should the Blue application use to access the database db-service in the dev namespace?
db-service.dev.svc.cluster.local or db-service.dev
- Since the blue application and the db-service are in different namespaces, we need to use the service name along with the namespace to access the database.
- The FQDN (Fully Qualified Domain Name) for the db-service in this example would be db-service.dev.svc.cluster.local
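As a quick sanity check (assuming the blue pod's image ships nslookup, e.g. busybox), the FQDN can be resolved from inside the pod:
kubectl exec -it blue -- nslookup db-service.dev.svc.cluster.local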
K8s Imperative Commands For ConfigMap
50. Create an env file file.env with var1=val1, create a configmap envcfgmap from this env file, and verify the configmap.
echo var1=val1 > file.env
cat file.env
kubectl create cm envcfgmap --from-env-file=file.env
kubectl get cm envcfgmap -o yaml
51. Create a configmap from web-moon.html with index.html as the key.
kubectl create configmap configmap-web-moon-html --from-file=index.html=/opt/course/15/web-moon.html
apiVersion: v1
data:
  index.html: |   # notice the key index.html; this will be the filename when mounted
    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="UTF-8">
      <title>Web Moon Webpage</title>
    </head>
    <body>
    This is some great content.
    </body>
    </html>
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: configmap-web-moon-html
52. Create a new ConfigMap webapp-config-map. Use the below spec.
APP_COLOR=darkblue
kubectl create configmap webapp-config-map --from-literal=APP_COLOR=darkblue
Update the environment variable on the webapp-color pod to use the newly created ConfigMap:
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: webapp-color
  name: webapp-color
  namespace: default
spec:
  containers:
  - envFrom:                      # add
    - configMapRef:               # add
        name: webapp-config-map   # add
    image: kodekloud/webapp-color
    name: webapp-color
K8s Imperative Commands For Job & CronJob
53. Create a Job dice that should run image busybox:1.31.0 and execute sleep 2 && echo done. It should run a total of 3 times, execute 2 runs in parallel, and have backoffLimit 25.
kubectl create job dice --image=busybox:1.31.0 --dry-run=client -o yaml > job53.yaml
Edit job53.yaml and add the parameters to run the job in parallel:
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: dice
spec:
  completions: 3    # add
  parallelism: 2    # add
  backoffLimit: 25  # add
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command: ['/bin/sh', '-c', 'sleep 2 && echo done']
        image: busybox:1.31.0
        name: dice
        resources: {}
      restartPolicy: Never
54. Create a CronJob with the busybox image that prints the date and a "Hello from kubernetes cluster" message every minute.
kubectl create cronjob date-job --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c "date; echo Hello from kubernetes cluster"
55. Create a CronJob throw-dice-cron-job that should run image kodekloud/throw-dice at 21:30 hours every day.
kubectl create cronjob throw-dice-cron-job --image=kodekloud/throw-dice --schedule="30 21 * * *"
K8s Ingress & NetworkPolicy
56. Create a single ingress resource called ingress-vh-routing. The resource should route HTTP traffic to multiple hostnames as specified below:
- The service video-service should be accessible on http://watch.ecom-store.com:30093/video
- The service apparels-service should be accessible on http://apparels.ecom-store.com:30093/wear
- Here 30093 is the port used by the Ingress Controller
kubectl create ingress ingress-vh-routing \
  --rule="watch.ecom-store.com/video=video-service:8080" \
  --rule="apparels.ecom-store.com/wear=apparels-service:8080" \
  --annotation="nginx.ingress.kubernetes.io/rewrite-target=/"
57. Check whether the above ingress was created correctly.
curl http://watch.ecom-store.com:30093/video
curl http://apparels.ecom-store.com:30093/wear
58. Two Deployments named api and frontend are created. Both Deployments are exposed inside the cluster using Services. Create a NetworkPolicy named np1 which restricts outgoing TCP connections from Deployment frontend and only allows those going to Deployment api. Make sure the NetworkPolicy still allows outgoing traffic on UDP/TCP port 53 for DNS resolution. Test using: wget api:2222 from a Pod of Deployment frontend.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np1
  namespace: venus
spec:
  podSelector:
    matchLabels:
      id: frontend
  policyTypes:
  - Egress
  egress:
  - to:            # 1st egress rule
    - podSelector:
        matchLabels:
          id: api
  - ports:         # 2nd egress rule
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
- Notice that we specify two egress rules in the yaml above. If we specify multiple egress rules then these are connected using a logical OR. So in the example above we do:
allow outgoing traffic if (destination pod has label id:api) OR ((port is 53 UDP) OR (port is 53 TCP))
- Let's have a look at example code which wouldn't work in our case:
# this example does not work in our case
...
egress:
- to:              # 1st AND ONLY egress rule
  - podSelector:
      matchLabels:
        id: api
  ports:           # STILL THE SAME RULE, just an additional selector
  - port: 53
    protocol: UDP
  - port: 53
    protocol: TCP
- In the yaml above we only specify one egress rule with two selectors. It can be translated into:
allow outgoing traffic if (destination pod has label id:api) AND ((port is 53 UDP) OR (port is 53 TCP))
Let's check that the internal connection to api works as expected after the NetworkPolicy is created:
kubectl -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
100% **** 45 0:00:00 ETA
59. Allow incoming traffic to pods with label color=blue only if it comes from a pod with label color=red, on port 80.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: default
spec:
  podSelector:
    matchLabels:
      color: blue
  ingress:
  - from:
    - podSelector:
        matchLabels:
          color: red
    ports:
    - port: 80
60. Allow incoming traffic only if it comes from a pod with label color=red, in a namespace with label shape=square, on port 80.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: default
spec:
  podSelector:
    matchLabels:
      color: blue
  ingress:
  - from:
    - podSelector:
        matchLabels:
          color: red
      namespaceSelector:
        matchLabels:
          shape: square
    ports:
    - port: 80
61. Create a network policy to allow traffic from the Internal application only to the payroll-service and db-service. Use the spec given below. You might want to enable ingress traffic to the pod to test your rules in the UI.
- Policy Name: internal-policy
- Policy Type: Egress
- Egress Allow: payroll
- Payroll Port: 8080
- Egress Allow: mysql
- MySQL Port: 3306
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Egress
  - Ingress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    ports:
    - protocol: TCP
      port: 8080
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
K8s Roles & RoleBinding
62. Create the necessary roles and role bindings required for the dev-user to create, list and delete pods in the default namespace.
- Role: developer
- Role Resources: pods
- Role Actions: list
- Role Actions: create
- Role Actions: delete
- RoleBinding: dev-user-binding
- RoleBinding: Bound to dev-user
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: developer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "create", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev-user-binding
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
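The same Role and RoleBinding can also be created imperatively, and kubectl auth can-i verifies the result:
kubectl create role developer --verb=list,create,delete --resource=pods
kubectl create rolebinding dev-user-binding --role=developer --user=dev-user
kubectl auth can-i create pods --as dev-user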
63. Add a new rule in the existing role developer to grant the dev-user permissions to create deployments in the blue namespace. Remember to add the api group "apps".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: blue
rules:
- apiGroups: [""]
  resourceNames: ["dark-blue-app"]
  resources: ["pods"]
  verbs: ["get", "watch", "create", "delete"]
- apiGroups: ["apps"]          # added
  resources: ["deployments"]   # added
  verbs: ["get", "watch", "create", "delete"]   # added
K8s Cluster Role & Cluster Role Binding
Cluster Roles are cluster-wide and not part of any namespace.
64. A new user michelle joined the team. She will be focusing on the nodes in the cluster. Create the required ClusterRoles and ClusterRoleBindings so she gets access to the nodes.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-admin
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "watch", "list", "create", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: michelle-binding
subjects:
- kind: User
  name: michelle
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-admin
  apiGroup: rbac.authorization.k8s.io
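Equivalently, both objects can be created imperatively:
kubectl create clusterrole node-admin --verb=get,watch,list,create,delete --resource=nodes
kubectl create clusterrolebinding michelle-binding --clusterrole=node-admin --user=michelle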
K8s ServiceAccount
65. Update the deployment web-dashboard to use the newly created ServiceAccount dashboard-sa.
template:
  metadata:
    creationTimestamp: null
    labels:
      name: web-dashboard
  spec:
    serviceAccountName: dashboard-sa   # add
    containers:
    ...
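The same change can also be made imperatively:
kubectl set serviceaccount deploy web-dashboard dashboard-sa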
66. The team owns the ServiceAccount sa-abc. A coworker needs the secret that belongs to that ServiceAccount.
kubectl get sa
kubectl get secrets -o yaml | grep annotations -A 1
# a secret that belongs to a ServiceAccount carries the annotation
# kubernetes.io/service-account.name
Helm & Docker
67. How many images are available on this host?
docker images
68. Build a docker image using the Dockerfile and name it webapp-color. No tag to be specified.
docker build -t webapp-color .
69. Run an instance of the image webapp-color and publish port 8080 on the container to 8282 on the host.
- Container with image 'webapp-color'
- Container Port: 8080
- Host Port: 8282
docker run -p 8282:8080 webapp-color
70. Build a docker image using the Dockerfile and name it webapp-color, with the tag lite.
docker build -t webapp-color:lite .
71. Install a new release internal-issue-report-apache of the chart bitnami/apache. The deployment should have two replicas; set this via helm values during install.
helm install internal-issue-report-apache bitnami/apache \
  --set replicaCount=2 \
  --set image.debug=true
72. There seems to be a broken release stuck in pending-install state. Find it and delete it.
helm ls -a
helm uninstall ...
73. Change the Dockerfile to add the env var SUN_CIPHER_ID with the hard-coded value 5b9c1065-e39d-4a43-a04a-e59bcea3e03f.
# build container stage 1
FROM docker.io/library/golang:1.15.15-alpine3.14
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o bin/app .

# app container stage 2
FROM docker.io/library/alpine:3.12.4
COPY --from=0 /src/bin/app app
# ADD NEXT LINE
ENV SUN_CIPHER_ID=5b9c1065-e39d-4a43-a04a-e59bcea3e03f
74. Build the above Dockerfile with two tags, latest and v1-docker, name the image sun-cipher, and push it to the registry.
sudo docker build -t sun-cipher:latest -t sun-cipher:v1-docker .
sudo docker images
sudo docker push sun-cipher:v1-docker
sudo docker push sun-cipher:latest
75. Create a container from the previously created image.
sudo docker run -d --name sun-cipher sun-cipher:latest
76. Write the logs of the sun-cipher container to the file /opt/course/11/logs.
sudo docker logs sun-cipher > /opt/course/11/logs
77. Write a list of all running containers to the file /opt/course/11/containers.
sudo docker ps > /opt/course/11/containers
K8s Kubeconfig
78. How many clusters are defined in the default kubeconfig file?
kubectl config view
# count the entries under "clusters"
79. Inspect the environment and identify the authorization modes configured on the cluster.
kubectl describe pod kube-apiserver-controlplane -n kube-system
# look for --authorization-mode
80. Which account is the kube-proxy role assigned to?
kubectl describe rolebinding kube-proxy -n kube-system
K8s LivenessProbe & ReadinessProbe
81. Create the pod nginx with liveness and readiness probes, so that it waits 20 seconds before the first probe check and then checks every 25 seconds.
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod81.yaml
Edit pod81.yaml and add the probes as below:
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    # add:
    livenessProbe:
      initialDelaySeconds: 20
      periodSeconds: 25
      httpGet:
        path: /healthz
        port: 80
    readinessProbe:
      initialDelaySeconds: 20
      periodSeconds: 25
      httpGet:
        path: /
        port: 80
82. Create a single Pod named pod6 in Namespace default with image busybox:1.31.0. The Pod should have a readiness probe executing cat /tmp/ready. It should initially wait 5 seconds and then check every 10 seconds. This sets the container ready only if the file /tmp/ready exists. The Pod should run the command touch /tmp/ready && sleep 1d, which creates the necessary file and then idles. Create the Pod and confirm it starts.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - command:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    resources: {}
    # add:
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - cat /tmp/ready
      initialDelaySeconds: 5
      periodSeconds: 10
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
K8s Security Contexts
83. Run the sleep process with the below details.
- Pod Name: ubuntu-sleeper
- Image Name: ubuntu
- SecurityContext: user ID 1010
kubectl run ubuntu-sleeper --image=ubuntu --dry-run=client -o yaml > pod83.yaml
Update pod83.yaml as below:
spec:
  securityContext:    # add
    runAsUser: 1010   # add
  containers:
  - image: ubuntu
    name: ubuntu-sleeper
    command: ["sleep", "5000"]
84. Create pod ubuntu-sleeper with image ubuntu to run as the root user and with the SYS_TIME capability.
kubectl run ubuntu-sleeper --image=ubuntu --dry-run=client -o yaml > pod84.yaml
Update pod84.yaml as below:
spec:
  containers:
  - command:
    - sleep
    - "4800"
    image: ubuntu
    name: ubuntu-sleeper
    securityContext:
      capabilities:
        add: ["SYS_TIME"]
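As a quick check (a hypothetical test: changing the system clock requires SYS_TIME, so this should now succeed inside the container):
kubectl exec ubuntu-sleeper -- date -s '19 APR 2012 11:14:00'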
K8s Taints and Tolerations
85. Do any taints exist on node node01?
kubectl describe node node01 | grep -i taints
86. Create a taint on node01 with key spray, value mortein, and effect NoSchedule.
kubectl taint node node01 spray=mortein:NoSchedule
87. Create a pod named bee which has a toleration set for the taint spray=mortein.
kubectl run bee --image=nginx --dry-run=client -o yaml > pod87.yaml
Edit pod87.yaml as below:
spec:
  containers:
  - image: nginx
    name: bee
  # add:
  tolerations:
  - key: spray
    value: mortein
    effect: NoSchedule
    operator: Equal
88. Remove the taint on controlplane, which currently has the taint effect NoSchedule.
kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-
K8s Node Affinity
89. Add node affinity to the deployment so its pods are placed on node01 only, using the below details.
- Name: blue
- Replicas: 3
- Image: nginx
- NodeAffinity: requiredDuringSchedulingIgnoredDuringExecution
- Key: color | Value: blue
kubectl create deploy blue --image=nginx --replicas=3 --dry-run=client -o yaml > deploy89.yaml
Update the pod template in deploy89.yaml as below:
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: color
            operator: In
            values:
            - blue
90. Create a new deployment named red with the nginx image and 2 replicas, and ensure it gets placed on the controlplane node only. Use the label key node-role.kubernetes.io/control-plane, which is already set on the controlplane node.
- Name: red
- Replicas: 2
- Image: nginx
- NodeAffinity: requiredDuringSchedulingIgnoredDuringExecution
- Key: node-role.kubernetes.io/control-plane
- Use the right operator
kubectl create deploy red --image=nginx --replicas=2 --dry-run=client -o yaml > deploy90.yaml
Update the pod template in deploy90.yaml as below:
template:
  metadata:
    labels:
      app: red
  spec:
    containers:
    - image: nginx
      imagePullPolicy: Always
      name: nginx
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: node-role.kubernetes.io/control-plane
              operator: Exists
Tricks To Save Time
Set the below variables to avoid repeating command parameters:
- export do="--dry-run=client -o yaml"
- export now="--force --grace-period 0"
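Example usage (assuming the variables above are exported in your shell):
kubectl run nginx --image=nginx $do > pod.yaml
kubectl delete pod nginx $now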
To indent multiple lines in vim:
- :set shiftwidth=2
- mark multiple lines using Shift+V and the up/down keys
- to indent the marked lines press > or <; to repeat the action press .
Check Below Link for Other K8S Concepts