Free CKS Exam Braindumps certification guide Q&A [Q24-Q40]
---------------------------------------------------

CKS Certification Overview - Latest CKS PDF Dumps

The CKS certification is an essential step for security professionals who want to deepen their knowledge and skills in the Kubernetes environment. It provides comprehensive coverage of Kubernetes security topics and validates a candidate's ability to secure Kubernetes clusters and containerized applications against cyber threats. Candidates who pass the CKS exam demonstrate their expertise in securing Kubernetes applications and stand out from their peers in a rapidly evolving Kubernetes ecosystem.

The Linux Foundation Certified Kubernetes Security Specialist (CKS) certification exam validates the skills and knowledge of individuals in securing containerized applications and Kubernetes platforms. Kubernetes is an open-source container orchestration platform that has gained widespread popularity in recent years, and with its increasing adoption, the demand for skilled Kubernetes security specialists has grown as well.

QUESTION 24
Create a new ServiceAccount named backend-sa in the existing namespace default, with the capability to list the Pods inside the namespace default.
Create a new Pod named backend-pod in the namespace default, mount the newly created ServiceAccount backend-sa to the Pod, and verify that the Pod is able to list Pods.
Ensure that the Pod is running.

Explanation:
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside Pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
When you create a Pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw JSON or YAML for a Pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see the spec.serviceAccountName field has been set automatically.
You can access the API from inside a Pod using automatically mounted service account credentials, as described in Accessing the Cluster. The API permissions of the service account depend on the authorization plugin and policy in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...

In version 1.6+, you can also opt out of automounting API credentials for a particular Pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...

The Pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
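The explanation above is background from the Kubernetes documentation; the dump does not show the task itself being solved. Below is a minimal sketch of one way to complete it; the Role name pod-reader and the RoleBinding name backend-sa-binding are illustrative choices, not names required by the question.

# ServiceAccount, Role and RoleBinding in the namespace default
kubectl create serviceaccount backend-sa -n default
kubectl create role pod-reader -n default --verb=list --resource=pods
kubectl create rolebinding backend-sa-binding -n default --role=pod-reader --serviceaccount=default:backend-sa

# Pod that mounts the new ServiceAccount
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  namespace: default
spec:
  serviceAccountName: backend-sa
  containers:
  - name: backend-pod
    image: nginx

# Verify
kubectl auth can-i list pods -n default --as=system:serviceaccount:default:backend-sa   # expect: yes
kubectl get pod backend-pod -n default                                                  # expect: Running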
QUESTION 25
On the cluster worker node, enforce the prepared AppArmor profile:

#include <tunables/global>

profile nginx-deny flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}

Edit the prepared manifest file to include the AppArmor profile:

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
spec:
  containers:
  - name: apparmor-pod
    image: nginx

Finally, apply the manifest file and create the Pod specified in it.
Verify: try to create a file inside a directory that is restricted.
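The manifest above does not yet reference the profile. A minimal sketch of one way to finish the task follows; it assumes the profile text has been saved as /etc/apparmor.d/nginx-deny on the worker node and that the cluster still uses the beta AppArmor annotation (newer Kubernetes releases use the securityContext.appArmorProfile field instead).

# On the worker node: load the profile into the kernel
sudo apparmor_parser -q /etc/apparmor.d/nginx-deny
sudo aa-status | grep nginx-deny        # confirm the profile is loaded

# Pod manifest referencing the loaded profile via the beta annotation
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
  annotations:
    container.apparmor.security.beta.kubernetes.io/apparmor-pod: localhost/nginx-deny
spec:
  containers:
  - name: apparmor-pod
    image: nginx

# Verify: file writes should be denied inside the container
kubectl exec apparmor-pod -- touch /tmp/test    # expected: Permission denied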
QUESTION 26
Cluster: scanner
Master node: controlplane
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context scanner
Given: You may use Trivy's documentation.
Task: Use the Trivy open-source container scanner to detect images with High or Critical severity vulnerabilities used by Pods in the namespace nato, and delete the Pods that use those images.
Trivy is pre-installed on the cluster's master node; use the master node to run Trivy.

Explanation:
[desk@cli] $ ssh controlplane
[controlplane@cli] $ k get pods -n nato
NAME      READY  STATUS   RESTARTS  AGE
alohmora  1/1    Running  0         3m7s
c3d3      1/1    Running  0         2m54s
neon-pod  1/1    Running  0         2m11s
thor      1/1    Running  0         58s
[controlplane@cli] $ k get pods -n nato -o yaml | grep "image: "
[controlplane@cli] $ trivy image <image-name>
[controlplane@cli] $ k delete pod thor -n nato
[controlplane@cli] $ k delete pod neon-pod -n nato
Reference: https://github.com/aquasecurity/trivy

QUESTION 27 (SIMULATION)
Create a RuntimeClass named gvisor-rc using the prepared runtime handler named runsc.
Create a Pod of image nginx in the namespace server to run on the gVisor runtime class.

Install the RuntimeClass for gVisor:

{ # Step 1: Install a RuntimeClass
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor-rc
handler: runsc
EOF
}

Create a Pod with the gVisor RuntimeClass:

{ # Step 2: Create a pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
  namespace: server
spec:
  runtimeClassName: gvisor-rc
  containers:
  - name: nginx
    image: nginx
EOF
}

Verify that the Pod is running:

{ # Step 3: Get the pod
kubectl get pod nginx-gvisor -n server -o wide
}

QUESTION 28
Create a user named john, create the CSR request, and fetch the certificate of the user after approving it.
Create a Role named john-role to list secrets and pods in the namespace john.
Finally, create a RoleBinding named john-role-binding to attach the newly created Role john-role to the user john in the namespace john.
To verify: use the kubectl auth CLI command to verify the permissions.

Explanation:
Use kubectl to create a CSR and approve it.
Get the list of CSRs:
kubectl get csr
Approve the CSR:
kubectl certificate approve myuser
Retrieve the certificate from the CSR:
kubectl get csr/myuser -o yaml

Here are a Role and RoleBinding that give john permission to create a NEW_CRD resource:
kubectl apply -f roleBindingJohn.yaml --as=john
rolebinding.rbac.authorization.k8s.io/john_external-resource-rb created

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: john_crd
  namespace: development-john
subjects:
- kind: User
  name: john
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: crd-creation
  apiGroup: rbac.authorization.k8s.io

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crd-creation
rules:
- apiGroups: ["kubernetes-client.io/v1"]
  resources: ["NEW_CRD"]
  verbs: ["create", "list", "get"]
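The example above is a generic documentation snippet about a NEW_CRD resource and does not match the Role the question actually asks for. A hedged sketch of the task as stated follows; the file names and the certificate subject are illustrative.

# 1. Private key and certificate signing request for user john
openssl genrsa -out john.key 2048
openssl req -new -key john.key -out john.csr -subj "/CN=john"

# 2. Kubernetes CSR object (spec.request is the base64-encoded content of john.csr)
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john
spec:
  request: <output of: base64 -w0 john.csr>
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth

# 3. Approve the CSR and fetch the signed certificate
kubectl certificate approve john
kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt

# 4. Role and RoleBinding in the namespace john
kubectl create role john-role -n john --verb=list --resource=secrets,pods
kubectl create rolebinding john-role-binding -n john --role=john-role --user=john

# 5. Verify
kubectl auth can-i list pods -n john --as=john       # expect: yes
kubectl auth can-i list secrets -n john --as=john    # expect: yes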
QUESTION 29
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
A default-deny NetworkPolicy avoids accidentally exposing a Pod in a namespace that doesn't have any other NetworkPolicy defined.
Task: Create a new default-deny NetworkPolicy named deny-network in the namespace test for all traffic of type Ingress + Egress. The new NetworkPolicy must deny all Ingress + Egress traffic in the namespace test.
Apply the newly created default-deny NetworkPolicy to all Pods running in namespace test.
You can find a skeleton manifest file at /home/cert_masters/network-policy.yaml

Explanation:
master1 $ k get pods -n test --show-labels
NAME      READY  STATUS   RESTARTS  AGE  LABELS
test-pod  1/1    Running  0         34s  role=test,run=test-pod
testing   1/1    Running  0         17d  run=testing

master1 $ vim netpol.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-network
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

master1 $ k apply -f netpol.yaml
Reference: https://kubernetes.io/docs/concepts/services-networking/network-policies/

QUESTION 30 (SIMULATION)
a. Retrieve the content of the existing secret named default-token-xxxxx in the testing namespace. Store the value of the token in token.txt.
b. Create a new secret named test-db-secret in the db namespace with the following content:
username: mysql
password: password@123
Create the Pod named test-db-pod of image nginx in the namespace db that can access test-db-secret via a volume at path /etc/mysql-credentials.

Explanation:
To add a Kubernetes cluster to your project, group, or instance:
Navigate to your:
Project's Operations > Kubernetes page, for a project-level cluster.
Group's Kubernetes page, for a group-level cluster.
Admin Area > Kubernetes page, for an instance-level cluster.
Click Add Kubernetes cluster.
Click the Add existing cluster tab and fill in the details:
Kubernetes cluster name (required) - The name you wish to give the cluster.
Environment scope (required) - The associated environment to this cluster.
API URL (required) - The URL that GitLab uses to access the Kubernetes API. Kubernetes exposes several APIs; we want the "base" URL that is common to all of them, for example https://kubernetes.example.com rather than https://kubernetes.example.com/api/v1.
Get the API URL by running this command:
kubectl cluster-info | grep -E 'Kubernetes master|Kubernetes control plane' | awk '/http/ {print $NF}'
CA certificate (required) - A valid Kubernetes certificate is needed to authenticate to the cluster. We use the certificate created by default.
List the secrets with kubectl get secrets, and one should be named similar to default-token-xxxxx. Copy that token name for use below.
Get the certificate by running this command:
kubectl get secret <secret name> -o jsonpath="{['data']['ca.crt']}"

QUESTION 31 (SIMULATION)
Enable audit logs in the cluster. To do so, enable the log backend and ensure that:
1. Logs are stored at /var/log/kubernetes-logs.txt.
2. Log files are retained for 12 days.
3. At maximum, 8 old audit log files are retained.
4. The maximum size before a log file gets rotated is 200 MB.
Edit and extend the basic policy to log:
1. namespaces changes at the RequestResponse level.
2. The request body of secrets changes in the namespace kube-system.
3. All other resources in core and extensions at the Request level.
4. "pods/portforward" and "services/proxy" at the Metadata level.
5. Omit the RequestReceived stage.
All other requests at the Metadata level.

Explanation:
Kubernetes auditing provides a security-relevant, chronological set of records about a cluster. The kube-apiserver performs auditing. Each request, at each stage of its execution, generates an event, which is then pre-processed according to a certain policy and written to a backend. The policy determines what is recorded and the backends persist the records.
You might want to configure the audit log as part of compliance with the CIS (Center for Internet Security) Kubernetes Benchmark controls.
The audit log can be enabled by default using the following configuration in cluster.yml:
services:
  kube-api:
    audit_log:
      enabled: true
When the audit log is enabled, you should be able to see the default values at /etc/kubernetes/audit-policy.yaml.
The log backend writes audit events to a file in JSON lines format. You can configure the log audit backend using the following kube-apiserver flags:
--audit-log-path specifies the log file path that the log backend uses to write audit events. Not specifying this flag disables the log backend; - means standard out.
--audit-log-maxage defines the maximum number of days to retain old audit log files.
--audit-log-maxbackup defines the maximum number of audit log files to retain.
--audit-log-maxsize defines the maximum size in megabytes of the audit log file before it gets rotated.
If your cluster's control plane runs the kube-apiserver as a Pod, remember to mount a hostPath to the location of the policy file and log file, so that audit records are persisted. For example:
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/audit.log
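The flags are described above, but the extended policy itself is not shown. Below is a minimal sketch, assuming the policy file lives at /etc/kubernetes/audit-policy.yaml and is referenced from the kube-apiserver static Pod manifest; rule order matters because the first matching rule wins.

# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  # 1. namespaces changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["namespaces"]
  # 2. request body of secrets changes in kube-system
  - level: Request
    resources:
    - group: ""
      resources: ["secrets"]
    namespaces: ["kube-system"]
  # 4. pods/portforward and services/proxy at Metadata level (must precede the broader core rule)
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/portforward", "services/proxy"]
  # 3. all other resources in core and extensions at Request level
  - level: Request
    resources:
    - group: ""
    - group: "extensions"
  # everything else at Metadata level
  - level: Metadata

# kube-apiserver flags (e.g. in /etc/kubernetes/manifests/kube-apiserver.yaml)
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes-logs.txt
--audit-log-maxage=12
--audit-log-maxbackup=8
--audit-log-maxsize=200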
QUESTION 32
Secrets stored in etcd are not secure at rest; you can use the etcdctl command-line utility to read a secret value, for example:
ETCDCTL_API=3 etcdctl get /registry/secrets/default/cks-secret --cacert="ca.crt" --cert="server.crt" --key="server.key"
(output omitted)
Using an EncryptionConfiguration, create the manifest that secures the resource secrets using the providers aescbc and identity, to encrypt the secret data at rest, and ensure all existing secrets are encrypted with the new configuration.

QUESTION 33
Before making any changes, build the Dockerfile with tag base:v1.
Now analyse and edit the given Dockerfile (based on ubuntu:16.04), fixing two instructions present in the file from a security aspect and from an image-size reduction point of view.
Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt install nginx -y
COPY entrypoint.sh /
RUN useradd ubuntu
ENTRYPOINT ["/entrypoint.sh"]
USER ubuntu

entrypoint.sh:
#!/bin/bash
echo "Hello from CKS"

After fixing the Dockerfile, build the Docker image with the tag base:v2.
To verify: check the size of the image before and after the build.

QUESTION 34
Cluster: qa-cluster
Master node: master
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context qa-cluster
Task: Create a NetworkPolicy named restricted-policy to restrict access to Pod product running in namespace dev.
Only allow the following Pods to connect to Pod products-service:
1. Pods in the namespace qa
2. Pods with label environment: stage, in any namespace
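No worked answer is included for this question in the dump. A hedged sketch follows; it assumes the target Pod in namespace dev carries a label such as app: products-service (check the real labels with kubectl get pods -n dev --show-labels) and that the qa namespace carries the standard kubernetes.io/metadata.name label.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restricted-policy
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: products-service        # illustrative; use the target Pod's actual label
  policyTypes:
  - Ingress
  ingress:
  - from:
    # 1. any Pod in the namespace qa
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: qa
    # 2. Pods labelled environment: stage, in any namespace
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          environment: stage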
QUESTION 35
Using the runtime detection tool Falco, analyse the container behaviour for at least 30 seconds, using filters that detect newly spawning and executing processes.
Store the incident file at /opt/falco-incident.txt, containing the detected incidents, one per line, in the format [timestamp],[uid],[user-name],[processName].

QUESTION 36
Cluster: scanner
Master node: controlplane
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context scanner
Given: You may use Trivy's documentation.
Task: Use the Trivy open-source container scanner to detect images with High or Critical severity vulnerabilities used by Pods in the namespace nato, and delete the Pods that use those images.
Trivy is pre-installed on the cluster's master node; use the master node to run Trivy.

QUESTION 37
Context: Cluster: prod
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context prod
Task: Analyse and edit the given Dockerfile (based on the ubuntu:18.04 image) /home/cert_masters/Dockerfile, fixing two instructions present in the file that are prominent security/best-practice issues.
Analyse and edit the given manifest file /home/cert_masters/mydeployment.yaml, fixing two fields present in the file that are prominent security/best-practice issues.
Note: Don't add or remove configuration settings; only modify the existing configuration settings, so that two configuration settings each are no longer security/best-practice concerns. Should you need an unprivileged user for any of the tasks, use user nobody with user id 65535.

Answer:
1. For the Dockerfile: fix the image version and the user name.
2. For mydeployment.yaml: fix the security context.

Explanation:
[desk@cli] $ vim /home/cert_masters/Dockerfile
FROM ubuntu:latest    # Remove this
FROM ubuntu:18.04     # Add this
USER root             # Remove this
USER nobody           # Add this
RUN apt-get install -y lsof=4.72 wget=1.17.1 nginx=4.2
ENV ENVIRONMENT=testing
USER root             # Remove this
USER nobody           # Add this
CMD ["nginx -d"]

[desk@cli] $ vim /home/cert_masters/mydeployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: kafka
  name: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kafka
    spec:
      containers:
      - image: bitnami/kafka
        name: kafka
        volumeMounts:
        - name: kafka-vol
          mountPath: /var/lib/kafka
        securityContext:
          {"capabilities":{"add":["NET_ADMIN"],"drop":["all"]},"privileged": true,"readOnlyRootFilesystem": false,"runAsUser": 65535}   # Delete this
          {"capabilities":{"add":["NET_ADMIN"],"drop":["all"]},"privileged": false,"readOnlyRootFilesystem": true,"runAsUser": 65535}   # Add this
        resources: {}
      volumes:
      - name: kafka-vol
        emptyDir: {}
status: {}

QUESTION 38 (SIMULATION)
Before making any changes, build the Dockerfile with tag base:v1.
Now analyse and edit the given Dockerfile (based on ubuntu:16.04), fixing two instructions present in the file from a security aspect and from an image-size reduction point of view.
Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt install nginx -y
COPY entrypoint.sh /
RUN useradd ubuntu
ENTRYPOINT ["/entrypoint.sh"]
USER ubuntu

entrypoint.sh:
#!/bin/bash
echo "Hello from CKS"

After fixing the Dockerfile, build the Docker image with the tag base:v2.
To verify: check the size of the image before and after the build.

QUESTION 39
Create a Pod named nginx-pod inside the namespace testing. Create a service for the nginx-pod named nginx-svc, using the ingress of your choice; run the ingress on TLS (the secure port).
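No worked answer is given for this question. A minimal sketch of one possible approach follows; the host name cks.example.com, the TLS secret name nginx-tls and the ingress class nginx are illustrative and depend on the ingress controller available in the exam cluster.

kubectl run nginx-pod --image=nginx --port=80 -n testing
kubectl expose pod nginx-pod --name=nginx-svc --port=80 -n testing

# Self-signed certificate and TLS secret
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout tls.key -out tls.crt -subj "/CN=cks.example.com"
kubectl create secret tls nginx-tls -n testing --cert=tls.crt --key=tls.key

# Ingress terminating TLS on the secure port (443)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: testing
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - cks.example.com
    secretName: nginx-tls
  rules:
  - host: cks.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80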
QUESTION 40
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context stage
Context: A PodSecurityPolicy shall prevent the creation of privileged Pods in a specific namespace.
Task:
1. Create a new PodSecurityPolicy named deny-policy, which prevents the creation of privileged Pods.
2. Create a new ClusterRole named deny-access-role, which uses the newly created PodSecurityPolicy deny-policy.
3. Create a new ServiceAccount named psp-denial-sa in the existing namespace development.
Finally, create a new ClusterRoleBinding named restrict-access-bind, which binds the newly created ClusterRole deny-access-role to the newly created ServiceAccount psp-denial-sa.

Explanation:
Create a PodSecurityPolicy to disallow privileged containers:
master1 $ vim psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: deny-policy
spec:
  privileged: false    # Don't allow privileged pods!
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'

master1 $ vim cr1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deny-access-role
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - "deny-policy"

master1 $ k create sa psp-denial-sa -n development

master1 $ vim cb1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: restrict-access-bind
roleRef:
  kind: ClusterRole
  name: deny-access-role
  apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize specific service accounts:
- kind: ServiceAccount
  name: psp-denial-sa
  namespace: development

master1 $ k apply -f psp.yaml
master1 $ k apply -f cr1.yaml
master1 $ k apply -f cb1.yaml
Reference: https://kubernetes.io/docs/concepts/policy/pod-security-policy/

---------------------------------------------------
The Best Linux Foundation CKS Study Guides and Dumps of 2023: https://www.testkingfree.com/Linux-Foundation/CKS-practice-exam-dumps.html