KCNA Kubernetes and Cloud Native Associate Questions and Answers
Which of the following observability data streams would be most useful for plotting resource consumption and predicting future resource exhaustion?
Options:
stdout
Traces
Logs
Metrics
Answer: D
Explanation:
The correct answer is D: Metrics. Metrics are numeric time-series measurements collected at regular intervals, making them ideal for plotting resource consumption over time and forecasting future exhaustion. In Kubernetes, this includes CPU usage, memory usage, disk I/O, network throughput, filesystem usage, Pod restarts, and node allocatable vs requested resources. Because metrics are structured and queryable (often with Prometheus), you can compute rates, aggregates, percentiles, and trends, and then apply forecasting methods to predict when a resource will run out.
Logs and traces have different purposes. Logs are event records (strings) that are great for debugging and auditing, but they are not naturally suited to continuous quantitative plotting unless you transform them into metrics (log-based metrics). Traces capture end-to-end request paths and latency breakdowns; they help you find slow spans and dependency bottlenecks, not forecast CPU/memory exhaustion. stdout is just a stream where logs might be written; by itself it’s not an observability data type used for capacity trending.
In Kubernetes observability stacks, metrics are typically scraped from components and workloads: kubelet/cAdvisor exports container metrics, node exporters expose host metrics, and applications expose business/system metrics. The metrics pipeline (Prometheus, OpenTelemetry metrics, managed monitoring) enables dashboards and alerting. For resource exhaustion, you often alert on “time to fill” (e.g., predicted disk fill in < N hours), high sustained utilization, or rapidly increasing error rates due to throttling.
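The "time to fill" alerting described above can be sketched as a Prometheus alerting rule. This is a hedged sketch: it assumes the Prometheus Operator's PrometheusRule CRD and the node_exporter metric node_filesystem_avail_bytes; adjust names for your stack.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: disk-exhaustion-forecast
spec:
  groups:
    - name: capacity
      rules:
        - alert: NodeDiskWillFillIn4Hours
          # predict_linear fits a linear trend over the last 6h of samples and
          # extrapolates 14400s (4h) ahead; firing when the predicted free
          # space drops below zero.
          expr: predict_linear(node_filesystem_avail_bytes{fstype!="tmpfs"}[6h], 14400) < 0
          for: 30m
          labels:
            severity: warning
```

This is exactly the metrics-based forecasting the question is testing: a numeric time series plus a trend extrapolation.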
Therefore, the most appropriate data stream for plotting consumption and predicting exhaustion is Metrics, option D.
=========
How is application data maintained in containers?
Options:
Store data into data folders.
Store data in separate folders.
Store data into sidecar containers.
Store data into volumes.
Answer: D
Explanation:
Container filesystems are ephemeral: the writable layer is tied to the container lifecycle and can be lost when containers are recreated. Therefore, maintaining application data correctly means storing it in volumes, making D the correct answer. In Kubernetes, volumes provide durable or shareable storage that is mounted into containers at specific paths. Depending on the volume type, the data can persist across container restarts and even Pod rescheduling.
Kubernetes supports many volume patterns. For transient scratch data you might use emptyDir (ephemeral for the Pod’s lifetime). For durable state, you typically use PersistentVolumes consumed by PersistentVolumeClaims (PVCs), backed by storage systems via CSI drivers (cloud disks, SAN/NAS, distributed storage). This decouples the application container image from its state and enables rolling updates, rescheduling, and scaling without losing data.
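A minimal sketch of the PVC pattern described above (names, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app   # data written here survives Pod replacement
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

The PVC binds to a PersistentVolume provisioned via a StorageClass, so the Pod can be killed and rescheduled without losing the data under /var/lib/app.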
Options A and B (“folders”) are incomplete because folders inside the container filesystem do not guarantee persistence. A folder is only as durable as the underlying storage; without a mounted volume, it lives in the container’s writable layer and will disappear when the container is replaced. Option C is incorrect because “sidecar containers” are not a data durability mechanism; sidecars can help ship logs or sync data, but persistent data should still be stored on volumes (or external services like managed databases).
From an application delivery standpoint, the principle is: containers should be immutable and disposable, and state should be externalized. Volumes (and external managed services) make this possible. In Kubernetes, this is a foundational pattern enabling safe rollouts, self-healing, and portability: the platform can kill and recreate Pods freely because data is maintained independently via volumes.
Therefore, the verified correct choice is D: Store data into volumes.
=========
Which of the following would fall under the responsibilities of an SRE?
Options:
Developing a new application feature.
Creating a monitoring baseline for an application.
Submitting a budget for running an application in a cloud.
Writing policy on how to submit a code change.
Answer: B
Explanation:
Site Reliability Engineering (SRE) focuses on reliability, availability, performance, and operational excellence using engineering approaches. Among the options, creating a monitoring baseline for an application is a classic SRE responsibility, so B is correct. A monitoring baseline typically includes defining key service-level signals (latency, traffic, errors, saturation), establishing dashboards, setting sensible alert thresholds, and ensuring telemetry is complete enough to support incident response and capacity planning.
In Kubernetes environments, SRE work often involves ensuring that workloads expose health endpoints for probes, that resource requests/limits are set to allow stable scheduling and autoscaling, and that observability pipelines (metrics, logs, traces) are consistent. Building a monitoring baseline also ties into SLO/SLI practices: SREs define what “good” looks like, measure it continuously, and create alerts that notify teams when the system deviates from those expectations.
Option A is primarily an application developer task—SREs may contribute to reliability features, but core product feature development is usually owned by engineering teams. Option C is more aligned with finance, FinOps, or management responsibilities, though SRE data can inform costs. Option D is closer to governance, platform policy, or developer experience/process ownership; SREs might influence processes, but “policy on how to submit code change” is not the defining SRE duty compared to monitoring and reliability engineering.
Therefore, the best verified choice is B, because establishing monitoring baselines is central to operating reliable services on Kubernetes.
=========
Which of the following systems is NOT compatible with the CRI runtime interface standard?
Options:
CRI-O
dockershim
systemd
containerd
Answer: C
Explanation:
Kubernetes uses the Container Runtime Interface (CRI) to support pluggable container runtimes. The kubelet talks to a CRI-compatible runtime via gRPC, and that runtime is responsible for pulling images and running containers. In this context, containerd and CRI-O are CRI-compatible container runtimes (or runtime stacks) used widely with Kubernetes, and dockershim historically served as a compatibility layer that allowed kubelet to talk to Docker Engine as if it were CRI (before dockershim was removed from kubelet in newer Kubernetes versions). That leaves systemd as the correct “NOT compatible with CRI” answer, so C is correct.
systemd is an init system and service manager for Linux. While it can be involved in how services (like kubelet) are started and managed on the host, it is not a container runtime implementing CRI. It does not provide CRI gRPC endpoints for kubelet, nor does it manage containers in the CRI sense.
The deeper Kubernetes concept here is separation of responsibilities: kubelet is responsible for Pod lifecycle at the node level, but it delegates “run containers” to a runtime via CRI. Runtimes like containerd and CRI-O implement that contract; Kubernetes can swap them without changing kubelet logic. Historically, dockershim translated kubelet’s CRI calls into Docker Engine calls. Even though dockershim is no longer part of kubelet, it was still “CRI-adjacent” in purpose and often treated as compatible in older curricula.
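The pluggability shows up concretely in kubelet configuration: the kubelet is simply pointed at a CRI gRPC socket. A hedged sketch (the containerRuntimeEndpoint field exists in newer KubeletConfiguration releases; older setups pass the equivalent --container-runtime-endpoint flag):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# containerd's conventional CRI socket; CRI-O typically listens on
# unix:///var/run/crio/crio.sock instead. systemd exposes no such endpoint,
# which is why it cannot serve as a CRI runtime.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```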
Therefore, among the provided options, systemd is the only one that is clearly not a CRI-compatible runtime system, making C correct.
=========
Which of the following scenarios would benefit the most from a service mesh architecture?
Options:
A few applications with hundreds of Pod replicas running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in a single cluster, each one providing multiple services.
Tens of distributed applications running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in multiple clusters, each one providing multiple services.
Answer: D
Explanation:
A service mesh is most valuable when service-to-service communication becomes complex at large scale—many services, many teams, and often multiple clusters. That’s why D is the best fit: thousands of distributed applications across multiple clusters. In that scenario, the operational burden of securing, observing, and controlling east-west traffic grows dramatically. A service mesh (e.g., Istio, Linkerd) addresses this by introducing a dedicated networking layer (usually sidecar proxies such as Envoy) that standardizes capabilities across services without requiring each application to implement them consistently.
The common “mesh” value-adds are: mTLS for service identity and encryption, fine-grained traffic policy (retries, timeouts, circuit breaking), traffic shifting (canary, mirroring), and consistent telemetry (metrics, traces, access logs). Those features become increasingly beneficial as the number of services and cross-service calls rises, and as you add multi-cluster routing, failover, and policy management across environments. With thousands of applications, inconsistent libraries and configurations become a reliability and security risk; the mesh centralizes and standardizes these behaviors.
In smaller environments (A or C), you can often meet requirements with simpler approaches: Kubernetes Services, Ingress/Gateway, basic mTLS at the edge, and application-level libraries. A single large cluster (B) can still benefit from a mesh, but adding multiple clusters increases complexity: traffic management across clusters, identity trust domains, global observability correlation, and consistent policy enforcement. That’s where mesh architectures typically justify their additional overhead (extra proxies, control plane components, operational complexity).
So, the “most benefit” scenario is the largest, most distributed footprint—D.
=========
What is the practice of bringing financial accountability to the variable spend model of cloud resources?
Options:
FaaS
DevOps
CloudCost
FinOps
Answer: D
Explanation:
The practice of bringing financial accountability to cloud spending—where costs are variable and usage-based—is called FinOps, so D is correct. FinOps (Financial Operations) is an operating model and culture that helps organizations manage cloud costs by connecting engineering, finance, and business teams. Because cloud resources can be provisioned quickly and billed dynamically, traditional budgeting approaches often fail to keep pace. FinOps addresses this by introducing shared visibility, governance, and optimization processes that enable teams to make cost-aware decisions while still moving fast.
In Kubernetes and cloud-native architectures, variable spend shows up in many ways: autoscaling node pools, over-provisioned resource requests, idle clusters, persistent volumes, load balancers, egress traffic, managed services, and observability tooling. FinOps practices encourage tagging/labeling for cost attribution, defining cost KPIs, enforcing budget guardrails, and continuously optimizing usage (right-sizing resources, scaling policies, turning off unused environments, and selecting cost-effective architectures).
Why the other options are incorrect: FaaS (Function as a Service) is a compute model (serverless), not a financial accountability practice. DevOps is a cultural and technical practice focused on collaboration and delivery speed, not specifically cloud cost accountability (though it can complement FinOps). CloudCost is not a widely recognized standard term in the way FinOps is.
In practice, FinOps for Kubernetes often involves improving resource efficiency: aligning requests/limits with real usage, using HPA/VPA appropriately, selecting instance types that match workload profiles, managing cluster autoscaler settings, and allocating shared platform costs to teams via labels/namespaces. It also includes forecasting and anomaly detection, because cloud-native spend can spike quickly due to misconfigurations (e.g., runaway autoscaling or excessive log ingestion).
So, the correct term for financial accountability in cloud variable spend is FinOps (D).
=========
What is ephemeral storage?
Options:
Storage space that need not persist across restarts.
Storage that may grow dynamically.
Storage used by multiple consumers (e.g., multiple Pods).
Storage that is always provisioned locally.
Answer: A
Explanation:
The correct answer is A: ephemeral storage is non-persistent storage whose data does not need to survive Pod restarts or rescheduling. In Kubernetes, ephemeral storage typically refers to storage tied to the Pod’s lifetime—such as the container writable layer, emptyDir volumes, and other temporary storage types. When a Pod is deleted or moved to a different node, that data is generally lost.
This is different from persistent storage, which is backed by PersistentVolumes and PersistentVolumeClaims and is designed to outlive individual Pod instances. Ephemeral storage is commonly used for caches, scratch space, temporary files, and intermediate build artifacts—data that can be recreated and is not the authoritative system of record.
Option B is incorrect because “may grow dynamically” describes an allocation behavior, not the defining characteristic of ephemeral storage. Option C is incorrect because multiple consumers is about access semantics (ReadWriteMany etc.) and shared volumes, not ephemerality. Option D is incorrect because ephemeral storage is not “always provisioned locally” in a strict sense; while many ephemeral forms are local to the node, the definition is about lifecycle and persistence guarantees, not necessarily physical locality.
Operationally, ephemeral storage is an important scheduling and reliability consideration. Pods can request/limit ephemeral storage similarly to CPU/memory, and nodes can evict Pods under disk pressure. Mismanaged ephemeral storage (logs written to the container filesystem, runaway temp files) can cause node disk exhaustion and cascading failures. Best practices include shipping logs off-node, using emptyDir intentionally with size limits where supported, and using persistent volumes for state that must survive restarts.
So, ephemeral storage is best defined as storage that does not need to persist across restarts/rescheduling, matching option A.
=========
What is the main purpose of a DaemonSet?
Options:
A DaemonSet ensures that all (or certain) nodes run a copy of a Pod.
A DaemonSet ensures that the kubelet is constantly up and running.
A DaemonSet ensures that there are as many pods running as specified in the replicas field.
A DaemonSet ensures that a process (agent) runs on every node.
Answer: A
Explanation:
The correct answer is A. A DaemonSet is a workload controller whose job is to ensure that a specific Pod runs on all nodes (or on a selected subset of nodes) in the cluster. This is fundamentally different from Deployments/ReplicaSets, which aim to maintain a certain replica count regardless of node count. With a DaemonSet, the number of Pods is implicitly tied to the number of eligible nodes: add a node, and the DaemonSet automatically schedules a Pod there; remove a node, and its Pod goes away.
DaemonSets are commonly used for node-level services and background agents: log collectors, node monitoring agents, storage daemons, CNI components, or security agents—anything where you want a presence on each node to interact with node resources. This aligns with option D’s phrasing (“agent on every node”), but option A is the canonical definition and is slightly broader because it covers “all or certain nodes” (via node selectors/affinity/taints-tolerations) and the fact that the unit is a Pod.
Why the other options are wrong: DaemonSets do not “keep kubelet running” (B); kubelet is a node service managed by the OS. DaemonSets do not use a replicas field to maintain a specific count (C); that’s Deployment/ReplicaSet behavior.
Operationally, DaemonSets matter for cluster operations because they provide consistent node coverage and automatically react to node pool scaling. They also require careful scheduling constraints so they land only where intended (e.g., only Linux nodes, only GPU nodes). But the main purpose remains: ensure a copy of a Pod runs on each relevant node—option A.
=========
What do Deployments and StatefulSets have in common?
Options:
They manage Pods that are based on an identical container spec.
They support the OnDelete update strategy.
They support an ordered, graceful deployment and scaling.
They maintain a sticky identity for each of their Pods.
Answer: A
Explanation:
Both Deployments and StatefulSets are Kubernetes workload controllers that manage a set of Pods created from a Pod template, meaning they manage Pods based on an identical container specification (a shared Pod template). That is why A is correct. In both cases, you declare a desired state (replicas, container images, environment variables, volumes, probes, etc.) in spec.template, and the controller ensures the cluster converges toward that state by creating, updating, or replacing Pods.
The differences are what make the other options incorrect. OnDelete update strategy is associated with StatefulSets (it’s one of their update strategies), but it is not a shared, defining behavior across both controllers, so B is not “in common.” Ordered, graceful deployment and scaling is a hallmark of StatefulSets (ordered pod creation/termination and stable identities) rather than Deployments, so C is not shared. Sticky identity per Pod (stable network identity and stable storage identity per replica, commonly via StatefulSet + headless Service) is specifically a StatefulSet characteristic, not a Deployment feature, so D is not common.
A useful way to think about it is: both controllers manage replicas of a Pod template, but they differ in semantics. Deployments are designed primarily for stateless workloads and typically focus on rolling updates and scalable replicas where any instance is interchangeable. StatefulSets are designed for stateful workloads and add identity and ordering guarantees: each replica gets a stable name (like db-0, db-1) and often stable PersistentVolumeClaims.
So the shared commonality the question is testing is the basic workload-controller pattern: both controllers manage Pods created from a common template (identical container spec). Therefore, A is the verified answer.
=========
Which of the following is a challenge derived from running cloud native applications?
Options:
The operational costs of maintaining the data center of the company.
Cost optimization is complex to maintain across different public cloud environments.
The lack of different container images available in public image repositories.
The lack of services provided by the most common public clouds.
Answer: B
Explanation:
The correct answer is B. Cloud-native applications often run across multiple environments—different cloud providers, regions, accounts/projects, and sometimes hybrid deployments. This introduces real cost-management complexity: pricing models differ (compute types, storage tiers, network egress), discount mechanisms vary (reserved capacity, savings plans), and telemetry/charge attribution can be inconsistent. When you add Kubernetes, the abstraction layer can further obscure cost drivers because costs are incurred at the infrastructure level (nodes, disks, load balancers) while consumption happens at the workload level (namespaces, Pods, services).
Option A is less relevant because cloud-native adoption often reduces dependence on maintaining a private datacenter; many organizations adopt cloud-native specifically to avoid datacenter CapEx/ops overhead. Option C is generally untrue—public registries and vendor registries contain vast numbers of images; the challenge is more about provenance, security, and supply chain than “lack of images.” Option D is incorrect because major clouds offer abundant services; the difficulty is choosing among them and controlling cost/complexity, not a lack of services.
Cost optimization being complex is a recognized challenge because cloud-native architectures include microservices sprawl, autoscaling, ephemeral environments, and pay-per-use dependencies (managed databases, message queues, observability). Small misconfigurations can cause big bills: noisy logs, over-requested resources, unbounded HPA scaling, and egress-heavy architectures. That’s why practices like FinOps, tagging/labeling for allocation, and automated guardrails are emphasized.
So the best answer describing a real, common cloud-native challenge is B.
=========
What does “Continuous Integration” mean?
Options:
The continuous integration and testing of code changes from multiple sources manually.
The continuous integration and testing of code changes from multiple sources via automation.
The continuous integration of changes from one environment to another.
The continuous integration of new tools to support developers in a project.
Answer: B
Explanation:
The correct answer is B: Continuous Integration (CI) is the practice of frequently integrating code changes from multiple contributors and validating them through automated builds and tests. The “continuous” part is about doing this often (ideally many times per day) and consistently, so integration problems are detected early instead of piling up until a painful merge or release window.
Automation is essential. CI typically includes steps like compiling/building artifacts, running unit and integration tests, executing linters, checking formatting, scanning dependencies for vulnerabilities, and producing build reports. This automation creates fast feedback loops that help developers catch regressions quickly and maintain a releasable main branch.
Option A is incorrect because manual integration/testing does not scale and undermines the reliability and speed that CI is meant to provide. Option C confuses CI with deployment promotion across environments (which is more aligned with Continuous Delivery/Deployment). Option D is unrelated: adding tools can support CI, but it isn’t the definition.
In cloud-native application delivery, CI is tightly coupled with containerization and Kubernetes: CI pipelines often build container images from source, run tests, scan images, sign artifacts, and push to registries. Those validated artifacts then flow into CD processes that deploy to Kubernetes using manifests, Helm, or GitOps controllers. Without CI, Kubernetes rollouts become riskier because you lack consistent validation of what you’re deploying.
So, CI is best defined as automated integration and testing of code changes from multiple sources, which matches option B.
=========
In which framework do the developers no longer have to deal with capacity, deployments, scaling and fault tolerance, and OS?
Options:
Docker Swarm
Kubernetes
Mesos
Serverless
Answer: D
Explanation:
Serverless is the model where developers most directly avoid managing server capacity, OS operations, and much of the deployment/scaling/fault-tolerance mechanics, which is why D is correct. In serverless computing (commonly Function-as-a-Service, FaaS, and managed serverless container platforms), the provider abstracts away the underlying servers. You typically deploy code (functions) or a container image, define triggers (HTTP events, queues, schedules), and the platform automatically provisions the required compute, scales it based on demand, and handles much of the availability and fault tolerance behind the scenes.
It’s important to compare this to Kubernetes: Kubernetes does automate scheduling, self-healing, rolling updates, and scaling, but it still requires you (or your platform team) to design and operate cluster capacity, node pools, upgrades, runtime configuration, networking, and baseline reliability controls. Even in managed Kubernetes services, you still choose node sizes, scale policies, and operational configuration. Kubernetes reduces toil, but it does not eliminate infrastructure concerns in the same way serverless does.
Docker Swarm and Mesos are orchestration platforms that schedule workloads, but they also require managing the underlying capacity and OS-level aspects. They are not “no longer have to deal with capacity and OS” frameworks.
From a cloud native viewpoint, serverless is about consuming compute as an on-demand utility. Kubernetes can be a foundation for a serverless experience (for example, with event-driven autoscaling or serverless frameworks), but the pure framework that removes the most operational burden from developers is serverless.
=========
Which Kubernetes feature would you use to guard against split brain scenarios with your distributed application?
Options:
Replication controllers
Consensus protocols
Rolling updates
StatefulSet
Answer: D
Explanation:
The exam-expected Kubernetes feature here is StatefulSet, so D is the correct answer. StatefulSets are designed for distributed/stateful applications that require stable network identities, stable storage, and ordered deployment/termination. Those properties are commonly required by systems that must avoid “split brain” behaviors—where multiple nodes believe they are the leader/primary due to partitions or identity confusion.
StatefulSets give each Pod a persistent identity (e.g., app-0, app-1) and stable DNS naming (typically via a headless Service), which supports consistent peer discovery and membership. They also commonly pair with PersistentVolumeClaims so that each replica keeps its own data across restarts and reschedules. The ordered rollout semantics help clustered systems bootstrap and expand in controlled sequences, reducing the chance of chaotic membership changes.
Important nuance: StatefulSet alone does not magically prevent split brain. Split brain prevention is primarily a property of the application’s own clustering/consensus design (e.g., leader election, quorum, fencing). That’s why option B (“consensus protocols”) is conceptually the true prevention mechanism—but it’s not a Kubernetes feature in the way the question frames it. Kubernetes provides primitives that make it feasible to run such systems safely (stable IDs, stable storage, predictable DNS), and StatefulSet is the Kubernetes workload API designed for that class of distributed stateful apps.
Replication controllers and rolling updates don’t address identity/quorum concerns. Therefore, within Kubernetes constructs, StatefulSet is the best verified choice for workloads needing stable identity patterns commonly used to reduce split-brain risk.
=========
Which of the following is a feature Kubernetes provides by default as a container orchestration tool?
Options:
A portable operating system.
File system redundancy.
A container image registry.
Automated rollouts and rollbacks.
Answer: D
Explanation:
Kubernetes provides automated rollouts and rollbacks for workloads by default (via controllers like Deployments), so D is correct. In Kubernetes, application delivery is controller-driven: you declare the desired state (new image, new config), and controllers reconcile the cluster toward that state. Deployments implement rolling updates, gradually replacing old Pods with new ones while respecting availability constraints. Kubernetes tracks rollout history and supports rollback to previous ReplicaSets when an update fails or is deemed unhealthy.
This is a core orchestration capability: it reduces manual intervention and makes change safer. Rollouts use readiness checks and update strategies to avoid taking the service down, and kubectl rollout status/history/undo supports day-to-day release operations.
The other options are not “default Kubernetes orchestration features”:
Kubernetes is not a portable operating system (A). It’s a platform for orchestrating containers on top of an OS.
Kubernetes does not provide filesystem redundancy by itself (B). Storage redundancy is handled by underlying storage systems and CSI drivers (e.g., replicated block storage, distributed filesystems).
Kubernetes does not include a built-in container image registry (C). You use external registries (Docker Hub, ECR, GCR, Harbor, etc.). Kubernetes pulls images but does not host them as a core feature.
So the correct “provided by default” orchestration feature in this list is the ability to safely manage application updates via automated rollouts and rollbacks.
=========
Imagine there is a requirement to run a database backup every day. Which Kubernetes resource could be used to achieve that?
Options:
kube-scheduler
CronJob
Task
Job
Answer: B
Explanation:
To run a workload on a repeating schedule (like “every day”), Kubernetes provides CronJob, making B correct. A CronJob creates Jobs according to a cron-formatted schedule, and then each Job creates one or more Pods that run to completion. This is the Kubernetes-native replacement for traditional cron scheduling, but implemented as a declarative resource managed by controllers in the cluster.
For a daily database backup, you’d define a CronJob with a schedule (e.g., "0 2 * * *" for 2:00 AM daily), and specify the Pod template that performs the backup (invokes backup scripts/tools, writes output to durable storage, uploads to object storage, etc.). Kubernetes will then create a Job at each scheduled time. CronJobs also support operational controls like concurrencyPolicy (Allow/Forbid/Replace) to decide what happens if a previous backup is still running, startingDeadlineSeconds to handle missed schedules, and history limits to retain recent successful/failed Job records for debugging.
Option D (Job) is close but not sufficient for “every day.” A Job runs a workload until completion once; you would need an external scheduler to create a Job every day. Option A (kube-scheduler) is a control plane component responsible for placing Pods onto nodes and does not schedule recurring tasks. Option C (“Task”) is not a standard Kubernetes workload resource.
This question is fundamentally about mapping a recurring operational requirement (backup cadence) to Kubernetes primitives. The correct design is: CronJob triggers Job creation on a schedule; Job runs Pods to completion. Therefore, the correct answer is B.
=========
How many different Kubernetes service types can you define?
Options:
2
3
4
5
Answer: C
Explanation:
Kubernetes defines four primary Service types, which is why C (4) is correct. The commonly recognized Service spec.type values are:
ClusterIP: The default type. Exposes the Service on an internal virtual IP reachable only within the cluster. This supports typical east-west traffic between workloads.
NodePort: Exposes the Service on a static port on each node. Traffic sent to any node's IP on that port is forwarded to the Service and on to its backing Pods, allowing access from outside the cluster without a cloud load balancer.
LoadBalancer: Integrates with a cloud provider (or load balancer implementation) to provision an external load balancer and route traffic to the Service. This is common in managed Kubernetes.
ExternalName: Maps the Service name to an external DNS name via a CNAME record, allowing in-cluster clients to use a consistent Service DNS name to reach an external dependency.
Some people also talk about “Headless Services,” but headless is not a separate type; it’s a behavior achieved by setting clusterIP: None. Headless Services still use the Service API object but change DNS and virtual-IP behavior to return endpoint IPs directly rather than a ClusterIP. That’s why the canonical count of “Service types” is four.
This question tests understanding of the Service abstraction: Service type controls how a stable service identity is exposed (internal VIP, node port, external LB, or DNS alias), while selectors/endpoints control where traffic goes (the backend Pods). Different environments will favor different types: ClusterIP for internal microservices, LoadBalancer for external exposure in cloud, NodePort for bare-metal or simple access, ExternalName for bridging to outside services.
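As a sketch, the type is just one field on the Service spec (the name, selector, and ports below are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # illustrative name
spec:
  type: LoadBalancer       # ClusterIP (default), NodePort, LoadBalancer, or ExternalName
  selector:
    app: web               # backend Pods carrying this label
  ports:
  - port: 80               # port the Service exposes
    targetPort: 8080       # port the container listens on
```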
Therefore, the verified answer is C (4).
=========
If kubectl is failing to retrieve information from the cluster, where can you find Pod logs to troubleshoot?
Options:
/var/log/pods/
~/.kube/config
/var/log/k8s/
/etc/kubernetes/
Answer:
A
Explanation:
The correct answer is A: /var/log/pods/. When kubectl logs can’t retrieve logs (for example, API connectivity issues, auth problems, or kubelet/API proxy issues), you can often troubleshoot directly on the node where the Pod ran. Kubernetes nodes typically store container logs on disk, and a common location is under /var/log/pods/, organized by namespace, Pod name/UID, and container. This directory contains symlinks or files that map to the underlying container runtime log location (often under /var/log/containers/ as well, depending on distro/runtime setup).
Option B (~/.kube/config) is your local kubeconfig file; it contains cluster endpoints and credentials, not Pod logs. Option D (/etc/kubernetes/) contains Kubernetes component configuration/manifests on some installations (especially control plane), not application logs. Option C (/var/log/k8s/) is not a standard Kubernetes log path.
Operationally, the node-level log locations depend on the container runtime and logging configuration, but the Kubernetes convention is that kubelet writes container logs to a known location and exposes them through the API so kubectl logs works. If the API path is broken, node access becomes your fallback. This is also why secure node access is sensitive: anyone with node root access can potentially read logs (and other data), which is part of the threat model.
So, the best answer for where to look on the node for Pod logs when kubectl can’t retrieve them is /var/log/pods/, option A.
=========
What is the default deployment strategy in Kubernetes?
Options:
Rolling update
Blue/Green deployment
Canary deployment
Recreate deployment
Answer:
A
Explanation:
For Kubernetes Deployments, the default update strategy is RollingUpdate, which corresponds to “Rolling update” in option A. Rolling updates replace old Pods with new Pods gradually, aiming to maintain availability during the rollout. Kubernetes does this by creating a new ReplicaSet for the updated Pod template and then scaling the new ReplicaSet up while scaling the old one down.
The pace and safety of a rolling update are controlled by parameters like maxUnavailable and maxSurge. maxUnavailable limits how many replicas can be unavailable during the update, protecting availability. maxSurge controls how many extra replicas can be created temporarily above the desired count, helping speed up rollouts while maintaining capacity. If readiness probes fail, Kubernetes will pause progression because new Pods aren’t becoming Ready, helping prevent a bad version from fully replacing a good one.
Options B (Blue/Green) and C (Canary) are popular progressive delivery patterns, but they are not the default built-in Deployment strategy. They are typically implemented using additional tooling (service mesh routing, traffic splitting controllers, or specialized rollout controllers) or by operating multiple Deployments/Services. Option D (Recreate) is a valid strategy but not the default; it terminates all old Pods before creating new ones, causing downtime unless you have external buffering or multi-tier redundancy.
From an application delivery perspective, RollingUpdate aligns with Kubernetes’ declarative model: you update the desired Pod template and let the controller converge safely. kubectl rollout status is commonly used to monitor progress. Rollbacks are also supported because the Deployment tracks history. Therefore, the verified correct answer is A: Rolling update.
=========
What is an advantage of using the Gateway API compared to Ingress in Kubernetes?
Options:
To automatically scale workloads based on CPU and memory utilization.
To provide clearer role separation between infrastructure providers and application developers.
To configure routing rules through annotations directly on Ingress resources.
To expose an application externally by creating only a Service resource.
Answer:
B
Explanation:
The Gateway API is a newer Kubernetes networking API designed to address several limitations of the traditional Ingress resource. One of its most significant advantages is the clear separation of roles and responsibilities between infrastructure providers (such as platform teams or cluster administrators) and application developers. This design principle is a core motivation behind the Gateway API and directly differentiates it from Ingress.
With Ingress, a single resource often combines concerns such as load balancer configuration, TLS settings, routing rules, and application-level details. This frequently leads to heavy reliance on annotations, which are controller-specific, non-standardized, and blur ownership boundaries. Application developers may need elevated permissions to modify Ingress objects, even when changes affect shared infrastructure, creating operational risk.
The Gateway API introduces multiple distinct resources—such as GatewayClass, Gateway, and route resources (e.g., HTTPRoute)—each aligned with a specific role. Infrastructure providers manage GatewayClass and Gateway resources, which define how traffic enters the cluster and what capabilities are available. Application developers interact primarily with route resources to define how traffic is routed to their Services, without needing access to the underlying infrastructure configuration. This separation improves security, governance, and scalability in multi-team environments.
Option A is incorrect because automatic scaling based on CPU and memory is handled by the Horizontal Pod Autoscaler, not by Gateway API or Ingress. Option C describes a characteristic of Ingress, not an advantage of Gateway API; in fact, Gateway API explicitly reduces reliance on annotations by using structured, portable fields. Option D is incorrect because exposing applications externally requires more than just a Service; traffic management resources like Ingress or Gateway are still necessary.
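To illustrate the role split, a developer-owned HTTPRoute might attach to a platform-owned Gateway like this (all names are illustrative assumptions):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route              # illustrative name, owned by the app team
spec:
  parentRefs:
  - name: shared-gateway       # Gateway managed by the platform team
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: app-service        # the developer's own Service
      port: 8080
```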
Therefore, the correct and verified answer is Option B, as the Gateway API’s role-oriented design is a key advancement over Ingress and is clearly documented in Kubernetes networking architecture guidance.
=========
The cloud native architecture centered around microservices provides a strong system that ensures ______________.
Options:
fallback
resiliency
failover
high reachability
Answer:
B
Explanation:
The best answer is B (resiliency). A microservices-centered cloud-native architecture is designed to build systems that continue to operate effectively under change and failure. “Resiliency” is the umbrella concept: the ability to tolerate faults, recover from disruptions, and maintain acceptable service levels through redundancy, isolation, and automated recovery.
Microservices help resiliency by reducing blast radius. Instead of one monolith where a single defect can take down the entire application, microservices separate concerns into independently deployable components. Combined with Kubernetes, you get resiliency mechanisms such as replication (multiple Pod replicas), self-healing (restart and reschedule on failure), rolling updates, health probes, and service discovery/load balancing. These enable the platform to detect and replace failing instances automatically, and to keep traffic flowing to healthy backends.
Options C (failover) and A (fallback) are resiliency techniques but are narrower terms. Failover usually refers to switching to a standby component when a primary fails; fallback often refers to degraded behavior (cached responses, reduced features). Both can exist in microservice systems, but the broader architectural guarantee microservices aim to support is resiliency overall. Option D (“high reachability”) is not the standard term used in cloud-native design and doesn’t capture the intent as precisely as resiliency.
In practice, achieving resiliency also requires good observability and disciplined delivery: monitoring/alerts, tracing across service boundaries, circuit breakers/timeouts/retries, and progressive delivery patterns. Kubernetes provides platform primitives, but resilient microservices also need careful API design and failure-mode thinking.
So the intended and verified completion is resiliency, option B.
=========
If a Pod was waiting for container images to download on the scheduled node, what state would it be in?
Options:
Failed
Succeeded
Unknown
Pending
Answer:
D
Explanation:
If a Pod is waiting for its container images to be pulled to the node, it remains in the Pending phase, so D is correct. Kubernetes Pod “phase” is a high-level summary of where the Pod is in its lifecycle. Pending means the Pod has been accepted by the cluster but one or more of its containers has not started yet. That can occur because the Pod is waiting to be scheduled, waiting on volume attachment/mount, or—very commonly—waiting for the container runtime to pull the image.
When image pulling is the blocker, kubectl describe pod shows events such as Pulling and Pulled, and if a pull fails the container status reports a Waiting state with reasons like ErrImagePull or ImagePullBackOff.
Why the other phases don’t apply:
Succeeded is for run-to-completion Pods that have finished successfully (typical for Jobs).
Failed means the Pod terminated and at least one container terminated in failure (and won’t be restarted, depending on restartPolicy).
Unknown is used when the node can’t be contacted and the Pod’s state can’t be reliably determined (rare in healthy clusters).
A subtle but important Kubernetes detail: status “Waiting” reasons like ImagePullBackOff are container states inside .status.containerStatuses, while the Pod phase can still be Pending. So, “waiting for images to download” maps to Pod Pending, with container waiting reasons providing the deeper diagnosis.
Therefore, the verified correct answer is D: Pending.
=========
Which statement about Ingress is correct?
Options:
Ingress provides a simple way to track network endpoints within a cluster.
Ingress is a Service type like NodePort and ClusterIP.
Ingress is a construct that allows you to specify how a Pod is allowed to communicate.
Ingress exposes routes from outside the cluster to Services in the cluster.
Answer:
D
Explanation:
Ingress is the Kubernetes API resource for defining external HTTP/HTTPS routing into the cluster, so D is correct. An Ingress object specifies rules such as hostnames (e.g., app.example.com), URL paths (e.g., /api), and TLS configuration, mapping those routes to Kubernetes Services. This provides Layer 7 routing capabilities beyond what a basic Service offers.
Ingress is not a Service type (so B is wrong). Service types (ClusterIP, NodePort, LoadBalancer, ExternalName) are part of the Service API and operate at Layer 4. Ingress is a separate API object that depends on an Ingress Controller to actually implement routing. The controller watches Ingress resources and configures a reverse proxy/load balancer (like NGINX, HAProxy, or a cloud load balancer integration) to enforce the desired routing. Without an Ingress Controller, creating an Ingress object alone will not route traffic.
Option A describes endpoint tracking (that’s closer to Endpoints/EndpointSlice). Option C describes NetworkPolicy, which controls allowed network flows between Pods/namespaces. Ingress is about exposing and routing incoming application traffic from outside the cluster to internal Services.
So the verified correct statement is D: Ingress exposes routes from outside the cluster to Services in the cluster.
=========
Which of the following is a good habit for cloud native cost efficiency?
Options:
Follow an automated approach to cost optimization, including visibility and forecasting.
Follow manual processes for cost analysis, including visibility and forecasting.
Use only one cloud provider to simplify the cost analysis.
Keep your legacy workloads unchanged, to avoid cloud costs.
Answer:
A
Explanation:
The correct answer is A. In cloud-native environments, costs are highly dynamic: autoscaling changes compute footprint, ephemeral environments come and go, and usage-based billing applies to storage, network egress, load balancers, and observability tooling. Because of this variability, automation is the most sustainable way to achieve cost efficiency. Automated visibility (dashboards, chargeback/showback), anomaly detection, and forecasting help teams understand where spend is coming from and how it changes over time. Automated optimization actions can include right-sizing requests/limits, enforcing TTLs on preview environments, scaling down idle clusters, and cleaning unused resources.
Manual processes (B) don’t scale as complexity grows. By the time someone reviews a spreadsheet or dashboard weekly, cost spikes may have already occurred. Automation enables fast feedback loops and guardrails, which is essential for preventing runaway spend caused by misconfiguration (e.g., excessive log ingestion, unbounded autoscaling, oversized node pools).
Option C is not a cost-efficiency “habit.” Single-provider strategies may simplify some billing views, but they can also reduce leverage and may not be feasible for resilience/compliance; it’s a business choice, not a best practice for cloud-native cost management. Option D is counterproductive: keeping legacy workloads unchanged often wastes money because cloud efficiency typically requires adapting workloads—right-sizing, adopting autoscaling, and using managed services appropriately.
In Kubernetes specifically, cost efficiency is tightly linked to resource management: accurate CPU/memory requests, limits where appropriate, cluster autoscaler tuning, and avoiding overprovisioning. Observability also matters because you can’t optimize what you can’t measure. Therefore, the best habit is an automated cost optimization approach with strong visibility and forecasting—A.
=========
How many hosts are required to set up a highly available Kubernetes cluster when using an external etcd topology?
Options:
Four hosts. Two for control plane nodes and two for etcd nodes.
Four hosts. One for a control plane node and three for etcd nodes.
Three hosts. The control plane nodes and etcd nodes share the same host.
Six hosts. Three for control plane nodes and three for etcd nodes.
Answer:
D
Explanation:
In a highly available (HA) Kubernetes control plane using an external etcd topology, you typically run three control plane nodes and three separate etcd nodes, totaling six hosts, making D correct. HA design relies on quorum-based consensus: etcd uses Raft and requires a majority of members available to make progress. Running three etcd members is the common minimum for HA because it tolerates one member failure while maintaining quorum (2/3).
In the external etcd topology, etcd is decoupled from the control plane nodes. This separation improves fault isolation: if a control plane node fails or is replaced, etcd remains stable and independent; likewise, etcd maintenance can be handled separately. Kubernetes API servers (often multiple instances behind a load balancer) talk to the external etcd cluster for storage of cluster state.
Options A and B propose four hosts, but they break common HA/quorum best practices. Two etcd nodes do not form a robust quorum configuration (a two-member etcd cluster cannot tolerate a single failure without losing quorum). One control plane node is not HA for the API server/scheduler/controller-manager components. Option C describes a stacked etcd topology (control plane + etcd on same hosts), which can be HA with three hosts, but the question explicitly says external etcd, not stacked. In stacked topology, you often use three control plane nodes each running an etcd member. In external topology, you use three control plane + three etcd.
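The quorum arithmetic behind these topology choices can be sketched in a few lines (a plain majority calculation, not etcd code):

```python
def quorum(members: int) -> int:
    """Raft quorum: a strict majority of the cluster members."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many members can fail while the cluster still has quorum."""
    return members - quorum(members)

# 3 members tolerate 1 failure; 2 members tolerate none, which is why
# a two-node etcd cluster is not a robust HA configuration.
for n in (1, 2, 3, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```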
Operationally, external etcd topology is often used when you want dedicated resources, separate lifecycle management, or stronger isolation for the datastore. It can reduce blast radius but increases infrastructure footprint and operational complexity (TLS, backup/restore, networking). Still, for the canonical HA external-etcd pattern, the expected answer is six hosts: 3 control plane + 3 etcd.
=========
Which mechanism can be used to automatically adjust the amount of resources for an application?
Options:
Horizontal Pod Autoscaler (HPA)
Kubernetes Event-driven Autoscaling (KEDA)
Cluster Autoscaler
Vertical Pod Autoscaler (VPA)
Answer:
A
Explanation:
The verified answer is A (HPA), which aligns with the common Kubernetes meaning of “adjust resources for an application” by scaling replicas. The Horizontal Pod Autoscaler automatically changes the number of Pod replicas for a workload (typically a Deployment) based on observed metrics such as CPU utilization, memory (in some configurations), or custom/external metrics. By increasing replicas under load, the application gains more total CPU/memory capacity available across Pods; by decreasing replicas when load drops, it reduces resource consumption and cost.
It’s important to distinguish what each mechanism adjusts:
HPA adjusts replica count (horizontal scaling).
VPA adjusts Pod resource requests/limits (vertical scaling), which is literally “amount of CPU/memory per pod,” but it often requires restarts to apply changes depending on mode.
Cluster Autoscaler adjusts the number of nodes in the cluster, not application replicas.
KEDA is event-driven autoscaling that often drives HPA behavior using external event sources (queues, streams), but it’s not the primary built-in mechanism referenced in many foundational Kubernetes questions.
Given the wording and the provided answer key, the intended interpretation is: “automatically adjust the resources available to the application” by scaling out/in the number of replicas. That’s exactly HPA’s role. For example, if CPU utilization exceeds a target (say 60%), HPA computes a higher desired replica count and updates the workload. The Deployment then creates more Pods, distributing load and increasing available compute.
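The calculation described above roughly follows the documented HPA formula, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch, with min/max bounds as an assumed configuration:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Core HPA proportional calculation, clamped to the configured bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# e.g. 4 replicas at 90% CPU against a 60% target -> ceil(4 * 90/60) = 6
print(desired_replicas(4, 90, 60))
```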
So, within this question set, the verified correct choice is A (Horizontal Pod Autoscaler).
=========
What are the 3 pillars of Observability?
Options:
Metrics, Logs, and Traces
Metrics, Logs, and Spans
Metrics, Data, and Traces
Resources, Logs, and Tracing
Answer:
A
Explanation:
The correct answer is A: Metrics, Logs, and Traces. These are widely recognized as the “three pillars” because together they provide complementary views into system behavior:
Metrics are numeric time series collected over time (CPU usage, request rate, error rate, latency percentiles). They are best for dashboards, alerting, and capacity planning because they are structured and aggregatable. In Kubernetes, metrics underpin autoscaling and operational visibility (node/pod resource usage, cluster health signals).
Logs are discrete event records (often text) emitted by applications and infrastructure components. Logs provide detailed context for debugging: error messages, stack traces, warnings, and business events. In Kubernetes, logs are commonly collected from container stdout/stderr and aggregated centrally for search and correlation.
Traces capture the end-to-end journey of a request through a distributed system, breaking it into spans. Tracing is crucial in microservices because a single user request may cross many services; traces show where latency accumulates and which dependency fails. Tracing also enables root cause analysis when metrics indicate degradation but don’t pinpoint the culprit.
Why the other options are wrong: a span is a component within tracing, not a top-level pillar; “data” is too generic; and “resources” are not an observability signal category. The pillars are defined by signal type and how they’re used operationally.
In cloud-native practice, these pillars are often unified via correlation IDs and shared context: metrics alerts link to logs and traces for the same timeframe/request. Tooling like Prometheus (metrics), log aggregators (e.g., Loki/Elastic), and tracing systems (Jaeger/Tempo/OpenTelemetry) work together to provide a complete observability story.
Therefore, the verified correct answer is A.
=========
What is CloudEvents?
Options:
It is a specification for describing event data in common formats for Kubernetes network traffic management and cloud providers.
It is a specification for describing event data in common formats in all cloud providers including major cloud providers.
It is a specification for describing event data in common formats to provide interoperability across services, platforms and systems.
It is a Kubernetes specification for describing events data in common formats for iCloud services, iOS platforms and iMac.
Answer:
C
Explanation:
CloudEvents is an open specification for describing event data in a common way to enable interoperability across services, platforms, and systems, so C is correct. In cloud-native architectures, many components communicate asynchronously via events (message brokers, event buses, webhooks). Without a standard envelope, each producer and consumer invents its own event structure, making integration brittle. CloudEvents addresses this by standardizing core metadata fields—like event id, source, type, specversion, and time—and defining how event payloads are carried.
This helps systems interoperate regardless of transport. CloudEvents can be serialized as JSON or other encodings and carried over HTTP, messaging systems, or other protocols. By using a shared spec, you can route, filter, validate, and transform events more consistently.
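A minimal CloudEvents envelope in its JSON encoding might look like the sketch below; the field values are illustrative, while specversion, id, source, and type are the spec's required attributes:

```python
import json

# Illustrative event in the CloudEvents JSON format
event = {
    "specversion": "1.0",                  # version of the CloudEvents spec
    "id": "a1b2c3",                        # unique per source
    "source": "/orders/service",           # identifies the producer
    "type": "com.example.order.created",   # hypothetical event type
    "time": "2024-01-01T02:00:00Z",
    "datacontenttype": "application/json",
    "data": {"orderId": 42},               # the domain payload
}

REQUIRED = {"specversion", "id", "source", "type"}
assert REQUIRED <= event.keys()
print(json.dumps(event, indent=2))
```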
Option A is too narrow and incorrectly ties CloudEvents to Kubernetes traffic management; CloudEvents is broader than Kubernetes. Option B is closer but still framed incorrectly—CloudEvents is not merely “for all cloud providers,” it is an interoperability spec across services and platforms, including but not limited to cloud provider event systems. Option D is clearly incorrect.
In Kubernetes ecosystems, CloudEvents is relevant to event-driven systems and serverless platforms (e.g., Knative Eventing and other eventing frameworks) because it provides a consistent event contract across producers and consumers. That consistency reduces coupling, supports better tooling (schema validation, tracing correlation), and makes event-driven architectures easier to operate at scale.
So, the correct definition is C: a specification for common event formats to enable interoperability across systems.
=========
What are the characteristics for building every cloud-native application?
Options:
Resiliency, Operability, Observability, Availability
Resiliency, Containerd, Observability, Agility
Kubernetes, Operability, Observability, Availability
Resiliency, Agility, Operability, Observability
Answer:
D
Explanation:
Cloud-native applications are typically designed to thrive in dynamic, distributed environments where infrastructure is elastic and failures are expected. The best set of characteristics listed is Resiliency, Agility, Operability, Observability, making D correct.
Resiliency means the application and its supporting platform can tolerate failures and continue providing service. In Kubernetes terms, resiliency is supported through self-healing controllers, replica management, health probes, and safe rollout mechanisms, but the application must also be designed to handle transient failures, retries, and graceful degradation.
Agility reflects the ability to deliver changes quickly and safely. Cloud-native systems emphasize automation, CI/CD, declarative configuration, and small, frequent releases—often enabled by Kubernetes primitives like Deployments and rollout strategies. Agility is about reducing the friction to ship improvements while maintaining reliability.
Operability is how manageable the system is in production: clear configuration, predictable deployments, safe scaling, and automation-friendly operations. Kubernetes encourages operability through consistent APIs, controllers, and standardized patterns for configuration and lifecycle.
Observability means you can understand what’s happening inside the system using telemetry—metrics, logs, and traces—so you can troubleshoot issues, measure SLOs, and improve performance. Kubernetes provides many integration points for observability, but cloud-native apps must also emit meaningful signals.
Options B and C include items that are not “characteristics” (containerd is a runtime; Kubernetes is a platform). Option A includes “availability,” which is important, but the canonical cloud-native framing in this question emphasizes the four qualities in D as the foundational build characteristics.
=========
How does Horizontal Pod autoscaling work in Kubernetes?
Options:
The Horizontal Pod Autoscaler controller adds more CPU or memory to the pods when the load is above the configured threshold, and reduces CPU or memory when the load is below.
The Horizontal Pod Autoscaler controller adds more pods when the load is above the configured threshold, but does not reduce the number of pods when the load is below.
The Horizontal Pod Autoscaler controller adds more pods to the specified DaemonSet when the load is above the configured threshold, and reduces the number of pods when the load is below.
The Horizontal Pod Autoscaler controller adds more pods when the load is above the configured threshold, and reduces the number of pods when the load is below.
Answer:
D
Explanation:
Horizontal Pod Autoscaling (HPA) adjusts the number of Pod replicas for a workload controller (most commonly a Deployment) based on observed metrics, increasing replicas when load is high and decreasing when load drops. That matches D, so D is correct.
HPA does not add CPU or memory to existing Pods—that would be vertical scaling (VPA). Instead, HPA changes spec.replicas on the target resource, and the controller then creates or removes Pods accordingly. HPA commonly scales based on CPU utilization and memory (resource metrics), and it can also scale using custom or external metrics if those are exposed via the appropriate Kubernetes metrics APIs.
Option A is vertical scaling behavior, not HPA. Option B is incorrect because HPA can scale down as well as up (subject to stabilization windows and configuration), so it’s not “scale up only.” Option C is incorrect because HPA does not scale DaemonSets in the usual model; DaemonSets are designed to run one Pod per node (or per selected nodes) rather than a replica count. HPA targets resources like Deployments, ReplicaSets (via Deployment), and StatefulSets in typical usage, where replica count is a meaningful knob.
Operationally, HPA works as a control loop: it periodically reads metrics (for example, via metrics-server for CPU/memory, or via adapters for custom metrics), compares the current value to the desired target, and calculates a desired replica count within min/max bounds. To avoid flapping, HPA includes stabilization behavior and cooldown logic so it doesn’t scale too aggressively in response to short spikes or dips. You can configure minimum and maximum replicas and behavior policies to tune responsiveness.
In cloud-native systems, HPA is a key elasticity mechanism: it enables services to handle variable traffic while controlling cost by scaling down during low demand. Therefore, the verified correct answer is D.
=========
What kubectl command is used to retrieve the resource consumption (CPU and memory) for nodes or Pods?
Options:
kubectl cluster-info
kubectl version
kubectl top
kubectl api-resources
Answer:
C
Explanation:
To retrieve CPU and memory consumption for nodes or Pods, you use kubectl top, so C is correct. kubectl top nodes shows per-node resource usage, and kubectl top pods shows per-Pod (and optionally per-container) usage. This data comes from the Kubernetes resource metrics pipeline, most commonly metrics-server, which scrapes kubelet/cAdvisor stats and exposes them via the metrics.k8s.io API.
It’s important to recognize that kubectl top provides current resource usage snapshots, not long-term historical trending. For long-term metrics and alerting, clusters typically use Prometheus and related tooling. But for quick operational checks—“Is this Pod CPU-bound?” “Are nodes near memory saturation?”—kubectl top is the built-in day-to-day tool.
Option A (kubectl cluster-info) shows general cluster endpoints and info about control plane services, not resource usage. Option B (kubectl version) prints client/server version info. Option D (kubectl api-resources) lists resource types available in the cluster. None of those report CPU/memory usage.
In observability practice, kubectl top is often used during incidents to correlate symptoms with resource pressure. For example, if a node is high on memory, you might see Pods being OOMKilled or the kubelet evicting Pods under pressure. Similarly, sustained high CPU utilization might explain latency spikes or throttling if limits are set. Note that kubectl top requires metrics-server (or an equivalent provider) to be installed and functioning; otherwise it may return errors like “metrics not available.”
So, the correct command for retrieving node/Pod CPU and memory usage is kubectl top.
=========
What is a cloud native application?
Options:
It is a monolithic application that has been containerized and is running now on the cloud.
It is an application designed to be scalable and take advantage of services running on the cloud.
It is an application designed to run all its functions in separate containers.
It is any application that runs in a cloud provider and uses its services.
Answer:
B
Explanation:
B is correct. A cloud native application is designed to be scalable, resilient, and adaptable, and to leverage cloud/platform capabilities rather than merely being “hosted” on a cloud VM. Cloud-native design emphasizes principles like elasticity (scale up/down), automation, fault tolerance, and rapid, reliable delivery. While containers and Kubernetes are common enablers, the key is the architectural intent: build applications that embrace distributed systems patterns and cloud-managed primitives.
Option A is not enough. Simply containerizing a monolith and running it in the cloud does not automatically make it cloud native; that may be “lift-and-shift” packaging. The application might still be tightly coupled, hard to scale, and operationally fragile. Option C is too narrow and prescriptive; cloud native does not require “all functions in separate containers” (microservices are common but not mandatory). Many cloud-native apps use a mix of services, and even monoliths can be made more cloud native by adopting statelessness, externalized state, and automated delivery. Option D is too broad; “any app running in a cloud provider” includes legacy apps that don’t benefit from elasticity or cloud-native operational models.
Cloud-native applications typically align with patterns: stateless service tiers, declarative configuration, health endpoints, horizontal scaling, graceful shutdown, and reliance on managed backing services (databases, queues, identity, observability). They are built to run reliably in dynamic environments where instances are replaced routinely—an assumption that matches Kubernetes’ reconciliation and self-healing model.
So, the best verified definition among these options is B.
=========
What is the core metric type in Prometheus used to represent a single numerical value that can go up and down?
Options:
Summary
Counter
Histogram
Gauge
Answer:
D
Explanation:
In Prometheus, a Gauge represents a single numerical value that can increase and decrease over time, which makes D the correct answer. Gauges are used for values like current memory usage, number of in-flight requests, queue depth, temperature, or CPU usage—anything that can move up and down.
This contrasts with a Counter, which is strictly monotonically increasing (it only goes up, except for resets when a process restarts). Counters are ideal for cumulative totals like total HTTP requests served, total errors, or bytes transmitted. Histograms and Summaries are used to capture distributions (often latency distributions), providing bucketed counts (histogram) or quantile approximations (summary), and are not the “single value that goes up and down” primitive the question asks for.
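The contrast can be sketched with a toy model in Python (illustrative only; this is not the prometheus_client API, and the class names are made up for the example):

```python
# Toy model of Prometheus metric semantics (NOT the real client library API).

class Counter:
    """Monotonic: the value can only increase (it resets only on process restart)."""
    def __init__(self):
        self.value = 0.0

    def inc(self, amount=1.0):
        if amount < 0:
            raise ValueError("counters can only go up")
        self.value += amount

class Gauge:
    """A single numerical value that can go up and down."""
    def __init__(self):
        self.value = 0.0

    def inc(self, amount=1.0):
        self.value += amount

    def dec(self, amount=1.0):
        self.value -= amount

requests_total = Counter()
requests_total.inc()   # cumulative totals only accumulate
in_flight = Gauge()
in_flight.inc()        # a request started
in_flight.dec()        # the request finished -- gauges may go back down
```

With counters you derive rates (e.g. `rate()` in PromQL); with gauges you read the current value directly.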
In Kubernetes observability, metrics are a primary signal for understanding system health and performance. Prometheus is widely used to scrape metrics from Kubernetes components (kubelet, API server, controller-manager), cluster add-ons, and applications. Gauges are common for resource utilization metrics and for instantaneous states, such as container_memory_working_set_bytes or go_goroutines.
When you build alerting and dashboards, selecting the right metric type matters. For example, if you want to alert on the current memory usage, a gauge is appropriate. If you want to compute request rates, you typically use counters with Prometheus functions like rate() to derive per-second rates. Histograms and summaries are used when you need latency percentiles or distribution analysis.
So, for “a single numerical value that can go up and down,” the correct Prometheus metric type is Gauge (D).
=========
What helps an organization to deliver software more securely at a higher velocity?
Options:
Kubernetes
apt-get
Docker Images
CI/CD Pipeline
Answer:
D
Explanation:
A CI/CD pipeline is a core practice/tooling approach that enables organizations to deliver software faster and more securely, so D is correct. CI (Continuous Integration) automates building and testing code changes frequently, reducing integration risk and catching defects early. CD (Continuous Delivery/Deployment) automates releasing validated builds into environments using consistent, repeatable steps—reducing manual errors and enabling rapid iteration.
Security improves because automation enables standardized checks on every change: static analysis, dependency scanning, container image scanning, policy validation, and signing/verification steps can be integrated into the pipeline. Instead of relying on ad-hoc human processes, security controls become repeatable gates. In Kubernetes environments, pipelines commonly build container images, run tests, publish artifacts to registries, and then deploy via manifests, Helm, or GitOps controllers—keeping deployments consistent and auditable.
Option A (Kubernetes) is a platform that helps run and manage workloads, but by itself it doesn’t guarantee secure high-velocity delivery. It provides primitives (rollouts, declarative config, RBAC), yet the delivery workflow still needs automation. Option B (apt-get) is a package manager for Debian-based systems and is not a delivery pipeline. Option C (Docker Images) are artifacts; they improve portability and repeatability, but they don’t provide the end-to-end automation of building, testing, promoting, and deploying across environments.
In cloud-native application delivery, the pipeline is the “engine” that turns code changes into safe production releases. Combined with Kubernetes’ declarative deployment model (Deployments, rolling updates, health probes), a CI/CD pipeline supports frequent releases with controlled rollouts, fast rollback, and strong auditability. That is exactly what the question is targeting. Therefore, the verified answer is D.
=========
What is a Pod?
Options:
A networked application within Kubernetes.
A storage volume within Kubernetes.
A single container within Kubernetes.
A group of one or more containers within Kubernetes.
Answer:
D
Explanation:
A Pod is the smallest deployable/schedulable unit in Kubernetes and consists of a group of one or more containers that are deployed together on the same node—so D is correct. The key idea is that Kubernetes schedules Pods, not individual containers. Containers in the same Pod share important runtime context: they share the same network namespace (one Pod IP and port space) and can share storage volumes defined at the Pod level. This is why a Pod is often described as a “logical host” for its containers.
Most Pods run a single container, but multi-container Pods are common for sidecar patterns. For example, an application container might run alongside a service mesh proxy sidecar, a log shipper, or a config reloader. Because these containers share localhost networking, they can communicate efficiently without exposing extra network endpoints. Because they can share volumes, one container can produce files that another consumes (for example, writing logs to a shared volume).
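A minimal multi-container Pod of this kind might be sketched as a Python dict (kubectl also accepts the equivalent JSON; the names, images, and paths here are hypothetical):

```python
import json

# Illustrative two-container Pod: an app writes logs to a shared emptyDir
# volume, a sidecar ships them. Names, images, and paths are made up.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "demo-pod"},
    "spec": {
        "volumes": [{"name": "logs", "emptyDir": {}}],
        "containers": [
            {"name": "app", "image": "example/app:1.0",
             "volumeMounts": [{"name": "logs", "mountPath": "/var/log/app"}]},
            {"name": "log-shipper", "image": "example/shipper:1.0",
             "volumeMounts": [{"name": "logs", "mountPath": "/logs"}]},
        ],
    },
}

manifest_json = json.dumps(pod, indent=2)  # could be piped to `kubectl apply -f -`
```

Both containers mount the same Pod-level volume and share one network namespace, which is exactly the "logical host" behavior described above.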
Options A and B are incorrect because a Pod is not “an application” abstraction nor is it a storage volume. Pods can host applications, but they are the execution unit for containers rather than the application concept itself. Option C is incorrect because a Pod is not limited to a single container; “one or more containers” is fundamental to the Pod definition.
Operationally, understanding Pods is essential because many Kubernetes behaviors key off Pods: Services select Pods (typically by labels), autoscalers scale Pods (replica counts), probes determine Pod readiness/liveness, and scheduling constraints place Pods on nodes. When a Pod is replaced (for example during a Deployment rollout), a new Pod is created with a new UID and potentially a new IP—reinforcing why Services exist to provide stable access.
Therefore, the verified correct answer is D: a Pod is a group of one or more containers within Kubernetes.
=========
Which of the following is a responsibility of the governance board of an open source project?
Options:
Decide about the marketing strategy of the project.
Review the pull requests in the main branch.
Outline the project's “terms of engagement”.
Define the license to be used in the project.
Answer:
C
Explanation:
A governance board in an open source project typically defines how the community operates—its decision-making rules, roles, conflict resolution, and contribution expectations—so C (“Outline the project's terms of engagement”) is correct. In large cloud-native projects (Kubernetes being a prime example), clear governance is essential to coordinate many contributors, companies, and stakeholders. Governance establishes the “rules of the road” that keep collaboration productive and fair.
“Terms of engagement” commonly includes: how maintainers are selected, how proposals are reviewed (e.g., enhancement processes), how meetings and SIGs operate, what constitutes consensus, how voting works when consensus fails, and what code-of-conduct expectations apply. It also defines escalation and dispute resolution paths so technical disagreements don’t become community-breaking conflicts. In other words, governance is about ensuring the project has durable, transparent processes that outlive any individual contributor and support vendor-neutral decision making.
Option B (reviewing pull requests) is usually the responsibility of maintainers and SIG owners, not a governance board. The governance body may define the structure that empowers maintainers, but it generally does not do day-to-day code review. Option A (marketing strategy) is often handled by foundations, steering committees, or separate outreach groups, not governance boards as their primary responsibility. Option D (defining the license) is usually decided early and may be influenced by a foundation or legal process; while governance can shape legal/policy direction, the core governance responsibility is broader community operating rules rather than selecting a license.
In cloud-native ecosystems, strong governance supports sustainability: it encourages contributions, protects neutrality, and provides predictable processes for evolution. Therefore, the best verified answer is C.
=========
What are the two steps performed by the kube-scheduler to select a node to schedule a pod?
Options:
Grouping and placing
Filtering and selecting
Filtering and scoring
Scoring and creating
Answer:
C
Explanation:
The kube-scheduler selects a node in two main phases: filtering and scoring, so C is correct. First, filtering identifies which nodes are feasible for the Pod by applying hard constraints. These include resource availability (CPU/memory requests), node taints/tolerations, node selectors and required affinities, topology constraints, and other scheduling requirements. Nodes that cannot satisfy the Pod’s requirements are removed from consideration.
Second, scoring ranks the remaining feasible nodes using priority functions to choose the “best” placement. Scoring can consider factors like spreading Pods across nodes/zones, packing efficiency, affinity preferences, and other policies configured in the scheduler. The node with the highest score is selected (with tie-breaking), and the scheduler binds the Pod by setting spec.nodeName.
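The two phases can be sketched as follows (a heavily simplified model; the real kube-scheduler runs many plugins, and the field names here are illustrative):

```python
# Simplified two-phase node selection: filter out infeasible nodes,
# then score the survivors. Field names are illustrative.

def filter_nodes(pod, nodes):
    """Phase 1: drop nodes that cannot satisfy hard constraints."""
    feasible = []
    for node in nodes:
        if node["free_cpu"] < pod["cpu_request"]:
            continue  # insufficient CPU -> infeasible
        if node["free_mem"] < pod["mem_request"]:
            continue  # insufficient memory -> infeasible
        feasible.append(node)
    return feasible

def score_nodes(pod, nodes):
    """Phase 2: rank feasible nodes; here we simply prefer the least-loaded node."""
    return sorted(nodes, key=lambda n: n["free_cpu"] + n["free_mem"], reverse=True)

def schedule(pod, nodes):
    feasible = filter_nodes(pod, nodes)
    if not feasible:
        return None  # Pod stays Pending: "0/N nodes are available"
    return score_nodes(pod, feasible)[0]["name"]

nodes = [
    {"name": "node-1", "free_cpu": 500, "free_mem": 1024},
    {"name": "node-2", "free_cpu": 4000, "free_mem": 8192},
]
best = schedule({"cpu_request": 1000, "mem_request": 512}, nodes)
```

Here node-1 fails filtering (not enough CPU for the request), so scoring only ranks node-2.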
Option B (“filtering and selecting”) is close but misses the explicit scoring step that is central to scheduler design. The scheduler does ultimately “select” a node, but the canonical two-step wording in Kubernetes scheduling is filtering then scoring. Options A and D do not correspond to how the scheduler actually works.
Operationally, understanding filtering vs scoring helps troubleshoot scheduling failures. If a Pod can’t be scheduled, it failed in filtering—kubectl describe pod often shows “0/… nodes are available” reasons (insufficient CPU, taints, affinity mismatch). If it schedules but lands in unexpected places, it’s often about scoring preferences (affinity weights, topology spread preferences, default scheduler profiles).
So the verified correct answer is C: kube-scheduler uses Filtering and Scoring.
=========
Kubernetes ___ protect you against voluntary interruptions (such as deleting Pods, draining nodes) to run applications in a highly available manner.
Options:
Pod Topology Spread Constraints
Pod Disruption Budgets
Taints and Tolerations
Resource Limits and Requests
Answer:
B
Explanation:
The correct answer is B: Pod Disruption Budgets (PDBs). A PDB is a policy object that limits how many Pods of an application can be voluntarily disrupted at the same time. “Voluntary disruptions” include actions such as draining a node for maintenance (kubectl drain), cluster upgrades, or an administrator deleting Pods. The core purpose is to preserve availability by ensuring that a minimum number (or percentage) of replicas remain running and ready while those planned disruptions occur.
A PDB is typically defined with either minAvailable (e.g., “at least 3 Pods must remain available”) or maxUnavailable (e.g., “no more than 1 Pod can be unavailable”). Kubernetes uses this budget when performing eviction operations. If evicting a Pod would violate the PDB, the eviction is blocked (or delayed), which forces maintenance workflows to proceed more safely—either by draining more slowly, scaling up first, or scheduling maintenance in stages.
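The minAvailable arithmetic can be sketched as follows (a simplified model of the eviction check; in a real cluster the API server's eviction subresource enforces this):

```python
# Simplified PodDisruptionBudget check: a voluntary eviction is allowed
# only if the budget still holds after the Pod is removed.

def eviction_allowed(ready_pods, min_available):
    """minAvailable semantics: evicting one Pod must leave >= min_available ready."""
    return ready_pods - 1 >= min_available

# 3 ready replicas with minAvailable: 2
first = eviction_allowed(3, 2)   # one eviction fits within the budget
second = eviction_allowed(2, 2)  # a second eviction would violate it; drain waits
```

This is why a drain proceeds node by node: each eviction is checked against the budget, and blocked evictions force the workflow to wait for replacement Pods to become ready.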
Why the other options are not correct: topology spread constraints (A) influence scheduling distribution across failure domains but don’t directly protect against voluntary disruptions. Taints and tolerations (C) control where Pods can schedule, not how many can be disrupted. Resource requests/limits (D) control CPU/memory allocation and do not guard availability during drains or deletions.
PDBs also work best when paired with Deployments/StatefulSets that maintain replicas and with readiness probes that accurately represent whether a Pod can serve traffic. PDBs do not prevent involuntary disruptions (node crashes), but they materially reduce risk during planned operations—exactly what the question is targeting.
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Options:
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
Answer:
A
Explanation:
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
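The reconciliation loop these tools run can be sketched as follows (a toy model; real controllers watch Git and the API server continuously, and the resource names here are made up):

```python
# Toy GitOps reconciliation: compare desired state (from Git) with actual
# cluster state and compute the operations needed to remove drift.

def reconcile(desired, actual):
    """Return the operations needed to make `actual` match `desired`."""
    ops = []
    for name, manifest in desired.items():
        if actual.get(name) != manifest:
            ops.append(("apply", name))   # create or update a drifted resource
    for name in actual:
        if name not in desired:
            ops.append(("delete", name))  # prune resources removed from Git
    return ops

desired = {"web-deploy": {"image": "example/web:v2"}}
actual = {"web-deploy": {"image": "example/web:v1"}, "old-job": {}}
ops = reconcile(desired, actual)
```

A commit that bumps the image tag changes `desired`, and the next reconcile applies it; reverting the commit rolls the cluster back the same way.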
Option B (“GitOps Toolkit”) is related—it is the set of APIs and controllers on which Flux is built—but the question asks for a “tool” that keeps clusters in sync, and the recognized tools in this list are Flux and Argo CD. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
=========
In a cloud native environment, who is usually responsible for maintaining the workloads running across the different platforms?
Options:
The cloud provider.
The Site Reliability Engineering (SRE) team.
The team of developers.
The Support Engineering team (SE).
Answer:
B
Explanation:
B (the Site Reliability Engineering team) is correct. In cloud-native organizations, SREs are commonly responsible for the reliability, availability, and operational health of workloads across platforms (multiple clusters, regions, clouds, and supporting services). While responsibilities vary by company, the classic SRE charter is to apply software engineering to operations: build automation, standardize runbooks, manage incident response, define SLOs/SLIs, and continuously improve system reliability.
Maintaining workloads “across different platforms” implies cross-cutting operational ownership: deployments need to behave consistently, rollouts must be safe, monitoring and alerting must be uniform, and incident practices must work across environments. SRE teams typically own or heavily influence the observability stack (metrics/logs/traces), operational readiness, capacity planning, and reliability guardrails (error budgets, progressive delivery, automated rollback triggers). They also collaborate closely with platform engineering and application teams, but SRE is often the group that ensures production workloads meet reliability targets.
Why other options are less correct:
The cloud provider (A) maintains the underlying cloud services, but not your application workloads’ correctness, SLOs, or operational processes.
Developers (C) do maintain application code and may own on-call in some models, but the question asks “usually” in cloud-native environments; SRE is the widely recognized function for workload reliability across platforms.
Support Engineering (D) typically focuses on customer support and troubleshooting from a user perspective, not maintaining platform workload reliability at scale.
So, the best and verified answer is B: SRE teams commonly maintain and ensure reliability of workloads across cloud-native platforms.
=========
Which of these is a valid container restart policy?
Options:
On login
On update
On start
On failure
Answer:
D
Explanation:
The correct answer is D: On failure. In Kubernetes, restart behavior is controlled by the Pod-level field spec.restartPolicy, with valid values Always, OnFailure, and Never. The option presented here (“On failure”) maps to Kubernetes’ OnFailure policy. This setting determines what the kubelet should do when containers exit:
Always: restart containers whenever they exit (typical for long-running services)
OnFailure: restart containers only if they exit with a non-zero status (common for batch workloads)
Never: do not restart containers (fail and leave it terminated)
So “On failure” is a valid restart policy concept and the only one in the list that matches Kubernetes semantics.
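The mapping from policy to restart decision can be sketched as follows (a simplified model of the kubelet's behavior, ignoring backoff delays):

```python
# Simplified restartPolicy decision based on a container's exit code.

def should_restart(policy, exit_code):
    if policy == "Always":
        return True                 # long-running services: always bring it back
    if policy == "OnFailure":
        return exit_code != 0       # batch workloads: retry only failed runs
    if policy == "Never":
        return False                # leave the container terminated
    raise ValueError(f"invalid restartPolicy: {policy}")

# A Job with restartPolicy: OnFailure retries non-zero exits only.
retry_failed = should_restart("OnFailure", 1)
retry_success = should_restart("OnFailure", 0)
```

In practice the kubelet also applies an exponential backoff (CrashLoopBackOff) between restarts, which this sketch omits.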
The other options are not Kubernetes restart policies. “On login,” “On update,” and “On start” are not recognized values and don’t align with how Kubernetes models container lifecycle. Kubernetes is declarative and event-driven: it reacts to container exit codes and controller intent, not user “logins.”
Operationally, choosing the right restart policy is important. For example, Jobs typically use restartPolicy: OnFailure or Never because the goal is completion, not continuous uptime. Deployments usually imply “Always” because the workload should keep serving traffic, and a crashed container should be restarted. Also note that controllers interact with restarts: a Deployment may recreate Pods if they fail readiness, while a Job counts completions and failures based on Pod termination behavior.
Therefore, among the options, the only valid (Kubernetes-aligned) restart policy is D.
=========
What factors influence the Kubernetes scheduler when it places Pods on nodes?
Options:
Pod memory requests, node taints, and Pod affinity.
Pod labels, node labels, and request labels.
Node taints, node level, and Pod priority.
Pod priority, container command, and node labels.
Answer:
A
Explanation:
The Kubernetes scheduler chooses a node for a Pod by evaluating scheduling constraints and cluster state. Key inputs include resource requests (CPU/memory), taints/tolerations, and affinity/anti-affinity rules. Option A directly names three real, high-impact scheduling factors—Pod memory requests, node taints, and Pod affinity—so A is correct.
Resource requests are fundamental: the scheduler must ensure the target node has enough allocatable CPU/memory to satisfy the Pod’s requests. Requests (not limits) drive placement decisions. Taints on nodes repel Pods unless the Pod has a matching toleration, which is commonly used to reserve nodes for special workloads (GPU nodes, system nodes, restricted nodes) or to protect nodes under certain conditions. Affinity and anti-affinity allow expressing “place me near” or “place me away” rules—e.g., keep replicas spread across failure domains or co-locate components for latency.
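Taint/toleration filtering can be sketched as follows (a simplified model covering only the NoSchedule effect with exact key/value matches; real tolerations also support operators such as Exists and other effects):

```python
# Simplified taint/toleration check: a node's NoSchedule taints repel a Pod
# unless every one of them is tolerated. Keys and values are illustrative.

def tolerates(pod_tolerations, node_taints):
    for taint in node_taints:
        if taint["effect"] != "NoSchedule":
            continue  # this simplified model ignores other effects
        matched = any(
            t.get("key") == taint["key"] and t.get("value") == taint["value"]
            for t in pod_tolerations
        )
        if not matched:
            return False  # an untolerated taint filters the node out
    return True

gpu_taint = [{"key": "gpu", "value": "true", "effect": "NoSchedule"}]
plain_pod = tolerates([], gpu_taint)                               # repelled
gpu_pod = tolerates([{"key": "gpu", "value": "true"}], gpu_taint)  # admitted
```

This is the mechanism behind reserving special nodes: taint the GPU nodes, and only Pods carrying the matching toleration pass the filter.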
Option B includes labels, which do matter, but “request labels” is not a standard scheduler concept; labels influence scheduling mainly through selectors and affinity, not as a direct category called “request labels.” Option C mixes a real concept (taints, priority) with “node level,” which isn’t a standard scheduling factor term. Option D includes “container command,” which does not influence scheduling; the scheduler does not care what command the container runs, only placement constraints and resources.
Under the hood, kube-scheduler uses a two-phase process (filtering then scoring) to select a node, but the inputs it filters/scores include exactly the kinds of constraints in A. Therefore, the verified best answer is A.
=========
In Kubernetes, what is the primary function of a RoleBinding?
Options:
To provide a user or group with permissions across all resources at the cluster level.
To assign the permissions of a Role to a user, group, or service account within a namespace.
To enforce namespace network rules by binding policies to Pods running in the namespace.
To create and define a new Role object that contains a specific set of permissions.
Answer:
B
Explanation:
In Kubernetes, authorization is managed using Role-Based Access Control (RBAC), which defines what actions identities can perform on which resources. Within this model, a RoleBinding plays a crucial role by connecting permissions to identities, making option B the correct answer.
A Role defines a set of permissions—such as the ability to get, list, create, or delete specific resources—but by itself, a Role does not grant those permissions to anyone. A RoleBinding is required to bind that Role to a specific subject, such as a user, group, or service account. This binding is namespace-scoped, meaning it applies only within the namespace where the RoleBinding is created. As a result, RoleBindings enable fine-grained access control within individual namespaces, which is essential for multi-tenant and least-privilege environments.
When a RoleBinding is created, it references a Role (or a ClusterRole) and assigns its permissions to one or more subjects within that namespace. This allows administrators to reuse existing roles while precisely controlling who can perform certain actions and where. For example, a RoleBinding can grant a service account read-only access to ConfigMaps in a single namespace without affecting access elsewhere in the cluster.
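That ConfigMap example can be sketched as follows (a toy model of namespace-scoped RBAC evaluation; the object shapes and names are illustrative, not the real API types):

```python
# Toy RBAC evaluation: a Role defines permission rules, a RoleBinding ties
# them to subjects within one namespace. Shapes/names are illustrative.

role = {
    "rules": [{"verbs": ["get", "list"], "resources": ["configmaps"]}],
}
role_binding = {
    "namespace": "dev",
    "role": role,
    "subjects": [{"kind": "ServiceAccount", "name": "reader"}],
}

def allowed(binding, subject, namespace, verb, resource):
    if binding["namespace"] != namespace:
        return False  # a RoleBinding only grants access in its own namespace
    if subject not in binding["subjects"]:
        return False  # the subject must be bound
    return any(
        verb in rule["verbs"] and resource in rule["resources"]
        for rule in binding["role"]["rules"]
    )

sa = {"kind": "ServiceAccount", "name": "reader"}
can_read = allowed(role_binding, sa, "dev", "get", "configmaps")
can_write = allowed(role_binding, sa, "dev", "create", "configmaps")
other_ns = allowed(role_binding, sa, "prod", "get", "configmaps")
```

The namespace check at the top is the key property the question tests: the same Role grants nothing outside the namespace of the RoleBinding.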
Option A is incorrect because cluster-wide permissions are granted using a ClusterRoleBinding, not a RoleBinding. Option C is incorrect because network rules are enforced using NetworkPolicies, not RBAC objects. Option D is incorrect because Roles are defined independently and only describe permissions; they do not assign them to identities.
In summary, a RoleBinding’s primary purpose is to assign the permissions defined in a Role to users, groups, or service accounts within a specific namespace. This separation of permission definition (Role) and permission assignment (RoleBinding) is a fundamental principle of Kubernetes RBAC and is clearly documented in Kubernetes authorization architecture.
=========
What is a best practice to minimize the container image size?
Options:
Use a DockerFile.
Use multistage builds.
Build images with different tags.
Add a build.sh script.
Answer:
B
Explanation:
A proven best practice for minimizing container image size is to use multi-stage builds, so B is correct. Multi-stage builds allow you to separate the “build environment” from the “runtime environment.” In the first stage, you can use a full-featured base image (with compilers, package managers, and build tools) to compile your application or assemble artifacts. In the final stage, you copy only the resulting binaries or necessary runtime assets into a much smaller base image (for example, a distroless image or a slim OS image). This dramatically reduces the final image size because it excludes compilers, caches, and build dependencies that are not needed at runtime.
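A typical multi-stage build might look like this (a sketch for a hypothetical Go service; the image tags, module layout, and paths are illustrative):

```dockerfile
# Stage 1: full build environment with the compiler and build dependencies
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Stage 2: minimal runtime image -- only the compiled binary is copied in;
# the compiler, caches, and source never reach the final image
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Only the layers of the final stage ship; everything in the build stage is discarded, which is where the size savings come from.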
In cloud-native application delivery, smaller images matter for several reasons. They pull faster, which speeds up deployments, rollouts, and scaling events (Pods become Ready sooner). They also reduce attack surface by removing unnecessary packages, which helps security posture and scanning results. Smaller images tend to be simpler and more reproducible, improving reliability across environments.
Option A is not a size-minimization practice: using a Dockerfile is simply the standard way to define how to build an image; it doesn’t inherently reduce size. Option C (different tags) changes image identification but not size. Option D (a build script) may help automation, but it doesn’t guarantee smaller images; the image contents are determined by what ends up in the layers.
Multi-stage builds are commonly paired with other best practices: choosing minimal base images, cleaning package caches, avoiding copying unnecessary files (use .dockerignore), and reducing layer churn. But among the options, the clearest and most directly correct technique is multi-stage builds.
Therefore, the verified answer is B.
=========
How do you deploy a workload to Kubernetes without additional tools?
Options:
Create a Bash script and run it on a worker node.
Create a Helm Chart and install it with helm.
Create a manifest and apply it with kubectl.
Create a Python script and run it with kubectl.
Answer:
C
Explanation:
The standard way to deploy workloads to Kubernetes using only built-in tooling is to create Kubernetes manifests (YAML/JSON definitions of API objects) and apply them with kubectl, so C is correct. Kubernetes is a declarative system: you describe the desired state of resources (e.g., a Deployment, Service, ConfigMap, Ingress) in a manifest file, then submit that desired state to the API server. Controllers reconcile the actual cluster state to match what you declared.
A manifest typically includes mandatory fields like apiVersion, kind, and metadata, and then a spec describing desired behavior. For example, a Deployment manifest declares replicas and the Pod template (containers, images, ports, probes, resources). Applying the manifest with kubectl apply -f <file> submits that desired state to the API server, and the relevant controllers reconcile the cluster to match it.
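A minimal Deployment manifest can be sketched as a Python dict and serialized to JSON for kubectl (kubectl accepts JSON as well as YAML; the names and image here are hypothetical):

```python
import json

# Illustrative Deployment manifest as a Python dict. Saved as deploy.json,
# it could be applied with `kubectl apply -f deploy.json`.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "example/web:1.0",
                     "ports": [{"containerPort": 8080}]}
                ]
            },
        },
    },
}

manifest = json.dumps(deployment, indent=2)
```

The declarative point is that this file describes *what* should exist (three replicas of this Pod template), not *how* to start processes; the Deployment controller does the rest.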
Option B (Helm) is indeed a popular deployment tool, but Helm is explicitly an “additional tool” beyond kubectl and the Kubernetes API. The question asks “without additional tools,” so Helm is excluded by definition. Option A (running Bash scripts on worker nodes) bypasses Kubernetes’ desired-state control and is not how Kubernetes workload deployment is intended; it also breaks portability and operational safety. Option D is not a standard Kubernetes deployment mechanism; kubectl does not “run Python scripts” to deploy workloads (though scripts can automate kubectl, that’s still not the primary mechanism).
From a cloud native delivery standpoint, manifests support GitOps, reviewable changes, and repeatable deployments across environments. The Kubernetes-native approach is: declare resources in manifests and apply them to the cluster. Therefore, C is the verified correct answer.
=========
Which of the following is the name of a container orchestration software?
Options:
OpenStack
Docker
Apache Mesos
CRI-O
Answer:
C
Explanation:
C (Apache Mesos) is correct because Mesos is a cluster manager/orchestrator that can schedule and manage workloads (including containerized workloads) across a pool of machines. Historically, Mesos (often paired with frameworks like Marathon) was used to orchestrate services and batch jobs at scale, similar in spirit to Kubernetes’ scheduling and cluster management role.
Why the other answers are not correct as “container orchestration software” in this context:
OpenStack (A) is primarily an IaaS cloud platform for provisioning compute, networking, and storage (VM-focused). It’s not a container orchestrator, though it can host Kubernetes or containers.
Docker (B) is a container platform/tooling ecosystem (image build, runtime, local orchestration via Docker Compose/Swarm historically), but “Docker” itself is not the best match for “container orchestration software” in the multi-node cluster orchestration sense that the question implies.
CRI-O (D) is a container runtime implementing Kubernetes’ CRI; it runs containers on a node but does not orchestrate placement, scaling, or service lifecycle across a cluster.
Container orchestration typically means capabilities like scheduling, scaling, service discovery integration, health management, and rolling updates across multiple hosts. Mesos fits that definition: it provides resource management and scheduling over a cluster and can run container workloads via supported containerizers. Kubernetes ultimately became the dominant orchestrator for many use cases, but Mesos is clearly recognized as orchestration software in this category.
So, among these choices, the verified orchestration platform is Apache Mesos (C).
=========
A site reliability engineer needs to temporarily prevent new Pods from being scheduled on node-2 while keeping the existing workloads running without disruption. Which kubectl command should be used?
Options:
kubectl cordon node-2
kubectl delete node-2
kubectl drain node-2
kubectl pause deployment
Answer:
A
Explanation:
In Kubernetes, node maintenance and availability are common operational tasks, and the platform provides specific commands to control how the scheduler places Pods on nodes. When the requirement is to temporarily prevent new Pods from being scheduled on a node without affecting the currently running Pods, the correct approach is to cordon the node.
The command kubectl cordon node-2 marks the node as unschedulable. This means the Kubernetes scheduler will no longer place any new Pods onto that node. Importantly, cordoning a node does not evict, restart, or interrupt existing Pods. All workloads already running on the node continue operating normally. This makes cordoning ideal for scenarios such as diagnostics, monitoring, or preparing for future maintenance while ensuring zero workload disruption.
Option B, kubectl delete node-2, is incorrect because deleting a node removes it entirely from the cluster. This action would cause Pods running on that node to be terminated and rescheduled elsewhere, resulting in disruption—exactly what the question specifies must be avoided.
Option C, kubectl drain node-2, is also incorrect in this context. Draining a node safely evicts Pods (except for certain exclusions like DaemonSets) and reschedules them onto other nodes. While drain is useful for maintenance and upgrades, it does not keep existing workloads running on the node, making it unsuitable here.
Option D, kubectl pause deployment, applies only to Deployments and merely pauses rollout updates. It does not affect node-level scheduling behavior and has no impact on where Pods are placed by the scheduler.
Therefore, the correct and verified answer is Option A: kubectl cordon node-2, which aligns with Kubernetes operational best practices and official documentation for non-disruptive node management.
=========
Which of the following are tasks performed by a container orchestration tool?
Options:
Schedule, scale, and manage the health of containers.
Create images, scale, and manage the health of containers.
Debug applications, and manage the health of containers.
Store images, scale, and manage the health of containers.
Answer:
A
Explanation:
A container orchestration tool (like Kubernetes) is responsible for scheduling, scaling, and health management of workloads, making A correct. Orchestration sits above individual containers and focuses on running applications reliably across a fleet of machines. Scheduling means deciding which node should run a workload based on resource requests, constraints, affinities, taints/tolerations, and current cluster state. Scaling means changing the number of running instances (replicas) to meet demand (manually or automatically through autoscalers). Health management includes monitoring whether containers and Pods are alive and ready, replacing failed instances, and maintaining the declared desired state.
Options B and D include “create images” and “store images,” which are not orchestration responsibilities. Image creation is a CI/build responsibility (Docker/BuildKit/build systems), and image storage is a container registry responsibility (Harbor, ECR, GCR, Docker Hub, etc.). Kubernetes consumes images from registries but does not build or store them. Option C includes “debug applications,” which is not a core orchestration function. While Kubernetes provides tools that help debugging (logs, exec, events), debugging is a human/operator activity rather than the orchestrator’s fundamental responsibility.
In Kubernetes specifically, these orchestration tasks are implemented through controllers and control loops: Deployments/ReplicaSets manage replica counts and rollouts, kube-scheduler assigns Pods to nodes, kubelet ensures containers run, and probes plus controller logic replace unhealthy replicas. This is exactly what makes Kubernetes valuable at scale: instead of manually starting/stopping containers on individual hosts, you declare your intent and let the orchestration system continually reconcile reality to match. That combination—placement + elasticity + self-healing—is the core of container orchestration, matching option A precisely.
=========
Which Kubernetes resource uses the immutable: true boolean field?
Options:
Deployment
Pod
ConfigMap
ReplicaSet
Answer:
C
Explanation:
The immutable: true field is supported by ConfigMap (and also by Secrets, though Secret is not in the options), so C is correct. When a ConfigMap is marked immutable, its data can no longer be changed after creation. This is useful for protecting configuration from accidental modification and for improving cluster performance by reducing watch/update churn on frequently referenced configuration objects.
In Kubernetes, ConfigMaps store non-sensitive configuration as key-value pairs. They can be consumed by Pods as environment variables, command-line arguments, or mounted files in volumes. Without immutability, ConfigMap updates can trigger complex runtime behaviors: for example, file-mounted ConfigMap updates can eventually reflect in the volume (with some delay), but environment variables do not update automatically in running Pods. This can cause confusion and configuration drift between expected and actual behavior. Marking a ConfigMap immutable makes the configuration stable and encourages explicit rollout strategies (create a new ConfigMap with a new name and update the Pod template), which is generally more reliable for production delivery.
Why the other options are wrong: Deployments, Pods, and ReplicaSets do not use an immutable: true field as a standard top-level toggle in their API schema for the purpose described. These objects can be updated through the normal API mechanisms, and their updates are part of typical lifecycle operations (rolling updates, scaling, etc.). The immutability concept exists in Kubernetes, but the specific immutable boolean in this context is a recognized field for ConfigMap (and Secret) objects.
Operationally, immutable ConfigMaps help enforce safer practices: instead of editing live configuration in place, teams adopt versioned configuration artifacts and controlled rollouts via Deployments. This fits cloud-native principles of repeatability and reducing accidental production changes.
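A minimal sketch of the field in context (names and data are illustrative); note that immutable is a top-level field, not part of data:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v1    # versioned name: roll out changes by creating app-config-v2
data:
  LOG_LEVEL: "info"
immutable: true          # once created, data/binaryData can no longer be modified
```

Updating then means creating a new ConfigMap and pointing the Pod template at the new name, which matches the explicit-rollout strategy described above.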
=========
What edge and service proxy tool is designed to be integrated with cloud native applications?
Options:
CoreDNS
CNI
gRPC
Envoy
Answer:
D
Explanation:
The correct answer is D: Envoy. Envoy is a high-performance edge and service proxy designed for cloud-native environments. It is commonly used as the data plane in service meshes and modern API gateways because it provides consistent traffic management, observability, and security features across microservices without requiring every application to implement those capabilities directly.
Envoy operates at Layer 7 (application-aware) and supports protocols like HTTP/1.1, HTTP/2, gRPC, and more. It can handle routing, load balancing, retries, timeouts, circuit breaking, rate limiting, TLS termination, and mutual TLS (mTLS). Envoy also emits rich telemetry (metrics, access logs, tracing) that integrates well with cloud-native observability stacks.
Why the other options are incorrect:
CoreDNS (A) provides DNS-based service discovery within Kubernetes; it is not an edge/service proxy.
CNI (B) is a specification and plugin ecosystem for container networking (Pod networking), not a proxy.
gRPC (C) is an RPC protocol/framework used by applications; it’s not a proxy tool. (Envoy can proxy gRPC traffic, but gRPC itself isn’t the proxy.)
In Kubernetes architectures, Envoy often appears in two places: (1) at the edge as part of an ingress/gateway layer, and (2) sidecar proxies alongside Pods in a service mesh (like Istio) to standardize service-to-service communication controls and telemetry. This is why it is described as “designed to be integrated with cloud native applications”: it’s purpose-built for dynamic service discovery, resilient routing, and operational visibility in distributed systems.
So the verified correct choice is D (Envoy).
=========
Which are the two primary modes for Service discovery within a Kubernetes cluster?
Options:
Environment variables and DNS
API calls and LDAP
Labels and RADIUS
Selectors and DHCP
Answer:
A
Explanation:
Kubernetes supports two primary built-in modes of Service discovery for workloads: environment variables and DNS, making A correct.
Environment variables: When a Pod is created, kubelet can inject environment variables for Services that exist in the same namespace at the time the Pod starts. These variables include the Service host and port, following the {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT pattern (for example, MY_SERVICE_SERVICE_HOST and MY_SERVICE_SERVICE_PORT for a Service named my-service). This approach is simple but has limitations: values are captured at Pod creation time and don’t automatically update if Services change, and it can become cluttered in namespaces with many Services.
DNS-based discovery: This is the most common and flexible method. Kubernetes cluster DNS (usually CoreDNS) provides names like service-name.namespace.svc.cluster.local. Clients resolve the name and connect to the Service, which then routes to backend Pods. DNS scales better, is dynamic with endpoint updates, supports headless Services for per-Pod discovery, and is the default pattern for microservice communication.
The other options are not Kubernetes service discovery modes. Labels and selectors are used internally to relate Services to Pods, but they are not what application code uses for discovery (apps typically don’t query selectors; they call DNS names). LDAP and RADIUS are identity/authentication protocols, not service discovery. DHCP is for IP assignment on networks, not for Kubernetes Service discovery.
Operationally, DNS is central: many applications assume name-based connectivity. If CoreDNS is misconfigured or overloaded, service-to-service calls may fail even if Pods and Services are otherwise healthy. Environment-variable discovery can still work for some legacy apps, but modern cloud-native practice strongly prefers DNS (and sometimes service meshes on top of it). The key exam concept is: Kubernetes provides service discovery via env vars and DNS.
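Both discovery modes can be illustrated with one Service definition (names are illustrative); the comments show what a client sees:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service       # illustrative name
  namespace: default
spec:
  selector:
    app: my-app          # routes to Pods carrying this label
  ports:
  - port: 80
    targetPort: 8080
# DNS:  clients resolve my-service.default.svc.cluster.local
#       (or just "my-service" from the same namespace) and connect to the ClusterIP.
# Env:  Pods started after this Service exists receive variables following the
#       {SVCNAME}_SERVICE_HOST/_PORT pattern, e.g. MY_SERVICE_SERVICE_HOST.
```

The DNS name stays stable across Pod restarts and scaling, which is why it is the preferred mode.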
=========
Which of the following is a correct definition of a Helm chart?
Options:
A Helm chart is a collection of YAML files bundled in a tar.gz file and can be applied without decompressing it.
A Helm chart is a collection of JSON files and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is a collection of YAML files that can be applied on Kubernetes by using the kubectl tool.
A Helm chart is similar to a package and contains all the resource definitions to run an application on Kubernetes.
Answer:
D
Explanation:
A Helm chart is best described as a package for Kubernetes applications, containing the resource definitions (as templates) and metadata needed to install and manage an application—so D is correct. Helm is a package manager for Kubernetes; the chart is the packaging format. Charts include a Chart.yaml (metadata), a values.yaml (default configuration values), and a templates/ directory containing Kubernetes manifests written as templates. When you install a chart, Helm renders those templates into concrete Kubernetes YAML manifests by substituting values, then applies them to the cluster.
Option A is misleading/incomplete. While charts are often distributed as a compressed tarball (.tgz), the defining feature is not “YAML bundled in tar.gz” but the packaging and templating model that supports install/upgrade/rollback. Option B is incorrect because Helm charts are not “collections of JSON files” by definition; Kubernetes resources can be expressed as YAML or JSON, but Helm charts overwhelmingly use templated YAML. Option C is incorrect because charts are not simply YAML applied by kubectl; Helm manages releases, tracks installed resources, and supports upgrades and rollbacks. Helm uses Kubernetes APIs under the hood, but the value of Helm is the lifecycle and packaging system, not “kubectl apply.”
In cloud-native application delivery, Helm helps standardize deployments across environments (dev/stage/prod) by externalizing configuration through values. It reduces copy/paste and supports reuse via dependencies and subcharts. Helm also supports versioning of application packages, allowing teams to upgrade predictably and roll back if needed—critical for production change management.
So, the correct and verified definition is D: a Helm chart is like a package containing the resource definitions needed to run an application on Kubernetes.
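The pieces named above (Chart.yaml, values.yaml, templates/) can be sketched as a minimal chart. All names and values here are hypothetical; the templates excerpt shows how values.yaml feeds the rendered manifest:

```yaml
# Chart.yaml -- chart metadata
apiVersion: v2
name: my-app
version: 0.1.0
---
# values.yaml -- default, user-overridable configuration
image:
  repository: registry.example.com/my-app   # hypothetical registry
  tag: "1.0.0"
replicaCount: 2
---
# templates/deployment.yaml (excerpt) -- rendered at install/upgrade time:
#   spec:
#     replicas: {{ .Values.replicaCount }}
#     ...
#           image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Installing the chart renders the templates with the (possibly overridden) values and records the result as a versioned release, enabling upgrade and rollback.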
=========
What happens if only a limit is specified for a resource and no admission-time mechanism has applied a default request?
Options:
Kubernetes will create the container but it will fail with CrashLoopBackOff.
Kubernetes does not allow containers to be created without request values, causing eviction.
Kubernetes copies the specified limit and uses it as the requested value for the resource.
Kubernetes chooses a random value and uses it as the requested value for the resource.
Answer:
C
Explanation:
In Kubernetes, resource management for containers is based on requests and limits. Requests represent the minimum amount of CPU or memory required for scheduling decisions, while limits define the maximum amount a container is allowed to consume at runtime. Understanding how Kubernetes behaves when only a limit is specified is important for predictable scheduling and resource utilization.
If a container specifies a resource limit but does not explicitly specify a resource request, Kubernetes applies a well-defined default behavior. In this case, Kubernetes automatically sets the request equal to the specified limit. This behavior ensures that the scheduler has a concrete request value to use when deciding where to place the Pod. Without a request value, the scheduler would not be able to make accurate placement decisions, as scheduling is entirely request-based.
This defaulting behavior applies independently to each resource type, such as CPU and memory. For example, if a container sets a memory limit of 512Mi but does not define a memory request, Kubernetes treats the memory request as 512Mi as well. The same applies to CPU limits. As a result, the Pod is scheduled as if it requires the full amount of resources defined by the limit.
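A minimal sketch of this defaulting (illustrative names and values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limit-only          # illustrative
spec:
  containers:
  - name: app
    image: nginx:1.25       # illustrative image
    resources:
      limits:
        memory: 512Mi       # only limits are specified...
        cpu: 500m
      # requests are omitted: the API server defaults each request to its limit,
      # so the Pod is scheduled as if it requested 512Mi of memory and 500m CPU
```

Reading the object back from the API (e.g. with kubectl get pod -o yaml) shows the requests filled in.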
Option A is incorrect because specifying only a limit does not cause a container to crash or enter CrashLoopBackOff. CrashLoopBackOff is related to application failures, not resource specification defaults. Option B is incorrect because Kubernetes allows containers to be created without explicit requests, relying on defaulting behavior instead. Option D is incorrect because Kubernetes never assigns random values for resource requests.
This behavior is clearly defined in Kubernetes resource management documentation and is especially relevant when admission controllers like LimitRange are not applying default requests. While valid, relying solely on limits can reduce cluster efficiency, as Pods may reserve more resources than they actually need. Therefore, best practice is to explicitly define both requests and limits.
Thus, the correct and verified answer is Option C.
=========
In a cloud native world, what does the IaC abbreviation stand for?
Options:
Infrastructure and Code
Infrastructure as Code
Infrastructure above Code
Infrastructure across Code
Answer:
B
Explanation:
IaC stands for Infrastructure as Code, which is option B. In cloud native environments, IaC is a core operational practice: infrastructure (networks, clusters, load balancers, IAM roles, storage classes, DNS records, and more) is defined using code-like, declarative configuration rather than manual, click-driven changes. This approach mirrors Kubernetes’ own declarative model—where you define desired state in manifests and controllers reconcile the cluster to match.
IaC improves reliability and velocity because it makes infrastructure repeatable, version-controlled, reviewable, and testable. Teams can store infrastructure definitions in Git, use pull requests for change review, and run automated checks to validate formatting, policies, and safety constraints. If an environment must be recreated (disaster recovery, test environments, regional expansion), IaC enables consistent reproduction with fewer human errors.
In Kubernetes-centric workflows, IaC often covers both the base platform and the workloads layered on top. For example, provisioning might include the Kubernetes control plane, node pools, networking, and identity integration, while Kubernetes manifests (or Helm/Kustomize) define Deployments, Services, RBAC, Ingress, and storage resources. GitOps extends this further by continuously reconciling cluster configuration from a Git source of truth.
The incorrect options (Infrastructure and Code / above / across) are not standard terms. The key idea is “infrastructure treated like software”: changes are made through code commits, go through CI checks, and are rolled out in controlled ways. This aligns with cloud native goals: faster iteration, safer operations, and easier auditing. In short, IaC is the operational backbone that makes Kubernetes and cloud platforms manageable at scale, enabling consistent environments and reducing configuration drift.
=========
How can you monitor the progress for an updated Deployment/DaemonSets/StatefulSets?
Options:
kubectl rollout watch
kubectl rollout progress
kubectl rollout state
kubectl rollout status
Answer:
D
Explanation:
To monitor rollout progress for Kubernetes workload updates (most commonly Deployments, and also StatefulSets and DaemonSets where applicable), the standard kubectl command is kubectl rollout status, which makes D correct.
Kubernetes manages updates declaratively through controllers. For a Deployment, an update typically creates a new ReplicaSet and gradually shifts replicas from the old to the new according to the strategy (e.g., RollingUpdate with maxUnavailable and maxSurge). For StatefulSets, updates may be ordered and respect stable identities, and for DaemonSets, an update replaces node-level Pods according to update strategy. In all cases, you often want a single command that tells you whether the controller has completed the update and whether the new replicas are available. kubectl rollout status queries the resource status and prints a progress view until completion or timeout.
The other commands listed are not the canonical kubectl subcommands. kubectl rollout watch, kubectl rollout progress, and kubectl rollout state are not standard rollout verbs in kubectl. The supported rollout verbs typically include status, history, undo, pause, and resume (depending on kubectl version and resource type).
Operationally, `kubectl rollout status deployment/<name>` waits and reports progress until the rollout completes, fails, or times out, which makes it useful in CI/CD pipelines for gating promotion on a successful rollout.
=========
Which of the following is a lightweight tool that manages traffic flows between services, enforces access policies, and aggregates telemetry data, all without requiring changes to application code?
Options:
NetworkPolicy
Linkerd
kube-proxy
Nginx
Answer:
B
Explanation:
Linkerd is a lightweight service mesh that manages service-to-service traffic, security policies, and telemetry without requiring application code changes—so B is correct. A service mesh introduces a dedicated layer for east-west traffic (internal service calls) and typically provides features like mutual TLS (mTLS), retries/timeouts, traffic shaping, and consistent metrics/tracing signals. Linkerd is known for being simpler and resource-efficient relative to some alternatives, which aligns with the “lightweight tool” phrasing.
Why this matches the description: In a service mesh, workload traffic is intercepted by a proxy layer (often as a sidecar or node-level/ambient proxy) and managed centrally by mesh control components. This allows security and traffic policy to be applied uniformly without modifying each microservice. Telemetry is also generated consistently because the proxies observe traffic directly and emit metrics and traces about request rates, latency, and errors.
The other choices don’t fit. NetworkPolicy is a Kubernetes resource that controls allowed network flows (L3/L4) but does not provide L7 traffic management, retries, identity-based mTLS, or automatic telemetry aggregation. kube-proxy implements Service networking rules (ClusterIP/NodePort forwarding) but does not enforce access policies at the service identity level and is not a telemetry system. Nginx can be used as an ingress controller or reverse proxy, but it is not inherently a full service mesh spanning all service-to-service communication and policy/telemetry across the mesh by default.
In cloud native architecture, service meshes help address cross-cutting concerns—security, observability, and traffic management—without embedding that logic into every application. The question’s combination of “traffic flows,” “access policies,” and “aggregates telemetry” maps directly to a mesh, and the lightweight mesh option provided is Linkerd.
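The "without requiring changes to application code" property typically comes from proxy injection. With Linkerd, opting a workload into the mesh can be as simple as an annotation; a sketch using an illustrative namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: shop                      # illustrative namespace
  annotations:
    linkerd.io/inject: enabled    # Linkerd's injector adds its proxy to new Pods created here
```

Pods subsequently created in this namespace get the mesh proxy automatically, so mTLS, traffic policy, and telemetry apply without touching application images.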
=========
Which storage operator in Kubernetes can help the system to self-scale, self-heal, etc.?
Options:
Rook
Kubernetes
Helm
Container Storage Interface (CSI)
Answer:
A
Explanation:
Rook is a Kubernetes storage operator that helps manage and automate storage systems in a Kubernetes-native way, so A is correct. The key phrase in the question is “storage operator … self-scale, self-heal.” Operators extend Kubernetes by using controllers to reconcile a desired state. Rook applies that model to storage, commonly by managing storage backends like Ceph (and other systems depending on configuration).
With an operator approach, you declare how you want storage to look (cluster size, pools, replication, placement, failure domains), and the operator works continuously to maintain that state. That includes operational behaviors that feel “self-healing” such as reacting to failed storage Pods, rebalancing, or restoring desired replication counts (the exact behavior depends on the backend and configuration). The important KCNA-level idea is that Rook uses Kubernetes controllers to automate day-2 operations for storage in a way consistent with Kubernetes’ reconciliation loops.
The other options do not match the question: “Kubernetes” is the orchestrator itself, not a storage operator. “Helm” is a package manager for Kubernetes apps—it can install storage software, but it is not an operator that continuously reconciles and self-manages. “CSI” (Container Storage Interface) is an interface specification that enables pluggable storage drivers; CSI drivers provision and attach volumes, but CSI itself is not a “storage operator” with the broader self-managing operator semantics described here.
So, for “storage operator that can help with self-* behaviors,” Rook is the correct choice.
=========
Which Kubernetes resource workload ensures that all (or some) nodes run a copy of a Pod?
Options:
DaemonSet
StatefulSet
kubectl
Deployment
Answer:
A
Explanation:
A DaemonSet is the workload controller that ensures a Pod runs on all nodes or on a selected subset of nodes, so A is correct. DaemonSets are used for node-level agents and infrastructure components that must be present everywhere—examples include log collectors, monitoring agents, storage daemons, CNI components, and node security tools.
The DaemonSet controller watches for node additions/removals. When a new node joins the cluster, Kubernetes automatically schedules a new DaemonSet Pod onto that node (subject to constraints such as node selectors, affinities, and taints/tolerations). When a node is removed, its DaemonSet Pod naturally disappears with it. This creates the “one per node” behavior that differentiates DaemonSets from other workload types.
A Deployment manages a replica count across the cluster, not “one per node.” A StatefulSet manages stable identity and ordered operations for stateful replicas; it does not inherently map one Pod to every node. kubectl is a CLI tool and not a workload resource.
DaemonSets can also be scoped: by using node selectors, node affinity, and tolerations, you can ensure Pods run only on GPU nodes, only on Linux nodes, only in certain zones, or only on nodes with a particular label. That’s why the question says “all (or some) nodes.”
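A sketch of such a scoped DaemonSet (the agent name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                # illustrative node-level agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      nodeSelector:
        kubernetes.io/os: linux  # "some nodes": restrict to Linux nodes only
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule       # also run on tainted control-plane nodes
      containers:
      - name: agent
        image: fluent/fluent-bit:2.2   # illustrative image
```

As matching nodes join the cluster, the controller schedules one copy of this Pod onto each of them.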
Therefore, the correct and verified answer is DaemonSet (A).
=========
What sentence is true about CronJobs in Kubernetes?
Options:
A CronJob creates one or multiple Jobs on a repeating schedule.
A CronJob creates one container on a repeating schedule.
CronJobs are useful on Linux but are obsolete in Kubernetes.
The CronJob schedule format is different in Kubernetes and Linux.
Answer:
A
Explanation:
The true statement is A: a Kubernetes CronJob creates Jobs on a repeating schedule. CronJob is a controller designed for time-based execution. You define a schedule using standard cron syntax (minute, hour, day-of-month, month, day-of-week), and when the schedule triggers, the CronJob controller creates a Job object. Then the Job controller creates one or more Pods to run the task to completion.
Option B is incorrect because CronJobs do not “create one container”; they create Jobs, and Jobs create Pods (which may contain one or multiple containers). Option C is wrong because CronJobs are a core Kubernetes workload primitive for recurring tasks and remain widely used for periodic work like backups, batch processing, and cleanup. Option D is wrong because Kubernetes CronJobs intentionally use cron-like scheduling expressions; the format aligns with the cron concept (with Kubernetes-specific controller behavior around missed runs, concurrency, and history).
CronJobs also provide operational controls you don’t get from plain Linux cron on a node:
concurrencyPolicy (Allow/Forbid/Replace) to manage overlapping runs
startingDeadlineSeconds to control how missed schedules are handled
history limits for successful/failed Jobs to avoid clutter
integration with Kubernetes RBAC, Secrets, ConfigMaps, and volumes for consistent runtime configuration
consistent execution environment via container images, not ad-hoc node scripts
Because the CronJob creates Jobs as first-class API objects, you get observability (events/status), predictable retries, and lifecycle management. That’s why the accurate statement is A.
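Several of these controls appear in a minimal CronJob sketch (name, image, and schedule are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup            # illustrative
spec:
  schedule: "0 2 * * *"           # standard cron syntax: daily at 02:00
  concurrencyPolicy: Forbid       # skip a run if the previous Job is still active
  startingDeadlineSeconds: 300    # how late a missed run may still start
  successfulJobsHistoryLimit: 3   # keep only recent completed Jobs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: registry.example.com/backup:1.0   # hypothetical image
```

Each trigger creates a Job object, and that Job creates the Pod(s) that run to completion, matching statement A.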
=========
What is the primary purpose of a Horizontal Pod Autoscaler (HPA) in Kubernetes?
Options:
To automatically scale the number of Pod replicas based on resource utilization.
To track performance metrics and report health status for nodes and Pods.
To coordinate rolling updates of Pods when deploying new application versions.
To allocate and manage persistent volumes required by stateful applications.
Answer:
A
Explanation:
The Horizontal Pod Autoscaler (HPA) is a core Kubernetes feature designed to automatically scale the number of Pod replicas in a workload based on observed metrics, making option A the correct answer. Its primary goal is to ensure that applications can handle varying levels of demand while maintaining performance and resource efficiency.
HPA works by continuously monitoring metrics such as CPU utilization, memory usage, or custom and external metrics provided through the Kubernetes metrics APIs. Based on target thresholds defined by the user, the HPA increases or decreases the number of replicas in a scalable resource like a Deployment, ReplicaSet, or StatefulSet. When demand increases, HPA adds more Pods to handle the load. When demand decreases, it scales down Pods to free resources and reduce costs.
Option B is incorrect because tracking performance metrics and reporting health status is handled by components such as the metrics-server, monitoring systems, and observability tools—not by the HPA itself. Option C is incorrect because rolling updates are managed by Deployment strategies, not by the HPA. Option D is incorrect because persistent volume management is handled by Kubernetes storage resources and CSI drivers, not by autoscalers.
HPA operates at the Pod replica level, which is why it is called “horizontal” scaling—scaling out or in by changing the number of Pods, rather than adjusting resource limits of individual Pods (which would be vertical scaling). This makes HPA particularly effective for stateless applications that can scale horizontally to meet demand.
In practice, HPA is commonly used in production Kubernetes environments to maintain application responsiveness under load while optimizing cluster resource usage. It integrates seamlessly with Kubernetes’ declarative model and self-healing mechanisms.
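The behavior above can be sketched with a minimal autoscaling/v2 HPA targeting a hypothetical Deployment named web:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                   # illustrative
spec:
  scaleTargetRef:                 # the workload whose replica count HPA adjusts
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # scale out when average CPU exceeds 70% of requests
```

The controller adjusts spec.replicas on the target between the min and max bounds to hold the observed metric near the target.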
Therefore, the correct and verified answer is Option A, as the Horizontal Pod Autoscaler’s primary function is to automatically scale Pod replicas based on resource utilization and defined metrics.
=========
What is the primary mechanism to identify grouped objects in a Kubernetes cluster?
Options:
Custom Resources
Labels
Label Selector
Pod
Answer:
B
Explanation:
Kubernetes groups and organizes objects primarily using labels, so B is correct. Labels are key-value pairs attached to objects (Pods, Deployments, Services, Nodes, etc.) and are intended to be used for identifying, selecting, and grouping resources in a flexible, user-defined way.
Labels enable many core Kubernetes behaviors. For example, a Service selects the Pods that should receive traffic by matching a label selector against Pod labels. A Deployment’s ReplicaSet similarly uses label selectors to determine which Pods belong to the replica set. Operators and platform tooling also rely on labels to group resources by application, environment, team, or cost center. This is why labeling is considered foundational Kubernetes hygiene: consistent labels make automation, troubleshooting, and governance easier.
A “label selector” (option C) is how you query/group objects based on labels, but the underlying primary mechanism is still the labels themselves. Without labels applied to objects, selectors have nothing to match. Custom Resources (option A) extend the API with new kinds, but they are not the primary grouping mechanism across the cluster. “Pod” (option D) is a workload unit, not a grouping mechanism.
Practically, Kubernetes recommends common label keys like app.kubernetes.io/name, app.kubernetes.io/instance, and app.kubernetes.io/part-of to standardize grouping. Those conventions improve interoperability with dashboards, GitOps tooling, and policy engines.
So, when the question asks for the primary mechanism used to identify grouped objects in Kubernetes, the most accurate answer is Labels (B)—they are the universal metadata primitive used to group and select resources.
=========
Which of the following statements is correct concerning Open Policy Agent (OPA)?
Options:
The policies must be written in Python language.
Kubernetes can use it to validate requests and apply policies.
Policies can only be tested when published.
It cannot be used outside Kubernetes.
Answer:
B
Explanation:
Open Policy Agent (OPA) is a general-purpose policy engine used to define and enforce policy across different systems. In Kubernetes, OPA is commonly integrated through admission control (often via Gatekeeper or custom admission webhooks) to validate and/or mutate requests before they are persisted in the cluster. This makes B correct: Kubernetes can use OPA to validate API requests and apply policy decisions.
Kubernetes’ admission chain is where policy enforcement naturally fits. When a user or controller submits a request (for example, to create a Pod), the API server can call external admission webhooks. Those webhooks can evaluate the request against policy—such as “no privileged containers,” “images must come from approved registries,” “labels must include cost-center,” or “Ingress must enforce TLS.” OPA’s policy language (Rego) allows expressing these rules in a declarative form, and the decision (“allow/deny” and sometimes patches) is returned to the API server. This enforces governance consistently and centrally.
Option A is incorrect because OPA policies are written in Rego, not Python. Option C is incorrect because policies can be tested locally and in CI pipelines before deployment; in fact, testability is a key advantage. Option D is incorrect because OPA is designed to be platform-agnostic—it can be used with APIs, microservices, CI/CD pipelines, service meshes, and infrastructure tools, not only Kubernetes.
From a Kubernetes fundamentals view, OPA complements RBAC: RBAC answers “who can do what to which resources,” while OPA-style admission policies answer “even if you can create this resource, does it meet our organizational rules?” Together they help implement defense in depth: authentication + authorization + policy admission + runtime security controls. That is why OPA is widely used to enforce security and compliance requirements in Kubernetes environments.
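As a sketch of OPA-based admission in Kubernetes: assuming Gatekeeper is installed along with its commonly published K8sRequiredLabels ConstraintTemplate, a constraint enforcing a required label might look like this (names are illustrative):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels           # assumes this ConstraintTemplate is installed
metadata:
  name: require-cost-center
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]        # apply the policy to Namespace objects
  parameters:
    labels: ["cost-center"]       # deny Namespaces created without this label
```

The underlying Rego in the ConstraintTemplate evaluates each admission request, and the API server rejects objects the policy denies.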
=========
What is the purpose of the kube-proxy?
Options:
The kube-proxy balances network requests to Pods.
The kube-proxy maintains network rules on nodes.
The kube-proxy ensures the cluster connectivity with the internet.
The kube-proxy maintains the DNS rules of the cluster.
Answer:
B
Explanation:
The correct answer is B: kube-proxy maintains network rules on nodes. kube-proxy is a node component that implements part of the Kubernetes Service abstraction. It watches the Kubernetes API for Service and EndpointSlice/Endpoints changes, and then programs the node’s dataplane rules (commonly iptables or IPVS, depending on configuration) so that traffic sent to a Service virtual IP and port is correctly forwarded to one of the backing Pod endpoints.
This is how Kubernetes provides stable Service addresses even though Pod IPs are ephemeral. When Pods scale up/down or are replaced during a rollout, endpoints change; kube-proxy updates the node rules accordingly. From the perspective of a client, the Service name and ClusterIP remain stable, while the actual backend endpoints are load-distributed.
Option A is a tempting phrasing but incomplete: load distribution is an outcome of the forwarding rules, but kube-proxy’s primary role is maintaining the network forwarding rules that make Services work. Option C is incorrect because internet connectivity depends on cluster networking, routing, NAT, and often CNI configuration—not kube-proxy’s job description. Option D is incorrect because DNS is typically handled by CoreDNS; kube-proxy does not “maintain DNS rules.”
Operationally, kube-proxy failures often manifest as Service connectivity issues: Pod-to-Service traffic fails, ClusterIP routing breaks, NodePort behavior becomes inconsistent, or endpoints aren’t updated correctly. Modern Kubernetes environments sometimes replace kube-proxy with eBPF-based dataplanes, but in the classic architecture the correct statement remains: kube-proxy runs on each node and maintains the rules needed for Service traffic steering.
=========
What are the initial namespaces that Kubernetes starts with?
Options:
default, kube-system, kube-public, kube-node-lease
default, system, kube-public
kube-default, kube-system, kube-main, kube-node-lease
kube-default, system, kube-main, kube-primary
Answer:
A
Explanation:
Kubernetes creates a set of namespaces by default when a cluster is initialized. The standard initial namespaces are default, kube-system, kube-public, and kube-node-lease, making A correct.
default is the namespace where resources are created if you don’t specify another namespace. Many quick-start examples deploy here, though production environments typically use dedicated namespaces per app/team.
kube-system contains objects created and managed by Kubernetes system components (control plane add-ons, system Pods, controllers, DNS components, etc.). It’s a critical namespace, and access is typically restricted.
kube-public is readable by all users (including unauthenticated users in some configurations) and is intended for public cluster information, though it’s used sparingly in many environments.
kube-node-lease holds Lease objects used for node heartbeats. This improves scalability by reducing load on etcd compared to older heartbeat mechanisms and helps the control plane track node liveness efficiently.
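For illustration, a node heartbeat Lease in that namespace looks roughly like the fragment below (the node name and timestamp are representative values, not from a real cluster); the kubelet renews renewTime on each heartbeat:

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: node-1                 # matches the Node object's name
  namespace: kube-node-lease
spec:
  holderIdentity: node-1
  leaseDurationSeconds: 40
  renewTime: "2024-05-01T10:00:00.000000Z"   # updated by the kubelet heartbeat
```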
The incorrect options contain non-standard namespace names like “system,” “kube-main,” or “kube-primary,” and “kube-default” is not a real default namespace. Kubernetes’ built-in namespace set is well-documented and consistent with typical cluster bootstraps.
Understanding these namespaces matters operationally: system workloads and controllers often live in kube-system, and many troubleshooting steps involve inspecting Pods and events there. Meanwhile, kube-node-lease is key to node health tracking, and default is the catch-all if you forget to specify -n.
So, the verified answer is A: default, kube-system, kube-public, kube-node-lease.
=========
Which of the following is a definition of Hybrid Cloud?
Options:
A combination of services running in public and private data centers, only including data centers from the same cloud provider.
A cloud native architecture that uses services running in public clouds, excluding data centers in different availability zones.
A cloud native architecture that uses services running in different public and private clouds, including on-premises data centers.
A combination of services running in public and private data centers, excluding serverless functions.
Answer:
C
Explanation:
A hybrid cloud architecture combines public cloud and private/on-premises environments, often spanning multiple infrastructure domains while maintaining some level of portability, connectivity, and unified operations. Option C captures the commonly accepted definition: services run across public and private clouds, including on-premises data centers, so C is correct.
Hybrid cloud is not limited to a single cloud provider (which is why A is too restrictive). Many organizations adopt hybrid cloud to meet regulatory requirements, data residency constraints, latency needs, or to preserve existing investments while still using public cloud elasticity. In Kubernetes terms, hybrid strategies often include running clusters both on-prem and in one or more public clouds, then standardizing deployment through Kubernetes APIs, GitOps, and consistent security/observability practices.
Option B is incorrect because excluding data centers in different availability zones is not a defining property; in fact, hybrid deployments commonly use multiple zones/regions for resilience. Option D is a distraction: serverless inclusion or exclusion does not define hybrid cloud. Hybrid is about the combination of infrastructure environments, not a specific compute model.
A practical cloud-native view is that hybrid architectures introduce challenges around identity, networking, policy enforcement, and consistent observability across environments. Kubernetes helps because it provides a consistent control plane API and workload model regardless of where it runs. Tools like service meshes, federated identity, and unified monitoring can further reduce fragmentation.
So, the most accurate definition in the given choices is C: hybrid cloud combines public and private clouds, including on-premises infrastructure, to run services in a coordinated architecture.
=========
Which control plane component is responsible for updating the node Ready condition if a node becomes unreachable?
Options:
The kube-proxy
The node controller
The kubectl
The kube-apiserver
Answer:
B
Explanation:
The correct answer is B: the node controller. In Kubernetes, node health is monitored and reflected through Node conditions such as Ready. The Node Controller (a controller that runs as part of the control plane, within the controller-manager) is responsible for monitoring node heartbeats and updating node status when a node becomes unreachable or unhealthy.
Nodes periodically report status (including kubelet heartbeats) to the API server. The Node Controller watches these updates. If it detects that a node has stopped reporting within expected time windows, it marks the node condition Ready as Unknown (or otherwise updates conditions) to indicate the control plane can’t confirm node health. This status change then influences higher-level behaviors such as Pod eviction and rescheduling: after grace periods and eviction timeouts, Pods on an unhealthy node may be evicted so the workload can be recreated on healthy nodes (assuming a controller manages replicas).
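A trimmed sketch of what the Node status looks like after the node controller can no longer confirm health (timestamps are illustrative):

```yaml
# Fragment of a Node object's status (illustrative values)
status:
  conditions:
    - type: Ready
      status: "Unknown"        # set by the node controller when heartbeats stop
      reason: NodeStatusUnknown
      message: Kubelet stopped posting node status.
      lastHeartbeatTime: "2024-05-01T10:00:00Z"
      lastTransitionTime: "2024-05-01T10:01:00Z"
```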
Option A (kube-proxy) is a node component for Service traffic routing and does not manage node health conditions. Option C (kubectl) is a CLI client; it does not participate in control plane health monitoring. Option D (kube-apiserver) stores and serves Node status, but it doesn’t decide when a node is unreachable; it persists what controllers and kubelets report. The “decision logic” for updating the Ready condition in response to missing heartbeats is the Node Controller’s job.
So, the component that updates the Node Ready condition when a node becomes unreachable is the node controller, which is option B.
=========
Which of the following capabilities are you allowed to add to a container using the Restricted policy?
Options:
CHOWN
SYS_CHROOT
SETUID
NET_BIND_SERVICE
Answer:
D
Explanation:
Under the Kubernetes Pod Security Standards (PSS), the Restricted profile is the most locked-down baseline intended to reduce container privilege and host attack surface. In that profile, adding Linux capabilities is generally prohibited except for very limited cases. Among the listed capabilities, NET_BIND_SERVICE is the one commonly permitted in restricted-like policies, so D is correct.
NET_BIND_SERVICE allows a process to bind to “privileged” ports below 1024 (like 80/443) without running as root. This aligns with restricted security guidance: applications should run as non-root, but still sometimes need to listen on standard ports. Allowing NET_BIND_SERVICE enables that pattern without granting broad privileges.
The other capabilities listed are more sensitive and not allowed in the Restricted profile: CHOWN allows changing file ownership, SETUID allows a process to manipulate user IDs (a classic privilege-escalation vector), and SYS_CHROOT permits the chroot() system call to change a process's apparent filesystem root. In hardened Kubernetes environments, these are disallowed because they increase the risk of privilege escalation or container breakout paths, especially if combined with other misconfigurations.
A practical note: exact enforcement depends on the cluster’s admission configuration (e.g., the built-in Pod Security Admission controller) and any additional policy engines (OPA/Gatekeeper). But the security intent of “Restricted” is consistent: run as non-root, disallow privilege escalation, restrict capabilities, and lock down host access. NET_BIND_SERVICE is a well-known exception used to support common application networking needs while staying non-root.
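A minimal sketch of a Restricted-compliant Pod (name and image are placeholders): all capabilities are dropped, and only NET_BIND_SERVICE is added back so the container can listen on port 80 without running as root:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.25    # placeholder image
      ports:
        - containerPort: 80
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        seccompProfile:
          type: RuntimeDefault
        capabilities:
          drop: ["ALL"]               # Restricted requires dropping all capabilities
          add: ["NET_BIND_SERVICE"]   # the only capability Restricted permits adding
```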
So, the verified correct choice for an allowed capability in Restricted among these options is D: NET_BIND_SERVICE.
=========
What does the "nodeSelector" within a PodSpec use to place Pods on the target nodes?
Options:
Annotations
IP Addresses
Hostnames
Labels
Answer:
D
Explanation:
nodeSelector is a simple scheduling constraint that matches node labels, so the correct answer is D (Labels). In Kubernetes, nodes have key/value labels (for example, disktype=ssd, topology.kubernetes.io/zone=us-east-1a, kubernetes.io/os=linux). When you set spec.nodeSelector in a Pod template, you provide a map of required label key/value pairs. The kube-scheduler will then only consider nodes that have all those labels with matching values as eligible placement targets for that Pod.
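A minimal sketch (Pod name, image, and the disktype label are illustrative): this Pod is only schedulable onto nodes carrying both labels with these exact values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-app            # placeholder name
spec:
  nodeSelector:
    disktype: ssd          # node must have this exact label key/value
    kubernetes.io/os: linux
  containers:
    - name: app
      image: nginx:1.25    # placeholder image
```

A node gains such a label via `kubectl label nodes <node-name> disktype=ssd`; until a matching node exists, the Pod stays Pending.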
This is different from annotations: annotations are also key/value metadata, but they are not intended for selection logic and are not used by the scheduler for nodeSelector. IP addresses and hostnames are not the mechanism used by nodeSelector either. While Kubernetes nodes do have hostnames and IPs, nodeSelector specifically operates on labels because labels are designed for selection, grouping, and placement constraints.
Operationally, nodeSelector is the most basic form of node placement control. It is commonly used to pin workloads to specialized hardware (GPU nodes), compliance zones, or certain OS/architecture pools. However, it has limitations: it only supports exact match on labels and cannot express more complex rules (like “in this set of zones” or “prefer but don’t require”). For that, Kubernetes offers node affinity (requiredDuringSchedulingIgnoredDuringExecution, preferredDuringSchedulingIgnoredDuringExecution) which supports richer expressions.
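For contrast, those richer rules can be sketched with node affinity (the zone values and disktype label below are illustrative): a hard requirement on a set of zones plus a soft preference for SSD nodes:

```yaml
# Fragment of a PodSpec using node affinity instead of nodeSelector
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["us-east-1a", "us-east-1b"]   # "in this set of zones"
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 50
          preference:
            matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]                        # "prefer but don't require"
```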
Still, the underlying mechanism is the same concept: the scheduler evaluates your Pod’s placement requirements against node metadata, and for nodeSelector, that metadata is labels. Therefore, the verified correct answer is D.
=========
Which mechanism allows extending the Kubernetes API?
Options:
ConfigMap
CustomResourceDefinition
MutatingAdmissionWebhook mechanism
Kustomize
Answer:
B
Explanation:
The correct answer is B: CustomResourceDefinition (CRD). Kubernetes is designed to be extensible. A CRD lets you define your own resource types (custom API objects) that behave like native Kubernetes resources: they can be created with YAML, stored in etcd, retrieved via the API server, and managed using kubectl. For example, operators commonly define CRDs such as Databases, RedisClusters, or Certificates to model higher-level application concepts.
A CRD extends the API by adding a new kind under a group/version (e.g., example.com/v1). You typically pair CRDs with a controller (often called an operator) that watches these custom objects and reconciles real-world resources (Deployments, StatefulSets, cloud resources) to match the desired state specified in the CRD instances. This is the same control-loop pattern used for built-in controllers—just applied to your custom domain.
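As a minimal sketch (the Database kind, example.com group, and spec fields are hypothetical), here is a CRD followed by an instance of the new type; once the CRD is registered, the instance can be created and listed with kubectl like any native resource:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com     # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:          # OpenAPI v3 validation for instances
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                replicas:
                  type: integer
---
# An instance of the new custom type, reconciled by a hypothetical operator
apiVersion: example.com/v1
kind: Database
metadata:
  name: orders-db
spec:
  engine: postgres
  replicas: 3
```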
Why the other options aren’t correct: ConfigMaps store configuration data but do not add new API types. A MutatingAdmissionWebhook can modify or validate requests for existing resources, but it doesn’t define new API kinds; it enforces policy or injects defaults. Kustomize is a manifest customization tool (patch/overlay) and doesn’t extend the Kubernetes API surface.
CRDs are foundational to much of the Kubernetes ecosystem: cert-manager, Argo, Istio, and many operators rely heavily on CRDs. They also support schema validation via OpenAPI v3 schemas, which improves safety and tooling (better error messages, IDE hints). Therefore, the mechanism for extending the Kubernetes API is CustomResourceDefinition, option B.
=========