A Pod named my-app must be created to run a simple nginx container. Which kubectl command should be used?
kubectl create nginx --name=my-app
kubectl run my-app --image=nginx
kubectl create my-app --image=nginx
kubectl run nginx --name=my-app
In Kubernetes, the simplest and most direct way to create a Pod that runs a single container is to use the kubectl run command with the appropriate image specification. The command kubectl run my-app --image=nginx explicitly instructs Kubernetes to create a Pod named my-app using the nginx container image, which makes option B the correct answer.
The kubectl run command is designed to quickly create and run a Pod from the command line. (In older kubectl versions, flags such as --restart=Always selected higher-level workload resources via generators; modern kubectl run always creates a standalone Pod.) This makes it ideal for simple use cases like testing, demonstrations, or learning scenarios where only a single container is required.
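As a rough sketch, the Pod created by kubectl run my-app --image=nginx is approximately equivalent to applying a manifest like the following (the run=my-app label is the one kubectl run attaches by default):

```yaml
# Approximately what `kubectl run my-app --image=nginx` generates
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    run: my-app          # kubectl run adds a run=<name> label
spec:
  containers:
  - name: my-app
    image: nginx
  restartPolicy: Always  # the default restart policy for Pods
```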
Option A is incorrect because kubectl create nginx --name=my-app is not valid syntax; the create subcommand requires a resource type (such as pod, deployment, or service) or a manifest file. Option C is also incorrect because kubectl create my-app --image=nginx omits the resource type and is therefore not a valid kubectl create command. Option D is incorrect because kubectl run nginx --name=my-app passes --name, which is not a valid kubectl run flag; kubectl run takes the resource name as its positional argument, so this command fails rather than creating a Pod named my-app.
Using kubectl run with explicit naming and image flags is consistent with Kubernetes command-line conventions and is widely documented as the correct approach for creating simple Pods. The resulting Pod can be verified using commands such as kubectl get pods and kubectl describe pod my-app.
In summary, Option B is the correct and verified answer because it uses valid kubectl syntax to create a Pod named my-app running the nginx container image in a straightforward and predictable way.
=========
What is ephemeral storage?
Storage space that need not persist across restarts.
Storage that may grow dynamically.
Storage used by multiple consumers (e.g., multiple Pods).
Storage that is always provisioned locally.
The correct answer is A: ephemeral storage is non-persistent storage whose data does not need to survive Pod restarts or rescheduling. In Kubernetes, ephemeral storage typically refers to storage tied to the Pod’s lifetime—such as the container writable layer, emptyDir volumes, and other temporary storage types. When a Pod is deleted or moved to a different node, that data is generally lost.
This is different from persistent storage, which is backed by PersistentVolumes and PersistentVolumeClaims and is designed to outlive individual Pod instances. Ephemeral storage is commonly used for caches, scratch space, temporary files, and intermediate build artifacts—data that can be recreated and is not the authoritative system of record.
Option B is incorrect because “may grow dynamically” describes an allocation behavior, not the defining characteristic of ephemeral storage. Option C is incorrect because serving multiple consumers concerns access semantics (ReadWriteMany and similar access modes) and shared volumes, not ephemerality. Option D is incorrect because ephemeral storage is not “always provisioned locally” in a strict sense; while many ephemeral forms are local to the node, the definition is about lifecycle and persistence guarantees, not physical locality.
Operationally, ephemeral storage is an important scheduling and reliability consideration. Pods can request/limit ephemeral storage similarly to CPU/memory, and nodes can evict Pods under disk pressure. Mismanaged ephemeral storage (logs written to the container filesystem, runaway temp files) can cause node disk exhaustion and cascading failures. Best practices include shipping logs off-node, using emptyDir intentionally with size limits where supported, and using persistent volumes for state that must survive restarts.
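A minimal sketch of these practices in a Pod spec, using illustrative names and sizes: ephemeral-storage requests/limits for scheduling and eviction accounting, plus an emptyDir volume with an explicit sizeLimit as intentional scratch space.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo           # illustrative name
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        ephemeral-storage: "1Gi"   # the scheduler accounts for this
      limits:
        ephemeral-storage: "2Gi"   # exceeding this can trigger eviction
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 500Mi             # data here is lost when the Pod is deleted
```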
So, ephemeral storage is best defined as storage that does not need to persist across restarts/rescheduling, matching option A.
=========
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called:
Namespaces
Containers
Hypervisors
cgroups
Kubernetes provides “virtual clusters” within a single physical cluster primarily through Namespaces, so A is correct. Namespaces are a logical partitioning mechanism that scopes many Kubernetes resources (Pods, Services, Deployments, ConfigMaps, Secrets, etc.) into separate environments. This enables multiple teams, applications, or environments (dev/test/prod) to share a cluster while keeping their resource names and access controls separated.
Namespaces are often described as “soft multi-tenancy.” They don’t provide full isolation like separate clusters, but they do allow administrators to apply controls per namespace:
RBAC rules can grant different permissions per namespace (who can read Secrets, who can deploy workloads, etc.).
ResourceQuotas and LimitRanges can enforce fair usage and prevent one namespace from consuming all cluster resources.
NetworkPolicies can isolate traffic between namespaces (depending on the CNI).
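As an example of the second control above, a ResourceQuota caps what a single namespace can consume (names and values here are illustrative):

```yaml
# Example ResourceQuota limiting a hypothetical team-a namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"        # total CPU requests across all Pods
    requests.memory: 8Gi
    limits.cpu: "8"          # total CPU limits across all Pods
    limits.memory: 16Gi
    pods: "20"               # maximum number of Pods in the namespace
```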
Containers are runtime units inside Pods and are not “virtual clusters.” Hypervisors are virtualization components for VMs, not Kubernetes partitioning constructs. cgroups are Linux kernel primitives for resource control, not Kubernetes virtual cluster constructs.
While there are other “virtual cluster” approaches (like vcluster projects) that create stronger virtualized control planes, the built-in Kubernetes mechanism referenced by this question is namespaces. Therefore, the correct answer is A: Namespaces.
=========
What happens if only a limit is specified for a resource and no admission-time mechanism has applied a default request?
Kubernetes will create the container but it will fail with CrashLoopBackOff.
Kubernetes does not allow containers to be created without request values, causing eviction.
Kubernetes copies the specified limit and uses it as the requested value for the resource.
Kubernetes chooses a random value and uses it as the requested value for the resource.
In Kubernetes, resource management for containers is based on requests and limits. Requests represent the minimum amount of CPU or memory required for scheduling decisions, while limits define the maximum amount a container is allowed to consume at runtime. Understanding how Kubernetes behaves when only a limit is specified is important for predictable scheduling and resource utilization.
If a container specifies a resource limit but does not explicitly specify a resource request, Kubernetes applies a well-defined default behavior. In this case, Kubernetes automatically sets the request equal to the specified limit. This behavior ensures that the scheduler has a concrete request value to use when deciding where to place the Pod. Without a request value, the scheduler would not be able to make accurate placement decisions, as scheduling is entirely request-based.
This defaulting behavior applies independently to each resource type, such as CPU and memory. For example, if a container sets a memory limit of 512Mi but does not define a memory request, Kubernetes treats the memory request as 512Mi as well. The same applies to CPU limits. As a result, the Pod is scheduled as if it requires the full amount of resources defined by the limit.
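The example above corresponds to a spec like the following sketch, where only limits are declared:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limit-only          # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        cpu: "500m"
        memory: 512Mi
      # No requests block: the API server defaults requests.cpu to 500m
      # and requests.memory to 512Mi, so the scheduler treats the Pod
      # as needing the full amounts defined by the limits.
```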
Option A is incorrect because specifying only a limit does not cause a container to crash or enter CrashLoopBackOff. CrashLoopBackOff is related to application failures, not resource specification defaults. Option B is incorrect because Kubernetes allows containers to be created without explicit requests, relying on defaulting behavior instead. Option D is incorrect because Kubernetes never assigns random values for resource requests.
This behavior is clearly defined in Kubernetes resource management documentation and is especially relevant when admission controllers like LimitRange are not applying default requests. While valid, relying solely on limits can reduce cluster efficiency, as Pods may reserve more resources than they actually need. Therefore, best practice is to explicitly define both requests and limits.
Thus, the correct and verified answer is Option C.
=========
Which GitOps engine can be used to orchestrate parallel jobs on Kubernetes?
Jenkins X
Flagger
Flux
Argo Workflows
Argo Workflows (D) is the correct answer because it is a Kubernetes-native workflow engine designed to define and run multi-step workflows—often with parallelization—directly on Kubernetes. Argo Workflows models workflows as DAGs (directed acyclic graphs) or step-based sequences, where each step is typically a Pod. Because each step is expressed as Kubernetes resources (custom resources), Argo can schedule many tasks concurrently, control fan-out/fan-in patterns, and manage dependencies between steps (e.g., “run these 10 jobs in parallel, then aggregate results”).
The question calls it a “GitOps engine,” but the capability being tested is “orchestrate parallel jobs.” Argo Workflows fits because it is purpose-built for running complex job orchestration, including parallel tasks, retries, timeouts, artifacts passing, and conditional execution. In practice, many teams store workflow manifests in Git and apply GitOps practices around them, but the distinguishing feature here is the workflow orchestration engine itself.
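A minimal fan-out/fan-in sketch using the Argo Workflows custom resource (template and task names here are illustrative): job-a and job-b have no dependencies, so they run in parallel, and aggregate waits for both.

```yaml
# Illustrative Argo Workflow: two parallel jobs, then an aggregation step
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: parallel-demo-
spec:
  entrypoint: main
  templates:
  - name: main
    dag:
      tasks:
      - name: job-a
        template: work
      - name: job-b
        template: work                  # no dependencies: runs alongside job-a
      - name: aggregate
        template: work
        dependencies: [job-a, job-b]    # fan-in after both finish
  - name: work
    container:
      image: busybox
      command: ["echo", "done"]
```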
Why the other options are not best:
Flux (C) is a GitOps controller that reconciles cluster state from Git; it doesn’t orchestrate parallel job graphs as its core function.
Flagger (B) is a progressive delivery operator (canary/blue-green) often paired with GitOps and service meshes/Ingress; it’s not a general workflow orchestrator for parallel batch jobs.
Jenkins X (A) is CI/CD-focused (pipelines), not primarily a Kubernetes-native workflow engine for parallel job DAGs in the way Argo Workflows is.
So, the Kubernetes-native tool specifically used to orchestrate parallel jobs and workflows is Argo Workflows (D).
=========
What’s the difference between a security profile and a security context?
Security Contexts configure Clusters and Namespaces at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
Security Contexts configure Pods and Containers at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
Security Profiles configure Pods and Containers at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
Security Profiles configure Clusters and Namespaces at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
The correct answer is B. In Kubernetes, a securityContext is part of the Pod and container specification that configures runtime security settings for that workload—things like runAsUser, runAsNonRoot, Linux capabilities, readOnlyRootFilesystem, allowPrivilegeEscalation, SELinux options, seccomp profile selection, and filesystem group (fsGroup). These settings directly affect how the Pod’s containers run on the node.
A security profile, in contrast, is a higher-level policy/standard enforced by the cluster control plane (typically via admission control) to ensure workloads meet required security constraints. In modern Kubernetes, this concept aligns with mechanisms like Pod Security Standards (Privileged, Baseline, Restricted) enforced through Pod Security Admission. The “profile” defines what is allowed or forbidden (for example, disallow privileged containers, disallow hostPath mounts, require non-root, restrict capabilities). The control plane enforces these constraints by validating or rejecting Pod specs that do not comply—ensuring consistent security posture across namespaces and teams.
Options A and D are incorrect because security contexts do not “configure clusters and namespaces at runtime”; security contexts apply to Pods and containers. Option C reverses the relationship: security profiles don’t configure Pods at runtime; they constrain which security context settings (and other fields) are acceptable.
Practically, you can think of it as:
SecurityContext = workload-level configuration knobs (declared in manifests, applied at runtime).
SecurityProfile/Standards = cluster-level guardrails that determine which knobs/settings are permitted.
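The two layers above can be sketched side by side: a namespace label that tells Pod Security Admission to enforce the Restricted standard (the guardrail), and a Pod whose securityContext declares settings that comply with it (names are illustrative).

```yaml
# Cluster-level guardrail: Pod Security Admission label on a namespace
apiVersion: v1
kind: Namespace
metadata:
  name: prod                  # illustrative name
  labels:
    pod-security.kubernetes.io/enforce: restricted
---
# Workload-level knobs: a Pod that complies with the Restricted profile
apiVersion: v1
kind: Pod
metadata:
  name: hardened
  namespace: prod
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```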
This separation supports least privilege: developers declare needed runtime settings, and cluster governance ensures those settings stay within approved boundaries. Therefore, B is the verified answer.
=========

Which of the following is a lightweight tool that manages traffic flows between services, enforces access policies, and aggregates telemetry data, all without requiring changes to application code?
NetworkPolicy
Linkerd
kube-proxy
Nginx
Linkerd is a lightweight service mesh that manages service-to-service traffic, security policies, and telemetry without requiring application code changes—so B is correct. A service mesh introduces a dedicated layer for east-west traffic (internal service calls) and typically provides features like mutual TLS (mTLS), retries/timeouts, traffic shaping, and consistent metrics/tracing signals. Linkerd is known for being simpler and resource-efficient relative to some alternatives, which aligns with the “lightweight tool” phrasing.
Why this matches the description: In a service mesh, workload traffic is intercepted by a proxy layer (often as a sidecar or node-level/ambient proxy) and managed centrally by mesh control components. This allows security and traffic policy to be applied uniformly without modifying each microservice. Telemetry is also generated consistently because the proxies observe traffic directly and emit metrics and traces about request rates, latency, and errors.
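The “no application changes” property shows up concretely in how Linkerd is enabled: annotating a namespace (or Pod template) opts workloads into the mesh, and the proxy is injected automatically at admission time. A minimal sketch, with an illustrative namespace name:

```yaml
# Opting a namespace into the Linkerd mesh via automatic proxy injection
apiVersion: v1
kind: Namespace
metadata:
  name: apps                        # illustrative name
  annotations:
    linkerd.io/inject: enabled      # proxies are injected; no app code changes
```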
The other choices don’t fit. NetworkPolicy is a Kubernetes resource that controls allowed network flows (L3/L4) but does not provide L7 traffic management, retries, identity-based mTLS, or automatic telemetry aggregation. kube-proxy implements Service networking rules (ClusterIP/NodePort forwarding) but does not enforce access policies at the service identity level and is not a telemetry system. Nginx can be used as an ingress controller or reverse proxy, but it is not inherently a full service mesh spanning all service-to-service communication and policy/telemetry across the mesh by default.
In cloud native architecture, service meshes help address cross-cutting concerns—security, observability, and traffic management—without embedding that logic into every application. The question’s combination of “traffic flows,” “access policies,” and “aggregates telemetry” maps directly to a mesh, and the lightweight mesh option provided is Linkerd.
=========
Which option represents best practices when building container images?
Use multi-stage builds, use the latest tag for image version, and only install necessary packages.
Use multi-stage builds, pin the base image version to a specific digest, and install extra packages just in case.
Use multi-stage builds, pin the base image version to a specific digest, and only install necessary packages.
Avoid multi-stage builds, use the latest tag for image version, and install extra packages just in case.
Building secure, efficient, and reproducible container images is a core principle of cloud native application delivery. Kubernetes documentation and container security best practices emphasize minimizing image size, reducing attack surface, and ensuring deterministic builds. Option C fully aligns with these principles, making it the correct answer.
Multi-stage builds allow developers to separate the build environment from the runtime environment. Dependencies such as compilers, build tools, and temporary artifacts are used only in intermediate stages and excluded from the final image. This significantly reduces image size and limits the presence of unnecessary tools that could be exploited at runtime.
Pinning the base image to a specific digest ensures immutability and reproducibility. Tags such as latest can change over time, potentially introducing breaking changes or vulnerabilities without notice. By using a digest, teams guarantee that the same base image is used every time the image is built, which is essential for predictable behavior, security auditing, and reliable rollbacks.
Installing only necessary packages further reduces the attack surface. Every additional package increases the risk of vulnerabilities and expands the maintenance burden. Minimal images are faster to pull, quicker to start, and easier to scan for vulnerabilities. Kubernetes security guidance consistently recommends keeping container images as small and purpose-built as possible.
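The three practices combine naturally in a Dockerfile sketch like the one below. The digests are placeholders (shown as <digest>); in a real build they would be the concrete sha256 values of the pinned images, and the Go toolchain/distroless pairing is just one illustrative choice.

```dockerfile
# --- Build stage: compilers and tooling stay out of the final image ---
# Pin the base image by digest (placeholder shown; substitute a real digest)
FROM golang:1.22@sha256:<digest> AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# --- Runtime stage: minimal image with only what the app needs ---
FROM gcr.io/distroless/static@sha256:<digest>
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```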
Option A is incorrect because using the latest tag undermines build determinism and traceability. Option B is incorrect because installing extra packages “just in case” contradicts the principle of minimalism and increases security risk. Option D is incorrect because avoiding multi-stage builds and installing unnecessary packages leads to larger, less secure images and is explicitly discouraged in cloud native best practices.
According to Kubernetes and CNCF security guidance, combining multi-stage builds, immutable image references, and minimal dependencies results in more secure, reliable, and maintainable container images. Therefore, option C represents the best and fully verified approach when building container images.
=========
What is the goal of load balancing?
Automatically measure request performance across instances of an application.
Automatically distribute requests across different versions of an application.
Automatically distribute instances of an application across the cluster.
Automatically distribute requests across instances of an application.
The core goal of load balancing is to distribute incoming requests across multiple instances of a service so that no single instance becomes overloaded and so that the overall service is more available and responsive. That matches option D, which is the correct answer.
In Kubernetes, load balancing commonly appears through the Service abstraction. A Service selects a set of Pods using labels and provides stable access via a virtual IP (ClusterIP) and DNS name. Traffic sent to the Service is then forwarded to one of the healthy backend Pods. This spreads load across replicas and provides resilience: if one Pod fails, it is removed from endpoints (or becomes NotReady) and traffic shifts to remaining replicas. The actual traffic distribution mechanism depends on the networking implementation (kube-proxy using iptables/IPVS or an eBPF dataplane), but the intent remains consistent: distribute requests across multiple backends.
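A minimal Service sketch illustrating this: every healthy Pod matching the selector becomes a backend, and traffic to the Service's virtual IP is spread across them (names and ports are illustrative).

```yaml
# ClusterIP Service spreading traffic across Pods labeled app=web
apiVersion: v1
kind: Service
metadata:
  name: web                 # illustrative name
spec:
  selector:
    app: web                # all healthy Pods with this label become backends
  ports:
  - port: 80                # the Service's port
    targetPort: 8080        # the container port traffic is forwarded to
```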
Option A describes monitoring/observability, not load balancing. Option B describes progressive delivery patterns like canary or A/B routing; that can be implemented with advanced routing layers (Ingress controllers, service meshes), but it’s not the general definition of load balancing. Option C describes scheduling/placement of instances (Pods) across cluster nodes, which is the role of the scheduler and controllers, not load balancing.
In cloud environments, load balancing may also be implemented by external load balancers (cloud LBs) in front of the cluster, then forwarded to NodePorts or ingress endpoints, and again balanced internally to Pods. At each layer, the objective is the same: spread request traffic across multiple service instances to improve performance and availability.
=========
In Kubernetes, what is the primary function of a RoleBinding?
To provide a user or group with permissions across all resources at the cluster level.
To assign the permissions of a Role to a user, group, or service account within a namespace.
To enforce namespace network rules by binding policies to Pods running in the namespace.
To create and define a new Role object that contains a specific set of permissions.
In Kubernetes, authorization is managed using Role-Based Access Control (RBAC), which defines what actions identities can perform on which resources. Within this model, a RoleBinding plays a crucial role by connecting permissions to identities, making option B the correct answer.
A Role defines a set of permissions—such as the ability to get, list, create, or delete specific resources—but by itself, a Role does not grant those permissions to anyone. A RoleBinding is required to bind that Role to a specific subject, such as a user, group, or service account. This binding is namespace-scoped, meaning it applies only within the namespace where the RoleBinding is created. As a result, RoleBindings enable fine-grained access control within individual namespaces, which is essential for multi-tenant and least-privilege environments.
When a RoleBinding is created, it references a Role (or a ClusterRole) and assigns its permissions to one or more subjects within that namespace. This allows administrators to reuse existing roles while precisely controlling who can perform certain actions and where. For example, a RoleBinding can grant a service account read-only access to ConfigMaps in a single namespace without affecting access elsewhere in the cluster.
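The ConfigMap example above can be sketched as a Role/RoleBinding pair (namespace, names, and the service account are illustrative):

```yaml
# A namespaced Role defining read-only ConfigMap permissions...
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: team-a
rules:
- apiGroups: [""]                 # "" means the core API group
  resources: ["configmaps"]
  verbs: ["get", "list"]
---
# ...and a RoleBinding assigning those permissions to a service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-configmaps
  namespace: team-a
subjects:
- kind: ServiceAccount
  name: app-sa                    # illustrative service account
  namespace: team-a
roleRef:
  kind: Role
  name: configmap-reader
  apiGroup: rbac.authorization.k8s.io
```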
Option A is incorrect because cluster-wide permissions are granted using a ClusterRoleBinding, not a RoleBinding. Option C is incorrect because network rules are enforced using NetworkPolicies, not RBAC objects. Option D is incorrect because Roles are defined independently and only describe permissions; they do not assign them to identities.
In summary, a RoleBinding’s primary purpose is to assign the permissions defined in a Role to users, groups, or service accounts within a specific namespace. This separation of permission definition (Role) and permission assignment (RoleBinding) is a fundamental principle of Kubernetes RBAC and is clearly documented in Kubernetes authorization architecture.
