Basic Kubernetes Interview Questions (2025 Edition)
Kubernetes has become the de facto standard for container orchestration, making it an essential skill for anyone entering the DevOps, cloud engineering, or platform engineering space. Whether you’re a hiring manager evaluating entry-level candidates or a professional preparing for your first Kubernetes interview, this guide covers the fundamental concepts that demonstrate core understanding.
Question 1: What is Kubernetes, and what problems does it solve?
Expected Answer: Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It solves several critical problems in modern application deployment:
Problems it solves:
- Container Orchestration: Manages thousands of containers across multiple hosts
- Service Discovery: Automatically finds and connects services
- Load Balancing: Distributes traffic across multiple instances
- Self-healing: Automatically replaces failed containers
- Horizontal Scaling: Scales applications up or down based on demand
- Rolling Updates: Updates applications without downtime
- Resource Management: Efficiently allocates CPU, memory, and storage
Key Concepts to Mention:
- Declarative configuration (desired state vs. current state)
- API-driven architecture
- Cloud-native design principles
- Multi-cloud and hybrid cloud support
Example Response: “Kubernetes is a container orchestration platform that solves the complexity of managing containerized applications at scale. Instead of manually deploying containers on individual servers, Kubernetes provides automation for deployment, scaling, load balancing, and self-healing. It’s like having an intelligent system that ensures your applications are always running, properly distributed, and automatically recover from failures.”
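The declarative model mentioned above can be demonstrated with a short kubectl workflow. This is a sketch: `deployment.yaml` and `my-app` are assumed names, and the commands require access to a running cluster.

```shell
# Declare the desired state in a manifest, then let Kubernetes reconcile it
kubectl apply -f deployment.yaml

# Inspect current state vs. desired state
kubectl get deployments
kubectl describe deployment my-app

# Editing the manifest and re-applying converges the cluster to the new state
kubectl apply -f deployment.yaml
```

The point to make in an interview is that you never tell Kubernetes *how* to reach the desired state; you declare the state and its controllers do the reconciliation.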
Question 2: What is a Pod in Kubernetes?
Expected Answer: A Pod is the smallest deployable unit in Kubernetes. It represents a single instance of a running process in your cluster and can contain one or more containers.
Key Points:
- Atomic Unit: Pods are the basic building blocks
- Shared Resources: Containers in a Pod share network namespace and storage
- Lifecycle: Pods are ephemeral and can be created, destroyed, and recreated
- IP Address: Each Pod gets its own IP address
- Scheduling: Pods are scheduled to nodes by the scheduler
Container Relationship:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: main-app
      image: nginx:latest
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:latest
      command: ['sh', '-c', 'while true; do echo "sidecar running"; sleep 30; done']
```
Example Response: “A Pod is the smallest unit in Kubernetes that can be deployed. Think of it as a wrapper around one or more containers that share the same network namespace, storage, and lifecycle. For example, if you have a web application that needs a logging sidecar, both containers would run in the same Pod so they can communicate via localhost and share the same network identity.”
Question 3: What is the difference between a Deployment and a StatefulSet?
Expected Answer: Deployments and StatefulSets are both controllers that manage Pods, but they serve different purposes based on the application’s state requirements.
Deployment:
- Stateless Applications: Designed for applications that don’t need persistent state
- Random Names: Pods get random names (e.g., app-abc123, app-def456)
- Interchangeable: Any Pod can replace any other Pod
- Rolling Updates: Supports rolling updates and rollbacks
- Scaling: Easy horizontal scaling
StatefulSet:
- Stateful Applications: Designed for applications that need stable, unique identities
- Ordered Names: Pods get predictable names (e.g., app-0, app-1, app-2)
- Stable Network: Each Pod gets a stable network identity
- Ordered Operations: Creates and deletes Pods in order
- Persistent Storage: Each Pod can have its own persistent volume
Example Response: “Deployments are for stateless applications where any instance can handle any request. Think of a web server - you can have 10 instances and it doesn’t matter which one serves a request. StatefulSets are for applications like databases where each instance has a specific role, needs stable network identity, and requires persistent storage. For example, in a Redis cluster, each node needs to know its position and maintain its data.”
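To make the contrast concrete, here is a minimal StatefulSet sketch. The names (`web`, `web-headless`) and the storage size are illustrative assumptions, not values from this guide; the headless Service referenced by serviceName must exist separately.

```yaml
# Illustrative StatefulSet sketch; names and sizes are placeholders
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless   # headless Service giving each Pod a stable DNS name
  replicas: 3                 # creates web-0, web-1, web-2 in order
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:latest
  volumeClaimTemplates:       # each Pod gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ['ReadWriteOnce']
        resources:
          requests:
            storage: 1Gi
```

Note how volumeClaimTemplates gives each replica its own volume, whereas a Deployment's replicas would all share whatever volumes the Pod template declares.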
Question 4: What is a Service, and why do you need one?
Expected Answer: A Service provides a stable network endpoint for accessing a set of Pods. It abstracts the underlying Pod IPs and provides load balancing.
Key Functions:
- Service Discovery: Provides a stable IP address and DNS name
- Load Balancing: Distributes traffic across multiple Pods
- Abstraction: Hides Pod lifecycle from clients
- Port Mapping: Maps service ports to container ports
Service Types:
- ClusterIP: Internal access only (default)
- NodePort: External access via node IP and port
- LoadBalancer: External access via cloud load balancer
- ExternalName: Maps service to external DNS name
Example Configuration:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
```
Example Response: “A Service acts as a stable frontend for your Pods. When Pods are created or destroyed, the Service automatically updates its endpoints to route traffic to the available Pods. It provides load balancing and service discovery. For example, if you have 3 Pods running your web application, the Service will distribute incoming requests across all three Pods and automatically handle Pod failures by routing traffic to healthy Pods.”
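You can observe this abstraction directly. These commands assume a running cluster with the `my-service` Service from the example above in the `default` namespace.

```shell
# The stable virtual IP and port mapping
kubectl get service my-service

# The actual Pod IPs currently backing the Service; this list updates
# automatically as Pods are created and destroyed
kubectl get endpoints my-service

# Inside the cluster, the Service is also reachable via DNS:
#   my-service.default.svc.cluster.local
```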
Question 5: What is the difference between a ConfigMap and a Secret?
Expected Answer: Both ConfigMaps and Secrets are used to store configuration data, but they serve different purposes based on the sensitivity of the data.
ConfigMap:
- Non-sensitive Data: Configuration files, environment variables, command-line arguments
- Plain Text: Data is stored in plain text
- Examples: Database URLs, feature flags, application settings
- Use Cases: Configuration that can be shared or version controlled
Secret:
- Sensitive Data: Passwords, API keys, certificates, tokens
- Base64 Encoded: Data is base64 encoded (not encrypted)
- Examples: Database passwords, OAuth tokens, TLS certificates
- Use Cases: Credentials and sensitive configuration
ConfigMap Example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgresql://db.example.com:5432/mydb"
  log_level: "INFO"
  feature_flags: |
    enable_cache=true
    debug_mode=false
```
Secret Example:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  db_password: cGFzc3dvcmQxMjM=   # base64 encoded
  api_key: YXBpLWtleS1oZXJl
```
Example Response: “ConfigMaps store non-sensitive configuration like database URLs, log levels, or feature flags. Secrets store sensitive data like passwords, API keys, or certificates. The main difference is that Secrets are base64 encoded and treated with more care by Kubernetes. You’d use a ConfigMap for something like ‘database_url’ and a Secret for ‘database_password’. Both can be mounted as environment variables or files in your Pods.”
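Since Secret values are only base64 encoded, it is worth knowing how to produce and inspect them. The value below matches the db_password entry in the Secret example above.

```shell
# Encode a value for the Secret's data field (-n avoids a trailing newline)
echo -n 'password123' | base64
# -> cGFzc3dvcmQxMjM=

# Decode it back — anyone with read access to the Secret can do this,
# which is why base64 is encoding, not encryption
echo 'cGFzc3dvcmQxMjM=' | base64 --decode
# -> password123
```

This is a good interview talking point: real protection for Secrets comes from RBAC and encryption at rest, not from base64.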
Question 6: How do rolling updates work in Kubernetes?
Expected Answer: Rolling updates allow you to update an application without downtime by gradually replacing old Pods with new ones.
Process:
- Gradual Replacement: Updates Pods one by one or in small batches
- Health Checks: Verifies new Pods are healthy before continuing
- Rollback Capability: Can rollback to previous version if issues occur
- Zero Downtime: Ensures service availability throughout the update
Configuration:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # Maximum extra Pods during update
      maxUnavailable: 1   # Maximum unavailable Pods during update
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:v2
```
Example Response: “Rolling updates work by gradually replacing old Pods with new ones. If you have 3 replicas, Kubernetes might update them one at a time. It creates a new Pod with the updated image, waits for it to be healthy, then terminates an old Pod. This continues until all Pods are updated. The key is that there are always enough Pods running to handle traffic, ensuring zero downtime. If something goes wrong, you can quickly rollback to the previous version.”
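In practice, rolling updates are usually driven and monitored with the kubectl rollout subcommands. This is a sketch: `my-app` is the Deployment from the example above, and the commands require a running cluster.

```shell
# Trigger an update by changing the Pod template's image
kubectl set image deployment/my-app app=my-app:v2

# Watch the rollout progress until it completes or fails
kubectl rollout status deployment/my-app

# List previous revisions, and roll back if the new version misbehaves
kubectl rollout history deployment/my-app
kubectl rollout undo deployment/my-app
```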
Question 7: What is the difference between a liveness probe and a readiness probe?
Expected Answer: Probes are health checks that help Kubernetes determine the health and readiness of your application.
Liveness Probe:
- Purpose: Determines if the application is alive and running
- Action: Restarts the Pod if the probe fails
- Use Case: Detects deadlocks, infinite loops, or stuck states
- Frequency: Runs periodically throughout the Pod’s lifecycle
Readiness Probe:
- Purpose: Determines if the application is ready to receive traffic
- Action: Removes Pod from service endpoints if probe fails
- Use Case: Ensures application is fully initialized and ready
- Frequency: Runs periodically; the Pod receives traffic only while the probe passes
Example Configuration:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest
      livenessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5
```
Example Response: “Liveness probes check if your application is alive and should restart the Pod if it fails. For example, if your app gets stuck in a deadlock, the liveness probe would detect this and restart it. Readiness probes check if your app is ready to receive traffic. For example, if your app needs to load configuration or connect to a database, the readiness probe ensures it’s fully initialized before receiving requests. Liveness probes restart Pods, while readiness probes just remove them from the load balancer.”
Question 8: What are kubelet, kube-apiserver, and etcd?
Expected Answer: These are core components of the Kubernetes architecture that work together to manage the cluster.
kubelet:
- Role: Primary node agent that runs on each node
- Responsibilities:
- Manages Pod lifecycle on the node
- Reports node and Pod status to API server
- Executes Pod specifications
- Handles container runtime communication
- Location: Runs on every worker node
kube-apiserver:
- Role: Frontend for the Kubernetes control plane
- Responsibilities:
- Exposes the Kubernetes API
- Validates and processes API requests
- Manages authentication and authorization
- Coordinates all cluster operations
- Location: Runs on control plane nodes
etcd:
- Role: Distributed key-value store that stores all cluster data
- Responsibilities:
- Stores cluster state and configuration
- Provides consistency and reliability
- Handles leader election
- Maintains cluster data integrity
- Location: Runs on control plane nodes
Example Response: “kubelet is like a supervisor on each worker node - it makes sure Pods are running correctly and reports back to the control plane. kube-apiserver is the front door to the cluster - all requests go through it, and it validates and processes them. etcd is the cluster’s memory - it stores all the configuration and state information. Think of it like this: you send a request to create a Pod to the API server, it validates it and stores the information in etcd, then kubelet on the appropriate node reads the information and creates the Pod.”
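On a typical cluster you can observe these components directly. The commands assume kubectl access; on managed clusters (EKS, GKE, AKS) the control plane Pods are hidden from you.

```shell
# kubelet runs as a daemon on every node, so nodes reporting Ready
# implies their kubelets are healthy and checking in
kubectl get nodes

# On self-managed clusters, kube-apiserver and etcd usually run as
# static Pods in the kube-system namespace
kubectl get pods -n kube-system
```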
Question 9: How do you scale applications in Kubernetes?
Expected Answer: Kubernetes provides multiple ways to scale applications, both manually and automatically.
Manual Scaling:
```shell
# Scale a Deployment to 5 replicas imperatively
kubectl scale deployment my-app --replicas=5

# Or patch the replica count in place (applying a partial Deployment
# spec would be rejected, since selector and template are required fields)
kubectl patch deployment my-app -p '{"spec":{"replicas":5}}'
```
Automatic Scaling (HPA):
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
Scaling Types:
- Horizontal Scaling: Add more Pod instances
- Vertical Scaling: Increase resource limits (VPA)
- Cluster Scaling: Add more nodes to the cluster
Example Response: “You can scale applications manually using kubectl scale or by updating the YAML. For automatic scaling, you use HorizontalPodAutoscaler (HPA) which monitors metrics like CPU or memory usage and automatically adjusts the number of replicas. For example, if CPU usage goes above 70%, HPA might scale from 3 to 5 replicas. You can also use VerticalPodAutoscaler (VPA) to adjust resource requests and limits automatically.”
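The HPA manifest above can also be created imperatively. This assumes the `my-app` Deployment exists and that the cluster has a metrics server installed (HPA needs it to read CPU usage).

```shell
# One-liner equivalent of the HPA manifest shown above
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=70
```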
Question 10: What is the difference between kubectl get and kubectl describe?
Expected Answer: Both commands provide information about Kubernetes resources, but they serve different purposes and provide different levels of detail.
kubectl get:
- Purpose: Lists resources with basic information
- Output: Tabular format with key fields
- Use Case: Quick overview, checking status, listing resources
- Example: `kubectl get pods` shows Pod name, ready status, restart count, and age
kubectl describe:
- Purpose: Provides detailed information about a specific resource
- Output: Detailed YAML-like format with all fields and events
- Use Case: Debugging, troubleshooting, understanding resource state
- Example: `kubectl describe pod my-pod` shows the full Pod specification, events, and conditions
Example Output Comparison:
```
# kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
my-pod-abc123   1/1     Running   0          5m

# kubectl describe pod my-pod-abc123
Name:         my-pod-abc123
Namespace:    default
Priority:     0
Node:         worker-1/10.0.1.5
Start Time:   Wed, 01 Jan 2025 10:00:00 +0000
Labels:       app=my-app
Annotations:  kubernetes.io/psp: restricted
Status:       Running
IP:           10.244.1.5
Containers:
  app:
    Container ID:   docker://abc123...
    Image:          nginx:latest
    State:          Running
      Started:      Wed, 01 Jan 2025 10:00:01 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  512Mi
    Requests:
      cpu:     250m
      memory:  256Mi
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  5m    default-scheduler  Successfully assigned default/my-pod-abc123 to worker-1
  Normal  Pulling    5m    kubelet            Pulling image "nginx:latest"
  Normal  Pulled     5m    kubelet            Successfully pulled image "nginx:latest"
  Normal  Created    5m    kubelet            Created container app
  Normal  Started    5m    kubelet            Started container app
```
Example Response: “kubectl get gives you a quick overview - like a table showing the basic status of resources. It’s great for checking if things are running or seeing how many replicas you have. kubectl describe gives you the full story - all the details about a specific resource including its configuration, events, and current state. I use get for quick checks and describe when I need to debug something or understand what’s happening with a resource.”
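A few standard kubectl flags bridge the gap between the two commands and are worth mentioning in an interview:

```shell
# Extra columns (node, Pod IP) without the full describe output
kubectl get pods -o wide

# The complete resource definition as stored in the cluster
kubectl get pod my-pod-abc123 -o yaml

# Events across the namespace, useful when describe's Events section isn't enough
kubectl get events --sort-by=.metadata.creationTimestamp
```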
Focus on understanding concepts rather than memorizing commands. Interviewers want to see that you understand the “why” behind Kubernetes design decisions.
Be prepared to discuss:
- Real-world scenarios you’ve encountered
- How you’ve solved specific problems
- Trade-offs you’ve considered in your decisions
Sample follow-up questions:
- “What would you do if a Pod keeps crashing?”
- “How would you troubleshoot a service that’s not accessible?”
- “What’s the difference between a Pod and a container?”
Red flags interviewers watch for:
- Over-reliance on managed services: Show understanding of the underlying concepts
- Inability to explain basic concepts: Demonstrate fundamental knowledge
- No practical experience: Be ready to discuss real scenarios
These basic Kubernetes interview questions test fundamental understanding of core concepts. Success depends not just on knowing the answers, but on demonstrating practical understanding and the ability to apply concepts to real-world scenarios.
For candidates: Focus on understanding the “why” behind Kubernetes design decisions and be prepared to discuss practical applications.
For interviewers: Look for candidates who can explain concepts clearly, discuss trade-offs, and demonstrate practical problem-solving skills rather than just memorized answers.
Remember, Kubernetes is a complex system, and no one expects entry-level candidates to know everything. Focus on demonstrating solid foundational knowledge, eagerness to learn, and practical problem-solving abilities.
For more information about Kubernetes concepts and best practices, visit the official Kubernetes documentation and the Kubernetes.io tutorials.