Introduction to Kubernetes

What Is Kubernetes?

Kubernetes is a portable, extensible, open-source container orchestration platform used to manage containerized workloads and services.

In simple terms, Kubernetes helps you:

  • Run containers reliably
  • Scale applications automatically
  • Recover from failures without manual intervention
  • Manage traffic and resources efficiently

Why Is It Called Kubernetes (K8s)?

Kubernetes is also called K8s.

This is a numeronym, an abbreviation style in use since the 1980s:

  • The first letter, K
  • The 8 letters in between
  • The final letter, s

So, Kubernetes → K + 8 letters + s = K8s

History of Kubernetes

Before Kubernetes:

  • Google internally used systems called Borg and later Omega
  • These systems were designed to manage large-scale data centers and applications

In 2014, Google:

  • Open-sourced Kubernetes
  • Wrote it in the Go (Golang) programming language

In 2015, alongside the v1.0 release, Google donated the project to the newly formed Cloud Native Computing Foundation (CNCF).

Today, Kubernetes is the industry standard for container orchestration.

What Was Used Before Containerization?

1. Physical Servers

Initially, applications were deployed directly on physical servers.

Problems:

  • One application could consume all resources
  • Other applications would suffer
  • Scaling was expensive
  • Infrastructure management was difficult
  • High operational cost

2. Virtual Machines (VMs)

To solve these problems, Virtual Machines were introduced.

Advantages of VMs:

  • Multiple applications on one physical server
  • Each VM has its own OS
  • Better isolation than physical servers
  • Reduced hardware cost

Limitations:

  • Heavy resource usage
  • Slow startup time
  • Each VM needs a full operating system

3. Containers

Containers improved upon VMs.

Key differences from VMs:

  • Containers are lightweight
  • They share the host OS kernel
  • No need for a separate OS per application
  • Faster startup
  • Better resource utilization

However, containers alone are not enough at scale.

Why Kubernetes Is Needed

Containers are excellent for packaging applications, but managing containers manually is difficult, especially in production.

Example Problem

  • Application is running in production
  • A container crashes
  • Without orchestration:
    • Engineers must manually restart containers
    • Downtime occurs
    • Scaling during traffic spikes is hard

At large scale (hundreds or thousands of containers):

  • Manual management is impossible
  • High risk of downtime

Kubernetes as the Solution

Kubernetes is a container orchestration platform that provides:

  • Auto scaling
  • Load balancing
  • Auto healing
  • Version control (rollout & rollback)
  • Health monitoring
  • Automatic scheduling

Kubernetes continuously monitors the cluster. If traffic increases suddenly, Kubernetes:

  • Scales pods automatically
  • Allocates resources dynamically
  • Maintains application availability

Features of Kubernetes

1. Auto Scaling

Kubernetes automatically:

  • Increases pods when traffic increases
  • Decreases pods when traffic decreases
  • Optimizes resource usage
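As a sketch, auto scaling is typically configured with a Horizontal Pod Autoscaler. The manifest below targets a hypothetical Deployment named `web`; the name, replica bounds, and CPU threshold are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment to scale
  minReplicas: 2           # never scale below 2 pods
  maxReplicas: 10          # cap scale-out at 10 pods
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Kubernetes then adds or removes pods to keep average utilization near the target.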

2. Automated Deployment

Kubernetes automates:

  • Application deployment
  • Updates and upgrades
  • Multi-cloud and on-prem deployments

3. Fault Tolerance (Auto Healing)

If:

  • A pod crashes
  • A container fails
  • A node becomes unhealthy

Kubernetes automatically:

  • Recreates the container
  • Reschedules pods
  • Maintains application availability
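Auto healing follows from declaring a desired state. A minimal Deployment (names and image are illustrative) asks for three replicas; if a pod or its node fails, the controller recreates the pod elsewhere to restore the declared count:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # desired state: 3 pods at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # example image
```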

4. Load Balancing

Kubernetes:

  • Distributes traffic across pods
  • Tracks which endpoints are healthy
  • Sends requests only to healthy pods
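In practice, load balancing is done with a Service, which spreads traffic across all pods matching its label selector (names and ports below are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # route to pods carrying this label
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8080  # port the container listens on
```

Pods that fail their readiness probe are removed from the Service's endpoints, so traffic only reaches healthy pods.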

5. Rollout and Rollback

Kubernetes supports:

  • Versioned deployments
  • Safe rollouts
  • Automatic rollback if an update fails
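Rollout behavior is configured on the Deployment itself. The fragment below (illustrative values) belongs inside a Deployment's `spec` and replaces pods gradually during an update:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod below the desired count during an update
      maxSurge: 1         # at most one extra pod above the desired count
```

A failed rollout can be reverted to the previous revision with `kubectl rollout undo deployment/<name>`.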

6. Health Monitoring

Kubernetes continuously checks:

  • Node health
  • Pod health
  • Container health

Using:

  • Liveness probes
  • Readiness probes
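Both probes are declared on the container spec. The fragment below is a sketch; the paths, port, and timings are illustrative:

```yaml
# Container spec fragment
livenessProbe:
  httpGet:
    path: /healthz        # restart the container if this check fails
    port: 8080
  initialDelaySeconds: 10 # give the app time to start before probing
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /ready          # withhold traffic until this check passes
    port: 8080
  periodSeconds: 5
```

A failed liveness probe triggers a container restart (auto healing); a failed readiness probe removes the pod from Service endpoints (load balancing).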

7. Platform Independent

Kubernetes is:

  • Open source
  • Cloud agnostic

It runs on:

  • Public cloud
  • Private cloud
  • On-premises
  • Hybrid environments

Docker vs Kubernetes

Docker

Docker is a container platform.

It is used to:

  • Build container images
  • Run containers
  • Package applications

Docker by itself does NOT manage containers at scale.

Kubernetes (K8s)

Kubernetes is a container orchestration platform.

It is used to:

  • Manage containers at scale
  • Provide automation
  • Ensure high availability
  • Support enterprise workloads

Problems with Docker (Without Kubernetes)

Problem 1: Single Host Nature

  • Docker typically runs on a single host
  • Containers are ephemeral (short-lived)

Ephemeral means: Containers can die and be recreated at any time.

Example:

  • 100 containers running on one host
  • Container C1 consumes high resources
  • Container C100 crashes due to lack of resources

This happens because Docker lacks cluster awareness.

Problem 2: No Auto Healing

  • If a container crashes:
    • Application becomes unavailable
    • Engineer must restart it manually

For hundreds of containers:

  • Manual recovery is not practical
  • High downtime risk

Docker does not support auto healing by default.

Problem 3: No Auto Scaling

Example:

  • Container has 4 GB RAM and 4 CPUs
  • Normally handles 10,000 users
  • During peak traffic (festivals):
    • Traffic increases
    • Container crashes
    • Application becomes unavailable

Docker does not support auto scaling.

Problem 4: No Enterprise-Level Support

By default, Docker does not provide:

  • Load balancing
  • Firewalls
  • Auto healing
  • Auto scaling
  • API Gateway
  • Advanced networking

How Kubernetes Solves These Problems

1. Cluster-Based Architecture

Kubernetes works as a cluster.

Cluster = Group of Nodes

  • Control Plane (Master)
  • Worker Nodes

If one container consumes too many resources:

  • Kubernetes schedules other containers on different nodes
  • Prevents single-host failure

2. Auto Scaling in Kubernetes

Kubernetes uses YAML configuration.

You define:

  • Number of replicas
  • Resource limits
  • Auto-scaling policies

Kubernetes supports:

  • Horizontal Pod Autoscaler (HPA)
  • Automatic scaling based on CPU/memory
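Putting those pieces together, a Deployment spec declares the replica count and per-container resource requests and limits (all names and numbers below are illustrative); an HPA can then scale the same Deployment on CPU or memory:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                # desired number of pods
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0   # hypothetical image
          resources:
            requests:              # used by the scheduler to place the pod
              cpu: "250m"
              memory: "256Mi"
            limits:                # hard caps enforced at runtime
              cpu: "500m"
              memory: "512Mi"
```

The `requests` values are also what lets the scheduler spread pods across nodes, preventing one resource-hungry container from starving the rest of a single host.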

3. Auto Healing in Kubernetes

If a container crashes:

  • Kubernetes creates a new container automatically
  • In many cases, the new container is created before users notice any issue

4. Enterprise-Level Support

Kubernetes supports:

  • Load balancing
  • Auto healing
  • Auto scaling
  • Networking policies
  • API Gateway
  • Security controls
  • Observability and monitoring

Historical Context for Kubernetes

To understand why Kubernetes is so powerful and widely used, it helps to look at the evolution of application deployment over time.

1. Traditional Deployment Era

In the early days, organizations ran applications directly on physical servers.

There was no way to define resource boundaries for applications on a physical server. This led to resource allocation issues. For example, when multiple applications ran on the same server, one application could consume most of the resources, causing other applications to underperform.

One solution was to run each application on a separate physical server. However, this approach did not scale well:

  • Resources were often underutilized
  • Hardware and maintenance costs were high
  • Managing many physical servers was expensive and inefficient

2. Virtualized Deployment Era

To address these problems, virtualization was introduced.

Virtualization allows multiple Virtual Machines (VMs) to run on a single physical server. Each VM is isolated from the others and includes its own operating system.

Benefits of virtualization:

  • Better utilization of physical server resources
  • Improved scalability
  • Strong isolation between applications
  • Enhanced security, as one application cannot freely access another

Virtualization enables physical resources to be presented as a cluster of disposable virtual machines, making application deployment and scaling easier.

However, each VM still runs a full operating system, which increases overhead.

3. Container Deployment Era

Containers emerged as a more lightweight alternative to virtual machines.

Containers are similar to VMs but use OS-level isolation instead of running a full operating system for each instance. Containers share the host OS kernel, making them much lighter and faster.

Each container has:

  • Its own filesystem
  • Allocated CPU and memory
  • Its own process space

Because containers are decoupled from the underlying infrastructure, they are portable across different clouds and operating systems.

Containers provide several important advantages:

  • Agile application creation and deployment — Faster and more efficient image creation compared to VM images.
  • Continuous development, integration, and deployment (CI/CD) — Enables frequent, reliable deployments with quick rollbacks due to immutable images.
  • Separation of development and operations concerns — Applications are packaged at build time, not deployment time, decoupling them from infrastructure.
  • Improved observability — Provides visibility into OS-level metrics as well as application health and performance signals.
  • Environmental consistency — Applications run the same way on a developer's laptop, in testing, and in production.
  • Cloud and OS portability — Runs consistently across Ubuntu, RHEL, CoreOS, on-premises environments, and major public clouds.
  • Application-centric management — Shifts focus from managing operating systems to managing applications using logical resources.
  • Loosely coupled, distributed, and elastic microservices — Applications are broken into smaller, independent components that can be deployed and scaled independently.
  • Resource isolation — Ensures predictable application performance.
  • High resource utilization — Enables greater efficiency and higher workload density.