What is Kubernetes?

Kubernetes has become a standard in cloud native software as it pertains to containers. In fact, the Cloud Native Computing Foundation’s (CNCF) most recent survey found that Kubernetes use in production has grown to 78%. However, as with any technology that surges in popularity in a short amount of time, there is plenty of confusion around what Kubernetes is and what security considerations come with Kubernetes and containerization. Case in point: in the same CNCF survey, 40% of respondents cited security as a challenge with container use/deployment and 38% called out complexity as a challenge.

For security engineers tasked with hardening container deployments, there is definitely a learning curve to overcome. To help you get up to speed, we’ll take a look at what Kubernetes is, how it works, and explore some basic Kubernetes security concepts.

What is Kubernetes?


Kubernetes is a platform used to orchestrate and manage container workloads (e.g. Docker containers). 

Since its initial release in 2014, Kubernetes — an open source CNCF graduated project with roots tracing back to Google’s development team — has become one of the most popular tools in DevOps and cloud native circles.

A prerequisite to understanding Kubernetes is first understanding containers. In a nutshell, containers are lightweight software packages that include all the dependencies an application needs to run. Containers solve the problem of reliably running applications in different environments. Because they are lightweight and portable, containers have surged in popularity and are vital to the development of modern microservices and web applications.

While using a single container doesn’t require much management and orchestration, large applications need to be able to scale. Modern development teams need to be able to automate and scale the process of container deployment. This is where containerized workload management tools like Kubernetes come in: Kubernetes provides the missing management and orchestration.

The phrase “management and orchestration” gets thrown around a lot when discussing Kubernetes. However, that doesn’t tell us much about the specifics. In simple terms, what that means is Kubernetes enables load-balancing, configuration management, configuration of storage resources, automatic resource allocation (e.g. CPU and RAM per container), and the scaling up or down of container deployments.
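Much of this shows up directly in a workload manifest. The sketch below is illustrative only (the name `web`, the image `nginx:1.25`, and all resource values are placeholders, not part of any real deployment): the `replicas` field drives scaling up or down, and per-container `resources` tell Kubernetes how much CPU and RAM to allocate.

```yaml
# Hypothetical Deployment manifest; names and values are examples only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # scale the workload up or down by changing this
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          resources:
            requests:          # resources the scheduler reserves per container
              cpu: 250m
              memory: 128Mi
            limits:            # hard caps per container
              cpu: 500m
              memory: 256Mi
```

Scaling this workload is then just a matter of changing `replicas` (or running `kubectl scale deployment web --replicas=5`) and letting Kubernetes reconcile the actual state with the declared one.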

Kubernetes vs Docker

There can be plenty of confusion around the topic of Docker vs Kubernetes. However, it’s actually quite simple:

  • Docker is a platform for creating and running containers. It’s not the only container platform, but it is the most popular.
  • Kubernetes is a tool for managing multiple containers, including Docker containers.
  • Docker offers its own tool with some of the same features as Kubernetes, known as Swarm.

That last point is what can lead to some of the confusion. Docker just so happens to offer a tool — Swarm — that provides similar functionality to Kubernetes. However, Kubernetes is by far the more popular management and orchestration tool.

Important Kubernetes concepts and terms

In addition to containers, there are other important concepts to understand when getting started with Kubernetes. Let’s take a look at some key terms:

  • K8s. This is a synonym for Kubernetes. Developers sometimes create shorthand for popular terms by taking the first and last letters of a word plus the number of letters between them and combining them into an abbreviation. Interoperability becomes “i14y”, localization becomes “l10n”, and Kubernetes becomes “k8s”.
  • Pods. These are the smallest units that can be deployed in Kubernetes. A pod consists of one or more containers that share storage and networking resources and a spec for running the container(s).
  • Workloads. These are the applications (sets of pods, really) that Kubernetes runs. Kubernetes deploys, updates, and scales pods based on what the configured workload dictates.
  • Nodes. The compute “hardware” (think RAM and CPU) that workloads run on is known as a node. The “hardware” can be anything from a Raspberry Pi to a virtual machine to a physical server; the key here is that the compute resources come from the nodes. There are two main types of nodes: worker/minion nodes that run the workloads and master nodes that manage a set of worker/minion nodes.
  • Clusters. A Kubernetes cluster is a group of nodes. In a cluster, worker nodes run the workloads and a master node controls what the worker nodes do.
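The relationship between these terms is easiest to see in a manifest. The minimal sketch below (the name and image are placeholders) defines a single pod, the smallest deployable unit: one container plus the spec for running it. In practice you rarely create bare pods like this, since workload objects such as Deployments create and manage them for you.

```yaml
# A minimal Pod: one container plus the spec for running it.
# The name and image are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
    - name: hello
      image: nginx:1.25
      ports:
        - containerPort: 80    # port the container listens on
```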

What is Kubernetes used for?

Kubernetes can be useful practically anywhere an infrastructure-as-code approach to container deployment is useful. This means Agile development teams and teams with a focus on DevOps practices will often use Kubernetes to help automate their continuous integration/continuous delivery (CI/CD) pipelines. Additionally, Kubernetes can help autoscale cloud native applications by monitoring the health of nodes and resource utilization and scaling up or down as needed.
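That autoscaling behavior can itself be expressed as code. This HorizontalPodAutoscaler sketch (the target Deployment name `web` and the thresholds are assumptions for illustration) tells Kubernetes to add or remove pod replicas based on average CPU utilization:

```yaml
# Hypothetical autoscaling policy; the target Deployment "web" is a placeholder.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:              # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds ~70%
```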

Important Kubernetes security concepts

Of course, while Kubernetes adds value from an automation and scalability perspective, it also adds a new wrinkle for security teams. How can you ensure your apps and services are deployed securely? As always, this begins with understanding your threat model and risk appetite, but there are a few basics to help you get started.

  • The 4C’s of cloud native security. Kubernetes calls out a four-layered approach to cloud native security known as the 4C’s: Code, Container, Cluster, and Cloud/Co-location/Corporate datacenter. At each layer, you need to ensure security best practices are followed, and that means different things at different levels: for example, ensuring code encrypts data in transit and limits access to only essential network ports, disallowing privileged users in containers, hardening configurable Kubernetes cluster components, and following security best practices from your cloud service provider.
  • Pod security policies. With Kubernetes pods, there are 3 base security policies. To properly secure your environment, you’ll need to understand the usability/security tradeoffs with each, and act accordingly.
    • Privileged. The widest level of permissions. Known privilege escalations are possible with this policy.
    • Baseline/default. The middle ground between privileged and restricted. Prevents known privilege escalations.
    • Restricted. Based on Kubernetes Pod hardening best practices, this is the most restrictive of the 3 base policies.
  • Pod security contexts. A pod’s security context configures its security settings at runtime. This covers aspects such as access control, running as privileged or not, mounting read-only file systems, etc. You may think this sounds a lot like pod security policies, and indeed they are related: security contexts are what happens at runtime, while security policies let you define parameters for what contexts are allowed within a cluster.
  • Secrets. Infrastructure as code is great, but you probably don’t want OAuth tokens or passwords sitting in a YAML file. Secrets are the Kubernetes way of storing this sort of sensitive information.
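To make the last two items concrete, here is a hedged sketch (every name, image, and value is a placeholder): a Secret holding a hypothetical API token, and a pod whose security context refuses to run as root, blocks privilege escalation, and mounts a read-only root filesystem while consuming the Secret as an environment variable instead of hardcoding it.

```yaml
# Illustrative only; replace all names and values with your own.
apiVersion: v1
kind: Secret
metadata:
  name: api-credentials
type: Opaque
stringData:
  token: REPLACE_ME            # never commit real secrets to source control
---
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true         # refuse to start containers running as root
  containers:
    - name: app
      image: example/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
      env:
        - name: API_TOKEN      # the secret reaches the app without living in YAML
          valueFrom:
            secretKeyRef:
              name: api-credentials
              key: token
```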

It’s also vital to have a plan for container visibility and vulnerability scanning to ensure issue remediation occurs quickly and you remain compliant with any relevant standards. To get started with container vulnerability scanning and security posture assessments, we recommend reading Continuous Container Visibility and Compliance (PDF).

Next steps with Kubernetes

Now that you understand how Kubernetes works at a high level — controlling and managing container deployments automatically — you can take the next steps to deploy k8s securely. For a deep dive on implementing security with containers, Kubernetes, and serverless infrastructure, sign up to view the free How to Layer Security into Modern Cloud Applications webinar presented by Cloud Security Expert Hillel Solow.
