Understanding Containers

In today’s competitive landscape, companies are in a race to build more agile cloud native systems that support dynamic deployment models for their applications and services. Containers provide an ideal building block for this new model because they are lightweight, portable across platforms, and easily scalable. This guide gives you insight into the world of containers and discusses the many features and benefits of the Kubernetes container orchestration platform.

What Are Containers?

A container is a way of packaging software, such as an application or service, so that it can be stored and run consistently on any computer. Containers use operating-system-level virtualization, which is conceptually similar to hardware virtualization; the difference is that containers rely on kernel features of the host operating system rather than on a hypervisor and hardware support.

Containers allow developers to package their applications in a way that makes them portable across different environments. For example, a Java developer may build an application on CentOS 6 (an older release) while the production systems run CentOS 7 (a newer release); packaged as a container, the same application runs unchanged on both. The same developer could also use containers to ship the application to an entirely different environment.

One way to think about a container is as a portable, self-sufficient, executable package that includes all the necessary dependencies: code, runtime, system tools, and libraries. Containers can be moved from one computing environment to another (such as from development to test or production systems) without concern for conflicts with other software versions or incompatibilities with shared libraries.

Containers vs. VMs

Virtual machines (VMs) are software emulations of complete computers that allow multiple operating systems to run on the same physical machine. The operating system running inside a VM is known as a guest operating system.

In contrast to containers, VMs have independent OS kernels, file systems, and network interfaces. A VM’s files reside on a virtual hard drive, which is stored as a file on the physical machine’s hard drive. VMs also have their own IP addresses, making them independent of the host machine.

Containers, on the other hand, share the host machine’s kernel. Each container nevertheless gets its own isolated view of the file system, its own memory space, and an isolated networking environment that only permits the communication the container explicitly allows. The key difference is that a container packages an application together with its dependencies into a standardized unit for software development, deployment, and delivery, without carrying a full operating system along.

Container Orchestration

An application could consist of anywhere from a few containers to hundreds. Rather than managing each container manually, developers use orchestration to handle the tasks associated with running them at scale. Orchestration handles the following:

  • Provisioning and deploying containers
  • Configuring containers
  • Scheduling
  • Resource allocation
  • Managing container availability
  • Load balancing
  • Routing traffic to containers
  • Security

There are several solutions for orchestrating containers, but over the past few years Kubernetes has become the de facto standard for deploying and orchestrating containerized applications. Kubernetes was originally developed at Google and open sourced in 2014. Building on Google’s long-standing experience with running containerized workloads at scale, it makes everything related to deploying and managing containers easier and more secure.
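
To make this concrete, here is a minimal sketch of a Kubernetes Deployment covering several of the orchestration tasks listed above: it provisions three replicas of a containerized application, declares the resources the scheduler should reserve for each, and lets Kubernetes keep the desired number of pods available. The names and image are placeholders.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                  # placeholder name
    spec:
      replicas: 3                # Kubernetes keeps three pods running at all times
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25    # placeholder application image
            resources:
              requests:          # used by the scheduler for placement decisions
                cpu: 100m
                memory: 128Mi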

Kubernetes Orchestration Architecture

A Kubernetes cluster consists of one or more nodes, each of which can run multiple containers. Containers are organized into logical groups called pods.

Nodes

Nodes are the virtual or physical machines that run your workloads; they are coordinated by the Kubernetes control plane. The control plane’s API server and scheduler track each node’s available capacity and ensure that pods are only placed on nodes with sufficient free resources, such as the CPU and memory requested in the Deployment sketch above.

Pods (Kubernetes Pods vs. Containers)

A Kubernetes pod is a logical group of one or more containers that share the same network namespace, storage, and other resources, bundling everything the application it serves needs to run. Pods are the smallest, most fundamental deployable unit in Kubernetes.
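
As an illustrative sketch (all names and images are placeholders), the following Pod groups an application container with a logging sidecar. Because both containers live in the same pod, they share a network namespace and can reach each other over localhost, and they can share data through a common volume:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-sidecar     # placeholder name
    spec:
      volumes:
      - name: logs               # shared between both containers
        emptyDir: {}
      containers:
      - name: app
        image: nginx:1.25        # placeholder application image
        volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
      - name: log-shipper
        image: busybox:1.36      # placeholder sidecar image
        command: ["sh", "-c", "tail -F /logs/access.log"]
        volumeMounts:
        - name: logs
          mountPath: /logs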

Container Benefits

Containers have become a valuable tool for creating lightweight software that can be deployed and scaled quickly. Containers offer additional benefits such as:

Reduce complexity – Instead of setting up an operating system instance for each application, you can use containers to run multiple applications on top of a single OS instance. This reduces the number of operating systems that need to be installed and maintained, which saves on hardware costs and cuts management overhead.

Reduce costs – The fewer OS instances required, the lower the costs. Containers are also very efficient, allowing you to run more applications per server than traditional virtual machines: a single OS instance can support many containers without additional overhead, and no full guest operating system has to be run and managed for each application. Consolidating multiple applications onto fewer servers and managing them centrally from an enterprise container platform reduces IT costs further.

Make use of existing infrastructure – Adopting containers does not require changes to existing infrastructure; they run on standard hardware and on standard operating system and hypervisor configurations, so no additional hardware investment is required.

Maintain security – Containers are less complex than virtual machines, which shrinks the attack surface. Because all containers share one OS instance, there is no need to run and patch multiple guest operating systems on the same hardware; security patches only need to be applied to the single host OS and its kernel. This also makes containers simpler and more agile to operate than virtual machines, since there is no fleet of OS images to keep consistent across versions.

Improve performance – Because containers use standard hardware and software configurations for operating systems and applications, they can be deployed at scale with little performance degradation. And because containers are typically stateless, they can be started and stopped quickly and easily, which improves the responsiveness of multi-tenant applications.

Improve security – Containers make it possible to run multiple applications on a single server without compromising the security of those applications. Isolating applications from one another reduces the number of potential attack points and simplifies compliance testing, which in turn helps organizations meet regulatory requirements.

Improve application portability – By packaging components into self-contained containers that are independent of other components, you can easily move them between servers or data centers in case of a disaster or other disruption. This also makes it easier to migrate to new hardware without worrying about compatibility issues or dependencies between software components.

Reduce network latency – Containers on the same host share the host’s network stack rather than going through a hypervisor’s virtual network devices, so traffic between co-located containers incurs less overhead than traffic between virtual machines on the same physical server. This can noticeably improve performance, especially for multi-tenant applications where many users access the same server at the same time.

Reduce operational overhead – Because containers are lightweight and use fewer resources than virtual machines, it is easier to scale your application horizontally by adding more instances or servers to your environment. This means you can handle peak traffic loads without worrying about overloading your infrastructure.

Run legacy applications – Containers let you run legacy applications, written for older operating system releases, on newer releases with minimal effort. This makes it easier for organizations with older applications to migrate their environments to newer hardware and reduce their overall costs.

Container Security With Kubernetes

When it comes to securing your containers, Kubernetes offers several layers of fortification that you need to consider for a “defense-in-depth” approach. Both securing containerized applications and securing access to Kubernetes itself should be considered vital to IT security success.

Kubernetes API Authentication & Authorization

With the Kubernetes API being the central control unit of a Kubernetes cluster, it is vital to secure access to it properly. This can start with placing the Kubernetes API endpoint on a private network, but most importantly, proper use of authentication and authorization mechanisms is necessary.

To authenticate a request (i.e. verify who is sending the request), Kubernetes offers several mechanisms, the first of which is sketched after the list:

  • TLS client certificates
  • OIDC
  • Service Account tokens
  • Static tokens
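
For illustration, a minimal sketch of a kubeconfig user entry that authenticates with a TLS client certificate might look like the following; the user name and file paths are placeholders:

    apiVersion: v1
    kind: Config
    users:
    - name: jane                               # placeholder user name
      user:
        client-certificate: /path/to/jane.crt  # the cert's CN and O fields map to the user and groups
        client-key: /path/to/jane.key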

To authorize a request (i.e. decide whether the authenticated user is allowed to perform the requested action), role-based access control (RBAC) is available. Roles can be assigned to users or groups and should grant only the bare minimum an individual needs to perform their work. In particular, cluster-wide permissions should be closely guarded and assigned with caution.
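
As a minimal sketch (names and namespace are placeholders), the following Role grants read-only access to pods in a single namespace, and the RoleBinding assigns it to one user:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: team-a          # placeholder namespace
      name: pod-reader
    rules:
    - apiGroups: [""]            # "" refers to the core API group
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      namespace: team-a
      name: read-pods-jane
    subjects:
    - kind: User
      name: jane                 # placeholder user, e.g. from a client certificate
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io

Because the Role is namespaced, the permissions it grants stop at the namespace boundary, which is the pattern the next paragraph builds on.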

Namespaces allow for separating individuals or teams and their workloads, and give cluster administrators the ability to limit RBAC permissions to a team’s logical unit of a Kubernetes cluster. Proper usage of namespaces can improve security by reducing the impact of credential compromise, as compromised credentials can only access a small portion of the cluster and its workloads.

Read our blog post: Kubernetes Security Practices

Read our blog post: Improving Kubernetes Security With the CIS Benchmark and Kubermatic Kubernetes Platform 2.19

Securing Container Workloads

Apart from Kubernetes itself, container workloads can of course be secured beyond the defaults Kubernetes applies when creating e.g. a Pod. For one, network access can be locked down via NetworkPolicies. These rulesets, similar to firewalls, allow defining restrictions both for external traffic to Pods and for traffic between Pods in the same cluster. Beyond preventing attacks from the outside, proper usage of NetworkPolicies can drastically reduce the impact a compromised Pod can have on your environment. For example, if a Pod is not supposed to open any outgoing network connections, those can be prohibited via a NetworkPolicy, as sketched below.
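
A minimal sketch of such a deny-all-egress policy follows (the name, namespace, and labels are placeholders); note that NetworkPolicies are only enforced if the cluster’s network plugin supports them:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-all-egress      # placeholder name
      namespace: team-a          # placeholder namespace
    spec:
      podSelector:
        matchLabels:
          app: web               # applies to all pods with this label
      policyTypes:
      - Egress                   # no egress rules are listed, so all outgoing traffic is denied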

In addition, Kubernetes offers a significant number of optional settings for your workloads to tighten application security, such as disabling features that might not be needed (e.g. the container’s file system can be set to read-only to prevent writes) or hardening the container sandbox beyond the default process isolation it provides. These settings partially depend on the application profile, so they might not be universally applicable, but they should be reviewed and applied whenever possible.
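
A sketch of such hardening via a container securityContext (the name and image are placeholders) might look like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hardened-app         # placeholder name
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0  # placeholder image
        securityContext:
          runAsNonRoot: true                 # refuse to start if the image would run as root
          readOnlyRootFilesystem: true       # prevent writes to the container file system
          allowPrivilegeEscalation: false    # block processes from gaining additional privileges
          capabilities:
            drop: ["ALL"]                    # drop all Linux capabilities not explicitly needed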

Compliance with specific rules for workload settings (e.g. all containers must run as a specific non-root user) can be enforced with tools like OPA Gatekeeper, keeping standards consistent across an organization.
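
As a sketch, and assuming the K8sPSPAllowedUsers constraint template from the community gatekeeper-library is installed in the cluster, a constraint enforcing non-root containers could look like this:

    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sPSPAllowedUsers     # assumes the gatekeeper-library template is installed
    metadata:
      name: pods-must-run-as-nonroot
    spec:
      match:
        kinds:
        - apiGroups: [""]
          kinds: ["Pod"]
      parameters:
        runAsUser:
          rule: MustRunAsNonRoot # reject pods whose containers may run as root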

Video: Kyverno vs. Open Policy Agent – Update Your Kubernetes Policy Management