If you’ve decided to set up and deploy a Kubernetes cluster, ensuring proper security controls and alignment with your organization’s overall security strategy can be a daunting process.
Security should not be an afterthought; it is vital throughout the application lifecycle and needs to be baked in from the start, during critical architectural decisions.
Bearing this in mind, what are the first steps to ensuring secure clusters? In this guide we compile a checklist of good practices and actions to perform in order to establish a secure cluster for running containerized applications.
A Security Checklist for Kubernetes Clusters
When creating a new Kubernetes cluster for production environments, you should generally be guided by the following requirements, categorized by their main principles:
Ensure that the Cluster is set up for high availability:
- Why: To prevent single points of failure when a node fails to start or has an unexpected outage; you need to make sure that the cluster remains operational in those circumstances.
- When: When assigning the master nodes, typically in the cluster-creation phase.
- How: When defining the nodes that act as masters, create at least three nodes meeting the minimum hardware requirements. You should also define at least three nodes that act exclusively as workers (not as both masters and workers). That way, if a worker node crashes, another node can host the application workloads.
When using a managed Kubernetes cluster, the provider handles control plane availability, so you only need to provision worker node capacity.
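For a self-managed cluster, the kubeadm flow below sketches how the multi-master topology is typically bootstrapped. It is only a sketch: the load balancer endpoint `lb.example.com:6443` is hypothetical, and the token, hash, and certificate key placeholders are emitted by the init step itself.

```shell
# Initialize the first control-plane node behind a load balancer endpoint
# (lb.example.com is a placeholder for your own LB fronting the API servers):
kubeadm init --control-plane-endpoint "lb.example.com:6443" --upload-certs

# On each additional control-plane node, run the join command printed by
# the init step above:
kubeadm join lb.example.com:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>

# On each worker node (no --control-plane flag):
kubeadm join lb.example.com:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

Repeating the control-plane join on two more nodes gives you the three-master setup described above, with the load balancer as the single stable API endpoint.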
Ensure all sensitive data are encrypted both in transit and at rest:
- Why: To protect sensitive information in the cluster from being exposed to unauthorized users.
- When: When setting up the cluster and when joining new nodes into the cluster.
- How: First you need to enable encryption at rest for cluster data, as it is not enabled by default. This is done by passing the --encryption-provider-config flag to the kube-apiserver, pointing to a configuration file of kind: EncryptionConfiguration, which defines how data is encrypted in etcd. You will also need to make sure that communication between etcd peers, and between etcd and the kube-apiserver, is encrypted.
When using a managed Kubernetes service, this is typically handled by the provider. With custom clusters, however, you have to set this up from the beginning.
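A minimal encryption configuration might look like the following sketch. The key name and file path are illustrative; the base64 secret is a placeholder you would generate yourself (for example with `head -c 32 /dev/urandom | base64`).

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                 # encrypt Secret objects at rest in etcd
    providers:
      - aescbc:                 # AES-CBC with PKCS#7 padding
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder
      - identity: {}            # fallback: read legacy unencrypted data
```

You would then start the kube-apiserver with `--encryption-provider-config=/etc/kubernetes/enc/enc.yaml` (path is an assumption; place the file wherever the API server can read it). Provider order matters: the first provider is used for writes, while the `identity` fallback lets existing plaintext data still be read until it is rewritten.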
Ensure file integrity checks at the service and application layers are in place:
- Why: To prevent unauthorized changes to the filesystem or other critical data of the Kubernetes cluster.
- When: When running the cluster for production workloads.
- How: Install a Kubernetes-specific file integrity monitoring agent to protect the application and the underlying compute resources.
When running managed Kubernetes, you have no or limited access to the control plane, as the cloud provider handles that for you. However, you can still perform security scans on the worker nodes.
Ensure that there is a sound audit logging policy:
- Why: To track the series of events performed against the Kubernetes API, you need to enable audit logging. You are especially looking for suspicious or unauthorized activities.
- When: Whenever you start the kube-apiserver.
- How: You need to start the kube-apiserver binary with the --audit-policy-file flag, pointing to a YAML file with the audit.k8s.io/v1 apiVersion header. DigitalOcean currently does not offer this kind of logging. Google Cloud and AWS have audit logging enabled by default, with the option to disable it. Additional charges may apply, though, depending on the kinds of activity logs enabled.
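A starting-point audit policy could look like this sketch; the rule set is an assumption to illustrate the format, not a recommended production policy.

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log access to Secrets and ConfigMaps at the Metadata level only,
  # so the sensitive payloads themselves are never written to the log.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Log all other requests with their request bodies.
  - level: Request
```

Alongside `--audit-policy-file`, you also pass `--audit-log-path` (or configure a webhook backend) so the API server has somewhere to write the events.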
Additional Security Requirements
Ensure access controls are in place:
- Why: You need to ensure that only authorized actors can perform actions (verbs) on resources. Here actors can be real users or other machines. Verbs can be get, list, watch, create, update, etc. Resources can be configmaps, deployments, volumes, etc.
- When: When creating the cluster and administrating new resources and actors.
- How: Luckily, there are myriad configuration options when it comes to access controls and authorization. First, create namespaces to separate logical resources and introduce initial boundaries. Then enable RBAC for authorization rules. Next, review the admission controller configuration for the general Node and Pod admission policies. For more granular control you can use Pod Security Policies (still a beta feature as of Kubernetes 1.19, and since deprecated in favor of Pod Security admission). When using managed Kubernetes, many of these protections are enabled by default; however, you may find that you need to configure additional policies (for IAM access, for example).
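As a concrete RBAC sketch, the manifest below grants a single user read-only access to Deployments in one namespace. The namespace `team-a` and user `jane` are hypothetical names for illustration.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a            # hypothetical namespace
  name: deployment-reader
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]   # read-only verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-deployments
subjects:
  - kind: User
    name: jane                 # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-reader
  apiGroup: rbac.authorization.k8s.io
```

Applying this with `kubectl apply -f` scopes the user to exactly the verbs and resources listed, which is the granularity the RBAC step above is after; cluster-wide rules would use ClusterRole and ClusterRoleBinding instead.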
Ensure that the Cluster uses the latest available Kubernetes Version:
- Why: Kubernetes regularly receives security updates, including critical fixes. To safeguard against vulnerabilities, it’s important to stay on top of those updates.
- When: Initially when creating the cluster; then subsequently, checking every quarter unless a zero-day exploit fix has been offered earlier.
- How: For custom clusters, download the latest GA version from the release pages. For managed clusters, there is usually a list of predefined options when choosing the version, which may not include the latest available release.
With Google Cloud you have the option to enroll in a release channel, each channel offering a different tradeoff between having the latest features and having the latest stable updates. For AWS EKS, the latest supported version is 1.17. For Azure AKS, the latest is 1.18.
That concludes our security checklist of best practices for creating Kubernetes clusters securely from the start. Remember that by ensuring a secure baseline configuration with respect to the security triad, you can safeguard against the most common infrastructure attacks. Naturally, setting up a cluster is only a starting point. Continuing to monitor and proactively reduce attack vectors in your infrastructure deployments can mitigate risks and limit the damage of threats that slip through the cracks.
Prisma Cloud offers unmatched defense-in-depth for Kubernetes container platforms, including Red Hat OpenShift®. Palo Alto Networks is a Red Hat OpenShift Ready Partner, helping organizations across government, healthcare, financial services and the intelligence community secure their cloud native environments.
To learn more about how to integrate and automate continuous security methods into your entire DevOps pipeline, check out the Red Hat webinar series Modernize & Secure Your Lifecycle with DevSecOps, where the Prisma Cloud team speaks about DevSecOps for cloud native applications.