
Kubernetes Best Practices for Production

Companies typically start their Kubernetes journey without a clear idea of where to begin in the ever-growing cloud-native landscape. To succeed, it is essential to plan ahead, consider the possible outcomes, and avoid mistakes; nobody wants to make a bad call and regret it afterward.

Knowing the best practices will guide your Kubernetes implementation, process development, and the identification of top priorities. Once you’ve got a firm grasp on the overall picture of your Kubernetes adventure, you’ll be ready to dive into the options and best practices at your disposal.

Health Checks

Mandatory Readiness Probe

Readiness probes tell the kubelet when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. This signal determines which Pods are used as backends for Services: when a Pod is not ready, it is removed from the Service load balancers.

Notably, neither “readiness” nor “liveness” has a predetermined default value. If the readiness probe is not set, the kubelet treats the container as ready to accept traffic as soon as it starts, so if the container takes a minute to start up, all requests to it will fail during that minute.
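As a minimal sketch, here is a readiness probe on a hypothetical web container; the image, port, and /ready path are assumptions, so adjust them to match your service:

# Sketch only: a Pod whose container exposes an assumed /ready endpoint.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: example.com/web:1.0       # placeholder image
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /ready                 # assumed health endpoint
          port: 8080
        initialDelaySeconds: 5         # give the app time to boot
        periodSeconds: 10              # re-check every 10 seconds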

Crash the container on a fatal error

It is essential that “failure, and the reason for failure, can be detected by remote processes,” and that processes “do what they are supposed to do or fail as soon as possible.” If the program simply exits on an error, Kubernetes recognises that the container has crashed and restarts it with an exponential back-off delay, since that is its default behaviour.

If the application hits an error from which it cannot be recovered, you should let it crash.

Some examples of errors that cannot be rectified include the following:

  • an uncaught exception
  • a typo in the code (for dynamic languages)
  • unable to load a header or dependency

Remember that you should not signal a failing Liveness probe in this situation; instead, handle it from within the application by exiting the process as soon as possible and letting the kubelet restart the container.

A Liveness probe is a must

A Liveness probe restarts a stuck container. When a process is using 100% CPU, it can't respond to Readiness probe checks, so it is removed from the Service. However, the Pod still counts as an operational replica of the current Deployment: without a Liveness probe, it stays Running but detached from the Service, not serving requests while still consuming resources.

To make that possible, you should:

  • Have your app expose a health endpoint.
  • Have that endpoint return a status code reflecting the health of the process.
  • Set up a Liveness probe against it, as in the sketch below.
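A minimal sketch of such a probe, added under the container spec; the /healthz path and port are assumptions:

# Sketch only: restart the container if the assumed /healthz endpoint stops answering.
livenessProbe:
  httpGet:
    path: /healthz             # assumed endpoint exposed by the app
    port: 8080
  initialDelaySeconds: 10      # wait for the app to start before probing
  periodSeconds: 15            # probe every 15 seconds
  failureThreshold: 3          # restart after 3 consecutive failures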

You shouldn't use the Liveness probe to handle fatal errors in your app and ask Kubernetes to restart it. The Liveness probe should be employed only when the process is unresponsive.

Liveness probes shouldn’t be the same as Readiness

When the Liveness and Readiness probes point to the same endpoint, their effects are combined: when the app signals that it is not ready or not live, the kubelet detaches the container from the Service and restarts it at the same time.

Application

Readiness probes should be independent

Your readiness probe should not check dependent services; use an init container for that use case, as in the sketch below. The readiness probe should verify only that the application itself has started.
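A minimal sketch of that pattern, assuming a hypothetical database Service named db; the init container only waits until the Service's DNS name resolves, while the readiness probe checks the app alone:

# Sketch only: gate Pod startup on a dependency, keep the readiness probe independent.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
    - name: wait-for-db
      image: busybox:1.36
      # Block until the assumed "db" Service name resolves in cluster DNS.
      command: ['sh', '-c', 'until nslookup db; do echo waiting for db; sleep 2; done']
  containers:
    - name: app
      image: example.com/app:1.0       # placeholder image
      readinessProbe:
        httpGet:
          path: /ready                 # probes only the app itself, not its dependencies
          port: 8080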

App retries if dependent services are unavailable

The application shouldn't fail to start just because one of its dependencies, such as a database, isn't ready yet.

Instead, the application should continue attempting to establish a connection to the database until it is successful.

Kubernetes operates on the assumption that individual application components can be started in any order.

Verifying that your application can re-establish a connection to a dependency, such as a database, means you can deliver a more dependable and resilient service.

Cluster Configuration

More than one replica

Never run a single Pod on its own. Instead, deploy your Pods as part of a DaemonSet, ReplicaSet, or StatefulSet. Running multiple instances of your Pods ensures that deleting a single Pod won't cause any downtime.
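As a minimal sketch, a Deployment (which manages a ReplicaSet for you) running three replicas; all names and the image are hypothetical:

# Sketch only: multiple replicas so that losing one Pod causes no downtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                          # survive the loss of any single Pod
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # placeholder image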

Avoid all Pods running on a single node

Consider the following scenario: all 11 replicas of your app run on a single node of the cluster. If that node becomes unavailable, all 11 replicas are lost and you experience downtime. It is recommended to apply anti-affinity rules to your Deployments so that Pods are spread evenly across all the nodes of your cluster, as in the sketch below.
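One way to do this is a preferred podAntiAffinity rule placed in the Deployment's Pod template (spec.template.spec), reusing the hypothetical app: web label from the sketch above:

# Sketch only: prefer spreading replicas of the same app across different nodes.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: web                           # must match the Pod labels
          topologyKey: kubernetes.io/hostname    # one replica per node, where possible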

Resource requests and limits are mandatory

Resource requests and limits control how much CPU and memory your containers are guaranteed and allowed to use. They are set via the resources property of a container spec, and the requests are one of the things the scheduler looks at to decide which node is best for the current Pod.

As far as the scheduler is concerned, a container with no memory request has zero memory utilisation. If an unlimited number of such Pods can be scheduled on any node, resources get overcommitted and the node (and kubelet) can crash. The same applies to CPU. So should you always set requests and limits for memory and CPU? Yes, because an application that goes berserk without limits may bring the whole node down and cause a noisy-neighbour problem, crashing the other Pods running on the same node.
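A minimal sketch under the container spec; the numbers are illustrative only and should be derived from measuring your own workload:

# Sketch only: requests guide scheduling, limits cap actual usage.
resources:
  requests:
    cpu: 100m          # amount the scheduler reserves for this container
    memory: 128Mi
  limits:
    cpu: 500m          # hard ceiling; usage above this is throttled
    memory: 256Mi      # exceeding this gets the container OOM-killed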

Logging to stdout

It is generally a good idea to have your application log to standard output, because anything written to the filesystem of a running container is lost once the container stops.

You also no longer need to worry about rotating individual log files for each service. If you use standard output, you can simply archive and export your logs in standard formats such as JSON. This dramatically simplifies the entire logging infrastructure, and you can check real-time logs with basic Kubernetes commands; there is no need for anyone to exec into the container. It also eliminates the monitoring and maintenance overhead of a logging sidecar.

Externalised Configs

It is best practice to manage configuration independently of the application's code.

This offers several advantages. First, changing the configuration settings does not require recompiling the application. Second, settings can be changed while the application is running. Third, the same code can be deployed to several distinct environments.

In Kubernetes, configuration can be stored in ConfigMaps, which can then be supplied to containers as environment variables or mounted as volumes, as in the sketch below.
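A minimal sketch of both options, with a hypothetical app-config ConfigMap holding a single LOG_LEVEL setting:

# Sketch only: one ConfigMap consumed as an env var and as a mounted volume.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example.com/app:1.0       # placeholder image
      env:
        - name: LOG_LEVEL              # injected as an environment variable
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
      volumeMounts:
        - name: config
          mountPath: /etc/config       # each ConfigMap key appears as a file here
  volumes:
    - name: config
      configMap:
        name: app-config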

ConfigMaps should only be used to save settings that are not sensitive. Use the Secret resource for information that must be kept confidential, such as credentials.

In addition, the contents of Secret resources should be mounted into containers as volumes rather than passed in as environment variables. This prevents the secret values from appearing in the command used to start the container, which may be inspected by people who shouldn't have access to those values.
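A minimal sketch, assuming a pre-existing Secret named db-credentials:

# Sketch only: a Secret mounted as a read-only volume instead of env vars.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example.com/app:1.0       # placeholder image
      volumeMounts:
        - name: db-credentials
          mountPath: /etc/secrets      # each Secret key becomes a file here
          readOnly: true
  volumes:
    - name: db-credentials
      secret:
        secretName: db-credentials     # assumed pre-existing Secret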

Don't store data inside containers

Because containers have their own local filesystem, you might be tempted to use it to persist data.

However, keeping persistent data on a container's local filesystem makes it impossible to scale the enclosing Pod horizontally (that is, by adding or removing replicas of the Pod).

Because each container keeps its own “state” in its local filesystem, the states of the Pod replicas can diverge over time. This makes the application's behaviour unpredictable from the user's perspective (for example, a specific piece of user information is available when a request hits one Pod but not when it hits another).

Instead, any information that needs to be kept should be stored in a centralised location outside the Pods: for instance, in a storage service outside the cluster altogether or, even better, in a PersistentVolume inside the cluster, as in the sketch below.
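A minimal sketch of the in-cluster option, using a hypothetical PersistentVolumeClaim; the size and mount path are illustrative:

# Sketch only: persist data in a PersistentVolumeClaim, not the container filesystem.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                     # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example.com/app:1.0       # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app-data # data written here survives container restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data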

Use the Horizontal Pod Autoscaler for apps with variable usage patterns

Horizontal Pod Autoscaler (HPA) is a built-in feature of Kubernetes that watches your application and automatically adds or removes Pod replicas based on the current usage. 

By properly configuring the HPA, your application will be able to remain available and responsive regardless of the volume of traffic, even if it suddenly increases.

To set up the HPA to autoscale your application, you construct a HorizontalPodAutoscaler resource that defines which metric to monitor for your application.

The HPA can monitor built-in resource metrics (such as the CPU and memory usage of your Pods) as well as custom metrics. In the case of custom metrics, it is your responsibility to collect and expose them, for instance by using Prometheus and the Prometheus Adapter. A minimal CPU-based example follows.
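As a minimal sketch, an HPA that scales a hypothetical web Deployment between 2 and 10 replicas based on average CPU utilisation:

# Sketch only: scale on the built-in CPU metric; thresholds are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                          # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70       # add replicas when average CPU exceeds 70%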

Conclusion

Security, efficiency, dependability, monitoring, and alerting are all challenging to get right in Kubernetes. Its comprehensive feature set means these practices usually have to be applied not only at the deployment level but also at the cluster level.

Use this blog as a reference and a benchmark for your Kubernetes environment running in production. Implementing all of these practices is not mandatory; what applies may vary according to your business logic. See you at the next one!

 

Shubham Singh

He is an SDE-2 DevOps engineer with a demonstrated history of working with scaling startups. A problem solver by mettle who loves watching anime when he is not working, he is a learner and educator at heart.
