Kubernetes Minikube: Guide To Local Development And Testing

With Gartner predicting that 85% of businesses will embrace a 'cloud-first' approach by 2025, tools like Kubernetes will only become more central to everyday development work. If you're reading this, you're likely looking to use Kubernetes in your local development environment, and that is exactly where Minikube comes in.

This guide will walk you through the basics of Minikube, enabling you to develop and test locally with ease, thus accelerating your deployment pipeline and enhancing your productivity. By the end, you’ll have a thorough understanding of local development and testing using Minikube, a skill that’s becoming increasingly valuable in today’s cloud-centric world. 

Let’s begin:

Prerequisites For Using Kubernetes Minikube

Before we move ahead with this tutorial, it’s critical to ensure that your environment is adequately prepared. Here are the prerequisites to consider for a seamless local development and testing experience with Minikube:

#1 Basic Familiarity with Kubernetes

To effectively follow this guide, you should have a basic understanding of Kubernetes concepts. For more insight, you can refer to this Kubernetes article on our blog.

#2 System Requirements

Your system should have at least 2 CPUs, 2GB of free memory, and 20GB of free disk space. These resources are a bare minimum for the proper functioning of Minikube. 

#3 Reliable Internet Connection

While working with Kubernetes and Minikube, it’s important to have a stable internet connection. This ensures smooth downloads and operation of the various tools and drivers needed.

#4 Docker Installation

The Docker container runtime should be installed in your environment, whether that's Windows, macOS, or Linux. If you are using Docker on Linux, make sure it's configured to work without sudo privileges. For non-Linux users, Docker's documentation provides the necessary installation steps.

#5 Homebrew Package Manager

Homebrew is another essential tool for this process. For macOS and Linux users, Homebrew can be installed directly. However, if you’re using Windows, you can install Homebrew under the Windows Subsystem for Linux (WSL).
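If you already have Homebrew set up, it can also install kubectl and Minikube (covered in step 7 below); a quick sketch using the formula names from Homebrew's core tap:

brew install kubectl

brew install minikube

The manual curl-based installation shown later works just as well if you prefer not to use a package manager.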

#6 Virtualization Drivers

To work with Minikube, your system should have at least one supported virtualization driver installed. Options include Docker, HyperKit, Hyper-V, KVM, Parallels, Podman, VirtualBox, and VMware. It's important to install one of these before starting with Minikube. For simplicity and reliability, it's recommended to use VirtualBox as the Minikube driver, as it tends to work consistently across platforms.

#7 Kubectl and Minikube Installation

Kubectl is a command-line tool for controlling Kubernetes clusters, while Minikube runs a single-node Kubernetes cluster on your personal computer. These tools should be installed based on your operating system:

  • For macOS users: You can install a hypervisor driver such as HyperKit, VirtualBox, or VMware Fusion, followed by kubectl and Minikube. For installation, use the commands below:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"

chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

These commands download the latest stable release of kubectl and the latest release of Minikube, make the downloaded binaries executable, and finally move them to the /usr/local/bin directory, which is generally on the system's PATH.

  • For Windows users: You simply need to download the minikube-windows-amd64.exe file from the Minikube releases page, rename it to minikube.exe, and add it to your PATH. This process is generally done manually via the system's GUI rather than the command line.
  • For Linux users: You can install VirtualBox or KVM, then Kubectl and Minikube. Use the below command for installation.
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

You can find the latest versions of these commands on the Minikube GitHub releases page.

Verifying the Installation

Now that you’ve installed both Minikube and kubectl, you must verify that they’re installed correctly.

For Minikube, open a new terminal window and type minikube version. You should see a response with your installed version of Minikube.

For kubectl, type kubectl version --client in your terminal window. You should see a response with your installed version of kubectl.

If you see the version numbers without any errors, congratulations! You have successfully set up your local development environment with Minikube and kubectl. In the next section, we’ll dive deeper into Kubernetes concepts and how you can start your first Minikube cluster.

By ensuring you have these prerequisites in place, you will be able to learn local development and testing using Kubernetes Minikube without any trouble.

Introduction To Kubernetes Concepts

Before we move forward, it’s essential to know and understand fundamental Kubernetes concepts. Kubernetes can seem complex at first but don’t worry. I’ll break it down into digestible chunks:

Pods: A Pod is a single instance of an application, which may consist of one or more containers. It runs on a single node and represents the smallest deployable unit within a Kubernetes cluster.

Services: A service is a network abstraction that defines a logical set of pods and enables external traffic exposure, load balancing, and service discovery for those pods. The set of pods targeted by a service is usually determined by labels.

Deployments: A deployment is a higher-level concept that manages Pods and ReplicaSets. It describes the desired state for your instances and manages rollouts and rollbacks to your application.

Namespaces: Namespaces are a way to divide cluster resources between multiple users or teams. Put simply, they’re like virtual clusters sitting on top of the actual Kubernetes cluster.
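To make these definitions concrete, here is a minimal, hypothetical Deployment manifest; the name hello-web and the nginx image are placeholders chosen only to show how the pieces fit together:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25

The Deployment keeps two replica Pods running at all times, and a Service selecting the label app: hello-web would then give them a single stable address.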

How These Concepts Relate

Let’s now see how these concepts work together within a Kubernetes cluster so that you can follow this tutorial easily:

Imagine that you’re deploying a simple web application. The application itself runs in a container. This container is wrapped inside a Pod (Kubernetes’ smallest unit) and it gets deployed on a node within your cluster.

You might have multiple replicas of your pod running for redundancy or to handle the load, managed by a ReplicaSet. A Deployment oversees the ReplicaSet, ensuring that the right number of pods are running at all times.

A Service would then be used to provide a single point of access to these replicated pods. This could be via load balancing incoming requests or by providing a stable IP address and DNS name that other pods can use to access your application, regardless of the underlying pods’ lifecycle.

Namespaces come into play when you want to segregate parts of your cluster for different applications, teams, or environments (like dev, staging, and production). Naturally, each namespace can have its own set of resources, policies, and permissions.

Understanding these concepts is crucial when working with Kubernetes. With the basics covered, we’re ready to dive into Minikube and start our journey of local development and testing.

In the following sections, we’ll start our Minikube cluster, deploy an application, expose it as a service, and understand the practical application of these concepts.

Interacting With Your Cluster

After setting up your Kubernetes cluster with Minikube, the next step is to interact with it and start deploying your applications. This is done through the Kubernetes command-line interface, kubectl.
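If you have not started a cluster yet, a single command brings one up. The driver flag is optional; when given, it should name whichever virtualization driver you installed earlier (VirtualBox is used here purely as an example):

minikube start --driver=virtualbox

minikube status

Running minikube status confirms that the cluster, kubelet, and apiserver are up before you go any further.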

Using Kubectl

If you have already installed kubectl on your local machine, you can start using it to interact with your cluster immediately. Run the following command to get the status of all pods across all namespaces in your cluster:

kubectl get po -A

This command will list all the pods currently running in your Kubernetes cluster, including those in the kube-system namespace, which Kubernetes uses to manage the cluster itself.

Using Minikube’s Built-In Kubectl

If you have not installed kubectl, Minikube can help you here as well. Minikube comes with a built-in version of kubectl that you can use without any additional installation. Here is how you use it:

minikube kubectl -- get po -A

The command above performs the same action as the previous kubectl command, displaying all pods in all namespaces.

Making Kubectl Easier to Use

To save time and make it easier to run kubectl commands, you can add an alias to your shell configuration file (.bashrc, .zshrc, etc.) as follows:

alias kubectl="minikube kubectl --"

After adding this alias, you can use kubectl commands just as if you had kubectl installed on your machine, and the commands will be directed to your Minikube cluster.

Understanding Your Cluster State

When you first create your cluster, you may notice that not all pods are in a 'Running' state yet. This is quite normal during startup, as some components take a bit longer to initialize.

Minikube includes the Kubernetes Dashboard, a web-based user interface for Kubernetes to add more visual insight into the state of your cluster. You can start the dashboard with the following command:

minikube dashboard

The dashboard provides a user-friendly interface to interact with your cluster, allowing you to visually monitor the state of your workloads and cluster resources. You can start, stop, and scale applications, and you can even edit individual Kubernetes resources.
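If you would rather not have Minikube open a browser for you, the --url flag prints the dashboard address instead, so you can open it wherever you like:

minikube dashboard --url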

Learning Deployment

With that foundation in place, we can move on to deployments:

Deploying a Python Flask Application

Create a deployment and expose it on port 5000:

kubectl create deployment flask-app --image=myuser/flask-app:1.0

kubectl expose deployment flask-app --type=NodePort --port=5000

Check your deployment:

kubectl get services flask-app

Access the service with Minikube or forward the port:

minikube service flask-app

# or

kubectl port-forward service/flask-app 7050:5000

The application will be accessible at http://localhost:7050/.
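Note that myuser/flask-app:1.0 is a placeholder image name; any image that listens on port 5000 will work. Purely as an illustration, a minimal Flask application along these lines would satisfy that assumption:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A trivial endpoint so the NodePort/LoadBalancer examples have something to return
    return "Hello from Minikube!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the container port is reachable from outside the Pod
    app.run(host="0.0.0.0", port=5000)

After building it into an image (for example with a standard python:3.11-slim Dockerfile), you can make it available inside the cluster with minikube image load myuser/flask-app:1.0 instead of pushing it to a registry.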

Deploying a Load Balanced Python Flask Application

Create a deployment and expose it as a LoadBalancer service:

kubectl create deployment flask-loadbalanced --image=myuser/flask-app:1.0

kubectl expose deployment flask-loadbalanced --type=LoadBalancer --port=5000

Start the Minikube tunnel:

minikube tunnel

Find the External IP:

kubectl get services flask-loadbalanced

Your Flask application is available at <EXTERNAL-IP>:5000.
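With only a single replica, there is nothing for the load balancer to actually balance across. You can scale the deployment out so that incoming traffic is spread over several pods (three replicas here is an arbitrary choice):

kubectl scale deployment flask-loadbalanced --replicas=3

kubectl get pods -l app=flask-loadbalanced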

Deploying Echo-Server Service ‘alpha’

Create a Pod and Service:

kind: Pod
apiVersion: v1
metadata:
  name: alpha-app
  labels:
    app: alpha
spec:
  containers:
    - name: alpha-app
      image: 'kicbase/echo-server:1.0'
---
kind: Service
apiVersion: v1
metadata:
  name: alpha-service
spec:
  selector:
    app: alpha
  ports:
    - port: 8080
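Assuming you save both manifests in a single file (alpha.yaml is just an example name), you can apply them and reach the echo server through a port-forward:

kubectl apply -f alpha.yaml

kubectl port-forward service/alpha-service 8080:8080

curl http://localhost:8080/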

Setting Up Test Environments

Just like you would separate your development, staging, and production environments, it’s also a good idea to maintain a separate environment for testing. You can do this using Kubernetes namespaces.

To create a new namespace, use the following command:

kubectl create namespace test

When deploying your application for testing, you can specify this namespace to isolate it from other applications:

kubectl apply -f your-app.yaml -n test
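You can then list only the resources in that namespace, which keeps test workloads clearly separated from everything else:

kubectl get all -n test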

Running Tests

There are many ways to run tests against your applications in Minikube. The best method depends on the nature of your application and your testing strategy.

For instance, you could use a tool like Postman or curl to make HTTP requests to your service and validate the responses. 
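For example, with the port-forward from the earlier Flask deployment still running, a quick smoke test can be as simple as checking that the endpoint responds (what counts as a passing response depends entirely on your application):

curl -i http://localhost:7050/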

Alternatively, you might use a Kubernetes Job to run a test suite within the cluster. Let’s assume you have a test suite in a Docker image called my-app-test-suite, which contains all the tests for your application.

apiVersion: batch/v1
kind: Job
metadata:
  name: my-app-test-suite
spec:
  template:
    spec:
      containers:
      - name: test-runner
        image: my-app-test-suite
        command: ["./run-tests"]
      restartPolicy: Never
  backoffLimit: 4

Explanation:

This YAML file defines a Job resource named my-app-test-suite. The Job will create a Pod that runs the my-app-test-suite image. The command array specifies the command that will be run when the container starts. In this case, it’s running a script in the image called run-tests.

The restartPolicy: Never setting means a failed container is not restarted in place; instead, the Job controller creates a new Pod for each retry. The backoffLimit specifies that Kubernetes should give up on the Job after 4 failed attempts.

You can create the Job with the kubectl apply -f job.yaml command, replacing job.yaml with the name of your YAML file.

You can then monitor the status of the Job with kubectl describe job my-app-test-suite, and view the logs of the test runner with kubectl logs jobs/my-app-test-suite.

Remember, this is a simplified example. In a real-world scenario, your test suite might need to communicate with your application, which would require additional configuration such as setting environment variables, adding a Service, etc.
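As a rough sketch of that idea, the test-runner container could be pointed at the Flask service from earlier via an environment variable; APP_BASE_URL is a hypothetical name here, and the right value depends on the Service name and namespace you actually used:

      containers:
      - name: test-runner
        image: my-app-test-suite
        command: ["./run-tests"]
        env:
        - name: APP_BASE_URL
          value: "http://flask-app.default.svc.cluster.local:5000"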

It’s also worth mentioning that Kubernetes supports readiness and liveness probes. These can be used to automatically check the health of your application and restart it if necessary. While not a replacement for a full test suite, these probes can help you to catch issues that might otherwise go unnoticed.
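As an illustration, probes for the Flask deployment could be added to its container spec like this; the /healthz path is a hypothetical endpoint the application would need to expose:

        livenessProbe:
          httpGet:
            path: /healthz
            port: 5000
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /healthz
            port: 5000
          initialDelaySeconds: 3
          periodSeconds: 5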

Service Mocks and Other Testing Techniques

Sometimes, you might want to test your application in isolation from its dependencies. For example, you might have a service that relies on a database or an external API.

In these cases, you can use service mocks to simulate these dependencies. A service mock is a stand-in for the real service, which returns predefined responses to requests. This can be useful for testing how your application handles specific scenarios or errors.

Various tools, such as Speedscale, can generate service mocks, and many of them run directly within your Kubernetes cluster. You can also use the stub or the sidecar pattern to simulate dependencies at the pod level.
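A low-tech version of the same idea is to deploy a simple echo server behind a Service whose name matches the dependency your application expects to call; payments-api below is a hypothetical dependency name used purely for illustration:

kubectl create deployment payments-api-mock --image=kicbase/echo-server:1.0 -n test

kubectl expose deployment payments-api-mock --name=payments-api --port=80 --target-port=8080 -n test

Requests your application sends to http://payments-api inside the test namespace now reach the mock instead of the real service.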

Remember, the goal of testing in Minikube is to validate your applications in an environment that mirrors production as closely as possible. By leveraging Minikube’s features and Kubernetes’ flexibility, you can build a powerful and efficient testing pipeline.

Wrap Up

The process described above lets you experiment, learn, develop, and test your applications while simulating a production-like environment, which is invaluable in the era of cloud-native applications.

Remember, the key to mastering Kubernetes and Minikube is practice and exploration, and it won't always be simple. But don't be afraid to try new things, make mistakes, and learn from them.

I hope this guide has been helpful and has sparked your interest in further exploring what you can achieve with Kubernetes and Minikube. 

Looking for a DevOps job? 

Join Talent500 now!

Neel Vithlani