Step-by-Step Process to Setup Kubernetes Cluster on Ubuntu

Kubernetes is a platform for managing containerized services and workloads. Containerization is a deployment approach where the application's code is bundled together with all the files and libraries it depends on. These containers are self-sufficient applications and can run on any infrastructure. 

Using Kubernetes, developers can automate the deployment process, making it faster and more efficient. The Kubernetes platform is popular in the industry for its optimization capabilities, load balancing, and simplified container management.  

Understanding Kubernetes

Kubernetes is a growing ecosystem of open-source tooling for container management. Using this platform, we can build and manage a containerized infrastructure with load balancing, disaster recovery, and better scalability.  

Moreover, Kubernetes makes it easier for businesses to configure a container's security settings according to their requirements.  

In addition to configuring the required security settings, we can also restrict the traffic allowed into this ecosystem. Kubernetes represents a crucial stage in the evolution of application and software deployment.  

Things have changed from the traditional deployment era, where digital solutions ran on physical servers, through the virtual deployment era, where Virtual Machines (VMs) ran on a single physical server's hardware. 

But with a containerized deployment system, we can implement automated operations, infrastructure abstraction, and service health monitoring.  

For a business looking for higher agility and scalability in application delivery, Kubernetes supports continuous development, a clean separation between development and operations concerns, and better observability.  

Developed by engineers at Google, Kubernetes is widely used there for containerization. Moreover, Kubernetes is also a vital part of Google Cloud services.  

Kubernetes Architecture

Kubernetes runs on a predefined architecture consisting of: 

Kubernetes Master:

This element has four components that collectively manage the Kubernetes cluster.  

API Server: This is the focal point of the Kubernetes ecosystem; all other components communicate with the cluster through it.  

etcd: A key-value store that saves configuration data and the state of the Kubernetes cluster.  

Controller Manager: The controller manager takes care of task automation in the Kubernetes cluster, keeping the cluster's actual state in line with the desired state.  

Scheduler: The scheduler assigns workloads to worker nodes in the cluster.  

Kubernetes Worker Node:

Worker nodes in Kubernetes run the application workloads as Pods and Deployments. 

Kubelet: The kubelet is the agent that runs on each worker node and manages the containers used within that node.  

Container Runtime: This refers to the software responsible for running the containers.  

Kube-proxy: This component of the Kubernetes architecture manages network connectivity on each node, maintaining the rules that connect containers to their Services.  
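
These components are easy to see once a cluster is running. As a quick illustration (assuming kubectl has been configured as described in the installation steps below), the following commands list the nodes and the control-plane components running in the kube-system namespace:

kubectl get nodes                  # lists the master and worker nodes in the cluster
kubectl get pods -n kube-system    # shows control-plane components such as the API server, etcd, and the scheduler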

Advantages of Using Kubernetes

1. Optimized Development:

A significant advantage of using Kubernetes is the efficiency it brings to the deployment environment. When businesses choose to deploy the applications on the cloud, Kubernetes gives them a platform to schedule and run containers.

2. Operations are Automated:

Kubernetes provides built-in commands that are especially useful for application management. Using this feature, businesses benefit from the automation of common daily operations, so applications run the way you want them to, day after day.

3. Infrastructure Abstraction:

Given the workloads developers allocate to Kubernetes, it handles the compute, networking, and storage tasks. As a result, developers can focus on building innovative applications rather than stressing about the environment they run on.

Kubernetes Installation Requirements

The scope of the installation varies with the number of master and worker nodes your deployment requires.

The following process and requirements are for the installation of one worker node and one master node in Kubernetes.

• Ubuntu servers with sudo privileges and the following configuration (a quick way to verify these specs is shown after this list):
• Master: 4-core CPU with 16 GB memory
• Worker: 4-core CPU with 8 GB memory
• Docker runtime
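
Before starting, you may want to confirm that each server meets these specifications; a minimal check on any Ubuntu host could look like this:

nproc              # number of CPU cores
free -h            # total and available memory
lsb_release -a     # Ubuntu release in use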

Kubernetes Installation Steps

Following are the installation steps for Kubernetes on Ubuntu; 

1. Updating Ubuntu

Begin by updating Ubuntu on each Kubernetes node with #sudo apt update. This command refreshes the package lists from the configured repositories, so that APT knows about the newest available versions of the packages before anything is installed or upgraded.
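
A minimal sketch of this step, assuming sudo access on every node (the second line is optional and applies any pending package upgrades as well):

sudo apt update        # refresh the package lists from the repositories
sudo apt upgrade -y    # optionally apply the pending upgrades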

2. Docker Installation

We need to run the command #sudo apt install docker.io to install Docker on Ubuntu. The "install" command is used for package installation, and docker.io is the package that will be installed.

Running this command in the terminal installs Docker, an open-source platform we can use to build, ship, and run distributed applications with the help of containers.

To check if docker has been installed properly, use the following command;

#docker --version
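
Put together, and assuming the docker.io package from Ubuntu's own repositories is acceptable for your setup, the step looks like this:

sudo apt install -y docker.io    # install Docker from the Ubuntu repositories
docker --version                 # verify that Docker was installed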

3. Enabling Docker on Each Node

For each node, you must run the following commands;

# sudo systemctl enable docker:

This command configures Docker to start automatically at system boot. The systemctl command is used for controlling the systemd system and service manager.

# sudo systemctl start docker:

After entering this command, the Docker service starts immediately and you can begin using it.

Once this is set, use #sudo systemctl status docker to check whether Docker is running.
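
A compact version of this step, to be run on every node:

sudo systemctl enable docker    # configure Docker to start automatically at boot
sudo systemctl start docker     # start the Docker service immediately
sudo systemctl status docker    # confirm that the service is active (running)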

4. Add Signing Key

The next two commands you must input into the terminal include;

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg:

Developers working with Kubernetes can use the above command to download the GPG key that is used to sign the Kubernetes packages, published through Google's package server. Here, curl transfers data between the server and the local system, and the -s flag tells it to run in silent mode.

# sudo apt-key add -:  

The second command adds the downloaded GPG key to the system's trusted keys. apt-key is used to manage the keys APT relies on for authentication, and the trailing dash tells it to read the key from standard input, i.e., from the curl command above.
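
In practice, the two commands above are usually combined into a single pipeline so the downloaded key is passed straight to apt-key; a sketch of that combined form (using the same legacy apt-key method described above) is:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -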

5. Add Software Repositories

The following command, #sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main", allows the Advanced Package Tool (APT) on the Ubuntu system to use the Kubernetes package repository.  

Specifically, apt-add-repository instructs the system to add a new package repository to the list of sources available to APT. The URL tells the system where the repository is located, so it can fetch the required packages from there.
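
A sketch of this step on each node, using the repository named above and refreshing the package lists afterwards so APT picks it up:

sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt update    # refresh the package lists so APT sees the new repository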

6. Install Kubernetes on Every Node

In this step, developers must run two commands to install Kubernetes on Ubuntu. Together, they install the tooling and pin the system to specific versions of Kubernetes.

# sudo apt-get install kubeadm kubelet kubectl:

This is the command we can use to install three Kubernetes components, kubeadm, kubelet, and kubectl, with the APT package manager on Ubuntu. They are useful for setting up the cluster, node management, and interaction with the API server.

# sudo apt-mark hold kubeadm kubelet kubectl:

We can use this command to hold the Kubernetes components with ‘apt-mark.’ As a result, the “held” packages are not upgraded automatically, which keeps the components at a specific version.

Once that is done, you can use the command # kubeadm version to check the current Kubernetes version.
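
Combined, the step looks like this on every node; the -y flag simply answers APT's confirmation prompt automatically:

sudo apt-get install -y kubeadm kubelet kubectl    # install the three Kubernetes components
sudo apt-mark hold kubeadm kubelet kubectl         # prevent automatic upgrades of these packages
kubeadm version                                    # confirm the installed version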

7. Disabling Swap Memory of Each Node, If Enabled

Swap space in Linux lets the system use part of a hard drive or SSD as virtual memory, which is normally useful for extending the available memory.

However, swap is not useful in a Kubernetes cluster, as it causes issues with resource allocation. Hence, using the command # sudo swapoff -a on each node, we can disable it and prevent those issues.
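
Note that swapoff -a only disables swap until the next reboot. A common way to make the change persistent, sketched below, is to comment out the swap entry in /etc/fstab; the sed pattern is an assumption about how that entry is written, so review the file afterwards:

sudo swapoff -a                                   # disable swap immediately
sudo sed -i.bak '/\bswap\b/ s/^/#/' /etc/fstab    # comment out swap entries; a .bak backup of the file is kept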

8. Assigning Hostname to Each Node

The next step in Kubernetes configuration is setting the hostname for the master node and worker nodes. For this purpose, we can use two different commands for the respective node;

# sudo hostnamectl set-hostname master-node:

hostnamectl is the command used to set the system name on Linux; the set-hostname subcommand gives the master node a name that the rest of the setup can refer to.

# sudo hostnamectl set-hostname worker-node-01:  

Similar to setting the name for the master node, here we set the name for the worker node.

Remember that the hostname can be any string that is a valid hostname. Use descriptive names so that each system, and its place in the network or cluster environment, is easier to identify.

In case there are more worker nodes, follow the same process to set a unique name and identifier for each of the nodes.
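
A sketch of this step, run on the respective machines; the names master-node and worker-node-01 come from the commands above and can be replaced with anything descriptive:

sudo hostnamectl set-hostname master-node       # run on the master
sudo hostnamectl set-hostname worker-node-01    # run on the first worker
hostnamectl                                     # confirm the new hostname on each machine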

9. Initializing Kubernetes on Master Node

This is the command that takes you to the final steps of Kubernetes installation and execution. Whether you want to set up Kubernetes for disaster recovery or any other purpose, the initialization process requires running the following command;

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The command must be run with superuser privileges because it performs administrative tasks on the node. kubeadm init bootstraps a new Kubernetes cluster by setting up the control plane on the current machine.

The --pod-network-cidr=10.244.0.0/16 part of the command specifies the CIDR range for the pod network in the Kubernetes cluster. It defines the range of IP addresses, 10.244.0.0/16 in our example, from which pod IP addresses will be allocated.

So, after this command, developers have a newly created Kubernetes control plane running on the current node with a predefined CIDR range for the pod network.

In addition, once the command is executed, kubeadm prints a message prompting you to join the worker nodes to the master node.
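
The join instruction printed by kubeadm init looks roughly like the sketch below. The address, token, and hash shown here are placeholders; copy the exact line from your own kubeadm output and run it on each worker node:

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>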

10. Setup Home Variables for Kubectl

The last step in the Kubernetes installation and execution is setting the home variables for kubectl on the master node. Execute these three commands;

# mkdir -p $HOME/.kube:

This creates a new hidden directory named .kube inside the current user's home directory. The -p flag also tells the system to create any missing parent directories.

# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config:

This command copies the Kubernetes admin configuration file into the .kube directory as a file named config.

# sudo chown $(id -u):$(id -g) $HOME/.kube/config:

With this command, you change the ownership of the configuration file copied in the previous step to the current user. $(id -u) and $(id -g) expand to the user ID and group ID of that user.

Ultimately, this trio of commands builds a .kube directory inside the user's home directory and copies in the Kubernetes configuration file for the cluster administrator.
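
Put together, and run as the regular (non-root) user on the master node, the step is this short sequence (assuming kubeadm init completed successfully in the previous step):

mkdir -p $HOME/.kube                                        # create the hidden .kube directory
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config    # copy the admin kubeconfig
sudo chown $(id -u):$(id -g) $HOME/.kube/config             # make the current user its owner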

To check the status of the nodes after these commands have been executed, use the command,

# kubectl get nodes

Conclusion

For the development of modern applications, businesses leverage different approaches and processes. The motive is to ensure their applications deliver the best user experience while streamlining backend tasks for more productive output. Using the process above, you can set up Kubernetes on Ubuntu systems and lay the foundation for a smooth DevOps implementation.