In a typical Kubernetes setup, we have worker nodes and master nodes. Master nodes run the api-server, the etcd database, and the controllers; these components implement the so-called Control Plane. Worker nodes talk to the master nodes using the Control Plane Endpoint, usually a hostname or an IP address that tells all the nodes how to reach the Control Plane.

In many production environments, it is desirable to have a Kubernetes Control Plane that is resilient to failures and highly available. For clusters operating in public cloud environments, the options and methodology are usually straightforward: the main cloud providers, such as AWS, GCP, and Azure, have their own HA solutions, which work very well and should be preferred when operating in such environments. For private cloud and edge deployments, there are several different options, most of them based on hardware load-balancers. This project provides the strategy and methodology for a cheaper, software-only solution; however, if you have a hardware load-balancer, that is obviously a better option.

In such environments, the worker nodes all use the load-balancer to talk to the control plane. This balances the load across the master nodes, but it only moves the single point of failure from the masters to the load-balancer. So the usual solution is to rely on a second instance of the load-balancer to protect the first one when it fails. While this makes sense in many cases, in other cases it may be too expensive, both in terms of cost and operations. As a cheaper solution, we'll use a floating Virtual IP address (aka VIP) in front of the masters.

If you just want high availability on the control plane, a VIP-based solution is a solid choice, and it works well without any load balancing as long as your cluster won't generate enough control traffic to saturate a single master. The active master node owns the VIP and handles all the traffic coming from the workers and directed to the api-server, while the other two masters are in stand-by mode. Well, "stand-by" is not completely accurate, since they still run the etcd database and the other components of the control plane. When the active master fails, the VIP is immediately passed to the next master, which becomes the new active master. Depending on the configuration, when the failed master recovers, the VIP is passed back to it. This mechanism guarantees the high availability of the control plane against different types of failures, both hardware and software.

However, if you still want all your master nodes to actively handle the traffic coming from the worker nodes, you can extend the solution above with very little effort by introducing a thin load-balancer layer, as in the following picture.

In this guide, we present a floating Virtual IP address solution for the Kubernetes and OpenShift control plane based on the battle-tested keepalived project by Alexandre Cassen. Keepalived is a routing software written in C, distributed as free software under the terms of the GNU General Public License. The main goal of the project is to provide simple and robust facilities for load-balancing and high-availability to Linux systems and Linux-based infrastructures. The load-balancing framework relies on the well-known and widely used Linux Virtual Server (IPVS) kernel module, which provides Layer-4 load-balancing, and keepalived implements a set of checkers to dynamically and adaptively maintain and manage the load-balanced server pool according to the health of its members. High availability, on the other hand, is achieved by the VRRP protocol, a fundamental building block for router failover.
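To make the VIP failover concrete, here is a minimal sketch of a keepalived VRRP configuration for one master node. The VIP `192.168.1.100`, the interface name `eth0`, the router ID, and the health-check command are all assumptions to adapt to your environment; the other masters would run the same instance in `BACKUP` state with lower priorities.

```conf
# /etc/keepalived/keepalived.conf on the first master (sketch, values assumed).
# Health check: consider this node healthy only if the local api-server answers.
vrrp_script check_apiserver {
    script "/usr/bin/curl -sfk https://localhost:6443/healthz"
    interval 3
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER            # BACKUP on the other two masters
    interface eth0          # assumed interface name
    virtual_router_id 51    # must match on all masters
    priority 101            # lower values (e.g. 100, 99) on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 42
    }
    virtual_ipaddress {
        192.168.1.100       # the floating VIP, assumed
    }
    track_script {
        check_apiserver
    }
}
```

With this in place, the master holding the highest priority among the healthy nodes advertises itself and owns the VIP; when its health check fails, VRRP elects the next master and the VIP moves there.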
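The thin load-balancer layer that lets all masters serve traffic can be sketched with HAProxy running next to keepalived on every master: the VIP owner receives the connections and HAProxy spreads them across all api-servers. The master IP addresses and the `8443` front-end port are assumptions.

```conf
# /etc/haproxy/haproxy.cfg fragment (sketch, addresses assumed).
# TCP passthrough: TLS is terminated by the api-servers themselves.
frontend apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend apiserver-backend

backend apiserver-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server master0 192.168.1.10:6443 check
    server master1 192.168.1.11:6443 check
    server master2 192.168.1.12:6443 check
```

Note the front-end port must differ from 6443 here, because HAProxy and the local api-server would otherwise contend for the same port on each master.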
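For the workers to actually use the VIP, the Control Plane Endpoint must point at it rather than at a single master. With kubeadm-based clusters this is a one-line setting; the VIP address below is an assumption.

```yaml
# kubeadm ClusterConfiguration fragment (sketch): make the Control Plane
# Endpoint the floating VIP instead of one master's address.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "192.168.1.100:6443"   # assumed VIP
```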