Configure a Kubernetes cluster using Ansible.
Creating a Kubernetes cluster using Ansible in just one click.
Hello viewers…😊😊
So in this article I am going to configure a Kubernetes multi-node cluster over EC2 instances using Ansible. The full description of the task is noted below…
🔅 Create roles for provisioning EC2 instances and configuring them as the Kubernetes cluster's master and slave nodes.
🔅 Create a main playbook so that running it provisions all instances and sets up the cluster.
A multi-node cluster in Kubernetes contains two or more slave nodes connected to a master node. Such a cluster is used to avoid a single point of failure.
So here, I am assuming you all have some basic knowledge of what Ansible, AWS, Kubernetes etc. are.
So let's start….
Starting with provisioning EC2 instances through Ansible
To provision an EC2 instance through Ansible we have to set up an environment with…
- Some Python libraries like boto, boto3, botocore
- Some files for dynamic inventory like:
- ec2.py
- ec2.ini
All the pre-requirements for provisioning an EC2 instance are shown in my last article. I have provided the link below; please go through it.
Read that article till "Now all set to launch a instance on AWS, just need to write a play book for provisioning a instance."
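For reference, a minimal ansible.cfg for this kind of dynamic-inventory setup could look roughly like the sketch below. Every path, user and key name here is a placeholder for whatever you set up while following that article.
[defaults]
inventory = /etc/ansible/ec2.py          # the dynamic inventory script (placeholder path)
host_key_checking = False                # don't prompt for SSH host keys of new instances
remote_user = ec2-user                   # default user of Amazon Linux AMIs
private_key_file = /root/mykey.pem       # placeholder: the key pair used for the instances

[privilege_escalation]
become = True
become_method = sudo
become_user = root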
So all the pre-setup is done, now let's jump to our project.
This is my main playbook to configure the whole Kubernetes cluster in just a single click. After the whole cluster is set up, if you need some more slave nodes, just run it again and it will provision and configure as many slave instances as we want.
So let's try to understand what I did…
- hosts: localhost
  vars:
    instance_name_tag: "master"
    no_of_instances: "1"
  vars_prompt:
    - name: choice
      prompt: "---Do You Want a master node instance,true or false---"
      private: no
  roles:
    - name: Ec2 role to launch a master server
      role: "ec2"
      when: choice|bool == true
Firstly, on localhost, with the help of the ec2 role I provision a master node instance on AWS.
In the code you can see a prompt appears: “---Do You Want a master node instance,true or false---”. If I enter true then the master instance is provisioned; if I enter false the task is skipped, i.e. no instance is provisioned.
- hosts: localhost
  vars:
    instance_name_tag: "slave"
  vars_prompt:
    - name: choice
      prompt: "---Do You Want a slave node instances,true or false---"
      private: no
    - name: no_of_instances
      prompt: "---How many slave node instances you want in Numbers,eg.1.---"
      private: no
  roles:
    - name: Ec2 role to launch slave servers
      role: "ec2"
      when: choice|bool == true
Now again on localhost, with the help of the ec2 role, I provision slave node instances on AWS.
In the code you can see a prompt appears: “---Do You Want a slave node instances,true or false---”. If I enter true then the slave instances are provisioned; if I enter false the task is skipped, i.e. no instances are provisioned.
Then another prompt appears: “---How many slave node instances you want in Numbers,eg.1.---”. By giving a value like 1, 2, 3… we can provision as many slave instances as we want.
  tasks:
    - name: waiting for instances ssh
      command: sleep 30
      when: choice|bool == true

    - name: refresh inventory
      meta: refresh_inventory
Then on localhost I perform two tasks:
The first one, named waiting for instances ssh, waits a few seconds so that the AWS instances can finish booting.
The second one, named refresh inventory, refreshes our Ansible dynamic inventory to fetch the new IPs of the instances I provisioned on AWS.
- hosts: tag_name_master
  roles:
    - name: "role for master"
      role: "k8s_cluster"
Now on the master host (tag_name_master), with the help of the k8s_cluster role, I configure this instance as the master node of the Kubernetes cluster.
- hosts: tag_name_slave
  roles:
    - name: "role for slave"
      role: "k8s_cluster"
Now on the slave hosts (tag_name_slave), with the help of the k8s_cluster role, I configure these instances as slave nodes of the Kubernetes cluster.
So this is all about my main playbook, but here I am using two Ansible roles:
- ec2
- k8s_cluster
Let's talk about the ec2 role
I created this role for provisioning EC2 instances on AWS. To run this role we have to do the pre-setup mentioned above; now just have a look at the ec2 role's tasks and vars files.
ansible-galaxy init ec2
This is the command to initialize any Ansible role.
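Running it creates the standard role skeleton, roughly like this:
ec2/
├── defaults/main.yml     # default variables
├── files/                # static files
├── handlers/main.yml     # handler tasks
├── meta/main.yml         # role metadata
├── tasks/main.yml        # the main task list
├── templates/            # Jinja2 templates
├── tests/                # test inventory and playbook
├── vars/main.yml         # role variables
└── README.md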
In the above image you can see I wrote all the tasks in the tasks file and kept the variables in the vars file.
- name: launching a ec2 instance on aws
  ec2:
    key_name: "{{ key }}"
    instance_type: "{{ instance_type }}"
    image: "{{ os_image }}"
    wait: yes
    count: "{{ no_of_instances }}"
    instance_tags:
      name: "{{ instance_name_tag }}"
      country: "{{ instance_country_tag }}"
      region: "{{ instance_region_tag }}"
    vpc_subnet_id: "{{ vpc_subnet }}"
    region: "{{ region_name }}"
    assign_public_ip: yes
    state: present
    group_id: "{{ security_group_id }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  register: data

- name: debug
  debug:
    var: data
In the first task, named launching a ec2 instance on aws, I am just using the ec2 module provided by Ansible, as you can see in the above image or code.
Then in the task named debug I print the registered variable data.
So with the ec2 role we can launch as many instances on AWS as we want.
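For reference, the vars/main.yml of the ec2 role could look roughly like the sketch below. Every value here is a placeholder rather than my actual configuration (instance_name_tag and no_of_instances come from the main playbook), and in a real setup the AWS keys are better kept in an Ansible Vault file.
# vars/main.yml of the ec2 role -- all values are placeholders
key: "mykeypair"                            # EC2 key pair name
instance_type: "t2.micro"
os_image: "ami-0123456789abcdef0"           # AMI ID of your region
instance_country_tag: "IN"
instance_region_tag: "ap-south-1"
vpc_subnet: "subnet-0123456789abcdef0"
region_name: "ap-south-1"
security_group_id: "sg-0123456789abcdef0"   # must allow SSH and the Kubernetes ports
aws_access_key: "YOUR_ACCESS_KEY"
aws_secret_key: "YOUR_SECRET_KEY"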
Let's come to the k8s_cluster role
I created this role for configuring machines as master and slave nodes. With this single role we can configure as many systems as we want as master and slave nodes.
ansible-galaxy init k8s_cluster
So you can see I wrote all the tasks in the tasks file, and all the tasks to be done by handlers in the handlers file.
A total of 17 tasks are written in this role: a few are common to both the master and slave nodes, a few are only for the master node, and a few are only for the slave node.
Let's also have a look at the handlers file…
Now let's see, one by one, all the tasks done by the k8s_cluster role.
- name: "installing docker"
package:
name: "docker"
state: present- name: "creating a file and changing the driver of docker from cgroupdriver to systemd"
copy:
dest: /etc/docker/daemon.json
content: |
{
"exec-opts":["native.cgroupdriver=systemd"]
}- name: "starting and enabling docker"
service:
enabled: yes
name: docker
state: started
notify:
- configuring yum repo for kubeadm,kubelet & kubectl
- installing kubeadm,kubelet & kubectl
- starting and enabling kubelet service
- pulling config images
- installing iproute-tc
- change the value bridge-nf-call-iptables
- copying flannel-cfg file
- initializing kubeadm and config master as a client
- storing token value
- fetch the token value file from masterhost to localhost
- config kubernetes config file
- restart system
- copy token file from local host to slave host
- converting token file to execution mode and then execute it
Task 1, named “installing docker”, installs Docker on both the master and slave nodes.
Task 2, named “creating a file and changing the driver of docker from cgroupdriver to systemd”, changes the Docker cgroup driver to systemd: it first creates the file and then writes the content into it. This task runs on both the master and slave nodes.
Task 3, named “starting and enabling docker”, starts and enables the Docker service on the system. This task runs on both the master and slave nodes.
After task 3 runs, it notifies the handlers to perform the remaining tasks. I did it this way because, after one successful run of the main playbook, if we run the playbook again these handler tasks are not run again on the machines where they already ran (handlers only fire when the notifying task reports a change).
So now let's see the handler tasks….
# handlers file for k8s_cluster
- name: "configuring yum repo for kubeadm,kubelet & kubectl"
  yum_repository:
    name: "kubernetes"
    description: "repo for kubernetes"
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
    enabled: yes
    repo_gpgcheck: yes
    gpgcheck: yes
    gpgkey:
      - "https://packages.cloud.google.com/yum/doc/yum-key.gpg"
      - "https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg"
Task 4, named “configuring yum repo for kubeadm,kubelet & kubectl”, configures the yum repo for kubeadm, kubelet and kubectl. This task runs on both the master and slave nodes.
- name: "installing kubeadm,kubelet & kubectl"
  yum:
    name: ["kubeadm", "kubelet", "kubectl"]
    disable_excludes: "kubernetes"
    state: present
Task 5, named “installing kubeadm,kubelet & kubectl”, installs kubeadm, kubelet and kubectl on both the master and slave nodes.
- name: "starting and enabling kubelet service"
  service:
    enabled: yes
    name: "kubelet"
    state: started
Task 6, named “starting and enabling kubelet service”, starts and enables the kubelet service. This task runs on both the master and slave nodes.
- name: "pulling config images"
shell:
cmd: "kubeadm config images pull"
Task 7, named “pulling config images”, pulls the Docker images needed to run the pods that are necessary for the cluster. This task runs on both the master and slave nodes.
- name: "installing iproute-tc"
package:
name: "iproute-tc"
state: present
Task 8, named “installing iproute-tc”, installs the iproute-tc package. This task runs on both the master and slave nodes.
- name: "change the value bridge-nf-call-iptables"
lineinfile:
path: /proc/sys/net/bridge/bridge-nf-call-iptables
line: "1"
state: present
when: "'tag_name_master' in group_names"
Task 9, named “change the value bridge-nf-call-iptables”, sets the value of the bridge-nf-call-iptables kernel file to 1. This task runs only on the master node.
- name: "copying flannel-cfg file"
template:
src: /flannel-cfg/flannel.yml
dest: /root/flannel.yml
when: "'tag_name_master' in group_names"
Task 10, named “copying flannel-cfg file”, copies the Flannel config file onto the master node.
- name: "initializing kubeadm and config master as a client"
shell:
cmd: "{{ item }}"
chdir: /root
with_items:
- "kubeadm init --pod-network-cidr={{ network_ip }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
- "mkdir -p $HOME/.kube"
- "sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
- "sudo chown $(id -u):$(id -g) $HOME/.kube/config"
- "kubectl apply -f flannel.yml"
when: "'tag_name_master' in group_names"
Task 11, named “initializing kubeadm and config master as a client”, initializes the cluster with kubeadm, configures the master node as a kubectl client, and applies the Flannel network. This task runs only on the master node.
- name: "storing token value"
shell:
cmd: "kubeadm token create --print-join-command > /root/token.py"
when: "'tag_name_master' in group_names"
Task 12, named “storing token value”, stores the join command with its token, which will be used later to join the slave nodes to the master. This task runs only on the master node, as the token can only be generated by the master node.
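The generated file is really just a one-line shell script containing the join command; its shape is roughly the following, with the IP, token and hash shown here only as placeholders:
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>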
- name: "fetch the token value file from masterhost to localhost"
fetch:
src: /root/token.py
dest: /token/
flat: yes
when: "'tag_name_master' in group_names"
Task 13, named “fetch the token value file from masterhost to localhost”, fetches the token file, which will be used later to join the slave nodes to the master, from the remote master host to localhost. As it fetches the file from the master, it runs only on the master node.
- name: "config kubernetes config file"
copy:
dest: "/etc/sysctl.d/k8s.conf"
content: |
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
when: "'tag_name_slave' in group_names"
Task 14, named “config kubernetes config file”, creates the Kubernetes sysctl config file. This task runs only on the slave nodes.
- name: "restart system"
shell:
cmd: "sysctl --system"
when: "'tag_name_slave' in group_names"
Task 15, named “restart system”, reloads the kernel settings with sysctl --system so the config file above takes effect. This task runs only on the slave nodes.
- name: "copy token file from local host to slave host"
copy:
src: /token/token.py
dest: /etc/
when: "'tag_name_slave' in group_names"
Task 16, named “copy token file from local host to slave host”, copies the token file that we fetched from the master onto the slave nodes. This task runs only on the slave nodes.
- name: "converting token file to execution mode and then execute it"
shell:
cmd: "{{ item }}"
chdir: /etc/
with_items:
- "chmod +x token.py"
- "./token.py"
when: "'tag_name_slave' in group_names"
Task 17, named “converting token file to execution mode and then execute it”, first makes the token file executable and then executes it; by this our slave nodes join the master node.
So all done, now it's time to run our main playbook.
ansible-playbook setup.yml
These are the outputs while the playbook was running.
AWS console EC2 instances summary before running the playbook.
AWS console EC2 instances summary after running the playbook.
So all done, our one-slave-node Kubernetes cluster is set up; we can confirm it by going inside our master node.
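If you want to verify it yourself, log in to the master node and run:
kubectl get nodes
All the joined nodes should be listed and, once Flannel is up, show the Ready status.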
Let's run our main playbook once again; this time I just provision two new instances and configure them as slave nodes.
ansible-playbook setup.yml
All set, the playbook ran successfully, now check the output.
Look, two new nodes joined the master. Now the cluster has 1 master node and 3 slave nodes.
AWS console summary after provisioning 2 more instances.
Let's deploy a website, expose it, and check whether the overlay network configuration is working or not.
I deployed a website with the help of a Docker image and then exposed it.
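A rough sketch of the kind of commands that do this is below; the deployment name mywebsite and the httpd image are only examples, use whichever web-server image you like.
# run on the master node -- name and image are illustrative only
kubectl create deployment mywebsite --image=httpd
kubectl expose deployment mywebsite --type=NodePort --port=80
kubectl get pods -o wide      # see which slave node the pod landed on
kubectl get svc               # note the NodePort used to access the site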
See the output…..
Look, we can access the website from all the nodes.
All set: in a single click the Kubernetes cluster is configured. This is the beauty of the Ansible tool.
Thanks for reading and I hope you like the blog!!!
For the Ansible roles and playbook you can visit my GitHub repo…
Suggestions and feedback are always welcome; keep in touch on LinkedIn.