Kubernetes on the Pikubator with Ansible

Warm Up

After we finished the Pikubator in the last post, it is now time to do something with the hardware. First of all: every step I show here is covered by an Ansible playbook, so it is not necessary to type everything by hand.

First we need a computer with Ansible installed. For this purpose I used a dedicated Raspberry Pi, which also handles a few other small tasks. The procedure is the same as for any package that is not in the distribution's own repositories:

$ apt-get -y install dirmngr
$ echo deb http://ppa.launchpad.net/ansible/ansible/ubuntu trusty main >> /etc/apt/sources.list
$ apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
$ apt-get update
$ apt-get -y install ansible

On Debian we need the dirmngr package so that gpg can read the key directly from the keyserver. With a

$ ansible --version

we can display the version. Don't be surprised if this takes a moment on the Pi.

So our “management” Pi is set up. This would also be a good time to clone the repository at https://github.com/alexander-truemper/k8sOnPiAnsible, so you have the scripts at hand.

For my taste, it is a good idea to have a dedicated directory where you keep your repositories:

$ cd 
$ mkdir repos
$ cd repos
$ git clone git@github.com:alexander-truemper/k8sOnPiAnsible.git

But you can also type the playbook in yourself; I find there are always a few details you miss if you just download and run something.

Count 1,2,3

The next important piece is the Ansible inventory file. Here we describe which nodes we want to manage with Ansible and which groups they belong to. In my case the file (inventory.yml) looks like this:

all:
  children:
    pikubator:
      hosts:
        node02.fritz.box:
        node03.fritz.box:
        node04.fritz.box:
        node05.fritz.box:
    master:
      hosts:
        node01.fritz.box:

The separation into master and nodes makes sense, so that the master playbook does not run on the worker nodes and we end up with five cluster masters 😉
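In the playbooks this split shows up as a simple hosts selector. A minimal sketch, not from the repo, just to illustrate the group targeting:

- hosts: master
  tasks:
    - name: Runs only on node01, never on the pikubator group
      debug:
        msg: "Hello from the future cluster master"

A playbook for the worker nodes would use hosts: pikubator instead.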

Attention, please

So, now we've almost done all the preparations. Almost, because so far the Ansible Pi can't “talk” to the Kubernetes Pis (it does this via SSH by default). Therefore we have to create an SSH key pair on the Ansible Pi, if we don't have one yet. Do this:

$ ssh-keygen

and we can answer the passphrase prompt with an empty passphrase (whether that is always a good idea is another question). The default file names are fine too, so we can simply confirm everything.
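If you would rather skip the prompts entirely, the same can be done non-interactively (again with an empty passphrase, same caveat as above):

$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa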

So that we can access all nodes without a password, we still have to distribute our key. There is a small script for this in the repo at https://github.com/alexander-truemper/supplementary-tools

$ cd ~
$ cd repos
$ git clone git@github.com:alexander-truemper/supplementary-tools.git

Again, you have to maintain an inventory file, here called nodelist. It should contain essentially the same hosts as inventory.yml, but as a plain list without the YAML structure, presumably along the lines of the sketch below.
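An assumed nodelist for this cluster, one hostname per line (check the scripts in the repo for the exact format they expect):

node01.fritz.box
node02.fritz.box
node03.fritz.box
node04.fritz.box
node05.fritz.box

Now we can distribute our SSH key to all nodes: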

$ cd supplementary-tools
$ ./ssh-copy-id-all-nodes.sh

During the first connection to each node you will be asked to confirm its SSH host key; just accept it. If everything goes well here, we can continue with Ansible.
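A quick smoke test that Ansible can actually reach every node is its ping module (run from the k8sOnPiAnsible checkout, where inventory.yml lives; add -u if your remote user differs from the local one):

$ ansible -i inventory.yml all -m ping

There is also a small tool in the supplementary-tools repo with which we can execute the same command on all nodes: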

$ ./commandOnAllNodes.sh echo 'Hello World'

This should print Hello World on the terminal for every node. This is helpful if, for example, you want to shut down all nodes cleanly before disconnecting the power; in that case the argument would not be “echo 'Hello World'” but “sudo shutdown -h now”.
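The script itself presumably boils down to a loop over the nodelist, roughly like this (a sketch, not the actual script from the repo):

#!/bin/sh
# Run the given command on every node listed in the nodelist file.
while read -r node; do
    # -n keeps ssh from swallowing the rest of the nodelist on stdin
    ssh -n "pi@${node}" "$@"
    echo "ssh exited with code $? on ${node}"
done < nodelist

The echo at the end explains the “ssh exited with code 0” lines you will see in the outputs later in this post.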

Bootstrap

So, back to the k8sOnPiAnsible repository:

$ cd ../k8sOnPiAnsible/

We will stay in this directory from now on.

Once upon a time

Now, let's deal with a basic matter first: disabling swap and enabling cgroups, both of which Docker / Kubernetes require. We only need this playbook exactly once:

$ ansible-playbook -i inventory.yml preps.yml

If everything works, Ansible aborts with an error message saying the nodes are no longer reachable, because the reboot trigger was executed. Not really elegant yet, but it works.

I don't like this playbook very much, because the task that modifies /boot/cmdline.txt is not repeatable (idempotent). That is why these basic tasks were separated into their own playbook.
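For orientation, the gist of such a prep playbook might look like this. This is a sketch, not the actual preps.yml; the dphys-swapfile handling and the sed call are assumptions based on how this is usually done on Raspbian:

- hosts: all
  become: yes
  tasks:
    - name: Turn off swap (Raspbian manages it via dphys-swapfile)
      shell: dphys-swapfile swapoff && update-rc.d dphys-swapfile remove

    - name: Enable cgroups in the kernel command line (the non-idempotent part)
      shell: sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt
      notify: reboot node

  handlers:
    - name: reboot node
      # Fire and forget; this is why Ansible loses the connection.
      shell: sleep 2 && shutdown -r now
      async: 1
      poll: 0

Re-running the sed task would append the flags a second time, which is exactly the idempotency problem mentioned above.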

More Software

In the next playbook we install the basic software for a Kubernetes cluster: Docker and Kubernetes, each with all their dependencies.

$ ansible-playbook -i inventory.yml installKubeAdm.yml

This can take a while on slower SD cards, so feel free to get a hot drink of your choice first. Since the same work has to happen on all five nodes, it seems like a good idea to increase Ansible's parallelism:

$ ansible-playbook -i inventory.yml installKubeAdm.yml -f 5

The -f parameter here does not stand for force but for forks. However, the default is already 5, so for a small cluster like mine there is no performance gain.
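The tasks in that playbook presumably look roughly like this. Again a sketch, based on the standard kubeadm installation docs rather than the actual installKubeAdm.yml (the Docker convenience script and the repository line are assumptions):

- hosts: all
  become: yes
  tasks:
    - name: Install Docker via the convenience script
      shell: curl -sSL https://get.docker.com | sh
      args:
        creates: /usr/bin/docker

    - name: Add the Kubernetes apt signing key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present

    - name: Add the Kubernetes apt repository
      apt_repository:
        repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
        state: present

    - name: Install kubeadm, kubelet and kubectl
      apt:
        name: [kubeadm, kubelet, kubectl]
        state: present
        update_cache: yes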

So now we have five Raspis waiting for work. Let's give them a little test task:

$ cd ../supplementary-tools/
$ ./commandOnAllNodes.sh docker run hello-world

This should run the hello-world image on every node, each producing the familiar, rather verbose output:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ad0f38092cf2: Pulling fs layer
ad0f38092cf2: Verifying Checksum
ad0f38092cf2: Download complete
ad0f38092cf2: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm32v7)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/
For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/
ssh exited with code 0 on node01.fritz.box

This command must work on all nodes. If something goes wrong here, fix it first, because Kubernetes will not work otherwise.

Master, Master!

So, now we have a stack of Raspberry Pis waiting to be commanded. We activate the master role on node01, take over world domination, and make the other nodes its servants.

To do this we now run the initializeMaster playbook. It first downloads the flannel spec and changes the platform from amd64 to arm; this ARM support is also the reason we use flannel here.

After that, the init starts with two parameters that are not suitable for anything beyond a test system: we specify a very weak join token, and we generate a snake-oil certificate for the CA. For our purposes as a Pikubator this is sufficient.
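On node01, what the playbook does presumably boils down to something like this (a sketch; the flannel URL and the pod network CIDR are assumptions, only the token is taken from the join command further below):

# Fetch the flannel spec and rewrite the image platform for ARM
$ curl -sSL https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml | sed 's/amd64/arm/g' > kube-flannel-arm.yml
# Initialize the control plane with the fixed (weak!) join token
$ sudo kubeadm init --token abcdef.1234567890123456 --pod-network-cidr=10.244.0.0/16
# Apply the ARM flannel spec
$ kubectl apply -f kube-flannel-arm.yml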

$ ansible-playbook -i inventory.yml initializeMaster.yml

lets the magic begin.

This can take quite a while, especially because the Raspis are quite leisurely when it comes to SD card I/O. Booting without an SD card is a topic for another post 😉

Master and Servant

Once the master has finished after quite a while, one command still has to be executed on the nodes. Since this command only needs to run once, we can use the commandOnAllNodes.sh script again:

$ ./commandOnAllNodes.sh sudo kubeadm join --token abcdef.1234567890123456 node01:6443 --discovery-token-unsafe-skip-ca-verification

Here we use the weak token again and skip verification of the self-signed certificate. Not recommended, except for experiments!
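On a setup you actually care about, you would instead generate a fresh random token on the master and let kubeadm print the full join command, including the CA hash to verify against:

$ sudo kubeadm token create --print-join-command

This avoids both the guessable token and the --discovery-token-unsafe-skip-ca-verification flag.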

Control me

So now we have a Kubernetes installation 🙂

To be able to use kubectl from the workstation, we need to fetch the ~/.kube/config from the master and store it as ~/.kube/config on our workstation:

$ scp pi@node01:~/.kube/config ~/.kube/config

As a final test:

$ kubectl get nodes

NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    18d       v1.11.1
node02    Ready     &lt;none&gt;    18d       v1.11.1
node03    Ready     &lt;none&gt;    18d       v1.11.1
node04    Ready     &lt;none&gt;    18d       v1.11.1
node05    Ready     &lt;none&gt;    18d       v1.11.1

Congratulations, your Raspberry Pis can now do something more 🙂
