Posts

Harvester 1.7.0 Custom Routes

Because Harvester is an immutable Linux distribution, changes made in the terminal are discarded when the server restarts. Additional configuration, including custom routes, must be written to specific configuration files so that it persists between reboots. In previous Harvester versions, the configuration could be added to a file inside the /oem/ folder, but as of version 1.7.0 it should be added using nmcli. Below is an example of adding a custom route to 192.168.1.0/24 via the gateway 192.168.0.1.
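As a minimal sketch, the route can be attached to the NetworkManager connection so it survives reboots. The connection name `eth0` is an assumption for illustration; list your actual connections first with `nmcli connection show`.

```shell
# Add a persistent static route to 192.168.1.0/24 via 192.168.0.1
# "eth0" is a placeholder connection name -- check yours with:
#   nmcli connection show
nmcli connection modify eth0 +ipv4.routes "192.168.1.0/24 192.168.0.1"

# Re-activate the connection so the route takes effect now
nmcli connection up eth0

# Verify the route was applied
ip route show 192.168.1.0/24
```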

Setup Timezone in Harvester

To apply a time zone on a Harvester node, create a YAML file named timezone.yaml. With this implementation, the Harvester node will automatically apply the time zone when it joins the cluster.
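As a sketch, assuming the same cloud-init format Harvester uses for files under /oem/ (shown in the storage article below) and Asia/Jakarta as a placeholder zone, timezone.yaml could look like this:

```yaml
# /oem/timezone.yaml -- hypothetical example; the stage name and
# the time zone "Asia/Jakarta" are placeholders for illustration.
name: "Set time zone"
stages:
  initramfs:
    - timezone: "Asia/Jakarta"
```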

Harvester storage using a partition

Harvester 1.3.0 and newer no longer supports using a partition as the main storage, but I found a workaround to use a partition rather than the whole disk. In this case, I installed Harvester on the SSD RAID disk and wanted to spare part of the disk for an SSD storage class. After installation, I created the partition using GParted and then used it as the SSD storage class. Here is how to do it.

Create the file /oem/90_after_install.yaml with this content:

```yaml
name: "fstab related patch"
stages:
  initramfs:
    - name: "Add ssd mountpoint"
      commands:
        - |
          echo "/dev/disk/by-label/HARV_LH_SSD /var/lib/harvester/ssd auto defaults 0 0" >> /etc/fstab
      directories:
        - path: /var/lib/harvester/ssd
          permissions: 493
          owner: 0
          group: 0
```

Add a label to the partition using this command:

```shell
e2label /dev/sdg6 HARV_LH_SSD
```

Then add the disk in Longhorn using the path /var/lib/harvester/ssd.

Auto Scale DNS using Kubernetes and CI/CD deployment using GitLab

Imagine that your application can automatically add another container to serve more requests and delete unused containers when requests decline. With this approach, you can preserve resources for other applications and also decrease your cloud bill. With Kubernetes, you can achieve this using the HPA (Horizontal Pod Autoscaler) or the Vertical Pod Autoscaler, but in this tutorial we will only show how to use the HPA. The goal for this tutorial is to autoscale DNS to serve approximately a thousand users. We need an external load balancer to forward requests to Kubernetes.
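As a sketch of the HPA approach, the manifest below scales a hypothetical coredns Deployment on CPU utilization. The Deployment name, namespace, replica bounds, and 70% threshold are assumptions for illustration, not values from the tutorial:

```yaml
# Hypothetical HPA for a DNS Deployment; names and numbers are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: dns-autoscaler
  namespace: dns                # placeholder namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: coredns               # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

The HPA adds pods when average CPU across the Deployment exceeds the target and removes them as load declines.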

Galera behind NAT (mixed environment in k8s and docker)

Databases play an important role in application development. To have a redundant and highly available database, we can use Galera for MySQL/MariaDB. With Galera, the database can be replicated across different servers and the load can be divided among them, giving us a reliable backend database for our application. In this tutorial, we will deploy a Galera cluster inside Kubernetes and Docker. Under normal circumstances, when we install Galera inside Kubernetes, replication traffic uses the internal network, so we need to configure Galera to use the external network so that the cluster can communicate with MariaDB nodes outside Kubernetes. With this setup, we can combine Galera nodes located in Kubernetes, Docker, and private or public clouds.
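The key idea is that each node advertises an externally reachable address instead of its pod IP. As a sketch (the 203.0.113.x addresses, cluster name, and rsync SST method below are placeholders, not values from the tutorial):

```ini
# galera.cnf fragment -- hypothetical example; addresses are placeholders.
[mysqld]
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=my_galera

# All cluster members, listed by their external (NAT) addresses
wsrep_cluster_address=gcomm://203.0.113.10,203.0.113.11,203.0.113.12

# Advertise this node's external address instead of the pod IP
wsrep_node_address=203.0.113.10
wsrep_sst_method=rsync
```

On the Kubernetes side this also requires exposing the Galera ports (4567, 4568, 4444 and 3306) through a Service or NodePort so the external nodes can reach them.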

Dynamic Jenkins agent in Kubernetes

Continuing the article about installing Jenkins in Kubernetes, we will set up a dynamic agent for Jenkins so that when we build the application, Jenkins launches a pod automatically and builds our application inside Kubernetes. For this to work, you need the following: Jenkins installed in Kubernetes (you can follow this article) and the Kubernetes plugin for Jenkins. Install the Kubernetes Plugin: go to Dashboard ⟶ Manage Jenkins ⟶ Manage Plugins ⟶ Available tab, select the Kubernetes plugin, then click Download and restart.
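As a sketch of what the plugin launches for each build, an agent pod typically pairs an inbound JNLP container with a build container. The label and the maven build image below are placeholders for illustration:

```yaml
# Hypothetical agent pod template for the Kubernetes plugin;
# the label and the maven image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: agent
spec:
  containers:
    - name: jnlp
      image: jenkins/inbound-agent:latest    # connects back to the Jenkins controller
    - name: maven
      image: maven:3-eclipse-temurin-17      # build container for the pipeline steps
      command: ["sleep"]
      args: ["infinity"]
```

The plugin creates a pod like this when a build starts and deletes it when the build finishes, which is what makes the agents "dynamic".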

Persistent Jenkins using Longhorn in Kubernetes

CI/CD consists of automation, and there are many tools that can automate this process. One of them is Jenkins. In this article, I will show how to deploy Jenkins in Kubernetes with a persistent disk backed by Longhorn. Below are the tasks we will do to deploy Jenkins with a persistent disk in Kubernetes:

- Create the persistent volume YAML.
- Create the persistent volume claim YAML.
- Create the service account YAML.
- Create the deployment YAML.
- Create the service YAML.
- Create the ingress YAML.
- Deploy the configuration.
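As a sketch of the persistent storage piece, a Longhorn-backed PersistentVolumeClaim for the Jenkins home directory might look like this. The namespace, claim name, size, and the StorageClass name `longhorn` are assumptions for illustration:

```yaml
# Hypothetical PVC for Jenkins; names and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: jenkins            # placeholder namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn    # Longhorn's default StorageClass name
  resources:
    requests:
      storage: 10Gi             # placeholder size
```

The Jenkins Deployment then mounts this claim at /var/jenkins_home so jobs and configuration survive pod restarts.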