Create a router between 2 VMs on Proxmox
Connect 2 machines in different subnets via a homemade router

This project is purely an experiment to observe how a router works by building our own. We will create two VMs in different subnets, then build a router ourselves based on iptables to connect the two machines and also give them access to the Internet with NAT.

Setup of the experiment

Machine   Interface   Bridge   IP
PC1       eth0        vmbr10   192.168.10.2/24
ROUTER    eth0        vmbr10   192.168.10.1/24
ROUTER    eth1        vmbr20   192.168.20.1/24
PC2       eth0        vmbr20   192.168.20.2/24

Bridges (vmbr10 and vmbr20)

You can create the bridges by editing this file (on the Proxmox host):

micro /etc/network/interfaces

By adding

# start
auto vmbr10
iface vmbr10 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0

auto vmbr20
iface vmbr20 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0

# end

Then:

systemctl restart networking

(Screenshots in the original post show the Proxmox network configuration of pc1, pc2, and the router.)

Explanation

Test

pc1

root@experimentalPc1:~# ip a s | grep 192
    inet 192.168.10.2/24 brd 192.168.10.255 scope global eth0

pc1 has the IP 192.168.10.2 and it's in the subnet 192.168.10.0/24 (broadcast 192.168.10.255).

So it cannot reach pc2 at 192.168.20.2 because they are not in the same subnet

But we have a router that is connected to both networks:

The router

The router is connected to both networks; it has an interface in each subnet.

root@experimentalRouter:~# ip a s | grep 192
    inet 192.168.10.1/24 brd 192.168.10.255 scope global eth0
    inet 192.168.20.1/24 brd 192.168.20.255 scope global eth1

It routes traffic between them once IP forwarding is enabled:

sysctl -w net.ipv4.ip_forward=1
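Note that sysctl -w only lasts until the next reboot. To make forwarding persistent on Debian, you can also set it in /etc/sysctl.conf (a standard approach, not shown in the original experiment):

# Enable IPv4 forwarding permanently
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p   # reload the settings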

pc2

root@experimentalPc2:~# ip a s | grep 192
inet 192.168.20.2/24 brd 192.168.20.255 scope global eth0

pc2 has the IP 192.168.20.2 and it's in the subnet 192.168.20.0/24 (broadcast 192.168.20.255).

And here is its default route, i.e. the gateway that will be used to leave the subnet when needed:

root@experimentalPc2:~# ip route 
default via 192.168.20.1 dev eth0 onlink 
192.168.20.0/24 dev eth0 proto kernel scope link src 192.168.20.2 

Here we have our router

On the router, we will add a new interface.

Then we restart networking:

systemctl restart networking 

This new interface will get an IP address on the 192.168.1.0/24 subnet via DHCP from my ISP's router.

That is indeed the case:

root@experimentalRouter:~# ip a s | grep 192
    inet 192.168.10.1/24 brd 192.168.10.255 scope global eth0
    inet 192.168.20.1/24 brd 192.168.20.255 scope global eth1
    inet 192.168.1.132/24 brd 192.168.1.255 scope global dynamic eth2

Here we can see that we have retrieved a new interface and IP:

inet 192.168.1.132/24 brd 192.168.1.255 scope global dynamic eth2

Now the router can go online; we can check that by pinging:

root@experimentalRouter:~# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=64 time=2.59 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=64 time=0.323 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=64 time=0.363 ms

Route traffic from the subnets to the Internet

When I try to go online from pc1, it doesn't work.

root@experimentalPc1:~# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
...

On the router, we will direct the traffic from machines that want to reach the Internet out through the router's eth2 interface.

First, we install iptables:

apt install iptables

Then we add a NAT rule:

iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE

To make this permanent:

apt install iptables-persistent -y
netfilter-persistent save

Now it should work!

pc1 has access to the Web via NAT:

root@experimentalPc1:~# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=63 time=0.401 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=63 time=0.288 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=63 time=1.11 ms

You can check that the default gateway is indeed 192.168.10.1 for pc1 with:

ip route | grep default 

Which should return:

default via 192.168.10.1 dev eth0 onlink 

Traceroute

You can also verify this with a traceroute:

root@experimentalPc1:~# traceroute 8.8.8.8 
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
 1  192.168.10.1 (192.168.10.1)  0.397 ms  0.348 ms  0.327 ms
 2  192.168.1.1 (192.168.1.1)  10.426 ms  10.402 ms  10.380 ms
 3  80.10.255.181 (80.10.255.181)  1.997 ms  2.074 ms  2.145 ms

Here we can see that the packet went from our router (1) 192.168.10.1, to my ISP router's gateway (2) 192.168.1.1, and then outside the network (3) 80.10.255.181.

(Diagram in the original post shows the network with web access.)

Create firewall rules on the router with iptables.

Suppose I want to prevent PC1 and the 192.168.10.0/24 network from communicating with PC2 on the 192.168.20.0/24 network, while keeping both connected to my ISP router's gateway.

On our homemade router:

Block LAN1 → LAN2


# Block LAN1 → LAN2
iptables -A FORWARD -s 192.168.10.0/24 -d 192.168.20.0/24 -j DROP

# Block LAN2 → LAN1
iptables -A FORWARD -s 192.168.20.0/24 -d 192.168.10.0/24 -j DROP

These rules only block communication between the two subnets (East-West traffic).

Traffic from LAN1 → WAN (eth2) and LAN2 → WAN remains allowed, and the MASQUERADE rule keeps NATing it to the Internet.
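That works because the FORWARD chain's default policy is ACCEPT on a stock install. As a sketch, if you ever tightened that policy to DROP, you would need explicit rules like these to keep the Internet path open (an illustration, not part of the setup above):

# Allow each LAN out to the WAN interface
iptables -A FORWARD -i eth0 -o eth2 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
# Allow replies to established connections back in
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT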

Let’s try

Indeed it no longer works… Here I am trying to contact PC2 from PC1.

root@experimentalPc1:~# ping 192.168.20.2 
PING 192.168.20.2 (192.168.20.2) 56(84) bytes of data.
...

But the Internet works perfectly.

root@experimentalPc1:~# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=63 time=0.493 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=63 time=0.321 ms

And DNS resolution works too:

root@experimentalPc1:~# ping google.fr
PING google.fr (142.250.201.163) 56(84) bytes of data.
64 bytes from par21s23-in-f3.1e100.net (142.250.201.163): icmp_seq=1 ttl=116 time=6.91 ms
64 bytes from par21s23-in-f3.1e100.net (142.250.201.163): icmp_seq=2 ttl=116 time=7.40 ms

Allow LAN1 → LAN2

To delete the rules:

iptables -D FORWARD -s 192.168.10.0/24 -d 192.168.20.0/24 -j DROP
iptables -D FORWARD -s 192.168.20.0/24 -d 192.168.10.0/24 -j DROP

Or, if we prefer to keep explicit rules, we can replace them with rules that explicitly accept the traffic. Note that iptables evaluates rules in order, so delete the DROP rules first (or insert the ACCEPT rules before them with -I):

iptables -A FORWARD -s 192.168.10.0/24 -d 192.168.20.0/24 -j ACCEPT
iptables -A FORWARD -s 192.168.20.0/24 -d 192.168.10.0/24 -j ACCEPT

See the active iptables rules:

iptables -L -v -n 

See the NAT rules:

iptables -t nat -L -v -n

This comprehensive, step-by-step guide shows you how to build a true GitOps CI/CD pipeline from scratch. We'll use GitHub Actions to automatically build and publish your app's Docker image, and then configure ArgoCD to watch your Git repo and automatically deploy every change to your K3s cluster.

Let’s do this step-by-step. We’ll break it into two main phases:

  1. Phase 1: Continuous Integration (CI). We’ll set up a workflow where git push automatically builds and publishes your app’s Docker image.

  2. Phase 2: Continuous Deployment (CD). We'll install a tool in K3s that watches your Git repo and automatically updates your cluster when you push changes.

Phase 1: Continuous Integration (Git to Docker Image)

Our goal here is to make a simple Node.js app, push it to GitHub, and have GitHub Actions automatically build the Docker image and push it to a registry.

Step 1.1: Create a Simple Node.js App

First, let’s create a super-simple « Hello World » Express app on your local machine.

  1. Create a new folder (e.g., k8s-hello-world) and go into it.

  2. Run npm init -y to create a package.json.

  3. Install Express: npm install express.

  4. Create a file named index.js and add this code:


const express = require('express');
const app = express();
const port = process.env.PORT || 3000;

// A "VERSION" variable we can change later
const VERSION = '1.0.0';

app.get('/', (req, res) => {
  res.send(`Hello from Kubernetes! Version: ${VERSION}`);
});

app.listen(port, () => {
  console.log(`App listening at http://localhost:${port}`);
});

Add a start script to your package.json. It should look something like this:


{ "name": "k8s-hello-world", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "start": "node app.js" }, "keywords": [], "author": "", "license": "ISC", "dependencies": { "express": "^4.18.2" } }

You can test it locally by running npm start. You should be able to see the message at `http://localhost:3000`.

Step 1.2: Create the Dockerfile

Now, let’s add the « recipe » for Docker to build this app.

Create a file named Dockerfile (no extension) in the same folder:


# Stage 1: Build the app
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Stage 2: Create the final, small image
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app .

# Set the port
ENV PORT=3000
EXPOSE 3000

# Run the app
CMD [ "npm", "start" ]

Plain English: This is a « multi-stage build. » It uses one container to install all the dev dependencies and build the app, then copies only the necessary files into a fresh, clean container. This makes your final image much smaller and more secure.

You can test that your image works by building it locally:

docker build -t antoinebr/k8s-hello-world .

And run it to verify:

docker run -d -p 3001:3000 antoinebr/k8s-hello-world
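Then check that the container answers (host port 3001 is mapped to the container's port 3000):

curl http://localhost:3001
# Expected: Hello from Kubernetes! Version: 1.0.0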

Step 1.3: Set up the GitHub Repo

This is the central hub for our project.

  1. Go to GitHub and create a new public repository (e.g., k8s-hello-world).

  2. On your local machine, initialize Git and push your code:

# Create a .gitignore file
echo "node_modules" > .gitignore

# Initialize git
git init
git add .
git commit -m "Initial commit: hello world app + Dockerfile"

# Link it to your new GitHub repo (replace with your URL)
git remote add origin https://github.com/YOUR_USERNAME/k8s-hello-world.git
git branch -M main
git push -u origin main

Step 1.4: Set up GitHub Actions (The CI Workflow)

This is the magic. We’ll tell GitHub to run a job every time we push to the main branch.

  1. Add Secrets: The GitHub Action will need to log in to Docker Hub to push your image.
    • Go to your GitHub repo’s page.
    • Click on Settings > Secrets and variables > Actions.

Click New repository secret for each of these:
    • DOCKERHUB_USERNAME: your Docker Hub username
    • DOCKERHUB_TOKEN: a Docker Hub access token with push rights

Create the Workflow File: On your local machine, create a new folder path: .github/workflows

mkdir -p .github/workflows

Inside that folder, create a file named build-and-push.yaml:

name: Build and Push Docker Image

# This workflow runs on every push to the 'main' branch
# (Some docs trigger on releases instead, but for our
# CI/CD project, running on *every push* is what we want)
on:
  push:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      # 1. Check out your code from the repo
      - name: Check out the repo
        uses: actions/checkout@v4 

      # 2. Log in to Docker Hub
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          # Use the secrets we created in Step 1.4
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # 3. Extract metadata (tags and labels)
      # This modern helper action automatically creates
      # smart tags based on the Git event.
      - name: Extract metadata for Docker
        id: meta
        uses: docker/metadata-action@v5 # <-- The new, smart helper
        with:
          # This tells the action what to name your image
          images: ${{ secrets.DOCKERHUB_USERNAME }}/k8s-hello-world

      # 4. Build and push the image
      - name: Build and push Docker image
        uses: docker/build-push-action@v5 # <-- Updated to v5
        with:
          context: .
          push: true
          # This line is the magic!
          # It uses the smart tags and labels generated by the 'meta' step above
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

Plain English: This file tells GitHub: « When someone pushes to main, spin up a new Linux server. On that server, check out the code, log in to Docker Hub using the secrets I gave you, and then build the Dockerfile in this repo. Finally, push that new image to my Docker Hub account with the tags generated by the metadata step (for a push to main, that's the :main tag). »

Commit and Push the Workflow:

git commit -a -m "Add GitHub Actions CI workflow"

Then push:

git push origin main

Now, go to your GitHub repo and click the « Actions » tab. You should see your workflow running!

If it’s successful (it might take a minute or two), you’ll see a green checkmark.

Our code is now automatically being built and published to the world (or at least, to Docker Hub). You have the CI (Continuous Integration) part of CI/CD all set.

Let’s set up the CD (Continuous Deployment).

Phase 2: Continuous Deployment (Git to Live App)

Right now, your workflow is a « push » system. You git push your code, and GitHub pushes a Docker image to Docker Hub.

We’re about to build a « pull » system in the K3s cluster. We will install a « robot » inside your cluster called ArgoCD.

This robot’s only job is to watch your Git repository. The moment it sees a change in your YAML files, it will « pull » those changes into the cluster and make it happen automatically.

You will never have to run kubectl apply -f ... for your app again. Your Git repo becomes the single source of truth. This is GitOps.

Here’s the plan:

  1. Step 2.1: Create the Kubernetes YAML files (Deployment, Service, Ingress) for your new app.

  2. Step 2.2: Push those new YAML files to your GitHub repo.

  3. Step 2.3: Install ArgoCD (the « robot ») into your K3s cluster.

  4. Step 2.4: Configure ArgoCD to watch your repo.

  5. Step 2.5: Test the complete loop by making a change and watching it deploy.

Step 2.1: Create Your App’s Kubernetes Files

Just like what we have done with WordPress, our app needs a Deployment, a Service, and an Ingress.

  1. On our local machine, in your k8s-hello-world project, create a new folder named k8s.

  2. Inside that k8s folder, create a new file named app.yaml.

  3. Paste all of this into that new app.yaml file:

Before you save! This file bundle contains the « instructions » for Kubernetes, telling it how to run your app, so make sure the image: line matches your own Docker Hub username.
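The full file isn't reproduced here, but here is a minimal sketch of what k8s/app.yaml could contain, assuming the antoinebr/k8s-hello-world:main image, container port 3000, the hello.home host used later in this guide, and the imagePullPolicy: Always mentioned in Step 2.7:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: app
        image: antoinebr/k8s-hello-world:main
        imagePullPolicy: Always   # always pull, referenced in Step 2.7
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 3000
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  rules:
  - host: hello.home
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80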

Step 2.2: Push the YAML to GitHub

Now, let’s commit our new k8s folder to the repo. This puts our app code and our infrastructure code in the same place.


git add k8s/app.yaml
git commit -m "Add Kubernetes manifests for hello-world app"
git push

Great! Your repo now has everything needed to run the app.



Step 2.3: Install ArgoCD in K3s

Time to install our « robot. » These two commands will install ArgoCD into its own argocd namespace in your cluster.

Create the namespace:

kubectl create namespace argocd

It should return:

namespace/argocd created

Apply the official installation YAML (this is a big one!):

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

This will create a bunch of new Deployments, Services, and CRDs (Custom Resource Definitions), which are what make ArgoCD work.

You can check on its progress by running:

kubectl get pods -n argocd

Wait until all the pods show Running.
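If you prefer a single command that blocks until they are ready, kubectl can wait for you (optional):

kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s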

In K9s we can see our new namespace too

Step 2.4: Log In to the ArgoCD Dashboard

ArgoCD has a great web UI. By default, it’s not exposed by an Ingress. The most secure and simple way to access it is with kubectl port-forward.

First, get the admin password: ArgoCD generates a default password and stores it in a Secret. Run this command to get it:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

It will print out a long, weird string. Copy that password!

Second, access the UI: Open a new terminal (you need to leave this one running) and execute:

kubectl port-forward --address 0.0.0.0 -n argocd svc/argocd-server 8080:443

Now, open your browser and go to: `https://localhost:8080`

Your browser will give you a safety warning (it’s a self-signed certificate). Just click « Advanced » and « Proceed. »

You are now in the ArgoCD dashboard!

Step 2.5: Create the « Application »

This is the final step. We need to give our « robot » its instruction sheet. We’ll tell it: « Watch this Git repo, in this folder, and deploy it to this cluster. »

We’ll do this the GitOps way, by creating one more YAML file.

  1. On your local machine, create one last file. You can call it argo-application.yaml (save it anywhere, this is just for you to run).

  2. Paste this code into it:

# MAKE SURE to change your repo URL below!

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  # This is the name of the "card" in the ArgoCD UI
  name: hello-world 
  # Deploy this Application *into* the argocd namespace
  namespace: argocd 
spec:
  project: default

  # Source: Where is the code?
  source:
    # ==========================================================
    # !! IMPORTANT !!
    # Change this to your GitHub repo's URL
    repoURL: https://github.com/antoinebr/k8s-hello-world.git
    # ==========================================================

    # This is the folder it should look inside
    path: k8s 

    # It will watch the 'main' branch
    targetRevision: main

  # Destination: Where should it deploy?
  destination:
    # This means "the same cluster ArgoCD is running in"
    server: https://kubernetes.default.svc 

    # Deploy the app into the 'default' namespace
    namespace: default

  # This is the magic!
  # It tells ArgoCD to automatically sync when it sees a change.
  syncPolicy:
    automated:
      prune: true    # Deletes things that are no longer in Git
      selfHeal: true # Fixes any manual changes (drift)



Run:

kubectl apply -f argo-application.yaml

Step 2.6: 🤩 Watch the Magic

Go back to your ArgoCD dashboard in your browser.

You will see a new « card » named hello-world. At first, it will say Missing and OutOfSync.

Within a few moments, ArgoCD will see the new Application. It will:

  1. Clone your Git repo.

  2. Read the k8s/app.yaml file.

  3. Compare it to what’s in your cluster (nothing).

  4. It will automatically start applying the Deployment, Service, and Ingress.

The status will change to Progressing and then, finally, to Healthy and Synced.

If it doesn't sync and the error message looks like:

ComparisonError

Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = failed to list refs: authentication required: Repository not found.

It’s because your GitHub repository is private. You can create SSH keys for Argo CD, but for this guide, let’s keep our repository public.

Another error you might see is ArgoCD cloning the repo but failing on the path. It means:

« Okay, I'm inside your GitHub repository! But you told me to look for a folder named k8s… and there's no folder here with that name. »

Let's double-check the path:

path: k8s # <-- THIS LINE!

After any modification, re-run the command:

kubectl apply -f argo-application.yaml

OK, now it should look like this:

You are now deployed!

To see your app, just:

  1. Add your domain (e.g., hello.home) to your /etc/hosts file, pointing to your K3s node’s IP.

  2. Visit `http://hello.home` in your browser.

You should see: Hello from Kubernetes! Version: 1.0.0

Step 2.7: The Final Test (The Full Loop)

This is the whole point. Let’s make a change and watch it deploy automatically.

  1. On your local machine, open index.js.

  2. Change the version: const VERSION = '2.0.0';

  3. Commit and push the change:
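For example:

git commit -am "Bump version to 2.0.0"
git push origin main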

Now, watch what happens:

  1. GitHub Actions: Go to your repo’s « Actions » tab. You’ll see your build-and-push workflow kick off. It’s building your v2.0.0 image and pushing it to Docker Hub with the :main tag.

ArgoCD: Wait for the GitHub Action to finish. ArgoCD checks your repo for changes every 3 minutes (by default), and we used imagePullPolicy: Always in our Deployment, so you might expect the new :main image to roll out on the next sync…

Now it's in sync, but maybe you noticed that we are still on Version 1.0.0 while our code is on Version 2.0.0.

What happened here?

We published a new image on Docker Hub:

antoinebr/k8s-hello-world:main

But in k8s/app.yaml the image tag didn't change, so for Argo CD there's nothing to do…

image: antoinebr/k8s-hello-world:main

We need a way to:

  1. First, change the version of the image.

  2. Run the GitHub workflow to push the image to the Docker registry.

  3. Tell Argo CD to deploy the new image.

To make this happen, we will tag our code with Git tags to create new version tags.


# Make sure your main branch is up to date
git checkout main
git pull

# Create the new version tag
git tag v2

# Push all your tags to GitHub
git push --tags

The Robot’s Action (GitHub Actions): The workflow file below will see this v2 tag, build your app, and push the new Docker image: antoinebr/k8s-hello-world:v2.

Part 2: The « Deploy » Manager (You + CD)

This part is your manual « Go » button. You do this after the build in Part 1 is finished.

Your Action:

  1. You know the v2 image is now on Docker Hub.

  2. You open your k8s/app.yaml file.

  3. You manually change the image: line to tell your cluster which version to use:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  template:
    spec:
      containers:
      - name: app
        # You manually update this line
        image: antoinebr/k8s-hello-world:v2
...

You commit and push this one file change:

git add k8s/app.yaml
git commit -m "Deploy: Promote v2 to production"
git push

Your Final Workflow File

This is the only file you need. This code is « smart »: it creates a :main tag when you push to main (for testing) and a :v2 tag when you push a v2 tag.

File: .github/workflows/build-and-push.yaml


name: Build and Push Docker Image

on:
  push:
    # Run on pushes to the 'main' branch
    branches: [ "main" ]
    # ALSO run on pushes of tags that start with 'v' (e.g., v1, v2.0, v1.2.3)
    tags: [ 'v*' ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # This step is smart!
      - name: Extract metadata for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ secrets.DOCKERHUB_USERNAME }}/k8s-hello-world
          # The first rule creates the ':main' tag on a push to main;
          # the second creates the ':v2' tag on a push of a tag 'v2'
          tags: |
            type=ref,event=branch
            type=ref,event=tag

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          # The 'meta' step outputs the correct tags automatically
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

Step 1: Push Your New Workflow File

First, make sure the new build-and-push.yaml file (the one that builds based on tags) is in your main branch.

git add .github/workflows/build-and-push.yaml
git commit -m "Update CI to build from Git tags"
git push

Step 2: Create Your First « Release » (Build the v1 Image)

Now, let’s play the role of the developer releasing « Version 1 » of your app.

On your local machine, create a Git tag named v1:

git tag v1

Push your tags to GitHub. This is the trigger for your CI build.


git push --tags

Then watch the « Build » robot (GitHub Actions) run in your repo's Actions tab.

Step 3: « Deploy » Your v1 Release

You are now the « Release Manager. » The build is done. It’s time to tell ArgoCD to deploy it.

  1. On your local machine, manually edit k8s/app.yaml.

  2. Change the image: line to point to the new, permanent v1 tag:


spec:
  containers:
  - name: app
    # Before: image: antoinebr/k8s-hello-world:main (or something else)
    # After:
    image: antoinebr/k8s-hello-world:v1

Commit and push this change. This is your « Go » button.

git add k8s/app.yaml
git commit -m "Deploy: Promote v1 to production"
git push origin main

Step 4: Watch the « Deploy » Robot (ArgoCD)

  1. Open your ArgoCD dashboard.

  2. You will see your hello-world application change to OutOfSync (because the Git image: is now different from the cluster’s image:).

  3. ArgoCD will automatically start syncing. It will see the new v1 tag.

  4. It will perform a rolling update in your K3s cluster. The status will go to Progressing and then Healthy and Synced again.

Step 5: Verify Your v1 App is Live!

  1. Add your domain (e.g., hello.home) to your /etc/hosts file (if you haven’t already).

  2. Go to `http://hello.home` in your browser.

Cheat sheet: How to Publish & Deploy a New Version

Step 0: Code a new version

Code your new version, then commit:


git commit -a -m "My new feature for v3" git push

Step 1: Build a New Release (Your Job -> GitHub)

This tells your CI « robot » (GitHub Actions) to build and publish a new, permanent Docker image.

  1. Tag your code: (Replace v2 with your new version, e.g., v3)
git tag v3

Push the tag:

git push --tags

Verify: Go to your GitHub Actions tab and watch the build finish. Check Docker Hub to see your new v3 image.

Step 2: Deploy the New Release (Your Job -> ArgoCD)

This tells your CD « robot » (ArgoCD) to pull the new image and update your cluster.

  1. Edit k8s/app.yaml: Manually change the image: line to the new tag you just built.

...
spec:
  containers:
  - name: app
    # Change this line
    image: antoinebr/k8s-hello-world:v3
...

Commit and push the change:

git add k8s/app.yaml
git commit -m "Deploy: Promote v3"
git push

Verify: Go to your ArgoCD dashboard. Watch your app sync up. Refresh your website to see the change!

Why Helm Charts are So Useful

Think of Helm as apt or brew for Kubernetes.

You've just seen that deploying even a « simple » app like WordPress requires at least 5-6 separate YAML files (a Deployment and Service for WordPress, a Deployment and Service for MySQL, a Secret, PVCs, and an Ingress).

Helm bundles all of those YAML files into a single package called a Chart.

This gives you three huge advantages:

  1. Simple Installs: Instead of kubectl apply -f ... six times, you just run one command: helm install my-wordpress ....

  2. Easy Configuration: You don’t have to edit 5 different YAML files to change a password or domain name. The chart provides one single values.yaml file to set all your options.

  3. Clean Lifecycle: When you’re done, you don’t have to remember to delete the Deployment, Service, Secret, and Ingress. You just run helm uninstall my-wordpress, and Helm removes everything it created.
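For example, with the Bitnami repository (added below), the whole WordPress stack could be managed like this (hypothetical release and values file names):

# One command installs the whole stack, with options from a single file
helm install my-wordpress bitnami/wordpress -f my-values.yaml

# One command removes everything the chart created
helm uninstall my-wordpress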

Install

It's important to understand that it makes more sense to install and run Helm on your workstation rather than on the master node.

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Let’s Deploy a Basic App: Nginx

Let’s use Helm to deploy a simple Nginx server. This chart will install Nginx and automatically expose it with a Service.

Step 1: Add the Chart Repository

A « repository » is just a place that hosts charts. We’ll add the bitnami repository, which has thousands of popular apps.

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

This just tells Helm where to find the « bitnami » charts.

Step 2: Install the App

This one command will download the chart, configure all the YAML files in the background, and apply them to your cluster.


helm install my-nginx bitnami/nginx

You'll see a big output, but the important part is:

NAME: my-nginx
STATUS: deployed

Step 3: See What Helm Did

First, you can ask Helm what it’s managing:

helm ls

You'll see your new « release » named my-nginx:

❯ helm ls
NAME        NAMESPACE   REVISION    UPDATED                                 STATUS      CHART           APP VERSION
my-nginx    default     1           2025-10-28 01:00:14.346439 +0100 CET    deployed    nginx-22.2.1    1.29.2

Now, let’s see the Kubernetes objects it created for us (just like we did before):


kubectl get all

You’ll see that Helm automatically created:

  1. A pod/my-nginx-...

  2. A service/my-nginx

  3. A deployment.apps/my-nginx

  4. A replicaset.apps/my-nginx-...

It did all that work for us!

Step 4: Access Your App

The Bitnami chart, by default, creates a Service of type LoadBalancer. In K3s, the built-in service load balancer (Klipper) will automatically assign it an external IP from your node’s IP.

Let’s find that IP:


kubectl get svc my-nginx

Output:

NAME       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
my-nginx   LoadBalancer   10.43.144.156   192.168.1.68   80:31123/TCP   2m

(Note: If your EXTERNAL-IP stays <pending>, it might be because your K3s setup doesn’t have an IP pool. In that case, you can still use the NODEPORT 31123 just like in your first Nginx test.)

Like here:

You can now access your app at the external IP:

Step 5: Clean Up

This is the best part. To delete everything we just created—the deployment, the service, the pod—you just run one command:

helm uninstall my-nginx

It will say release "my-nginx" uninstalled.

If you run kubectl get all again, you’ll see all the my-nginx objects are gone. It’s a perfect, clean uninstallation.

Install jellyseerr

To find Helm charts, there's this site: https://artifacthub.io/

I decided to use the following Helm chart:

https://artifacthub.io/packages/helm/loeken-at-home/jellyseerr

Add repository

helm repo add loeken-at-home https://loeken.github.io/helm-charts

Install chart

helm install my-jellyseerr loeken-at-home/jellyseerr --version 2.7.3

You can customize the chart with a values file; the options below go in jellyseerr-values.yaml:

# jellyseerr-values.yaml

ingress:
  main:
    enabled: true
    hosts:
      - host: jellyseerr.home
        paths:
          - path: /

persistence:
  config:
    enabled: true
    type: persistentVolumeClaim 
    accessMode: ReadWriteOnce
    size: 1Gi
    storageClass: "local-path"
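To apply these values, pass the file when installing, or upgrade the existing release (assuming the file is saved as jellyseerr-values.yaml):

helm upgrade --install my-jellyseerr loeken-at-home/jellyseerr --version 2.7.3 -f jellyseerr-values.yaml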

There are more options for persistence and other settings in the chart's values.yaml.

In a previous article we covered the core concepts of Deployments, Services, and Ingress. Deploying WordPress is the perfect next step: it introduces three crucial new Kubernetes concepts that build on your Docker knowledge.

  1. Persistent Storage: In Docker, you’d use a volume like -v /my/files:/var/www/html to save your data. In Kubernetes, you need a way to do this that works even if your app (Pod) gets moved to a different node. We’ll use PersistentVolumeClaims (PVCs) for this.

  2. Secrets: You should never put passwords in your config files. Kubernetes has a special object called a Secret to store sensitive data like your database password.

  3. App-to-App Communication: Your WordPress app needs to talk to your MySQL database. We’ll use a Service for this, but it will be a special internal-only type called ClusterIP.

Here is a step-by-step plan to get our WordPress site running.

Create a Secret for Your Passwords

First, let’s create a Secret to hold all the passwords for our database. This keeps them out of our YAML files.

Run this command on your master node. Remember to change the passwords!

kubectl create secret generic wordpress-db-secret \
  --from-literal=MYSQL_ROOT_PASSWORD='YOUR_ROOT_PASSWORD' \
  --from-literal=MYSQL_PASSWORD='YOUR_WORDPRESS_DB_PASSWORD' \
  --from-literal=MYSQL_USER='wordpress'

I’m going to use

kubectl create secret generic wordpress-db-secret \
  --from-literal=MYSQL_ROOT_PASSWORD='admin' \
  --from-literal=MYSQL_PASSWORD='admin' \
  --from-literal=MYSQL_USER='wordpress'

This creates one Secret named wordpress-db-secret with three separate values inside it.

This should return:

secret/wordpress-db-secret created

You can check the secret creation by running :

kubectl get secret wordpress-db-secret
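Keep in mind the values are only base64-encoded, not encrypted; you can read one back like this (using the secret created above):

kubectl get secret wordpress-db-secret -o jsonpath='{.data.MYSQL_USER}' | base64 -d
# should print: wordpress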

Create the « File Store » (PersistentVolumeClaims)

You need two persistent storage areas: one for the MySQL database files and one for the WordPress files (like your uploads, themes, and plugins).

This is the equivalent of mounting a local directory inside a container with Docker, like:

-v /my/files:/var/www/html

We’ll create two PersistentVolumeClaim (PVC) objects. Think of a PVC as a request for storage. The good news is that K3s comes with a Local Path Provisioner built-in, which will automatically fulfill these requests by creating storage directories on your nodes.

Create a file named storage.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim  # A claim for the database
spec:
  accessModes:
    - ReadWriteOnce    # This volume can be mounted by one node at a time
  resources:
    requests:
      storage: 5Gi     # Request 5 Gigabytes
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pv-claim # A claim for the WP files
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi    # Request 10 Gigabytes

Apply it:

kubectl apply -f storage.yaml

It should return

persistentvolumeclaim/mysql-pv-claim created
persistentvolumeclaim/wordpress-pv-claim created

You can check that they were created and « Bound » (fulfilled) by running:

kubectl get pvc

It worked if you see something like:

NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
mysql-pv-claim       Pending                                      local-path     <unset>                 27s
wordpress-pv-claim   Pending                                      local-path     <unset>                 27s

Don't worry about the Pending status: the K3s local-path provisioner waits for the first consumer, so the claims will switch to Bound once the MySQL and WordPress Pods mount them.

Deploy MySQL

Now we’ll deploy the database. This will consist of two parts in one file:

  1. A Deployment to run the MySQL container.

  2. A Service so WordPress can find the database.

Create a file named mysql.yaml:

apiVersion: v1
kind: Service
metadata:
  name: mysql  # This is the stable DNS name WordPress will use to connect
spec:
  ports:
  - port: 3306
  selector:
    app: mysql   # Connects this service to Pods with the label "app: mysql"
  type: ClusterIP  # IMPORTANT: Only reachable *inside* the cluster
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql # The label the Service is looking for
    spec:
      containers:
      - name: mysql
        image: mysql:8.0 # Using MySQL 8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wordpress-db-secret  # The secret we created
              key: MYSQL_ROOT_PASSWORD # The specific key inside the secret
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wordpress-db-secret
              key: MYSQL_PASSWORD
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: wordpress-db-secret
              key: MYSQL_USER
        - name: MYSQL_DATABASE
          value: "wordpress" # We'll hardcode the database name
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql # Mount the storage
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim # Use the PVC we created

Apply it:

kubectl apply -f mysql.yaml

It should return

service/mysql created
deployment.apps/mysql created

Deploy WordPress

Now for the main event! This is very similar to the MySQL setup. We create a Deployment and a Service.

Create a file named wordpress.yaml:

apiVersion: v1
kind: Service
metadata:
  name: wordpress # The name our Ingress will point to
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: wordpress
  type: ClusterIP # We'll expose this with Ingress, so ClusterIP is perfect
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2 # Let's run 2 replicas for high availability
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:latest
        env:
        - name: WORDPRESS_DB_HOST
          value: "mysql" # This is the name of the MySQL Service!
        - name: WORDPRESS_DB_USER
          valueFrom:
            secretKeyRef:
              name: wordpress-db-secret
              key: MYSQL_USER
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wordpress-db-secret
              key: MYSQL_PASSWORD
        - name: WORDPRESS_DB_NAME
          value: "wordpress" # Must match the MYSQL_DATABASE name
        ports:
        - containerPort: 80
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html # Mount the storage for WP files
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-pv-claim # Use the other PVC

And we apply it:

kubectl apply -f wordpress.yaml

It should return

service/wordpress created
deployment.apps/wordpress created

Expose WordPress with Ingress

This is the final step, and it’s exactly what we did for our Node.js app. We’ll create an Ingress to route external traffic to our new WordPress Service.

Create a file named wordpress-ingress.yaml:


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: wordpress.home # Or whatever domain you want
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wordpress # Point to the WordPress Service
            port:
              number: 80 # On port 80

Apply it:

kubectl apply -f wordpress-ingress.yaml

It should return

ingress.networking.k8s.io/wordpress-ingress created

Setup the DNS

Update your DNS: Just like before, add your new host to your /etc/hosts file (or your Pi-hole):

192.168.1.68 wordpress.home

Check Your Cluster: You can see everything you’ve created:

kubectl get all,ingress,pvc

Our WordPress files will be saved in /var/lib/rancher/k3s/storage/ on one of your nodes (managed by the wordpress-pv-claim), and your database data will be in a similar location (managed by the mysql-pv-claim). They will both persist even if you delete or restart the Pods.

root@k3sMasterNode:/var/lib/rancher/k3s/storage#  ls -l
total 8
drwxr-xr-x 5 www-data www-data 4096 Oct 24 22:16 pvc-149b4f6f-5885-4e25-9776-4621267ceac4_default_wordpress-pv-claim
drwxrwxrwx 8      999 root     4096 Oct 27 22:12 pvc-8b33a04a-72d3-466a-a172-222220597d88_default_mysql-pv-claim

And I can explore my PV claim:

cd /var/lib/rancher/k3s/storage/pvc-149b4f6f-5885-4e25-9776-4621267ceac4_default_wordpress-pv-claim

We can explore the PersistentVolumeClaims in K9s by doing:

  1. In K9s, type :pvc (or :persistentvolumeclaim) and press Enter.

  2. You will see a list of all your PVCs, like mysql-pv-claim and wordpress-pv-claim.

  3. This view is great for checking their Status (it should say Bound) and how much Capacity they have.

What we created

Here's a diagram of what we created. It's not the simplest or clearest diagram, but if you take the time to explore it, and if you understood this article, I believe it will make sense to you.

The Kubernetes-to-Docker-Compose Map

Now you have a good understanding of how Kubernetes works, and you should be able to translate what you know from Docker to Kubernetes. Here is a quick cheat sheet I made for you.

Kubernetes (K3s) Object → Docker Compose + Caddy Equivalent

Deployment → A service definition in your docker-compose.yaml (e.g., services: wordpress:).
Pod → A running Docker container instance managed by that service.
Service (ClusterIP) → Docker Compose's internal networking (e.g., when your wordpress container can reach your mysql container just by using the name mysql).
Ingress → Your Caddyfile or Caddy's configuration. It's the reverse proxy that routes wordpress.home to the right container.
PersistentVolumeClaim → A named volume in your docker-compose.yaml (e.g., volumes: - wp_data:/var/www/html).
Secret → Your .env file that you load with env_file: .env.
K3s Nodes (Master/Worker) → The single VM/server you are running docker-compose on.

You have now learned the complete set of building blocks to run almost any application you can run in Docker or Docker Compose.

Think of it this way: any project you have in a docker-compose.yaml file can be « translated » to K3s using the objects you now know.

Your New « Translation » Toolkit

When you look at any of your Docker projects, you can just map the concepts:

Have fun !

In this project, we will set up a basic CI/CD pipeline. To illustrate this, we will use a basic Node.js and Express app. Our goal: each time we commit something to the main GitHub branch, we run our tests; if the tests pass, we deploy our app to production. For this project, the production environment is a VM running Debian.

Set up the app

npm init -y

Install Express, Mocha, and Chai (for the tests). Here, we install chai@4.3.10 as it handles CommonJS imports, while version 5 does not.

npm install express
npm install mocha chai@4.3.10 supertest --save-dev

Create the Express app

Create an app.js file:


// app.js
const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("Hello World!");
});

app.get("/api", (req, res) => {
  res.json({ message: "API is working" });
});

module.exports = app;

Create a file server.js and start the server:


// server.js
const app = require("./app");
const PORT = process.env.PORT || 3000;

app.listen(PORT, () => {
  console.log(`Server running on http://localhost:${PORT}`);
});

Let's start the local server:

node server.js

Go to http://localhost:3000 and check that « Hello World! » appears.

Add the tests

Create a test folder:

mkdir test

Create the Unit tests

Create the file test/unit.test.js:

// test/unit.test.js
const { expect } = require("chai");

describe("Basic Math", () => {
  it("should add two numbers correctly", () => {
    expect(2 + 2).to.equal(4);
  });

  it("should check if a string contains a word", () => {
    const str = "Hello World";
    expect(str).to.include("World");
  });
});

Create the integration tests

Create the file test/integration.test.js:

// test/integration.test.js
const request = require("supertest");
const { expect } = require("chai");
const app = require("../app");

describe("Integration Tests", () => {
  it("GET / should return Hello World", async () => {
    const res = await request(app).get("/");
    expect(res.status).to.equal(200);
    expect(res.text).to.equal("Hello World!");
  });

  it("GET /api should return JSON response", async () => {
    const res = await request(app).get("/api");
    expect(res.status).to.equal(200);
    expect(res.body).to.deep.equal({ message: "API is working" });
  });
});

Set up npm scripts

Modify your package.json:

{
  "name": "nodejs-ci-cd",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node server.js",
    "test": "mocha test/**/*.test.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "dependencies": {
    "express": "^4.21.2"
  },
  "devDependencies": {
    "mocha": "^11.1.0",
    "chai": "^4.3.10",
    "supertest": "^7.1.0"
  }
}

Test locally

Run the tests:

npm test

You should get something like


> mocha test/**/*.test.js

  Integration Tests
    ✔ GET / should return Hello World
    ✔ GET /api should return JSON response
    ✔ GET /api should return JSON response

  Basic Math
    ✔ should add two numbers correctly
    ✔ should check if a string contains a word

  5 passing (31ms)

Put the code on GitHub

Create a repository, add it to your project, and push your code following the instructions from GitHub.

Set up GitHub Actions

Create the .github/workflows directory:

mkdir -p .github/workflows

Add the .github/workflows/ci.yml file:

touch .github/workflows/ci.yml

Then paste:

name: CI for Node.js App

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm install

      - name: Run tests
        run: npm test

Push the code to GitHub and check that the tests pass

Check the Actions tab of your repo to ensure that the tests pass.

Set up automated deployment when tests pass

Be sure you have Node.js and Git installed on your server.

Clone your project:


git clone <YOUR_REPO_URL> ~/apps/nodejs-ci-cd
cd ~/apps/nodejs-ci-cd
npm install

Use PM2 to keep it running:

sudo npm install -g pm2
pm2 start server.js --name "nodejs-app"
pm2 save

Automate deployment

Create the deploy.sh script on your VM:

nano ~/apps/nodejs-ci-cd/deploy.sh

Paste this:


#!/bin/bash
cd ~/apps/nodejs-ci-cd
git pull origin main
npm install
pm2 restart nodejs-app

Make it executable:

chmod +x ~/apps/nodejs-ci-cd/deploy.sh

Add deployment to the pipeline

Update your ci.yml:


  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to VPS
        uses: appleboy/ssh-action@v0.1.10
        with:
          host: ${{ vars.SERVERHOST }}
          username: ${{ vars.SERVERUSER }}
          port: ${{ vars.SERVERPORT }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: "bash /home/antoine/apps/nodejs-ci-cd/deploy.sh"

Add the info to GitHub

As you may have noticed, we use environment variables from GitHub Actions (vars.SERVERHOST, vars.SERVERUSER, and vars.SERVERPORT).

We also need to set up SSH keys to let GitHub Actions run the deployment commands on our VM.

Let's run this command to generate the key:

ssh-keygen -t ed25519 -C "your_email@example.com"

This command will generate two files:

id_ed25519 (the private key)
id_ed25519.pub (the public key)

By default the keys will be created in ~/.ssh/

Set the permissions on the private key

chmod 600 ~/.ssh/id_ed25519

Now we need to put the public key in the list of keys that can be used to connect to our VM

Copy the contents of your public key:

cat ~/.ssh/id_ed25519.pub

Then append the public key to the authorized_keys file:

echo "your_public_key_here" >> ~/.ssh/authorized_keys

Ensure the file is in the right location (~/.ssh/authorized_keys) and has the correct permissions:

chmod 600 ~/.ssh/authorized_keys

Restart SSH on the server:

sudo systemctl restart ssh

Be sure the VM allows connecting with a key. To do so, open:

nano /etc/ssh/sshd_config

Verify that the following lines are present and not commented out (no # at the beginning of the line):

PubkeyAuthentication yes
PasswordAuthentication no # you can keep this set to yes, but it's less secure

Restart ssh

systemctl restart ssh

To verify that you can actually connect to the server with the key, you can test it like so:

ssh -i ./yourPrivate.key -p 22 user@<VM-ip>

Add the private key to GitHub

Go to the folder where you saved your SSH keys:

cd /root/.ssh

Get the private key:

cat id_ed25519

It should look like this:

-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEZWQyNTUxOQAAACDfyqxZPq/jVxqg
1PM7J/v2D3gF41UbwGBj3G8W+M4uM79fQDRNdgkPm4F0dghhAAAAJQCn9Flfp/RZ
X6cAAAAg3kBIUgB0dI3AwIjk5GMQXoEDCnD2MyHHo7HvDWud4G/cDpK53M8ZdAHg
dW9V1bWoGR2cU6BAM+xPM4PiRwA==
-----END OPENSSH PRIVATE KEY-----

Take the key and store it in a secret variable in GitHub Actions as SSH_PRIVATE_KEY.
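If you use the GitHub CLI, you can also set the secret from the terminal instead of the web UI (optional; assumes gh is installed and authenticated):

gh secret set SSH_PRIVATE_KEY < ~/.ssh/id_ed25519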

Conclusion

Now you should have a basic working CI/CD pipeline for your Node.js app with GitHub Actions.

This article organizes step-by-step instructions for deploying, exposing, and managing Kubernetes workloads, handling connectivity issues, and ensuring self-healing deployments with K3s. These notes are the result of my personal experiments with K3s.

Install K3s

To install K3s, you need at least 2 VMs. In my case, I decided to use Debian VMs on Proxmox. The first VM will be the master node, and the second will be the worker node. The master node orchestrates the cluster and distributes the load across one or more worker nodes. As your application grows and requires more machines, you can simply add more worker nodes to the cluster.

Install the master node

curl -sfL https://get.k3s.io | sh - 

Get the token

cat /var/lib/rancher/k3s/server/token

It will return something like

root@k8masterNode:~# cat /var/lib/rancher/k3s/server/token
K10ee9c18dac933cab0bdbd1c64ebece61a4fa7ad60fce2515a5fcfe19032edd707::server:64ee8a3fec9c3d1db6c2ab0fc40f8996

Get the IP of the machine

root@k8masterNode:~# ip addr show | grep 192
inet 192.168.1.68/24 brd 192.168.1.255 scope global dynamic ens18

Join the cluster from the worker node

NB: Be sure the worker node has a different hostname, otherwise it will not work.

If the VM has been cloned, then run: sudo hostnamectl set-hostname <new-hostname>

curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.68:6443 K3S_TOKEN="K10ee9c18dac933cab0bdbd1c64ebece61a4fa7ad60fce2515a5fcfe19032edd707::server:64ee8a3fec9c3d1db6c2ab0fc40f8996" sh -

Check if the worker node joined the cluster

Use the command below on the master node:

kubectl get nodes

It should show the worker node alongside the master node, like so:

root@k3sMasterNode:/home/antoine# kubectl get nodes
NAME            STATUS   ROLES                  AGE     VERSION
k3smasternode   Ready    control-plane,master   8m14s   v1.33.5+k3s1
k3sworkernode   Ready    <none>                 2m3s    v1.33.5+k3s1

Creating and Exposing Deployments

Now let's test our cluster with a basic deployment: a simple web server running Nginx.

Create a Deployment

kubectl create deployment nginx --image=nginx

It should return something like:

root@k3sMasterNode:/home/antoine# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

Expose the Deployment

kubectl expose deployment nginx --type=NodePort --port=80

It should return:

service/nginx exposed

Verify the service:

kubectl get svc

Expected output:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP        26m
nginx        NodePort    10.43.51.9   <none>        80:31193/TCP   6m50s

Verify the pods:

kubectl get pods -o wide

Example:

NAME                    READY   STATUS    IP          NODE
nginx-bf5d5cf98-knd5t   1/1     Running   10.42.1.3   workernode

Note that the Nginx service has two ports: 80 and 31193. The first one will be used to access the service from inside the cluster, while the other one is for access from outside.

Access the Nginx App:

From inside the cluster (from the master node):

curl http://<SERVICE-IP>

Like so:

curl 10.43.51.9

From a machine outside of the cluster:

curl http://<MASTER-IP>:<NODEPORT>

Like so:

curl http://192.168.1.68:31193

So here we used two Kubernetes objects:

The Deployment: my app logic

That created a Deployment object, which in turn manages Pods.
Those pods are running the NGINX container image.

The Service: the network access

It acts as a stable network entry point for your pods:
• Internally (via ClusterIP): 10.43.51.9:80
• Externally (via NodePort): <any-node-ip>:31193

It automatically load-balances traffic across all pods in the deployment.
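You can watch that load balancing in action by scaling the deployment and checking where the new pods land (a quick optional experiment):

kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide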

To see everything we launched so far, you can type:

kubectl get all

That will show what's running:

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-5869d7778c-jwl6j   1/1     Running   0          33m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP        50m
service/nginx        NodePort    10.43.51.9   <none>        80:31193/TCP   30m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           33m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-5869d7778c   1         1         1       33m

Managing a Deployment

OK, now let's remove this basic deployment, which was just here to confirm everything was working. We will remove what we just did to get a clean base to deploy our own application to the cluster. Exciting!

Remove a Deployment

Delete the service:

kubectl delete service nginx

It should return:

service "nginx" deleted

Delete the deployment:

kubectl delete deployment nginx

It should return:

deployment.apps "nginx" deleted

Confirm deletion:

kubectl get deployments

It should show:

No resources found in default namespace.

To check that our service is now gone:

kubectl get svc

Here we can see our nginx service is no longer present:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   54m

Building and Pushing Custom Docker Images

Deploying that Nginx image was just the beginning; let’s deploy our own image to our cluster.

If you don’t have an image you want to deploy yet, you can create one and upload it to an image repository like Docker Hub.

Build the Docker Image

Ensure your app is ready for deployment:

docker build -t node-express-app .

Login to DockerHub

docker login

Tag and Push the Image

docker tag node-express-app antoinebr/node-express-app:v1
docker push antoinebr/node-express-app:v1

Deploying Custom Applications

Now that we have our Docker image ready, let's deploy it to our cluster. But this time we want to be more precise and use a file that describes our deployment.

Create a Deployment YAML

Save as node-express-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-express-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: node-express-app
  template:
    metadata:
      labels:
        app: node-express-app
    spec:
      containers:
      - name: node-express-app
        image: antoinebr/node-express-app:v10
        ports:
        - containerPort: 3000

Apply the Deployment

kubectl apply -f node-express-deployment.yaml

If that worked, you should see:

deployment.apps/node-express-app created

You can check the deployment we created by running:

kubectl get deployments

This should show

root@k3sMasterNode:/home/antoine/node-express-app# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
node-express-app   3/6     6            3           58s

We can see all our pods that contain an instance of our container. We chose to have six of them, and we can observe this by using:

kubectl get pods

That will return

root@k3sMasterNode:/home/antoine/node-express-app# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
node-express-app-d58f5fb9b-d47nm   1/1     Running   0          7m25s
node-express-app-d58f5fb9b-kl9nj   1/1     Running   0          7m25s
node-express-app-d58f5fb9b-plrkd   1/1     Running   0          7m25s
node-express-app-d58f5fb9b-stb87   1/1     Running   0          7m25s
node-express-app-d58f5fb9b-trj9j   1/1     Running   0          7m25s
node-express-app-d58f5fb9b-zjt6b   1/1     Running   0          7m25s

Expose the Deployment

In order to access the app we just launched, we need to expose our app to the network. To do so, we will use the following command.

kubectl expose deployment node-express-app --port=80 --target-port=3000 --type=NodePort

It should return:

service/node-express-app exposed

Let's break it down:

 --port=80 --target-port=3000 

That command says something like: « When someone sends a request to port 80 on the Service, forward it to port 3000 on the pod. »

--type=NodePort

Exposes the service externally on each node’s IP at a static port

So with NodePort, Kubernetes assigns a random port between 30000 and 32767 (unless you specify one via the Service's nodePort field).

That’s how you can access it from outside, like:

curl http://192.168.1.68:31193

So in plain language, it says something like:

« Create a Service that makes my Deployment node-express-app reachable on port 80, forward that traffic to port 3000 inside the app's container, and expose it externally via a NodePort. »

Get App IP

kubectl get svc node-express-app

You will get something like:

root@k3sMasterNode:/home/antoine/node-express-app# kubectl get svc node-express-app
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
node-express-app   NodePort   10.43.220.17   <none>        80:31581/TCP   24s

Access the App:

curl http://<MASTER-IP>:<NODEPORT>

In my case:

curl http://192.168.1.68:31581

If you deploy my image, you will see a counter that increases by one on each reload. And subtly, it can jump, say from 10 to 2: each request may be load-balanced to a different pod, and the page shows the state of whichever container handled it.

Make it more production ready

So it works! But using NodePort is a bit hacky: to access our app from outside of the cluster, we have to specify a port…

http://192.168.1.68:31581/

We could keep it this way and add a reverse proxy in front of it that will forward our users’ requests to the right port…

But guess what: that already exists in K3s. It’s built in, and it’s called an ingress controller.
Let’s set it up.

Before we continue, let’s remove the deployment we created previously. To remove that deployment, use the command:

kubectl delete deployment node-express-app

It should return :

deployment.apps "node-express-app" deleted

Right after running the command you can check the pod states by running :

kubectl get pods

It should show something like this, where the pods changed status to « Terminating » :

NAME                               READY   STATUS        RESTARTS   AGE
node-express-app-d58f5fb9b-d47nm   1/1     Terminating   0          22m
node-express-app-d58f5fb9b-kl9nj   1/1     Terminating   0          22m
node-express-app-d58f5fb9b-plrkd   1/1     Terminating   0          22m
node-express-app-d58f5fb9b-stb87   1/1     Terminating   0          22m
node-express-app-d58f5fb9b-trj9j   1/1     Terminating   0          22m
node-express-app-d58f5fb9b-zjt6b   1/1     Terminating   0          22m

They then disappear completely the next time we run :

kubectl get pods
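
One thing to keep in mind: the Service created by kubectl expose is a separate object and is not removed with the deployment. To clean it up as well:

kubectl delete service node-express-app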

Using an Ingress Controller

With the ingress controller, we want to get rid of the <ip>:<port> way to access our app. In other words, we want to set up a reverse proxy in our cluster.

Deployment and Service Configurations

Create deployment.yaml:

Our deployment is the same as before, but this time I have chosen to use only 3 replicas. So, I will have 3 instances of my container running the app, which will provide load balancing and self-healing (reconstruction) in case something goes wrong.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-express-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-express-app
  template:
    metadata:
      labels:
        app: node-express-app
    spec:
      containers:
      - name: node-express-app
        image: antoinebr/node-express-app:v10
        ports:
        - containerPort: 3000

Create service.yaml:

Here, our service will indicate that when someone requests access to the app node-express-app, we expect the request on port 80, which we will forward to port 3000.

apiVersion: v1
kind: Service
metadata:
  name: node-express-app
spec:
  selector:
    app: node-express-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: ClusterIP

Create ingress.yaml:

Here is our Ingress definition, which serves as our reverse-proxy rule (in K3s it is handled by the built-in Traefik ingress controller). We are expecting requests for the host node-express.home, which we will forward to our backend service node-express-app on port 80, as defined earlier.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-express-ingress
  annotations:
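    # NB: this annotation targets the NGINX ingress controller; K3s ships Traefik by default, so it is inert here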
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: node-express.home
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: node-express-app
            port:
              number: 80

Apply Configurations

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml

Check our deployment

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
node-express-app-d58f5fb9b-tdxmn   1/1     Running   0          2m18s
node-express-app-d58f5fb9b-xjnkx   1/1     Running   0          2m18s
node-express-app-d58f5fb9b-zqmrv   1/1     Running   0          2m18s

All good, we have our 3 replicas as requested.

Check our service

kubectl get svc
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes         ClusterIP   10.43.0.1      <none>        443/TCP   114m
node-express-app   ClusterIP   10.43.220.17   <none>        80/TCP    44m

All good, we have our node-express-app service as requested.

Check our ingress

kubectl get ingress
NAME                   CLASS     HOSTS               ADDRESS                     PORTS   AGE
node-express-ingress   traefik   node-express.home   192.168.1.68,192.168.1.69   80      2m16s

All good for the ingress too.

Set Hostname in Local System or DNS

Because we want to access our app via its domain name, we need to associate the IP address with the domain. The easiest way to do this is by editing your hosts file on your local machine like this:

Add to /etc/hosts:

192.168.1.68 node-express.home
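
On macOS or Linux you can append that entry in one line (adjust the IP to your node’s):

echo "192.168.1.68 node-express.home" | sudo tee -a /etc/hosts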

Or with your local DNS server if you have one locally, which is my case with Pi-hole installed on my network.

Let’s ping our domain to confirm it points to our cluster :

ping  node-express.home

PING node-express.home (192.168.1.68): 56 data bytes
64 bytes from 192.168.1.68: icmp_seq=0 ttl=64 time=6.861 ms

Ok all good now let’s test it from our browser :
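
You can also test the ingress routing without touching /etc/hosts at all, by faking the Host header with curl (using my node’s IP; adjust to yours):

curl -H "Host: node-express.home" http://192.168.1.68/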

Here’s a diagram of the setup

Here’s a visual representation of what we created:

Setup K9s to manage our cluster

K9s is a CLI tool to manage your cluster. It’s cool because you can :

See everything easily: You can instantly see all your apps, pods, and services running in your cluster.

Get logs fast: Press a key and you can read your app’s logs — no need to type long kubectl logs commands.

Enter containers easily: You can jump inside a pod/container with just a few keystrokes to check or debug things.

Do simple actions quickly: Scale apps, restart pods, or delete things directly from K9s — all in one screen.

Install

On a Mac, I installed K9s like so :

brew install k9s

Get the keys

To be able to control our cluster from K9s, we need the keys.

So what I’m going to do is copy my K3s keys to my machine using scp:

scp root@192.168.1.69:/etc/rancher/k3s/k3s.yaml ~/.kube/config

Another way to proceed is to copy and paste your key from the master node to your local machine like so :

cat /etc/rancher/k3s/k3s.yaml

And copy paste the content in :

~/.kube/config

Adapt the keys

By default, the kubeconfig file from K3s refers to the cluster as 127.0.0.1, which only works on the node itself.

In ~/.kube/config

Find the line that looks like this:

server: https://127.0.0.1:6443

Replace 127.0.0.1 with your master node’s IP:

server: https://192.168.1.68:6443
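
To avoid editing the file by hand, a sed one-liner does it (BSD/macOS syntax shown; on Linux drop the empty quotes after -i). Then, if you also have kubectl installed locally, verify that the cluster answers on the new address:

sed -i '' 's/127.0.0.1/192.168.1.68/' ~/.kube/config
kubectl get nodes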

Launch K9s

k9s

Now you should be connected to your cluster with K9s :

Handling Worker Node Connectivity Issues

Verify Node Status

kubectl get nodes

If a worker node shows NotReady, check connectivity from the worker to the master:

curl -k https://<MASTER-IP>:6443

Expected output (an « Unauthorized » response is fine here; it proves the API server is reachable):

{
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

Troubleshoot Worker Token

Verify the token (on the master node):

cat /var/lib/rancher/k3s/server/token

Re-set the token (on the worker node):

echo "THE_TOKEN" | tee /var/lib/rancher/k3s/agent/token

Restart K3s Agent

sudo systemctl daemon-reload
sudo systemctl restart k3s-agent.service
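
If the node still shows NotReady after the restart, the agent logs usually tell you why (bad token, unreachable master, etc.):

journalctl -u k3s-agent.service -f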

Check Nodes

kubectl get nodes

Restarting K3s Service

Common Errors

Common errors usually come down to an agent token that no longer matches the server token stored in:

/var/lib/rancher/k3s/server/token

Restart K3s

sudo systemctl restart k3s

Kubernetes Crash Handling

Kubernetes automatically restarts failed pods.

Check Pod Status:

kubectl get pods

Example output:

NAME                                READY   STATUS    RESTARTS   AGE
node-express-app-8477686cf7-6fmvz   1/1     Running   0          30m

If a pod crashes, Kubernetes will reschedule it to maintain the desired replica count.
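
You can watch the self-healing happen: delete a pod and the Deployment immediately schedules a replacement (pod name taken from the output above):

kubectl delete pod node-express-app-8477686cf7-6fmvz
kubectl get pods -w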

Resolving Proxmox Update Errors: A Guide for Home Labs

Proxmox Error: command 'apt-get update' failed: exit code 100

Many home lab users encounter update errors in Proxmox VE due to the default configuration including enterprise repositories. These repositories require a paid subscription, leading to « unauthorized IP » errors when attempting updates without one. This article provides a step-by-step guide to resolve this issue by switching to the community (no-subscription) repositories.

Understanding the Problem

Proxmox VE, by default, includes enterprise repositories in its configuration. These repositories provide access to features and updates intended for production environments and require a valid subscription. When a user without a subscription attempts to update their Proxmox installation, the system tries to access these restricted repositories, resulting in errors and failed updates.

The Solution: Switching to Community Repositories

The solution is to disable the enterprise repository and enable the community (no-subscription) repository. Here’s how:

  1. Access the Proxmox Web UI: Log in to the web interface of your Proxmox server.

  2. Navigate to Repositories:

    • Go to « Datacenter » and select your Proxmox node.
    • Click on « Repositories ».

  3. Disable the Enterprise Repository:
    • Locate the enterprise repository in the list (it will likely have a name indicating it requires a subscription).
    • Click the « Disable » button next to it.
  4. Add the No-Subscription Repository:
    • Click the « Add » button.
    • In the pop-up window, select « No-Subscription » from the dropdown menu.
    • Click « Add » to add the repository, then click « Enable » to activate it.

  5. Refresh Updates:
    • Go to « Updates » and click the « Refresh » button. This will force Proxmox to update its package list from the newly enabled community repository.
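
If you prefer the shell over the Web UI, the same change can be sketched in two commands (assuming Proxmox VE 8 on Debian bookworm; adjust the release name to yours):

# comment out the enterprise repo
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# enable the community repo, then refresh the package list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt-get update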

Conclusion

By following these steps, you can easily resolve Proxmox update errors in your home lab environment without requiring a paid subscription. This allows you to keep your Proxmox installation up-to-date with the latest community-supported packages and features.

In the Proxmox terminal:

Install ethtool

run

apt install ethtool -y

List the IP Addresses

ip addr

Search for your network card; in my case:

2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 9c:7b:ef:b5:bd:c1 brd ff:ff:ff:ff:ff:ff
    altname enp5s0f0

Get the network card ID; in my case it’s eno1.

Verify if Wake on LAN is Enabled

Run

ethtool <network card id>

So in my case

ethtool eno1

This should return:

Settings for eno1:
        Supported ports: [ TP    MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
                                             1000baseT/Half 1000baseT/Full
        Link partner advertised pause frame use: No
        Link partner advertised auto-negotiation: Yes
        Link partner advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Auto-negotiation: on
        master-slave cfg: preferred slave
        master-slave status: slave
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        MDI-X: Unknown
        Supports Wake-on: pumbg
        Wake-on: g
        Link detected: yes

Look for

   Supports Wake-on: pumbg
        Wake-on: g

If you get pumbg, this means that your network card supports Wake-on-LAN (the g flag stands for waking on a magic packet).

If you don’t have Wake-on: g, it means the feature is currently disabled. To enable it, run:

ethtool -s <network card id> wol g

In my case :

ethtool -s eno1 wol g
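
To confirm the change took effect, a quick check (we expect Wake-on: g):

ethtool eno1 | grep Wake-on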

Persist the WOL even after a machine restart

What we have done here may be erased by a restart. To make the change persistent, create a systemd service:

nano /etc/systemd/system/wol.service

Then put

[Unit]
Description=Enable Wake-on-LAN
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -s <network card id> wol g

[Install]
WantedBy=multi-user.target

So in my case :

[Unit]
Description=Enable Wake-on-LAN
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -s eno1  wol g

[Install]
WantedBy=multi-user.target

Then enable the service we created:

systemctl enable wol.service

and

systemctl start wol.service

Install a Wake-on-LAN Utility

To send a Wake-on-LAN (WoL) packet from a Mac, you can use a tool such as « wakeonlan. »

brew install wakeonlan

Then run

wakeonlan <mac_address_of_the_machine>

Something like

wakeonlan 3C:5B:JK:B1:ED:E9 
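
You need the MAC address of the machine you want to wake, so grab it on the Proxmox host before shutting it down; a sketch, assuming the card is eno1 as above:

ip link show eno1 | awk '/ether/ {print $2}'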

WireGuard is a super simple and fast VPN. It’s built with modern encryption, so it’s secure, and it’s designed to be lightweight and easy to set up. Unlike older VPNs like OpenVPN or IPSec, WireGuard runs right in the Linux kernel, making it crazy fast and efficient. Whether you want to secure your internet traffic or connect devices, it gets the job done with minimal hassle.

Setting Up WireGuard with Docker Compose

You can deploy WireGuard easily using Docker Compose. Below is an example of a docker-compose.yml file. Modify it to suit your needs.

This configuration creates a WireGuard container that listens on UDP port 51820 and maps it to the container’s internal port 51820.

services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    container_name: wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Paris              # Set your timezone
      - SERVERURL=32.123.113.16    # Replace with your domain or public IP
      - SERVERPORT=51820
      - PEERS=1
      - PEERDNS=8.8.8.8 
      - INTERNAL_SUBNET=10.13.13.0
      - ALLOWEDIPS=0.0.0.0/0
      - PERSISTENTKEEPALIVE_PEERS=
      - LOG_CONFS=true
    volumes:
      - ./config:/config
      - /lib/modules:/lib/modules
    ports:
      - 51820:51820/udp
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv4.ip_forward=1 
    restart: unless-stopped 

Once your docker-compose.yml file is ready, start the container with:

docker compose up -d

Checking the Configuration Files

After running the container, the WireGuard configuration files are stored in the ./config directory. To view the server configuration, use:

cat ./config/wg_confs/wg0.conf

You’ll see something like this:

[Interface]
Address = 10.13.13.1
ListenPort = 51820
PrivateKey = kDDjhdkPZpdpsKKsksdsdOOdjssksdI=
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth+ -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth+ -j MASQUERADE

[Peer]
# peer1
PublicKey = cqkdqqdkqdknqdnqkdnqdkqdknqdnkdqnqdqdk=
PresharedKey = Ndqqdqdkoqdokdoqkokqdokdqokqd=
AllowedIPs = 10.13.13.2/32

Connecting a WireGuard Client

To connect a client to your WireGuard server, use the following configuration in your WireGuard client app:


[Interface]
PrivateKey = 8Ldqddqqddqoodododod4= # The client-generated private key
ListenPort = 51820
Address = 10.13.13.2/32
DNS = 8.8.8.8

[Peer]
PublicKey = cqkdqqdkqdknqdnqkdnqdkqdknqdnkdqnqdqdk # Public key from the server's wg0.conf [Peer] section
PresharedKey = Ndqqdqdkoqdokdoqkokqdokdqokqd= # Preshared key from the server's wg0.conf [Peer] section
AllowedIPs = 0.0.0.0/0 # Allowed IPs from the server's wg0.conf [Peer] section
Endpoint = 32.123.113.160:51820 # Server public IP/domain and port

And that’s it! With this setup, you’ll have a fully functional WireGuard VPN server running in Docker, ready to secure your connections.
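
Once a client has connected, you can confirm the tunnel from the server side; the wg tool inside the container lists the peers and their latest handshake (container name taken from the compose file above):

docker exec -it wireguard wg show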

Having compressed images on a website is crucial for delivering a fast and seamless user experience. When images are optimized, web pages load faster, allowing users to navigate the site without delays, which is especially important for maintaining engagement and reducing bounce rates. A faster experience is better for your users, as it keeps them on the site longer and enhances their satisfaction. This is particularly important on mobile devices, where 4G and other connections can often be unstable or slow, causing unoptimized images to load slowly and frustrate users.

Additionally, compressing images can have a significant impact on your website’s egress costs and the environment. Reducing the overall size of a website means less data is transferred, which lowers bandwidth costs and energy usage. Websites that are lighter and faster are more efficient and eco-friendly, reducing the carbon footprint associated with hosting and delivering content across the web. By optimizing images, you’re not only improving the user experience but also contributing to a more sustainable internet.

Compress images which are part of your layout

Images that are part of a website’s layout are visual elements that form the design and structure of the site: think logos, icons, banners, and background images.

These images typically do not change often and remain consistent across multiple pages or sections of the website. They are different from content images (like product photos or blog images) that might be frequently updated.

Use Photoshop « Save for the Web »

Photoshop is a tool used by front-end developers and designers. Out of the box, the software comes with a handy feature to compress images.

To compress images with Photoshop, follow this process:

  1. First of all, open the images with Photoshop

DPR Example 2

  2. Export the image:

Click on File -> Export -> Save for Web

DPR Example 2

  3. Handle the compression manually

Click on the 4-Up tab on the top left, this will display your image with 4 different compression settings.

DPR Example 3

On the bottom left, you have the possibility to zoom in on the image. I definitely recommend you zoom your image to see more precisely the image quality degradation.

Alternatively, you can use the Photoshop online alternative: Photopea.

  1. Open the images with Photopea

  2. Export the image:

  3. Tweak the compression manually

Squoosh.app

Squoosh is a powerful, web-based image compression tool developed by Google, allowing users to easily reduce image file sizes without sacrificing quality. It supports various image formats and offers real-time comparison between the original and compressed versions, along with advanced settings for resizing, format conversion, and optimizing images for the web. Squoosh runs entirely in the browser, making it fast, private, and highly accessible for quick image optimization tasks.

The usage of Squoosh is very straightforward. I’m not sure I need to guide you.

Optimize in bulk

ImageOptim for Mac

If you need to optimize a lot of images, I recommend ImageOptim. This tool is quite simple to use.

  1. Tweak the settings:

First, I recommend changing the quality settings. Don’t be afraid to try different settings to optimize the savings.

ImageOptim Settings

  2. Drop the images and wait for the compression to finish:

ImageOptim Drag and Drop

ImageOptim Compression

Optimize in bulk with the CLI

Optimize PNG

To optimize PNG in bulk, I recommend pngquant. Have a look at the documentation.

To install it:

apt-get install pngquant

Personally, I use this command:

pngquant --quality=60 --ext=.png --force  *.png

This converts (overwrites) the original image in place.
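
To run the same optimization over a whole directory tree, find can feed pngquant in batches, reusing the flags from above:

find . -name '*.png' -exec pngquant --quality=60 --ext=.png --force {} +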

Optimize JPG

Mozjpeg is the perfect tool to optimize JPG on Linux. To install it, do the following:

sudo apt-get -y install build-essential cmake libtool autoconf automake m4 nasm pkg-config
sudo ldconfig /usr/lib
cd ~
wget https://github.com/mozilla/mozjpeg/archive/v3.1.tar.gz
tar -xzf v3.1.tar.gz
cd mozjpeg-3.1/
autoreconf -fiv
mkdir build
cd build
sh ../configure
sudo make install

Move the binary executable (cjpeg) to your path:

cd /usr/local/bin
ln -s ~/mozjpeg-3.1/build/cjpeg

Start to optimize:

By default, the compression level is set to 75%.

cjpeg -outfile myImage.moz.jpg -optimize myImage.jpg

You can change the quality setting (here 50%):

cjpeg -quality 50 -outfile myImage.moz.jpg -optimize myImage.jpg

My advice is to try different quality levels to see what’s acceptable for you. Once you’ve found the right setting, you can optimize in bulk. There are plenty of ways to optimize in bulk, but I decided to create a simple Node.js script to do it.
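
My Node.js script isn’t reproduced here, but a plain shell loop gets you the same bulk result; a sketch reusing the cjpeg flags from above:

# compress every JPG in the current directory at quality 50
for f in *.jpg; do
  cjpeg -quality 50 -outfile "${f%.jpg}.moz.jpg" -optimize "$f"
done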

Have a look at the documentation if you want to learn more about MozJpeg usage.

On the fly optimization with a service

There are various ways to perform on-the-fly optimization and resizing for your images. Most of the time, websites and apps will use cloud services to do this. In this article, I will present one self-hosted/open-source service and one managed service.

IPX image optimizer

IPX is a high-performance, secure, and easy-to-use image optimizer powered by the Sharp library and SVGO. It’s a project behind Nuxt Images and is used by Netlify. It’s pretty straightforward to use, but I will try to save you a bit of time by giving you my working configuration in the following lines.

Install IPX with Express

First of all start a new project with :

npm init -y 

Then you are ready to install the packages :

npm install listhen express ipx

NB : It’s important to change the type in your package.json to "type": "module"

Create the main file for your server like so :

touch app.js

Also create a public folder :

mkdir public 

In your main file add the following :

import { listen } from "listhen";
import express from "express";
import {
  createIPX,
  ipxFSStorage,
  ipxHttpStorage,
  createIPXNodeServer,
} from "ipx";

const ipx = createIPX({
  storage: ipxFSStorage({ dir: "./public" }),
  httpStorage: ipxHttpStorage({ domains: ["origin-playground.antoinebrossault.com"] })
});

const app = express().use("/", createIPXNodeServer(ipx));

listen(app);

The important parts are along those lines :

const ipx = createIPX({
  storage: ipxFSStorage({ dir: "./public" }),
  httpStorage: ipxHttpStorage({ domains: ["origin-playground.antoinebrossault.com"] })
});

With storage: ipxFSStorage({ dir: "./public" }), IPX will optimize images stored in the ./public directory of the app.

With httpStorage: ipxHttpStorage({ domains: ["origin-playground.antoinebrossault.com"] }), IPX will optimize images coming from a given domain.

How to use IPX ?

After you’re done configuring IPX, you should be ready to optimize your images. Let’s start simple with a basic resize.

Basic Resize :

Keep original format (png) and set width to 800:

/w_800/static/buffalo.png

Then we use

http://localhost:3000/w_800/https://origin-playground.antoinebrossault.com/images/sm_215118145_porsche-944-1982-side-view_4x.png

Basic Resize with a local image :

Assuming one.png is in our public folder /public/one.png

http://localhost:3000/w_500/one.png

Compression level at 80%:

http://localhost:3000/quality_80,w_500/one.png

Compression level at 10%:

http://localhost:3000/quality_10,w_500/one.png
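
IPX can also convert formats on the fly with the f_ (format) modifier; for example, to fetch the same image as WebP:

curl -o one.webp "http://localhost:3000/f_webp,w_500/one.png"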

Comprehensive guide to image optimization

If you want to deep dive into image optimization, I recommend you have a look at images.guide by Addy Osmani.

For most developers it’s easier to scale images in CSS than to create other size versions of the images. So a lot of oversized images are loaded on mobile and downscaled in CSS. Use Chrome Developer tools to spot desktop images served on mobile.

How to fix the big image issue ? Pure HTML approach :

Non optimized :

<img src='https://placeimg.com/800/400/tech' class='img-responsive img-center'>

Optimized :

In this example we provide multiple urls for the same image. The browser will pick a specific url depending on the width of the screen.

<img
  src="https://www.antoinebrossault.com/wp-content/uploads/2024/09/944_w800.png"
  sizes="(min-width: 1000px) 800px, (min-width: 640px) 600px, (min-width: 400px) 400px, 300px"
  srcset="
  https://www.antoinebrossault.com/wp-content/uploads/2024/09/944_w300.png 300w,
  https://www.antoinebrossault.com/wp-content/uploads/2024/09/944_w400.png 400w,
  https://www.antoinebrossault.com/wp-content/uploads/2024/09/944_w600.png 600w,
  https://www.antoinebrossault.com/wp-content/uploads/2024/09/944_w800.png 800w"
  alt="" class="img-responsive img-center" />

Check this demo on codepen.

An alternative way to do it is to use the picture HTML tag. The srcset attribute allows an <img> element to specify multiple image sources of different resolutions, letting the browser choose the most appropriate one based on the screen size and pixel density. The <picture> tag, on the other hand, allows for more complex image switching by combining multiple <source> elements with media queries, enabling different images to be served based on conditions like viewport width or device type.

<picture>
  <source media="(min-width: 1000px)" srcset="https://www.antoinebrossault.com/wp-content/uploads/2024/09/944_w800.png">
  <source media="(min-width: 640px)" srcset="https://www.antoinebrossault.com/wp-content/uploads/2024/09/944_w600.png">
  <source media="(min-width: 400px)" srcset="https://www.antoinebrossault.com/wp-content/uploads/2024/09/944_w400.png">
  <img src="https://www.antoinebrossault.com/wp-content/uploads/2024/09/944_w300.png" alt="" class="img-responsive img-center">
</picture>

If you want to force the browser to display a specific image based on the size, use the <picture> element; otherwise, use the srcset attribute.

Check this demo on codepen.

How to generate different image sizes?

Jimp

You can use Jimp to generate images with node.js:

Jimp is an image processing library for Node.js that allows users to manipulate images, such as resizing, cropping, and applying filters, directly in JavaScript. It supports a wide range of image formats and offers asynchronous methods for handling image processing tasks efficiently.

Here’s a basic usage of Jimp :

const Jimp = require('jimp');
const fs = require('fs');

// our images are in the ./images directory
const directory = "./images";
const imgs = fs.readdirSync(directory);

(async () => {

    for (let img of imgs) {

        // load the original image
        const newImg = await Jimp.read(`${directory}/${img}`);

        // resize to 200x100 pixels (resize mutates the instance)
        newImg.resize(200, 100);

        // write the resized copy next to the original
        await newImg.writeAsync(`${directory}/${img}.min.jpg`);
    }

})();
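
If you prefer to stay in the shell, the same batch of sizes can be generated with ImageMagick (assuming it is installed), matching the _w300/_w400/… naming used above:

for w in 300 400 600 800; do
  for img in images/*.png; do
    convert "$img" -resize "$w" "${img%.png}_w${w}.png"
  done
done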

Ipx

IPX is an image proxy library for Node.js that allows for dynamic image optimization and transformation, such as resizing, cropping, and format conversion, on the fly. It’s often used in conjunction with frameworks like Nuxt.js to deliver optimized images based on request parameters, improving performance and responsiveness for web applications.

Here’s an example with Express that gives you a service to transform images on the fly :

First you need to install the following libraries :

npm install  express ipx listhen 

Create a file called app.js

// app.js 
import { listen } from "listhen";
import express from "express";
import {
  createIPX,
  ipxFSStorage,
  ipxHttpStorage,
  createIPXNodeServer,
} from "ipx";

const ipx = createIPX({
  storage: ipxFSStorage({ dir: "./public" }),
  httpStorage: ipxHttpStorage({ domains: ["origin-playground.antoinebrossault.com"] })
});

const app = express().use("/", createIPXNodeServer(ipx));

listen(app);

Then run node app.js, and you will have your on-the-fly image optimization service ready.

Then, if you run the following HTTP call, the image hosted on origin-playground.antoinebrossault.com will be scaled down to a width of 800px. More options to discover on the project repo.

http://localhost:3001/w_800/https://origin-playground.antoinebrossault.com/images/sm_215118145_porsche-944-1982-side-view_4x.png

How to fix the big image issue? Pure CSS approach:

Another technique is to use CSS rules only (background-images & media queries).

Non-optimized:

.my-bg{
    background-image: url(http://lorempicsum.com/futurama/1200/600/3); 
    height: 600px; 
    width: 1200px; 
    max-width: 100%; 
    background-repeat: no-repeat;
    display: block; 
    margin: auto; 
} 

Optimized:

.my-bg{
    background-image: url(http://lorempicsum.com/futurama/1200/600/3); 
    height: 600px; 
    width: 1200px; 
    max-width: 100%; 
    background-repeat: no-repeat;
    display: block; 
    margin: auto; 
} 

/* We add another URL for devices under 768px */
@media only screen and (max-width: 768px){
    .my-bg{
        background-image: url(http://lorempicsum.com/futurama/768/200/3);
    } 
}

Pitfalls

NB: Some phones need bigger images due to the « device pixel ratio ». With srcset the browser will decide which version to display based on the context (e.g., retina display). If you want more control, use the <picture> element.

<img
  src="https://placeimg.com/800/400/tech"
  sizes="(min-width: 1000px) 800px, (min-width: 640px) 600px, (min-width: 400px) 400px, 300px"
  srcset="
  https://placeimg.com/300/200/tech 300w,
  https://placeimg.com/400/300/tech 400w,
  https://placeimg.com/600/400/tech 600w,
  https://placeimg.com/800/400/tech 800w"
  alt=""
  class="img-responsive img-center"
/>

For example, if we take the code above: on a 400×736 px smartphone with a DPR (device pixel ratio) of 1, the image that will be loaded is the 450×400.

DPR Example 1

On the same screen size (400×736) but with a DPR of 2, the image that will be loaded is the 1200×800.

DPR Example 2

By default, JavaScript files are blocking: they create a blank screen during their loading when they are loaded in the head of the page without any optimization.

How to check if the website contains blocking JavaScript?

Test the website on PageSpeed Insights and look for the warning “Eliminate render-blocking JavaScript in above-the-fold content”. The tool will list the blocking files, but I recommend double-checking in the source code. To do so, look for JavaScript files loaded at the top of the page that don’t contain any defer or async attributes.


How to fix?

There are a couple of ways to fix this issue. One of the best methods is to place the scripts at the bottom of the page and add a defer attribute.

Non-optimized:

<script type='text/javascript' src='./app.js?ver=1.10.1'></script>

Optimized:

<script type='text/javascript' src='./app.js?ver=1.10.1' defer></script>

You may want to use the async attribute, which does almost the same thing except that defer will preserve the execution order.

Use async if your script doesn’t depend on any other scripts (like Google Maps SDK); otherwise, use defer.

NB: If you see defer and async used together, it’s because this was a technique for browsers that did not support defer. Nowadays ALL browsers support defer.


Pitfalls

Most of the time, developers know that loading a script in the head is a bad practice, but sometimes they feel forced to do it.

Common pitfall: Inline JavaScript in the HTML

JavaScript can be executed in an external file but also inside the HTML between two script tags. If you decide to move some JavaScript files from the top to the bottom and add a defer attribute, the website can break because of unsatisfied function definitions due to inline JavaScript.

How to fix that?

There’s a way to defer inline JavaScript by using this piece of code:

window.addEventListener("DOMContentLoaded", () => {
  const scripts = document.querySelectorAll("script[type='defer']");

  scripts.forEach(script => {
    try {
      eval(script.innerHTML);
    } catch (error) {
      if (error instanceof SyntaxError) {
        console.error('[ERROR]', error);
      }
    }
  });
});

This code will defer the inline JavaScript and wait for all the scripts to be loaded and executed before executing inline scripts.

Example

<h1>Hello</h1> 

<script defer src='./jQuery.js'></script>

<script>
   $(document).ready(() => { $('h1').append(' world !'); });
</script>

This example will generate an error because we call the $ function before it’s defined due to the defer attribute (the $ function is defined in jQuery.js).

Example

<h1>Hello</h1>

<script type="defer">
  // jQuery code transformed using vanilla JS and ES7 features
  document.addEventListener('DOMContentLoaded', () => {
    document.querySelector('h1').insertAdjacentHTML('beforeend', ' world !');
  });
</script>

<script defer src='./jQuery.js'></script>

<script>
// Code snippet rewritten in modern ES7+ syntax
window.addEventListener('DOMContentLoaded', () => {
  const deferredScripts = document.querySelectorAll("script[type='defer']");

  deferredScripts.forEach(script => {
    try {
      eval(script.innerHTML);
    } catch (error) {
      if (error instanceof SyntaxError) {
        console.error('[ERROR]', error);
      }
    }
  });
});
</script>

This example will work because we wait for the $ function to be defined before executing the code.

  1. We add the attribute type="defer" to the inline JavaScript script.
  2. We add the code snippet mentioned above.

When all the scripts are loaded and executed, the snippet evaluates our custom inline scripts as if they were standard scripts.

See the demo on CodePen


Common pitfall: One of the blocking scripts is an A/B testing script

If one of the blocking scripts is an A/B testing script (ABtasty / Optimizely / Kameloon / Maxymiser, etc.), it’s normal for this script to block the page; otherwise, it will create a flickering effect.

How to fix that?

  1. Don’t load the script on mobile if no mobile tests are running.
  2. Load the script only on pages where tests are running (a small test on the backend will work).

TTFB measures the duration from the user or client making an HTTP request to the first byte of the page being received by the client’s browser.

In this step, the server can perform different tasks, like requesting data from a database, calling a web-service, or calculating results…

How to check the Time To First Byte?

  1. Go to https://WebPageTest.org

    TTFB Example 1

    In this example, we can see that the TTFB is 4335 ms, which indicates a TTFB issue. Pay attention to the color of the bar: if the bar is mostly sky blue, it’s a TTFB issue, but if the bar is mainly dark blue, it’s a content download issue.

    TTFB Example 2

    In this example, the content download took 4188 ms. The usual suspects for this issue are a bloated HTML response and/or a Gzip issue.

Quickly check the Time To First Byte with CURL

If you’re in a rush or want to quickly check the Time To First Byte, you can use this command:

curl -s -L -o /dev/null -w "
    HTTP code : %{http_code} \n 
    Number of Redirects : %{num_redirects} \n
    Last url : %{url_effective}  \n
    Look up time : %{time_namelookup} \n
    Connect: %{time_connect}  \n 
    TTFB: %{time_starttransfer}  \n 
    Total time: %{time_total} \n \n" https://google.fr

NB: Run this command more than once to avoid uncached side effects.
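
To smooth out network noise, you can run it a few times in a row and compare; a sketch that prints just the TTFB:

for i in 1 2 3; do
  curl -s -o /dev/null -w "TTFB: %{time_starttransfer}s\n" https://google.fr
done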


How to fix a TTFB issue?

  1. Check for Uncached Database Requests: Identify if the website makes uncached database requests or calls to a web service on the server-side. Uncached requests can lead to performance bottlenecks, especially under high load.

  2. Implement Object Caching with Redis or Memcached: Use Redis or Memcached to cache frequently accessed data and objects (e.g., user sessions, frequently accessed queries). This reduces the need for repetitive database queries and improves response times.

  3. Database Query Caching: Enable query caching if your database supports it (e.g., MySQL’s query cache). Cached queries can significantly speed up repeated requests by storing the results of frequently executed queries.

  4. Use a Reverse Proxy Cache: Deploy reverse proxy caching solutions like Varnish or Nginx. These tools cache the responses from your backend and can serve them quickly without hitting the application server for every request.

  5. Optimize Database Indexing: Ensure that your database tables are properly indexed. Indexing speeds up query execution and retrieval times, reducing the need for extensive data scanning.

  6. Database Connection Pooling: Implement database connection pooling to reuse existing database connections instead of creating new ones for each request. This reduces connection overhead and improves performance.

  7. Optimize Backend Code and Queries: Review and optimize backend code and SQL queries for efficiency. Look for slow queries and refactor them to reduce execution time and resource usage.

  8. Use Asynchronous Processing: If your site makes a synchronous call to a web-service, try to cache the web-service response. If it’s not possible to cache the web-service response, try calling the web-service from the front end by making an asynchronous AJAX request. In other words, load the website’s layout first and load the web-service data afterward (e.g., demo on Amazon.co.uk | demo on Github.com).

Minio Server (Port 9000)

The service running on port 9000 is the primary Minio server. This is the main entry point for interacting with the Minio object storage system: it serves the S3-compatible API that clients and SDKs talk to.

To secure it with Apache2 and Let’s Encrypt, here are the container’s exposed ports (from docker ps):

bitnami/minio:latest 0.0.0.0:32771->9000/tcp, :::32771->9000/tcp, 0.0.0.0:32768->9001/tcp, :::32768->9001/tcp

And here is the Apache2 virtual host that proxies to the mapped port 32771:
<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerAdmin webmaster@myS3.faast.life
    ServerName myS3.faast.life

    # ProxyPass for Node.js application
    ProxyPass / http://localhost:32771/
    ProxyPassReverse / http://localhost:32771/

    DocumentRoot /home/antoine/automation
    ErrorLog /var/log/apache2/.log
    CustomLog /var/log/apache2/.log combined

    <Directory /home/antoine/automation>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

ServerAlias mys3.faast.life
SSLCertificateFile /etc/letsencrypt/live/mys3.faast.life/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/mys3.faast.life/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>

⚠️ heads-up : Accessing `https://mys3.faast.life/` will redirect you to localhost, but if you use a valid path, you will hit the requested resource.

Then I can access a public bucket with the following url :

<scheme> <host> <path>

<https://>  <mys3.faast.life> </public-site/index.html>

That I can access :

https://mys3.faast.life/public-site/index.html
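
If you prefer the command line over URLs, the Minio client (mc) can talk to the same endpoint; a sketch where the alias name is arbitrary and the credentials are placeholders:

# point the Minio client at the secured endpoint
mc alias set mys3 https://mys3.faast.life <ACCESS_KEY> <SECRET_KEY>

# list the bucket we exposed above
mc ls mys3/public-site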

Minio Console (Port 9001)

The service on port 9001 is the Minio Console, a separate component introduced in newer versions of Minio for enhanced administration and monitoring: it hosts the web UI (file browser, users, policies) that you log into.

Here’s the Apache2 configuration for the control plane. In another article on this website, I covered how I managed the web socket redirect to make the Minio file browser work with Apache2.

Below is the Apache2 configuration I used to secure the control plane/console. To obtain the certificate, I use an automation script I created earlier, which I discussed in this article.

With this configuration, your Minio container is secured and properly integrated with Apache2.

bitnami/minio:latest 0.0.0.0:32771->9000/tcp, :::32771->9000/tcp, 0.0.0.0:32768->9001/tcp, :::32768->9001/tcp

To secure it with Apache2 and Let’s Encrypt:

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerAdmin webmaster@s3.faast.life
    ServerName s3.faast.life

    ProxyPreserveHost On

    # ProxyPass for Node.js application
    ProxyPass / http://127.0.0.1:32768/
    ProxyPassReverse / http://127.0.0.1:32768/

    RewriteEngine on
    RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
    RewriteCond %{HTTP:CONNECTION} ^Upgrade$ [NC]
    RewriteRule .* ws://127.0.0.1:32768%{REQUEST_URI} [P]

    DocumentRoot /home/antoine/apps/s3.faast.life
    ErrorLog /var/log/apache2/.log
    CustomLog /var/log/apache2/.log combined

    <Directory /home/antoine/apps/s3.faast.life>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>



SSLCertificateFile /etc/letsencrypt/live/s3.faast.life/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/s3.faast.life/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>

I have an ML-2165w printer my dad gave me. It’s a great little laser printer—compact and cheap to run. But I found you can’t install the driver on the newest Mac anymore, even though the hardware works just fine.

I didn’t want to throw this printer away, as it’s still working! I found Michele Nasti’s blog mentioning a way to make it work with the newest Mac. I’m writing an article to keep a copy of the technique in case that blog goes down. I’m also providing the drivers I’ve downloaded from the official HP site.

Here’s the solution:

  1. Download the Mac v11 driver from this link. In the dropdown box, select macOS 11, and you’ll get the driver in the correct version, V.3.93.01. Be sure to avoid the Mac 10.15 driver listed on the same page.

    I’m going to keep a copy of this file on my site just in case

  2. Open the .dmg file.

  3. Click on MAC_Printer, then Printer Driver.pkg, and follow the installation steps.

  4. When prompted to connect the printer, you need to perform an additional step:


– From the list, choose the driver for the Samsung M2060 series.

And there you have it! Your old Samsung ML-2165w printer should now work perfectly with your new Mac.

Again thanks to Michele Nasti for the tip 👍