When I first deployed this blog on Kubernetes, the setup was simple: a single nginx pod serving static files from a ReadWriteOnce PersistentVolumeClaim, populated by copying Hugo’s build output into the pod with kubectl cp. It worked. Then I added two new nodes to the cluster and realized I couldn’t actually use them – at least not for the blog.
## The Problem with ReadWriteOnce
The blog deployment looked like this:
```yaml
volumes:
- name: site
  persistentVolumeClaim:
    claimName: blog-site-pvc  # ReadWriteOnce
```
ReadWriteOnce means the volume can only be mounted by pods on a single node. With one replica, that’s fine. But the moment you try to scale to two replicas, the second pod might land on a different node – and it won’t be able to mount the volume. Kubernetes will leave it stuck in ContainerCreating forever.
This also killed rolling updates. The default RollingUpdate strategy tries to spin up a new pod before killing the old one. If the new pod lands on a different node, it can’t mount the PVC, and you’re stuck with the old pod running and the new one hanging. The only option was Recreate, which tears down the old pod first – meaning a few seconds of downtime on every deploy.
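The workaround looked roughly like this in the Deployment spec (the deployment name and the trimmed-down fields are my own sketch, not the full manifest):

```yaml
# Recreate tears the old pod down first, freeing the ReadWriteOnce
# PVC so the replacement can mount it -- at the cost of a gap where
# no pod is serving.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog            # assumed name
spec:
  replicas: 1           # effectively capped at one node by the RWO volume
  strategy:
    type: Recreate      # old pod killed before the new one starts
```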
For a personal blog, a few seconds of downtime is fine. But I’d just added two HP servers to the cluster, bringing the total to five nodes. Having pods pinned to a single node defeated the point of expanding the cluster.
## Options I Considered

### Bake Content into the Docker Image
The Dockerfile already supported this – a multi-stage build that runs hugo --minify and copies the output into an nginx:alpine image. The problem is everything around it: you need a container registry (GHCR, Docker Hub, or a self-hosted one), a CI pipeline to build and push images on every change, and imagePullSecrets if the registry is private.
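For reference, the multi-stage build it describes might look roughly like this (the builder image tag and paths are illustrative, not the exact Dockerfile):

```dockerfile
# Stage 1: build the static site (builder image is illustrative)
FROM hugomods/hugo:latest AS build
WORKDIR /src
COPY . .
RUN hugo --minify

# Stage 2: serve the build output with nginx
FROM nginx:alpine
COPY --from=build /src/public /usr/share/nginx/html
```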
For a static blog that changes once a week, that’s a lot of infrastructure to maintain for a few HTML files.
### hostPath Volume
Mount the Hugo build output directly from the control plane node’s filesystem. Simple, no registry needed, no NFS. But hostPath only works for pods scheduled on that specific node. You’d need nodeAffinity to pin the blog pods there – which means you’re right back to single-node deployments, just with extra steps.
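A sketch of what that pinning would look like, assuming the build output lives at a hypothetical path on mini-pc-3:

```yaml
# hostPath only resolves on the node that actually has the files,
# so the pod must be pinned there with nodeAffinity.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - mini-pc-3
  volumes:
  - name: site
    hostPath:
      path: /home/user/blog/public   # assumed path on that node
      type: Directory
```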
### NFS from the NAS
The cluster already had a UGreen NAS at 10.0.0.200 serving photos to Immich via NFS. Adding another export for the blog was trivial. NFS volumes in Kubernetes are inherently ReadWriteMany – any pod on any node can mount them. No registry, no CI pipeline, no node pinning.
## What I Did

### NAS Setup
Created a new shared folder on the NAS and enabled NFS access for the cluster subnet. The NAS now exports two paths:
| Export | Purpose |
|---|---|
| `/volume1/Photos` | Immich photo library (read-only by Immich) |
| `/volume1/code/blog` | Hugo build output for the blog |
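On a stock Linux NFS server the equivalent export would look like the line below; the UGreen NAS exposes the same options through its web UI, and the subnet and flags here are assumptions:

```
# /etc/exports (illustrative -- the NAS configures this via its UI)
/volume1/code  10.0.0.0/24(rw,sync,no_subtree_check)
```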
### Mount on the Build Node
The control plane node (mini-pc-3) is where I write and build the blog. I mounted the NAS share permanently:
```
# /etc/fstab
10.0.0.200:/volume1/code  /mnt/nas-code  nfs  defaults,_netdev  0 0
```
Then pointed Hugo’s output directly at it:
```toml
# hugo.toml
publishDir = "/mnt/nas-code/blog"
```
Now hugo --minify writes the built site straight to the NAS. No intermediate copy, no kubectl cp.
### Updated Kubernetes Deployment
The deployment changed in three ways:
1. NFS volume instead of PVC
```yaml
volumes:
- name: site
  nfs:
    server: 10.0.0.200
    path: /volume1/code/blog
    readOnly: true
```
Pods mount the NFS share read-only. Hugo writes to it from the build node; nginx only needs to serve files.
2. Two replicas with rolling updates
```yaml
replicas: 2
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1
```
`maxUnavailable: 0` is the key setting. It tells Kubernetes: don’t kill the old pod until the new one is ready. Combined with a readiness probe, this gives zero-downtime rollouts.
3. Readiness probe
```yaml
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 2
  periodSeconds: 5
```
The rolling update strategy only works if Kubernetes can tell when the new pod is actually ready to serve traffic. Without a readiness probe, Kubernetes assumes the pod is ready the moment the container starts – which might be before nginx has loaded its config.
## The Deploy Workflow Now
Before:
```shell
hugo --minify
kubectl cp public/. blog-pod:/usr/share/nginx/html/ -n photo-gallery
```
After:
```shell
hugo --minify
```
That’s it. Hugo builds directly to the NAS, and every blog pod is already serving from that NFS mount. No copying, no pod restarts, no downtime.
## Trade-Offs
This approach isn’t free of downsides:
NAS is a dependency. If the NAS goes offline, the blog goes down. But the NAS is a dedicated appliance that’s always on – it’s more reliable than any individual node in the cluster. And since the control plane (mini-pc-3) is also a single point of failure for the entire cluster, the NAS doesn’t add meaningful risk.
NFS latency. Serving static files over NFS adds a network hop compared to local storage. For a blog with sub-megabyte HTML pages, this is imperceptible. If you were serving large media files, you’d want to benchmark this.
No versioned deployments. With baked images, you can roll back to a previous version by changing the image tag. With NFS, the “current version” is whatever’s on the NAS. If a bad build goes out, you’d need to rebuild and re-publish. For a static blog, hugo --minify takes under a second, so this isn’t a real concern.
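If rollbacks ever did matter, one cheap mitigation (my own sketch, not part of the setup above) is to publish each build into a timestamped directory and flip a `current` symlink that nginx serves from, so rolling back is just re-pointing the link. Paths here are stand-ins:

```shell
# Hypothetical versioned-publish sketch: each build lands in its own
# release directory, and "current" atomically points at the live one.
SITE_ROOT=/tmp/nas-blog-demo          # stand-in for /mnt/nas-code/blog
STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$SITE_ROOT/releases/$STAMP"

# The real workflow would be: hugo --minify -d "$SITE_ROOT/releases/$STAMP"
echo "<h1>hello</h1>" > "$SITE_ROOT/releases/$STAMP/index.html"

# -sfn replaces the symlink in a single step; nginx's root would point
# at $SITE_ROOT/current, and rollback is ln -sfn to an older release.
ln -sfn "releases/$STAMP" "$SITE_ROOT/current"
```

The symlink flip is atomic on the filesystem, so readers never see a half-published site.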
## The Cluster Today
The cluster grew from three mini-PCs to five nodes with the addition of two HP servers:
| Node | Role | Hardware |
|---|---|---|
| mini-pc-3 | Control plane | Mini PC |
| mini-pc-1 | Worker | Mini PC |
| mini-pc-2 | Worker | Mini PC |
| hp-server-1 | Worker | HP Server |
| hp-server-2 | Worker | HP Server |
Blog pods can now land on any worker node. During a rolling update, the new pod spins up on whichever node has capacity, passes its readiness check, and starts receiving traffic – all before the old pod terminates. The blog stays up the entire time.
## Key Takeaways

- `ReadWriteOnce` PVCs are a hidden scaling wall – they work fine for single replicas but silently block horizontal scaling and zero-downtime updates
- NFS is underrated for static content – if you already have a NAS, it’s the simplest path to `ReadWriteMany` without provisioning a distributed storage system
- `maxUnavailable: 0` is what makes rolling updates zero-downtime – without it, Kubernetes may kill the old pod before the new one is ready
- The simplest deploy pipeline is no pipeline – building directly to a shared filesystem eliminates CI/CD, registries, and image management for workloads that don’t need them