TrueNAS Scale Migration, or: Can't Escape K8s
What and Why
I run a small NAS for myself and my family. I hesitate to call it a "homelab" because it's really just the one box.
Until recently this was running TrueNAS Core, which is based on FreeBSD. This was good for cred, but difficult for me to use in practice. I've spent two decades learning how to use Linux, and very little of that transferred. Maybe if I knew how to use a Mac it'd be easier.
TrueNAS has announced the apparent deprecation of the FreeBSD offering. That gave me the excuse I needed to finally migrate to TrueNAS Scale, the "new" Debian-based version.
This past weekend I finally got around to doing that migration!
Reinstalling
My NAS normally runs headless, and my consumer motherboard definitely doesn't have IPMI.
So after making sure it wasn't in use I unplugged it and dragged it over to my workbench.
I plugged in a keyboard and monitor, inserted a flash drive I'd dd'd an image onto1, and walked through a very minimal installer.
When I'm setting up Linux I have all sorts of Opinions on dm-crypt and LVM. I had some grand designs on splitting the NAS's boot SSD in half and installing the two OSes side by side, but I abandoned that. I just hit next-next-next-done, chose the "upgrade" option, and went to get a snack. When I came back, the system had rebooted… back into TrueNAS Core.
It turns out I can't actually remember which is which2, and I'd written the BSD one onto the flash drive. Sigh. At least now I knew the key that dumped the motherboard into boot media selection.
A Clean Break
In theory there's supposed to be a way to migrate configuration from Core to Scale, but I didn't bother. I'd bodged together a lot of stuff outside the guard rails, and while jails look a lot like Linux containers, BSD is not Linux.
So after rebooting, importing my ZFS pool, and setting the hostname, I clicked the "App" tab and installed the most important application, Plex. Then I saw something in the corner about installing a "chart"…
oh no.
Oh Yes
Yeah, it's k8s. And helm. It's using k3s, so it's not completely ridiculous, but still. What. Apparently I'm not the only person who was exasperated, since the next version will switch to docker, but that doesn't help me now.
I almost avoided writing anything, too. My first stop was setting up my household dashboard3, and that was just one container. After some editing to move the secrets out of the executable and into environment variables, I just needed to fill out a few fields in the GUI.
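The secret-ectomy amounted to reading configuration from the environment instead of baking it into the code. A minimal sketch of the pattern (the variable name is made up, not what my dashboard actually uses):

```shell
# Hypothetical example: the token used to live in the script itself;
# now it's injected at deploy time, and the app refuses to start without it.
export DASHBOARD_API_TOKEN="swap-me-at-deploy-time"
# ${VAR:?message} aborts with an error if the variable is unset or empty,
# so a missing secret fails loudly instead of half-working.
: "${DASHBOARD_API_TOKEN:?DASHBOARD_API_TOKEN must be set}"
echo "config ok"
```

The nice side effect is that the same image works everywhere; only the environment differs per deployment, which is exactly what the TrueNAS GUI's environment-variable fields are for.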
But of course I inevitably ran up against a limitation. Despite being k8s under the hood, which loves nothing more than a sidecar4, there was no way to add a pod with multiple containers. So of course I had to dive off the deep end myself.
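For the uninitiated: a "sidecar" is just a second container in the same pod, sharing its network namespace and lifecycle. A minimal sketch of the shape I wanted (all names here are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar    # hypothetical
spec:
  containers:
    - name: app             # the main workload
      image: example/app
    - name: sidecar         # helper sharing the pod's network namespace
      image: example/proxy
```

Two entries under `containers:` is all it takes in raw Kubernetes, and that's precisely the field the TrueNAS form doesn't expose.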
First off: is there some official way to modify the YAML? No, of course not.
Is there a way to add my own helm chart? Not easily.
Can I just get access to kubectl? Unfortunately, yes.
The Paradox of Expertise
When venturing outside my normal tech bubbles, I often run into situations where I know both too much and too little. When deploying a "Custom App" on TrueNAS there's a port-forwarding setting, which I obviously don't have access to for my own hand-rolled Deployment.
Now how does that work? All the forum posts are unhelpful or condescending (the TrueNAS forums are MEAN!). So it's up to me and my Linux Skills.
I've got a working app running on port 9045, so let's figure this out.
There's nothing in netstat:

root@montero[~]# netstat -lpnt | grep 9045
root@montero[~]#
Nor iptables:

root@montero[~]# iptables -L | grep 9045
root@montero[~]# iptables -L -t nat | grep 9045
root@montero[~]#
On the k8s side, there's no ingress controller:
root@montero[~]# k3s kubectl get ingress --all-namespaces
No resources found
And no stand-out annotations in the service:
root@montero[~]# k3s kubectl get svc -n ix-den-tv den-tv-ix-chart -o jsonpath={.metadata.annotations}
{"meta.helm.sh/release-name":"den-tv","meta.helm.sh/release-namespace":"ix-den-tv"}
root@montero[~]# k3s kubectl get svc -n ix-den-tv den-tv-ix-chart -o jsonpath={.metadata.labels}
{"app.kubernetes.io/instance":"den-tv","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ix-chart","app.kubernetes.io/version":"v1","helm.sh/chart":"ix-chart-2403.0.0"}
Eventually I figured out that I just needed to set a NodePort on the service and it would work. How? No idea! Some magic with kube-router5, probably. Either way, I eventually got my Deployment going, copied here for posterity:
---
apiVersion: v1
kind: Namespace
metadata:
  name: transmission
---
apiVersion: v1
kind: Secret
metadata:
  namespace: transmission
  name: wireguard-private
stringData:
  WIREGUARD_PRIVATE_KEY: <nice try bucko>
---
apiVersion: v1
kind: Service
metadata:
  name: transmission
  namespace: transmission
spec:
  selector:
    app.kubernetes.io/name: ellie-transmission
  type: NodePort
  ports:
    - protocol: TCP
      port: 9091
      nodePort: 9091
      targetPort: web
      name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: transmission
  name: transmission
  labels:
    app.kubernetes.io/name: ellie-transmission
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ellie-transmission
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ellie-transmission
    spec:
      containers:
        - name: transmission
          image: linuxserver/transmission:4.0.5
          ports:
            - containerPort: 9091
              name: web
            - containerPort: 51413
              name: torrent-tcp
            - containerPort: 51413
              name: torrent-udp
              protocol: UDP
          volumeMounts:
            - mountPath: /config
              name: config
            - mountPath: /downloads
              name: download-complete
        - name: vpn
          image: qmcgaw/gluetun
          env:
            - name: VPN_SERVICE_PROVIDER
              value: nordvpn
            - name: SERVER_COUNTRIES
              value: Canada
            - name: SERVER_CITIES
              value: Vancouver
            - name: VPN_TYPE
              value: wireguard
            - name: DNS_KEEP_NAMESERVER
              value: "on"
          envFrom:
            - secretRef:
                name: wireguard-private
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
      volumes:
        - name: config
          hostPath:
            path: /mnt/panini/ix-applications/releases/transmission/volumes/ix_volumes/config
            type: ""
          # emptyDir:
          #   sizeLimit: 500Mi
        - name: download-complete
          # emptyDir:
          #   sizeLimit: 1Gi
          hostPath:
            path: /mnt/panini/media/media
            type: ""
And after spending most of a day chasing down a typo in a port, I had my workload running smoothly. Of course, it still doesn't show up in TrueNAS. No idea how that would work (maybe the websocket?), and I don't even know if it'll survive a version bump! But it's there for now.
Conclusion
Now, Kubernetes isn't a horrible choice for this kind of work. Helm is a good templating system, even if I have Tiller flashbacks. Using k3s makes… not no sense.
The thing is… I go out of my way to make sure my hobbies are as far from my work as possible. I write in Rust or Haskell, I use Nix, I do web "design." Weekends are not for work! Weekends are for silly projects and reading yuri. Instead I learned things I can use at my job. And for that, TrueNAS will never be forgiven.
At least now I don't need to remember the arguments to BSD sed.
Footnotes:
1. The way that works is fascinating, btw
2. I repeatedly screwed it up while writing this post
3. Under FreeBSD this was running on a Linux VM. I haven't figured out how to set up VMs on Scale yet; the bridging configuration is very "BYO."
4. WHY YOU WANT SIDECAR FOR KUBERNETES? IS NOT GOOD ENOUGH AS PROCURED FROM CORE INFRASTRUCTURE TEAM? YOU THINK NEEDS IMPROVEMENT?