I run a small NAS for myself and my family.
I hesitate to call it a "homelab" because it's really just the one box.
Until recently this was running TrueNAS Core, which is based on FreeBSD.
This was good for cred, but difficult for me to use in practice.
I've spent two decades learning how to use Linux, and very little of that transferred.
Maybe if I knew how to use a Mac it'd be easier.
TrueNAS has announced the apparent deprecation of the FreeBSD offering.
That gave me the excuse I needed to finally migrate to TrueNAS Scale, the "new" Debian-based version.
This past weekend I finally got around to doing that migration!
Reinstalling
My NAS normally runs headless, and my consumer motherboard definitely doesn't have IPMI.
So after making sure it wasn't in use I unplugged it and dragged it over to my workbench.
I plugged in a keyboard and monitor, inserted a flash drive I'd dd'd an image onto1,
and walked through a very minimal installer.
When I'm setting up Linux I have all sorts of Opinions on dmcrypt and lvm.
I had some grand designs on splitting the NAS's boot SSD into halves and installing the two OSes side by side,
but I abandoned that.
I just hit next-next-next-done, hit the "upgrade" option, and went to get a snack.
When I came back the system had rebooted… back into TrueNAS Core.
It turns out I can't actually remember which is which2, and I'd written the BSD one onto the flash drive.
Sigh.
At least now I knew the key that dumped the motherboard into boot media selection.
A Clean Break
In theory there's supposed to be a way to migrate configuration from Core to Scale,
but I didn't bother.
I'd bodged together a lot of stuff outside the guard rails.
Jail shells are similar to Linux containers, but BSD and Linux themselves are not, so little of it would have carried over cleanly.
So after rebooting, importing my ZFS pool, and setting the hostname,
I clicked the "App" tab and installed the most important application, Plex
I saw something in the corner about installing a "chart…"
oh no.
Oh Yes
Yeah, it's k8s. And helm.
It's using k3s, so it's not completely ridiculous, but still. What.
Apparently I'm not the only person who was exasperated,
since the next version will switch to docker, but that doesn't help me now.
I almost avoided writing anything, too.
My first stop was setting up my household dashboard3, and that was just one container.
After some editing to move the secrets out of the executable and into environment variables,
I just needed to fill out a few fields in the GUI.
But of course I inevitably ran up against a limitation.
Despite being k8s under the hood, which loves nothing more than a sidecar4,
there was no way to add a pod with multiple containers.
So of course I had to dive off the deep end myself.
First off: is there some official way to modify the YAML? No, of course not.
Is there a way to add my own helm chart? Not easily.
Can I just get access to kubectl? Unfortunately, yes.
The Paradox of Expertise
When venturing outside my normal tech bubbles, I often run into situations where I know both too much and too little.
When deploying a "Custom App" on TrueNAS, there's a port forwarding setting,
which obviously I don't have access to for my own Deployment.
Now how does that work? All the forum posts are unhelpful or condescending (the TrueNAS forums are MEAN!).
So it's up to me and my Linux Skills.
I've got a working app running on port 9045, so let's figure this out.
There's nothing in netstat:
Eventually I figured out that I just needed to set a NodePort on the service and it would work.
How? No idea! Some magic with kube-router5, probably. Either way, I eventually got my Deployment going, copied here for posterity:
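(A rough sketch of the shape, not the real manifest: placeholder names, a two-container pod, and a NodePort Service in front of port 9045.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard
  template:
    metadata:
      labels:
        app: dashboard
    spec:
      containers:
        - name: app
          image: example/dashboard:latest
          ports:
            - containerPort: 9045
        - name: sidecar
          image: example/helper:latest
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard
spec:
  type: NodePort
  selector:
    app: dashboard
  ports:
    - port: 9045
      targetPort: 9045
      # NodePorts normally have to land in the 30000-32767 range.
      nodePort: 30945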
And after spending most of a day chasing down a typo in a port,
I had my workload running smoothly.
Of course, it still doesn't show up in TrueNAS.
No idea how that will work (maybe the websocket?)
I don't even know if it'll survive a version bump! But it's there for now.
Conclusion
Now, Kubernetes isn't a horrible choice for this kind of work.
Helm is a good templating system, even if I have tiller flashbacks.
Using k3s makes… not no sense.
The thing is… I go out of my way to make sure my hobbies are as far from my work as possible.
I write in Rust or Haskell, I use Nix, I do web "design."
Weekends are not for work!
Weekends are for silly projects and reading yuri.
Instead I learned things I can use at my job.
And for that, TrueNAS will never be forgiven.
At least now I don't need to remember the arguments to BSD sed.
Several of our gauges involve making real requests to APIs.
Ordinarily that's not a problem,
but when debugging or iterating you can run up your usage very quickly.
The solution? Fake data!
We could just use static data,
but what's the fun in that?
This way we can tell when the backend gets updated.
Traits in Rust often behave a lot like objects,
but here's one way they're very different:
if my implementation defines its own get_several,
there's no way for the Mock implementation to reuse that logic, the way a subclass could in an OO language.
By breaking the shared logic out into its own function, we can call this default and then add our additional logic.
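Here's a minimal sketch of the pattern; the trait and the bodies are stand-ins, not the project's real code:

trait Gauge {
    fn get(&self, id: u32) -> String;

    // The provided method just delegates to the shared helper below.
    fn get_several(&self, ids: &[u32]) -> Vec<String> {
        get_several_default(self, ids)
    }
}

// The "default" logic, broken out so any implementation can reuse it.
fn get_several_default<G: Gauge + ?Sized>(gauge: &G, ids: &[u32]) -> Vec<String> {
    ids.iter().map(|id| gauge.get(*id)).collect()
}

struct Mock;

impl Gauge for Mock {
    fn get(&self, id: u32) -> String {
        format!("fake data for {id}")
    }

    fn get_several(&self, ids: &[u32]) -> Vec<String> {
        // Reuse the shared default, then add our additional logic on top.
        let mut results = get_several_default(self, ids);
        results.push("one extra fake entry".to_string());
        results
    }
}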
Yew recommends trunk as a tool for development.
It handles compiling, bundling assets, and serving WebAssembly applications.
It even does automatic reloads when code changes.
Configurations, interestingly, come in the form of html files.
Here's mine:
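(Roughly; the asset paths are stand-ins.)

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>den</title>
    <!-- compile the Rust in this crate to wasm and wire it up -->
    <link data-trunk rel="rust" />
    <!-- bundle the stylesheet -->
    <link data-trunk rel="css" href="styles.css" />
    <!-- copy static assets (like the wifi QR code) into dist/ -->
    <link data-trunk rel="copy-dir" href="static" />
  </head>
  <body></body>
</html>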
When I first built this application, I deployed in a Modern Way.
I had a Dockerfile, I pushed it to my Dockerhub account,
and pulled it down to run it.
But this had some annoying properties.
For one, because it was public, I couldn't add any secrets to the file.
And since the QR code for the wifi is secret,
I would've needed to generate the image at runtime instead of compile time.
Plus, it was just overkill. Instead, now I just build a tarball.
This uses cargo-make's rust-script runner to create an archive, grab the files, and write them all out.
[tasks.tar]
dependencies = ["build-all"]
workspace = false
script_runner = "@rust"
script = '''
//! ```cargo
//! [dependencies]
//! tar = "*"
//! flate2 = "1.0"
//! ```
fn main() -> std::io::Result<()> {
    use std::fs::File;
    use tar::Builder;
    use flate2::Compression;
    use flate2::write::GzEncoder;

    let file = File::create("den-tv.tar.gz")?;
    let gz = GzEncoder::new(file, Compression::best());
    let mut ar = Builder::new(gz);
    // Use the directory at one location, but insert it into the archive
    // with a different name.
    ar.append_dir_all("static", "den-head/dist")?;
    ar.append_path_with_name("target/release/den-tail", "den-tail")?;
    ar.into_inner()?.finish()?.sync_all()?;
    Ok(())
}
'''
Then to deploy it, I just use ansible.
I copy the file over and extract it with unarchive:
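(A sketch; the destination path is a guess.)

# unarchive copies the tarball from the control machine and unpacks it in one step.
- name: Deploy den-tv
  ansible.builtin.unarchive:
    src: den-tv.tar.gz
    dest: /opt/den-tv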
It runs on a virtual machine on my NAS, so it's easily accessible over the network.
The Frontend Deploy
The frontend is served by, what else, a Raspberry Pi.
I planned to use cage to automatically start a full-screened browser.
But for whatever reason, on the version of Raspbian I'm running cage hard-crashes after a minute or two.
Instead, I'm just using the default window manager and a full-screened Chrome2.
I've got a wireless keyboard I can grab to make changes if need be,
but it's been rock solid.
Automatic Reloads
There's one last trick: We know how to restart the backend when the code changes, but what about the frontend?
Because den-message is shared between the backend and frontend,
it's also available on the websocket side.
When we receive the Hello message, we check to see if it matches the version the webassembly was compiled with:
fn update(&mut self, ctx: &yew::Context<Self>, msg: Self::Message) -> bool {
    match msg {
        // snip
        Ok(DenMessage::Hello { version }) => {
            if version != den_message::VERSION {
                gloo_console::log!("reloading for version mismatch");
                let _ = window().location().reload();
            }
        }
    }
    true
}
If the backend sends a different version, the page knows to reload.
Since the backend serves the frontend, the next reload will always have the newest version.
The coolest effect of this is that I can sit at the kitchen table and run ansible-playbook to ship a new version.
Then a few seconds later, the screen on the other side of the table automagically refreshes and shows me my changes.
Pretty snazzy!
Conclusion
I hope you've enjoyed this series, or at least found it informative.
This project is absolutely over-engineered, and over-complicated.
It took me multiple weeks to build, but I learned a ton.
Along the way my searches led me to a lot of random folks' blog posts about things they've done.
I hope if nothing else, these posts show up in someone's search results and help them solve a problem.
So we have our data.
We need some way to display it in a human-friendly format.
Obviously I don't have anything against pure json,
but it does not make for good information density.
If we're building a frontend application,
the most obvious answer is Javascript.
But I'm not going to be writing Javascript in my free time.
That'd be like writing a Go backend: completely unbecoming.
What do we use instead?
That was rhetorical; we're obviously using Rust.
There's a number of front-end Rust libraries,
but the three I considered were dioxus, percy, and yew.
I'd previously used Yew for ezeerust,
a web frontend for a z80 emulator I wrote.
The others I just got from various blog posts other people have written.
Since this is a purely personal project I engaged in some vibes-based-engineering.
And by that I mean I started writing this thing in September and have no idea why I picked what I did.
Yew it is!
Connect Four
Our data is waiting for us on the other end of the websocket,
so the first thing to do is connect.
Since this is a wasm app intended to run in a browser,
we're using gloo_net for websockets.
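Roughly, with a placeholder URL, we open the socket and split it into halves:

use futures::{Stream, StreamExt};
use gloo_net::websocket::{futures::WebSocket, Message, WebSocketError};

fn connect() -> impl Stream<Item = Result<Message, WebSocketError>> {
    // Open the socket, then split off the read half: a stream of incoming
    // messages that we'll hand to the component.
    let ws = WebSocket::open("ws://den.example/ws").expect("failed to open websocket");
    let (_write, read) = ws.split();
    read
}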
But already this looks pretty familiar!
Yew and Me
Yew is similar to React.js, which means it's declarative.
Where a vanilla Javascript app might say "change #busupdate .route26 to this text,"
you instead say "The bus route should look like this" and the system figures out how to efficiently make changes.
This looks remarkably similar to actix!
We've got a Context and we're going to send a stream somewhere.
Here's the signature for send_stream we're calling here:
But while an Actix application features a collection of quasi-autonomous Actors sending each other async messages,
Yew applications are built out of a tree of Component objects.
We'll have one Application component that creates lots of little Gauge components,
and they'll all create smaller components still.
Unlike an Actor, a Component only handles one kind of message.
Let's look at the trait:
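(Abridged; the full trait has a few more methods.)

pub trait Component: Sized + 'static {
    type Message: 'static;
    type Properties: Properties;

    // Called once, when the component is first mounted.
    fn create(ctx: &Context<Self>) -> Self;
    // Called for every Message; returning true triggers a re-render.
    fn update(&mut self, ctx: &Context<Self>, msg: Self::Message) -> bool;
    // Produces the Html for this component.
    fn view(&self, ctx: &Context<Self>) -> Html;
    // ...plus changed, rendered, and destroy hooks.
}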
Usually Component::Message would be an Enum type,
but in our case we only care about the results of parsing the websocket messages
from the stream we registered with add_stream:
type Message = Result<DenMessage, DecodeError>;
Then we just need an update handler:
fn update(&mut self, ctx: &yew::Context<Self>, msg: Self::Message) -> bool {
    match msg {
        Ok(DenMessage::Update(update)) => self.handle_gauge(update),
        // stay tuned for part 3!
    }
}
All handle_gauge will do is update the corresponding fields of the application state.
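That state is, roughly, a struct of LastUpdate-wrapped values (the exact set of gauges here is a guess):

struct App {
    bus: LastUpdate<Vec<BusArrival>>,
    weather: LastUpdate<WeatherReport>,
    // ...one LastUpdate field per gauge
}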
LastUpdate is a wrapper that provides a little housekeeping,
specifically tracking when last a field was updated.
This information is used to detect stale data.
If a gauge hasn't been updated in a while,
it'll visually dim itself so we know not to trust it.
Render Unto Caesar
Now that we have our data secured, we need to display it!
Somehow this internal state needs to become HTML.
And Yew has a very nifty mechanism for doing this:
the html!() macro.
Similar to React's JSX,
this lets us write natural-ish HTML.
Here's the snippet for the bus updates:
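(A sketch; stale and slug are the real attribute names, while is_stale() and value() stand in for whatever LastUpdate actually exposes.)

html! {
    <Gauge stale={self.bus.is_stale()} slug="bus">
        <BusGauge arrivals={self.bus.value()} />
    </Gauge>
}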
We can see the stale and slug arguments that correspond to
attributes we passed in.
Children is a special value that allows us to wrap other tags in <Gauge></Gauge> tags.
Otherwise, we'd have <Gauge /> and it wouldn't be nearly as expressive.
But this is a pretty simple component.
BusGauge is where it gets interesting.
Yew components are supposed to store most of their data in Properties.
When the parent passes in new Properties, the component gets re-rendered.
This is good!
That's how information trickles down the component graph from the root.
The BusGauge struct will exist for the life of the application.
If we stored information there at create time,
it'd never be updated when App sent us new data.
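The props for BusGauge look roughly like this (the field name is a guess):

// Deriving Properties requires PartialEq.
#[derive(Properties, PartialEq)]
pub struct BusProps {
    pub arrivals: Rc<Vec<BusArrival>>,
}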
The Rc there is because Properties are cloned very frequently,
so it's a good idea to make those clones cheap.
Fair enough. bus_route similarly maps down to individual_arrival(),
which handles the numbers.
That's where the interesting stuff happens.
Let's look back at BusArrival:
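(Roughly; the field names are guesses at the real struct in den-message.)

use chrono::{DateTime, Local};

pub struct BusArrival {
    // An absolute departure time; the frontend turns it into minutes at render time.
    pub time: DateTime<Local>,
    // Whether this estimate is real-time or just from the schedule.
    pub live: bool,
}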
What happens if mins is less than zero?
We skip it, of course.
individual_arrival returns Option<Html> instead of Html,
because a departure time of -1 isn't too useful.
But otherwise, we render it out.
bool::then is a handy method that returns Some if true, or None if false.
Could this be an if? sure!
But I really like chains like this.
Debate me in the comments1.
Then we just write out the minutes, with the little icon if it's live.
It probably should be a fancy SVG icon,
but I'm a backend person at heart, cut me some slack.
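Pulling that together, here's a sketch of individual_arrival, assuming the BusArrival shape above and a plain-text stand-in for the icon:

use chrono::Local;
use yew::prelude::*;

fn individual_arrival(arrival: &BusArrival) -> Option<Html> {
    // Minutes until departure, computed at render time so it never goes stale.
    let mins = (arrival.time - Local::now()).num_minutes();
    // A departure in the past isn't too useful, so bool::then skips it.
    (mins >= 0).then(|| {
        let icon = if arrival.live {
            html! { <span class="arrival-live">{ "live" }</span> }
        } else {
            html! {}
        };
        html! { <li>{ mins }{ " min " }{ icon }</li> }
    })
}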
We're all done, though! We've rendered our gauge!
…once.
Nothing But Time
There's actually two kinds of Component you can write in Yew:
function components (like the Gauge) and struct components (like BusRoute).
The function components have a lot less boilerplate,
but the struct components give you a lot more control over the lifecycle.
We're using that here.
Here's BusGauge::create, called when our component is initialized:
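(A sketch; the Message type and the struct's own fields are glossed over.)

fn create(ctx: &yew::Context<Self>) -> Self {
    // Grab a handle to ourselves so the timer can message us later.
    let link = ctx.link().clone();
    // Every 30 seconds, send an empty message; forget() keeps the timer
    // running for the life of the page.
    gloo_timers::callback::Interval::new(30_000, move || link.send_message(())).forget();
    // (assuming the struct derives Default)
    Self::default()
}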
First, we get a reference to ourselves.
Then, every 30 seconds, we send ourselves an empty message.
Interval is from the gloo_timers package, and by calling forget we ensure it will run indefinitely.
The content doesn't matter: any message will call
Component::update.
update returns a boolean, which represents whether we should re-render our element.
By doing so unconditionally, we re-render our gauge every 30 seconds.
And because the minute offset is calculated at render-time, not on the backend,
it'll never be more than 30 seconds out of date.
I actually use this for a World Clock gauge too.
By setting the update interval to every second and sticking a Local::now() in the render,
you've got a nice little clock that never goes stale.
CSS
Here's the part of web development that feels the most black magic to me.
I've got to turn this:
Into something that conveys information usefully.
Now, let's do some expectation setting.
I picked colours mostly based on named HTML colours.
This is not going to win any design awards.
But it will, hopefully, be legible.
Let's get started!
Variable Speed
Did you know CSS has variables now?? Check this out:
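(The values here are stand-ins; the variable names are the ones referenced below.)

:root {
    --2nd: steelblue;
    --font-tiny: 0.6em;
}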
And we can specify the sizes we want in terms of fr units.
grid-template-rows: .2fr 1fr .8fr;
This roughly means "10%, 50%, 40%."
The fr values are a ratio, rather than absolute values.
And from there, it's just some basic styling:
/* reset the default padding and margins from ul */
ul {
    padding: 0;
    margin: 0;
}

/* we don't use the headings */
h2, h3 {
    display: none;
}

/* make a little box with the route number */
.bus .route-line {
    background-color: var(--2nd);
    padding: .5em;
    list-style: none;
    text-align: center;
}

/* get out of here bullets */
.bus li {
    list-style: none;
    margin: 0.5em;
}

/* I can deny it no longer! ...i am small */
.bus .arrival-live {
    font-size: var(--font-tiny);
}
One of the last changes I made was for legibility.
The display we're using is only 720p,
and I wanted to be able to see it from a distance.
For the font, I went with Overpass,
based on the venerable Highway Gothic used on American highway signs2.
The other thing was slightly bolding everything:
:root {
    font-weight: 600;
}
The End Result
Next Time
Deployment! I'll walk through how I deployed the client,
the server, and the development tooling I built along the way.
Our house in Vancouver came with a TV pre-installed in the kitchen,
right above the fridge.
This was, originally, for monitoring all of the security cameras.
But surveillance cameras show the exact same thing, day in and day out.
What if, instead, I aggregated a bunch of different datums our house cares about?
A former housemate had a little web app that displayed the weather,
and when the next trash day was.
But I was feeling a little more ambitious than that,
and I never miss an opportunity to over-engineer something.
There's two important things to notice here.
One, planes are cool.
Two, both the ADSB and the Slack message want to be near-real-time1,
which means this can't just be a server side application.
The First Architecture
Client Side Best Side
My first idea was to have everything happen on the client side.
There's a half dozen or so APIs that we need to aggregate,
but if this is only going to run in one or two places, why have a backend at all?
It turns out there's a couple reasons.
One is CORS,
an annoying but necessary security feature that won't let you just call any old URL you want.
If you control the endpoint you can add a couple headers that say "Hey, whatever, do what you want."
But I don't control a lot of the endpoints, so I was going to need a proxy anyway.
The other reason was that making requests from inside Webassembly was just a little awkward.
For example, here's how reqwest, a common Rust library, makes a request:
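Something like this, with a placeholder URL:

// Plain reqwest on a native target: build the request, send it over a socket.
let response = reqwest::Client::new()
    .get("https://example.com/api/routes")
    .send()
    .await?;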
But on the Webassembly side, you can't just make arbitrary socket calls.
You need to use the browser's built-in XHR methods.
Request::get("/path")
.send()
.await?
.unwrap()
And while that interface is the same,
pretty much no library you'd want to use comes with built-in support.
You lose access to a lot of the ecosystem advantages Rust usually gives you.
Before I gave up, for example, I wrote my own slack client because none of the existing ones supported WASM.
You'd expect Rust would have a… trait HttpClient that had a bunch of implementations, but no such luck.
Maybe once AsyncTraits stabilise.
The architecture I eventually settled on was more traditional, consisting of three crates:
den-tail is the backend, running in a VM on my NAS
den-head is the frontend, running in webassembly in a browser
den-message represents the JSON-encoded wire format2, transmitted over websockets from den-tail to den-head.
In this post we'll discuss the first one, and part 2 will introduce our frontend.
The Backend Situation
Here's the basic problem statement:
The backend needs to fetch updates from a lot of different sources, on a lot of different schedules.
Bus departures should be every minute or so, but the trash schedule needs once a day at most.
And all that data needed to flow back over the websocket to the web frontend.
Are you thinking what I'm thinking?
Actor model!!
There's a neat actor model library called actix.
And even better for my purposes, it's mostly a web framework that happens to use actors.
I don't have to choose a web framework!
An actor can receive several kinds of messages,
which are just structs or enums.
The compiler even checks that you're sending messages to actors that understand them!
--> den-tail/src/ws.rs:37:12
|
37 | a.send(MyStruct2);
| ---- ^^^^^^^^^ the trait `actix::Handler<MyStruct2>` is not implemented for `WsActor`
| |
| required by a bound introduced by this call
|
= help: the following other types implement trait `actix::Handler<M>`:
<WsActor as actix::Handler<GaugeUpdateMessage>>
<WsActor as actix::Handler<MyStruct>>
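Declaring that an actor understands a message is just a derive plus a Handler impl; a minimal sketch using the placeholder names from the error above:

use actix::prelude::*;

// Any struct or enum can be a message; the derive needs to know the reply type.
#[derive(Message)]
#[rtype(result = "()")]
struct MyStruct;

impl Handler<MyStruct> for WsActor {
    type Result = ();

    fn handle(&mut self, _msg: MyStruct, _ctx: &mut Self::Context) {
        // react to the message here
    }
}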
The Data Must Flow
Let's see how data actually moves through the system.
To do that, we'll take a look at the Bus Arrival.
I call the individual displays "Gauges."
Here's what the bus gauge looks like:
This means the 46 bus has arrivals in 9, 18, and 55 minutes.
The 7's arrival is real-time, the others are just from the schedule.
First question: How do we get this information?
The API
I live in Vancouver, and TransLink has a real-time API we can use.
I looked into using Transit's API, but it's limited to 1500 calls a month.
That's plenty for an app used once a day, but it works out to 2 an hour for an always-on application.
No good!
There wasn't a crate, but it's not too complicated an API:
There's a lot, but we don't care about most of it.
Just the route number, the expected leave time, and the schedule status.
Rust has a really excellent SERialize/DEserialize library called, appropriately, serde.
It's very simple to use! You just create a normal struct,
add a few annotations when the JSON object doesn't quite match Rust's conventions,
and derive Deserialize. Magic!
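A sketch of what that looks like; the field names here are a guess at the TransLink response rather than a verified copy:

use serde::Deserialize;

// The API uses PascalCase keys, so tell serde to translate them.
#[derive(Debug, Deserialize)]
#[serde(rename_all = "PascalCase")]
struct Route {
    route_no: String,
    schedules: Vec<Schedule>,
}

#[derive(Debug, Deserialize)]
#[serde(rename_all = "PascalCase")]
struct Schedule {
    expected_leave_time: String,
    schedule_status: String,
}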
Then we have to retrieve it.
There's a lot of Rust http clients,
and I initially picked Reqwest.
But after I settled on actix I moved everything to awc.
There's a neat trick here with the Rust type system:
because we know that we're returning a Vec of Route structs,
we don't need to tell json() what format to use!
It'll automagically call Vec<Route>::deserialize.
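In practice that looks something like this (the URL is a placeholder):

let mut response = awc::Client::default()
    .get("https://example.com/api/routes")
    .send()
    .await?;
// The annotation on `routes` is all json() needs to pick a Deserialize impl.
let routes: Vec<Route> = response.json().await?;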
There's a little more post-processing.
We parse the date (annoyingly complicated, since the date format changes)
and whether it's live or not3.
Here's the end result, from den-message.
This is the wire format that will eventually be sent to the frontend.
Obviously this isn't the only gauge we need to update.
But the logic for every one is the same: retrieve an update, do some parsing, return it.
And when you need a lot of implementations of the same thing, you make a trait!
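Roughly, with a stand-in error type:

use async_trait::async_trait;

// Every gauge knows how to fetch and parse its own data.
#[async_trait(?Send)]
pub trait GaugeUpdater {
    async fn update(&self) -> anyhow::Result<GaugeUpdate>;
}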
The ?Send is a consequence of Actix.
Send is a Rust trait meaning "safe to send between threads."
But it usually comes with some overhead, involving a mutex or some other synchronization primitive.
But Actix's own future wrapper, ActorFuture,
doesn't require Send.
And since awc is mainly for the actix system, it isn't Send either.
Until I figured out the ?Send, I got a lot of gnarly error messages,
so I stuck with Reqwest (which is Send).
But eventually the async-trait docs gave me the answer.
Anyway!
GaugeUpdate is an enum, with variants for all the gauges' associated data.
Every kind of data we collect can be turned into a GaugeUpdate.
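Schematically, with placeholder variants beyond the bus one:

pub enum GaugeUpdate {
    BusArrivals(Vec<BusArrival>),
    Weather(WeatherReport),
    Trash(TrashSchedule),
    // ...one variant per gauge
}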
Since we're using an actor model,
every updater is going to get its own actor.
But since we've got a trait,
we can use the single actor type for all of them.
pub struct UpdateActor {
    // Not having UpdateActor be typed makes it easier to construct collections
    updater: Rc<dyn GaugeUpdater>,
    name: &'static str,
}
Probably both the Rc and the dyn could be removed with some clever typings,
but this works well enough.
Since actors are all about messages,
we'll use an empty unit-like struct to indicate we want an update.
The actor could make its own schedule,
but instead we use an UpdateSupervisor to periodically send RequestUpdate to every actor
using the run_interval method on an actor's context.
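A sketch of both pieces; the single interval and the updaters field are assumptions (in reality different gauges want different cadences):

use std::time::Duration;
use actix::prelude::*;

// The empty message that means "go fetch an update."
#[derive(Message)]
#[rtype(result = "()")]
pub struct RequestUpdate;

pub struct UpdateSupervisor {
    updaters: Vec<Addr<UpdateActor>>,
}

impl Actor for UpdateSupervisor {
    type Context = Context<Self>;

    fn started(&mut self, ctx: &mut Self::Context) {
        // Periodically poke every updater actor we know about.
        ctx.run_interval(Duration::from_secs(60), |supervisor, _ctx| {
            for updater in &supervisor.updaters {
                updater.do_send(RequestUpdate);
            }
        });
    }
}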
The actors are all run from a Supervisor,
so they'll be restarted in the face of any Err results.
They can't do anything about panics,
so we are extra diligent to not call unwrap or expect.
Cache Money
Let's go back to UpdateActor and see how it handles those RequestUpdate messages.
There's some housekeeping to appease the almighty borrow checker,
but in essence we call updater.update().await and pass the result to GaugeCache::update_cache:
What's GaugeCache::update_cache?
I don't know if this is a common pattern in Actix,
but in Erlang it's not typical to send messages directly to actors.
Instead, there will be functions that send the appropriate messages for you:
GaugeCache::update_cache performs a similar task.
It looks up the running cache instance from the supervisor
(which will start it if necessary.)
From there, it sends the update in a format the cache can understand.
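Which might look something like this, assuming the cache registers itself as a SystemService and that GaugeUpdateMessage is a newtype over GaugeUpdate:

impl GaugeCache {
    pub fn update_cache(update: GaugeUpdate) {
        // from_registry finds the running cache, starting one if needed.
        let cache = GaugeCache::from_registry();
        cache.do_send(GaugeUpdateMessage(update));
    }
}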
First, we store our client in the client list so it can receive future updates.
Then, we send a catch-up: the most recent messages of every kind.
That way, a reconnecting client doesn't need to wait for fresh updates to come down the pipe.
Some of the less time-critical updaters only run once a day, so they'd be waiting a while!
Web Sock It To Me
Actix handles websockets with the actix-web-actors crate.
You'd think all of actix-web would be web-actors, but I guess not.
Regardless, it plugs in nicely to our existing menagerie of actors.
The way this works is actually pretty interesting.
First of all, for incoming messages, we use a StreamHandler.
Where a regular Handler handles a single message,
StreamHandler works with a Stream of messages.
pub trait StreamHandler<I>
where
    Self: Actor,
{
    fn handle(&mut self, item: I, ctx: &mut Self::Context);
    // snip
}
For our implementation, there aren't many incoming messages we care about:
We take an Actor (which must be a StreamHandler) and a Stream.
The Stream is over Bytes, which will be decoded into the Message we expect.
It's designed to take the web::Payload actix-web uses.
But what's that return type?
Normally we'd expect a Self to be returned,
but instead we get a stream of Bytes.
That stream is how outgoing messages get sent to the client.
When you call WebsocketContext::text to send a message,
behind the scenes it's enqueueing a message which ultimately ends up in that stream.
(We'll get to DenMessage in a later post,
but for now it's just a wrapper enum around GaugeUpdate.)
Handle is synchronous, so ctx.text can't wait for a message to be sent.
Instead, it gets added to WebsocketContext's internal queue
that's eventually sent back to the client by the whims of Actix's scheduler.
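So the outgoing half is just a Handler that serialises and enqueues. A sketch, assuming serde_json does the encoding and GaugeUpdateMessage is a newtype over GaugeUpdate:

impl Handler<GaugeUpdateMessage> for WsActor {
    type Result = ();

    fn handle(&mut self, msg: GaugeUpdateMessage, ctx: &mut Self::Context) {
        // ctx.text doesn't send anything immediately; it enqueues the frame
        // for the WebsocketContext to flush out later.
        if let Ok(text) = serde_json::to_string(&DenMessage::Update(msg.0)) {
            ctx.text(text);
        }
    }
}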
How does the cache know to send us GaugeUpdateMessage?
We need to subscribe to the cache.
If we look at the Actor trait,
we can see that there's some lifecycle hooks we can use:
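Abridged from actix's Actor trait, the two hooks that matter here (both have empty default bodies):

fn started(&mut self, ctx: &mut Self::Context) {}
fn stopped(&mut self, ctx: &mut Self::Context) {}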
GaugeCache::connect takes a Recipient<GaugeUpdateMessage>, where Recipient means "any actor that can receive a GaugeUpdateMessage."
In practice this will always be the same kind of actor,
but there's no reason to hardcode that.
This constrains us to only sending GaugeUpdateMessage,
instead of every message we've got a Handler for,
but that's okay.
Wiring it up gets us this:
fn started(&mut self, ctx: &mut Self::Context) {
    if let Err(e) = GaugeCache::connect(ctx.address().recipient()) {
        error!("Failed to connect to cache: {:?}", e);
        ctx.stop();
    }
}
There's a similar stopped method so we don't send messages to a client not listening
(there's a metaphor there, probably).
Now all that's left is to actually run actix!
Their documentation is very good, this is all we need:
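(Roughly, following the actix-web-actors docs; the route path, port, and WsActor::new() are placeholders.)

use actix_web::{web, App, Error, HttpRequest, HttpResponse, HttpServer};
use actix_web_actors::ws;

async fn ws_route(req: HttpRequest, stream: web::Payload) -> Result<HttpResponse, Error> {
    // ws::start upgrades the connection and wires our actor to both halves.
    ws::start(WsActor::new(), &req, stream)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().route("/ws", web::get().to(ws_route)))
        .bind(("0.0.0.0", 8080))?
        .run()
        .await
}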
There's the incoming stream for upstream Websocket messages.
ws::start will call HttpResponseBuilder::streaming to stream the downstream half.
And that's the end of the backend!
Our bus arrival is traveling over the websocket connection.
What does that wire format look like?
I'm not actually sure!
The beauty of using Rust on both ends is that we just need to make sure serialisation is reversible.
The libraries will handle the rest.
Next Time
We'll build out the frontend and actually use this data we fired down the pipe!
I briefly considered using gRPC, but those same restrictions raised their heads:
browsers don't provide low-enough-level socket access for gRPC to work without a weird proxy.