Welcome back to the Linux Prepper Podcast, I'm your host James. This is a show about self-hosting for the FOSS-curious, sysadmins, home lab enthusiasts, people who want to DIY things themselves or just try self-hosting their own gear.
If you've ever tried this and failed, well, welcome to the club. We deal with self-reliance and local-first tooling, and today's episode is all about repurposing old machines.
We're also going to dig into basic file workflows and file networking for a local or offline-style home lab setup, and then we'll finish it off with trivia picks from the recent LinuxFest Northwest.
So buckle up, here we go.
I want to thank the sponsor of this podcast, which is ameriDroid.com. ameriDroid is a United States, Northern California-based distributor of single-board computers, especially ODROIDs from Hardkernel, and an official partner for Nabu Casa and Home Assistant hardware.
They offer global shipping, and they have friendly customer service you can call on the phone.
They're awesome people, super nice, they'll help you out.
It's definitely an advantage to order from them versus ordering directly from overseas from Hardkernel and such, and they're just an all-around great provider.
Can't recommend them enough, thank you for sponsoring the show.
You can use the Linux Prepper code at checkout or use the referral link.
So what is a battle station?
A battle station is really just a cool term people use online for a gaming-type setup in addition to an office: you have some comfortable gaming setup with an ergonomic chair, set up to kick butt in an online game.
So we're going to take the same concept but adapt it to the Linux Prepper style of user and administrator, which means instead of going for something fancy, we're going to re-appropriate the gear we already have, make sure it's working in a foundational way that supports us in getting work done, and assume that we'll run into problems and snafus.
My focus with this has been both the hardware and the software.
We're going to break our battle station segment into four sections.
First, we're going to talk about graphics and gaming performance, 32-bit specifically.
Then we'll jump into PipeWire, we'll follow that up with managing memory from 16 gigabytes down to 1 gigabyte on a machine, and we're going to end it off with local file networking between two machines as the basis of a minimalist home lab, before trivia.
I want to give a caveat that I got really deep in the weeds on this over the last six
months, and there's so much material, so I'm going to try to keep this really focused,
and that way we can break it into different chunks, different episodes.
Do send me your thoughts because this is the longest deep dive I've ever had, and I hope
you enjoy this condensed version of what was previously two hours.
A battle station: a gaming setup, usually with a computer, monitor, and some gaming peripherals, designed for optimal performance and for your own comfort.
So in this case, we're going to build out our own little Linux Prepper battle station.
Two of them in this case, with the assumption that at least one is going to break down,
I've already lost two laptops, what's another one, you know?
So one's not good enough, we're going for two.
This means anything I'm running on one machine, I have to be able to run on the other,
and that means the directories have to be managed equally across the machines.
So if I work on machine A, and I work on something else on machine B, it's not going
to mess me up.
I've got a system I can understand, and that's what we're delving into today.
We're going beyond the first steps, and now we're going into the next steps, which is
making machines we can actually use.
Quick recap of what happened with that fancy laptop: I had a new laptop, but it broke.
I couldn't get access to the BIOS and the screen was damaged, so that inspired me to crack open my discarded gear drawer.
I call it my cabinet of shame, and I also have a steamer trunk in the same vein, full of old projects and spare parts.
So I dug into that, and you can see in the forum I have a detailed technical breakdown of what I used.
The base of the setup is two Intel NUC thin-client computers. One has a solitary USB 3 port and an i5 that I upgraded to 16 gigabytes of RAM; the other has USB 2.0 ports, and I dug around and was able to get it up to 12 gigabytes of RAM from the trunk.
And I have two matching 24-inch monitors, so I put one on each.
I have a keyboard, my Dell, which I've mentioned before; it's just an old Dell, you can get one for a few bucks. Then I got some mice, they sell for about $20, and they're absolutely amazing, I'll link to them in the forum. And I have two simple desks.
I found some TP-Link Wi-Fi adapters and Bluetooth adapters for USB 3, and since USB 3 falls back to USB 2, I figured those would work.
You'll remember I only had one solitary USB 3 port on one machine and some USB 2 on the other.
That means in order to use devices, I was going to need to start chaining USB devices together.
So that's what I did.
What I ended up doing was getting powered hubs, and this is something I've played with in the past, but I decided to buy quality powered hubs: 10-port USB, individual on/off switches, high amperage, I think they're 5 amps.
Because of backwards compatibility, even USB 2 devices work totally fine with a USB 3 powered hub on an Intel NUC.
You just want it to work, and the speed is less important.
I've used it successfully to run the mouse, keyboard, Wi-Fi adapter, Bluetooth adapters, video capture devices, an external USB monitor, a small projector; everything has run without issue.
I haven't had a single device that didn't work, which is a great testament to how well supported devices are these days, and honestly it's a good-enough scenario.
The main thing I found to get those machines upright is to go wired.
Running gigabit Ethernet between them makes the biggest difference.
But as you'll see, it's not a deal breaker.
So what do I actually need for my battle station personally?
I need a USB microphone, which means I got one on both.
I need headphones.
I got them on both.
I need some pen and paper at both, and then I need some kind of chair and some tea.
So I've got my desk together.
I got my battle stations together.
And it was just a matter of getting my systems up to date running Ubuntu.
I took the Ubuntu LTSs, which were an LTS behind or two LTSs behind.
I got those up to date.
I'd already moved all of my data into cold storage in the past.
So I updated those backup images and then updated my systems.
And in general, it was a pretty seamless process.
I'm now running KDE Plasma on both, and what I realized on these older pieces of hardware is that they have the Intel i915 graphics stack inside, which is not powerful and, frankly, not good.
I ran into a lot of issues related to that.
So what I would actually recommend on older hardware like this, even though it's an i3 or an i5, is to dump KDE and move to LXQt, which is Openbox-based, or you could use Xfce, or i3 if you're open to a tiling window manager.
All of those would work really well.
So yes, in running these, you are literally going back to an X-based system.
Hello, X, my old friend, but it's going to run better.
You will get better performance by moving back to X instead of staying on Wayland.
It's worth considering on old hardware.
And there are also those minimal antiX-type projects and MX Linux that you can run as well.
You still have some support in 32-bit land, but you do have to be careful: the underlying dependencies are breaking.
Multi-architecture support for Debian is still available.
Despite 32-bit dropping off overall, you can still make it happen, as well as with Arch and Void Linux, so there are still 32-bit options out there.
All of it is slowly being put out to pasture, but the good news is there is still support.
I've been focusing on devices that I can run 64-bit on, but something I want to mention in terms of 32-bit is that I've been struggling a lot with running 32-bit software.
The libraries and dependencies for 32-bit are problematic in their support, even after reinstalling X and reinstalling multi-architecture 32-bit support.
I find myself having to dig back through, for example, the Debian repositories trying to find dependencies that have changed, whereas I thought I would just be able to run 64- or 32-bit software from 12 to 15 years ago that I personally own on Steam.
I've found a lot of problems with it, and a lot of that comes back to gaming, right?
Gaming-type tools. I'm really surprised to see that overall platform support for software you would imagine would work, does not.
This is one place where Windows itself crushes Linux.
Windows still offers that 32-bit support, just like Wine gives you that 32-bit support.
If you want to run 32-bit software on Linux, it's painful on modern operating systems.
Even tools I was able to run without issue in the last, say, five years are performing worse in 2026 than they were at that time.
That's frustrating, and obviously your repositories have changed and packages have changed, but I see this as a limitation of Linux itself in terms of its 32-bit support and support for legacy code, which is just not as good as you would imagine.
So we keep chasing the shiny and new, but the truth is, the old code you have isn't necessarily aging well, and you might be surprised when you try to run it and run into difficulties with these old applications.
I realize we're really focused on gaming right now, but it's important. If we step away from 32-bit and old systems and come back to these Intel NUCs with these Intel i915 stacks from 12 years ago, whatever, and you look at those in terms of, say, using Steam, well, the problem is that Steam itself and the Proton technology are not supporting these old titles either.
What's recommended is, oh, use Proton. Well, guess what, Proton dropped support for that Intel graphics stack back around version 7, so you have to install old versions and pray that they work. And all these things were working perfectly in the past; in the last couple of years you could run any of these titles, and now all of it's totally broken.
That's just not good, and this is not even about Windows games. We're talking about Linux-native games that are broken, and then you're trying to run them through Proton, which is also not supporting these graphics stacks, because they want to support the new graphics stacks, Vulkan, whatever. It's just broken all over the place; it's not good.
If you bought a game in the past, you expect that you can use it, and apparently that is not the case, and it's very frustrating. And then you're back in the weeds like it's 20 years ago, trying to fix this dependency and that dependency, and you're deep in dpkg on the command line, like, dude, what happened?
What happened to just being able to have stuff run, and assuming it would still run in the future?
Maybe this incessant need to throw away the old for the new at all times is holding us back. Maybe that's holding us back; something to think about.
There is some good news. If you're like, man, I really want to run a modern game, well, I have an option for you that is really exciting.
The LÖVE 2D engine is coded in C++ and uses Lua throughout.
It's ultra light and can still be built for 32-bit, which means you can also take games packaged through Steam or as AppImages and run them on a 32-bit compiled build of the engine, known as love. So what you can do is purchase a game on Steam, like Balatro, unpack it, take that .love file, and run it through your 32-bit compiled version of the engine, and Bob's your uncle: you're able to run these games on a 32-bit system right now, and that is the power of a free and open engine.
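If you want to try this before the show notes drop, here's a rough sketch of that unpack-and-run flow. I'm assuming the default Steam install path and that you've already built or installed a 32-bit love binary; the exact paths on your system may differ.

    # Balatro's .exe is actually a zip archive with the LÖVE game data fused in
    cd ~/.local/share/Steam/steamapps/common/Balatro
    unzip Balatro.exe -d balatro-src            # extract the Lua game files
    cd balatro-src && zip -r ../Balatro.love .  # repack as a .love archive
    love ../Balatro.love                        # run it with your 32-bit LÖVE build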
Specific notes for that will be included in the show notes, if you want to jump on the forum and try doing this yourself. As a quick shout-out to LÖVE-coded games that you can still run on 32-bit, not to mention on the Intel graphics stack, be sure to check these out. One is Moonring, created by the creator of Fable; it's an old-school Ultima-style RPG, and you can get it for free, so you should be able to get the AppImage directly off itch.io and build it for 32-bit or run it on your local system.
There's also Balatro, which is a super famous card game right now, a major seller where you basically create game-breaking hands of cards. And there's Arco, which is a Mesoamerican turn-based fighting game, and I highly recommend checking out Arco. It's worth noting the developer has mentioned they'll most likely never make another passion project of this kind, because it hasn't sold well, even though it's been critically acclaimed. You can play the demo of the game, I tried it on Steam, it's super fun, great pixel art. Highly recommend checking out Arco, and Balatro, and Moonring.
As we wrap up this section on the more visual side of older systems, do keep in mind that if you're running headless, the situation looks far, far better, especially because of Go and Rust. All these applications, and I found over 100 great tools that get recommended all the time on all these different podcasts, like Uptime Kuma, are written in Go and Rust, which means you can use them on 32-bit. Awesome: they run on 32-bit, they run on 64-bit, and they're perfect for running at home on low-spec systems, like very low RAM, one-gigabyte systems. You could run multiple of these applications: reverse proxy, DNS, DHCP; there are a lot of options out there. So in that way, 32-bit hardware is still extremely usable. But let's jump back into the desktop portion of things again, and this time, let's focus on audio.
On the audio front, I've had a fantastic experience. Back in February, I started looking into my options for PipeWire, for managing multiple audio interfaces on this local machine, because that was one of my goals, right, with getting started on this whole project. And one of the things I found is that PipeWire support for audio on Linux is amazing, it's fantastic, and it's been such a good experience.
If you look at the forum post, you can see the crazy picture of all the stuff routed.
What you get is all your different USB connections and your Bluetooth connections; plug them all in. Like me, you might be wondering, well, how do I access an audio device, for example in Audacity, or in any audio recording tool like Reaper? How do I run this input to this output?
I was confused, and I was struggling with that.
There are three different tools that are really important, and there are more, but this is a good start.
Number one, WirePlumber.
WirePlumber is the session and policy manager for the PipeWire API.
WirePlumber is where the actual configuration files for the audio interfaces are managed, so you can create WirePlumber configs, and that'll get you covered.
It gets better.
There is also qpwgraph, which is a GUI tool that shows your audio, video, and MIDI assignments in real time.
Definitely save your existing configuration, because it's really easy to screw things up when you have all the patch points. Like an old telephone operator, you can just wire anything into anything; it's so fun, and it's all visual.
So with qpwgraph, if for some reason you don't hear something, like in your headphones, and say you're playing music on your speakers while you're recording, you can at any time open up qpwgraph and say, I want to take the speaker output, or let's say you're playing from Firefox, I want to take the Firefox output, and route it to the headphones. And then it's done.
It works.
You can do it on the left ear, the right ear, in stereo; you can split it up however you want.
Very, very cool.
I would say I was confused when I was trying to do things directly in different applications, and I was also confused because I'm so used to alsamixer and PulseAudio.
PipeWire is a drop-in replacement for those, but for configuring PipeWire, you'll want to do it directly in WirePlumber, or with something like qpwgraph.
For JACK assignments, that's pw-jack; that'll manage JACK assignments, which is the pro audio stuff.
Here's my note on JACK: you don't actually start a JACK server when you're using PipeWire, as PipeWire emulates JACK.
Instead, you configure the application to use JACK, and it connects through PipeWire without needing a separate JACK server.
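A few commands that helped me here, as a sketch; I'm assuming your distro packages PipeWire's JACK shim and the WirePlumber CLI, and package names vary:

    # Inspect what WirePlumber sees: devices, sinks, sources, streams
    wpctl status
    # Launch a JACK application through PipeWire's JACK emulation;
    # no separate jackd server needed
    pw-jack reaper
    # Dump the live PipeWire graph if you prefer the terminal to qpwgraph
    pw-dump | less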
Even the basic wiring in WirePlumber gets you 95% of the way there.
I am happy to say that once I figured out this basic routing in PipeWire, everything has been awesome ever since, and I have been able to run four, five, six, eight devices, and everything has been cherry.
So I highly recommend learning about PipeWire configuration; it rules.
If you enjoy hearing about pro audio and these kinds of fun DIY approaches, I'm going to include a bonus episode with this one on Aphex Twin, the electronic pioneer, who has made an entire 40-year career out of using any and every imaginable tool, be it the command line, hardware synths, whatever, to make crazy, amazing music for years. Every tool is pushed to the absolute limit, and if that interests you, you can check out my dedicated episode on Aphex Twin, who would squeeze the ever-loving everything out of tools like PipeWire.
Ways to support the show.
The best thing you can do to support this show is to share it with somebody else.
If you know anybody that likes this show, please share it with them.
The second best thing you can do is let me know.
You can send me an email, podcast@livingcartoon.org, or you can drop into the Matrix and send me a message if you like the show.
Seriously, this show is so small, and it actually does inspire me to want to continue.
And if this show interests you, let me know because I have over 15 pages of notes in regards
to this show.
Oh, hello, Carl.
You can hear him.
That's my cat.
Back to the episode. I would be remiss not to mention the RAM-shaped elephant in the room, which is, of course, memory.
Memory is crazy, memory is limited.
So let's take a moment to specifically address memory, no pun intended.
Related to everything here, right, is memory: memory prices and disk prices being insane.
So what I'm saying is that having low memory in a computer inherently means you're going to hit the limits of that computer quickly.
And I definitely found that, right?
I'm not going to lie in any way.
I hit the limits of memory immediately in terms of swap.
And so what I dug into a lot is zram.
And zram really helped with performance, with some tuning.
And I just want to read what this person posted to Reddit, and I'll link it, because it's useful.
And I just want to read what this person posted to Reddit and I'll link it because it's useful.
Z-RAM is a Linux kernel module that creates a swap space in RAM, as opposed to having it
on your disk.
Swap partitions on a hard drive or an SSD are slow.
And I found this.
I hit my limits in them because of browsers.
The web browser is the problem right a lot of the time.
But this results in stuttering where the machines decides to start using swap space.
Z-RAM is an alternative to swap space that compresses memory before storing it in a designated
space in RAM.
This means you get more efficient use of the RAM as your system will not compress files
in memory when needed.
In using zram, I did find a notable improvement in my daily browsing experience.
And I'm not talking about any kind of browser-level manipulation of things, just the computer itself and allocating zram for performance.
So zram is kernel-supported on all the major Linux distributions, but that doesn't mean it's enabled by default.
For me, I had to install the zram-tools package.
And from there, I was able to set up a zram swap device.
That part was easy, and it did work really well.
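For reference, on Debian and Ubuntu the zram-tools package reads /etc/default/zramswap. Here's a minimal sketch mirroring my half-of-RAM ratio; check your distro's file for the exact knobs:

    sudo apt install zram-tools
    # then in /etc/default/zramswap, mirroring the half-of-RAM ratio:
    ALGO=zstd       # compression algorithm
    PERCENT=50      # zram size as a percentage of RAM
    # and apply it:
    sudo systemctl restart zramswap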
I'm going to pull up my notes and say exactly what I did.
Let me look.
All right.
Let's go back through my systemd configuration of zram swap.
Systems with 12 gigabytes of RAM: how do they behave?
12 gigabytes of RAM is enough for basic use, but once you're adding KDE Plasma, gaming, and web browsers, then I'm running out of memory, which is exactly what was happening.
So let's talk about the role of zram in a 12-gigabyte RAM system.
It's absolutely critical.
It acts as a buffer before I jump into that disk swap.
And what it does is prevent KDE Plasma itself from freezing.
Because that's what was happening to me with 12 gigabytes of RAM.
It's like, you watch a YouTube video, browse the internet, be doing something, and the computer just seizes up.
And I'm like, oh.
So what it does is grant me an additional four or six gigabytes of compressed RAM, which helps keep the system responsive instead of it just locking up, because once it locks up, there's nothing I can do.
It's like a frozen robot.
And I'm just like, ah.
So that's not good.
zram gives me a little bit of a safety net, a little space to work with, takes some of the pressure off.
So it did help, and it helped on both computers.
So let's talk about my 16-gigabyte RAM system.
16 gigabytes obviously is a little more comfortable; I can do more multitasking.
But zram in this case is still important. It's giving me an extra six gigs of compressed RAM, which again is preventing KDE Plasma from locking up in case browser spikes take up all my memory, because browser tabs themselves take up hundreds of megs each, or 75 megs at minimum per tab, and things add up quickly.
So swap is still there, but it's rarely used, which is the intention.
So in this case, zram is about half the size of the available RAM.
The swap file is a quarter of that size, then I'm taking my swappiness down, and I'm using a program called earlyoom, short for early out-of-memory.
And I am running that. Let's talk a little bit about earlyoom.
All right, this is from the project's GitHub page.
What is earlyoom?
earlyoom wants to be simple and solid; it's written in pure C. It has no dependencies, and it has an extensive test suite, unit and integration tests, which is written in, oh, who knew, Go. earlyoom checks the amount of available memory and free swap up to 10 times a second.
By default, if both are below 10%, it will kill the largest process, whatever has the highest out-of-memory score.
The reason this matters is that if your computer's about to seize up, it's better to kill the process than to lose your entire computer, and I'm specifically talking about the web browser.
So we're just giving ourselves multiple levels, between zram, swap, and the out-of-memory daemon, to prevent losing everything.
And okay, why not trigger the kernel out-of-memory killer instead?
You can make earlyoom trigger the kernel out-of-memory killer by passing a flag.
However, on some Linux kernels, triggering the kernel out-of-memory killer does not work.
That is, it may only free some graphics memory that will be allocated again, without actually killing the process.
You can see how this looks on machines with Intel integrated graphics, which is what I'm running.
How much memory does earlyoom use?
About two megabytes; only about 220 kilobytes is private memory.
The rest is the libc library, which is shared with other processes.
So I have had no problem running earlyoom on my computers.
It hasn't caused me any kind of trouble, but it's really once I did that zram in addition to this that I've had much better performance.
And this is a good little setup: zram, swap, reduced swappiness, and an out-of-memory daemon or similar in case something just eats through all your RAM, which browsers are wont to do. You've just got to protect yourself from the whole system seizing up.
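Here's a sketch of how I'd wire that up on a Debian-family system. The 10% thresholds mirror earlyoom's defaults, and the --prefer/--avoid patterns are just example regexes, not gospel:

    sudo apt install earlyoom
    sudo systemctl enable --now earlyoom
    # /etc/default/earlyoom: act when free RAM and free swap both drop
    # below 10%, prefer killing browsers, avoid killing the desktop
    EARLYOOM_ARGS="-m 10 -s 10 --prefer '(firefox|chromium)' --avoid '(plasmashell|Xorg)'"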
And let's say, while we're talking about this, that you're wondering: what is this whole swappiness thing, and why does it matter, before we jump to this out-of-memory daemon killing things?
What's the deal with swappiness?
Swappiness controls when the kernel starts moving memory out of RAM.
On a system where I'm now running zram and swap, that means it affects me on two layers.
First you have the fast compressed zram, which is used before swap; swappiness doesn't change that.
Then you have the disk swap, which is slower; that's where swappiness matters, right?
At a value of around 20, which is what I have, it means: avoid using the disk swap for as long as possible, because it's slow and limited.
That means on my 12-gigabyte system, and even on 16, I'm going to run into memory pressure, and swappiness determines how gracefully I work through it.
My goal is just to keep the system responsive.
And what I'm saying is, if swapping to disk doesn't need to occur, I'm discouraging it.
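Setting it is a one-liner; here's the value of 20 I mentioned:

    # Check the current value (most distros default to 60)
    cat /proc/sys/vm/swappiness
    # Set it for this boot
    sudo sysctl vm.swappiness=20
    # Persist it across reboots
    echo 'vm.swappiness=20' | sudo tee /etc/sysctl.d/99-swappiness.conf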
So how does earlyoom interact with zram, right?
earlyoom is a watchdog.
It prevents the system from freezing once memory becomes exhausted.
It's monitoring the RAM, monitoring the swap, and once things fall below a threshold of so many percentage points, single digits, 10%, then the largest memory hogs start getting killed off.
This is to protect KDE Plasma and the system itself from locking up.
And this happens before the kernel's out-of-memory killer, like we already mentioned.
So why does this matter?
It's because KDE Plasma does freeze on me.
If I exhaust the RAM with the browser, my system locks up.
This way, at least I'm able to kill these runaway processes before the system locks up.
And with zram, I get a little more safety bandwidth, more time, before I have to kill those processes.
If you had a larger amount of RAM, then this doesn't really matter anymore.
Once you have like 32 gigabytes, you've got a lot to work with.
But in this case, I don't.
And let's take a quick moment to ask: does any of this matter when you have extremely limited RAM?
Okay, so when dealing with 8 gigabytes of RAM, it seems like, so far for me, things stay pretty much the same: zram at roughly half of the actual RAM, swap about a quarter of that 8 gigabytes.
And really, all we're trying to do is prevent freezes when the browser spikes; earlyoom is still recommended.
8 gigabytes is still obviously enough that your system can work.
You just don't want it to die.
Now drop down to 4 gigabytes of RAM.
Let's say that now we're using Xfce or something like that.
So now it's just: we've got to keep the system responsive, period.
We're still doing roughly the same ratios, 2 gigabytes of zram, or half of it, roughly the same in swap, and a higher degree of swappiness, because swapping is just going to happen.
And we're still using earlyoom.
And in this case, we just want to avoid heavy services.
So if you only have limited RAM, obviously, you don't want to be running, you know, snaps, containers, any kind of preloading or heavy workloads, because it's not going to work.
It's just totally borderline, right?
Because even Xfce, say, is going to use several hundred megabytes just to run.
So zram will still give you some space, swap will still help with crashes, and earlyoom will still help with the system locking up.
But you just have to keep things very, very limited, obviously.
And now we're getting into the space where, you know, the distributions themselves don't recommend having this amount of memory, or even close to it.
But what about if we drop to two gigabytes of RAM?
If we get it down to two gigabytes of RAM, now we're just trying to minimize the actual amount of RAM being used to keep the system running.
So you could still do zram; now you're basically doing half again, a swap file at half, high swappiness because you're going to use it all the time, and still using earlyoom.
But all the notification and background services, everything we don't need, we're disabling, right?
No more indexing, no extra anything; extremely tight.
So it's like a VPS-style system now.
And if you're doing desktop, you're just keeping it light.
So you could use Xfce or something; it's just going to be slow, it's not going to be good.
And then your browser, obviously, you've just got to be careful with Firefox, things like that.
We're not going to go into that.
But you can really tweak it, obviously, in order to not multitask so hard.
Once we drop into one-gigabyte RAM territory, now we're talking, obviously, headless server.
We're just running a terminal.
That's what I do, anyway.
At that point, you could still use zram, you could still use swap with high swappiness, but you just want to disable all extra services, very aggressively.
At one gigabyte of RAM, anything you don't need, you're just taking off, because you're hitting the threshold all the time, right?
And zram and swap might just keep your system running, period.
But I think the main thing to me is: you just don't run anything extra, at least I don't.
So even Docker Compose, for me, on a one-gigabyte system, I just don't use it.
I mean, I'll do that on a fancier machine or whatever.
But I hope this gives some ideas on RAM usage, and please write in to the show if you have more ideas on how to maximize RAM.
I did find that this really helped with performance for me.
And once I get into a headless, bare-minimum one gigabyte, I worry more about only running the literal thing I want to run, and I don't run anything else.
That's the honest truth.
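Whatever RAM tier you're at, the same few commands confirm the safety net is actually in place:

    free -h                     # RAM and swap totals at a glance
    zramctl                     # zram devices, size, and compression ratio
    swapon --show               # confirms zram sits above disk swap in priority
    systemctl status earlyoom   # is the watchdog actually running?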
All right.
Now that we've jumped through all these concerns and all these whatevers, what are we actually going to run on these devices?
Glad you asked.
This is something I'm very excited about.
Touching back on the last episode: having, you know, a basic little recursive DNS for faster lookups, and having these devices with limited RAM but decent basic performance. Well, obviously you're going to want, say, these two little computers to communicate with each other.
No matter how much RAM or how much performance they have, you want to get basic tooling going.
This is exciting because the tooling we're going to talk about is the same tooling that's been powering Linux and Unix machines for decades.
Let's jump in on that.
So one thing you can do, obviously, is take a device, like an old Pi 3, which a listener did as part of DIRT.
That's the DIY unfinished resurrection project challenge, to work on something you've been putting off.
They took an old Pi 3 and installed OctoPi with OctoPrint onto it, just to manage a 3D printer in place of an SD card, as a way to send files over Wi-Fi.
It had previously run on a Pi 5, but they downgraded to the Pi 3 because it has full-size USB ports, it has wired Ethernet, and it gets the job done.
So it worked well enough, right?
So that's one option: a sort of form-fit application you can run.
But let's go back even before that, and let's just say you're connected to this machine.
Well, the first thing you're basically going to want to do with any machine is move a file or a folder around.
Let's focus on the terminal.
You're going to use the GNU coreutils, something like the cp and mv commands, copy and move: copy a directory from one location to the other, right?
I want to copy because I still want to leave my original copy where it is in case anything goes wrong.
Then I have it in two places.
Now let's extend that to these two machines.
They're on the same network.
So I connect them using good old SSH.
I could make an SSH connection between the machines to move files, but I'm like, well, that doesn't really make sense.
I'll just use the scp command.
In this case, scp literally means copy over SSH.
Who are we?
We are somebody who just needs to send something to a place.
One way: we're going to use the scp command to send from where our directory is right now to user@remote:directory, and send the file or folder.
scp is equally useful to pull in reverse.
So you can scp to your local directory from user@host:directory on the remote machine, and it'll pull from the remote machine to your local machine without you having to do anything else, which is pretty amazing.
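In concrete form, with user, machineB, and the paths as placeholders:

    # Push: send a folder from this machine to the other one
    scp -r ~/recordings/ user@machineB:/home/user/recordings/
    # Pull: fetch a file from the remote machine into the current directory
    scp user@machineB:/home/user/notes.txt .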
So you can send or receive files using scp; very, very useful.
That works.
I do that all the time to send myself something between machines, and it works.
But it's also limited, because think about it: you're moving a file or a folder.
And right away, you're like, even though I'm just sending this off somewhere, I want to maintain my timestamps.
I want to do a little checksumming to make sure this process is working properly.
I want to make sure that the ownership and all these things don't change.
Maybe I'm making changes on your behalf as the administrator, and I don't want you to know about it, because it's a network drive or something.
So in that case, I'm not going to use the scp command anymore.
Now I'm going to use rsync.
And rsync has been around for 29 years.
It's an unbelievable tool.
It is the basis of all modern backups, more or less.
So with rsync you're doing the same concept: you're moving a file from one place to the other, or a directory of files, but now you have really comprehensive control over what rsync does.
So you can still use scp, it's all foundational, but with rsync: hmm, I want to move this series of directories to this new machine, but I'm worried I'm going to screw the command up.
So I'm going to do a dry run. You can literally pass --dry-run, or -n, and actually see what will happen without taking any action.
For the dry-run and analysis capabilities alone, rsync is an amazing tool, and of course it can be run through SSH, or locally, or to an external disk for backups.
rsync is killer in a one-way fashion.
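Here's that dry-run-first habit sketched out; -a is archive mode, which keeps timestamps, permissions, and ownership, exactly the stuff scp loses:

    # Rehearse first: -n/--dry-run shows what would change, -v narrates
    rsync -avn ~/audio/ user@machineB:/home/user/audio/
    # Happy with the plan? Drop the n and run it for real
    rsync -av ~/audio/ user@machineB:/home/user/audio/
    # Note the trailing slash on the source: copy the contents, not the folder itself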
So if you want a more bi-directional approach than rsync, because now I'm not doing one-way anymore, you might use a tool like Unison.
Unison has been around for 25 years.
It is built on the rsync algorithm.
But Unison is designed to be run on two machines, which is perfect for me.
I have Unison set up. Why?
Because I have these audio recordings that I'm moving back and forth.
Obviously I could use some other tool, but it doesn't make sense, because these files are pretty large.
I'm working with 40 gigabytes' worth of audio files.
My Nextcloud is 5 gigabytes.
Google Drive is, what, 15 gigs?
I don't need any of that cloud stuff, though.
I can do this machine to machine.
So what Unison does is what I was doing before with scp and rsync, except it's actually run on the two machines.
It runs only when you call it; it's not real time.
Based on the state of the directory you've named, both machines look at that state and adjust accordingly to keep perfect validation across both machines.
So it's like: this directory is what it's supposed to be, on machine A and machine B.
And even if one of those Unison endpoints goes offline, when they come back up, they'll sort it out.
So you can be making changes and it'll get resolved.
That's all Unison does.
It doesn't do any extra special anything.
It just keeps this directory as perfect as possible.
And you can do a hub-and-spoke approach and run multiple iterations of Unison.
But Unison's not designed to be run on top of itself.
So it's going to stay at that two-machine limit.
It's not designed to work things out between four machines or six machines or eight machines on the same directory.
No, it's not designed for that kind of conflict management.
It's just designed to keep the directory you've defined perfectly in sync between two machines.
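A minimal sketch of a two-machine Unison run over SSH; the directory and hostname are stand-ins for your own:

    # One-shot sync of the same directory on both machines, over SSH
    unison ~/audio ssh://user@machineB//home/user/audio
    # Or keep the roots in a profile at ~/.unison/audio.prf
    # and just run:  unison audio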
If you want something that scales up beyond Unison, keep in mind: the tools we're talking about, these will run on potatoes.
I'm talking, I ran these on a 400-megahertz box that I have at my house.
So all of these tools run great on anything that runs at all.
If you can't run SSH, you basically can't use the computer for networking.
And scp, rsync, and even Unison will get you there.
But if you want to scale up, that means you're obviously going to be using more resources.
Now we go to another excellent tool, one that runs in real time.
And no, that tool is not Nextcloud; it's not any sort of cloud service.
We could use cloud services, but that's not the point of this episode.
And even if we did, there's a big caveat with Nextcloud, which is that it screws stuff up.
And I have been using Nextcloud for years.
I use it offline; I'll run it, for example, only on Wi-Fi.
I have the Nextcloud desktop app on at this moment, but usually I just keep it turned off on these machines, because I don't want to use the RAM, right?
And in this case, I don't need to move the data to an additional server.
I don't need to use WebDAV, and I don't fully trust WebDAV, because I do have issues with these kinds of database files and these large, chunky directories, when all I'm trying to do is migrate my changes from machine to machine.
In that case, I personally don't want to use Nextcloud, or I don't trust it as much as this next tool, which does not require a server.
And that would be Syncthing, right?
We're also going beyond KDE Connect, because that doesn't scale in this case.
That's a way to send things between a folder on your phone and your computer or whatever, but yeah, we're moving beyond that.
Syncthing can be used on as many devices as you want.
What's cool is you actually have a global look at each node, at each device, in the system, right?
And they're running in a peer-to-peer fashion.
It can scale up as much as you want.
And what happens is every single node, or client, is aware of the others, even if no changes are being made.
So say you're running five Syncthing nodes and they're all working together.
They are aware of each other.
So they know, for example: oh, node five is offline; node three has gone offline in the last hour.
And that means changes can be propagated in a peer-to-peer fashion.
There's no central node, and the nodes will help each other make the sync possible.
So Syncthing is a super useful tool, and I have found that it works well in a similar fashion to Unison, where I turn it on when I need it and turn it off when I don't.
And I think if the only goal is to keep one folder synchronized between two machines, you should obviously go back to Unison.
But if you want more flexibility, with phone clients and such, then Syncthing is a solid bet, even with, say, two gigabytes of RAM or whatever on a box; I'd say that's the minimum.
Syncthing is good.
Use it when you want it.
And I would use Syncthing over Nextcloud, because you just don't need the server aspect.
I just need to move the folders back and forth.
I can do it over VPN, whatever; Syncthing will make it happen.
No tool's perfect, but these are always tools you can fall back on.
And speaking as a guy who comes especially from Nextcloud: even Nextcloud themselves have recommended Syncthing for just moving large data sets, and rsync as well, because these tools work.
It avoids having to do the whole sync up to the server, then sync down from the server to your device.
You're just moving device to device.
Plus you get things like deltas, which look for what changed, and you get compression functionality, whether it's with Syncthing or rsync.
So this is good.
And I think we'll leave it there for this episode as far as these tools.
But these are the foundational tools you can use to build yourself a basic home lab.
So you're moving folders.
You know where they are, and now we're starting to do backups, right?
Using one-way synchronization, really what you're doing is making a backup.
You're moving your data to another location.
Maybe you're doing some level of compression or encryption.
And I'd like to get into that in a future episode.
But I'll give you one more tool, because you can run it on a low-end machine.
And that is rclone.
Once again, both of these tools are coded in Go.
rclone is a command-line tool that is inspired directly by rsync.
But it's for cloud providers.
You can use rclone as a way to access a remote cloud provider like Google Cloud, Google Drive, Dropbox, Nextcloud WebDAV, S3, whatever.
You can connect to a service provider, a backup service, which people do all the time.
Using rclone, you'll be able to have a FUSE-style mount of a remote location that otherwise does not support it, in order to rsync directories to that location, or pull in either direction.
It's not designed for real-time usage, and it's inherently a bit flaky to mount providers that otherwise do not support this technology.
So, you know, I wouldn't fully trust it.
I know people do, but I would be wary.
But if your goal is to get your data out of Dropbox or whatever, rclone will make it happen.
And if you limit the threading and the number of transfers and such, and really lock that down at the configuration level, because it's a command-line tool, you can run it on an extremely minimal machine and rclone will work.
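A sketch of that get-your-data-out workflow; "dropbox" here is whatever name you gave the remote in rclone config, and the low transfer counts are that throttling I mentioned:

    rclone config     # interactive wizard to add a remote
    # Copy everything down, throttled for a low-RAM machine
    rclone copy dropbox: ~/dropbox-archive --transfers 2 --checkers 2 -P
    # Or mount it FUSE-style (flaky on providers that fight it, as noted)
    rclone mount dropbox: ~/mnt/dropbox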
Anyway, that's enough of these tools for this time.
Something you'll notice from this sort of experimentation and local-first tooling, like we're talking about, is that really what you have is a home lab.
That's what this is, right?
This is a home lab setup.
Every one of these tools is a valid tool that can be used professionally, that you can fall back to; even if you're using other things, you can always use this tooling in addition.
And that's what people do.
So congratulations: if you're running any of these tools at home, you're running a home lab.
You know, try using SSH to make a connection, try using scp to copy a file, try using rsync to verify that file.
You could try using Unison between two machines to keep a folder in sync, or you can make backups with rsync.
These are all valid tools that you can use forever.
And you have a basic kind of peer-to-peer connection between your machines, whether it's local or through a VPN.
And that's what I'm working towards: giving you the tooling to have a rock-solid, awesome setup that you actually understand.
And we're building up, right?
We're building up towards playing with containers and playing with these other things.
But either way, I think we're addressing the biggest question these devices raise locally, which is: how well do you actually understand a tool that you're using, and how well is it actually working for you?
So that's where we're coming from: we're going to understand the tool.
And that way, however you've set it up, we don't care; the point is you at least know what it's doing.
And it's worth taking the moment to figure that out as we continue building up into more and more workflows, beyond, say, taking a machine and a device and growing that to a couple of machines, a couple of devices, and beyond.
I think we'll close this thing out by going over some of the trivia winners from the recent LinuxFest Northwest; a bonus episode talking about that experience will also be dropped alongside this one.
It was a lot of fun.
But let's go through some of the projects people who played trivia named, to end out this episode. I told them they could name any project they want and I would share it on the show, so let's talk about those.
Trivia this time was a mix of questions, which people found quite challenging, on Linux and open source trivia, followed by a practical bonus round on free and open tooling that people have actually used in their daily lives.
The top score was Romeo, with a total of 30 points: 22 in the main round, eight in the bonus.
Romeo named Ubuntu.
Romeo also gave a talk at LinuxFest Northwest called Ubuntu Without Handlebars.
We'll link to it in the forum; you can check out that talk if you want, as the recording is available on YouTube. Trailing immediately behind, right on the shoulder of Romeo by a single point, was Mathias of VictoriaMetrics, VictoriaMetrics being a drop-in replacement for Prometheus and Grafana.
That was 21 points plus eight in the bonus, so 29; it was very close.
The other score, very interesting, which got a 10 on the daily application of tooling in their life, was Salt, who named SeaGL; the SeaGL conference at the University of Washington is currently accepting talks if you want to submit for that.
Let's keep going.
From there, we had the GNU Boot project.
We also had Meshtastic, the decentralized communication network; Pi Club, which is based out of Washington; and Slackware, and the gentleman for that said he runs a Slackware mirror with his father, and they run all of their software through the mirror locally and even offline.
That's interesting.
We also had Triton DataCenter; WWU, that's Western Washington University; Furry Tail; Nostr; FusionPBX; Nextcloud; another Pi Cloud; Clonezilla; takethestone.xyz; Next Book; Librephone; OpenStreetMap; Boo Kitty; Zipline Recruiters; and yeah, if you want to check out any of those, feel free.
A big thank you to everyone who participated in the trivia, and also thank you to LinuxFest Northwest for letting me be there at a table; it was a great time. I recommend anyone interested consider being a part of SeaGL this fall, and check out the link if you want to submit as a speaker and participate in SeaGL, all about GNU and FOSS.
Coming up in the next episode, we'll be talking about monitoring services: making sure things are running, making sure the networking is running smoothly, because once something's actually online, how do we know?
So we'll dig into that next episode. We'll also touch base a little more on the LinuxFest Northwest conference, what happened there, and what you can expect at SeaGL, and it'll be a good time.
If you have any thoughts or comments, do send them in to the show.
Otherwise, thank you so much for tuning in to Linux Prepper and have a wonderful day.
Bye.