Repurposing Battlestations from old hardware, local sharing offline and LinuxFest Northwest Trivia
S01:E03


Bellingham, WA

Episode description

(00:00)

Previous episode on local networking and offline tooling

(00:37)

Ameridroid Sponsor - Linuxprepper at checkout

(01:20)

Battlestations, and how they relate to Linux Prepper

(02:35)

Your input much appreciated on this long term exploration

(03:08)

Two Desktops from Thin Clients, Detailed Notes

(07:00)

Older Intel Graphics and X Window Manager Support vs Wayland

(07:52)

32-bit and Multi-Architecture Support, Wine, Proton

(12:00)

Love 2D Engine and Compiling Back 32-bit Support

(13:07)

Moonring, Free RPG Game inspired by Ultima

(13:21)

Balatro, Card Game

(13:27)

Arco - Turn-based, Mesoamerican Adventure Game

(14:03)

100 Tools You can Selfhost on Minimal Hardware

(14:57)

PipeWire, WirePlumber, qpwgraph, pw-jack

(18:30)

Aphex Twin Premium Episode Coming Soon

(19:22)

Memory Management from 16GB to 1GB RAM

(20:11)

Zram, adding efficient memory compression

(21:26)

12GB RAM total notes on setting up Zram, Swap, Swappiness

(22:39)

16GB RAM total notes on my other Intel NUC

(23:37)

EarlyOOM Daemon monitoring Zram and Swap

(26:14)

Swappiness, what triggering it does

(27:55)

Considerations for 8GB to 1GB of total RAM

(32:08)

DURP Challenger downgrading to Pi 3 as appliance

(33:02)

File Management with Core Utils through SSH

(33:31)

SCP for One-Way Transfers

(34:29)

Rsync - One-way Transfers with Comprehensive Control

(36:02)

Unison - Resilient Bi-directional Rsync for Two Locations

(38:09)

Scaling up without cloud services or servers

(39:34)

Syncthing - P2P Real-time Sync with client support

(42:24)

Rclone - Rsync-like support for Cloud Providers: Dropbox, Google Drive, S3

(43:30)

Minimal Homelab Implications; Try The Tools and Win

(45:01)

Container Concerns vs Understanding Tooling

  • Understanding how tools actually work when deploying

(45:42)

Trivia Winners from LFNW and Named Projects!

(46:13)

LinuxFest Northwest 2026 Wikipage with Youtube Links

(48:01)

SeaGL Call for Speakers Open Now!

0:00

Welcome back to the Linux Prepper Podcast, I'm the host, James. A show about self-hosting,

0:04

for the FOSS-curious, sysadmins, homelab enthusiasts, people who want to DIY things themselves

0:11

or just try self-hosting their own gear.

0:13

If you've ever tried this and failed, well, welcome to the club. We're going to deal with

0:17

self-reliance, local first tooling, and today's episode is all going to be about repurposing

0:22

old machines.

0:23

We're also going to dig into basic file processes, file networking for a local or offline

0:29

style setup in the home lab, and then we'll finish it off with trivia picks from the

0:34

recent LinuxFest Northwest.

0:36

So buckle up, here we go.

0:38

I want to thank the sponsor of this podcast, which is ameridroid.com. Ameridroid is

0:44

a Northern California (United States) based distributor of single-board computers, especially

0:49

ODROIDs from Hardkernel, and an official partner for Nabu Casa and Home Assistant hardware.

0:55

They offer global shipping, and they have friendly customer service you can call them

0:58

on the phone.

0:59

They're awesome people, super nice, they'll help you out.

1:02

It's definitely an advantage to order from them versus ordering directly from overseas

1:06

from Hardkernel and such, and they're just an all-around great provider.

1:11

Can't recommend them enough, thank you for sponsoring the show.

1:14

You can use LINUXPREPPER at checkout or use the referral link.

1:18

So what is a battle station?

1:20

A battle station really is just a cool term people have put online for a gaming type

1:26

setup in addition to an office, it's like you have some comfortable gaming setup with

1:32

an ergonomic chair setup to kick butt in an online game.

1:37

So we're going to take the same concept, but adapt it to the Linux prepper style user and

1:42

administrator, which means instead of going for something fancy, we're going to actually

1:47

just re-appropriate the gear we already have to make sure that it's working in a foundational

1:52

way that supports us in getting work done and assumes that we'll run into problems and

1:58

snafus.

1:59

So my focus with this has been all the hardware and all the software.

2:05

We're going to break our battle station segment into four sections.

2:08

First, we're going to talk about graphics and gaming performance, 32-bit specifically.

2:14

Then we'll jump into pipe wire, we'll follow that up with managing memory between 16

2:20

and 1 gigabytes on a machine, and we're going to end it off with local file networking

2:26

between two machines as the basis of a minimalist home lab before trivia.

2:32

I want to give a caveat that I got really deep in the weeds on this over the last six

2:37

months, and there's so much material, so I'm going to try to keep this really focused,

2:41

and that way we can break it into different chunks, different episodes.

2:45

Do send me your thoughts because this is the longest deep dive I've ever had, and I hope

2:49

you enjoy this condensed version of what was previously two hours.

2:54

Battle station, gaming setup, usually with a computer, monitor, some gaming peripherals designed

2:58

for optimal performance right and for your own comfort.

3:02

So in this case, we're going to build out our own little Linux Prepper battle station.

3:06

Two of them in this case, with the assumption that at least one is going to break down,

3:11

I've already lost two laptops, what's another one, you know?

3:14

So one's not good enough, we're going for two.

3:18

This means anything I'm running on one machine, I have to be able to run on the other,

3:23

and that means the directories have to be managed equally across the machines.

3:27

So if I work on machine A, and I work on something else on machine B, it's not going

3:32

to mess me up.

3:33

I've got a system I can understand, and that's what we're delving into today.

3:37

We're going beyond the first steps, and now we're going into the next steps, which is

3:41

making machines we can actually use.

3:44

Quick recap of what happened with that fancy laptop.

3:48

It was the fact that I had a new laptop, but it had broken.

3:51

I couldn't get access to the BIOS, the screen was damaged, so that inspired me to crack

3:55

open my discarded gear drawer.

3:57

I call it my cabinet of shame, and I also have a steamer trunk of the same, full of old projects

4:01

and parts.

4:03

So I dug into that, and you can see in the forum, I have a detailed technical breakdown

4:07

of what I used.

4:09

The basis is two Intel NUC thin client computers; one has a solitary USB

4:15

3 port, it's an i5 that I upgraded to 16 gigabytes of RAM, and the other one has a USB

4:21

2.0 port, and I dug around and I was able to get it up to 12 gigabytes of RAM from the trunk.

4:28

And I have two matching 24 inch monitors, so I put one on each.

4:32

I have a keyboard, my Dell, which I've mentioned before, it's just an old Dell, you can

4:35

get one for a few bucks, and then I got some mice, they sell for about $20, and they're

4:41

absolutely amazing, I do link to them in the forum, and I have two simple desks.

4:46

I found some TP-Link Wi-Fi adapters, and Bluetooth adapters for USB 3, since USB falls

4:52

back from 3 to 2, I figured that would work.

4:56

You'll remember I only had one solitary USB 3 on one, and some USB 2 on the other.

5:01

That means in order to use devices, I was going to need to start chaining together USB

5:06

devices.

5:07

That's what I need.

5:08

What I ended up doing was I got powered hubs, and this is something that I've played with

5:13

in the past, but I decided to buy quality powered hubs: 10-port USB, individual

5:19

on/off switches, high amperage, I think they're at 5 amps.

5:23

Because of backwards compatibility, even USB 2 works totally fine with the USB 3 powered

5:28

hub on an Intel NUC.

5:30

You just want it to work, and the speed is less important.

5:34

I've used it successfully to run the mouse, keyboard, Wi-Fi adapter, Bluetooth adapters,

5:43

video capture devices, an external USB monitor, a small projector; everything has run without

5:49

issue.

5:50

I haven't had a single device that didn't work, which is a great testament to how well-supported

5:54

devices are these days, and honestly it's a good-enough scenario.

5:58

So the main thing that I noticed to get those machines up and running well is to go wired.

6:03

So running gigabit wired between them makes the biggest difference.

6:07

But as you'll see, it's not a deal breaker.

6:10

So what do I actually need for my battle station personally?

6:12

I need a USB microphone, which means I got one on both.

6:15

I need headphones.

6:16

I got them on both.

6:18

I need some pen and paper at both, and then I need some kind of chair and some tea.

6:23

So I've got my desk together.

6:24

I got my battle stations together.

6:27

And it was just a matter of getting my systems up to date running Ubuntu.

6:30

I took the Ubuntu LTSs, which were an LTS behind or two LTSs behind.

6:37

I got those up to date.

6:39

I'd already moved all of my data into cold storage in the past.

6:43

So I updated those backup images and then updated my systems.

6:48

And in general, it was a pretty seamless process.

6:51

I'm now running KDE Plasma on both, and what I realized on these older pieces of hardware,

6:59

they have the i915 Intel graphics stack inside, which is not powerful and not good.

7:07

I ran into a lot of issues related to that.

7:09

So what I would actually recommend on older hardware like this, even though it's an i3

7:14

or an i5, I would dump KDE and I would move to LXQt, which is Openbox-based, or you

7:21

could use XFCE, or you could use i3 if you were open to a tiling window manager.

7:28

All of those would work really well.

7:30

So yes, in running these, you are literally going back to an X-based system.

7:35

Hello, X, my old friend, but it's going to run better.

7:39

So you will get better performance by moving back to X instead of staying on Wayland.

7:45

It's worth considering on old hardware.

7:48

And there's also those minimal antiX-type projects and MX Linux that you can run as

7:55

well.

7:56

You still have some support in 32-bit land, but you do have to be careful.

7:59

The underlying dependencies are breaking.

8:02

Multi-architecture support for Debian is still available.

8:06

Despite 32-bit dropping off overall, you can still make it happen, as well as with Arch

8:12

and Void Linux, so there are still 32-bit options out there.

8:15

All of it is being put out to pasture, though.

8:18

Good news is there is still support.

8:20

I've been focusing on devices that I can run 64-bit on, but something that I want to

8:27

mention in terms of 32-bit is I've been struggling a lot with running 32-bit software.

8:32

The libraries and dependencies of 32-bit are problematic in their support, even in reinstalling

8:38

X, reinstalling multi-architecture and 32-bit support.

8:42

I find myself having to dig back through like, for example, the Debian repositories trying

8:48

to find these dependencies that have changed, whereas I thought that I would just be able

8:53

to run software, 64 or 32-bit, from 12 to 15 years ago that I personally own on Steam.

9:02

I've found a lot of problems with it, and a lot of that comes back to gaming, right?

9:07

Gaming type tools, but I'm really surprised to see that overall platform support for software

9:14

you would imagine would work, does not.

9:16

This is one place, Windows itself crushes Linux.

9:22

Windows still offers that 32-bit support, just like Wine gives you that 32-bit support.

9:27

If you want to run 32-bit software in Linux, it's painful on modern operating systems.

9:34

Even tools I was able to run without issue in the last, say, five years are performing

9:39

worse in 2026 than they were at that time.

9:43

That's frustrating, and obviously your repositories have changed and packages have changed,

9:48

but I see this as a limitation of Linux itself in terms of its 32-bit support and support

9:55

for legacy code, which is just not as good as you would imagine.

10:00

So we keep chasing the shiny and new, but the truth is, is the old code you have, it doesn't

10:05

mean that it's aging well, and you might be surprised when you try to run it and you run

10:10

into difficulties with these old applications.

10:14

I realize we're really focused on gaming right now, but it's important because if we step

10:19

away from that 32-bit and old systems, and we come back to these Intel NUCs with these

10:25

Intel i915 stacks from 12 years ago, whatever, and you look at those in terms of, say,

10:33

using Steam: well, the problem is that Steam itself and the Proton technology are not supporting

10:40

these old titles either.

10:42

What's recommended is, oh, use Proton. Well, guess what: Proton dropped support

10:47

for this Intel graphics stack back around version 7, so you have to install old versions and

10:52

pray that they work. And all these things were working perfectly in the past; in the last

10:58

couple years, you could run any of these titles and now all of it's totally broken.

11:03

So, that's just not good, and this is not even about Windows games, we're talking about

11:09

Linux native games that are broken, and then you're trying to run them through Proton, which

11:14

is also not supporting the graphics stacks because they want to support the new graphics

11:18

stacks of Vulkan, whatever, it's just broken all over the place, it's not good.

11:23

So this is not good, if you bought a game in the past, you expect that you can use it,

11:28

and apparently that is not the case, and it's very frustrating, and then you're back in

11:32

the weeds, like it's 20 years ago, and you're trying to fix this dependency, and that

11:36

dependency, and you're deep in dpkg on the command line, it's like, dude, what happened?

11:41

What happened to just being able to like, have stuff run, and you assumed it would still

11:44

run in the future?

11:49

Maybe this incessant need to throw away the old for the new at all times, is holding

11:54

us back, maybe that's holding us back. Something to think about.

12:00

There is some good news, if you were like, man, I really want to run a modern game, well,

12:07

I have an option for you that is really exciting.

12:10

The LÖVE 2D engine is coded in C++ and uses Lua throughout.

12:16

It's ultra light and can still be built for 32-bit, which means you can also rebuild

12:21

games packaged through Steam or as AppImages, and run them on a 32-bit compiled engine

12:29

known as LÖVE. So what you can do is you can purchase a game on Steam, like Balatro,

12:35

unpack it, take that .love file, and run it through your 32-bit compiled version

12:41

of the game engine, and Bob's your uncle, you're able to run these games on a 32-bit system

12:45

right now, and that is the power of a free and open engine.
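
For reference, here's a minimal sketch of that repack flow. The paths and the 32-bit engine location are placeholders, and a fused LÖVE game is just a zip archive appended to the engine binary, so unzip can usually read the executable directly:

    # assumed Steam library path - adjust to your install
    cd ~/.steam/steam/steamapps/common/Balatro
    # extract the game data out of the fused executable
    unzip -o Balatro.exe -d balatro-src
    # repack it as a plain .love file (a zip with main.lua at its root)
    cd balatro-src && zip -9 -r ../Balatro.love . && cd ..
    # run it with your own 32-bit build of the engine
    /opt/love-i386/bin/love Balatro.love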

12:50

Specific notes for that will be included in the show notes, if you want to jump on the

12:54

forum and try doing this yourself. As a quick shout-out to these LÖVE-coded games

12:59

that you can still run on 32-bit, not to mention the Intel graphics stack, be sure to

13:05

check out: one is Moonring, created by the creator of Fable; it's an old-school Ultima-

13:10

style RPG, you can get it for free, so you should be able to get the AppImage directly

13:14

off itch.io and build it for 32-bit or run it on your local system.

13:19

There's also Balatro, which is a super famous card game right now, it's a major seller

13:23

where you basically create game-breaking hands of cards, and there's Arco, which is

13:28

a Mesoamerican turn-based fighting game, and I highly recommend checking out Arco,

13:33

it's worth noting the developer has mentioned they most likely will never make another

13:39

passion-project game of this kind, because it hasn't sold well, even though it's been

13:43

critically acclaimed, and you can play the demo of the game, I tried it on Steam, it's

13:47

super fun, great pixel art. Highly recommend checking out Arco, and Balatro, and Moonring.

13:55

As we wrap up this section talking about the more visual side of older systems, do keep

14:01

in mind that if you're running headless, the situation looks far, far better, especially

14:06

because of Go and Rust. Go and Rust: all these applications, I found over 100 great tools

14:12

that we recommend all the time on all these different podcasts, right, like Uptime Kuma,

14:18

all these different tools, they're all written in Go and Rust, which means you can use

14:23

them on 32-bit, awesome, they run on 32-bit, they run on 64-bit, and they're perfect

14:28

for running at home on low systems, like on very low RAM, one gigabyte systems, you could

14:35

run multiple of these applications, reverse proxy, DNS, DHCP, and there's a lot of options

14:41

up there, so in that way 32-bit hardware is still extremely usable, but let's jump back

14:48

into the desktop portion of things again, but this time, let's focus on audio.

14:54

On the audio front of things, I've had a fantastic experience, back in February, I started

14:59

looking into my options for pipe wire, for audio interfaces multiple of them managing them

15:05

on this local machine, because that was one of my goals, right, with getting started in

15:08

this whole project, and one of the things I found is that pipe wire support for audio

15:13

on Linux is amazing, it's fantastic, and I've had it such a good experience.

15:20

If you look at the forum post, you can see the crazy picture of all the stuff routed.

15:23

What you get is all your different USB connections, your Bluetooth connections, plug them

15:29

all in, like me, you might be wondering, well, how do I access an audio device, for example,

15:34

in Audacity, or in any audio recording tool like Reaper, how do I run this input to this output?

15:40

I'm confused, and I was struggling with that.

15:43

There's three different tools that are really important, and there's more, but this is a good

15:48

start.

15:49

Number one, WirePlumber.

15:51

WirePlumber is the session and policy manager for the PipeWire API.

15:56

WirePlumber is where the actual configuration files for the audio interface are managed,

16:02

so you can create WirePlumber configs, and that'll get you covered.

16:05

It gets better.

16:06

There is also qpwgraph, which is a GUI tool that shows your audio, video, and MIDI

16:13

assignments in real time.

16:16

So definitely save your existing configuration, because it's really easy to screw things

16:21

up when you have all the patch points, like an old telephone operator, you can just

16:24

wire anything into anything, it's so fun, and it's all visual.

16:28

So qpwgraph: if for some reason, for example, you don't hear something in your headphones

16:35

and say you're playing music on your speakers while you're recording, you could at any time

16:42

open up qpwgraph and say, I want to take the speaker output; or let's say you're

16:47

playing from Firefox, I want to take the Firefox output and route it to the headphones,

16:53

and then it's done.

16:54

It works.

16:55

You can do it on the left ear, the right ear, and stereo, you can split it up however you want.

16:59

Very, very cool.

17:00

I would say I was confused when I was trying to do things directly in different applications,

17:05

and I was also confused because I'm so used to alsamixer and PulseAudio.

17:10

PipeWire is a drop-in replacement for these applications, but for configuring PipeWire,

17:16

you'll want to do it directly in WirePlumber, or with something like qpwgraph.

17:20

For JACK assignments, that's pw-jack; that'll manage JACK assignments, which is like

17:25

pro audio stuff.

17:26

Here's my note on JACK.

17:28

You don't actually start a JACK server when you're using PipeWire, as PipeWire emulates

17:32

JACK.

17:33

Instead, you will configure the application to use JACK, which will then connect through

17:37

PipeWire without needing a separate JACK server.
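
A couple of commands worth knowing here; the application name is just an example:

    # show devices, streams and defaults as WirePlumber sees them
    wpctl status
    # launch a JACK application against PipeWire's JACK emulation
    pw-jack reaper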

17:41

Even with the basic wiring in WirePlumber, it gets you 95% of the way there.

17:45

I am happy to say once I figured out this basic routing in PipeWire, everything has

17:50

been awesome ever since, and I have been able to run 4, 5, 6, 8 devices, and everything

17:55

has been cherry.

17:56

So I highly recommend learning about PipeWire configuration; it rules.

18:01

If you enjoy hearing about pro audio and all this kind of fun DIY approaches, I'm going

18:07

to include a bonus episode with this one on Aphex Twin, the electronic pioneer, who has

18:12

made an entire 40-year career out of using any and every imaginable tool, be it the command

18:18

line, hardware synths, whatever, to make crazy, amazing music for years, and just every

18:23

tool is pushed to the absolute limit, and if that interests you, you can check out my dedicated

18:29

episode on Aphex Twin, who would squeeze the absolute ever-loving everything out of tools

18:35

like PipeWire.

18:37

Ways to support the show.

18:41

The best thing you can do to support this show is to share it with somebody else.

18:45

If you know anybody that likes this show, please share it with them.

18:49

The second best thing you can do is let me know.

18:52

You can send me an email, podcast@livingcartoon.org, you can drop into the Matrix or send me

19:00

a message if you like the show.

19:01

Seriously, this show is so small, and it actually does inspire me to want to continue.

19:07

And if this show interests you, let me know because I have over 15 pages of notes in regards

19:14

to this show.

19:15

Oh, hello, Carl.

19:16

You can hear him.

19:17

That's my cat.

19:18

Back to the episode. I would be remiss not to mention the RAM-shaped elephant in the room,

19:22

which is, of course, memory.

19:25

Memory is crazy, memory is limited.

19:28

So let's take a moment to specifically address memory, no pun intended.

19:34

Related to everything here, right, is memory.

19:36

Memory prices, disk prices being insane.

19:40

So what I'm saying is that having low memory on a computer inherently means you're going

19:45

to hit the limits of that computer quickly.

19:48

And I definitely found that, right?

19:50

I'm not going to lie in any way.

19:53

I hit the limits of memory immediately in terms of swap.

19:58

And so what I dug into a lot is zram.

20:02

And zram really helped with performance with some tuning.

20:05

And I just want to read what this person posted to Reddit and I'll link it because it's useful.

20:10

zram is a Linux kernel module that creates a swap space in RAM, as opposed to having it

20:16

on your disk.

20:17

Swap partitions on a hard drive or an SSD are slow.

20:21

And I found this.

20:22

I hit my limits in them because of browsers.

20:23

The web browser is the problem, right, a lot of the time.

20:26

But this results in stuttering when the machine decides to start using swap space.

20:32

zram is an alternative to swap space that compresses memory before storing it in a designated

20:37

space in RAM.

20:38

This means you get more efficient use of the RAM, as your system will now compress data

20:43

in memory when needed.

20:45

In using zram, I did find a notable improvement in my daily browsing experience.

20:51

And I'm not talking about any kind of browser level manipulation of things just in terms

20:56

of the computer itself and allocating zram for performance.

21:01

So zram is kernel-supported on all the major Linux distributions, but that doesn't mean it's

21:05

enabled by default.

21:07

For me, I had to install the zram-tools package.

21:11

And from there, I was able to set up a zram swap device.

21:15

That part was easy and it did work really well.
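
On Debian and Ubuntu, a minimal sketch of that setup looks something like this; the values mirror the ratios discussed below rather than any official defaults:

    sudo apt install zram-tools
    # /etc/default/zramswap - the two key settings:
    #   ALGO=zstd      # compression algorithm
    #   PERCENT=50     # size the zram swap at half of RAM
    sudo systemctl restart zramswap.service
    # verify the device and swap priorities
    zramctl
    swapon --show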

21:18

I'm going to pull up my notes and say exactly what I did.

21:21

Let me look.

21:22

All right.

21:23

Let's go back through my systemd configuration of zram swap.

21:28

Systems with 12 gigabytes of RAM.

21:30

How do systems behave?

21:31

12 gigabytes of RAM, enough for basic use.

21:34

But once you're adding KDE Plasma, gaming, web browsers, then I'm running out of memory,

21:40

which is exactly what's happening.

21:41

So let's talk about the role of zram in a 12-gigabyte RAM system.

21:46

It's absolutely critical.

21:47

It acts as a buffer before I jump into that disk swap.

21:52

And what it does is it prevents KDE Plasma itself from freezing.

21:56

Because that's what was happening to me with 12 gigabytes of RAM.

21:58

It's like, you watch a YouTube video, browse the internet and be doing something.

22:03

And the computer just seizes up.

22:05

And I'm like, oh.

22:07

So what it does is it's going to grant me an additional four or six gigabytes of compressed

22:12

RAM that's going to help keep the system responsive.

22:15

Instead of it just like locking up because once it locks up, there's like nothing I can

22:18

do.

22:19

It's like a frozen robot.

22:20

And I'm just like, ah.

22:22

So that's not good.

22:23

It's giving me a little bit of a safety net, a little bit of a, you know, a little space

22:27

to work with, take some of the pressure off.

22:30

So it, it did help and it helped on both the computers.

22:35

So let's talk about my 16 gigabyte RAM system.

22:37

So 16 gigabytes obviously is a little bit more comfort.

22:39

I can do more multitasking.

22:41

But zram in this case is still important; it's giving me like an extra six gigs of compressed

22:47

RAM, which is again preventing KDE Plasma in this case from locking up.

22:53

In case browser spikes take up all my memory because, you know, browser tabs themselves

22:57

are taking up many hundreds of megabytes, or 75 megabytes at minimum, per tab, and things just

23:03

add up quickly.

23:05

So swap is still there, but it's just rarely used, which is the intention.

23:12

So in this case, zram is about half the size of the available RAM.

23:19

The swap file is a quarter of the size, and then there's this setting called swappiness.

23:25

I'm taking my swappiness down, and I'm using a program called earlyoom, early out-of-memory.

23:31

And I am running that. Let's talk a little bit about earlyoom.

23:36

All right, this is from the project GitHub page.

23:39

What is earlyoom?

23:42

So earlyoom wants to be simple and solid; it's written in pure C.

23:45

It has no dependencies, and it has an extensive test suite, unit and integration tests, which

23:50

is written in, oh, who knew, Go. earlyoom checks the amount of available memory

23:55

and free swap up to 10 times a second.

23:59

By default, if both are below 10% it will kill the largest process, whatever has the highest

24:04

out of memory score.

24:06

Reason this matters is because if your computer's about to seize up, it's better to kill

24:10

the process than to kill your entire computer, and I'm specifically talking about the web browser.

24:16

So we're just giving ourselves multiple levels between zram, swap, and the out-of-memory

24:22

daemon to prevent losing everything.
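
For reference, a minimal earlyoom setup on Debian or Ubuntu might look like this; the 10% thresholds match the defaults described above:

    sudo apt install earlyoom
    sudo systemctl enable --now earlyoom
    # /etc/default/earlyoom - kill when RAM and swap both drop below 10%
    #   EARLYOOM_ARGS="-m 10 -s 10"
    # watch what it does under memory pressure
    journalctl -u earlyoom -f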

24:27

And okay, why not trigger the kernel out of memory killer instead?

24:31

You can make earlyoom trigger the kernel out-of-memory killer by passing along

24:35

the flag.

24:36

However, in some Linux kernels, triggering the kernel out of memory killer does not work.

24:42

That is, it may only free some graphics memory that will be allocated again, and not actually

24:46

kill that process.

24:49

You can see how this looks on machines such as Intel integrated graphics, which is what

24:53

I'm running.

24:55

How much memory does earlyoom use?

24:57

About two megabytes, of which only about 220 kilobytes is private memory.

25:01

The rest is a libc library that is shared with other processes.

25:05

So I have had no problem running earlyoom on my computers.

25:09

It hasn't caused me any kind of trouble, and it's really once I did that zram in addition

25:15

to this that I've had much better performance.

25:18

And this is a good little setup: zram, swap, reduced swappiness, and an out-of-memory

25:28

daemon or similar in case something just eats through all your RAM, which browsers are wont

25:32

to do, you just got to protect yourself from the whole system seizing up.

25:36

And let's say while we're talking about this, that you're like, what is this whole

25:39

swappiness thing and why does that matter, before we jump to this out-of-memory daemon killing

25:45

things?

25:46

What's the deal with swappiness?

25:47

So swappiness controls when the kernel starts moving memory out of RAM.

25:52

On a system where I'm now running zram and swap, that means it's affecting me on two

25:57

layers.

25:59

First you have the fast compressed zram, which is used before swap.

26:03

So swappiness doesn't change that.

26:05

Then you have the disk swap, which is slower; that's where swappiness matters, right?

26:10

At a setting that I have around 20, it means: I want you to avoid using the disk swap for as

26:15

long as possible because it's slow and limited.
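
Setting that looks like this; 20 is just the value discussed here, not a universal recommendation:

    # check the current value
    cat /proc/sys/vm/swappiness
    # set it for the running system
    sudo sysctl vm.swappiness=20
    # persist it across reboots
    echo 'vm.swappiness=20' | sudo tee /etc/sysctl.d/99-swappiness.conf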

26:19

That means on my 12-gigabyte system, and even on 16, I'm going to go through memory pressure,

26:23

and swappiness is going to determine how gracefully I'm working through my memory.

26:30

My goal is just to keep the system responsive.

26:33

And what I'm saying is, if swapping to disk does not need to occur, I'm discouraging it.

26:38

So how does earlyoom interact with zram, right?

26:42

earlyoom is this watchdog.

26:44

It's preventing the system from freezing once the memory becomes exhausted.

26:49

So it's monitoring the RAM, monitoring the swap, and once things fall below this threshold

26:53

of like, you know, so many percentage points, single digit, 10%, then the largest memory

26:58

hogs start getting killed off.

27:00

This is to protect KDE plasma and the system itself from locking up.

27:05

And this is before the kernel's out of memory killer like we already mentioned.

27:09

So why does this matter?

27:10

It's because KDE plasma does freeze on me.

27:13

If I exhaust the RAM with the browser, my system locks up.

27:16

At least I'm able to kill these runaway processes before the system locks up.

27:20

And with zram, I get a little more safety bandwidth time before I have to kill those processes.

27:28

If you had a larger amount of RAM, then this doesn't really matter anymore.

27:31

Once you have like 32 gigabytes, you've got a lot to work with.

27:35

But in this case, I don't.

27:37

And let's take a quick moment to talk about, does any of this matter when you have extremely

27:41

limited RAM?

27:42

Okay, so when dealing with 8 gigabytes of RAM, it seems like, so far for me, things stay

27:49

pretty much the same.

27:50

zram roughly half of the actual RAM, swap about a quarter of that 8 gigabytes.

27:58

And really all we're trying to do is prevent freezes when the browser spikes; earlyoom, I

28:02

still recommend it.

28:04

8 gigabytes still obviously is enough that your system can work.

28:06

You just don't want, you don't want it to die.

28:10

Now drop down into 4 gigabytes of RAM.

28:13

Let's say that now we're using XFCE or something like that.

28:16

So now it's just like, we got to keep the system responsive period.

28:20

So we're still doing roughly the same ratios, 2 gigabytes of zram or half of it, roughly

28:26

the same in swap, a higher degree of swappiness because it's just going to happen.

28:33

And we're still using earlyoom.

28:35

And then in this case, we just want to try to avoid these heavy services.

28:38

So if you only have limited RAM, obviously, you don't want to be running, you know, snaps,

28:41

containers, any kind of preloading thing or heavy workloads because it's not going to work.

28:48

It's just totally borderline, right?

28:50

Because even like say XFCE is going to use like several hundred megabytes just to run.

28:57

So zram will still give you some space, swap will still help with crashes, and earlyoom

29:02

will still help with systems locking up.

29:05

But you just have to keep things very, very limited, obviously.

29:09

And now we're getting into the space where, you know, the distributions themselves don't

29:13

recommend having this amount of memory or even close to it.

29:16

But what about if we dropped two gigabytes of RAM?

29:19

So if we get it down to two gigabytes of RAM, now we're just trying to minimize the actual

29:23

amount of RAM that's being used to keep the system running.

29:27

So you could still do zram.

29:29

Now you're basically doing half again, swap file half, high swappiness because you're going

29:35

to use it all the time, still using earlyoom.

29:38

But all like notification background services, everything we don't need, we're disabling

29:42

it, right?

29:43

No more indexing, no more extra anything, extremely tight.

29:47

So it's like a VPS style system now.

29:51

And if you're doing desktop, you're just keeping it light.

29:55

So you could use like XFCE or something, it's just, it's going to be slow, it's not going

29:59

to be good.

30:00

And then your browser, obviously, you just got to be careful with Firefox, things like

30:03

that.

30:04

We're not going to go into that.

30:05

But you can really tweak it, obviously, in order to not multitask so hard.

30:10

Once we drop into the one gigabyte RAM territory, now we're talking obviously like headless

30:16

server.

30:17

We're just running a terminal.

30:18

That's what I am anyway.

30:19

At that point, you could still use zram.

30:21

You could still use swap with high swappiness, but you just want to disable all extra services

30:28

like very aggressively.

30:30

So at one gigabyte of RAM, you're just like: anything you don't need, you're just taking

30:35

off because you're just hitting the threshold all the time, right?

30:39

And zram and swap might just keep your system running, period.

30:43

But I think the main thing to me is like, you just don't run anything extra, at least

30:47

I don't.

30:48

So even Compose, for me, on a one-gigabyte system, I just don't use it.

30:54

I mean, I'll just, you know, I'll do that on a fancier machine or whatever.

30:59

But I just hope this gives some ideas of RAM usage, and please write in to the show if you

31:03

have more ideas of how to maximize RAM.

31:06

But I did find that this did really help with performance for myself.

31:10

And once I get into a headless bare minimum one gigabyte, I more just worry about only

31:16

running the literal thing that I want to run and I don't run anything else.

31:20

That's the honest truth.

31:21

All right.

31:23

Now that we've jumped through all these concerns and all these whatever's, what actually

31:26

are we going to run on these devices?

31:30

Glad you asked.

31:31

This is something I'm very excited about.

31:33

Touching back on the last episode, having, you know, a basic little recursive DNS for

31:38

faster lookups and having these devices with limited RAM, but basic performance.

31:45

Well, obviously you're going to want, say, these two little computers to communicate together.

31:51

No matter how much RAM or how much performance they have, you want to get basic tooling done.

31:55

This is exciting because the tooling we're going to talk about is the same tooling that's

31:59

been powering Linux and Unix machines for decades.

32:03

Let's jump in on that.

32:04

So one thing you can do obviously is you can take a device, an old Pi 3, which a listener

32:09

did as part of the DURP.

32:11

That's the DIY Unfinished Resurrection Project challenge, to work on something that you've

32:16

been putting off.

32:17

They took an old Pi 3 and they installed a service called OctoPi (OctoPrint) onto

32:22

it, just to manage a 3D printer in place of an SD card as a way to send files over Wi-Fi.

32:30

And it previously had been used on a Pi 5, but they downgraded to the Pi 3, because

32:34

it has full size USB ports, it has wired ethernet, and it gets the job done.

32:39

So it worked well enough, right?

32:40

So that's an option: you have a sort of form-fit application you can run.

32:46

But let's go even before that and let's just say you're connected to this machine.

32:50

Well, the first thing you're basically going to want to do with any machine is you're

32:54

going to want to move a file or a folder around.

32:57

Let's focus on the terminal.

32:59

You're going to use the GNU coreutils, something like the cp and mv commands, copy, move: copy

33:05

a directory from one location to the other, right?

33:08

I want to copy because I still want to leave my original copy where it is in case anything's

33:13

wrong.

33:14

I have it in two places.
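
As a sketch, with placeholder paths:

    # copy; -a preserves timestamps, ownership and permissions
    cp -a ~/recordings ~/recordings-backup
    # move (rename) only once you're sure the copy is good
    mv ~/recordings-backup /mnt/archive/recordings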

33:15

Now let's extend that to these two machines.

33:18

They're on the same network.

33:20

So I connect them using good old SSH.

33:23

I make an SSH connection between the machines to move files, but I'm like, well, that doesn't

33:27

really make sense.

33:28

I'll just use the scp command.

33:30

In this case, scp literally means copy over SSH.

33:34

Who are we?

33:35

We are somebody that just needs to send something to a place.

33:38

One way, we're going to use the scp command to send from where our directory is right

33:43

now to user@host:directory and send the file or folder.

33:49

scp is equally useful to pull in reverse.

33:52

So you can scp to your local directory from a remote user@host, from whatever directory

34:00

you remember, and it'll pull from the remote machine to your local machine without having

34:06

to do anything else, which is pretty amazing.

34:08

So you can send or receive files using scp; very, very useful.
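
A minimal sketch of both directions; user, host and paths are placeholders:

    # push: a local file to the remote machine
    scp notes.txt james@machine-b:/home/james/notes/
    # pull: a remote directory back to here (-r for recursive)
    scp -r james@machine-b:/home/james/recordings ./recordings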

34:13

That works.

34:14

I do that all the time to send myself something between machines, and it works.

34:19

But it's also limited, because think about it, you're moving a file or a folder.

34:25

And right away, you're like, even though I'm just sending this off somewhere, I want

34:29

to maintain my time stamps.

34:32

I want to do a little check summing to make sure that this process is working properly.

34:36

I want to make sure that the ownership and all these things don't change.

34:41

Maybe I'm making changes on your behalf as the administrator, and I don't want you to

34:45

know about it, because it's a network drive or something.

34:47

So in that case, I'm not going to use the scp command anymore.

34:51

Now I'm going to use our sync.

34:54

And our sync has been around for 29 years.

34:58

It's an unbelievable tool.

34:59

It is the basis of all modern backups, more or less.

35:02

So what you're doing with rsync is the same concept.

35:06

You're moving a file from one place to the other or a directory of files, but now you

35:11

have really comprehensive control over what rsync does.

35:16

So you can still use scp, that's all foundational, but with rsync: hmm, I want to move

35:23

this series of directories to this new machine, but I'm worried I'm going to screw the

35:29

command up.

35:30

So I'm going to do a dry run; you can literally pass --dry-run, or

35:36

-n, and you can actually see what will happen without taking action.

35:41

So just for the dry-run and analysis capabilities alone,

35:44

rsync is an amazing tool, and of course it can be run through SSH, or locally, or to

35:50

an external disk or for backups.

35:53

rsync is killer in a one-way fashion.
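
A sketch of that workflow, again with placeholder host and paths:

    # preview: -n (--dry-run) shows what would change without doing it
    rsync -avn --delete ~/recordings/ james@machine-b:recordings/
    # real run: -a preserves timestamps, ownership and permissions,
    # -z compresses over the wire, -v shows what's happening
    rsync -avz --delete ~/recordings/ james@machine-b:recordings/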

35:56

So if you want a more bi-directional approach than rsync, because now it's like,

36:01

okay, now I'm not doing the one way anymore.

36:03

You might use a tool like Unison.

36:05

Unison has been around for 25 years.

36:08

It is built off of rsync.

36:10

But Unison is designed to be run on two machines, which is perfect for me.

36:16

I have Unison set up, why?

36:19

Because I have these audio recordings that I'm moving back and forth.

36:22

Obviously I could use some other tool, but it doesn't make sense because these files are

36:25

pretty large.

36:26

I'm working with 40 gigabytes worth of audio files.

36:30

My Nextcloud is 5 gigabytes.

36:33

Google Drive is what?

36:34

15 gigs.

36:35

I don't need any of that cloud stuff, though.

36:37

I can do this machine to machine.

36:39

So what Unison does is what I was doing before with scp and rsync.

36:44

It's actually being run on the two machines.

36:47

It only runs when you call it; it's not real time.

36:50

It just works on the state of the directory you've named: both machines are looking at that state

36:58

and adjusting accordingly to keep a perfectly validated copy across both machines.

37:05

So it's like this directory is what it's supposed to be on machine A and machine B.

37:11

And even if one of those Unison links goes offline when they come back up, they'll sort

37:16

it out.

37:17

So you can be making changes and it'll get resolved.

37:21

That's all Unison does.

37:22

It doesn't like do any extra special anything.

37:27

It just keeps this directory as perfect as possible.

37:30

And you can do a hub and spoke approach and run multiple iterations of Unison.

37:36

But Unison's not designed to be run on top of itself.

37:40

So it's going to stay at that two machine limit.

37:44

It's not designed to work it out between four machines or six machines or eight machines

37:48

on the same directory.

37:49

No, it's not designed for conflict management.

37:51

It's just designed to keep this directory that you've defined perfect across two machines.
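
A minimal sketch of a Unison profile for that two-machine setup, saved as ~/.unison/audio.prf (the profile name, host and paths are placeholders):

    # the two replicas: one local, one reached over SSH
    root = /home/james/recordings
    root = ssh://james@machine-b//home/james/recordings
    # run non-interactively when there are no conflicts
    batch = true

Then running 'unison audio' reconciles the two roots, whichever side changed.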

37:57

If you want something that scales up beyond Unison, keep in mind, the tools we're talking

38:02

about, these will run on potatoes.

38:04

I'm talking, I ran these on a 400 megahertz box that I have at my house.

38:08

So all of these tools run great on anything that can run SSH.

38:13

So if you can't run SSH, you basically can't use the computer for networking.

38:18

And scp, rsync, and even Unison will get you there.

38:24

But if you want to scale up, that means you're going to obviously be using more resources.

38:28

But now we go to another excellent tool which runs in real time.

38:32

And no, that tool is not Nextcloud, it's not any sort of cloud service.

38:37

We could use cloud services, but that's not the point of this episode.

38:41

And even if we did, there's a big caveat with Nextcloud, which is that it screws stuff

38:45

up.

38:46

And I have been using Nextcloud for years.

38:49

I use it offline; like, I'll run it, for example, only on Wi-Fi.

38:54

I have the Nextcloud desktop app on at this moment, but usually I just have it turned

38:58

off on these machines because I don't want to use the RAM, right?

39:02

And in this case, I don't need to move the data to an additional server.

39:08

I don't need to use WebDAV, and I don't fully trust WebDAV, because I do have issues

39:13

with these kinds of database files or these large, chunky directories.

39:17

When all I'm trying to do is just migrate my changes from machine to machine.

39:22

In that case, I personally don't want to use Nextcloud, or I don't trust it as much

39:27

as this next tool, which does not require a server.

39:31

And that would be Syncthing, right?

39:35

We're going beyond KDE Connect because that doesn't scale in this case.

39:39

So that's a way to send things between a folder on your phone and your computer or whatever,

39:45

but yeah, we're moving beyond that.

39:47

Syncthing can be used on as many devices as you want.

39:50

What's cool is you actually have a global look at each node at each device on the system,

39:58

right?

39:59

And they're running in a peer to peer fashion.

40:00

It can scale up as much as you want.

40:02

And what happens is every single node or client is aware of each other, even if no changes

40:07

are being made.

40:08

So say you're running five Syncthing nodes and they're all working together.

40:13

They are aware of each other.

40:15

So they know, for example, oh, node five is offline.

40:19

Node three has gone offline in the last hour.

40:22

And that means that changes can be propagated in a peer to peer fashion.

40:26

There's not a central node.

40:28

And the nodes will help each other make this sync possible.

40:31

So Syncthing is a super useful tool, and I have found that it works well in a similar

40:37

fashion to Unison, where I turn it on when I need it, and turn it off when I don't.

40:44

And I think if the only goal is to keep that folder synchronized, obviously, you should go

40:48

back to Unison between two machines.

40:50

But if you want to have more flexibility, with phone clients and stuff, then Syncthing

40:54

is a solid bet, even with like two gigabytes of RAM or whatever on a box, I say minimum.

40:59

Syncthing is good.

41:00

Use it when you want it.

41:03

And I would use Syncthing over Nextcloud because you just don't need the server aspect.

41:08

I just need to move the folders back and forth.

41:11

I can do it over VPN, whatever; Syncthing will make it happen.
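
Syncthing is mostly driven from its web GUI, but getting it running on Debian or Ubuntu is roughly:

    sudo apt install syncthing
    # run it as your own user, started at login
    systemctl --user enable --now syncthing.service
    # then pair devices and share folders from the web GUI
    # at http://127.0.0.1:8384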

41:16

No tool is perfect, but these are always tools you can fall back on.

41:21

And because I'm a guy that comes especially from Nextcloud: even Nextcloud themselves

41:26

recommend Syncthing for just moving large data sets, and rsync, as these tools work.

41:31

And it avoids having to do the whole sync up to the server, sync down from the server

41:36

to your device.

41:37

You're just moving device to device.

41:39

And plus you have things like doing deltas, which is looking for what changed and you get

41:45

functionality with compression, whether it's with Syncthing or rsync.

41:49

So this is good.

41:52

And I think we'll leave it there for this episode as far as these tools.

41:58

But these are like the foundational tools that you can use to build yourself a basic

42:02

home lab.

42:03

So you're moving folders.

42:05

You know where they are and now we're starting to do backups, right?

42:10

Using one way synchronization, really what you're doing is you're making a backup.

42:13

You're moving your data to another location.

42:16

Maybe you're doing some level of compression or encryption.

42:19

And I'd like to get into that in a future episode.

42:22

But I'll give you one more tool because you can run it on a low-end machine.

42:26

And that is rclone.

42:29

Once again, both of these tools are coded in Go.

42:32

So rclone is a command-line tool that is inspired directly by rsync.

42:38

But it's for cloud providers.

42:40

You can use rclone as a way to access a remote cloud provider like Google Cloud, Google

42:46

Drive, Dropbox, Nextcloud WebDAV, S3, whatever.

42:51

You can connect to a service provider, a backup service, which people do all the time.

42:56

Using rclone, you'll be able to have a mount, like a FUSE-style mount, of a remote

43:01

location that otherwise does not support it, in order to send directories rsync-style

43:07

to that location or pull in either direction.

43:10

So it's not designed for real-time usage, it's inherently a bit flaky to mount these providers

43:19

that otherwise do not support this technology.

43:21

It's, you know, I wouldn't fully trust it.

43:23

I know people do, but I would be wary.

43:25

But if your goal is to get your data out of Dropbox or whatever, rclone will make

43:29

it happen.

43:30

And if you limit the threading and the amount of transfers and stuff, and you just really

43:34

lock that down at the configuration level, because it's a command-line tool, you can run it

43:38

on an extremely minimal machine and rclone will work.
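
A sketch of that; the remote name is whatever you called it during the interactive config, and the flag values are just examples for a weak machine:

    # one-time interactive setup of a remote
    rclone config
    # preview first, then sync down, throttled for a small box
    rclone sync dropbox:Photos ~/backup/photos --dry-run
    rclone sync dropbox:Photos ~/backup/photos --transfers 2 --checkers 4 --bwlimit 1M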

43:42

Anyway, that's enough of these tools for this time.

43:45

Something you'll notice from this sort of experimentation and local first tooling, like

43:50

we're talking about, is really what you have is you just have a home lab.

43:54

That's what this is, right?

43:55

This is a home lab setup.

43:57

Every one of these tools is a valid tool that can be used professionally that you can fall

44:02

back to, even if you're using other things, you can always use this tooling in addition.

44:06

And that's what people do.

44:08

So congratulations.

44:09

If you're running any of these tools at home, you're running a home lab, you know, try

44:14

using SSH to make a connection, try using scp to copy a file, try using rsync to verify

44:22

that file.

44:23

You could try using Unison between two machines to keep a folder in sync, or you can make

44:29

backups with rsync.

44:31

And these are all valid tools that you can use forever.

44:35

And you have a basic kind of peer-to-peer connection between your machines, whether it's local

44:39

or then through a VPN.

44:41

And that's what I'm working towards is giving you the tooling to have a rock solid, awesome

44:48

setup that you actually understand.

44:50

And we're building up, right?

44:52

We're building up towards playing with containers and playing with these other things.

44:56

But either way, I think we're addressing the biggest thing these devices have locally in

45:00

regards to how well do you actually understand a tool that you're using.

45:07

And how well is it actually working for you?

45:09

So that's where we're coming from is we're going to understand the tool.

45:13

And that way, however you've set it up, we don't care.

45:16

But the point is you at least know what it's doing.

45:19

And it's worth taking the moment to figure out as we continue building up into more and

45:24

more workflows beyond, say, now taking a machine and a device and growing that to a couple

45:30

machines, couple devices, and beyond.

45:34

I think we'll close this thing out by going over some of the trivia winners from the recent

45:39

LinuxFest Northwest, which will also be dropped as a bonus to this, talking about that

45:44

experience.

45:45

It was a lot of fun.

45:46

But let's go into some of the named projects to end out this episode, projects that people

45:51

who played trivia named off, I told them they can name any project they want and I will

45:55

share it on the show and let's talk about those.

45:59

So trivia was a mix this time of questions, which people found quite challenging in regards

46:06

to Linux open source trivia, followed by a practical bonus round of free and open tooling

46:14

that people have actually used in their daily lives, whether or not they've done it

46:17

as a point.

46:19

The top score was Romeo with a total of 30 points: 22 in the main round, eight in the bonus.

46:26

Romeo named off Ubuntu.

46:27

Romeo also gave a talk on Ubuntu at LinuxFest Northwest called Ubuntu Without Handlebars.

46:33

We'll link to it; you can check out that talk if you want, the recording for it is

46:37

available on YouTube. Trailing immediately behind, right on the shoulder of Romeo by a single

46:45

point, was Matthias of VictoriaMetrics, VictoriaMetrics being a drop-in replacement for

46:50

Prometheus/Grafana.

46:53

That was 21 points plus eight in the bonus, so 29, it was very close.

46:59

The other score, very interesting, which got 10 on the daily application of tooling in

47:05

their life, was Salt, who named SeaGL; the SeaGL conference at the University of Washington

47:10

is currently accepting talks if you want to submit for that.

47:14

Let's keep going.

47:16

From there, we had the GNU Boot project.

47:21

We also had Meshtastic, the decentralized communication network; Pi Club, which is based

47:27

out of Washington; Slackware, the gentleman for that said he runs a Slackware mirror

47:33

with his father, and they run all of their software through the mirror locally and even

47:38

offline.

47:39

That's interesting.

47:41

We also have the Triton Data Center, WWU, that's Western Washington University, Furry

47:47

Tail, Nostr, FusionPBX, Nextcloud, another Pi Cloud, Clonezilla, take the stone.xyz,

47:57

Nextbook, Librephone, OpenStreetMap, Boo Kitty, zipline recruiters, and yeah, if you

48:08

want to check out any of those, feel free.

48:11

A big thank you to everyone who participated in the trivia, and also thank you to Linux

48:17

Fest Northwest for letting me be there at a table; it was a great time, and I recommend anyone

48:23

interested consider being a part of SeaGL this fall, and check out the link if you want

48:28

to submit as a speaker to participate in SeaGL, all about GNU and FOSS.

48:35

Coming up in the next episode, we'll be talking about monitoring of services, making

48:38

sure things are running, making sure that the networking is running smoothly, because once

48:42

something's actually online, how do we know?

48:46

So we'll dig into that next episode, we'll also touch base on a little more on the Linux

48:51

Fest Northwest conference, what happened there, what you can expect at Segal, and it'll

48:57

be a good time.

48:58

If you have any thoughts or comments, do send them in to the show.

49:01

Otherwise, thank you so much for tuning in to Linux Prepper and have a wonderful day.

49:05

Bye.