Welcome back to Linux Prepper Podcast.
Please enjoy this bonus episode on a great year in Linux.
HB and I will deep dive into containers, things like Docker: specifically, who are they
for and who are they not for.
So if you're a home enthusiast thinking about containers, now you can enjoy that talk.
And if you have any thoughts, please do send them in to the show, podcast@livingcartoon.org,
or you can join discuss.james.network.
I wanna give a little precursor for this
because you should understand if you're interested
in containers, then you are someone who is interested
in self-hosting your own services,
be it at home or otherwise.
Maybe you already do.
That's great.
And you're curious how containers may help your existing setup.
So if that's the case, this episode was made for you.
And we're focusing on Docker, which is just the industry standard.
But what we're talking about with containers,
and turning those container run commands into
a Compose file structure,
can be applied to Podman or any other iteration of container virtualization type
of technologies.
So who this is for: the self-hosters and the DIYers,
anyone running a service, or anyone who wants to understand how containers could apply to them.
What this talk will not teach you, which you must understand in terms of running services
in general, but also containers, is file system structure, how to have proper backups, and
most importantly, how to manage file permissions.
Because whether you're in a container or not, at some point you will need to have just a
basic, solid understanding of permissions and how they apply to yourself as well as
any other user using your services, or however you choose to run your system.
So that said: if you're running your own services and you're curious about how containers might help you,
what basic containers are, what compose files are, or
examples of how these can apply and scale across any number of different services (because that's where it gets really exciting),
this episode is for you. Please do send in your feedback to podcast@livingcartoon.org and share this if you find it useful.
I really, really hope it gives you a nice back-and-forth discussion take on how containers could be applicable to you.
And we'll also have links to the Lemmy discussions
where we asked users not interested in containers
about why that is.
So yeah, thanks so much for listening and enjoy.
AmeriDroid.com is the sponsor of Linux Prepper Podcast. They are the U.S.-based
distributor of single-board computers and home automation products for companies like
Hardkernel, who produce the ODROID and other single-board computers.
It saves you the hassle of having to order overseas, and you also get friendly customer service.
Call them on the phone! They have global shipping options, and we are proud to have them sponsor the show.
You can use "Linux Prepper" at checkout, or I'll include some links to some of my favorite products.
A big thank you to AmeriDroid.com.
So what are containers and what do they do?
Containers are just giving you a reproducible build of the whole software package.
It's making it so you can define what is ephemeral and what actually lives on the file
system.
What lives on the file system is set through mounts,
also called volumes.
You're setting locations on the file system where data actually lives.
Everything else is ephemeral and can come and go.
The software that you don't really care about,
that's running the actual service that you'll, you know,
replace or update or whatever.
And it's a really useful way to set something up
and also blow it away at a moment's notice and replace it,
while having the exact directories of data that you want
continue to live where you expect them to be accessed.
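As a rough sketch of that split (the container name, paths, and image here are just illustrative, not from the show), a volume mount pins one directory to the host while everything else in the container stays ephemeral:

```shell
# Persistent: ./books on the host is mounted into the container at /data/books.
# Ephemeral: every other path inside the container's filesystem.
docker run -d \
  --name demo-service \
  -v "$(pwd)/books:/data/books" \
  alpine:latest sleep infinity

# Blow the container away and recreate it any time;
# ./books on the host is untouched.
docker rm -f demo-service
```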
I asked HB for an example: tell me about your text adventure game. Yeah, so this text adventure game, which is called Mermaid's
Secret Journey,
was inspired actually by my daughter.
We've been working on it for a little while, but basically she had these sketches that she made for me. And as you know, James, you know, I've been doing ANSI art for a long time. I fired
up Durdraw, which is a great editor, but it does a lot more than that. So I've been drawing
up the scenes for that with Durdraw. And I actually had a, you know, MagPi magazine, if
you guys aren't familiar with that, that's a Raspberry Pi magazine. 2017, man. Pretty old. And you know, I was looking through it and there's this text adventure
guide, you know. It's a pretty minimal one. What I have now is much more advanced. It just kind of
reminded me of the fact that I wanted to do that. And I went ahead and got started on it. I'm
actually done with the first level. There's still some things that I've got to figure out, but it worked really well. I'm going to migrate
to a different shell setup because actually I'm able to use
ttyd. I don't know if folks are familiar with that. I can't really
explain it too much. So with ttyd at least I was able to get a basic web
server, kind of like a wrapper, to help the shell run like a normal
game.
So now it's centered in the screen.
It's a lot larger.
The art is, you know, populating correctly.
I've got that in a Docker container and, you know, I find it pretty ideal for something
like this.
You know, I'm developing it locally.
I don't want to have to buy a VPS to work with it.
I have space on other servers in my
apartment, but of course those can take a lot more energy than I would like, especially for
something like this. So I can basically build my own images, run them with Docker Compose,
see how they're doing, make changes, restart the container, and see things
pretty immediately, which is nice, especially for something like this, like a
text adventure game; you kind of have to tinker with it a lot to get it to work.
Like, what does it fix? What does going to the containerized ttyd solve for you, working on
the game? And for me, you know, just kind of systemizing it a little bit,
so that I can be guaranteed the image is always going to be the same, right?
And if I have to rebuild the image, it doesn't take a long time to do.
I'm almost done with the first level, and I'll pretty much start building another image at that point.
I can shoot it to Docker Hub, in case people want to try it.
I haven't pushed it to GitHub yet, but I do have it set up for Git right now.
So I'm still getting all the versioning and everything.
It just makes it easier for me to test things.
And I think that's one of the big reasons why I started to like Docker a lot.
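As a rough sketch of that build-and-push workflow (not HB's actual setup: the file names, the ttyd wrapper, and the Docker Hub repo are all hypothetical stand-ins), wrapping a terminal game in a minimal image could look like:

```shell
# Hypothetical Dockerfile for serving a terminal game through ttyd
cat > Dockerfile <<'EOF'
FROM alpine:latest
# ttyd serves a terminal program over HTTP (default port 7681)
RUN apk add --no-cache ttyd python3
COPY game.py /app/game.py
EXPOSE 7681
CMD ["ttyd", "python3", "/app/game.py"]
EOF

# Rebuilds are fast because the image is so minimal;
# push to Docker Hub so other people can try it
docker build -t myuser/mermaids-game:dev .
docker run -d -p 7681:7681 myuser/mermaids-game:dev
docker push myuser/mermaids-game:dev
```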
And as part of developing it, do you find yourself running like a number of different iterations of it via containers at a given time?
Yeah, I mean, I'm changing it all the time, but that's the thing that makes it easy.
I mean, as you know, with Docker Compose, you know, I basically just have to restart the container
as I'm going along.
So I'm working on the Python code.
If I need to make a change, I can rebuild the image pretty fast because it's so minimal.
I mean, there's really not anything in there except for ttyd.
I did my best to kind of avoid too many dependencies if I could.
So I didn't import a lot of libraries.
It's a really simple game anyway.
So it's mostly text and since it's shell, I mean, it's pretty basic on its own.
Doesn't Python already offer some virtualization options with virtual environments and such?
Do you like, is that kind of redundant in terms of Docker as well? Or is it just something
you're more comfortable with? Or can you speak to that?
Well, it could be redundant in that sense, but I don't really have to virtualize it inside
the Docker container. But while I'm testing it, you know, keeping my system clean is important to me
'cause it can get really messy after a while.
So I am comfortable with using virtual environments
with Python now, as far as like staying organized,
not corrupting my system in any way.
I don't wanna have a bunch of stuff going on,
getting installed globally, this causing conflicts.
It's similar, in a way, you know, I mean, to virtualizing something like that
in a virtual environment.
I'm not going to say it's the same, because someone will beat me up.
Definitely the philosophy is similar: keep things contained and
organized, avoid conflicts, minimize dependencies.
Yeah, all those things that I think are great.
Obviously you have this project, which is great.
When does it make sense for someone to try out containers?
I was running everything on bare metal for a long time,
and partly I was kind of intimidated by Docker in a lot of ways,
because I didn't really understand it.
I watched a lot of videos on YouTube about it,
and started to understand it's not that confusing.
It's not that complex.
Everybody can give it a shot, and I think now is the right
time. There's no real penalty for trying, you know, containers. For me, I was just thinking about
the bare metal days. Like, I know some people prefer that, and I understand; there was a time
when I did as well. But I can think of at least one situation where I'm running a VPS, paying for the VPS, and it's not anywhere near where I'm at,
so the lag time is already bad.
And I'm running a program bare metal
that's having a lot of issues,
and I have to restart the VPS over and over
to get it to run correctly,
and that kind of troubleshooting gets to me.
I mean, even as someone who does this for, you know... I don't work for a tech company, but I can honestly say
that moving to containers allowed me to run services on very minimal
hardware. So I've got an Intel NUC, for example, that's just sitting on my
fireplace, and that was my first real server in the house.
And with that I was able to, you know, spin up Portainer and manage my containers and run all kinds
of services, try them out. If I didn't like them, I could just nuke them pretty easily. So that was,
I think, one of the first times that I really started to try it out, mostly
as a learning thing.
And I think that that's probably one of the better things.
If people are working professionally in the tech field, it's just something you're going
to have to know whether you like it or not.
I hate to say it, but that's probably the truth.
Yeah, that's fair.
I guess my other question to you in terms of that, too, is: if you're running containers,
you know, in Docker or in
Podman, it doesn't matter, there's the option to basically run a container. So
you get a Docker file, we'll say, onto the file system, and then you
execute that with a docker run command or with a compose file, either one, and then the service will start up. So let's
say you're running, like, Calibre-Web, the ebook tool. So you run it. It doesn't matter if it's on
ARM, as long as the architecture is supported. It spins up, and then you get this service running on localhost
on your local network that you can play around with.
And do you think when someone does that,
that it's better to use a compose file versus a run command?
Or I'm just curious what you think in terms of using a tool
like Portainer for management.
How do you recommend that for someone,
especially getting started?
- You know, I would recommend that they understand how docker run works.
To me, it feels a lot like Bash scripting; it's pretty similar.
But I would say that, for me anyway, Docker Compose is a lot cleaner way to do it.
I'm just going to say that. Primarily, you know, I like to keep myself organized, so I have a folder on whatever system I'm working with that has all the Docker Compose files; any data or whatever is in the same directory as the Docker Compose file related to that service. And especially just being able to restart containers pretty easily.
It's just really a nice thing to be able to do.
So I feel like Docker Compose, it's a little intimidating to people
at first, because YAML is a bit picky sometimes
about syntax and things like that, and then somewhat not.
You know, I could probably send you a Docker Compose
file right now, for example,
that has everything set up, and you could reproduce the setup that I have. That's pretty amazing to me. So yeah, I always
opt for the cleaner option just because I like to keep track of where things are, you know,
what they're doing. And kind of an extension of what I said about, you know, using Portainer,
not that that's the only one that you can use to monitor stuff. Certainly it's nice to be able to see, "Oh, this container
is down. Why? I can restart it now and see what's going on." Or I can check the logs immediately
in this UI. Convenient features like that, it's pretty nice to be able to do that just for
the sake of being able to read something instead of looking at it in a terminal and it's like 200
lines of log files. That's just how I feel about it. Yeah, definitely. Yeah, so for me personally,
I really embraced Docker some years ago in testing for NextcloudPi,
which I'm running, like, was running,
I think, on a really old Pi, like a Pi 2
or something like that.
And I was running it, and I was kind of turned off
by the regular image, which is what people would
normally use, like this all-in-one image, a
Debian system that you would install on the device, right?
Taking over the whole device. But I was doing testing,
so I wanted the ability to start and stop it a lot and try different
iterations, because I was trying to write documentation and such.
So because of that, I moved to Docker for the same reason.
Even on a low-level device, like, yeah, it worked fine to have not only my
production instance, but then
little testing instances that I could spin up to test updates and things like that and
provide documentation and file GitHub issues. And that worked really well for me; even
as the hardware changed, I kept using these Docker images and just migrated them over.
When you mentioned having to deal with a full Debian image, I think that's one of the appeals of using
Docker in the first place: you get to choose what's on the image. You can
use Alpine, something that's incredibly minimal, to run an application. The
performance of it will be very noticeable. At least to me; I've run a bunch of
these things over the last years, and it's just unreal
how fast something will come up.
But also, that kind of flexibility, like you said, right?
You don't wanna deal with this huge Debian image.
So I have an example of this.
It's actually on my Docker Hub.
Right now, ComfyUI is an AI program
that I've been using a lot lately.
It's for images, so, AI-generated images,
and it's really an amazing piece of software, to be honest.
I mean, I don't work for them or anything,
but I think that their stuff is really great.
But anyway, I checked the repository,
and I put it on my computer,
and I built a Docker image around that
so that I could use it in a stack that I was building
for AI, which is called the Ultimate Bacon Cheeseburger Stack.
And essentially it's a bunch of AI services that I like to use. So I didn't have an image,
which is the point here. I didn't have an image that I could use for Docker for ComfyUI.
So I did go ahead and just build one, and it works just fine, actually, you know, it works a
lot better for me, because, you know, if I go and run docker compose down and
docker compose up for some other reason, right, it's working just fine every time,
and that's pretty great. And also I was able to have those folders, you know, we
talked about mounts and things like that, where all my images that I
generated are stored right on my computer. Of course I could do that with their Python program as well, but to me it's just a lot
cleaner, like I said; I favor that. And I'm sure you know Model Context Protocol, things like that, have been
pretty popular lately. I was able to build my own MCP image for the LLM stuff I was doing, and integrate
that with Open WebUI and my Obsidian notes, which,
there actually wasn't a real clear tutorial for that anywhere, but I was able to actually
get it working, which was kind of a shocker. But those are the kinds of things I would like to
highlight, at least: you have the ability to kind of take these open source repos
and make your own images.
That can be really useful for a lot of reasons.
Yeah, definitely.
You're reminding me, too, of why I migrated to compose files.
And I realized that there's two different reasons, which you also connect into.
So one is, running docker run also didn't feel clean to me, because I would be like,
what did I do?
And so I would export my history, my bash history into a file so I could look at it later.
And that felt weird.
I mean, I still do it, but I wanted to be really clear about how I'd run Docker.
And I guess when you run Docker at, like, the simplest level, what you're doing is
docker run, and then you're defining some variables. So you're defining, like,
environment variables, possibly, let's say your time zone, permissions, and the
mount location where the data lives, as a command, and you type that out along with what image you're using.
So say it's calibre-web/calibre-web,
connected to their GitHub or something.
And then the service spins up from there.
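Spelled out, a run like that might look as follows. The specific flags (TZ, PUID/PGID, /config) follow the LinuxServer.io Calibre-Web image's conventions, and the host paths are illustrative; adjust for your own system:

```shell
# Environment variables (time zone, user/group permissions),
# volume mounts (where the data actually lives on the host),
# a port mapping, and finally the image you're using.
docker run -d \
  --name calibre-web \
  -e TZ=America/Los_Angeles \
  -e PUID=1000 \
  -e PGID=1000 \
  -v /home/myuser/calibre-web/config:/config \
  -v /home/myuser/books:/books \
  -p 8083:8083 \
  lscr.io/linuxserver/calibre-web:latest
```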
And so let's say something changes,
the image location changes, anything changes,
and you're trying to update it over in the future.
For me, I got confused, 'cause I was like, what command did I run?
And so then I'm going back through the history and I'm like, this is awkward.
And I thought, I don't want to do it this way.
So because of that, I moved over to compose, and compose is just a compose.yaml file.
And it's the same basic thing.
You're executing a command.
It's just broken out syntactically.
It's the same command, and there's a great tool that I've liked over the years called Composerize.
It's a website, and you just throw a run command at it and it turns it into a compose file.
And then what you end up doing, I feel like this is what you end up doing in Docker anyway,
the beauty of Docker is you can edit things. So the Compose file, you can edit the Compose file,
and then you just have your directory on the file system.
Let's say home/myuser/compose,
and each time you have a new Compose file, for me anyway,
I just make a subdirectory, call it, you know,
"calibre-web", and make my compose.yaml there.
And that's where I paste the same basic thing.
It's just, instead of docker run,
it has it laid out line by line, saying:
these are my environment variables,
this is my time zone,
this is the image I'm using,
this is the version (you can be as specific as you want),
but also, this is where the data lives.
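That directory-per-service layout, sketched out. The image and its variables again follow the LinuxServer.io Calibre-Web packaging, and the paths are illustrative:

```shell
# One subdirectory per service, each with its own compose.yaml
mkdir -p ~/compose/calibre-web
cat > ~/compose/calibre-web/compose.yaml <<'EOF'
services:
  calibre-web:
    # the image I'm using; pin a specific version tag if you want
    image: lscr.io/linuxserver/calibre-web:latest
    environment:
      - TZ=America/Los_Angeles   # this is my time zone
      - PUID=1000                # user/group permissions
      - PGID=1000
    volumes:
      - ./config:/config         # this is where the data lives
      - /home/myuser/books:/books
    ports:
      - "8083:8083"
EOF
# then, from that directory: docker compose up -d
```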
And that way, anytime I'm confused, I can go back to that exact file, and if I iterate on that file, I can always back it up.
So I can do, like, a
backup of my compose file as-is, which I always do before I edit it and make any changes. Because, like you said, if you edit a
compose file and it fails, it's very binary. It's like, it didn't work.
And then you might have to just go back to what you had written there before or whatever,
but it makes it really easy to keep track of what the heck you're doing and to
iterate in that way. So once I changed to compose files, even just for testing
NextcloudPi, I've never gone back. And it scales really easily too. And
where that connects is: you and I worked on services for Noisebridge together.
And that was all done in shared compose files connected to a reverse proxy on a VPS.
And that worked great. And there were, like, what, a half dozen people dinking around on it at a time?
And it worked really well, right? We just paid for a VPS up front for a few years,
and everyone edited it together using compose files, and it worked great.
So it definitely is like, I would not want to run docker run commands, you know, in, like,
any kind of shared environment, or even for myself.
So yeah, I find compose files super duper helpful, personally.
Yeah, we're on the same page for that, exactly, because just being able to find what you put in the
YAML file is a huge benefit over, you know, docker run, where you're checking your history.
That's exactly what I was doing: like, hey, what did I do? And then running docker ps: is this thing even
up? You know, trying to figure things out, it's a little bit more tedious than it needs to be.
you know having a Docker compose file,
you see people post them all the time.
There's awesome-compose, for example,
on GitHub and you can just see other people's files
and try them out.
I mean, that's just really cool that you can do that.
- And it just makes it easier to understand
what's happening within the container.
It just takes a moment to wrap your brain around, and it does, because it's like living
in the Matrix or something.
Like, you're spinning up an ephemeral container within your system.
It's there.
But, for example, you might have to issue a command through
Docker in order to issue a command directly within that
container.
If you wanted to do something to it in that ephemeral state,
you could enter it with exec -it and do a
/bin/bash session and actually go inside the container.
If for some reason you wanted to do that
and you wanted to enter the container, you could.
And I think that idea is something
that people find very confusing.
You can totally do it.
But at the end of the day, what you want
is to control things previous to that,
at, like, the compose, at the run level of executing
the container, and have it already in the state that you want.
If that makes sense.
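That exec session looks like this (the container name is a stand-in; on minimal images without bash, use sh instead):

```shell
# Open an interactive shell inside the running container
docker exec -it calibre-web /bin/bash
# Poke around, then exit; anything you change outside the mounted
# volumes disappears when the container is recreated
```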
So it's giving you flexibility in how you mess with a service, and the more you
iterate on something, in my opinion, the more useful containers become.
Yeah, absolutely.
I don't always use them.
I just wrote a, I think I mentioned to you,
I wrote a video rendering program,
and that's just, very Python-in-a-folder.
It's on GitHub, too.
I have a feeling you need to containerize that one.
So I think there are sometimes where,
maybe you just don't need to do it.
Certainly for web applications and other services, it's, like, really nice to have something
that's lean. Like I mentioned, a lot of my first services that I ran were running on
that Intel NUC, which does not have that many resources. But because of the fact that I'm
using very minimal container images,
I was able to actually get a lot of work done on it.
Kind of surprising, to be honest.
Like, I didn't think that it would be able to handle it,
but it's been doing fine.
I've had zero downtime in the last couple of months,
and I haven't had to do a lot of work to maintain that.
So it's pretty nice.
- Yeah, totally.
In a similar vein, like, I've had my compose files running
for a number of years and they just keep going, you know?
And I have them set to auto-restart,
unless I personally shut them down.
They just boot back up.
So if the power goes off and the power comes back on,
they just spin back up again and continue on as normal.
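That auto-restart behavior is a one-line policy in the compose file: restart: unless-stopped brings a container back after crashes and reboots, but not if you stopped it yourself. The service name and image here are illustrative:

```shell
cat > compose.yaml <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    # comes back after power loss or daemon restarts,
    # stays down only if I personally shut it down
    restart: unless-stopped
EOF
```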
When I'm using Docker, because I have
compose files, and I have clear notes, too, of what I've
changed, what I've done, I can review those, or even the
compose files themselves, and I know what the service is. Like, I
know what version of the software I'm running, I
know where the data lives, so I can get access to it when I need
it, and I can pick up where I left off. And so for me, I haven't actually needed any
kind of further maintenance tool. So I tried Portainer, for example, and I don't
need it, because I can just look at the compose file, or I just execute docker ps -a, and that tells me what services are running, or the
last time they were offline.
And that gives me enough information to know if something's wrong.
So all I need to do is use the basic, you know, bare bones command, the PS command to list
what's running and what's not, and I know what's wrong with the system.
And I don't need any further real monitoring beyond that because everything just continues in a functional state.
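The bare-bones checks described here are just:

```shell
# What's running right now
docker ps
# Everything, including stopped containers and when they exited
docker ps -a
# Same thing, trimmed to the columns that matter
docker ps -a --format 'table {{.Names}}\t{{.Status}}'
```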
So that's something about Docker that, yeah, I love. Oh, and when I recorded with Robin, Robin mentioned
Calibre-Web has added a bunch of new AI features and he's frustrated about it. But I realized I haven't updated my Calibre-Web in some time,
so that hasn't happened to me yet.
I haven't encountered any of that.
But the tool is also running fine.
So that's why I just kind of didn't think about updating it,
because I'm just running it internally.
But I like that about Docker.
>> There's a lot of stability in that,
and, like you mentioned, if there are features that people don't like, there's always that version of the image that can be reproduced.
Especially, like, when I mentioned the ComfyUI image that I made: there's nothing on there that I don't like.
It's stable, it works, it's been working. It's not the newest version either,
you know, but it works. So, you know, I can go and pull from the repository again, build a new
image, and have their latest image if I want, or I can not do it, you know. Because the thing with
Docker is that you have all the code in a working state; it's reproducible. So you do have that benefit for sure. And I think
that's a big one. Yeah, that's a big one. And another big one, just to put it in perspective too:
experimenting with services is a really easy, common way to use something like Docker. You spin up a container,
and as part of that, you can assign a basically virtualized network within Docker.
Let's say we call it my-big-network.
You can attach my-big-network to multiple containers,
which can include things like reverse proxy or whatever.
That way, you can connect all your services
that you've defined together into the same network
automatically like they can see each other,
which is so useful.
These are ways these tools are so useful.
It's like, I want the data in this directory,
let's say in my Jellyfin server,
I want that to be seen by my aria2 downloader app,
which is downloading files, you know,
that I want to watch, into that directory.
And I want both of these programs accessible to a reverse proxy,
and it's like boom, it's done.
So the way you can connect any number of services together
within containers is awesome.
Because if you can do it with three, you can do it with 30.
And that part of it is really cool too.
And that's something where, I'm not sure, I'm not as comfortable doing that without containers,
personally, because I can do so many iterations across any number of containers, which is
super handy.
All even within the same compose file, if I want.
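A sketch of that wiring, all in one compose file ("my-big-network", the service choices, and the aria2 image, a community packaging, are illustrative): services on the same user-defined network can reach each other by service name, and a shared bind mount gives both apps the same media directory:

```shell
cat > compose.yaml <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - ./media:/media              # Jellyfin reads what aria2 downloads
    networks: [my-big-network]
  aria2:
    image: p3terx/aria2-pro:latest  # illustrative aria2 packaging
    volumes:
      - ./media:/downloads          # same host directory, shared
    networks: [my-big-network]
  caddy:
    image: caddy:latest             # reverse proxy; reaches both by name
    ports:
      - "80:80"
    networks: [my-big-network]
networks:
  my-big-network: {}
EOF
```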
Yeah, absolutely. The AI stack that I mentioned is all on a network called ai-net.
And so I know that, you know, those containers are all able to talk to
each other, which they do actually have to do. So the MCP container, for
example, has to be able to reach the Open WebUI, and
Comfy also has to be able to reach Open WebUI, and
there's some other stuff on there that needs to reach other, you know, other services.
So that's another way to keep things neat, is to have those networks isolated.
You know, and certainly, you know, I have other virtual networks as well
for other services.
So it's just all together, you know,
nice to be able to tie things together
and it kind of, you know, reminds me also,
working with some larger stacks and compose, it's
just kind of crazy that you can bring, you know,
all these services up and down in a
matter of, you know, seconds sometimes.
- Absolutely.
Another thing that I've noted personally
in running containers coming from Next Cloud Pi,
especially, is like a tool like Next Cloud,
which has so many different components, right?
It's got the web server, the database,
all these different parts.
That's basically the most complicated thing
you could possibly run.
And most things are much, much simpler
in terms of what is required as far as different components.
And so, what I've found is it's easier
to run 40 random services than it is to run something
like Nextcloud, which has so many needs
in terms of how it's set up.
It has so many configuration things,
which is why NextcloudPi exists,
which offers the whole Debian stack as part of it.
It's just so complex. But in general,
I've found containers are very, very simple,
and I like that about it.
It's a lot to wrap your head around,
I think, like, in terms of abstracting a hardware layer into software, but it's also not,
it's not a horribly scary thing. And I think if you're starting with something like
Nextcloud, you're making it as hard as it could basically be. And if you try other tools that are simpler, it's pretty painless.
Yeah, I mean, I can think of at least one service that would be a nice one for people to
test: Etherpad. It's probably the most painless thing to set up. It's pretty much ready
to go as soon as you put the container up. That's my experience, anyway.
So that one's pretty simple.
- Yeah, totally.
Like, you can basically spin up the service
on localhost and then just access it
on your local network, you know, with an IP address
and maybe a port number, and that's it.
And then because you're using a container,
you can also redefine things
like what the port number is, which is also very useful, directly in the container. You could be like,
oh, for some reason, this port's not available; I'm going to write my own. And I think this
is where containers become so useful: you can define something that makes sense to you
personally, and it doesn't have to make sense to, like, the broader world. It's nice. Testing locally, too, is a really, you know,
fun thing to be able to do, because there's so little pressure. And I think even,
with Docker it's even better, because, you know, like you said, a lot of it is ephemeral.
You know, you can change the image however you want. If you write some new code, you can update the image and try it again.
You can test it on your local network.
You could even have people that are on your local network test it out, too,
if they want to try it.
It's pretty ideal.
You don't have to send something out onto the larger web just to try it or whatever.
Just makes a lot more sense.
Leaking secrets issues.
- Interesting note, I'll try to find the article to add in here,
but I saw that recently,
I don't know, basically,
there was a look over all the different images
that have been posted related to containers.
And so many of them include, like, personal information that shouldn't be listed:
people defining, basically, passwords and things like that as variables in their image, and then
posting that to GitHub or whatever.
It's interesting.
Apparently there were, like, many of them, like tens of thousands or more images, that were
leaking credentials.
Yeah, that's definitely a problem.
Docker has secrets.
If you aren't aware of that, you can go to the documentation and look at it. Or just something as simple as using a
.env file, or just being aware of the fact that you shouldn't be putting passwords in your repositories,
especially in your GitHub repositories.
There's lots of ways to protect your information
and you should definitely be doing that
because, guess what, hackers are looking for that stuff
on the repos.
I'm just gonna do a security
preachy bit here.
(laughs)
There's plenty of Google dorks that are specifically
for looking for credentials on GitHub.
Yeah, absolutely. And what HB is referring to is, anything that you type into Docker when you're,
say, running a command, there's no reason that you actually have to keep,
right, like, reiterating the same information. For example, it doesn't have to just be a password or something.
It could be, like, your time zone.
You can create an environment variable file.
And you could call it, say, timezone or, you know, like, user-permissions.
And then you can define your user and group permissions for your containers there.
And then, instead of handwriting it every time, you just say: I want this container to look
at user-permissions.env.
And that's where it'll pull this information from automatically.
And you can use that across a number of different containers.
But these are ways to also maintain credentials and stuff.
So you're not actually writing them in.
In the container, you're just saying, look to this file,
which then if you shared it online,
the person doesn't have access to that file
so they don't get that information.
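As a sketch of what that shared file might look like, here's a hypothetical user-permissions.env; the variable names below are common conventions used by many container images, not requirements:

```ini
# user-permissions.env -- hypothetical file from the discussion
TZ=America/New_York
PUID=1000
PGID=1000
```

You'd then point a container at it with something like `docker run --env-file user-permissions.env ...`, or an `env_file:` entry in a Compose file, and reuse the same file across any number of containers.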
- And make sure you check your .gitignore file.
If you don't know what that is, look it up.
It'll basically keep certain files from getting pushed
to GitHub.
So if you had a bunch of passwords in a file,
you don't want it getting pushed over there, and that's one of the ways you avoid that. And yeah,
same with the environment variables: if you're pushing a repo for people to use,
just include in your documentation, hey, here's the variables that you need to set up
yourself with your own information, instead of, you know, leaking anything that you don't need to.
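A minimal .gitignore along the lines HB describes might look like this; the exact entries depend on how you've named your files:

```gitignore
# .gitignore -- keep local secrets and environment files out of the repo
.env
*.env
secrets/
```

Anything matching these patterns stays on your machine and never gets committed or pushed.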
And a reason why you would share this kind of information, if you're like, "Well, I wouldn't
do that."
Well, one reason that you would want to share container information is it makes it so easy
to problem solve.
Because if you say everyone's using the same, you know, Docker image, so we call it an image
that's then spinning up the container.
If you've defined anything,
you can always share that, you know, like in a pastebin,
or it doesn't have to be on GitHub,
in order to get people to help understand, you know,
if you'd written something wrong in the command,
it's like, "Oh, it's not working anymore."
And you can share that.
And then the person can give you feedback, like,
"Oh, the maintainer changed.
Everything you wrote is correct.
It's just the actual image link you've written in here
is now incorrect," something like that.
So there's reasons to share this more universally
with people as you use it,
if you're comfortable with it.
You just want to be careful that you're not leaking
information.
Do you want to respond to some of these people's comments
in regards to sticking with bare metal?
- Do you want me to?
- Sure, I mean, did any jump out at you?
So I asked people on Lemmy what they thought,
who are sticking with bare metal, as we call it,
meaning using the machine itself to run a service,
and they're not doing containers by choice.
And I was curious why that is.
Not in a judgy way, just literally: why are people, as of 2025,
now 2026, sticking with not using containers? What about it makes them go, no thanks?
The number one answer, which makes sense is that people are just happy with what they're
running.
And I think that's true universally.
Like if you're using something, it's working.
It makes sense.
You don't need to like mess with your recipe, you know?
If it's in production, yeah, why, like, why mess with it?
I get that.
And I think a lot of answers related to that, you know?
It's like more just the old school.
I'm already doing it, so I don't care.
Which makes sense.
Like this person says, "To me,
Docker is an abstraction layer that I don't need."
Oh, but this person says, "I do like VMs and Proxmox
and LXC."
(laughing)
All right, so whatever.
"I don't have the time to learn Docker and K8s,
but I have been using virtual machines in Proxmox."
So, I mean, whatever.
There's virtualization happening there.
(laughing)
- Yeah, I think there's varied opinions on them.
You definitely have some people who are,
you know, I guess we could say a little more old school
who've been doing things for a long time
and they're probably professionals in this field, and so they are comfortable working with bare metal, which, you know, there's
nothing wrong with that either. And I think the larger point probably is that you have choices.
Like, Docker's not your only choice. You can use whatever you want, you know. Even just
thinking about virtual machines,
I think we talked about that a little bit.
I don't like VirtualBox.
You know, I'll go on record: I do not enjoy using it.
I do like VMware, which some people hate.
But guess what? I like it, and that's my choice.
So I had choices between those two, but additionally, QEMU. QEMU, is that how you say it?
I think that's how you say it.
Whatever.
Anyway, I like that.
I have scripts on my machine right now that I can just run to, you know, bring up
an OS instance.
I can also, recently, just out of weird curiosity from watching a couple weird YouTube videos,
put together a QEMU script that brought up
TempleOS, just to see what was going on with that.
And I think that it just depends on what you wanna use.
I like containers for a lot of different reasons.
Sometimes, like I said, I make things
that I don't feel the need to containerize.
But overall, I don't see why people wouldn't want to use them or at least try them.
There's certainly a lot of benefits, but everybody likes the things that they like, and
I'm not here to tell them any different.
Yeah, like here's a person's comment:
"I've done it both ways.
At this point, I would not go back to bare metal
because I find myself encountering dependency hell.
Dependency hell.
Yes, that's how they describe it: dependency hell over time.
I can agree with them on that.
Yeah, but like here's a different one says,
"I don't want to add any additional overhead
"or complexity to my system
"because I'm already comfortable with it.
I see legitimate use cases for Docker and work purposes where we use virtual machines constantly.
I just don't want to benefit from that in my home, which also makes sense.
Yeah.
I mean, you've got so much liberty when you're working in a home lab, and that's
like the big draw, I think, for a lot of people: you can do whatever you want.
Like, I'm sure folks who work in, you know, the tech industry
professionally, there's a lot of constraints. Sometimes they get pushed to do
things that they don't want to do, they're using products that they don't want to
use. But at home you can pretty much do whatever you want. For me,
I won't say that I have a low-spec setup over here, because I do have some
powerful machines now.
The ones that I primarily use are much lower spec,
and I get a lot more done because I'm using Docker
on these machines.
Otherwise I would not be able to accomplish as much as I do.
- Yeah, makes sense.
I mean, like here's another person says,
"All of my services run on bare metal.
It's easy.
The backups are working.
I have a simplified workflow.
I don't have to worry about things like virtualized routing.
It's a very tiny system, but I am able to run a number
of services, PeerTube, GoToSocial, a search engine,
custom sites, backups, Matrix, and a whole lot more
without a single container.
And it's using less RAM, and doing a dd once in a while
keeps everything as it should be.
It's been going for four years, ish.
It works great.
I used to over complicate everything with Docker
and compose files, but I would have to keep up
with those underlying changes all the time,
which in my opinion sucked,
and it's not something that I care to do with my weekend.
And, keep in mind, this person says,
I do use Docker, Kubernetes, et cetera, at work.
It's great when you have those resources and other people to help keep things up to date,
but I just want to relax.
It's not the end of the world.
Yeah, I totally get that.
Yeah, totally get that.
And that sounds like what I said earlier, you know, like at work, you got, you know,
at your disposal, you know, you could have hundreds of people working on a project.
There's no telling what that situation might look like, but the work is distributed as well.
So, yeah, I get it if at home you just want to relax and play with things and have fun,
and things are already working. Kind of a, you know, if it ain't broke, don't fix it situation.
Yeah, and to your point, the beauty of trying any sort of containers is, like,
you don't have to hurt yourself doing it either.
Like there's nothing wrong with literally spinning something
up to test it and then making it go away,
which is by design how it works.
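As a sketch of that throwaway workflow, using the public nginx image purely as an example:

```shell
# Spin up a disposable container; --rm deletes it when it stops
docker run --rm -d --name quick-test -p 8080:80 nginx

# Poke at it locally, or from another machine on your network
curl http://localhost:8080

# Done testing? Stopping it also removes it, thanks to --rm
docker stop quick-test
```

Nothing from the test sticks around afterward except the downloaded image, which you can remove too.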
That's the good news.
If you're already running services and you're curious,
you can just spin something up for the purpose of testing
and spin it down. I'll link to this long discussion thread of people's thoughts, and, you
know, everyone's entitled to their opinion. It's totally fine. I was just really curious
about it, and personally, I have been using containers for a long time. But it's just something,
like anything, that takes time to get used to, or to wrap your brain around. But it is an industry standard that's heavily used.
So definitely worth knowing, I would say.
You know, it's worth being aware of, for sure.
Especially if you want to run multiple things simultaneously.
Well, I think that's good on that.
(laughs)
Yeah, anything you have coming up you want people to be aware of?
- Just, you know, check out my GitHub
if you get the chance.
It's just github.com, hungry bogart,
which is actually hungry, dash, bogart.
And you can see some of my projects there,
most recently The Mermaid's Secret Journey.
Hope you like this text adventure.