Boom, boom, boom, boom, boom, boom,
Linux Prepper.
Welcome back to Linux Prepper.
A self-hosted show on using free and open source technology
to DIY everything myself while still enjoying life.
The show is inspired by Linux, BSD, open source, and FOSS.
It's part of James.Network and Living Cartoon Company.
If you're interested in the episode
and want to send me thoughts,
you can email podcast@james.network.
There is also a matrix chat you are welcome to join
and a forum at discuss.james.network.
Note the show is subject to change,
depending on how it's received.
There's no hard commitments, just frustration and fun.
So people have requested a more concise version of the show,
which I'm actually giving you today,
plus a massive interview
that's world premiering NextCloud Atomic
by discussing it at length with the developer Tobias
who's known for NextCloud Pi and NextCloud Secrets. We'll also be joined by Marcel, another NextCloud developer you might remember from the NextCloud Recognize interview we did on the use of AI and machine learning in NextCloud as an open source project. Marcel is also known as the developer of NextCloud Bookmarks and Floccus. And despite me barely talking in it, the interview actually ran two hours and thirty minutes plus, making it the longest NextCloud interview I'm aware of in existence. So I hope you enjoy it. Do let me know your thoughts,
and note some portions of the interview
have been pulled for release in the future
just because this is already so long.
But your feedback is much appreciated.
Let's go, let's give a little context.
The NextCloud conference is about to start in Berlin, Germany. That is from September 27th to the 28th. You can attend in person for free if you're in Berlin, or watch online as it live streams. Note that Berlin is about nine hours ahead of the US West Coast, so you'll probably be asleep when it happens if you're in the US, unless you're staying up all night. Recordings will be made available both in full and in sections on YouTube over the coming days. Tobias will be presenting a lightning talk about the development of NextCloud Atomic. I don't know exactly what he'll be talking about, but we can tune in. It'll be the condensed, seven-minute version.
Also, just wanna make a quick note that SeaGL, the Seattle GNU/Linux Conference, is coming up November 7th to 8th in Seattle; I will be tabling and presenting there. The schedule for that should be available on October 1st. Link to that in the show notes.
I'd also like to take this moment to thank our sponsor. That would be Ameridroid, the United States-based distributor of single board computers and home automation products. They offer comprehensive, inexpensive shipping options globally, have excellent customer service, and are a lot easier to deal with than ordering directly from overseas, with its longer lead times, fewer shipping options, and tariffs. I've been a happy customer of theirs for years. I'll leave referral links in the show notes, or you can use "Linux Prepper" at checkout to support the show.
I do have a device I'd recommend from them, if you're curious: it's the ODROID H4 series. There are the H4, H4+, and H4 Ultra. These are x86 boards, not ARM, just full x86. They do not include RAM or a case, but they do include SATA ports and an M.2 slot for NVMe expansion. This means you can add up to 48GB of RAM, and the M.2 slot can also be used for networking expansion. I've asked around and confirmed that the ODROID H4 works just fine as a basic TrueNAS Scale box or backup server, and it runs Proxmox, OpenMediaVault, and NextCloud. Great tool, check it out; I'll leave a link in the show notes. All right, skipping ahead, let's go over a quick overview and review of NextCloud itself and the related tooling,
which is being discussed in the interview.
Feel free to use the timestamps to skip ahead
to the interview.
So NextCloud, let's talk NextCloud. NextCloud is the most popular open source content collaboration platform in the world. It started as a Dropbox alternative and has grown to be rebranded as NextCloud Hub, because it's now more of a Google Suite alternative, for enterprise clients with millions of users in addition to home enthusiasts with one or more users. It can be expanded with hundreds of apps and integrations for things like office tooling, bookmarks, you name it.
So NextCloud, the company, offers paid enterprise subscriptions for hundred-plus-user instances, paying per person, right? That is not our interest on this show. But their software is fully open source, AGPL licensed, and it has basic community support. Though keep in mind this is a modular system, which means it is built on a standard web stack, right? You have a web server, you have a database, you might have caching from something like Redis; you choose how the system is put together, much like WordPress, probably the most common way to run a website in the world. You'll notice that there are a lot of providers of WordPress, just like there are a lot of providers of NextCloud, because anyone can run it, and that includes companies reselling it, right? I think this is also one of the biggest negatives of NextCloud: a lot of the people reselling it to you are trash.
I'll give a shout out to one company that's great: Hetzner. Hetzner, in Germany. I have seen NextCloud themselves using Hetzner to host things for their conferences when I attended, and I can recommend Hetzner. I don't know anyone that really has a problem with them; they were great. They offer very inexpensive terabyte-plus options with a limited number of NextCloud users. So I would use Hetzner. And basically everyone else offering NextCloud, personally, I would not recommend.
I have a history with NextCloud, as far as being a forum admin with them over a number of years, and in terms of all the resellers offering NextCloud, I would not use any of them. I would of course recommend running it yourself, or running it on something like a VPS where you still have some level of control. You never know when someone's just going to go away with a plan or with support, and that's a thing that can happen on any system, and that includes NextCloud.
So I personally have been a volunteer with NextCloud over a number of years, volunteering as an admin for their help forum, and I've participated as a speaker at their conference three times.
I've not mentioned it so much on the show,
but I wanted to give a moment just to give you
some clarification about this.
I'm really pleased to have this opportunity,
and I hope you enjoy the interview.
It does run really long,
which is why I'm gonna split it up across at least two episodes.
Right now the main goal is to talk about NextCloud Atomic as the successor to NextCloud Pi, because that's also going to be discussed at the upcoming conference.
Let's take a moment to do some self-hosting-related spotlights. Of course, these will be self-hosted options for NextCloud.
So what is NextCloud Pi? Well, just like I was talking about the modular nature of NextCloud, NextCloud Pi is a ready-to-use image of NextCloud for x86, but also for ARM and Debian systems in general. It supports the major single-board devices and virtual machines. It gives you a NextCloud with as little pain as possible, for the sake of running it and having it just work as a home enthusiast.
So what it is, is an easy-to-use image. It takes the pain out of NextCloud. It's good for people who just want something that works. What this isn't: it isn't a virtual playground designed for system experts. If you knew exactly what you wanted to begin with, then you're probably not looking at NextCloud Pi; you're probably building your own NextCloud. So NextCloud Pi is notably not good at supporting things like containers, Docker, these sorts of things. It's meant as a set-and-forget option. You set it up and then you are as hands-off as possible. If that's your goal, I think you will really like NextCloud Pi. I've run it for many years, and it's been great.
NextCloud AIO, or All-in-One, is a Docker-based solution for NextCloud that aims to simplify the installation and management of NextCloud by combining all the necessary components into a single container setup. It provides an easy way to deploy NextCloud with features like file sharing, communication tools, and backup solutions. Further, NextCloud AIO is built and maintained by NextCloud employees, which is nice.
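For reference, deploying AIO usually starts with a single Docker command that launches the master container. The sketch below is illustrative only; check the upstream all-in-one README for the current flags and ports before copying anything.

```bash
# Illustrative sketch of launching the NextCloud AIO master container; the
# exact, up-to-date flags are documented in the nextcloud/all-in-one README.
sudo docker run \
  --init \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 80:80 \
  --publish 8080:8080 \
  --publish 8443:8443 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
# The master container then exposes a web interface from which it pulls and
# manages the other NextCloud service containers.
```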
Also worth giving a mention to the NextCloud Snap. Some people like using the Snap as another sort of effortless way to deploy NextCloud. I have not personally used it myself, but I know many people do, so you can also check out the NextCloud Snap installation. There are also various NextCloud virtual machine images. Whew, there's a lot.
Let's take a moment to do some software spotlights. Floccus. Floccus is a bookmark-syncing app for browsers, iOS, and Android. It syncs your bookmarks and tabs to NextCloud, Linkwarden, Karakeep, Git, or a basic WebDAV server. It's easy to connect it to your self-hosting service or to something like Google Drive or GitHub. Floccus just tries to sync and get out of your way. NextCloud Bookmarks is the NextCloud bookmark app you can optionally install, which allows you to use something like Floccus or the web UI to add and synchronize bookmarks between your different NextCloud clients. I've used this successfully with Windows, macOS, Firefox, Chromium, whatever you can think of. I've done it. It works awesome. I absolutely love NextCloud Bookmarks.
It's one of my favorite apps of all time. I've done some testing on it in the past, I think it is a great tool, and I highly recommend checking out NextCloud Bookmarks if you're running NextCloud. Recognize is a photo-recognition enhancement application for NextCloud. It is system intensive; it basically requires x86. It's not designed for single-board computers or anything like this, but if you have powerful hardware and you want to add basic machine learning to your NextCloud to do local-first photo recognition, that is totally possible with Recognize. I'll also leave a link to our previous interview about Recognize and the use of machine learning in NextCloud from a previous episode. Check that out if you're interested.
Also in the self-hosted spotlight is the upcoming NextCloud Atomic. NextCloud Atomic is the easy-to-use, batteries-included system for running NextCloud. It is a wrapper for NextCloud's All-in-One. It provides the convenience of automated updates and a number of additional services that can be enabled individually on top of the system: things like disk management, full disk encryption, TPM support, monitoring, and making everything accessible from an administrative interface. NextCloud Atomic's aim is to provide all the tools you would need to host NextCloud as a hobbyist or as a company, assuming your needs can be covered by a single machine.
One of the core selling points of NextCloud Atomic is the operating system layout. Everything related to the core system is contained in an immutable partition that gets replaced during updates, which allows updates to either fully succeed or fully fail and be automatically rolled back; there is no in-between. Apart from updates, this also enables a simple factory-reset feature, which can be used to restore the system to a working state in the event something goes wrong. The operating system uses Debian as its base, which is widely trusted as a server operating system. NextCloud Atomic was made possible in part by the Prototype Fund. Let's get into what that is.
The Prototype Fund is a fund for German developers seeking to work on free and open source projects, offering six months of funding so that you can work on your project. This includes projects like NextCloud Atomic, which we will be talking about today. If you're interested in the Prototype Fund, I will include a link in the show notes.
If you like the show, please do share it with others.
I am running a Steam key giveaway. All I ask in return is that you leave a review on your podcast system of choice, no more than three entries per person, please. And please leave more than a one-word review; say anything you want, it doesn't matter. That runs through November 1st, and details are in the show notes. I hope you enjoy this long-format interview. I want to say thank you to everyone who submitted questions, concerns, and comments, especially the people from the NextCloud Pi community, where I've been a community manager over the years and done my best to assist Tobias. We've done our best to address all of your concerns and questions here in the interview.
If you enjoy the show, keep in mind you can always donate to me directly. I will also include donation information for Tobias and his projects, as well as for Marcel. Either way, thank you for listening. A quick moment of gratitude for Ignacio, the original developer of NextCloud Pi: thank you so much for paving the way that allowed Tobias to take over and allowed us to continue on to these exciting new steps today. You can always contact me through email, by following the podcast directly on the Fediverse, or through your podcast platform of choice. You can also join our Matrix chat and forum. Please do enjoy this long-format interview. Look forward to even more discussion dealing with Tobias's NextCloud Secrets application in a future episode, along with a focus on container technology and whether tools like Docker, Podman, and virtualization are right for you as a home enthusiast. Thank you for listening to Linux Prepper, and let's go directly into the interview.
Welcome to the podcast.
- Yeah, hi. I've been doing software development for about six years fully employed, and probably closer to ten years in my free time, privately. And I'm also working on a number of open source projects, among others NextCloud Atomic. Actually, for some reason, most of the big open source projects I'm working on are within the NextCloud ecosystem, probably just because that's the one open source product I use the most personally and that gives me a lot of value. So I'm working on NextCloud Atomic, NextCloud Pi, and NextCloud Secrets, mostly. Right now I'm in the process of switching to working as a freelance engineer full-time, which is also because I hope to be able to focus more on my open source work that way, but also to be more flexible in what I generally want to focus on in my life and in my work. I think that's about what there is to know right now for this context, for this podcast.
- Yeah, sure. I am Marcel, Marcel Klehr. I've worked for NextCloud full time for about three years now, and I have been dabbling in open source projects for a number of years, about 10 I think. In my free time I'm currently mostly focusing on Floccus, the Floccus bookmark syncing tool, which allows people to sync bookmarks across browser vendors, and on the NextCloud Bookmarks app, which acts as a backend for Floccus and as a standalone bookmarks app in the NextCloud ecosystem. Yeah, that's mostly it. I've been on this podcast previously talking about AI at NextCloud, which I'm involved in. Yeah.
- Thank you. Also, thank you for Floccus. I'm using that personally, actually.
- Nice. It's always great to hear that.
- Although I'm not using it across browsers yet; I think the idea just never occurred to me. But yeah, lost opportunity there.
- Which browser are you using?
- Firefox, mostly. Contentious question.
- Yeah, right now I don't really have the one browser I'm happy with. I support Firefox because I want to. I'm not happy about the Google monopoly, of course, but at the same time Mozilla is very good at doing shady stuff every now and then, and I'm not too happy about that either. It's a work in progress, I would say.
- Totally agree. And do you mind giving us an overview of what NextCloud Pi and Atomic are?
- Sure. Yeah, so maybe I'll just start with NextCloud Pi, because it's the more well-known and actually used product; the other one is not in a usable state just yet. So NextCloud Pi is basically, or let me put it differently:
When you are running your NextCloud, you need to find a way of installing and configuring NextCloud itself, but you also need ways of managing your operating system, your web server, your domain, backups, and so on. So there are a lot of administrative tasks involved in running a NextCloud server. And NextCloud Pi basically takes a Debian system and automates all of that stuff, all of the administrative tasks, starting from the installation of NextCloud and ending with backups, dynamic DNS, server-side encryption, and so on: many features that you would otherwise have to handle manually as a systems administrator. And the result is a system that allows you to run NextCloud without ever interacting with a terminal if you don't want to, because everything is configurable from an administration web user interface. So that's, I would say, what NextCloud Pi is.
I sometimes call it managed self-hosting, because it is the managed hosting experience, but on your own hardware. And something that I should address, because it always gets misunderstood: NextCloud Pi is not specifically built for the Raspberry Pi, even though it has "Pi" in the name. It does support a wide range of single-board computers, including the Raspberry Pi, but it is also available for VMs and containers, LXD containers. I, for example, run my NextCloud instance, which is currently backed by NextCloud Pi, on a server I built myself with consumer hardware running Proxmox, inside a container there. So that's NextCloud Pi. Yeah, I have been a user of NextCloud Pi for, I believe, about seven years.
I should have looked up the numbers before the podcast. But basically, when NextCloud came into existence, I changed over to NextCloud a few months after, and probably half a year or a year after that I switched to NextCloud Pi, and it saved me a lot of work back then. About two years ago I took over maintainership of the project completely; before that I was contributing a few things, and now I am basically the core maintainer of the project. And yeah, I think that's a good basis to directly talk about NextCloud Atomic, because it is similar in many ways and does some things purposefully differently than NextCloud Pi.
yeah, professional software developer
and DevOps engineer.
I've been doing a lot of cloud application development
and cloud infrastructure and a lot of work with containers
and operating systems, build pipelines and so on. So a lot of work with containers and operating systems, build pipelines and so on.
So a lot of system level things,
but also application development.
And my knowledge about these things has been growing.
And I became a bit dissatisfied with the way
I was running NextCloud.
Because first of all, my skills have been,
yeah, have changed over the time. And I would do things differently than I did it
seven years ago. And secondly, the industry has changed. and there have been a lot of emerging technologies
that have not been as ubiquitous seven years ago.
For example, containers did expect them, but I don't think I have used them ever back then
and now I'm using them constantly.
Like that's always the go-to way of running an application is to see if there's a well-maintained and well-built
container for it because it makes life so much easier
if there is.
Also, a lot of--
yeah, I always had the issue. The next car I didn't fully cover the security
requirements that I had for running and yeah running extra interesting it was my
own data mainly because I didn't only want to encrypt the data directory, but I did want to
encrypt the full disk and that was not supported in extra pile. And secondly,
because there you need to jump through some hoops to be able to to encrypt the
full disk and still have your server reboot
without being physically near it.
So there were some challenges there.
And when I took over maintainership of NextCloud Pi, I was exploring and considering options to bring the project forward on those levels and implement them, but I ultimately noticed that there are some things I could never solve within the current architecture. And so I decided to start a new project that would have a slightly different objective, a slightly different goal and target audience, but would hopefully be at least as easy to use, while being a lot more professional and following industry best practices in the way it solves things. And that project is NextCloud Atomic. So NextCloud Atomic is, like NextCloud Pi, a project that tries to solve the administration overhead that you get when running NextCloud: the whole operating system management, service configuration, backups, disk management, and also the amount of maintenance on my side. That was one of the issues with NextCloud Pi: it takes a lot of effort to maintain.
One of the issues is that the project is written mostly in Bash; I think I counted it once, it's about 17,000 lines of Bash code. And Bash is really nice for automating operating system tasks, don't get me wrong, it's a very nice tool for that, but there's a point where it causes issues, especially because Bash doesn't give you a lot of guarantees about the environment surrounding it. A Bash script could run and you wouldn't notice whether the variables it expects from the environment are set or not, or whether they have the right format. It doesn't support data types for variables; basically everything is a string of characters. These things are different in more complex programming languages, and that gives you a lot of guarantees to notice early if you made a mistake in your code. And that, among other things, caused NextCloud Pi releases to always be a lot of hassle and effort to test and to ensure that there are no errors, and it still caused errors to be overlooked in releases.
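To illustrate the kind of guarantee Bash doesn't give you, here is a small made-up example, not taken from NextCloud Pi, of how an unset variable slips through silently unless you opt into stricter settings:

```bash
#!/usr/bin/env bash
# Hypothetical illustration, not NextCloud Pi code: Bash expands an unset
# variable to an empty string and keeps running without complaint.
echo "Backing up to /var/backups/${BACKUP_TARGET}"   # prints a bogus path if unset

# Common defensive settings that surface this class of mistake early:
set -euo pipefail                                   # abort on errors, unset vars, pipe failures
: "${BACKUP_TARGET:?BACKUP_TARGET must be set}"     # fail fast with a clear message
```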
With the new architecture, I hope to have a more robust setup that can be more easily tested and is less prone to error sources like this, and that therefore causes me less work, so I can better sustain it, which has become difficult for NextCloud Pi. Basically, when NextCloud Pi was created there were not many options to host NextCloud, because it was the early days of NextCloud, right? So NextCloud Pi manages the whole NextCloud installation, the database installation, the PHP installation itself. It has scripts for that. They work, but they need to be maintained, and that takes a lot of the work in maintaining the project. However, nowadays these things are actually solved by other projects, and I don't see a lot of value in solving them again in my own project.
So with NextCloud Atomic, I actually integrate NextCloud All-in-One, which, maybe I should explain, is the multi-container deployment that is developed at NextCloud, the company, itself. I integrate this container setup into NextCloud Atomic and focus a lot more on the operating system tasks that are not solved by NextCloud All-in-One. So the heavy lifting is in running NextCloud itself and its surrounding services, which have also increased over time. A classic NextCloud installation now has, I think, five containers or five services, depending on whether or not you're actually using containers. You have NextCloud itself, you have a web server, you have Redis, you have a database, and you have a service for Talk. You have the unified push service. And you will likely also have Collabora Online for office editing, and the whiteboard. So it's actually eight services that you have to run, while it was three to four when NextCloud Pi was created. And this service management and configuration is now solved pretty well by NextCloud All-in-One.
So I'm integrating that, and I'm fronting it with my own reverse proxy. I'm doing some of the things that could be solved by All-in-One, for example certificate management and backup management, myself, because I have more options to do them at the operating system level, or need more flexibility in how they are done. But mostly I rely on NextCloud All-in-One. And I think that way I can actually better provide the value that NextCloud Atomic is adding: a robust and secure operating system, plus additional features that are not provided by All-in-One.
For NextCloud Pi, one of the bigger shortcomings is that the build system is not compatible with running containers inside of NextCloud Pi. I had actually started to push in that direction, but ultimately realized it's too complicated, because I would basically have to rewrite the whole build system. Currently, the build system installs NextCloud and verifies that certain parts of the installation worked during installation. And with containers, the installation process itself runs inside of containers, and it's very hard to run containers reliably inside of containers and expect them to behave the same way as if they were not running inside of containers. It's possible, but the engineering effort isn't worth it, in my opinion, because you would always risk that in the end you may be fixing things in containers that are not fixed in the final system, and thus be overlooking errors. That's one of the shortcomings. So I had difficulties running something like NextCloud All-in-One, or even things like the whiteboard itself, inside of NextCloud Pi as part of the build, because the build process doesn't allow for it.
The second thing, one of the big pain points for NextCloud Pi, is that it runs a traditional operating system with traditional update mechanisms, which NextCloud Atomic doesn't; I'll come to that in a second. That means that both the installation process and the update process are procedural. It runs some scripts, it applies them, it makes some modifications to the system, and hopefully it succeeds. And if it doesn't, you will have to fix that to be able to go forward. And that results in every system being slightly different, because even if the script is the same, the conditions in which it runs might not be. For example, you have different features enabled, you run it on a different date, and you're using external package repositories, like the official Debian packages, and they have different versions of packages on different dates. So there might be conflicts on one date and not another, and thus your upgrade process might fail after some time without the process itself changing, and so on. These issues always existed for NextCloud Pi. NextCloud Atomic uses a different approach, in that it provides the full disk image as is. You download the final disk image, swap the previous disk image for the new one, and then you have the new version. And if something goes wrong, you're just stuck with the old version, but not with a broken system in an intermediate state. That also allows me to test everything exactly as you receive it as a NextCloud Atomic user, instead of having two processes that both need to be tested and that produce different results, with every system, even on the same version, being slightly different, so things can break in different ways. And yeah, that was really something very painful in maintaining NextCloud Pi, because errors come up in ways that you cannot prevent as a maintainer.
- NextCloud Atomic itself is about immutability, and that raises a question that's come in from listeners wondering why you wouldn't just use something like, say, Fedora Silverblue or NixOS, immutable Linux operating systems, as your base.
- Yeah, definitely. So actually I was considering some of these options. Silverblue is not a great match, because it's a desktop operating system with a desktop environment and so on included. But there are other operating systems specifically designed for that, like Fedora CoreOS for VMs or Fedora IoT for, basically, IoT devices, that actually look like they are designed for the job. Also, there's uBlue, the Universal Blue project, which offers a build framework for building systems based on the Fedora Atomic architecture for specific use cases, and so on. Then there is of course NixOS, which I have also looked at. Then there are options from openSUSE as well; I forgot the name of the immutable server variant, unfortunately, but MicroOS is the one I was going for, openSUSE MicroOS. So that's also there. Yeah, I have been considering many of these options. I didn't consider some of them, especially bootc, which is the basis for Universal Blue, for example, because when I started, it wasn't really available yet.
However, originally I actually started with an entirely separate project, or build system, for building NextCloud Atomic. And I would like to mention it, because I think it's really cool and maybe it can be useful for some of your viewers, or listeners to be more specific. It's called SkiffOS, and it is basically a build framework around Buildroot, Buildroot being a tool for building minimal Linux systems from scratch, especially for IoT and embedded devices. And that's really nice if you need something where you want to be in control of every aspect. You will be compiling a lot of things yourself, but the build system takes care of the compiling, so it's not difficult to do; it just takes some computation time. You can compile all of the packages that you need specifically for your device, and then you have a ready image. And SkiffOS adds atomic updates on top, gives you a slightly easier configuration system, and also focuses on systems that are minimal but able to run containers. That was the original idea, which I wanted to use for NextCloud Atomic, but ultimately I noticed that this build system had a few issues for my use case.
One of them is that it took too long to get updates, especially security updates. When running NextCloud, you have a service that's exposed to the internet, or at least many people will, and therefore you need an operating system that provides you with security updates very, very early, so you reduce any issues. Also, a lot of security-related features were not the biggest focus of SkiffOS. For example, with NextCloud Atomic I'm supporting TPM-based encryption, for both disks and all sorts of credentials that are used inside the operating system, like your database password and so on, and also secure boot, so images are signed by me, and not only images but also, yeah, something I will get to later that allows me to update only parts of the operating system in an atomic fashion. These features were not well supported by Buildroot, and therefore also not by SkiffOS, at least not in a manner where I wouldn't have to do a lot of work to bring it forward and make it work. And that's when I was looking for other solutions.
And I found mkosi, which is a build system provided by the people behind systemd. It's also used, for example, for GNOME OS, if you know that; I know you have recently covered the KDE distro. But some time ago, I think GNOME OS existed for a while, and now they have shifted their architecture to use mkosi and an atomic way of upgrading.
mkosi allows me to basically take any existing distribution base, and as a matter of fact I'm using a standard Debian Trixie base right now, run operations on it, and create an image. In my case I'm creating an immutable image, which means you basically have a Debian system but without the package manager; I, as the developer or maintainer, am using the standard Debian package system to install stuff. And so the nice thing about mkosi in comparison to bootc or NixOS is that I'm very flexible. I can use a standard Debian base. I could also, and that's what I am planning to do to support single-board computers, use an Armbian base and use the Armbian tools to install stuff, then freeze it into an atomic image and push it to devices. In my opinion, that gives me the best of both worlds. I have the ability to create this minimal operating system that only has what I really need, only has the security-related stuff; therefore I will have fewer critical updates, because there's just less stuff on it and I have a lower attack surface, and at the same time I can be flexible in the base technology that I'm using. I'm not building my own operating system in the sense of building everything that makes up an operating system; I'm just customizing my own operating system from existing baselines.
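To give a rough idea of what working with mkosi looks like, here is a hypothetical minimal configuration, not NextCloud Atomic's actual one; option names and their placement can shift between mkosi releases, so treat it as a sketch.

```bash
# Hypothetical minimal mkosi setup: a declarative config plus one build command.
mkdir -p atomic-demo && cd atomic-demo
cat > mkosi.conf <<'EOF'
[Distribution]
Distribution=debian
Release=trixie

[Output]
Format=disk

[Content]
Packages=systemd
         podman
EOF
mkosi build   # produces a disk image assembled from the stock Debian base
```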
- Could you talk a bit about the upgrade process and how that will be different, both for you and for the user, with Atomic?
- With NextCloud Pi, usually an update would mean I'd be adjusting the scripts that make NextCloud Pi what it is, the ones that automate everything. Then I would define how those could be tested, I would test them manually, then I would run my pipeline to generate some images, then I would ask people in the community to test those images on the devices or environments where they are running NextCloud Pi and get feedback. And basically, when you install the update as a NextCloud Pi user, you download the latest version of the scripts, then you run all of the update scripts, which apply stuff to your operating system: for example, exchange some scripts, exchange some config, replace something inside of a config, and so on. And then you would be on the latest version.
With NextCloud Atomic, it's different. You have your system installed. You have two partitions that are used for the root image, for the root operating system, and then you have something called system extensions, which are a feature of systemd that can be used as an overlay over your file system, very similar to how containers work; they are used for individual services or for the whole operating system. And when you do updates, the updater checks if there is a new image, downloads it, and applies it to the other partition. If you're running from partition A right now, it writes the new image to partition B and then does a reboot, or a soft reboot, into partition B. If the system comes up successfully, it stays there and marks the new partition as successful. If it doesn't, it rolls back to the old partition. That's the general process.
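To make that concrete, here is the flow in rough pseudocode; the helper names are made up and this is not NextCloud Atomic's actual updater, just the shape of an A/B update.

```bash
# Conceptual pseudocode of an A/B update, with hypothetical helper functions.
current=$(active_partition)                        # e.g. "A"
target=$([ "$current" = A ] && echo B || echo A)

fetch_image "$new_version" > "/dev/disk/by-partlabel/root-$target"
mark_for_next_boot "$target"                       # try the new image exactly once
systemctl reboot

# On the next boot, a health check decides whether the switch sticks:
if health_check_passes; then
    mark_good "$target"                            # the new version becomes the default
else
    mark_good "$current"                           # bootloader falls back automatically
fi
```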
That said, it has one drawback: your server needs to restart now and then. However, there are two ways I'm addressing that. One is that I don't always update the whole system; I use the already mentioned system extensions. So basically I have system extensions that overlay your file system, and to your operating system it just appears like there are files there that are not actually in the root image but in the system extension. Then I can download a new system extension, swap that out, restart only the services that are affected by it, and avoid a full reboot. That's one thing. The second thing is that I will be looking into soft reboots. systemd allows you to mount the new root file system to a temporary directory, switch over to that, and then restart, and services which are marked as such do not need to be restarted; they can survive the root file system being swapped out underneath them. That's something I will also be working on, but it's not done yet, only planned so far.
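For reference, the systemd pieces he is describing are standard tooling you can try today; a few illustrative commands, not Atomic's own scripts:

```bash
# systemd-sysext overlays /usr (and /opt) with extension images placed in
# /var/lib/extensions or /run/extensions.
systemd-sysext list      # show available extension images
systemd-sysext merge     # activate the overlays without touching the root image
systemd-sysext unmerge   # drop the overlays again

# A soft reboot restarts userspace on a new root file system while keeping the
# running kernel, which is much faster than a full reboot (recent systemd).
systemctl soft-reboot
```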
And maybe a third thing is relying on TPM-based encryption, which means your server can actually reboot without user interaction if it has a TPM. So you can have scheduled reboots; you will be able to schedule when they happen, and if they happen at night you probably won't notice, and I expect root image updates to be no more frequent than once per week. The process behind it is basically that I will have a build server which regularly builds new images, for both system extensions and the root image, then tests them automatically and publishes them. That's what users will receive, download, apply, and then switch over to. So that will be the process with NextCloud Atomic, as opposed to: I write a lot of scripts, test them superficially in an automated fashion and then very extensively manually, and push them out after that.
- And could you talk a little bit about how this will impact the upgrading of NextCloud itself on the user-facing side? For example, with the new versions, 30 plus, coming out.
- So that's also one of the nice things of wrapping NextCloud All-in-One. Right now, when I support a new NextCloud version, for example NextCloud 32, which I'm working on just now: with NextCloud Pi I have to check the release notes, I have to check the administrator hints, and then I have to try it out and see if anything breaks, and sometimes I miss things that break. With wrapping NextCloud All-in-One, that work is mostly done by the people over at All-in-One, and all I have to do is check their releases. Because they are using containers and the general setup of those containers is always very similar, they will have breaking changes every now and then, but it will affect me a lot less than setting up NextCloud completely in my own system. And basically what I'm doing with All-in-One is, and that's maybe also an interesting detail:
All-in-One in NextCloud Atomic is run using Podman, and it runs not as root but as its own user, only having access to the file system in the places where the files that are relevant for NextCloud are actually located, so your data directory, for example. So NextCloud All-in-One is already running very sandboxed, and then on top of that you obviously have the containerization. That means All-in-One is very, very much separate from the system, and so I don't expect to have to adjust the system much to cater to All-in-One, other than its own configuration.
And that can be tested a lot more easily than if everything can affect everything, like is the case with NextCloud Pi.
- And are any of these changes being upstreamed into All-in-One?
- Some actually are already, but mostly I'm not focusing on changes to the containers themselves. I will contribute fixes for things that make compatibility with Podman difficult, for example, because upstream they are mostly using Docker.
The reason why I'm using Podman is that it integrates a lot better with systemd, and it has some security benefits. With Docker, you have to imagine that there is one central Docker process that runs all containers. When you run a new container, the command you execute just talks to that central process and asks the central Docker daemon to create a new process for the new container. So all containers are children of that one process. And that means that some Linux kernel sandboxing features don't really work with that, because in Linux there is a concept of control groups and process namespaces that would apply to the process that you execute, but the process you execute is just calling an API and waiting for updates from the Docker daemon; the actual container creation happens somewhere else. So when I configure sandboxing and security restrictions for systemd services that run Docker containers, they don't work, because everything is in the same control group and the same process namespace, attached to the central Docker process, the Docker daemon. And that's the reason why I decided to go with Podman.
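As a hedged sketch of what that buys you in practice, and not Atomic's real configuration: because a Podman container is a direct child of the service process, systemd's resource and sandboxing directives can actually constrain the container, whereas with Docker they would only constrain the docker CLI while the real container lives under dockerd.

```bash
# Hypothetical unit file for illustration only.
cat > /etc/systemd/system/demo-container.service <<'EOF'
[Unit]
Description=Demo container supervised directly by systemd

[Service]
# --cgroups=split keeps the container inside this unit's cgroup.
ExecStart=/usr/bin/podman run --rm --cgroups=split --name demo docker.io/library/nginx:alpine
ExecStop=/usr/bin/podman stop -t 10 demo
# These apply to the container's processes too, because they are children of
# this service; under Docker they would only apply to the docker client.
PrivateTmp=yes
MemoryMax=512M
CPUQuota=50%

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now demo-container.service
```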
And that's one of the things I will keep contributing: if there are issues with Podman support, I'll contribute those things back to All-in-One. Other things are, yeah, I'm using All-in-One a bit specially; I'm exercising more control over the specific configuration. I'm not using the master container, but I'm using the Compose file that's provided and making slight adjustments to it. And I'm using mostly sockets to communicate between services instead of local ports, because they have a permission system built in, since they use file system permissions, basically. And that's also something I will contribute back if it's not supported yet. Small things like that. But mostly I'm not touching All-in-One much. I'm just using it as is and building the system around it.
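The socket point boils down to this: a Unix socket is a file, so ordinary ownership and mode bits decide who may talk to a service, unlike a localhost TCP port that any local process can reach. A generic illustration with made-up paths:

```bash
# Made-up paths for illustration: the socket's owner, group, and mode decide
# which local users and services may connect to it.
ls -l /run/demo/backend.sock
# srw-rw---- 1 backend backend 0 Sep 20 10:00 /run/demo/backend.sock

# A client running in the "backend" group can connect; anyone else is refused
# by the kernel before the service ever sees the request.
curl --unix-socket /run/demo/backend.sock http://localhost/status
```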
- When you say building around it, does that also include the apps themselves? For example, things like NextCloud Bookmarks and NextCloud Secrets that you would otherwise install in addition on the running system. Will that be managed as well?
- Likely yes, although I have to see what those cases are, and they will likely have to be addressed one by one. I mean, generally, installing NextCloud apps is possible in NextCloud All-in-One, right? So you can just install them from the NextCloud administration interface.
- My concern is in regards to updates breaking the NextCloud apps themselves, right? It's people who run a lot of apps, and then when the major NextCloud version update happens, it breaks those other applications that they're using.
- That's something where I will definitely spend some time. I mean, usually the NextCloud update from the web interface checks, I think, whether there are apps which are incompatible, and disables them, which is not ideal because you can't easily roll back without applying a backup. I'll see if I can find a way to prevent updates that are blocked by incompatible apps and notify the administrator instead. Basically, the update process itself is provided by NextCloud All-in-One; it is just triggered by NextCloud Atomic.
- We also had a user question on whether this will make it possible for them to use Imagick, for example as something installed at the operating system level by default. This is the kind of thing that's not supported in NextCloud Pi currently.
- There are two parts to this answer. The first part is, or maybe I should take a second to explain the issue with Imagick. Basically, Imagick has always been plagued a bit by security issues if you feed it untrusted images, because image formats can be very complex and can allow for remote code execution on your server. You can embed scripts in some images that can cause someone to take over your server if they are processed by an ImageMagick component with the appropriate permissions. NextCloud All-in-One actually has a solution to that, and that is not using Imagick for these things, but instead using a container that provides an API which does the same thing. I think it's called Imaginary, I'm not 100% sure right now, but yeah, it's included in NextCloud All-in-One, and it solves this issue by relying on a different system than ImageMagick.
Imagick still has a use case for theming in NextCloud. However, that is not critical, because when you're doing theming, it's usually the NextCloud instance administrator who chooses the pictures that are being used. So there is no risk of an attacker, say, uploading an image into a public share or something, which would then be processed by Imagick to create a thumbnail, with the risk of remote code execution. That latter use case would be handled by Imaginary, or whatever this container is called, I should look it up. But generally, Imagick would then only be used for internal purposes, and then it is fine to use.
- What about those enthusiastic users who don't want to wait for you to test things and want to make their own changes? Say, changes to the containers, or adding additional containers, say plugging in something like Jellyfin as a media streaming service to watch their NextCloud videos, and making other changes without waiting for you to test them in any way. How will that be supported?
- Yeah, so that's really interesting, and I've thought about that a lot. One of the main focuses of NextCloud Atomic is to provide as few footguns as possible. So it's, in a sense, both more tinker-friendly than NextCloud Pi and less so, in different ways. Because you can be more certain that when you make some changes, you won't break anything, since the things that would break stuff if you changed them are not changeable. However, it also restricts you in the ways you can change things, because you have to follow the system philosophy and architecture in how things are managed. Basically every service is running in containers, so it's hard to do anything without knowing and understanding containers. It's also not meant for manually changing things around in the operating system. The same applies to NextCloud Pi, by the way, but of course people do it anyway, and that's a very valid use case. It's just one that also means, okay, you should only do it if you're able to help yourself if you break something. With NextCloud Atomic, I have one feature on my wishlist that would cover this, but it's on the wishlist because I first want to focus on providing a stable and working system that includes NextCloud and everything that's needed to run it. After that, I will look into this. So what is this about? Basically, I would love to be able to provide containerized environments that are mutable, where users can do whatever they want, where they can choose to expose specific directories from the host and run their other services there. Those would likely be Incus containers. Incus is a container format that's more meant to run a whole operating system in a container, as opposed to Docker containers, which are meant to run single applications inside a container. And those containers could then be any distribution you like, basically. They could be a Debian system, they could be a Fedora system, they could be openSUSE, or whatever. And you could install stuff and manage stuff there, basically similar to how Proxmox containers work, if you will. So that would be something I would really like, but it's only on the wishlist right now. And I think if I manage to do that, it will also cater to the tinkerer use cases, while still not being in conflict with the stability of the base system and the NextCloud installation and services.
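For a flavor of what such a tinker container could look like, assuming Incus ends up being the mechanism (that part is explicitly still on the wishlist), managing one by hand today looks roughly like this:

```bash
# Hypothetical example with Incus (the community fork of LXD): a full Debian
# userland in a system container, with a host directory passed through.
incus launch images:debian/12 tinkerbox
incus config device add tinkerbox media disk source=/srv/media path=/mnt/media
incus exec tinkerbox -- bash   # from here, manage it like any Debian box,
                               # e.g. add the Jellyfin repository and install it
```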
- So if a user did an update and they ran into some kind of issue, what would they do? And how is that different from NextCloud Pi currently, especially in regards to that classically untested functionality you sometimes find in NextCloud, right, where you're like: oh, here's something that's untested, I can click and enable it, and I will.
- I will try to make that kind of thing not easy to do, so that if you do it, you know what you're doing. Basically, my goal as maintainer is always to protect users from making uninformed decisions. I'm totally fine with it, and I encourage you to use your system how you want, but I want to make sure that you know the implications when you're doing it.
With NextCloud Atomic, it will be harder to do that kind of thing, but it will be possible. It will probably involve something like adding your own system extension, which can overlay files if you want to make changes to the base system. So that's possible, it's just not as straightforward as editing a file. And the idea is that this tells you something: when you try to edit a file in, let's say, your /usr/bin directory, then that directory being read-only will tell you, okay, that's not how I am supposed to interact with this system. And then you will use a search engine and find a documentation entry that describes how you can do it in a way that is easy to undo, and that still gives you a warning that this might break things in unexpected ways, because it's not how the system is meant to be used and it's not tested. So much for general system modification.
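As a sketch of what "adding your own system extension" means with today's generic systemd tooling, and not as Atomic-specific documentation:

```bash
# Hypothetical example: a directory-based system extension that overlays an
# extra file into /usr without modifying the read-only root image.
ext=/var/lib/extensions/mytweak
mkdir -p "$ext/usr/local/bin" "$ext/usr/lib/extension-release.d"
install -m 0755 my-script.sh "$ext/usr/local/bin/my-script"   # my-script.sh is a placeholder

# The extension-release file must match the host OS (or use ID=_any to skip the check).
echo 'ID=_any' > "$ext/usr/lib/extension-release.d/extension-release.mytweak"

systemd-sysext merge    # the overlay becomes visible under /usr
systemd-sysext unmerge  # and is just as easy to undo
```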
The other thing is regarding NextCloud itself: you do have access to the NextCloud installation and data directory. You can do whatever you want there. You can use the web interface. If something breaks, the worst case should always be that you have to rely on your backups. And in NextCloud Atomic, I will make the warnings, if you don't have backups, a bit more prominent than they were in NextCloud Pi. I will actually encourage you to set up backups during the installation process, or show a warning that you don't have backups set up in the administration user interface provided by NextCloud Atomic. So the main thing is: if you have good backups, then you can do whatever you want and don't have to be afraid. If you don't have backups, there will be things that can't be easily fixed. But that's a general system administrator's rule of thumb, I would say.
- Who would you say the target audience for NextCloud Atomic is? And is that audience different from the people who would use NextCloud Pi currently?
- NextCloud Atomic caters to a slightly different target audience, but I would say it's mostly a broader one. The main focus of NextCloud Atomic, in comparison to NextCloud Pi, is on robustness and security, while the focus of NextCloud Pi is mostly on ease of use and on supporting as many use cases as possible, including low-budget use cases. NextCloud Atomic aims to support the same audience, but it focuses on the system security part first. So the first thing that will be supported by NextCloud Atomic is virtual machines, for two reasons. First, they are easier to test, so it's easier to develop with them, and if I support virtual machines I can progress the system itself faster. And secondly, because virtual machines are very ubiquitous: they can run anywhere, on a single-board computer as well as on your own hardware, as well as at a hosting or cloud provider. After that, I will work on support for single-board computers, where there's one major challenge. Basically, NextCloud Atomic right now assumes that you have a TPM, a Trusted Platform Module.
That's the thing Windows currently annoys everyone with, by only supporting devices with a TPM. It's basically a chip that allows you to ensure the integrity of the operating system and lets you store secrets and keys inside, which are only released to the operating system if that integrity is ensured. So it allows you to unlock your disks without entering a password, because the key is provided by the TPM only if your operating system is securely signed and its integrity is verified. That has a big advantage for servers, because it allows you to reboot your server without user interaction and still have disk encryption enabled. That's one of the reasons why I'm focusing on it; also, I don't want to have unencrypted credentials lying around if I can avoid it.
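On current systemd-based systems, the mechanism behind this is TPM2 enrollment of a LUKS volume; here's a hedged sketch with placeholder device names, not Atomic's installer:

```bash
# Hypothetical example: bind an existing LUKS2 volume to the TPM2 chip so it
# unlocks at boot without a typed passphrase, as long as the measured boot
# state matches. /dev/sdX2 is a placeholder device name.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/sdX2

# Tell the initrd to try the TPM at boot via /etc/crypttab:
echo 'cryptroot /dev/sdX2 none tpm2-device=auto' | sudo tee -a /etc/crypttab
```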
But after I get everything to a working and stable state, I will work on a fallback system that will likely involve something like a web-based unlock mechanism that can be used on devices without a TPM. Until then, single-board computers without a TPM will not be supported, unless you add a TPM; for example, I think there are TPM shields for some single-board computers, I think there is a solution for the Raspberry Pi 5. But not all SBCs will be supported from the start. Regarding the target audience, NextCloud Atomic also extends it to a more, I would say, professional or even enterprise audience. Because I think, with the system design and architecture, it will be interesting for basically any company, small business, NGO, or community that is currently running NextCloud on a single machine and managing it themselves, because it provides a number of guarantees on top of the existing options, in terms of safety and compliance, that take a lot of effort to achieve yourself. So to sum it up, NextCloud Atomic still caters a lot to self-hosting. It also caters to professional requirements, because I think NextCloud is really a system that tends to be trusted with sensitive and important data, and so that was important for me to focus on. And it also caters to companies and NGOs and communities that want their own NextCloud and want to make sure it is built with security in mind, with good monitoring options and good backup options.
- I also just wanted to share this comment a user sent in, thanking you for the years that you have worked on NextCloud Pi so far. So they just wanted to say thank you, and thank you from the community.
- Thank you, that's very kind. I always love hearing from the community. And I really hope I can provide a solution that covers the use cases of the community, because, yeah, well, I think that's something I should say here:
I won't be sustaining NextCloud Pi forever, and at some point, when I think NextCloud Atomic has achieved a certain level of maturity, I will discontinue work on NextCloud Pi. Right now, NextCloud Pi is mostly in a maintenance state, so I'm adding support for new NextCloud versions and I'm fixing critical bugs. To be honest, I can't even fix all of the bugs, because NextCloud Pi has many bugs and I know about them. It takes a lot of work to fix them, it's super hard to test, and it often happens that there are new bugs when old ones are fixed, and so on. Basically, my focus right now is to really ensure that all the existing systems keep working, that they support up-to-date NextCloud versions, that they support up-to-date operating systems, and so on.
And at some point, I will provide a path to migrate from NextCloud Pi to NextCloud All-in-One or NextCloud Atomic. These migrations will be very similar, because NextCloud Atomic obviously uses NextCloud All-in-One. And then I hope that will be a good, or even better, option for users that are currently using NextCloud Pi. So that is some way off in the future. I don't think it will happen in the foreseeable months; it will certainly not happen before probably mid-2026 or something. But it will happen eventually.
- Super interesting. Is there any correlation between this podcast and your upcoming conference talk at the NextCloud conference that you'd like mentioned as part of the episode?
- I think it's actually a great complement to the NextCloud conference talk, because I only have like seven minutes there and can't cover many of the technical details, and it's nice to be able to point to this interview and to the podcast episode.
- Yeah, it seems like a really useful project in the landscape of NextCloud, because NextCloud Pi does serve a valid use case, I think, which is: for people that are not professionals and want to use NextCloud, NextCloud Pi is kind of the ideal thing to use. And it's just unfortunate, or well, it's just how it is, that it evolved into a state where it's barely maintainable, apparently. And it seems really cool that you're willing to take the next step and come up with something that is more maintainable and more secure and more stable. It's really cool. You know, that makes me curious: what do you feel like you've learned as the maintainer from doing this over the last seven or eight years?
- Oh, that's a great question. So first of all, when I jumped into a NextCloud hosting scenario, it was actually before NextCloud.
I started out with an on-cloud server that I was hosting on an old laptop.
And I think I spent a week just configuring Apache because I didn't know much about it and I wanted it to be secure and I didn't feel comfortable with it
Without yeah understanding every line I wrote for the configuration and
And since then, it has been
an amazing learning experience. I've learned a huge amount, especially about
Bash, from Nacho, the original author of NextCloud Pi.
Like, don't get me wrong, the Bash quality in NextCloud Pi
is quite good.
It's just, I think, that the complexity of the project
has exceeded
what Bash is really suited for.
But I learned a lot about that.
I learned really a lot about system architecture
and system management through it.
With NextCloud Atomic, which
I've been working on for about a year now,
I learned so much about Linux systems and
kernels, and I learned more about containers than I ever anticipated,
in particular
how the individual sandboxing features work and how they are actually not tied to any container runtime.
You can achieve the same level of sandboxing and isolation that you get with, for example, Docker containers entirely without them.
And that's also something I'm doing in NextCloud Atomic with some systemd services.
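(A rough illustration of this point, not actual configuration from NextCloud Atomic: a plain systemd service can opt into many of the same kernel sandboxing features that container runtimes rely on, purely through hardening directives in its unit file. The service name and binary below are made up for the example.)

```ini
# /etc/systemd/system/example-app.service -- hypothetical hardened service
[Unit]
Description=Example service sandboxed without a container runtime

[Service]
ExecStart=/usr/bin/example-app
DynamicUser=yes            # run as an ephemeral, unprivileged user
PrivateTmp=yes             # private /tmp and /var/tmp
ProtectSystem=strict       # mount most of the file system read-only
ProtectHome=yes            # hide all home directories
PrivateDevices=yes         # no access to physical devices
NoNewPrivileges=yes        # block setuid/setgid privilege escalation
ProtectKernelTunables=yes  # make /proc/sys and friends read-only
CapabilityBoundingSet=     # drop all capabilities
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6

[Install]
WantedBy=multi-user.target
```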
I have learned a lot about trusted boot and about build systems, and also, well, I hope that I have learned a lot about the needs of the community as well,
and I hope to be able to keep catering to them.
It has been a very impactful story for me:
the whole journey of NextCloud hosting, NextCloud Pi,
and NextCloud Atomic.
- Yeah, it seems like a huge rabbit hole
to go into, from just wanting to have a NextCloud setup,
into NextCloud Pi and Bash programming, and then even deeper into
containers, systemd extensions, and whatnot. Yeah, truly.
I guess I'll take this moment to share my own experience getting involved with NextCloud Pi, which began with running it for my hacker space.
And that worked for two years until someone literally destroyed the computer.
But as part of that,
I was also learning to run a Discourse forum for the hacker space.
And I ended up realizing that I was a lot more interested in the NextCloud
community itself,
with so many like-minded people.
And through doing that, I got connected with Ignacio,
who was the original developer of NextCloud Pi,
thinking it would be nice to improve our connection
to the Discourse forum provided by NextCloud
at help.nextcloud.com.
So then I ended up becoming a moderator
and even an administrator in order to further the connection
between NextCloud Pi and NextCloud as a company. And that led to you and me being in Berlin
together presenting at the conference. And yeah, it's nice to be able to talk to you now.
Also, I really would like to take the opportunity to congratulate you on this podcast that you have now been running for, I think, over a year.
And yeah, it's really amazing to see what you've made of it. And it's really nice to be back and talk to you about it
after the time that we have spent together within the NextCloud
Pi community, which I think I shouldn't leave unmentioned.
One of the really amazing parts of the experience was working with the community and with the
active people from the community who organized the forum, who helped each other out, who
contributed to the documentation, who helped me test new releases, and some of whom also had a hand in contributing code.
Just really getting a feeling that there are actually people who are not only using the product
but like it and want to bring it forward and support each other, because in no universe could I, as maintainer, provide
this kind of community support that is actually provided
by volunteers.
Yeah, and you've definitely been one of the active community members too, and it
was really great to be working with you, and now to be back talking to you about
this project.
Yeah, and I just want to say thank you so much to both of you for being on this show.
It's great.
A nice full circle moment, of course, from when we gave our presentation at the
NextCloud Conference a couple of years ago, and I was on a panel discussion about how people can
contribute to NextCloud. They're going to have a new iteration of that panel discussion
this year; I think they do it every year. And yeah, I'll link it in the show notes so you can watch it,
but I did feel at that time ready to step away, right,
from being involved in that community, because being involved
in NextCloud Pi got me, of course, more involved with NextCloud more broadly.
And I realized that I just needed to take time away, because in order to do some kind of project like this, or talk with the people involved,
I felt like I just kind of needed to take my own time, right, and live life and do other things.
Because, like you guys, I like everyone at the NextCloud company as people.
I mean, I've gotten to know them and I like them.
So I just needed some time sort of away from being headfirst in it, so I could feel like I could
have these kinds of discussions for the broader public, which I'm happy to do.
Yeah, I get that.
Sometimes you need some distancing to figure out where to go next.
Yeah, I look forward to even more of these kinds of conversations on the show
moving forward, these kinds of interviews; these kinds of topics, of course,
are of interest to me, and I think to the broader show audience. But I'm curious: in
terms of starting fresh now, what would you like, say, the next iteration of the community
behind your project to do to help you, to contribute, to be involved? What do you see that looking like so it can be the most successful it can be?
- Yeah, so that's also a good question. I think it's hard to answer in a very general way,
like in a whole-ecosystem kind of way.
I suppose that sharing use cases, information,
and solutions for specific problems and integrations is something that's super valuable to contribute
in a general way within the NextCloud ecosystem, for example, or any other open source ecosystem. Specifically for NextCloud Atomic, I am actually planning
to start calling for contributors around the upcoming NextCloud community conference, because
so far I have not done that a lot, since I needed to figure out how things work, what the architecture is, and so on. I
couldn't easily work with other contributors before I had a strong vision and understanding
of the system myself. But now I'm actively looking for contributors who either want to
contribute in the area of Rust development, like the user interface; many
of the system tools for NextCloud Atomic are written in Rust, which was a choice
because Rust allows you to catch a lot of errors before you actually ship anything,
unlike some other languages.
And secondly, people who know something about system administration
and would like to contribute to the system-building process there, and maybe are curious about atomic distributions
or build systems in general.
Both of those groups I invite to just contact me.
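(A minimal sketch of the Rust point above, not code from the NextCloud Atomic repository: the compiler refuses to build code that ignores a possible failure, so this class of bug is caught before anything ships.)

```rust
use std::fs;

fn main() {
    // read_to_string returns a Result; the compiler forces the error case
    // to be handled before the contents can be used as a plain String.
    match fs::read_to_string("/etc/os-release") {
        Ok(contents) => println!("{}", contents.lines().next().unwrap_or("")),
        Err(err) => eprintln!("could not read /etc/os-release: {err}"),
    }
}
```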
I will also shortly provide contribution guidelines
on the NextCloud Atomic webpage.
Hopefully they will be available when this episode is out.
And lastly, though maybe at a later point,
some time this year there will be a test release, and I welcome you to test it out and give me feedback:
yeah, let me know what you think, what you need,
how well it covers the use cases.
Just let me know.
And I will consider all sorts of feedback that I receive.
- And will you be looking for feedback
primarily through GitHub?
- Yes and no, I'm not completely decided on that yet.
I'm actually thinking about building some sort of feedback or community involvement mechanisms right inside of NextCloud
Atomic.
But for now, GitHub is a good place.
If you want to contact me about technical things regarding NextCloud Atomic, then you
can absolutely create issues on the NextCloud Atomic repository.
Likely at some point there will be a policy
that will help you decide whether
that's the right place for it.
But for now, that's the way to go.
If you're interested in one of my projects,
like, if you're interested in NextCloud Pi,
you will find options to interact with the community
and myself on nextcloudpi.com.
If you're interested in NextCloud Atomic,
you will find information about the project,
and soon also contribution guidelines,
on nextcloudatomic.com.
And if you want to contact me and ask me about anything that's
related to those projects, or if you are looking for contract work in any way, or just want to chat about stuff, you can find me on Mastodon or find my email
on my web page.
You will also find my Mastodon account there.
So I guess it makes sense to just include it in the show notes,
if that's fine with you, James.
Yeah, also my web page would probably
be the best starting point.
You can just email me and say hi and we can chat.
It would be awesome.
marcelklehr.de in my case.
Well, yeah, thank you. Thank you for having me. Thank you for chatting about my projects
and work and life in open source in general. It was really fun. And I hope, yeah, I hope
you'll keep having a great time with this podcast.
Thanks.
That will conclude today's episode, the longest ever, and it is part one of two. In the
second part of the interview, we'll talk all about how they handle donations, asking
Tobias how he did his fundraising to work on NextCloud Atomic. We'll also ask Marcel the same: how they feel about donations, how they balance their
lives with their other interests, and what those interests are.
We'll also be talking all about containers in the next episode, and whether containers
and virtualization might interest you as a home enthusiast, and who they're really for.
So if you have thoughts, feel free to send them in podcast@james.network.
Thank you so much for listening.
Please do share the show if you liked it and you can look forward to more in the next
episode of Linux Prepper.
Bye. [MUSIC]