
Gary is concerned that he might have dug himself into a hole of on-prem vendor lock-in, despite using open source software. Plus why you should have PiKVM type device in your toolkit.
HelloFresh
Go to HelloFresh.com/hcs10fm to Get 10 free meals + a FREE Zwilling Knife (a $144.99 value) on your third box. Offer valid while supplies last. Free meals applied as discount on first box, new subscribers only, varies by plan.
Support us on Patreon and get an ad-free RSS feed with early episodes sometimes.

Subscribe to the RSS feed.
This Late Night Linux family podcast is made possible by our patrons.
Go to latenightlinux.com slash support for details of how you can join them.
Support us on Patreon for access to ad-free episodes and early releases.
That's latenightlinux.com slash support.
So Gary, I think you wanted to tell us about Proxmox.
Yeah, and more to get your guys' opinions on whether I've just dug myself into a massive hole
of on-prem vendor lock-in or whether I'm probably OK and actually have a bit of a discussion on
what are the things you should look out for in terms of vendor lock-in on-prem because
we talk a lot about making sure that you're not locking yourselves too much into the cloud
providers' platforms, but clearly that can happen on-prem too.
I am quite ignorant about many things, but especially about the on-prem world.
Before we kind of dig into it a little bit, can you explain a little bit about sort of
what is the ecosystem in which you could run the risk of being locked in?
I know there's like hardware that can be like certified for particular platforms,
and then I know there's like hypervisor management planes, which I guess is like the
Proxmox thing, and then I guess data stuff, like I remember VMware on my desktop had
VMDKs, but beyond that, what's the scope of the lock-in that's possible, I suppose?
So if I outline my specific scenario and then we can explore that, which will inevitably lead
to other ones, I'm sure. So I've just spent a lot of time migrating myself fully into the
Proxmox ecosystem. So I'm now running, across the estate I look after, a grand total of six
Proxmox hosts, and all of my backups of a couple of slightly more critical things
are running using Proxmox Backup Server. So I am now fully in that ecosystem in a way that
if Proxmox were to go away tomorrow, I would have to rethink a lot of the way that I'm running
and managing the infrastructure. Okay, so you've chosen to use Proxmox backup. I've got no idea
what Proxmox Backup is. How does that work? So Proxmox Backup Server is the solution that Proxmox
provide for native incremental backups on ZFS of your Proxmox hosts. So you effectively have an
API that your Proxmox hosts can call, and then it handles the incremental diffs of the backups of
VMs. So for example, my file server has one big six terabyte raw file as its storage volume.
I don't want to be copying that six terabyte raw file across the public internet every day.
So what Proxmox Backup Server allows me to do is do the large sync once; it then learns where all
of the zeros are in that raw file, and then it just does the deltas every day. So in my example
of the file server, I copied two gigs of data to the file server. My backup that night is only the
diff of the two gigs. Is there a reason that you did it that way rather than keeping the datasets
themselves on the host? Yes, because my strategy for hypervisors at home has not been to
hyperconverge the storage on the same box. So any of the boxes actually running VMs at home just have
very small like 120 gig boot disks, and all of the VMs are stored on shared storage so that I
can do the clustering very easily without having to have the storage sync between the nodes.
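Defining a chunk of shared storage like that cluster-wide is a single command on a Proxmox node. A sketch, assuming the storage is exported over NFS, with a made-up server address, export path, and storage name:

```shell
# Hypothetical example: register an NFS export as cluster-wide VM storage.
# The server address, export path, and storage name are all placeholders.
pvesm add nfs shared-vms \
    --server 192.168.1.50 \
    --export /volume1/proxmox \
    --content images,rootdir
```

With the VM disks living on that storage, any node in the cluster can start a VM without the nodes having to replicate storage between themselves.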
iSCSI or NFS? That shared storage is over NFS. Very cool. Yeah, this sounds a lot like
what I do at home too. Now I will, however, say that the backend storage for that is not ZFS.
In this case, it's a Ubiquiti UNAS Pro, because that was a really cheap 10 gig file server with
lots of disks in. I slightly regret the decision of buying that machine, but it's there now,
and all of my data's on it. So I kind of got to live with that. So Proxmox backup server allows me
to just do those differential syncs to other boxes which are running ZFS that are in various other
locations, and then that handles the scrubs and everything else of the data. Okay, so you're using
Proxmox backup, but what comes out of that at the other end is just regular ZFS. Yeah, exactly.
So what I get at the other end is effectively recompiled raw files on ZFS. So you're not using VMA?
Are they just ZFS snapshots? All of the backups and the diffs of them and the way that the
stuff is sent is proprietary to Proxmox Backup Server. So it effectively does a freeze of the VM
for a couple of seconds, works out what has changed since the last time it did one of those
snapshots and then sends it over via their HTTP API. Right, and is that actually proprietary?
So I think Proxmox backup server itself is open source, but what they're doing is specific to
Proxmox backup server. Right. And that's not using an open standard. That's not something you can
just port into a different service. No. Okay, so that makes sense how you're getting your VM discs
onto your NAS, which is the Ubiquiti, but then how are you going from the Ubiquiti
NAS to your ZFS servers? Is that rsync or something? I imagine it might be proprietary as well.
That is also built into Proxmox backup server. So the two off-site boxes that I have are
running Debian with the Proxmox backup server packages installed. So they both have a boot
disk and then a ZFS mirror. So that ZFS pool is then mounted as a storage device within Proxmox backup
server. Then Proxmox backup server itself handles the sync within the product.
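That pull arrangement is configured inside Proxmox Backup Server itself. A rough sketch of what the off-site box's side might look like, with placeholder names and credentials, noting that the exact flags can vary between PBS versions:

```shell
# Hypothetical sketch: register the primary PBS box as a remote,
# then pull its datastore on a schedule. All names are placeholders.
proxmox-backup-manager remote create primary-pbs \
    --host pbs1.example.lan \
    --auth-id 'sync@pbs' \
    --password 'changeme'

proxmox-backup-manager sync-job create offsite-pull \
    --remote primary-pbs \
    --remote-store vm-backups \
    --store local-zfs-store \
    --schedule daily
```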
Gotcha. That's a pretty cool setup. Not gonna lie.
Yeah, I mean, it works really well for me and gets me around the problem that my on-prem
storage, which is just fast shared storage, is not ZFS, but gives me a copy of all of the VMs
and indeed 30 days worth of snapshots off-site on ZFS. Okay. So you've got your current setup
that is effectively using proprietary stuff, but what you've got at the end of all of that is
effectively open. Yeah. So what I could do is at the other end of this, they are just VM discs that
are sitting on ZFS that I could restore if I wanted to. And indeed, one of the things that I've
done to allow myself very quick DR is that the boxes that are at the other sites have got enough
memory that I could re-import that VM and boot it at the other site if I needed to.
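Because what lands off-site is just a raw disk image on ZFS, that quick-DR path doesn't depend on Proxmox at all. A sketch of booting a restored image on any plain KVM host, with an illustrative path and sizing:

```shell
# Hypothetical example: boot a restored raw image directly under KVM.
# The file path, memory size, and CPU count are placeholders.
qemu-system-x86_64 \
    -enable-kvm \
    -m 4096 -smp 2 \
    -drive file=/tank/restore/fileserver-disk0.raw,format=raw,if=virtio \
    -nographic
```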
But do you have the network bandwidth to send those files or those disk images all the way over
to the other hosts? Yeah. So I've got 100 meg up at home and the other two sites have both
got like 200 meg cable connections. So that's not an issue. I'm crying in American right now.
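The arithmetic behind why those deltas matter at those speeds, using the numbers from earlier (a 6 TB image, roughly 2 GB of daily churn, and a 100 Mbit/s uplink):

```shell
# Back-of-the-envelope transfer times over a 100 Mbit/s uplink.
uplink_mbit=100
delta_seconds=$(( 2 * 1024 * 8 / uplink_mbit ))             # 2 GB daily diff
full_hours=$(( 6 * 1024 * 1024 * 8 / uplink_mbit / 3600 ))  # full 6 TB image
echo "Daily delta: ~${delta_seconds} seconds; full image: ~${full_hours} hours."
```

The daily diff goes over in a few minutes; re-sending the full image every night would take the better part of a week.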
So what is it that led to your concern that you were locked in? Well, I mean, we said the word
proprietary quite a few times here, right? So I have effectively now built this entire thing
around the assumption that Proxmox backup server continues to exist in order for me to get
off-site copies of this data on a file system that I trust a bit more than the Ubiquiti NAS.
That would take a lot of re-architecting, I think, for me to build something myself that did it.
So that's my main concern, right? Is that Proxmox backup server goes away tomorrow or it becomes
proprietary or they lock some of these features down, which means that I can't just go and install
the package on a Debian box with ZFS disks. So why is that though? You've got the ZFS that has all
of the data in it. You've got the VM images. What would be to stop you just setting up a virtualization
host importing those VMs and using the ZFS as it is recompiled at the other end? Nothing stops me
doing that. That is absolutely what made me comfortable with this solution. In as much as I could
just take those files at the end and put them on a KVM host and run them and that's fine.
What gives me pause is having to re-bootstrap the entire system of backing up my VMs,
getting them off site onto a file system I trust if this were to go away tomorrow.
And I know the ultimate solution is I probably should have just used Open Standards,
particularly for the storage component on-prem, but I didn't do that. So it's led me down this
interesting rabbit hole of picking other open source solutions that are from
for-profit companies in order to dig my way out of that hole. It's very interesting that it's
an open source project, but not using an open standard. That's a new kind of lock-in you've got
there. Are there no competitors trying to add support for this in the future?
There isn't anything that I've come across that is quite as slick nor is free.
So I could have probably used something like Veeam to do this, but I don't know if Veeam has
Proxmox support, and it is indeed a proprietary solution that has stuff you can use in your
home lab, but again it's a proprietary solution. And the only other solution here, because my
storage on-prem isn't ZFS, as far as I could tell, would be to grab a full snapshot and
export of the VM every day, but then I'm sending six terabytes across the wire every day,
which clearly isn't going to work. So differential backups are really valuable, yeah?
Yeah. I don't know. I don't think that I'm as uncomfortable with this as you seem to be.
You've got a solution that works. It's not leaving you in a position that is irreversible or
getting worse. And maybe by the time this becomes a problem, if it ever becomes a problem,
you've replaced your file server with something that has native ZFS support and then your problem
effectively goes away in a much nicer way than having to solve this differently. But until that
happens, you've got something that's working well for you. I agree. I thought, just like you said,
Gary, with the number of times you said proprietary, that I'd be more worried. But there are other solutions
like Veeam. Veeam does support Proxmox and incremental backups. And if Proxmox Backup Server went away,
I still think you'd be able to find a way to do the incremental backup that you like. And if proxmox
went away, then I still feel like because you have the raw images that you'd be able to import those
easily into any other solution, whether that be KVM or something else, or maybe OpenStack if you're
crazy. But with all that said, I do think you're right to worry about this. I have no
reason to suspect proxmox would ever, you know, just kind of disappear or make proxmox backup server
proprietary. But in today's world, I think you're absolutely right to be cautious of it.
Yeah. And I think that's what I really wanted to sanity check right was, have I worried about all
of the right things in this scenario? And it sounds like I have. But then also I guess just to bring
to the table that on-prem lock-in, even if you're using open source software, really is a thing
depending on the standards you're adopting. Yeah, this is interesting. How you took a bunch of
proprietary standards, like the Ubiquiti NAS and its storage formats, right? And then how they're
transporting data, as well as Proxmox Backup Server, which isn't closed source itself, but the method
in which it does it is proprietary. But at the end of the day, when all these things hopefully work
together, you do have something that is open source, that is an open standard, and could be used.
And for that, kudos. Yeah. At the other end, I can still get a raw or a qcow2
out of this. And I could spin it up on anything that will run KVM. And that was the goal,
right? And that's on a file system that probably has had quite a few more eyes on it than
whatever Ubiquiti are doing behind the scenes on the UNAS.
This episode is sponsored by HelloFresh. Nothing compares to a homecooked meal and HelloFresh makes
it easier than ever to enjoy them all year long with recipes that are cozy, flavorful, and a
pleasure to make night after night. Gather everyone around the table with meals that are simple
to prepare and are deeply satisfying, even on your busiest evenings. Choose from over 100 weekly
recipes featuring global flavors and comforting dishes designed to lift your mood through the
colder months. Feel good about what you're eating with wholesome ingredients like sustainably
sourced seafood and chicken raised without antibiotics or added hormones.
Wow your guests or spoil yourself with new grass-fed steak recipes, and create meals with seasonal
produce like pears, apples, and asparagus. When dinner tastes this good, nothing hits like home
cooking. Sean tried HelloFresh and said that the information cards were really helpful for
his cousin who was counting calories for a diet. So go to HelloFresh.com slash HCS10FM to get
10 free meals and a free Zwilling Knife, a $144.99 value, on your third box. Offer valid while
supplies last. Free meals applied as a discount on first box, new subscribers only, varies by plan.
I remember somebody talking a long time ago, a lecture somewhere. In fact, I may have even
mentioned it on this show at some point, but talking about when you're a government or a
government organization trying to source software, and he bucked the trend a little bit in that he
was less absolutist about open source software and more saying that what matters is the standards
and the data formats that you have, because even if you're using modular components that produce
things that are open-standards compatible, you always have that option of coming back and replacing
that piece with either another piece of proprietary software if that's your bag or with open source
in the future. It's much, much more dangerous if you're investing in things that have a network
effect with either their communications protocol or the standard formats that they're producing
or non-standard formats they're producing. Yeah, and I think that is the only reason Aaron that I
ended up with this solution was that I knew that I could get an open standard at the end of it.
Were I doing this with VMware, I'm not entirely sure I'd be quite as comfortable as I am now,
because I know that at the other end what I would compile would still be a VMDK, which would be very
difficult, not impossible, but a lot more work for me to end up with something I could just run
anywhere. I hear there are tons of online converters you can use just upload your image to some sketchy
site. It's fine. Yeah, I mean, I've done a bunch of V2V conversions before for a whole variety of
reasons. I mean, StarWind have a bunch of tools that are pretty good for that, but you're right.
Doing that on a couple of development web servers is probably OK. Doing that on your production
database server, I can't say I'd recommend it. I've even had a few desktop VMs where I've tried to
export them from VMware into VirtualBox, and it was a ginormous headache, and I was just like,
it's just not worth it, I'll just wipe it. And that was a relatively simple one. So, like, yeah, I would not
trust that format for conversion. I'm sure it's fine most of the time, but you just don't know.
Yeah, in my case, this was VHD to VHDX when I was moving from a bunch of older Hyper-V servers to
some newer ones, and we wanted to take advantage of a new pool of SSDs that we put in the SAN.
VHDX had trim support and things like that, which VHD didn't. And yeah, it was quite a
butt-clenching moment to run that against some production servers with that tool.
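For offline conversions like these, qemu-img is one commonly used tool; it reads VMDK and the Hyper-V formats and writes raw, qcow2, or VHDX. The filenames here are examples:

```shell
# Hypothetical examples: offline V2V disk conversion with qemu-img.
qemu-img convert -p -f vmdk -O qcow2 old-vm.vmdk new-vm.qcow2

# The older Hyper-V VHD format is called "vpc" in qemu-img terms.
qemu-img convert -p -f vpc -O vhdx old-vm.vhd new-vm.vhdx
```

The same caveat applies: boot the converted image on something disposable before trusting it with a production workload.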
Gary, you're a big infrastructure as code proponent. Is it a big deal for you if you did have to
change the way you were doing your backups, or is it just a change to your Ansible somewhere
or something like that? This would be a pretty big deal for me because these are physical servers
that are in locations that are sometimes difficult for me to access that I've installed the OS on
tin, configured the storage, et cetera, et cetera. So the config on top of the VMs is fine,
like I could have those rebuilt in a few hours, but it would be the bootstrapping of all of the
infrastructure that would be difficult. This is a multi-week project to get this done.
And I don't have a good way of going and redeploying an OS to a machine that is 30 miles away
inside a building that is very difficult to get to. Like it would very much be go and give
a Pi KVM to someone and ask them to drop it in next time they're there, which yeah, that's just
not going to happen very quickly. So that's the thing that concerns me more to be honest with you.
Fair enough. Have any of you actually used Pi KVMs?
Yep, I have one. And I also have a GL.iNet KVM, which is also pretty solid.
I've used PiKVM and I've used TinyPilot. I saw that there were a couple of the other, like,
smaller nano ones, but I never got a chance to order them.
Yeah, I had the PiKVM for quite a while. The issue with mine is that it doesn't do 1080p.
It will only do 720p, or maybe like 1440 by 900. And that's an issue when you want to plug a laptop
into it to remote support someone. It's also an issue in the Proxmox GUI installer,
as I found out, which tries to output at 1080p 60 hertz, which my PiKVM didn't support.
Interesting. That's what ultimately led to me buying the GL.iNet KVM, which is a much nicer device,
but again, is a proprietary solution. That's insane, that an installer that's based on Debian,
with all of its legacy support, can't find a way to render an installer at 720p.
Yeah, I mean, in fairness, I just went into the grub menu and chose the text-based installer,
which was fine, and actually now I've discovered that that is a way better installer
than the Proxmox GUI installer. But yeah, only rendering at 1080p is a bit of a weird choice given
some of the GPUs that are in servers. I also saw a NanoKVM, but I haven't actually purchased any
of these products. So I'm keen to give them a go sometime, but just haven't pulled the trigger on it.
I think as a tool in your tool belt, one of these is absolutely worth having.
I bought a couple of USB-C PoE splitters for the ones that I've got. So it's very much just
go and plug them into a PoE switch, HDMI and USB into the server and problem solved. I can get
full GUI access to it. It's meant that I've taken the old 17-inch Dell TFT off the top of
my server rack now. I believe that most of them let you provision operating systems
from them onto the hardware, so they appear as a USB key
or something like that. Yeah, you just upload the ISO to them and they appear as a virtual
CD-ROM drive, much like it would in an iDRAC or an HP iLO or something, and it's pretty slick.
Are any of them tied into all of these infrastructure as code things? Like, could you provision
using Terraform or something via your PiKVM or your NanoKVM? I haven't tried. I don't think
that's a thing. I suppose what you could do is, if you had like user data or cloud-init or
something, you could mount that as a fake USB drive from the KVM, but yeah, Sean's rolling his
eyes at me. See, it's like a stretch too far. No, no, that's insane. I think maybe a better way
to do that would actually be to use something like MAAS from Canonical, Metal as a Service,
because then you can do that cloud-init stuff in a centralized, managed place and still boot and
provision your raw, you know, metal machines. That would be quite the hack Gary. Yeah, I mean,
for me, I've always sort of thought of the provisioning of the metal as a one-time thing that I don't
do unless I'm changing the hardware or have a boot drive that dies. The reality is that anything
I care about is in the VMs, those are backed up and the hypervisors are just dumb boxes, not much
storage. If they can boot get on the network and mount an NFS share, then they can run any of
the VMs. I don't know. I think Terraforming over some of these KVMs sounds absolutely hilarious.
Actually, I'd love to try it. With some weird, dodgy KVM provider. I mean, one of the things
that they do all have now is support for things like Tailscale. So for me, when I dropped my box
into one of the locations that I don't have access to very often, just plugging that in, having it boot
up and get an IP via DHCP, and knowing where I'm going on the tailnet to get to it, was actually
really handy. Yeah, there's no doubt that the overlaying VPN technology has really changed the game
as far as sticking hardware in random places. Gary, that's genius. I hadn't thought about that.
I'm going to have to see if I can do that with like a bootable container where I bake that
in. But no, then you'd have to put your keys and stuff into the image itself. Yeah, that's
tough. Yeah, but because this is just a hardware device, right? It means that if I were to be away,
but someone I knew needed tech support and an OS needed reinstalling or something, I could just
give them the GL.iNet and get them to plug it into the network, connect the HDMI and the USB,
and I could have full hardware access to the machine from wherever I was, which is really nice.
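The Tailscale piece of that is minimal; a sketch with a placeholder auth key and hostname:

```shell
# Hypothetical sketch: join a headless box to a tailnet non-interactively.
# The auth key and hostname below are placeholders.
tailscale up --authkey tskey-auth-XXXXXXXX

# Then reach it from anywhere on the same tailnet by its MagicDNS name,
# without ever knowing what address DHCP handed it on the remote LAN.
ssh user@remote-kvm-box
```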
Well, maybe we should all finish early and go and look at some of these PiKVMs and NanoKVMs.
If you've got any questions or comments on things we've been talking about, please do send those
in to show at hybridcloudshow.com. We'll be back in two weeks. Until then, I've been Aaron.
I've been Gary. I've been Sean. I've been Shane. See you later.
