[BlueOnyx:26722] Re: Aventurin{e} 6109R: Issues after YUM update - Hotfix available
Michael Stauber
mstauber at blueonyx.it
Fri Jan 26 12:02:56 -05 2024
Hi Taco,
> Looking at the very low engagement in the OpenVZ community is indeed risky.
Yeah, it is pretty sad. They had such a huge technological advantage over
anyone else doing secure and massively scalable container-based
virtualization and then blew it away.
What also surprises me is this: This bug? It's a massive show stopper.
It *should* have been caught during testing prior to release. The RPM
was rolled up in November 2023 and only got published on January 25th.
So for a bit over two months nobody on their team noticed the obvious fault.
There is no way in hell they tested it before it was pushed out,
otherwise they'd have realized outright how screwed this "vcmmd" update
was. Just restarting an existing CT (or VM) or creating a new one would
have made it clear.
Although I don't practice it, I do get this alien and somewhat fuzzy
concept of "regular business hours". Still: With such a massively faulty
YUM update published and bug reports flooding in? It still took them 24
hours to respond to my ticket:
https://bugs.openvz.org/browse/OVZ-7488
It's one out of three related tickets. Mine was the second to mention
the problem and the first to identify the root cause.
And there is still no official remedy yet. They didn't even pull the
faulty "vcmmd" RPM from the repositories, which should have been step #1.
Then it would have been a five-minute job to take the next older "vcmmd"
SRPM, bump the version number, rebuild it and push it out, which would
have prevented further OpenVZ 7 installs from getting corrupted during
YUM updates.
They didn't even do that. Which is a pretty dismal shit show on top of
not noticing the problem in the two months prior to release.
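Just to illustrate the kind of stopgap I mean, here's a rough Python
sketch around the standard RPM tooling. The SRPM name, version number
and repo path below are placeholders rather than their actual
infrastructure, and it assumes rpmdevtools and createrepo_c are
available on the build host:

#!/usr/bin/env python3
# Rough sketch of the stopgap described above: take the last known-good
# "vcmmd" SRPM, bump it so it compares higher than the broken build,
# rebuild it and refresh the repo metadata so YUM picks it up.
# SRPM name, version and repo path are placeholders.
import subprocess

GOOD_SRPM = "vcmmd-8.0.63-1.vz7.src.rpm"      # hypothetical "next older" SRPM
SPEC = "/root/rpmbuild/SPECS/vcmmd.spec"
NEW_VERSION = "8.0.64"                        # must sort higher than the faulty build
REPO_DIR = "/var/www/html/openvz/7/x86_64"    # hypothetical repo location

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["rpm", "-ivh", GOOD_SRPM])               # unpack into ~/rpmbuild/{SPECS,SOURCES}
run(["rpmdev-bumpspec", "-n", NEW_VERSION,
     "-c", "Rebuild of last known-good vcmmd", SPEC])
run(["rpmbuild", "-ba", SPEC])                # rebuild binary and source RPMs
run(["createrepo_c", "--update", REPO_DIR])   # republish the repository metadata

Nothing fancy, but it would have kept fresh installs and updates from
pulling in the broken package while they work on a real fix.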
The surprising part is that they apparently didn't have any clients who
phoned *them* out of bed. No support client barked up their tree. Not
even from their commercial Virtuozzo end, which was probably also
affected. That's not a good sign. And it's one bad sign out of many
we've seen over the last few years.
I've mentioned it before: once upon a time they had some really stellar
Linux gurus, and after those left it all went into a sort of
"maintenance mode". That made it really challenging when they started
the development of OpenVZ 8, which they started *really* late, long
after EL8 was out. And then they forked that effort and did the OpenVZ 9
development in parallel, which made sure both moved only at a glacial
pace. The pace has picked up a little, but it's still far from great.
As for "engagements": OVZ8 and OVZ9 were released as Alpha and they
encouraged testing. So some enterprising minds downloaded and tested
them, reported issues and these eventually got fixed in SVN, but the YUM
updates weren't forthcoming for months on end. Even now their OpenVZ 9
repos are more than half a year behind whatever is in their Bitbucket
repository. This isn't really encouraging "engagements" or testing, so
they kind of blew away what little was left of their community of
willing supporters.
OpenVZ 9 is worth keeping an eye on, but I'm no longer holding my breath
for it. This is something we simply can't bet the farm on.
> What technology are you looking at as an alternative if I may ask?
Sadly there isn't much of an alternative other than Linux containers
and QEMU for KVMs. So that's what I've been looking into for 6110R: LXD
for the containers and then LibVirt and QEMU for VMs. The focus will be
on containers, with VMs as an option for anything that can't easily be
containerized. Much like it is now on OVZ7 and 6109R.
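To give an idea of what driving LXD from our own code could look like,
here's a minimal sketch using the pylxd bindings. The container name,
image alias and image server are just placeholders and not what 6110R
will actually ship with:

#!/usr/bin/env python3
# Minimal sketch: create and start an LXD container via the pylxd bindings.
# Container name, image alias and image server are placeholders only.
from pylxd import Client

client = Client()  # talks to the local LXD daemon over its UNIX socket

config = {
    "name": "test-ct-01",
    "source": {
        "type": "image",
        "protocol": "simplestreams",
        "server": "https://images.linuxcontainers.org",
        "alias": "almalinux/9",
    },
}

# Create the container definition and wait for the operation to finish
ct = client.containers.create(config, wait=True)

# Boot it and report the result
ct.start(wait=True)
print(ct.name, ct.status)

The same API also covers snapshots, file push/pull and profiles, which
is roughly the level of control the GUI will need.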
There are currently two major challenges in that. The first: to do this
on EL9 I need to complement the toolchain a bit more than I thought.
It's easier on EL8, which has a pretty decent ecosystem for that. But if
at all possible I want to base this on EL9 to make it more future-proof.
The second big challenge will be to provide an easy migration path from
VZ7 containers to LXD containers. This can be scripted, but it needs to
be at least comparable in handling and reliability to vzpmigrate.sh,
which was used during OVZ6 -> OVZ7 migrations. I have the procedure for
this worked out in my mind, but it'll need some serious coding to make
it robust.
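Very roughly, and just to illustrate the general shape rather than the
actual procedure, a single-container migration could be skeletoned like
this in Python. The CTID, names, image alias and target rootfs path are
placeholders; the rootfs path in particular depends on the LXD storage
backend, and this ignores network config, quotas and countless edge
cases:

#!/usr/bin/env python3
# Crude sketch of one VZ7 -> LXD container migration, general shape only.
# CTID, names, image alias and the target rootfs path are placeholders.
import subprocess
from pylxd import Client

CTID = "101"                      # hypothetical VZ7 container ID
NEW_NAME = f"migrated-{CTID}"
SRC = f"/vz/root/{CTID}/"         # CT filesystem once mounted

def run(cmd):
    subprocess.run(cmd, check=True)

# 1. Stop the source container and mount its filesystem without starting it
run(["vzctl", "stop", CTID])
run(["vzctl", "mount", CTID])

# 2. Create an empty target container from a roughly matching base image
client = Client()
ct = client.containers.create({
    "name": NEW_NAME,
    "source": {"type": "image", "protocol": "simplestreams",
               "server": "https://images.linuxcontainers.org",
               "alias": "almalinux/9"},
}, wait=True)

# 3. Overwrite the target rootfs with the VZ7 container's filesystem
#    (assumes a plain "dir" storage pool; adjust for other backends)
TARGET = f"/var/lib/lxd/containers/{NEW_NAME}/rootfs/"
run(["rsync", "-aHAX", "--numeric-ids", SRC, TARGET])

# 4. Unmount the source and boot the migrated container
run(["vzctl", "umount", CTID])
ct.start(wait=True)
print(NEW_NAME, ct.status)

The real work is in everything this glosses over: mapping the VZ7
network and resource settings onto LXD profiles, handling ploop layouts
cleanly, and making the whole thing restartable when something fails
halfway through.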
--
With best regards
Michael Stauber