[BlueOnyx:27020] Upcoming BlueOnyx 5211R changes

Michael Stauber mstauber at blueonyx.it
Sun Jun 9 02:37:54 -05 2024


Hi all,

Over the last couple of weeks the code for BlueOnyx 5211R has seen some 
massive changes, which we are about to release in the next week or two.

They will first hit the [BlueOnyx-5211R-Testing] YUM repository and 
after further tests they will appear in the regular 5211R YUM repository.


List of changes:
================

Sauce::Service notification light:
-----------------------------------

After saving changes it can take a small moment until they are applied 
and all relevant services have been restarted. So far the GUI didn't 
give any indication of whether the changes had already been applied or 
whether something was still being processed in the background.

This has now been addressed. See the attached image. In the upper right 
corner a spinning circle of dots will appear when GUI triggered Network 
service changes are pending and are being processed. A help text will 
also appear and indicate which services are currently awaiting their 
restart.

This spinning circle and the help text will turn invisible again once 
the pending transactions are completed.


Network configuration changes:
-------------------------------

The network configuration of BlueOnyx has always consisted of a set of 
legacy scripts that go back to the Cobalt days. Back then (from a Linux 
administrator's perspective) you had to configure the network by hand: 
write various "ifcfg" config files, configure the routes yourself, deal 
with NAT and whatnot.

These days? NetworkManager actually does it all - if you let it.

Over the years we have modified our scripts extensively and also made 
them somewhat NetworkManager-conformant. But not quite, due to a lot of 
legacy baggage.

Therefore the "network stack" of BlueOnyx 5211R has now seen a complete 
rewrite that properly uses NetworkManager and "nmcli" to configure the 
network.

The old eth0:X network aliases for each secondary IP? They're gone. All 
IPs are now bound to the primary network interface directly.

The /etc/sysconfig/network-scripts/ifcfg-eth* files? They're gone now as 
well. The network configuration is instead persistently stored in 
NetworkManager.

Or better said: The network Constructors and Handlers of BlueOnyx will 
now poll CODB for what the network configuration *should* be. Then they 
will use "nmcli" to apply these settings to NetworkManager.

These scripts are even clever enough not to blindly apply the 
configuration every time. Instead they compare what the configuration 
*should* be (according to CODB) with the currently active network 
configuration. If there are discrepancies? Then *only* the network 
interfaces with pending changes are updated, which cuts down on the 
number of network restarts required.
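In nmcli terms, this compare-then-apply logic boils down to something like the following sketch. This is illustrative only and not the actual Constructor code; the interface name "eth0" and the IP addresses are placeholders.

```shell
#!/bin/sh
# Illustrative sketch of "compare desired vs. active, apply only on
# change" -- not the actual BlueOnyx Constructor code.
IFACE="eth0"
WANTED="208.77.188.166/24 208.77.188.167/24"   # what CODB says it should be

# Ask NetworkManager what is currently active on the interface:
ACTIVE=$(nmcli -g IP4.ADDRESS device show "$IFACE" | tr '|' ' ')

# Only touch the interface if desired and active configuration differ:
if [ "$WANTED" != "$ACTIVE" ]; then
    nmcli connection modify "$IFACE" ipv4.method manual \
        ipv4.addresses "$WANTED"
    nmcli connection up "$IFACE"
fi
```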


/root/network_settings.sh:
---------------------------

The BlueOnyx 5211R network settings configuration script now also uses 
"nmcli" conforming methods to apply the network settings.

But beyond that it now also gives you three choices for which kind of 
network configuration you can set up:

	1 Traditional eth0 primary interface (default)
	2 Bridged br0 network (for Incus Containers)
	3 DHCP

The first option is the same as your network on BlueOnyx has always 
been. So no surprises there.

The second option sets up a bridged network interface called 'br0' and 
assigns the primary network interface to it as 'slave'. All IPs get 
bound to 'br0'.

Like the text says: This option is only relevant if you want to use your 
BlueOnyx to run Incus Containers. More on that further down.
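For reference, setting up such a bridge with nmcli amounts to roughly the following. This is an illustrative sketch rather than the actual script; the connection names, IP and gateway are placeholders.

```shell
#!/bin/sh
# Illustrative br0 setup via nmcli (placeholder names and addresses):
# Create the bridge and give it the IP configuration ...
nmcli connection add type bridge ifname br0 con-name br0 \
    ipv4.method manual ipv4.addresses 208.77.188.166/24 \
    ipv4.gateway 208.77.188.1

# ... then enslave the physical primary interface to it:
nmcli connection add type bridge-slave ifname eth0 master br0
nmcli connection up br0
```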

The third option is DHCP. BlueOnyx has supported DHCP for a couple of 
years now, but it wasn't advertised, transparent or thoroughly integrated.

The way this works now is this: You *can* set up your BlueOnyx to use 
DHCP. If there is a DHCP server in the same network, your BlueOnyx 
ought to receive one IPv4 and/or IPv6 address from it, along with the 
DNS server it is supposed to use as resolver.

These single DHCP-obtained IPv4 and/or IPv6 addresses will be the only 
IPs that you can use to create Vsites. Should the DHCP server assign 
you different IPs in the future? Then the Vsites will automatically 
switch to the new IPs.
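Under the hood, switching the primary connection to DHCP is roughly equivalent to this nmcli sequence (illustrative only; "eth0" is a placeholder connection name):

```shell
#!/bin/sh
# Let NetworkManager obtain IPv4/IPv6 addresses and DNS via DHCP:
nmcli connection modify eth0 ipv4.method auto ipv6.method auto
nmcli connection up eth0

# The DHCP-obtained address can then be queried back, e.g.:
nmcli -g IP4.ADDRESS device show eth0
```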

For the most part this feature will be of no practical interest to most 
of you. But some who want to run BlueOnyx in certain clouds might 
appreciate the new ease of use and thorough integration.

Reconfiguration of the network settings from "traditional eth0" to 'br0' 
over to DHCP and back is always possible via /root/network_settings.sh.

DHCP *and* bridged network cannot be used together, though. It wouldn't 
serve any practical purpose either. After all: A bridge is intended to 
be used by more than a single DHCP obtained IP.


Network device names (ethX):
-----------------------------

BlueOnyx had "old style" ethX network devices hardcoded in the network 
scripts and even some of the GUI pages. Therefore it couldn't cope with 
"en[p|s|o]"-style network interfaces. For that reason the ISO installer 
and installation scripts always modified the Grub boot configuration to 
force the server back to use ethX style interface names.

This is no longer necessary, as the new network scripts can deal with 
those eventualities. We now even cover 'wlan', 'wwan', 'bond' and 'veth'.
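As a rough illustration of what coping with the various naming schemes means, here is a minimal sketch (not the actual BlueOnyx code) that classifies a device name by its prefix:

```shell
#!/bin/sh
# Minimal sketch: classify a network device name by its prefix,
# similar in spirit to what the new network scripts have to handle.
classify_iface() {
    case "$1" in
        eth*)           echo "legacy ethernet" ;;
        enp*|ens*|eno*) echo "predictable ethernet" ;;
        wlan*|wwan*)    echo "wireless" ;;
        bond*)          echo "bonding" ;;
        veth*)          echo "virtual ethernet" ;;
        br*)            echo "bridge" ;;
        *)              echo "unknown" ;;
    esac
}

classify_iface eth0      # -> legacy ethernet
classify_iface enp3s0    # -> predictable ethernet
classify_iface veth1a2b  # -> virtual ethernet
```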


Disk Quota:
------------

In the past, having at least a /home partition WITH enabled user and 
group disk quota was a *must* *have* for BlueOnyx. Alternatively: if no 
dedicated /home partition was present, then disk quota was required to 
be enabled for the / partition instead.

However: Some clouds and/or container based virtualisation solutions 
don't offer native file system disk quotas on a per user and per group 
level.

The absence of disk quota meant that BlueOnyx would not be able to 
create Vsites OR Users. And even if it could create them: It would be 
unable to display disk usage or enforce disk quota limits.

This has now been addressed as well:

Disk quota is now optional.

If it is present and works? Then it will be used like always. But if it 
is absent or not working? In that case BlueOnyx will fall back to 
alternate methods to accurately (and quickly!) determine the current 
disk usage of Vsites and Users.
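One quota-less way to determine disk usage is simply to sum up the files on disk. The following is a minimal sketch of that idea; the temporary directory stands in for a Vsite's home directory, and this is not the actual BlueOnyx implementation.

```shell
#!/bin/sh
# Illustrative fallback: when file system quotas are unavailable,
# determine usage by measuring the actual files on disk.
SITE_DIR=$(mktemp -d)   # stands in for a Vsite/User home directory

# Create 64 KB of sample data:
dd if=/dev/zero of="$SITE_DIR/file" bs=1024 count=64 2>/dev/null

# Usage in kilobytes, as a quota-less replacement for repquota output:
USAGE_KB=$(du -sk "$SITE_DIR" | cut -f1)
echo "Usage: ${USAGE_KB} KB"

rm -rf "$SITE_DIR"
```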

But what if a Vsite or User goes over-quota? Is there then nothing that 
stops them from consuming more space than allowed? Or from filling up a 
whole partition single-handedly?

Well, not quite. \o/

We extended the GUI code so that, upon reaching the disk quota limits, 
features of Users and Vsites will be deactivated automatically:

- An over-quota User can no longer receive emails. They will bounce with
   the error message "ERROR:5.2.2:550 User is over quota"

- An over-quota User can no longer use FTP to create or append files.
   All they can do (if they have FTP enabled) is log in and delete files.

- If a Vsite itself goes over-quota? All emails will bounce. FTP gets
   'write' access removed and PHP will be reconfigured so that creation
   of files via PHP will be prohibited.

This doesn't affect Perl scripts, so they could still be used to create 
files or otherwise let the disk usage grow further. However: web-facing 
Perl scripts are getting so rare these days that we can perhaps ignore 
this for now. Optionally, one might want to turn Perl scripts off if 
disk quota is not available.


Aventurin{e} 6110R / Incus:
===========================

Above I said I'd mention Incus again and here we are.

The OpenVZ 7 based Aventurin{e} 6109R is in need of a successor and that 
will be Aventurin{e} 6110R. HOWEVER: This will NOT be shipped as a 
separate ISO image.

Instead: Aventurin{e} 6110R will be a PKG that can be installed onto a 
BlueOnyx 5211R.

That BlueOnyx 5211R then needs to be configured to use bridged network 
(i.e.: 'br0' will be the primary interface). Then you can use a special 
menu in the BlueOnyx 5211R GUI (included in the PKG) to create and 
manage Linux containers.

You can still run Vsites on such a BlueOnyx, but generally and for 
security reasons we would discourage that.

Incus is a fork of the LXD project and it provides the ability to create 
and manage Linux Containers. It can additionally also run VMs (via 
Qemu). Incus itself is rather new, but shares many commonalities with 
its predecessor LXD and adds some new and interesting features on top of 
that.

If you're already familiar with LXD or come from Aventurin{e} 6109R or 
OpenVZ 7 you should feel right at home with it.

Out of the box, Incus has a very long list of OS templates that cover 
many Linux flavors and distributions. All the popular Linux 
distributions are present and accounted for in various versions. In the 
near future it will also be possible to run VMs with it on an 
AlmaLinux/RockyLinux 9 host, but for the moment that isn't an option yet.

So with an Aventurin{e} 6110R PKG installed on your BlueOnyx 5211R you 
will be able to offer Linux Containers running a multitude of possible 
Linux OS's in them. Each Container is isolated against any other 
Container and has its own IP address and network configuration and 
resource limits.

Various network-related options will be possible there as well: static 
IP addresses that are reachable from the outside, or static or dynamic 
IP addresses that are only reachable from the host node itself, unless 
you configure NAT or a proxy for those.

That would, for example, allow you to have one dedicated and publicly 
reachable Container that does firewalling and routing, while the rest 
of the Containers sit in a private network and are only reachable 
through the firewall-container.

Incus has means and methods integrated for cloning, snapshotting, 
templating, clustering, migrations as well as for backup and restore of 
Containers (and VMs).
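To give a flavor of the tooling, here are a few typical Incus CLI invocations. These are illustrative; the container name "c1" is a placeholder and the exact syntax may vary between Incus releases.

```shell
#!/bin/sh
# A few common Incus operations (placeholder container name "c1"):
incus launch images:almalinux/9 c1       # create and start a container
incus list                               # list containers and their IPs
incus snapshot create c1 pre-upgrade     # take a snapshot
incus copy c1 c1-clone                   # clone a container
incus exec c1 -- /bin/bash               # get a shell inside the container
```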


Migration path OpenVZ 7 or 6109R -> Incus / 6110R:
---------------------------------------------------

Short answer: Yes, one will be provided.

We're working on a script (other than Easy-Migrate) that will export 
Containers from OpenVZ 7 based nodes and will convert them to Incus CTs.

Like Easy-Migrate this will most likely work in a "pull" fashion: you 
start it on the target system and it uses SSH and RSYNC to fetch the 
configuration and data. With the gathered information an Incus CT is 
created on the target system, which is then populated with the data 
from the source CT.
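Conceptually, such a "pull" migration could look like the sketch below. This is purely hypothetical: the actual script is still in development, and the host name, container ID and paths are made up for illustration.

```shell
#!/bin/sh
# Hypothetical "pull" migration sketch (not the actual script):
SRC="root@openvz-node.example.com"   # made-up source node
CTID="101"                           # made-up source container ID

# Fetch the source container's configuration for conversion:
scp "$SRC:/etc/vz/conf/${CTID}.conf" /tmp/

# Create an empty Incus container, then pull the data into its rootfs
# (the rootfs path depends on the storage backend in use):
incus init images:almalinux/9 migrated-ct
rsync -aHAX --numeric-ids \
    "$SRC:/vz/private/${CTID}/root/" \
    /var/lib/incus/containers/migrated-ct/rootfs/
```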


Release date Aventurin{e} 6110R:
---------------------------------

No promises on that yet. The general idea is to wrap up the initial 
code base around August, after which it will go into testing with 
selected sponsors. A general release will then follow once suggestions 
and possible additional feature requests have been implemented.


BlueOnyx 5210R:
================

How much of this will be ported to BlueOnyx 5210R?

Uhm ... not much, sorry.

The Sauce::Service notification light:
---------------------------------------

Yes. That will be ported.

The change that makes disk quota optional:
-------------------------------------------

Yes. That will be ported.


The network related changes?
-----------------------------

Only partially. Generally speaking: BlueOnyx 5210R has a lot of OpenVZ 7 
related baggage in its network handling routines. And in OpenVZ 7 
containers we don't have NetworkManager. So if 5210R were to receive the 
updated network handling code from 5211R, that code would need a hell of 
a lot of exceptions to *still* make it work "the old way" in OpenVZ 7 
containers. Therefore we will only release a "light" version that makes 
some minimal changes that won't affect the overall network functionality.


Incus support?
---------------

No. The Aventurin{e} 6110R PKG will only be available for BlueOnyx 
5211R. The older kernel and libraries on EL8 make this extremely 
unlikely and it wouldn't be a good match to begin with.


Wrapping up:
============

Phew ... that was a long message. Thanks for reading this far and I hope 
you're also excited about the upcoming changes.

Let me know if you have any questions.

-- 
With best regards

Michael Stauber
-------------- next part --------------
A non-text attachment was scrubbed...
Name: daemon_spinner.png
Type: image/png
Size: 13053 bytes
Desc: not available
URL: <http://mail.blueonyx.it/pipermail/blueonyx/attachments/20240609/e542999b/attachment-0002.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: network_settings_sh.png
Type: image/png
Size: 15922 bytes
Desc: not available
URL: <http://mail.blueonyx.it/pipermail/blueonyx/attachments/20240609/e542999b/attachment-0003.png>

