[BlueOnyx:16840] Re: Soliciting Suggestions

Michael Stauber mstauber at blueonyx.it
Fri Jan 9 14:51:48 -05 2015


Hi Chuck,

>  So I've had to learn some of the Hyper-V crap - and you DON'T want to 
> go there!  Virtualbox is good (and also free), but doesn't seem quite as mature 
> as VMware.  And I just can't speak about the Linux native Xen, Linux-VServer, or 
> OpenVZ - because I haven't used them.
> 
> Of course, there's also the Solarspeed Aventurine.  But I believe its primarily 
> written/developed to support BlueOnyx (and likely other similar Linux guests) - 
> so I was concerned that I would have problems with Windows guest OSes.  I don't 
> know that for sure - and maybe Michael can speak to the capability/compatibility 
> of other OSes besides Linux.

Ok, let me chime in there for a bit. I started virtualizing in earnest
in early 2006. At that point I was running around 30 dedicated boxes
in-house and some hosted in data centers. A few years later I had cut
down to eight boxes: Workstation, firewall and fileserver plus five
OpenVZ nodes running 50-60 VPS's in total. As nodes became more powerful
I cut down on them eventually as well.

I had chosen OpenVZ early on and stuck with it after some experimenting.
It's a really good choice if you want to do two things:

a.) You only want to virtualize Linux OS's.
b.) You want to pack as many of them as possible onto single nodes.

The approach here is different insofar as OpenVZ provides container-based
hosting. The term that got nailed to that form of virtualization is
"OS-level" or "container-based" virtualization (often loosely lumped in
with "paravirtualization"). It doesn't attempt to replicate a "physical
server" by providing a virtual one, with all screws, nooks and crannies
mimicked as if it were a "real" server.

In fact, all VPS's on an OpenVZ node use the same kernel: the one from
the host node. But inside the VPS's they can run their own Linux
userland, so you can run different flavors of Linux on the same node.

Another strong point is that you can have the VPS's use either virtual
ethernet devices (eth0, eth1 and so on) or the much more secure venet
interfaces, which prevent traffic sniffing.
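In vzctl terms the two options look roughly like this. The container ID 101 and the IP address are hypothetical examples, and the leading `echo` keeps this a dry run (drop it on a real OpenVZ node):

```shell
# Hypothetical CTID and documentation IP; 'echo' makes this a dry run.

# venet: routed layer-3 interface, IP assigned by the host, no layer-2
# access from inside the container (hence no sniffing or spoofing):
echo vzctl set 101 --ipadd 192.0.2.10 --save

# veth: a bridgeable virtual ethernet pair, shows up as eth0 inside:
echo vzctl set 101 --netif_add eth0 --save
```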

What also appealed strongly to me: you can access every file and folder
of a VPS directly from the master node, regardless of whether the VPS is
running or stopped. I have used this functionality extensively over the
years to rescue wrecked (physical) servers for clients.
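With the stock OpenVZ directory layout that looks like this (CTID 101 is again a hypothetical example):

```shell
# Stock OpenVZ paths on the host node; CTID 101 is hypothetical.
CTID=101
# The private area: readable from the host whether the VPS runs or not.
echo "/vz/private/$CTID/etc"
# The live root: the container's mounted filesystem while it is running.
echo "/vz/root/$CTID/etc"
```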

Drawbacks: You cannot install non-Linux systems. You cannot install a
VPS using an ISO. Instead you need to use pre-created OS templates. But
you can (via a procedure) turn a physical Linux server into an OpenVZ VPS.

A wide set of command line tools allows you to create, manage, modify,
manipulate, snapshot, dump and migrate OpenVZ VPS's. You can easily
migrate a running VPS from one node to another.
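A dry-run sketch of that lifecycle with those tools (CTID, template name and target host are hypothetical; drop the leading `echo` on a real node):

```shell
# Hypothetical CTID/template/hostname; 'echo' keeps this a dry run.
echo vzctl create 101 --ostemplate centos-6-x86_64 --hostname vps1.example.com
echo vzctl start 101
echo vzlist                                     # list containers and state
echo vzctl set 101 --diskspace 30G --save       # adjust resources on the fly
echo vzmigrate --online node2.example.com 101   # live-migrate to another node
```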

Aventurin{e} was created by me in 2006 to provide a GUI for OpenVZ. At
that time it used a modified and extended BlueOnyx 5106R GUI under the
model number 6105R. For EL6 (64-bit) I used components from BlueOnyx
5108R to create Aventurin{e} 6106R. The GUI is kinda basic, but it
allows you to easily create and manage VPS's and set up backups and
restores, and there is also a clustering option for those who need high
availability with automatic failover in case of node failures.
Aventurin{e} is slated to receive the new GUI (like the one on BlueOnyx
5208R), and a version for EL7 will follow once the OpenVZ project itself
gets to a stable release of OpenVZ sometime this month (or the next).

The guys behind OpenVZ are (according to their own words) the biggest
single contributor to the Linux kernel (even ahead of Google), and they
just announced some exciting plans for OpenVZ, so its future is pretty
much guaranteed. It'll get a new name ("Virtuozzo Core") and a more
streamlined and open development process with more direct feedback from
their commercial sister project "Virtuozzo".

But back to the beginning and to the point that appealed to me the most:
packing density of VPS's.

Right now I am using a workstation in my office to write this email that
has pretty much identical hardware and specs to one of my lower end
Aventurin{e} nodes out there. Just the graphics card is different.

On this workstation I have VirtualBox installed for testing purposes.
Right now I have nine VPS's defined. Each has 1GB RAM, 30GB disk and may
use one CPU core. When I fire up three of them at the same time, my
workstation is on its knees and yells "Mercy!". I once had four running
at the same time and the box just got totally unusable.

The Aventurin{e} node with the same specs? It runs 10 VPS's at the same
time. Constantly. Including two top-level mirrors for BlueOnyx and
Solarspeed, the BlueOnyx website, this mailing list and some heavily
used database servers. Load average right now? 1.13, 0.77, 0.68

I do have Aventurin{e} clients with some pretty powerful hardware. Think
200GB of RAM, half a dozen terabytes of storage and a couple of dozen
cores per node. We haven't yet determined how many VPS's we could run
there, but they're not even getting out of idle with just 40-50 VPS's.

My own biggest node (sponsored by Virtbiz.com) has 16 cores and 32 GB of
RAM. With 20 VPS's it's also mostly idling. It could probably take
another 10 VPS's in a pinch, but then RAM might get a bit short at
certain times.

The first and foremost limiting factor in virtualization is I/O
traffic. If the disks are slow, you will suffer. Hardware RAID is a
must, as are fast disks and dedicated controllers.

The second limiting factor is usually RAM. If you have 16GB and allocate
1GB per VPS, then you'll run into issues if you try to run more than
14-15 VPS's. If you assign 2GB per VPS, then you can usually only run
half that number, unless your method of virtualization allows
over-allocation of resources (OpenVZ does, for example).
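As a back-of-the-envelope check of those numbers (the 2GB reserved for the host itself is my assumption here):

```shell
# Rough capacity estimate; the 2GB host reservation is an assumption.
HOST_RAM_GB=16
RESERVED_GB=2
PER_VPS_GB=1
echo $(( (HOST_RAM_GB - RESERVED_GB) / PER_VPS_GB ))  # 14 VPS's at 1GB each
PER_VPS_GB=2
echo $(( (HOST_RAM_GB - RESERVED_GB) / PER_VPS_GB ))  # 7 VPS's at 2GB each
```

With over-allocation (as OpenVZ permits) you can promise more than this sum, betting that not all VPS's peak at once.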

What form of virtualization is best for you? That question is hard to
answer, as it simply depends on how much you can invest in time and/or
money. All of them have a learning curve, which requires time. Some
require money in order to purchase the tools of the trade. Some don't. Some
"pay for use" tools might actually be worse than some free ones. There
are some great systems out there that run out of the box and have a
fancy GUI or tools. There are more costly approaches that may or may not
include extras that you might not need. Or great free ones, where the
nifty tools that you actually might need cost extra.

It also all depends on whether you need "nook-and-cranny" virtualization that
(inside the VPS) replicates a "real" machine as closely as possible. In
that case VirtualBox *might* just barely do that. VMWare does it better,
though. You can (if need be) run Windows on each of these two - for
example. You can also do that on KVM, but the procedure for doing so is
very complicated and will require a lot of reading and trial and error.

Hyper-V, XEN, KVM, OpenStack ... you can pretty much pick any of these
for some pretty good reasons or others. It all depends on what you want
to do, how much time you want to invest to familiarize yourself with
them and how they will fit into your architecture.

Also: Do yourself a favor early on and think about how the chosen method
of virtualization might affect (or dictate) your ability to back up and
restore. If you can back up and restore an entire VPS with the least
possible downtime, then that sure is something that should rank high on
the priority list of required capabilities.

So what method is best for you? I don't know and would need more input.

But as is I'm happy with OpenVZ and sure will stick with it for many
years to come. It suits my needs just perfectly and BlueOnyx runs
exceptionally fine on it. And like I said: I can squeeze a hell of a lot
more BlueOnyx VPS's onto any given node than I could with VMware,
VirtualBox or KVM (where I am, or have been, running BlueOnyx, too).

Back to the specs that Danny posted:

> I went with a Lenovo TS140 i7-4770 3.4GHz 32GB with 3 3TB 7200
> rpm drives.

If this server had a dedicated RAID card and was mine to play with, then
I'd do this:

I'd install Aventurin{e} 6106R on it. I'd use two disks in a hardware
RAID1 and the third disk would serve me as recipient for the daily
(local) backups of all VPS's.

With these specs it would easily run 15 VPS's (or more). Some large
productive ones and some smaller ones for development and fooling around.

It would be possible to migrate the existing 5208R and also the Debian
server into OpenVZ VPS's, although they would need to be switched from
ethX style interfaces to venet0, which might (or might not) be
problematic with the given ISPConfig install. If ISPConfig doesn't
support venet0 interfaces, it could be (manually) configured to retain
the ethX style interfaces, but that would make future GUI management of
VPS related options for this particular VPS a bit problematic. Hence I
wouldn't really recommend it to a first time user who has no previous
experience with OpenVZ.

This sure is some nice hardware and with that you sure do have a few
options to choose from.

-- 
With best regards

Michael Stauber


