Ah! That explains a lot, Michael.

As I've mentioned, I've used VMware and (unfortunately) Hyper-V for
years. They each build a complete container where you can load an
entire OS - no matter what the host is. I've tried to get across to
the company owner that each time he "just spins up a new VM" for some
odd-ball customer, the back-end work that guest OS has to do to manage
itself and keep running is yet another layer of overhead on the host.

For years I'd seen you and others post about how many VMs you run on
Aventurine - and I'd always been curious how you did it. It could be
you had some big honking hardware - but I didn't think so. So your
explanation of how OpenVZ shares the kernel and OS processes, just
virtualizing the software for each VM, explains it. You're not
creating a lot of ridiculous overhead by duplicating the kernel/OS
processes for each VM.

Thanks for that explanation. It also helps us learn more about the
different virtualization packages out there - and how they work.

In my case, I need to be able to spin up a Winblows server now and
then during customer emergencies. So I think VMware, which lets me run
a BlueOnyx server and others (including Winblows) side by side, is
still probably the best bet for me. But one of these days I'm going to
dump all the MS crap - and then I'll go to something like OpenVZ or
Aventurine for virtualization.

Chuck
---------- Original Message -----------
From: Michael Stauber <mstauber@blueonyx.it>
To: BlueOnyx General Mailing List <blueonyx@mail.blueonyx.it>
Sent: Fri, 09 Jan 2015 14:51:48 -0500
Subject: [BlueOnyx:16840] Re: Soliciting Suggestions

> Hi Chuck,
>
> > So I've had to learn some of the Hyper-V crap - and you DON'T want
> > to go there! Virtualbox is good (and also free), but doesn't seem
> > quite as mature as VMware. And I just can't speak about the Linux
> > native Xen, Linux-VServer, or OpenVZ - because I haven't used them.
> >
> > Of course, there's also the Solarspeed Aventurine. But I believe
> > it's primarily written/developed to support BlueOnyx (and likely
> > other similar Linux guests) - so I was concerned that I would have
> > problems with Windows guest OSes. I don't know that for sure - and
> > maybe Michael can speak to the capability/compatibility of other
> > OSes besides Linux.
>
> Ok, let me chime in there for a bit. I started virtualizing in
> earnest in early 2006. At that point I was running around 30
> dedicated boxes in-house and some hosted in data centers. A few years
> later I had cut down to eight boxes: workstation, firewall and
> fileserver, plus five OpenVZ nodes running 50-60 VPS's in total. As
> nodes became more powerful I eventually cut down on them as well.
>
> I had chosen OpenVZ early on and stuck with it after some
> experimenting. It's a really good choice if you want to do two things:
>
> a.) You only want to virtualize Linux OS's.
> b.) You want to pack as many of them as possible onto single nodes.
>
> The approach here is different insofar as OpenVZ provides
> container-based hosting. The term that got nailed to that form of
> virtualization is "Paravirtualization". It doesn't attempt to
> replicate a "physical server" by providing a virtual one, with all
> screws, nooks and crannies mimicked as if it were a "real" server.
>
> In fact, all VPS's on an OpenVZ node use the same kernel - the one
> from the host node. But inside the VPS's they can run their own
> Linux, so you can run different flavors of Linux on the same node.
>
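> To illustrate (just a rough sketch - the CTIDs 101/102 and the
> distros are made-up examples):
>
>   uname -r                                 # kernel on the host node
>   vzctl exec 101 uname -r                  # same kernel inside VPS 101
>   vzctl exec 101 cat /etc/redhat-release   # VPS 101 runs CentOS...
>   vzctl exec 102 cat /etc/debian_version   # ...while 102 runs Debian
>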
> Another strong point is that you can have the VPS's use either
> virtual ethernet devices (eth0, eth1 and so on) or the much more
> secure venet interfaces, which prevent traffic sniffing.
>
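> Roughly, that's the difference between these two setup commands (IPs
> and CTIDs again just examples):
>
>   vzctl set 101 --ipadd 192.0.2.101 --save  # venet: routed, host-managed
>   vzctl set 102 --netif_add eth0 --save     # veth: virtual ethernet device
>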
> What also appealed strongly to me: You can access every file and
> folder of a VPS directly from the master node, regardless of whether
> the VPS is running or stopped. I have used this functionality
> extensively over the years to rescue wrecked (physical) servers for
> clients.
>
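> On a stock install (assuming the default /vz layout) that boils down
> to plain directory access on the node:
>
>   ls /vz/private/101/etc/   # VPS 101's files, even while it's stopped
>   ls /vz/root/101/etc/      # its live root filesystem while running
>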
> Drawbacks: You cannot install non-Linux systems. You cannot install a
> VPS from an ISO; instead you need to use pre-created OS templates.
> But you can (via a procedure) turn a physical Linux server into an
> OpenVZ VPS.
>
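> Creating from a template is basically a one-liner (the template name
> here is just an example of what such templates look like):
>
>   vzctl create 103 --ostemplate centos-6-x86_64
>   vzctl start 103
>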
> A wide set of command line tools allows you to create, manage,
> modify, manipulate, snapshot, dump and migrate OpenVZ VPS's. You can
> easily migrate a running VPS from one node to another.
>
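> For example, a live migration (destination hostname made up) is
> essentially just:
>
>   vzmigrate --online node2.example.com 101
>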
> Aventurin{e} was created by me in 2006 to provide a GUI for OpenVZ.
> It used (at that time) a modified and extended BlueOnyx 5106R GUI
> with the model number 6105R. For EL6 (64-bit) I used components from
> BlueOnyx 5108R to create Aventurin{e} 6106R. The GUI is kinda basic,
> but allows you to easily create and manage VPS's and set up backups
> and restores, and there is also a clustering option for those who
> need high availability with automatic failovers in case of node
> failures. Aventurin{e} is slated to receive the new GUI (like the one
> on BlueOnyx 5208R), and a version for EL7 will eventually be out as
> well, once the OpenVZ project itself gets to a stable release
> sometime this month (or the next).
>
> The guys behind OpenVZ are (according to their own words) the biggest
> single contributor to the Linux kernel (even ahead of Google), and
> they just announced some exciting plans for OpenVZ, so its future is
> kinda guaranteed. It'll get a new name ("Virtuozzo Core") and a more
> streamlined and open development process with more direct feedback
> from their commercial sister project "Virtuozzo".
>
> But back to the beginning and to the point that appealed to me the
> most: packing density of VPS's.
>
> Right now I am using a workstation in my office to write this email
> that has pretty much the same hardware and specs as one of my
> lower-end Aventurin{e} nodes out there. Just the graphics card is
> different.
>
> On this workstation I have VirtualBox installed for testing purposes.
> Right now I have nine VPS's defined. Each has 1GB RAM, a 30GB disk
> and may use one CPU core. When I fire up three of them at the same
> time, my workstation is on its knees and yells "Mercy!". I once had
> four running at the same time and the box got totally unusable.
>
> The Aventurin{e} node with the same specs? It runs 10 VPS's at the
> same time. Constantly. Including two top-level mirrors for BlueOnyx
> and Solarspeed, the BlueOnyx website, this mailing list and some
> heavily used database servers. Load average right now? 1.13, 0.77,
> 0.68.
>
> I do have Aventurin{e} clients with some pretty powerful hardware.
> Think 200GB of RAM, half a dozen terabytes of storage and a couple of
> dozen cores per node. We haven't yet determined how many VPS's we
> could run there, but those nodes aren't even getting out of idle with
> just 40-50 VPS's.
>
> My own biggest node (sponsored by Virtbiz.com) has 16 cores and 32GB
> of RAM. With 20 VPS's it's also mostly idling. It could probably take
> another 10 VPS's in a pinch, but then RAM might get a bit short at
> certain times.
>
> The first and foremost limiting factor in virtualization is I/O. If
> the disks are slow, you will suffer. Hardware RAID is a must, as are
> fast disks and dedicated controllers.
>
> The second limiting factor is usually RAM. If you have 16GB and
> allocate 1GB per VPS, then you'll run into issues if you try to run
> more than 14-15 VPS's (the host itself needs some RAM, too). If you
> assign 2GB per VPS, then you can usually only run half that number of
> VPS's, unless your method of virtualization allows over-allocation of
> resources (OpenVZ does, for example).
>
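> In OpenVZ terms that allocation is just a per-VPS setting (CTID and
> sizes made up; on older kernels you'd tune the beancounters instead
> of --ram/--swap):
>
>   vzctl set 101 --ram 1G --swap 512M --save
>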
> What form of virtualization is best for you? That question is hard to
> answer, as it simply depends on how much you can invest in time
> and/or money. All of them have a learning curve, which requires time.
> Some require money to purchase the tools of the trade. Some don't.
> Some "pay for use" tools might actually be worse than some free ones.
> There are some great systems out there that run out of the box and
> have a fancy GUI or tools. There are more costly approaches that may
> or may not include extras you don't need. Or great free ones, where
> the nifty tools that you actually might need cost extra.
>
> It also all depends on whether you need "nook-and-cranny"
> virtualization that (inside the VPS) replicates a "real" machine as
> closely as possible. In that case VirtualBox *might* just barely do
> it. VMware does it better, though. You can (if need be) run Windows
> on either of these two, for example. You can also do that on KVM, but
> the procedure for doing so is very complicated and will require a lot
> of reading and trial and error.
>
> Hyper-V, Xen, KVM, OpenStack ... you can pretty much pick any of
> these for some pretty good reason or other. It all depends on what
> you want to do, how much time you want to invest to familiarize
> yourself with them and how they will fit into your architecture.
>
> Also: Do yourself a favor early on and think about how the chosen
> method of virtualization might affect (or dictate) your ability to
> back up and restore. Being able to back up and restore an entire VPS
> with the least possible downtime is something that should rank high
> on the priority list of required capabilities.
>
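> With OpenVZ the crude-but-effective variant is to copy a VPS's
> private area straight off the node (paths assume the default /vz
> layout, and you'd want the VPS stopped for a consistent copy):
>
>   vzctl stop 101
>   rsync -a /vz/private/101/ /backup/vps-101/
>   vzctl start 101
>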
> So what method is best for you? I don't know and would need more input.
>
> But as is, I'm happy with OpenVZ and sure will stick with it for many
> years to come. It suits my needs just perfectly and BlueOnyx runs
> exceptionally well on it. And like I said: I can squeeze a hell of a
> lot more BlueOnyx VPS's onto any given node than I could with VMware,
> VirtualBox or KVM (where I am or have been running BlueOnyx, too).
>
> Back to the specs that Danny posted:
>
> > I went with a Lenovo TS140 i7-4770 3.4GHz 32GB with 3 3TB 7200
> > rpm drives.
>
> If this server had a dedicated RAID card and were mine to play with,
> then I'd do this:
>
> I'd install Aventurin{e} 6106R on it. I'd use two disks in a hardware
> RAID1, and the third disk would serve as the recipient for the daily
> (local) backups of all VPS's.
>
> With these specs it would easily run 15 VPS's (or more): some large
> production ones and some smaller ones for development and fooling
> around.
>
> It would be possible to migrate the existing 5208R and also the
> Debian box into OpenVZ VPS's, although they would need to be switched
> from ethX style interfaces to venet0, which might (or might not) be
> problematic with the given ISPConfig install. If ISPConfig doesn't
> support venet0 interfaces, the VPS could be (manually) configured to
> retain the ethX style interfaces, but that would make future GUI
> management of VPS related options for this particular VPS a bit
> problematic. Hence I wouldn't really recommend it to a first-time
> user who has no previous experience with OpenVZ.
>
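> The ethX-to-venet switch itself is small (CTID and IP made up); it's
> the software inside the VPS that has to cope with it:
>
>   vzctl set 101 --netif_del eth0 --save
>   vzctl set 101 --ipadd 192.0.2.50 --save
>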
> This sure is some nice hardware, and with that you sure do have a few
> options to choose from.
>
> --
> With best regards
>
> Michael Stauber
> _______________________________________________
> Blueonyx mailing list
> Blueonyx@mail.blueonyx.it
> http://mail.blueonyx.it/mailman/listinfo/blueonyx
------- End of Original Message -------