[BlueOnyx:16843] Re: Soliciting Suggestions

Michael Stauber mstauber at blueonyx.it
Fri Jan 9 22:56:38 -05 2015


Hi Chuck,

> For years, I'd seen you and others post about how many VMs you've run on 
> Aventurine - and always been curious how you did it.  It could be you had some 
> big honking hardware - but I didn't think so.  So your explanation about how 
> OpenVZ uses the same kernel and OS processes - just virtualizing the software 
> for each VM explains it.  You're not creating a lot of ridiculous duplicate 
> overhead with duplication of the kernel/OS processes for each VM.

Exactly. The hardware I used to get started with OpenVZ was very
basic. I was in a pinch and didn't have much money to throw at this.
So my first OpenVZ node in 2006 was an old Tyan GS-10 or GS-12 chassis
with 6-8GB of RAM. I don't recall what CPU, but it certainly had only
2-4 cores at the most.

Before I stopped running my own servers "in the broom closet", my last
in-house nodes (in 2012) were some desktops that I had picked up from
my supplier for 600 Euros apiece, with an extra disk thrown into each.
The boxes were Intel i5's with 8GB RAM and two 1TB disks. I still use
one of them as a workstation in the office, although by now some parts
have been replaced due to wear and tear. :p

When Chris at Virtbiz.com and Uwe Stache at BB-One.net started
sponsoring BlueOnyx (in 2008) with hardware, hosting and bandwidth, the
hardware we started with was along those lines, too. It was simply good
enough in all regards and we didn't need more. But as time went by the
specs got a bit broader and more powerful hardware was added due to
higher usage and more demand for development VPS's for the newer
BlueOnyx versions. Still: I've got around 50-60 VPS's spread over four
nodes (at this time) and none of those nodes could run what they do if
it weren't for OpenVZ and its form of virtualization. This OS-level
(container) virtualization needs very little overhead, so there is more
"oomph" left over for the VPS's themselves.
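
To give an idea of what that looks like (the node name is made up, but
vzlist is the standard OpenVZ tool for this), a single command lists
every container on a node along with its load and disk usage:

[root at node /]# vzlist -o ctid,hostname,laverage,diskspace

Dozens of VPS's, and it's still just one kernel doing all the work.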

Nowadays OpenVZ recommends using Ploop images for VPS's. We still
don't, so our VPS's typically share the same filesystem as other VPS's
and the host OS. That has its downs (shared blocksize, Inode usage) and
its ups, such as easy access to the file area of a VPS regardless of
whether it's stopped or running. Oh, and *very* dynamic resizing if you
need to take away or grant more disk space to a VPS: just tweak the
config while it runs, no restart needed. Likewise, you can tweak all
relevant parameters without a restart. Not too long ago I had to
restart a client's OpenVZ node due to a kernel glitch and it made my
heart bleed to waste those 1278 days of uptime.
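
For example (container ID 101 is made up, the vzctl syntax is the
stock OpenVZ one, and --ram/--swap assume a vSwap-capable kernel),
resizing a running VPS boils down to:

[root at node /]# vzctl set 101 --diskspace 20G:22G --save
[root at node /]# vzctl set 101 --ram 4G --swap 2G --save

The --save flag also writes the new limits into the container's
config, and they take effect immediately on the running VPS.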

But yes: You can think of that form of virtualization as a big "chroot"
jail for each VPS. Same kernel, intricate task and I/O management,
shared memory, segmented network with internal routing, usage of a chunk
of the underlying file system. Things like that.
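
To make that a bit more tangible (the container ID is made up, the
paths are the OpenVZ defaults): the file area of each VPS lives right
on the node's filesystem, and from the host you can poke around in it
or hop straight into a running container:

[root at node /]# ls /vz/private/101/etc/
[root at node /]# vzctl enter 101

And since every container's processes run under that one shared
kernel, they all show up in the host's process table.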

Which can easily lead to something like this on a midrange node:

[root at hegira /]# ps axf|wc -l
1640

But it's not like it's cooking:

[root at hegira /]# w
 22:46:27 up 244 days,  9:04,  1 user,  load average: 1,08, 1,06, 1,01

> In my case, I need to be able to spin up a Winblows server now and then during 
> customer emergencies.  So I think the VMware allowing me to run a BlueOnyx 
> server, and others (including Winblows) is still probably the best bet for me.  

Yeah, that sounds like a good approach given this particular scenario.
There is also a wide range of VMware users around, so it'll be easy to
find ideas, solutions and help when needed. Which is a big plus, too.

-- 
With best regards

Michael Stauber


