Showing posts with label OpenVZ. Show all posts

Friday, October 2, 2015

Don't think for me...

I really like OpenVZ and Proxmox. But what I hate is programs that try to think for me. For example, we keep /var/lib/vz and its subdirectories on ZFS. If the filesystem happens not to be mounted at system startup, OpenVZ creates its own subdirectories under /var/lib/vz/template. The next time around, the ZFS filesystem containers/vz/template simply will not be mounted over the non-empty /var/lib/vz/template, and you only discover this at runtime. If OpenVZ just threw an error on startup, that would be better. If ZFS silently mounted filesystems over non-empty directories (as plain mount does), that would also be better. But two subsystems try to think for me and make my life better. I hate programs being so smart...
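A simple pre-start guard would avoid the silent failure. This is a sketch of my own (the function name and the idea of a guard script are not from the original setup): check that the VZ data directory is a real mountpoint before letting OpenVZ touch it.

```shell
#!/bin/sh
# Hedged sketch: refuse to proceed if the VZ data directory is not a
# real mountpoint, instead of letting OpenVZ silently populate the
# underlying (unmounted) directory on the root filesystem.
check_vz_mount() {
    # mountpoint(1) from util-linux exits 0 iff the path is a mountpoint
    if mountpoint -q "$1"; then
        echo "mounted"
    else
        echo "not mounted"
    fi
}

# Demo on /, which is always a mountpoint:
check_vz_mount /
```

Hooking such a check into the OpenVZ init script (before it creates any directories) would turn the runtime surprise into a startup error.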

Tuesday, July 31, 2012

Notes on configuring a two-node Proxmox cluster with DRBD-backed storage

We had a task to deploy two new virtualization servers with live migration and highly available data. The latter means that in case of a physical server failure you don't want failed VMs to be powered up automagically on the other node, just to be able to bring them up by hand within five minutes.

We decided to use Proxmox VE 2 because it's free, we have experience maintaining Proxmox 1.9 systems, and it supports live migration without shared storage.

So we configured two nodes with 4 additional LVM volume groups each: n1vz and n2vz for VZ data (each with one logical volume, mounted on /var/lib/vz on the first and second node respectively), and n1kvm and n2kvm for VM disk storage (n1kvm is used by VMs normally running on the first node, n2kvm by VMs running on the second). Four DRBD volumes in primary-primary configuration were created, one for each of the 4 volume groups. Using a separate pair of DRBD devices for VM disks makes split-brain recovery easier, as explained here. And note, we can't use a DRBD-mirrored (quasi-shared) disk for VZ storage, because the final step of VZ migration includes an "rm -rf" after rsyncing the container's private area.
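As a sketch, one of the four DRBD resources in primary-primary mode might look like this. Hostnames, addresses, and backing devices are illustrative assumptions, not taken from the original setup:

```
# /etc/drbd.d/n1vz.res (sketch for DRBD 8.3-era syntax)
resource n1vz {
  protocol C;
  net {
    allow-two-primaries;                  # needed for live migration
    after-sb-0pri discard-zero-changes;   # automatic split-brain policies
    after-sb-1pri discard-secondary;
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Keeping the VM-disk resources in a separate .res pair is what makes it possible to resolve a split brain on one resource without touching the other.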

In such a configuration we can live-migrate both KVM VMs and VZ containers. We also have a copy of each VM and container for emergencies (failure of one node).

Some difficulties we met were related to LVM and DRBD startup ordering. The first: LVM locked the DRBD backing devices, so DRBD couldn't use them. This was solved with a correct filter in lvm.conf. The second was trickier. The physical volumes n1vz and n2vz, available over DRBD, couldn't be mounted normally - they have to be mounted after the initial system startup. Normally LVM starts first (its init script runs vgchange -ay, activating volume groups), then DRBD; by then the additional VGs exist, but they are not active.
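The lvm.conf fix is a device filter. A sketch, assuming the DRBD backing storage lives on /dev/sdb partitions (adjust the patterns to your actual layout):

```
# /etc/lvm/lvm.conf (sketch): accept DRBD devices, reject the raw
# partitions that back them, so LVM sees the DRBD-hosted VGs only
# through /dev/drbd* and never locks the backing devices.
filter = [ "a|^/dev/drbd.*|", "r|^/dev/sdb.*|", "a|.*|" ]
```

LVM applies the first matching pattern, so the accept rule for /dev/drbd* must come before the reject rule for the backing partitions.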

To solve this problem we were supposed to use Heartbeat, but I was too lazy to study it. So I adapted tools more familiar to me: the automounter (autofs) to mount /var/lib/vz, and udev to activate the volume groups when drbd* devices appear. I added the line "/- /etc/auto.direct" to /etc/auto.master and created an /etc/auto.direct file containing:

/var/lib/vz              -fstype=ext4            :/dev/mapper/n1vz-data
Configuring udev consisted of creating an /etc/udev/rules.d/80-drbd-lvm.rules file containing:
ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="drbd*", RUN+="/bin/sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"

I consider this more elegant than just putting "vgchange -a y && mount ..." into rc.local.

Friday, December 23, 2011

Ubuntu VZ container and lo interface

If you look at the Ubuntu 11.04 VZ image, you'll see that udev is excluded from startup. This seems to be incorrect: without udev the "net-device-added INTERFACE=lo" event is not emitted and the loopback interface remains unconfigured... At least, this behavior appeared after updating the image to 11.10. I don't remember whether the lo interface was present in the original 11.04 image.

Tuesday, November 1, 2011

Yandex Server: Start indexing... Aborted

I know that the Unix style is to avoid printing anything unnecessary.
Issue "dd if=/dev/sda1 of=/dev/sda2 bs=32M" and wait for 40 minutes without any diagnostics. On FreeBSD it is sometimes better - at least you can hit Ctrl+T.
But Yandex Server is awesome:
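For what it's worth, GNU dd can be coaxed into showing progress. A sketch (the device names in the post are real disks; the demo below copies zeros to /dev/null so it is safe to run):

```shell
#!/bin/sh
# Option 1: coreutils >= 8.24 has a built-in progress meter:
dd if=/dev/zero of=/dev/null bs=1M count=64 status=progress
# Option 2: older GNU dd prints I/O statistics on SIGUSR1, so from
# another terminal you can do:
#   kill -USR1 "$(pgrep -x dd)"
echo "copy finished"
```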

# yandex-server --indexer -r /usr/local/etc/yandex/yandex.cfg
Yandex.DsIndexer
This program is a part of Yandex.Software 2010.9.0
Copyright (c) 1996-2009 OOO "Yandex". All rights reserved.
Call software@yandex-team.ru for support.
Product ID: ENT-030-2010.9.0
Config file '/usr/local/etc/yandex/yandex.cfg' was parsed with the message(s):
Processing of '/usr/local/etc/yandex/yandex.cfg':
Warning at line 5, col 2: section 'Server' not allowed here and will be ignored.

Start indexing...

Start indexing...
Aborted

WTF? The logs contain:

Working with "webds" data source...
Mon Oct 31 16:56:33 2011 [Webds] [INFO] - Indexing: datasource webds opened successfully
Indexing was finished at Mon Oct 31 16:56:33 2011
It has been indexed 0 documents.
Index contains 0 documents.
Error: std::bad_alloc
Indexing was started at Mon Oct 31 16:57:08 2011


It turned out that the container with Yandex Server just did not have enough memory allocated... After increasing it from 128 to 512 MB, the server started building indexes...

Tuesday, October 4, 2011

OpenVZ and Java

Today we encountered an interesting bug in Proxmox 1.9: Java runs very slowly in a container and randomly fails. There are two possible workarounds: give the container at least two CPUs, or downgrade the kernel to 2.6.32-4-pve. I opted for the first.
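The first workaround is a one-liner on the host. A sketch, with CTID 101 as an illustrative container ID:

```
# give the container two virtual CPUs and persist it in its conf file
vzctl set 101 --cpus 2 --save
```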

Wednesday, February 9, 2011

More troubles with OpenVZ

I ran into two problems with OpenVZ:
1) udevd works strangely; in particular, /dev/ptmx gets incorrect permissions, so xterm, gnome-terminal, etc. don't work. Solved with "chmod 666 /dev/ptmx" in rc.local.
2) su doesn't work for a regular user: it asks for a password, and when the password is correct it returns immediately with "incorrect password"; when the password is wrong, it returns the same after a timeout. I decided to use sudo instead.

VNC worked without surprises: I just created the necessary entries in /etc/sysconfig/vncservers:

VNCSERVERS="2:username"
VNCSERVERARGS[2]="-nohttpd"

I had to replace ~/.vnc/xstartup with the following to restart gnome-session whenever it exits:

#!/bin/sh
while /bin/true; do
    /usr/bin/gnome-session
done


Now I have to install some supplementary software (OpenJDK, NetBeans, Oracle SQL Developer) and create a template from these settings. After finishing the template, I'll have about 20 VMs for students...

OpenVZ and Oracle

I'm preparing for a new Oracle course now. I'd like to provide every student with an OpenVZ container running a developer desktop and Oracle XE. So far, XE seems to run normally in a container. Just set SHMPAGES to at least 524288 in /etc/sysconfig/vz-scripts/<vid>.conf and install XE in the container as follows:

yum install libaio bc
rpm --nopre -ivh oracle-xe-univ-10.2.0.1-1.0.i386.rpm
/etc/rc.d/oracle-xe configure
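For reference, the SHMPAGES setting itself looks like this. A sketch, with CTID 101 as an illustrative container ID (the value is barrier:limit, counted in 4 KB pages, so 524288 pages = 2 GB of shared memory):

```
# /etc/sysconfig/vz-scripts/101.conf
SHMPAGES="524288:524288"
```

The same can be applied from the host without editing the file by hand: `vzctl set 101 --shmpages 524288:524288 --save`.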