Over the past few days I have tried to set up CernVM on a Nimbus cloud in order to get a local ATLAS Software Release running. Along the way some problems came up. One of them could be solved by instructing the Xen hypervisor to use the proper Linux kernel.
The software components of CernVM are managed with rPath’s package manager conary. The standard CernVM image is very slim; important packages, even a compiler, are missing. To turn this slim system into a platform that can correctly run e.g. ATLAS Software (or the software of any other LHC experiment), one needs to “migrate” the system to a special pre-defined group that provides the required packages. In the case of ATLAS Software one migrates the standard CernVM to “group-atlas”. Among others, GCC, compat-libgfortran and libstdc++ then get installed into the system.
The corresponding command (to be executed as root) is:
$ conary migrate group-atlas --interactive
But it resulted in:
Troves being installed appear to conflict:
   glibc:lib -> /conary.rpath.com@rpl:devel//1/2.3.6-8.9-1[~!bootstrap,~glibc.tls,~nptl,~!xen is: x86_64]->/conary.rpath.com@rpl:devel//1/2.3.6-8.4-1[~!bootstrap,~glibc.tls,~nptl is: x86_64]
   glibc:runtime -> /conary.rpath.com@rpl:devel//1/2.3.6-8.9-1[~!bootstrap,~glibc.tls,~nptl,~!xen is: x86_64]->/conary.rpath.com@rpl:devel//1/2.3.6-8.4-1[~!bootstrap,~glibc.tls,~nptl is: x86_64]
This caused some confusion, but after discussing the issue for a few days in this Savannah support ticket, the solution was found (thanks to Predrag Buncic, Tim Freeman and Artem Harutyunyan):
On Xen-based systems the kernel is handed to the hypervisor separately from the VM image. Nimbus deploys VMs with its own kernel (as EC2 does, too).
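Just to illustrate what “separately” means here: a minimal Xen domU configuration might look roughly like the sketch below. All paths and names are made up for illustration; the configuration Nimbus actually generates looks different.
kernel  = "/opt/kernels/vmlinuz-cernvm"        # kernel file lives on the host, not inside the image
ramdisk = "/opt/kernels/initrd-cernvm.img"     # matching initrd, also provided by the host
memory  = 1024
name    = "cernvm"
disk    = ["file:/var/lib/xen/images/cernvm.img,xvda,w"]   # the CernVM disk image itself
vif     = ["bridge=xenbr0"]
root    = "/dev/xvda ro"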
This is what a fresh CernVM 1.2.0 reports about itself when deployed on the Nimbus cloud:
$ uname -a
Linux tp-x001.ci.uchicago.edu 2.6.18-xen #2 SMP Wed Apr 16 12:47:36 CDT 2008 x86_64 x86_64 x86_64 GNU/Linux
Conary seems to get confused by this foreign kernel (whose modules are missing from the image), so one has to extract the original CernVM kernel and make it available to the Xen hypervisor. In contrast to EC2, where you can choose from a large pool of kernels and even submit your own, Nimbus does not offer this option by default. Last year Artem Harutyunyan ran into the same problem. Tim Freeman, a Nimbus developer, added the CernVM kernel to the cloud and modified the Nimbus client so that the user can specify the kernel in the deployment command. This change has not been included in a public release of Nimbus’ cloud client.
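For completeness, pulling the kernel and initrd out of a CernVM image could be done roughly along these lines. This is only a sketch: it assumes the image file is called cernvm.img and contains a root filesystem that can be loop-mounted directly (a partitioned image would additionally need an offset or kpartx), and /opt/kernels is just a placeholder target directory.
$ mkdir -p /mnt/cernvm
$ mount -o loop,ro cernvm.img /mnt/cernvm                                 # mount the image read-only
$ cp /mnt/cernvm/boot/vmlinuz-* /mnt/cernvm/boot/initrd-* /opt/kernels/   # copy kernel and initrd to the host
$ umount /mnt/cernvm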
I used this modified client to deploy a new VM with the kernel mentioned above (“alien5”), which is the kernel extracted from CernVM 1.01:
$ ./bin/cloud-client.sh --conf /path/to/cloud.properties --run --name cernvm_120x86_dotSSHadded.gz --hours 5 --kernel alien5
It started up without problems and showed this system information:
$ uname -a
Linux tp-x002.ci.uchicago.edu 2.6.21.7-5.smp.pae.gcc4.1.x86.i686.xen.domU #1 SMP Mon Oct 6 16:34:24 EDT 2008 i686 athlon i386 GNU/Linux
Now the migration succeeded:
$ conary migrate group-atlas --interactive
Resolving dependencies...
The following updates will be performed:
Job 1 of 9:
    Install info-vcsa(:user)=1-1-0.1
Job 2 of 9:
    Install cernvm-plugin-releasemgr(:python :runtime)=20090223-3-1
[...]
Migrate erases all troves not referenced in the groups specified.
continue with migrate? [y/N] y
Applying update job 1 of 9:
    Install info-vcsa(:user)=1-1-0.1
Applying update job 2 of 9:
    Install cernvm-plugin-releasemgr(:python :runtime)=20090223-3-1
    Install compat-db4(:lib)=4.2-3-1
    Install compat-libgfortran(:lib)=4.1.2-1-1
    Install compat-libstdc++-slc3(:lib)=3.2.3-2-1
[...]
Applying update job 6 of 9:
    Install gcc(:devel :devellib :lib :runtime)=3.4.4-9.4-1
[...]
    Install xterm(:lib :runtime)=202-5.3-1
Applying update job 9 of 9:
[...]
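A quick way to double-check afterwards that the group and the compiler really arrived is conary’s query command; the trove names below are just the ones I would expect from the output above, so they may vary:
$ conary q group-atlas                  # verify the group itself is installed
$ conary q gcc compat-libgfortran       # spot-check individual packages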
After this I could start setting up a local ATLAS Software Release using pacman. There the next problem occurred: Problems at Boston University’s ATLAS Software mirror