In this chapter I will guide you to an Elastic Block Store (EBS) snapshot of the ATLAS Software Release version you wish to use on EC2. This snapshot can be used by the AWSACtools job system, described in chapter 5, The job system: how AWSACtools work.
After launching an instance, an EBS volume will be created, attached to the instance and mounted into the file system. The desired release of the ATLAS Software will be installed using Pacman. Then a standard cmt configuration will be performed that lets you easily initialize your jobs later on. Finally, the content of the EBS volume is saved as an EBS snapshot to S3.
I recommend using Elasticfox to execute the following steps.
Use Elasticfox to create a new EBS volume. It must be in the same availability zone as the instance you just launched. The costs for the EBS volume and for the EBS snapshot depend on the size of the volume/snapshot, so the size should be chosen carefully. I decided to create an EBS volume of 16 GB. The ATLAS Software takes about 8 GB, so the overhead is at most about 8 GB - a waste of money? I am not sure about the consequences of too little free space while working with the ATLAS Software (especially when working on a workspace that is not on the EBS volume). The Kit Validation warns if there is less than 7 GB of free space:
None of the following alternatives are satisfied:
[freeMegsMinimum 7000 free Megabytes at .] is not available.
[WARNING, less than the minimum of 7G free is required to install release, carry on anyway?]
hasn't been asked. Package will not be installed
This message appears after the installation. I do not know which weighs more heavily - the EBS cost or the warning. But for this documentation I decided to play it safe.
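If you prefer the command line to Elasticfox, the volume can also be created and attached with Amazon's EC2 API tools. This is only a sketch: it assumes the tools and your AWS credentials are set up, and the availability zone, volume ID and instance ID are placeholders you have to replace with your own values.
$ ec2-create-volume -s 16 -z us-east-1a
$ ec2-attach-volume vol-XXXXXXXX -i i-XXXXXXXX -d /dev/sdh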
Connect to the instance via SSH. Create the folder /mnt/atlas, format the new device with ext3 and mount it:
$ mkdir /mnt/atlas
$ mkfs.ext3 /dev/sdh
mke2fs 1.35 (28-Feb-2004)
/dev/sdh is entire device, not just one partition!
Proceed anyway? (y,n) y
[...]
$ mount /dev/sdh /mnt/atlas
Let's check whether /mnt/atlas really corresponds to /dev/sdh, the EBS volume:
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 10321208 1845820 7951100 19% /
/dev/sda2 350891748 199372 332868096 1% /mnt
/dev/sdh 16513960 77888 15597212 1% /mnt/atlas
Now we need Pacman, the installer for the ATLAS Software. Download, extract and configure it:
$ wget http://physics.bu.edu/pacman/sample_cache/tarballs/pacman-latest.tar.gz
$ tar xf pacman-latest.tar.gz
$ cd pacman-3.26/
$ source setup.sh
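As a quick sanity check, you can verify that sourcing setup.sh put the pacman command on your PATH (the exact path printed depends on where you extracted the tarball):
$ which pacman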
Install:
In the next steps, the ATLAS Software Release version 14.2.24 will be installed to /mnt/atlas/14.2.24. So the “root” directory of the EBS volume will be 14.2.24.
$ mkdir /mnt/atlas/14.2.24
$ cd /mnt/atlas/14.2.24
It is time to start Pacman and to decide whether you want to perform the Kit Validation (KV) after the installation or not. If you want to:
$ pacman -allow trust-all-caches -get am-BU:14.2.24+KV
If you do not want to perform KV:
$ pacman -allow trust-all-caches -get am-BU:14.2.24
Note
I instructed Pacman to use BU (Boston University) as the download source for the ATLAS Software release, because from EC2's point of view this is much faster than the CERN mirror (Amazon's EC2 data centres are located on the US east coast).
After some time the result (with Kit Validation) should look like:
am-BU:Generic:http://atlas-computing.web.cern.ch/atlas-computing/links/monolith//mnt/atlas/14.2.24
About to execute: ./KitValidation/*/share/KitValidation [...]
############################################################
##       Atlas Distribution Kit Validation Suite          ##
##                 01-10-2008 v1.9.18-1                   ##
##                                                        ##
## Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> ##
############################################################
Testing AtlasProduction 14.2.24
athena executable [PASSED]
athena shared libs [PASSED]
Release shared libraries [PASSED]
Release Simple Checks [ OK ]
Athena Hello World [ OK ]
MooEvent compilation [ OK ]
/mnt/atlas/14.2.24/KV-14.2.24/tmp
DB Release consistency check [ OK ]
##################################################
## AtlasProduction 14.2.24 Validation [ OK ]
##################################################
Now the ATLAS Software Release is stored on the EBS volume and ready to use.
Note
During the Kit Validation of 14.2.24 I got these warnings:
#CMT> Warning: template <src_dir> not expected in pattern install_scripts (from TDAQCPolicy)
#CMT> Warning: template <files> not expected in pattern install_scripts (from TDAQCPolicy)
They are not important and can be ignored:
See also
ATLAS Computing Workbook: «With Release 14.2.23, you might get [...] [these] warnings which [...] can be ignored»
Configure:
Before you can start jobs using the ATLAS Software Release, you have to configure the Linux environment you are working in. This is done by sourcing a setup.sh that is automatically created by the configuration management tool cmt. This setup.sh must be sourced at the beginning of every job shell script (a minimal job script sketch follows below). I will show you how to create this setup.sh using cmt.
The command that causes cmt to create the setup.sh is cmt config. It must be executed in a configuration directory that contains one special configuration file, the so-called requirements file. In this file you specify the individual configuration needed by your jobs. cmt config parses the requirements file, processes the contained information and creates the corresponding setup.sh.
See also
More information about the relationships described above can be found here: ATLAS Computing Workbook - Setting up your account
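To illustrate the idea, here is a minimal sketch of how a job shell script could begin once the setup.sh exists. The paths follow the layout used in this chapter; the actual job payload is only a placeholder comment:
#!/bin/sh
# initialize the ATLAS Software environment first
source /mnt/atlas/14.2.24/cmthome/setup.sh -tag=14.2.24
# ... the actual job commands (e.g. one of the csc_*_trf.py transformations) follow here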
So first we create the configuration directory /mnt/atlas/14.2.24/cmthome, then the requirements file, and finally we invoke cmt config.
$ mkdir /mnt/atlas/14.2.24/cmthome
$ cd /mnt/atlas/14.2.24/cmthome
$ vi requirements
The following content should ensure seamless offline/standalone operation of the ATLAS Software Release:
set CMTSITE STANDALONE
set SITEROOT /mnt/atlas/14.2.24
macro ATLAS_DIST_AREA ${SITEROOT}
apply_tag opt
apply_tag setup
apply_tag noTest
set CMTCONFIG i686-slc4-gcc34-opt
use AtlasLogin AtlasLogin-* $(ATLAS_DIST_AREA)
Modify the content (e.g. the release version number) to your needs and save the file.
See also
The options above, their meanings and - of course - many more options are described here: The AtlasLogin environment setup package (especially here: The AtlasLogin environment setup package - The Home Requirements File and here: The AtlasLogin environment setup package - Available tags)
The next step is to use cmt to process the requirements file. But before cmt can be used, it must itself be initialized by sourcing its corresponding setup.sh at /mnt/atlas/14.2.24/CMT/LATEST_VERSION/mgr/setup.sh. For me this was:
$ source /mnt/atlas/14.2.24/CMT/v1r20p20080222/mgr/setup.sh
Now cmt config can be invoked. This must happen in the configuration directory where we just placed the new requirements file:
$ echo $PWD
/mnt/atlas/14.2.24/cmthome
That's okay; we are in the right place. Now invoke cmt config:
$ cmt config
It should result in
------------------------------------------
Configuring environment for standalone package.
CMT version v1r20p20080222.
System is Linux-i686
------------------------------------------
Creating setup scripts.
Creating cleanup scripts.
Now there are some more files in the current directory (the configuration directory), produced by cmt:
$ ls
cleanup.csh  cleanup.sh  Makefile  requirements  setup.csh  setup.sh
The setup.sh is the file we were after - the one that must be sourced at the beginning of every job shell script. Let's try it now:
$ source /mnt/atlas/14.2.24/cmthome/setup.sh -tag=14.2.24
#CMT> Warning: template <src_dir> not expected in pattern install_scripts (from TDAQCPolicy)
#CMT> Warning: template <files> not expected in pattern install_scripts (from TDAQCPolicy)
Now I will show two ways to check whether this initialization worked properly. First, there should now be many executables beginning with csc in the $PATH. Change to any directory (e.g. /mnt), type csc and press TAB twice.
$ cd /mnt
$ csc + TAB + TAB
This should result in something like this:
csc_4d_segment_performance.exe   csc_fullchain_trf.py
csc_addTruthJetMet_trf.py        csc_genAtlfast08_trf.py
csc_atlasG4_trf.py               csc_genAtlfast_trf.py
csc_atlfast_trf.py               csc_genAtlfastTwoStep08_trf.py
csc_beamgasmix_trf.py            csc_genAtlfastTwoStep_trf.py
csc_beamhalo_trf.py              csc_MergeHIST_trf.py
csc_BSrecoESD_trf.py             csc_mergeHIT_trf.py
csc_BSreco_trf.py                csc_modgen_trf.py
csc_buildTAG_trf.py              cscope
csc_cavernbkg_trf.py             cscope-indexer
csc_cluster_performance.exe      csc_physVal_Mon_trf.py
csc_cosmic_cluster.exe           csc_physVal_trf.py
csc_cosmics_sim_trf.py           csc_RDOtoBS_trf.py
csc_cosmics_trf.py               csc_readasciigen_trf.py
csc_digi_reco_trf.py             csc_recoAOD_trf.py
csc_digi_trf.py                  csc_recoESD_trf.py
csc_evgen08new_trf.py            csc_reco_trf.py
csc_evgen08_trf.py               csc_segment_performance.exe
csc_evgen900_trf.py              csc_simseg_builder.exe
csc_evgen_input_trf.py           csc_simulID_recoFastCaloSim_trf.py
csc_evgen_trf.py                 csc_simul_reco_trf.py
csc_evgenTruthJetMet08_trf.py    csc_simul_trf.py
csc_evgenTruthJetMet_trf.py      csc_writeasciigen_trf.py
The second check is whether the ATLAS extensions were loaded into the cmt path:
$ cmt show path
This should result in something like this:
# Add path /mnt/atlas/14.2.24/AtlasOffline/14.2.24 from initialization
# Add path /mnt/atlas/14.2.24/AtlasAnalysis/14.2.24 from ProjectPath
# Add path /mnt/atlas/14.2.24/AtlasSimulation/14.2.24 from ProjectPath
# Add path /mnt/atlas/14.2.24/AtlasTrigger/14.2.24 from ProjectPath
# Add path /mnt/atlas/14.2.24/AtlasReconstruction/14.2.24 from ProjectPath
# Add path /mnt/atlas/14.2.24/dqm-common/dqm-common-00-05-00 from ProjectPath
# Add path /mnt/atlas/14.2.24/AtlasEvent/14.2.24 from ProjectPath
# Add path /mnt/atlas/14.2.24/AtlasConditions/14.2.24 from ProjectPath
# Add path /mnt/atlas/14.2.24/AtlasCore/14.2.24 from ProjectPath
# Add path /mnt/atlas/14.2.24/DetCommon/14.2.24 from ProjectPath
# Add path /mnt/atlas/14.2.24/GAUDI/v19r9-LCG54g from ProjectPath
# Add path /mnt/atlas/14.2.24/tdaq-common/tdaq-common-01-09-03 from ProjectPath
# Add path /mnt/atlas/14.2.24/LCGCMT/LCGCMT_54g from ProjectPath
This validates the content of your new job configuration directory /mnt/atlas/14.2.24/cmthome. The EBS content is now ready to be backed up into an EBS snapshot.
There is no need to modify the EBS volume anymore. Unmount the corresponding device /dev/sdh:
$ umount /dev/sdh
Then use Elasticfox to detach the EBS from your instance.
Note
Do not detach the volume before the device is unmounted!
Then right-click the EBS volume in Elasticfox and choose to create a snapshot. This will take some time. You can terminate the instance in the meantime, but you must not delete the EBS volume while the snapshot is being created! Remember/note the snapshot ID of the new snapshot! After the snapshot creation has finished, delete the EBS volume.
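Again, the EC2 API tools offer a command line alternative for these steps (a sketch; the volume and snapshot IDs are placeholders):
$ ec2-detach-volume vol-XXXXXXXX
$ ec2-create-snapshot vol-XXXXXXXX
$ ec2-describe-snapshots snap-XXXXXXXX
$ ec2-delete-volume vol-XXXXXXXX
ec2-describe-snapshots shows whether the snapshot is still pending or already completed; delete the volume only after completion.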
Now you have a job-ready ATLAS Software Release stored on S3. Recreating an EBS volume from this snapshot is a near-instantaneous process, since the needed data is loaded onto the new EBS volume in the background.
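When you need the data again, a new volume can be created from the snapshot, either with Elasticfox or with the EC2 API tools (a sketch; the snapshot ID and availability zone are placeholders - choose the zone of the instance that will use the volume):
$ ec2-create-volume --snapshot snap-XXXXXXXX -z us-east-1a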