DIRAC on a VM at CERN
Work in progress! Use at your own peril!
Contents
Launching a virtual machine at CERN
- Use OpenStack at CERN.
- Change "Current Project" to your username.
- Click on "Instances" and then on "Launch Instance"
- Upload your public ssh key (cut and paste will do), if you have not already done so, in the Access & Security tab
- Choose "Any Availability Zone", "m1.medium" and "Boot from Image"
- As the image, I am currently trying "SLC6 Server x86_64" (diractest2) and "SLC6 CERN Server x86_64" (diractest)
- Click "Launch"
Claiming all your space
The virtual machine should come with 40 GB, but the image doesn't expand correctly (this explanation is purposefully vague ;-). You probably see something like this:
[root@diractest2 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  8.0G  2.2G  5.5G  29% /
tmpfs                            2.0G     0  2.0G   0% /dev/shm
/dev/vda1                        388M   34M  334M  10% /boot
To claim all your space do:
growpart -N /dev/vda 2   # dry run first (-N makes no changes)
growpart /dev/vda 2
reboot
After a successful reboot do:
pvresize /dev/vda2
lvresize -l +960 -r /dev/mapper/VolGroup00-LogVol00
and you should see all the space in VolGroup00-LogVol00 now.
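The `+960` in the lvresize call is a count of logical extents, not megabytes. On this image the extent size appears to be 32 MiB (an assumption on my part; `vgdisplay VolGroup00` shows the real value), which makes 960 extents come out at 30 GiB, i.e. growing the 8 GB root towards the full 40 GB. A quick sanity check of the arithmetic:

```shell
# 960 extents * 32 MiB/extent = 30720 MiB = 30 GiB
# (32 MiB extent size is an assumption; check with "vgdisplay VolGroup00")
extent_mib=32
added_extents=960
echo "$(( added_extents * extent_mib / 1024 )) GiB"   # prints: 30 GiB
```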
Getting a hostcert for a virtual machine
- Once the machine is up and running, you can log on as root via lxplus.cern.ch
- Go to the new CERN CA
- Click on "New host certificate". If it whinges about having to see your cert, ignore it, it will ask for it sooner or later. Your user cert should be in your browser at that point though. Click again on "host cert". This time hopefully it works. Under "Host selection" you should find your virtual machine. Click "Request".
- Run the command it gives on your virtual machine and paste the output back (i.e. follow the instructions given on the webpage)
openssl req -new -subj "/CN=diractest2.cern.ch" -out newcsr.csr -nodes -sha512 -newkey rsa:2048
This should take you to a page "Certificate issued". Follow the instructions.
- Make a backup and save it somewhere other than your virtual machine:
openssl pkcs12 -export -inkey privkey.pem -in host-diractest.cert -out diractest.p12
- Place hostcert.pem and hostkey.pem (= privkey.pem) in /etc/grid-security and change the permissions: chmod 400 hostkey.pem
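If you want to confirm that the cert and key you placed in /etc/grid-security actually belong together, you can compare their public-key digests. Sketched below on a throwaway self-signed pair so it can be run anywhere; on the VM you would skip the first command and point the two digest commands at hostcert.pem and hostkey.pem instead:

```shell
# Demo on a throwaway self-signed pair (demo-*.pem are scratch files);
# on the real machine, run the two digest commands on
# /etc/grid-security/hostcert.pem and hostkey.pem instead.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
    -keyout demo-key.pem -out demo-cert.pem -days 1 2>/dev/null
cert=$(openssl x509 -noout -pubkey -in demo-cert.pem | openssl md5)
key=$(openssl pkey -pubout -in demo-key.pem 2>/dev/null | openssl md5)
[ "$cert" = "$key" ] && echo "cert and key match"
```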
Setting up the CAs
- wget http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo -O /etc/yum.repos.d/EGI-trustanchors.repo
- yum install ca-policy-egi-core
Setting up a UI
(Let's go with EMI3, even though we will need the EMI2 voms clients.)
- Get the repos:
(if epel is not present:)
wget http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -i epel-release-6-8.noarch.rpm

wget http://emisoft.web.cern.ch/emisoft/dist/EMI/3/sl6/x86_64/base/emi-release-3.0.0-2.el6.noarch.rpm
yum localinstall emi-release-3.0.0-2.el6.noarch.rpm
- epel needs to have priority over the system repos; otherwise the install fails with:
  "emi.saga-adapter.isn-cpp-1.0.3-1.sl6.x86_64 (EMI-3-base) Requires: libxerces-c-3.0.so()(64bit)"
- yum install emi-ui
- Get the EMI2 voms clients
yum shell
  list *voms*
  erase voms-clients3.noarch
  install voms-clients.x86_64
  run
  exit
- Make sure fetch-crl is turned on:
chkconfig --list | grep fetch-crl
chkconfig fetch-crl-cron on
- Trying to get away without running yaim
cd /etc/grid-security
wget http://www.hep.ph.ic.ac.uk/~dbauer/dirac/vomsdir.tar
tar -xf vomsdir.tar
cd /etc
wget http://www.hep.ph.ic.ac.uk/~dbauer/dirac/vomses.tar
tar -xf vomses.tar
(there's something missing here, but I'll get to it)
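For reference, the unpacked files follow the standard voms layouts. The values below are illustrative only (made-up host and DNs, not taken from the tarballs):

```
# /etc/vomses/<vo>-<host> : one line per VOMS server, five quoted fields
# "<vo>" "<voms host>" "<port>" "<server certificate DN>" "<vo>"
"dteam" "voms.example.org" "15004" "/DC=org/DC=example/CN=voms.example.org" "dteam"

# /etc/grid-security/vomsdir/<vo>/<voms host>.lsc : two lines,
# the server certificate DN followed by the DN of its CA
/DC=org/DC=example/CN=voms.example.org
/DC=org/DC=example/CN=Example CA
```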
Last minute prep
- Remove mysql/mysql-libs from the node: yum remove mysql mysql-libs
- Now go to the Imperial DIRAC instructions and good luck....
- Note that the VMs at CERN are not open to the outside world, unless by special dispensation. The only way to test job submission etc. is to have the jobs run at CERN. This snippet (to be inserted in the Sites section) allows you to run jobs at CERN, but I haven't worked out the SE bit yet.
LCG.CERN-TEST.ch
{
  CE = ce403.cern.ch
  CE += ce404.cern.ch
  SE = CERN-disk
  Name = CERN-PROD
  CEs
  {
    ce403.cern.ch
    {
      # these parameters seem to get overwritten somewhere
      wnTmpDir = /tmp
      architecture = x86_64
      OS = CentOS_Final_6.4
      SI00 = 2125
      Pilot = True
      CEType = CREAM
      SubmissionMode = Direct
      Queues
      {
        cream-lsf-grid_dteam
        {
          maxCPUTime = 2880
          SI00 = 2125
          MaxTotalJobs = 4000
          VO = dteam
        }
      }
    } # ce403.cern.ch
    ce404.cern.ch
    {
      wnTmpDir = /tmp
      architecture = x86_64
      OS = CentOS_Final_6.4
      SI00 = 2125
      Pilot = True
      CEType = CREAM
      SubmissionMode = Direct
      Queues
      {
        cream-lsf-grid_dteam
        {
          maxCPUTime = 2880
          SI00 = 2125
          MaxTotalJobs = 4000
          VO = dteam
        }
      }
    } # ce404.cern.ch
  } # CEs
} # LCG.CERN-TEST.ch
Note that while you cannot run jobs outside CERN from a DIRAC instance made on a CERN VM, you can access outside storage, as long as you only require an outgoing connection. Writing to CERN storage (at least the one available for dteam) fails 4 out of 5 times with a timeout problem; there is not much you can do about this. Here's a snippet for the CERN disk.
CERN-disk
{
  BackendType = castor
  DiskCacheTB = 20
  DiskCacheTBSave = 20
  AccessProtocol.1
  {
    Host = srm-public.cern.ch
    Port = 8443 # apparently this can't be found automatically ?
    ProtocolName = SRM2
    Protocol = srm
    Path = /castor/cern.ch/grid/dteam/
    Access = remote
    WSUrl = /srm/managerv2?SFN=
  }
} # CERN-disk
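To see how those fields fit together: a full SRM v2 URL is srm://Host:Port followed by WSUrl and then the Path. A quick sketch assembling a SURL from the values in the snippet (the file name is made up):

```shell
# Assemble a full SRM URL from the CERN-disk snippet's fields.
host=srm-public.cern.ch
port=8443
wsurl='/srm/managerv2?SFN='
path=/castor/cern.ch/grid/dteam/
file=somefile.txt   # illustrative name
echo "srm://${host}:${port}${wsurl}${path}${file}"
# prints: srm://srm-public.cern.ch:8443/srm/managerv2?SFN=/castor/cern.ch/grid/dteam/somefile.txt
```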
Go back to the Dirac overview page.