Dirac on a vm at cern

From GridPP Wiki
Work in progress! Use at your own peril!
== Launching a virtual machine at CERN ==

* Use [https://openstack.cern.ch/dashboard OpenStack] at CERN.
* Change "Current Project" to your username.
* Click on "Instances" and then on "Launch Instance".
* Upload your public ssh key (cut and paste will do), if you have not already done so, in the "Access & Security" tab.
* Choose "Any Availability Zone", "m1.medium" and "Boot from Image".
* As the image I am currently trying "SLC6 Server x86_64" (diractest2) and "SLC6 CERN Server x86_64" (diractest).
* Click "Launch".

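If you prefer to script the above, the OpenStack command-line client can perform the same steps. This is a sketch only, not tested against the CERN cloud, and the key pair name "mykey" is an assumption:

<pre>
# upload your public ssh key once (the name "mykey" is illustrative)
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
# launch the instance with the same flavour and image as above
openstack server create --flavor m1.medium \
    --image "SLC6 Server x86_64" --key-name mykey diractest2
</pre>
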
== Claiming all your space ==

The virtual machine should come with 40GB, but the image doesn't expand correctly (this explanation is purposefully vague ;-)).

You will probably see something like this:

<pre>
[root@diractest2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      8.0G  2.2G  5.5G  29% /
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/vda1             388M   34M  334M  10% /boot
</pre>

To claim all your space do:

<pre>
growpart -N /dev/vda 2
growpart /dev/vda 2
reboot
</pre>

After a successful reboot do:

<pre>
pvresize /dev/vda2
lvresize -l +960 -r /dev/mapper/VolGroup00-LogVol00
</pre>

and you should see all the space in VolGroup00-LogVol00 now.

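As a sanity check on the "-l +960": lvresize adds that many physical extents, and assuming this image's volume group uses 32 MiB extents (an assumption, not stated on this page; verify with <code>vgdisplay | grep 'PE Size'</code>), 960 extents come to 30 GiB, taking the 8 GB root volume to roughly the advertised 40 GB:

```shell
# 960 extents at an assumed PE size of 32 MiB (check vgdisplay on the VM)
extents=960
pe_size_mib=32
added_mib=$((extents * pe_size_mib))
echo "${added_mib} MiB = $((added_mib / 1024)) GiB added to LogVol00"
# prints: 30720 MiB = 30 GiB added to LogVol00
```
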
== Getting a hostcert for a virtual machine ==

* Once the machine is up and running, you can log on as root via lxplus.cern.ch
* Go to the new [https://gridca.cern.ch/gridca/ CERN CA]
* Click on "New host certificate". If it whinges about having to see your cert, ignore it; it will ask for it sooner or later. Your user cert should be in your browser at that point though. Click again on "host cert". This time hopefully it works. Under "Host selection" you should find your virtual machine. Click "Request".
* Run the command it gives on your virtual machine and paste the output back (i.e. follow the instructions given on the webpage): <br>
<pre>
openssl req -new -subj "/CN=diractest2.cern.ch" -out newcsr.csr -nodes -sha512 -newkey rsa:2048
</pre>
This should take you to a page "Certificate issued". Follow the instructions.
* Save the base64 encoded certificate as:
<pre>
host.cert
</pre>
* Convert the new certificate from the CERN page to .pem:
<pre>
openssl x509 -in host.cert -out hostcert.pem
</pre>
* Make a backup and save it somewhere other than your virtual machine: <pre> openssl pkcs12 -export -inkey privkey.pem -in host-diractest.cert -out diractest.p12 </pre>
+
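Before relying on that backup, it's worth checking the .p12 can actually be unpacked again. A sketch of the round trip, using a throwaway self-signed pair in place of the real host certificate so it can be tried anywhere (the file names and the empty export password are illustrative, not the values you should use for a real backup):

```shell
# throwaway key + self-signed cert standing in for privkey.pem / host.cert
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=diractest2.cern.ch" -keyout privkey.pem -out hostcert.pem
# bundle them into a PKCS#12 backup (empty password, for the sketch only)
openssl pkcs12 -export -inkey privkey.pem -in hostcert.pem \
    -out diractest.p12 -passout pass:
# unpack the certificate again and confirm the subject survived the round trip
openssl pkcs12 -in diractest.p12 -passin pass: -nokeys -clcerts 2>/dev/null \
    | openssl x509 -noout -subject
```
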
* Place hostcert.pem and hostkey.pem (= privkey.pem) in /etc/grid-security
<pre>
mkdir -p /etc/grid-security
cp hostcert.pem hostkey.pem /etc/grid-security
chmod 400 /etc/grid-security/hostkey.pem
</pre>

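Once hostcert.pem and hostkey.pem are in place, a quick way to confirm they actually belong together is to compare their RSA moduli, which must be identical. A runnable sketch; it generates a throwaway pair first so it works anywhere, but on the real VM you would point the two openssl calls at the files in /etc/grid-security:

```shell
# throwaway pair so the check can be demonstrated without a real host cert
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=diractest2.cern.ch" -keyout hostkey.pem -out hostcert.pem
# a certificate and key match when their RSA moduli are identical
cert_mod=$(openssl x509 -noout -modulus -in hostcert.pem)
key_mod=$(openssl rsa -noout -modulus -in hostkey.pem)
[ "$cert_mod" = "$key_mod" ] && echo "hostcert and hostkey match"
```
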
== Setting up the CAs ==

<pre>
wget http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo -O /etc/yum.repos.d/EGI-trustanchors.repo
yum install ca-policy-egi-core
</pre>

== Setting up a UI ==

(Let's go with EMI3, even though we will need the EMI2 voms clients.) <br>
* Get the repos:
<pre>
(if epel is not present:)
wget http://www.nic.funet.fi/pub/mirrors/fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -i epel-release-6-8.noarch.rpm
wget http://emisoft.web.cern.ch/emisoft/dist/EMI/3/sl6/x86_64/base/emi-release-3.0.0-2.el6.noarch.rpm
yum localinstall emi-release-3.0.0-2.el6.noarch.rpm
</pre>
* epel needs to have priority over the system repos, otherwise the install fails with "emi.saga-adapter.isn-cpp-1.0.3-1.sl6.x86_64 (EMI-3-base) Requires: libxerces-c-3.0.so()(64bit)"
* <pre> yum install emi-ui </pre>
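One way to give epel that priority (assuming the yum-priorities plugin is installed; lower numbers win) is a priority line in the epel repo definition. An illustrative fragment, not taken from this page:

<pre>
# /etc/yum.repos.d/epel.repo (fragment)
[epel]
priority=1
</pre>
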
 
* Get the EMI2 voms clients
<pre>
yum shell
list *voms*
erase voms-clients3.noarch
install voms-clients.x86_64
run
exit
</pre>
 
* Make sure fetch-crl is turned on:
<pre>
chkconfig --list | grep fetch-crl
chkconfig fetch-crl-cron on
service fetch-crl-cron start
</pre>
 
* Trying to get away without running yaim (note: you'll need the vomsdir one way or the other, even if it's doubtful you actually need the whole UI):
<pre>
cd /etc/grid-security
wget http://www.hep.ph.ic.ac.uk/~dbauer/dirac/vomsdir.tar
tar -xf vomsdir.tar
cd /etc
wget http://www.hep.ph.ic.ac.uk/~dbauer/dirac/vomses.tar
tar -xf vomses.tar
(there's something missing here, but I'll get to it)
</pre>
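For reference, each file in /etc/vomses describes one VOMS server on a single line, in the format "alias" "host" "port" "server certificate DN" "vo name". An illustrative entry for dteam (the host, port and DN here are made-up examples, not authoritative values):

<pre>
"dteam" "voms.example.org" "15004" "/C=CH/O=Example/CN=voms.example.org" "dteam"
</pre>
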
 
== Last minute prep ==

* Remove mysql/mysql-libs from the node.
<pre>
yum remove mysql-libs
</pre>
 
* Install WebOb (not correctly pulled in by 3rd party requirements)
<pre>
yum install *WebOb*1.2*
</pre>
 
* If your poison of choice is vim (or for the sake of your own health), install the ncurses terminal definitions to get better terminal access:

<pre>
yum install ncurses-term
</pre>

and set xterm as your terminal for both the root and dirac users:

<pre>
echo 'export TERM=xterm' >> ~/.bashrc
</pre>
 
* Now go to the [https://www.gridpp.ac.uk/wiki/Imperial_Dirac_server Imperial dirac instructions] and good luck....

* Note that the VMs at CERN are not open to the outside world, unless by special dispensation. The only way to test job submission etc. is to have the jobs run at CERN. This snippet (to be inserted in the Sites section) allows you to run jobs at CERN, but I haven't worked out the SE bit yet.
<pre>
LCG.CERN-TEST.ch
{
  CE = ce403.cern.ch
  CE += ce404.cern.ch
  SE = CERN-disk
  Name = CERN-PROD
  CEs
  {
    ce403.cern.ch
    {
      # these parameters seem to get overwritten somewhere
      wnTmpDir = /tmp
      architecture = x86_64
      OS = CentOS_Final_6.4
      SI00 = 2125
      Pilot = True
      CEType = CREAM
      SubmissionMode = Direct
      Queues
      {
        cream-lsf-grid_dteam
        {
          maxCPUTime = 2880
          SI00 = 2125
          MaxTotalJobs = 4000
          VO = dteam
        }
      }
    } # ce403.cern.ch
    ce404.cern.ch
    {
      wnTmpDir = /tmp
      architecture = x86_64
      OS = CentOS_Final_6.4
      SI00 = 2125
      Pilot = True
      CEType = CREAM
      SubmissionMode = Direct
      Queues
      {
        cream-lsf-grid_dteam
        {
          maxCPUTime = 2880
          SI00 = 2125
          MaxTotalJobs = 4000
          VO = dteam
        }
      }
    } # ce404.cern.ch
  } # CEs
} # LCG.CERN-TEST.ch
</pre>
 
Note that while you cannot run jobs outside CERN from a dirac instance made on a CERN VM, you can access outside storage, as long as you only require an outgoing connection. Writing to CERN storage (at least the one available for dteam) fails 4 out of 5 times with a timeout problem; there is not much you can do about this. Here's a snippet for the CERN disk.
<pre>
CERN-disk
{
  BackendType = castor
  DiskCacheTB = 20
  DiskCacheTBSave = 20
  AccessProtocol.1
  {
    Host = srm-public.cern.ch
    Port = 8443 # apparently this can't be found automatically?
    ProtocolName = SRM2
    Protocol = srm
    Path = /castor/cern.ch/grid/dteam/
    Access = remote
    WSUrl = /srm/managerv2?SFN=
  }
} # CERN-disk
</pre>
* My current best-effort config files for a throwaway dirac server at CERN can be found [http://www.hep.ph.ic.ac.uk/~dbauer/dirac/diractest/ here].

<hr>
Go back to the [https://www.gridpp.ac.uk/wiki/Dirac Dirac overview page].
Latest revision as of 14:08, 12 June 2024