VO specific software on the Grid

DRAFT

Accessing software distributed via CVMFS

For most VOs, software is now distributed via CVMFS. The only detail a user (client) needs to know is how the repository (or repositories) is mapped on the Worker Nodes. In this article we will use the gridpp VO repository, which is mapped to /cvmfs/gridpp.egi.eu/. A VO software administrator has uploaded the following example Python script and saved it as testing/HelloWorld/hello.py (the path the job wrapper below expects):


#!/usr/bin/env python
# Example script (Python 2): print a greeting and the version of the
# Python interpreter that ran it.
import sys

print "----------------------"
print "Hello, I'm a snake !  /\/\/o"
print "----------------------"

print " More info:\n"

print(sys.version)
 

It normally takes a few hours before uploaded software becomes available to clients, as changes have to propagate from the repository server through the CVMFS infrastructure to the site caches.
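Once the update has propagated, a quick sanity check can be run from any machine with the CVMFS client configured, such as a worker node or a UI with /cvmfs mounted (a sketch, assuming interactive access to such a machine):

# Listing the directory triggers the CVMFS automount on first access.
ls -l /cvmfs/gridpp.egi.eu/testing/HelloWorld/

# Run the script in place to confirm it works before submitting a job.
/cvmfs/gridpp.egi.eu/testing/HelloWorld/hello.py

Now we need to create a job wrapper (run_hello_cvmfs.sh) which will be submitted as the Dirac executable: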


#!/bin/bash
#
# Job wrapper: run the Python script from the CVMFS repository,
# failing cleanly if the repository path is not mounted.
export GRIDPP_VO_CVMFS_ROOT=/cvmfs/gridpp.egi.eu/testing/HelloWorld
if [ -d "$GRIDPP_VO_CVMFS_ROOT" ]; then
   "$GRIDPP_VO_CVMFS_ROOT/hello.py"
else
   echo "Requested CVMFS directory does not exist: $GRIDPP_VO_CVMFS_ROOT"
   exit 1
fi


The last step is to create a Dirac jdl file (hello_cvmfs.jdl):

[
JobName = "Snake_Job_CVMFS";
Executable = "run_hello_cvmfs.sh";
Arguments = "";
StdOutput = "StdOut";
StdError = "StdErr";
InputSandbox = {"run_hello_cvmfs.sh"};
OutputSandbox = {"StdOut","StdErr"};
]

In the jdl we define the executable (run_hello_cvmfs.sh), which is shipped with the job in the input sandbox; StdOut and StdErr, declared in the output sandbox, are the files we will retrieve at the end. Now we can submit our first CVMFS job (the -f logfile option records the returned job ID in logfile, so the status and output commands below can pick it up from there):

dirac-wms-job-submit -f logfile hello_cvmfs.jdl

Check its status; in our case this returned:

dirac-wms-job-status -f logfile
JobID=5213546 Status=Running; MinorStatus=Job Initialization; Site=VAC.UKI-LT2-RHUL.uk;

When the job finishes, we can retrieve the output (dirac-wms-job-get-output -f logfile), which reads:

----------------------
Hello, I'm a snake !  /\/\/o
----------------------
 More info:

2.7.12 (default, Dec 17 2016, 21:07:48) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-17)]