EMITarball
The EMI WN/UI Tarball project is ongoing, headed by members of GridPP - particularly Matt Doidge at Lancaster University. Sadly much of our old wiki content was lost; we hope this page answers most questions. Please pardon our mess, and contact us if you have any questions.
Overview
The EMI3 Worker Node and User Interface tarball is produced for SL6 using tools originally written by David Smith (CERN). The goal of the tarball is to allow the EMI WN or UI to be served over a remote export (such as NFS) on a SL6 machine, without installation of any extra packages or rpms. The tarballs can also be exported over cvmfs, and the latest versions of the tarball are available in the grid.cern.ch repository.
Currently only the SL6 versions of the EMI WN and UI tarballs are being developed. Exploratory work on an EL7 version has started in Q2 2016.
Where to Download
The latest versions of the EMI3 WN and UI Tarballs can be found here:
http://repository.egi.eu/mirrors/EMI/tarball/test/sl6/emi3-emi-wn/
(latest emi-wn-3.17.1-1_v2, Sept 2016)
http://repository.egi.eu/mirrors/EMI/tarball/test/sl6/emi3-emi-ui/
(latest emi-ui-3.17.1-1_v3, September 2016, with the 2.1.7 version of canl-c)
The latest version of both can be found in the grid.cern.ch cvmfs repository.
/cvmfs/grid.cern.ch/emi3ui-latest
/cvmfs/grid.cern.ch/emi3wn-latest
Environments for these can be set up using /cvmfs/grid.cern.ch/emi3XX-latest/etc/profile.d/setup-XX-example.sh
Please note that by default the cvmfs clients all use the same vomsdir and vomses settings in /cvmfs/grid.cern.ch/etc/grid-security/. If you would like a VO added to this, please contact tarball support.
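For example, to set up the UI environment from the cvmfs repository:
source /cvmfs/grid.cern.ch/emi3ui-latest/etc/profile.d/setup-ui-example.sh
(substitute wn for ui in the path and script name to get the Worker Node equivalent).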
emi-ui-3.15.0-1_v1 patch
It was discovered that the tool glite-ce-job-output has a hardcoded default path for the uberftp client. To overcome this one needs to create a config file like:
cat $EMI_TARBALL_BASE/etc/emitar-cream-client.conf
[
UBERFTP_CLIENT=uberftp
]
Then export the variable GLITE_CREAM_CLIENT_CONFIG pointing at this file:
export GLITE_CREAM_CLIENT_CONFIG=$EMI_TARBALL_BASE/etc/emitar-cream-client.conf
You only need to do this if using the glite-ce-job-* tools. The UI in cvmfs has this patch installed.
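For reference, the two steps can be combined into a short snippet like this (a sketch, assuming EMI_TARBALL_BASE already points at your unpacked tarball and its etc/ directory is writable):
cat > $EMI_TARBALL_BASE/etc/emitar-cream-client.conf <<EOF
[
UBERFTP_CLIENT=uberftp
]
EOF
export GLITE_CREAM_CLIENT_CONFIG=$EMI_TARBALL_BASE/etc/emitar-cream-client.conf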
gsiscp
gsiscp, part of the gsi ssh tools included in the tarball, has a hardcoded default path for gsissh. This can be overcome by specifying which ssh program to use (the "-S" option).
gsiscp -S gsissh .....
One can make things a bit easier by aliasing this in your shell (this alias is not included in the example environment setup scripts).
alias gsiscp='gsiscp -S gsissh'
Tarball Structure
Each tarball currently comes in two parts:
- The core tarball, containing the unpacked packages from the EMI repository.
- The "os-extras" tarball, built from packages from the SL and epel repositories.
The rpms that went into building each tarball are listed in "rpmlist.txt" and "os-extras.txt" respectively. A "halfway" approach to the tarball, installing packages from the "os-extras" list and only using the core tarball, is supported and does work.
This structure is currently under review and may change.
Tarball Versions
The tarball versions listed may look convoluted, but there is a system to them! The first part denotes which middleware was used to build the tarball (emi-ui or emi-wn), the second is the version of that middleware as given by the rpm. The _vX suffix is specific to the tarball, denoting the iteration of the tarball for that particular middleware release (things don't always go right first time).
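For example, emi-ui-3.17.1-1_v3 is the third tarball iteration (_v3) built from version 3.17.1-1 of the emi-ui middleware.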
How to install
1. Download the tarball.
2. Unpack the tarball (tar -xzf ....) to an exported volume or onto the node itself. If using the os-extras tarball you will need to download and unpack it in the same directory.
3. Write or edit a script that points such variables as PATH, LD_LIBRARY_PATH, VOMS_USERCONF etc. at the tarball. An example setup script is placed in etc/profile.d/ in each tarball (see the sketch below).
4. That's it - you should have a working set of the UI and WN middleware. Some extra work is needed for the vomsdir, vomses, CAs and CRLs. For a WN you will have to set up the users and batch system yourself.
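To illustrate step 3, a minimal setup script might look something like the sketch below (the unpack location /opt/emi3wn and the exact paths are assumptions - the example script shipped in etc/profile.d/ is more complete and should be preferred):
export EMI_TARBALL_BASE=/opt/emi3wn                        # assumed unpack location
export PATH=$EMI_TARBALL_BASE/bin:$EMI_TARBALL_BASE/usr/bin:$PATH
export LD_LIBRARY_PATH=$EMI_TARBALL_BASE/lib64:$EMI_TARBALL_BASE/usr/lib64:$LD_LIBRARY_PATH
export VOMS_USERCONF=$EMI_TARBALL_BASE/etc/vomses          # assumed vomses location
export X509_CERT_DIR=/etc/grid-security/certificates       # CAs/CRLs maintained on the node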
If you want to use the tarball in the grid.cern.ch cvmfs repo, simply replace the unpacking of the tarball with setting up cvmfs, enable the grid.cern.ch repository, and point your scripts there instead. Example setup scripts are stored in the repository, which is maintained by the tarball team.
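A minimal sketch of pointing an existing cvmfs client at the repository (assuming cvmfs and the standard CERN configuration are already installed; exact settings vary by site):
# add grid.cern.ch to the CVMFS_REPOSITORIES list in /etc/cvmfs/default.local, then:
cvmfs_config reload
cvmfs_config probe grid.cern.ch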
Notes for tarballs containing the gfal2 tools
In order for the GFAL2 tools to work from the tarball there need to be some additions to the environment:
- You need to include the 32-bit site-python in your PYTHONPATH as well as the 64-bit, e.g.:
export PYTHONPATH=$EMI_TARBALL_BASE/usr/lib64/python2.6/site-packages:$EMI_TARBALL_BASE/usr/lib/python2.6/site-packages:$PYTHONPATH
- You need to include these two GFAL specific variables, e.g.:
export GFAL_PLUGIN_DIR=$EMI_TARBALL_BASE/usr/lib64/gfal2-plugins/
export GFAL_CONFIG_DIR=$EMI_TARBALL_BASE/etc/gfal2.d/
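With those set, a quick check that the tools are being picked up from the tarball (a sketch - the file:// URL is just an arbitrary local test and assumes the gfal2 file plugin is present):
which gfal-ls
gfal-ls file:///tmp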
CVMFS notes
We advise that sites maintain their own tarball profile scripts, but a functional UI can be obtained on a node with the grid.cern.ch repo enabled simply by sourcing /cvmfs/grid.cern.ch/emi3ui-latest/etc/profile.d/setup-ui-example.sh. This will use the certificates, vomsdir and vomses in /cvmfs/grid.cern.ch/etc/grid-security - which are not configured for all VOs. Please contact the tarball support team if you would like a VO added or an entry updated.
GLEXEC and the Tarball
Due to the delicate, secure and highly customisable nature of glexec we are unable to supply a proper "relocatable" distribution of the glexec tools. Sites will have to build their own, which has some caveats. On top of building glexec with the binary and config path prefixes you require, one needs to account for glexec not using $LD_LIBRARY_PATH: either install the dependencies on the node (they should all be available from the normal or EPEL repos), or install the libraries into your tarball paths[1] and alter the node's ld.so.conf settings to include the paths in question. It may well be that neither of these options is suitable for most tarball sites.
Please see the page RelocatableGlexec for our forays into attempting to build a glexec tarball.
[1] To make this step easier we are investigating including the glexec dependencies in the regular WN tarball.
As of early 2016 glexec is no longer a WLCG requirement, and after consultation with the glexec devs who advised against making a relocatable or roll-your-own glexec, this work was terminated.
Future Plans
As of the emi-ui tarball 3.15 we have moved to creating the UI tarball for a more up-to-date platform, greatly reducing the size and number of packages included. If this causes problems for you please let us know.
Planned new structure
The tarballs are currently produced on the same basic-server install SL6 VM that they have been for the last few years - kept up to date but otherwise untouched. However, this has left some fairly low-level libraries being rolled into them. There is also the problem that the WN tarball ideally requires the HEPOSlibs metapackage(s) to be installed on the node, which can compound the previous problem whilst working against the idea of a "complete tarball". Finally, feedback has been given that some sites would like the epel and SL repo rpms within the "os-extras" tarballs to be separated.
These factors have led to us considering a change in the tarball production infrastructure and methodology:
- Tarballs will be produced on a node with HEPOSlibs already installed, to try to reduce the number of "low-level" libraries appearing in them.
- The "os-extras" tarball will be split to "sl-extras" and "epel-extras".
- A single "full" version of the tarball, made from the base, extras and HEPOSlibs rpms, will be produced on a separate (but cloned) node. This full tarball will mainly be intended for use in cvmfs (aiming for a paradigm where all you need installed is cvmfs).
EL7 tarball
Work has started on producing a tarball for EL7. Rather than being built from a meta-rpm, these are constructed in a more ad hoc manner, from a list of clients. The list currently looks like:
gfal-tools (full suite of plugins), xrootd-clients, nordugrid-clients, fetch-crl, aria2, pacman, voms-tools, srm-tools (these two require some hacking, but being Java-based should be alright to move).
Current "left out" clients include lcg-utils, glite-ce- clients and rfcp. These may be imported at a later date.
The tarball is being built on a CentOS7 minimal install, which then had the EL7 heposlibs installed on it. A test alpha version of an EL7 UI will hopefully be made available soon.
Update - 19/5/16
A test version of the UI tarball for EL7 is available in cvmfs, in
/cvmfs/grid.cern.ch/centos7-ui-test
You can set up the environment for this by running:
source /cvmfs/grid.cern.ch/centos7-ui-test/etc/profile.d/setup-c7-ui-example.sh
This tarball is made from the following packages (and their dependencies):
gfal2-all gfal2-util gfal2-plugin-xrootd gfal2-python gfalFS aria2 fetch-crl xrootd-client globus-common-progs globus-proxy-utils uberftp globus-gsi-cert-utils-progs globus-gass-copy-progs nordugrid-arc-client nordugrid-arc-plugins-needed nordugrid-arc-plugins-xrootd nordugrid-arc-plugins-gfal nordugrid-arc-plugins-globus gsi-openssh-clients voms-clients-cpp ca-policy-egi-core
For reference please see the JIRA ticket: https://its.cern.ch/jira/browse/MWREADY-128
Update - as noted in the JIRA ticket a more complete tarball based on the rpm created by Andreas is available:
/cvmfs/grid.cern.ch/centos7-ui-preview-v01
Once released, the EL7 tarballs will be available here:
http://repository.egi.eu/mirrors/EMI/tarball/test/centos7
How to contact us
The EMI tarball has its own GGUS support group; this is one of the better ways of getting in touch, and of course the place to submit tickets - either about the regular tarball or the tarballs within grid.cern.ch.
https://wiki.egi.eu/wiki/GGUS:UI_WN_Tarball_FAQ
There is a tarball email address:
tarball-grid-support atSPAMNOT cern.ch
Old docs
The old docs have been secreted here. We hope to improve the documentation over time, when we have time!
This page is a Key Document, and is the responsibility of Matt Doidge. It was last reviewed on 2016-09-30 when it was considered to be 90% complete. It was last judged to be accurate on 2016-09-30.