EMITarball

The EMI WN/UI Tarball project is ongoing, headed by members of GridPP - particularly Matt Doidge at Lancaster University. Sadly we lost many of our old wiki pages; we hope this page answers most questions. Please pardon our mess, and contact us if anything is unclear.

Overview

The EMI3 Worker Node and User Interface tarball is produced for SL6 using tools originally written by David Smith (CERN). The goal of the tarball is to allow the EMI WN or UI to be served over a remote export (such as NFS) on an SL6 machine, without installing any extra packages or rpms. The tarballs can also be exported over cvmfs, and the latest versions are available in the grid.cern.ch repository.

Currently only the SL6 versions of the EMI WN and UI tarballs are being developed. Work on an SL7 version will start later in 2015.

Where to Download

The latest versions of the EMI3 WN and UI Tarballs can be found here:

http://repository.egi.eu/mirrors/EMI/tarball/test/sl6/emi3-emi-wn/

(latest emi-wn-3.15.3-1_sl6v1, June 2015 - the first WN tarball with the gfal2 tools in it)

http://repository.egi.eu/mirrors/EMI/tarball/test/sl6/emi3-emi-ui/

(latest emi-ui-3.15.0-1_v1, July 2015 - this version requires a minor "patch", see below. This is also the first of the "lighter-weight" tarballs.)

The latest version of both can be found in the grid.cern.ch cvmfs repository.

/cvmfs/grid.cern.ch/emi3ui-latest
/cvmfs/grid.cern.ch/emi3wn-latest

Environments for these can be set up using /cvmfs/grid.cern.ch/emi3XX-latest/etc/profile.d/setup-XX-example.sh, where XX is ui or wn.
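For example, to pick up the UI environment in a shell or job wrapper (a minimal sketch; the exact script name is inferred from the pattern above and may differ between releases):

source /cvmfs/grid.cern.ch/emi3ui-latest/etc/profile.d/setup-ui-example.sh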

emi-ui-3.15.0-1_v1 patch

It was discovered that the tool glite-ce-job-output has a hardcoded default path for the uberftp client. To work around this, create a config file like:

cat $EMI_TARBALL_BASE/etc/emitar-cream-client.conf 
[
UBERFTP_CLIENT=uberftp
]

And export the variable GLITE_CREAM_CLIENT_CONFIG pointing at this:

export GLITE_CREAM_CLIENT_CONFIG=$EMI_TARBALL_BASE/etc/emitar-cream-client.conf 

You only need to do this if using the glite-ce-job-* tools. The UI in cvmfs has this patch installed.
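For convenience, here is the whole patch as a copy-and-paste shell snippet (a sketch assuming $EMI_TARBALL_BASE points at your unpacked UI and its etc/ directory is writable):

# write the config file telling the CREAM client to find uberftp on the PATH
cat > $EMI_TARBALL_BASE/etc/emitar-cream-client.conf <<EOF
[
UBERFTP_CLIENT=uberftp
]
EOF
# point the glite-ce-job-* tools at it
export GLITE_CREAM_CLIENT_CONFIG=$EMI_TARBALL_BASE/etc/emitar-cream-client.conf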

Tarball Structure

Each tarball currently comes in two parts:

  • The core tarball, containing the unpacked packages from the EMI repository.
  • The "os-extras" tarball, built from packages from the SL and epel repositories.

The rpms that went into building each tarball are listed in "rpmlist.txt" and "os-extras.txt" respectively. A "halfway" approach, installing the packages from the "os-extras" list natively and using only the core tarball, is supported and does work (see the sketch below).
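A sketch of that halfway approach (the tarball filename and install location are illustrative, and we assume os-extras.txt lists one installable package per line - do check the file format in your download):

# install the extras natively from the SL and epel repos...
yum install -y $(cat os-extras.txt)
# ...then unpack only the core tarball
mkdir -p /opt/emi3wn
tar -xzf emi-wn-3.15.3-1_sl6v1.tar.gz -C /opt/emi3wn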

This structure is currently under review and may change.

Tarball Versions

The tarball versions listed may look convoluted, but there is a system to them! The first part denotes which middleware was used to build the tarball (emi-ui or emi-wn), the second is the version of that middleware as denoted by the rpm. The _vX suffix is native to the tarball, denoting the iteration of the tarball for that particular middleware release (things don't always go right first time). For example, emi-wn-3.15.3-1_sl6v1 is the first (v1) tarball built from the emi-wn 3.15.3-1 middleware release for SL6.

How to install

1. Download the tarball.
2. Unpack the tarball (tar -xzf ....) to an exported volume or onto the node itself. If using the os-extras tarball you will need to download and unpack it in the same directory.
3. Write or edit a script that points such variables as PATH, LD_LIBRARY_PATH, VOMS_USERCONF etc at the tarball. An example setup script is placed in etc/profile.d/ in each tarball, and a minimal sketch follows this list.
4. That's it - you should have a working set of the UI and WN middleware. Some extra work is needed for the vomsdir, vomses, CA and CRLs. For a WN you will have to set up the users and batch system yourself.
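As a starting point for step 3, a minimal setup script (all paths here are illustrative; compare with the etc/profile.d/ example shipped in the tarball):

# point the environment at the unpacked tarball (location is illustrative)
export EMI_TARBALL_BASE=/opt/emi3wn
export PATH=$EMI_TARBALL_BASE/bin:$EMI_TARBALL_BASE/usr/bin:$PATH
export LD_LIBRARY_PATH=$EMI_TARBALL_BASE/lib64:$EMI_TARBALL_BASE/usr/lib64:$LD_LIBRARY_PATH
# vomses, vomsdir, CAs and CRLs still need providing, e.g. from a site-managed area
export VOMS_USERCONF=$EMI_TARBALL_BASE/etc/vomses
export X509_CERT_DIR=/etc/grid-security/certificates
export X509_VOMS_DIR=/etc/grid-security/vomsdir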

If you want to use the tarball from the grid.cern.ch cvmfs repo, simply replace the unpacking step with setting up cvmfs, enable the grid.cern.ch repository, and point your scripts there instead. Examples are stored in the repository, which is maintained by the tarball team.
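For example, a minimal cvmfs client configuration enabling the repository (the proxy value is a placeholder for your site squid):

# /etc/cvmfs/default.local (illustrative)
CVMFS_REPOSITORIES=grid.cern.ch
CVMFS_HTTP_PROXY="http://squid.example.com:3128"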

Notes for tarballs containing the gfal2 tools

In order for the GFAL2 tools to work from the tarball, some additions to the environment are needed (a combined example follows this list):

  • You need to include the 32-bit site-python in your PYTHONPATH as well as the 64-bit, e.g.:
  PYTHONPATH=$EMI_TARBALL_BASE/usr/lib64/python2.6/site-packages:$EMI_TARBALL_BASE/usr/lib/python2.6/site-packages:$PYTHONPATH
  • You need to include these two GFAL specific variables, e.g.:
  GFAL_PLUGIN_DIR=$EMI_TARBALL_BASE/usr/lib64/gfal2-plugins/
  GFAL_CONFIG_DIR=$EMI_TARBALL_BASE/etc/gfal2.d/
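With those additions in place, a quick smoke test (the storage endpoint is purely illustrative, and a valid grid proxy is needed):

# exports matching the examples above
export PYTHONPATH=$EMI_TARBALL_BASE/usr/lib64/python2.6/site-packages:$EMI_TARBALL_BASE/usr/lib/python2.6/site-packages:$PYTHONPATH
export GFAL_PLUGIN_DIR=$EMI_TARBALL_BASE/usr/lib64/gfal2-plugins/
export GFAL_CONFIG_DIR=$EMI_TARBALL_BASE/etc/gfal2.d/
# list a remote directory with the gfal2 tools (get a proxy first, e.g. via voms-proxy-init)
gfal-ls srm://se.example.com/dpm/example.com/home/dteam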

Future Plans

As of the latest emi-ui tarball (3.15) we have moved to creating the UI tarball for a more up to date platform, greatly reducing the size and number of packages included. If this causes problems for you please let us know.


Planned new structure

The tarballs are currently produced on the same basic-server-install SL6 VM that they have been for the last few years - kept up to date but otherwise untouched. This has left some problems, with some fairly low-level libraries being rolled into the tarballs. There is also the problem that the WN tarball ideally requires the HEPOSLIBS metapackage(s) to be installed, which can compound the previous problem whilst working against the idea of a "complete tarball". Finally, feedback from some sites asked for the epel and SL repo rpms to be separated within the "os-extras" tarballs.

These factors have led to us considering a change in the tarball production infrastructure and methodology:

  • Tarballs will be produced on a node with the HEPOSlibs already installed, to try to reduce the number of "low-level" libraries appearing in them.
  • The "os-extras" tarball will be split to "sl-extras" and "epel-extras".
  • A single "full" version of the tarball, made from the base, extras and heposlibs rpms, will be produced on a separate (but cloned) node. This full tarball will mainly be intended for use in cvmfs (aiming for a paradigm where all you need is cvmfs installed).

SL7 tarball

Currently it is looking like the SL7 tarball will start off being an "ad-hoc" affair, consisting of a list of (to-be-identified) utilities pulled into a relocatable distribution.

How to contact us

The EMI tarball has its own GGUS support group; this is one of the better ways of getting in touch, and of course the place to submit tickets, either about the regular tarball or the tarballs within grid.cern.ch.

https://wiki.egi.eu/wiki/GGUS:UI_WN_Tarball_FAQ

There is a tarball email address:

tarball-grid-support atSPAMNOT cern.ch

Old docs

The old docs have been secreted here. We hope to improve the documentation over time, when we have time!