GridPP nitty gritty

== How does GridPP distribute the computing workload? ==

GridPP is a collaboration of 19 UK universities. While individual research groups at these universities are members of specific experiments, the computing resources are shared: Imperial, for example, is a member of CMS but still processes ATLAS jobs as long as there is capacity, and the reverse is true for QMUL, which is a member of ATLAS but also processes CMS jobs. Additionally, at least for certain experiments, sites process data stored at other (nearby) sites, so when QMUL processes CMS data, the data is actually located at Imperial, and so on. You will therefore see sites providing DUNE resources that are not involved in DUNE and never will be.

Right now (August 2018), a quick query of the information system shows the following UK sites supporting DUNE: Lancaster, QMUL, RHUL, ECDF, Imperial, Liverpool, Manchester, Brunel, Bristol, Oxford and Sheffield. Note that some of these sites provide only SL6 or SL7, in case this is an issue for DUNE.
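Such a query can be run against the top-level BDII with any LDAP client. Below is a minimal sketch in Python using the third-party ldap3 package (pip install ldap3); the BDII host lcg-bdii.cern.ch and the GLUE 1.3 attribute names are assumptions based on the common configuration, so adjust them for your deployment.

<syntaxhighlight lang="python">
# Sketch: list computing elements advertising DUNE support in the
# top-level BDII.  Assumes lcg-bdii.cern.ch:2170 is a reachable
# top-level BDII and that the GLUE 1.3 schema is published.
from ldap3 import Server, Connection

server = Server("lcg-bdii.cern.ch", port=2170)
conn = Connection(server, auto_bind=True)  # anonymous bind is standard for BDII

# GLUE 1.3: computing elements publish the VOs they accept in
# GlueCEAccessControlBaseRule entries of the form "VO:<name>".
conn.search(
    search_base="o=grid",
    search_filter="(&(objectClass=GlueCE)(GlueCEAccessControlBaseRule=VO:dune))",
    attributes=["GlueCEUniqueID", "GlueCEHostingCluster"],
)

for entry in conn.entries:
    print(entry.GlueCEUniqueID.value, entry.GlueCEHostingCluster.value)
</syntaxhighlight>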

== Contacting GridPP for technical reasons ==

The computing is handled by GridPP personnel rather than by the experiment groups at the sites. You can always find the relevant contact information for a site in the GOCDB: [https://goc.egi.eu/portal/ https://goc.egi.eu/portal/]. The sites (apart from RAL) don't really have enough staff to designate a contact person for each experiment; even at Imperial, one of the biggest institutions involved, everyone does everything. So using the designated contact lists is a more sustainable approach than emailing individuals at sites.
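For scripted lookups, GOCDB also exposes a programmatic interface (GOCDB-PI) that returns XML. The sketch below assumes the public get_site method and the CONTACT_EMAIL element of that interface, and uses Imperial's site name purely as an example; verify the method names and output fields against the GOCDB documentation before relying on them.

<syntaxhighlight lang="python">
# Sketch: fetch a site's contact e-mail via the GOCDB programmatic
# interface.  The "get_site" method and the CONTACT_EMAIL element are
# assumptions based on the public GOCDB-PI; check the GOCDB docs.
import urllib.request
import xml.etree.ElementTree as ET

SITE = "UKI-LT2-IC-HEP"  # example site name (Imperial); substitute your own

url = f"https://goc.egi.eu/gocdbpi/public/?method=get_site&sitename={SITE}"
with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

# The response wraps one or more <SITE> elements in a results document.
for site in tree.getroot().findall("SITE"):
    print(site.get("NAME"), site.findtext("CONTACT_EMAIL"))
</syntaxhighlight>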

We also use GGUS ([https://ggus.eu/ https://ggus.eu/]) for issue tracking. This has the advantage that we can all see each other's issues and intervene if necessary. You can file a ticket as a "service request" rather than an incident if you want to use it for setting up a site. Currently there is no designated entry for DUNE in the GGUS VO fields, so choose "other" and mention DUNE in the ticket description.

== Certificates ==

GridPP only accepts certificates from trusted certificate authorities; for the US these are OSG and CILogon Silver. The full list of trusted authorities can be found on CVMFS under /cvmfs/grid.cern.ch/etc/grid-security/certificates. There are instructions available if you want to install them locally.
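As a quick sanity check that the trust anchors are visible on a worker node or UI, you can list the CA certificates in that CVMFS directory. A minimal sketch, assuming CVMFS is mounted and the openssl binary is on the PATH:

<syntaxhighlight lang="python">
# Sketch: print the subject DN of each trusted CA certificate shipped
# on CVMFS.  Assumes /cvmfs/grid.cern.ch is mounted and that the CA
# bundle includes *.pem files (the IGTF distribution usually does).
import subprocess
from pathlib import Path

CA_DIR = Path("/cvmfs/grid.cern.ch/etc/grid-security/certificates")

for pem in sorted(CA_DIR.glob("*.pem")):
    # "openssl x509 -noout -subject" prints the certificate's subject DN.
    result = subprocess.run(
        ["openssl", "x509", "-in", str(pem), "-noout", "-subject"],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print(f"{pem.name}: {result.stdout.strip()}")
</syntaxhighlight>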