GDB 14th November 2012


Future meetings

- We've been talking about the future of middleware support with a view to the post-EMI period; this is on the agenda for the December GDB, and Markus Schulz is trying to collect feedback on his proposal by the 20th of November.

- There is a new WLCG Operations Coordination meeting that happens on the 1st and 3rd Thursdays of each month. That's as much as I know about it.

EMI Migration and SHA2 proxies

- On the EMI migration, there's nothing we don't already know. Nineteen sites are currently 'eligible' for suspension for not having migrated or reported plans; none of them are in the UK. There is still some lack of clarity about the plans for migrating worker nodes, but I think we're basically on top of it, notwithstanding the tarball issue (on which, more later).

- On SHA2 - there should not be any use of SHA2 before August 2013; this is after EMI1 goes EOL, so SHA2 support only needs to be targeted at EMI2 versions.

Security Working Group update

There are two strands to this: 'traceability', on which there's been little progress, though Roman asked for more effort from interested people, and the 'identity federation pilot', which sounds a lot like CERN are planning to build something like SARoNGS. It might be as well if someone who actually knows about SARoNGS could get in touch to point out that we already did that.

Information Systems

- There's a new BDII release expected in December, which should appear in EPEL.

- There seems to have been some discussion between WLCG and EGI about whether to set multiple BDIIs in the LCG_GFAL_INFOSYS variable (aka the BDII_LIST YAIM setting), with WLCG thinking that sounded sensible, and EGI apparently thinking it would be better to just use one and try not to ever break it (!). Given that I think we mostly tend to use several, but all UK-based ones, I suggest we just ignore this bit of silliness (there's a rough sketch of the failover this gives clients after this list).

- There is a command line tool available for querying Glue 2 information; it's called 'ginfo' and is in EPEL now.

- Stephen Burke is doing terribly sensible stuff about nailing down how Glue 2 should be used in practice; it's all interesting, but it's probably going to make it to the sites by being codified in middleware rather than being something we need to think about directly. However, for the interested, there is a period for comments open until December 7th.
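As a rough illustration of why setting several BDIIs is useful (and of what clients do with the list), here is a minimal sketch of the failover behaviour. It assumes the python-ldap module and uses made-up hostnames; it is not the actual implementation in the middleware clients.

 import ldap  # python-ldap; assumed to be available on the client machine
 
 # Hypothetical, comma-separated list in the style of LCG_GFAL_INFOSYS / BDII_LIST
 BDII_LIST = "top-bdii1.example.ac.uk:2170,top-bdii2.example.ac.uk:2170"
 
 def query_first_working_bdii(bdii_list=BDII_LIST, base="o=grid",
                              flt="(objectClass=GlueService)"):
     """Try each top-level BDII in turn, returning results from the first
     one that answers - the failover that a multi-entry list buys you."""
     for endpoint in bdii_list.split(","):
         try:
             conn = ldap.initialize("ldap://" + endpoint.strip())
             conn.set_option(ldap.OPT_NETWORK_TIMEOUT, 10)
             return conn.search_s(base, ldap.SCOPE_SUBTREE, flt)
         except ldap.LDAPError:
             continue  # that BDII is unreachable or broken; try the next one
     raise RuntimeError("none of the BDIIs in the list responded")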

Middleware clients and VO software areas

There's been some discussion about the possibility of the VOs maintaining their own copies of the middleware clients in their VO software areas (or their cvmfs areas) rather than having them provided by the nodes. The three major LHC VOs outlined their positions:

- CMS focussed rather more on their ability to work with different WN environments, including getting their software working with EMI 2 WN installs. They did note that they're happy to run on EMI 2 on SL6 nodes, and that any sites who had no other reason to avoid moving to SL6 were 'encouraged' to do so.

- LHCb don't use any tools from the worker node anyway, preferring to use their own copies so that they can keep an eye on the exact versions that they're using. They have no intention of doing anything different, so really don't care what everyone else does.

- ATLAS have been doing some work on getting a tarball that they can unpack into cvmfs, but noted that it's not just a matter of converting the packages with rpm2cpio. Overall it sounds like ATLAS are OK with contributing to the effort, but don't really fancy having to do the whole thing themselves if they can avoid it.

David Smith then reported on the work he's been doing in collaboration with several GridPP sites to try to build a generic tarball install. It essentially works by pulling in all the EMI and EPEL packages required on top of a base SL5 install, converting them via rpm2cpio, and fixing up a few details. A list of the RPMs required from the SL repos that may not be present on a minimal install is also generated, but those packages are not included in the tarball. The tarball also deliberately does not include the Certificate Authorities; sites will need to get and maintain those another way.
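For reference, the per-package conversion step is essentially the standard rpm2cpio-into-cpio trick; here's a minimal sketch (the package list and destination are placeholders, and this is not David's actual build script):

 import os
 import subprocess
 
 def unpack_rpms(rpm_files, dest_dir):
     """Unpack already-downloaded RPMs into a plain directory tree by piping
     rpm2cpio into cpio. No install scriptlets are run, which is one reason
     some details then need fixing up by hand."""
     if not os.path.isdir(dest_dir):
         os.makedirs(dest_dir)
     for rpm in rpm_files:
         rpm2cpio = subprocess.Popen(["rpm2cpio", os.path.abspath(rpm)],
                                     stdout=subprocess.PIPE)
         # -i extract, -d create directories, -m preserve mtimes
         subprocess.check_call(["cpio", "-idm", "--quiet"],
                               stdin=rpm2cpio.stdout, cwd=dest_dir)
         rpm2cpio.stdout.close()
         rpm2cpio.wait()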

There was then quite a bit of discussion which finished with a rough conclusion that we'd work in the general direction of exploring the potential of maybe having the generic tarball install maintained in cvmfs, but in a WLCG repo, not in a specific VO area.

GFAL 2

Finally, there was a report on the state and purpose of GFAL 2, a new (and API-incompatible) successor to the Grid File Access Library and its tools. GFAL 2 is intended to be one client library that can access anything.

- It is usable and 'production quality' now, and is being used for some things (including, I think, FTS 3).

- It includes gfalFS, a FUSE filesystem that allows anything that GFAL 2 understands (like an SE) to be mounted.

- It's in EMI and EPEL now, with Debian packaging being worked on.

- There are plans for GFAL 2 based replacements for the common command line tools (e.g. lcg-cr et al.); the GFAL 2 people are interested in hearing which ones people particularly value and would like to see reimplemented.
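To give a flavour of the 'one library that can access anything' idea, here's a minimal sketch using the gfal2 Python bindings. The URLs are placeholders, and the bindings and method names are assumptions based on the gfal2-python documentation rather than anything shown at the GDB, so check them against whatever version you actually have.

 import gfal2
 
 ctx = gfal2.creat_context()   # note the POSIX-style 'creat' spelling
 
 # The same calls should work whatever protocol the URL uses (SRM, GridFTP, ...)
 src = "srm://se.example.ac.uk/dpm/example.ac.uk/home/dteam/testfile"  # placeholder
 dst = "file:///tmp/testfile"                                          # placeholder
 
 print(ctx.stat(src))       # POSIX-like metadata for a remote file
 print(ctx.listdir("srm://se.example.ac.uk/dpm/example.ac.uk/home/dteam/"))
 
 params = ctx.transfer_parameters()
 params.overwrite = True
 ctx.filecopy(params, src, dst)   # copy, third-party where the protocols allow it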

Next GDB 12th December. Tier-2 rep is Matt.