RAL Tier1 weekly operations castor 16/02/2018

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

DNS issue with the genTape disk servers
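
If useful for follow-up, a minimal diagnostic sketch for this kind of problem is shown below: it checks forward and reverse DNS consistency for a list of disk servers. The hostnames are placeholders, not the real genTape server names.

 #!/usr/bin/env python
 # Sketch: check forward and reverse DNS consistency for a list of disk servers.
 # The hostnames below are placeholders, not the actual genTape servers.
 import socket
 
 HOSTS = ["gentape-disk01.example.org", "gentape-disk02.example.org"]  # hypothetical names
 
 for host in HOSTS:
     try:
         addr = socket.gethostbyname(host)         # forward lookup
         rev_name = socket.gethostbyaddr(addr)[0]  # reverse lookup
     except socket.error as exc:
         print("%s: lookup failed (%s)" % (host, exc))
         continue
     if rev_name.lower() == host.lower():
         print("%s -> %s -> %s: OK" % (host, addr, rev_name))
     else:
         print("%s -> %s -> %s: MISMATCH" % (host, addr, rev_name))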

Operation news

A modified show_waitspace, which parses stagerd.log rather than the output of printmigrationstatus, is now running on all CASTOR Stagers
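
As a rough illustration of the approach only (this is not the actual show_waitspace code, and the stagerd.log line format assumed below is a guess), such a parser scans the log directly and counts migration-related entries per service class:

 #!/usr/bin/env python
 # Illustrative sketch: count migration-related stagerd.log entries per service class.
 # The regular expression and field names are assumptions, not the real log format.
 import re
 from collections import Counter
 
 # Hypothetical key="value" style log pattern.
 MIGR_RE = re.compile(r'MSG="[^"]*[Mm]igration[^"]*".*SvcClass="(?P<svcclass>[^"]+)"')
 
 def count_pending_migrations(path="/var/log/castor/stagerd.log"):
     """Count migration-related log entries per service class."""
     counts = Counter()
     with open(path) as logfile:
         for line in logfile:
             match = MIGR_RE.search(line)
             if match:
                 counts[match.group("svcclass")] += 1
     return counts
 
 if __name__ == "__main__":
     for svcclass, total in sorted(count_pending_migrations().items()):
         print("%-20s %d" % (svcclass, total))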

A version of ralreplicas for repack is now available

New hardware is being ordered for the Facilities headnodes (virtualisation cluster) and disk servers

Plans for next week

GP: Fix-up of the Aquilon SRM profiles: 1) move the nscd feature to a sub-directory; 2) make castor/cron-jobs/srmbed-monitoring part of the castor/daemons/srmbed feature

Long-term projects

Headnode migration to Aquilon - Stager configuration and testing are complete. Aquilon profiles for the lst and utility nodes compile OK. The plan is to replicate the current RAL setup as an intermediate step towards the 'Macro' headnodes.

HA-proxyfication of the CASTOR SRMs (placing the SRM endpoints behind HAProxy)

Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.

Draining of 4 x 13-generation disk servers from ATLAS, which will then be deployed on genTape

Draining of 10% of the 14-generation disk servers

Actions

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
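
Independently of those unit tests, a quick smoke test of basic access could be done with the gfal2 Python bindings. This is only a sketch; the SRM endpoint and path below are placeholders, not the real RAL values.

 #!/usr/bin/env python
 # Minimal gfal2 smoke test against a CASTOR SRM endpoint.
 # The endpoint and path are placeholders; substitute the real RAL values.
 import gfal2
 
 SURL = "srm://srm-example.example.org/castor/example/test/smoke_test_file"
 
 ctx = gfal2.creat_context()
 info = ctx.stat(SURL)          # raises gfal2.GError if the file is not reachable
 print("size=%d bytes, mode=%o" % (info.st_size, info.st_mode))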

Miguel to determine whether there is a need for a Nagios test that checks the number of entries in the newrequests table
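
If such a test is wanted, a minimal Nagios-style sketch might look like the following. It assumes an Oracle stager database reachable via cx_Oracle; the connection details, credentials and thresholds are all placeholders.

 #!/usr/bin/env python
 # Nagios-style check: count rows in the stager newrequests table.
 # Connection details and thresholds are placeholders for illustration.
 import sys
 import cx_Oracle
 
 WARN, CRIT = 1000, 5000   # hypothetical thresholds
 
 def main():
     conn = cx_Oracle.connect("stager_ro", "secret", "stagerdb.example.org/STAGER")
     cursor = conn.cursor()
     cursor.execute("SELECT COUNT(*) FROM newrequests")
     (count,) = cursor.fetchone()
     conn.close()
 
     if count >= CRIT:
         print("CRITICAL: %d entries in newrequests" % count)
         return 2
     if count >= WARN:
         print("WARNING: %d entries in newrequests" % count)
         return 1
     print("OK: %d entries in newrequests" % count)
     return 0
 
 if __name__ == "__main__":
     sys.exit(main())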

RA to organise a meeting with the Fabric team to discuss outstanding issues with Data Services hardware

Staffing

GP on call