RAL Tier1 weekly operations castor 23/02/2018

Revision as of 11:31, 23 February 2018

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

gdss762 was removed from production and is now back in read-only mode

aliceDisk ran out of space - this is the VO's responsibility

Operation news

vCert is back in production

Patching of the


Plans for next week

Patching of the Neptune and Pluto DBs, and testing of the switchover to R26, on Thu 01/03

Patching of the Juno database on the 7th of March: CASTOR downtime from 10:30 to 13:30


GP: Fix-up of the Aquilon SRM profiles: 1) move the nscd feature to a sub-dir; 2) make castor/cron-jobs/srmbed-monitoring part of the castor/daemons/srmbed feature

Long-term projects

Headnode migration to Aquilon - Stager configuration and testing complete. Aquilon profiles for lst and utility nodes compile OK. Replicate the current RAL setup as an intermediate step towards making the 'Macro' headnodes.

HA-proxyfication of the CASTOR SRMs: HA proxy is back and can be tested on preprod
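
Since the note above says the preprod HA proxy can now be exercised again, a minimal, purely illustrative connectivity check is sketched below. The hostnames and port are placeholders rather than the real preprod aliases, and the check only confirms that the HAProxy frontend and the individual SRM backends accept TCP connections - it does not perform an SRM transaction.

    #!/usr/bin/env python
    # Minimal sketch: confirm the HAProxy frontend for the preprod SRMs and the
    # backend SRM nodes accept TCP connections. Hostnames and port are placeholders.
    import socket
    import sys

    FRONTEND = ("srm-preprod-haproxy.example.ac.uk", 8443)
    BACKENDS = [("srm-preprod-01.example.ac.uk", 8443),
                ("srm-preprod-02.example.ac.uk", 8443)]

    def can_connect(host, port, timeout=5):
        try:
            socket.create_connection((host, port), timeout).close()
            return True
        except socket.error:
            return False

    if __name__ == "__main__":
        failed = [host for host, port in [FRONTEND] + BACKENDS
                  if not can_connect(host, port)]
        for host in failed:
            print("no TCP connection to %s" % host)
        sys.exit(1 if failed else 0)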

Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.

Draining of 4 x 13-generation disk servers from ATLAS, which will then be deployed on genTape

Draining of 10% of the 14-generation disk servers

Actions

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
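
Before running the full gfal2 unit test suite from the repository above, a quick hand-rolled sanity check against the SRM can be useful. The sketch below is not part of the gfal2 test suite; it uses the gfal2 Python bindings, and the SURL is a placeholder, not a real RAL path.

    #!/usr/bin/env python
    # Rough smoke test: stat (and, if it is a directory, list) a path on the
    # CASTOR SRM via the gfal2 Python bindings. The SURL below is a placeholder.
    import sys
    import gfal2

    SURL = "srm://srm-example.gridpp.rl.ac.uk:8443/castor/example.ac.uk/preprodDisk/test"

    def main():
        ctx = gfal2.creat_context()      # note the spelling: 'creat', not 'create'
        try:
            info = ctx.stat(SURL)        # metadata lookup through the SRM
            print("stat OK: size=%d bytes, mode=%o" % (info.st_size, info.st_mode))
            if info.st_mode & 0o040000:  # S_IFDIR: list directory contents
                for name in ctx.listdir(SURL):
                    print("  entry: %s" % name)
        except gfal2.GError as exc:
            print("gfal2 call failed: %s" % exc)
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())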

Miguel to determine if there is a need for a Nagios test that checks the number of entries in the newrequests table

RA to organise a meeting with the Fabric team to discuss outstanding issues with Data Services hardware

GP to talk to Alastair about the draining of 10% of the 14-generation disk servers

GP/RA to write a Nagios test to check for a large number of requests that remain pending for a long time
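
One possible shape for this check (which would also cover the newrequests count mentioned in the action above) is sketched below: a standard Nagios-style plugin that counts stager requests older than a threshold and exits 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN. The DSN, credentials, thresholds and the creationtime column are placeholder assumptions; only the newrequests table name comes from these minutes.

    #!/usr/bin/env python
    # Sketch of a Nagios check for requests that stay queued too long.
    # Exit codes follow the Nagios convention: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
    import sys
    import cx_Oracle

    WARN, CRIT = 50, 200      # thresholds on the number of old requests (assumed)
    MAX_AGE_HOURS = 6         # how long a request may sit before it counts (assumed)

    QUERY = """SELECT COUNT(*) FROM newrequests
               WHERE creationtime < SYSDATE - :hours/24"""

    def main():
        try:
            conn = cx_Oracle.connect("castor_ro", "secret", "stagerdb")  # placeholders
            cur = conn.cursor()
            cur.execute(QUERY, hours=MAX_AGE_HOURS)
            (count,) = cur.fetchone()
        except cx_Oracle.DatabaseError as exc:
            print("UNKNOWN: query failed: %s" % exc)
            return 3
        status, label = (2, "CRITICAL") if count >= CRIT else \
                        (1, "WARNING") if count >= WARN else (0, "OK")
        print("%s: %d requests older than %dh | old_requests=%d"
              % (label, count, MAX_AGE_HOURS, count))
        return status

    if __name__ == "__main__":
        sys.exit(main())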

Staffing

RA on call