RAL Tier1 weekly operations castor 23/02/2018

From GridPP Wiki
Latest revision as of 12:28, 23 February 2018

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

gdss762 was removed from production and is now back in read-only mode

aliceDisk ran out of space - the VO's responsibility

Operation news

vCert is back in production

Patching of the Castor Standby Environments

Plans for next week

Patching of the Neptune and Pluto DBs and testing of switching over to R26 on Thu 01/03: rolling update and test of the switchover to standby. CASTOR downtime from

Patching of the Juno database on 7th of March: CASTOR downtime from 10:30 to 13:30

GP: Fix-up on Aquilon SRM profiles: 1) Move nscd feature to a sub-dir - task: https://phabricator.gridpp.rl.ac.uk/T2 2) Make castor/cron-jobs/srmbed-monitoring part of the castor/daemons/srmbed feature - task

Long-term projects

Headnode migration to Aquilon - Stager, scheduler, utility and nameserver configuration mainly complete. Stager, scheduler and utility tested separately and all together on preprod. Will start combining the Stager, scheduler and utility features on one node.

HA-proxyfication of the CASTOR SRMs: HAProxy is back and can be tested on preprod
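For context, TCP-mode pass-through is the natural shape for proxying SRM, since the GSI/TLS session has to terminate on the SRM daemon itself rather than on the proxy. A minimal haproxy.cfg sketch along those lines is below; the server names are placeholders, not the real RAL preprod endpoints, and the port and balance settings are assumptions.

```
# Sketch only - hostnames are placeholders, not the RAL SRM nodes
frontend srm_in
    bind *:8443
    mode tcp
    default_backend srm_nodes

backend srm_nodes
    mode tcp
    balance roundrobin
    # TCP mode passes the GSI/TLS handshake through to the SRM daemon untouched
    server srm01 srm01.example.ac.uk:8443 check
    server srm02 srm02.example.ac.uk:8443 check
```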

Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.

Draining of 4 x generation-13 disk servers from ATLAS, to be redeployed on genTape

Draining of 10% of the generation-14 disk servers

Actions

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/

RA to organise a meeting with the Fabric team to discuss outstanding issues with Data Services hardware

GP to talk to Alastair about draining of 10% of 14 gen disk servers

GP/RA to write a Nagios test to check for a large number of requests that remain pending for a long time
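As a starting point for that action, a minimal sketch of the check logic is below. The warn/crit thresholds are assumptions, and the database query shown in the comment is illustrative only, not the actual CASTOR stager schema; the real check would pull the count of long-pending requests from the stager DB.

```python
# Hedged sketch of the proposed Nagios check for long-pending CASTOR
# requests. Thresholds and the DB query are illustrative assumptions.

# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL = 0, 1, 2

def classify(stale_count, warn=50, crit=200):
    """Map a count of requests pending past the age cut-off onto a
    Nagios status code and a one-line status message."""
    if stale_count >= crit:
        return CRITICAL, f"CRITICAL: {stale_count} long-pending requests"
    if stale_count >= warn:
        return WARNING, f"WARNING: {stale_count} long-pending requests"
    return OK, f"OK: {stale_count} long-pending requests"

# A real plugin would obtain stale_count from the stager DB, e.g.
# (table and column names hypothetical):
#   SELECT COUNT(*) FROM subrequest WHERE creationtime < sysdate - 1/24
# and then finish with:
#   code, msg = classify(stale_count)
#   print(msg)
#   sys.exit(code)
```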

Staffing

RA on call