RAL Tier1 weekly operations castor 16/02/2018

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

DNS issue with the genTape disk servers

Follow up on gdss762 and fdsdss36
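
For illustration, a minimal sketch of the kind of forward/reverse lookup consistency check that can flag this sort of problem (the hostnames below are placeholders, not the actual genTape servers):

  #!/usr/bin/env python
  # Hypothetical DNS sanity check: verify that forward and reverse
  # lookups agree for each disk server. Hostnames are placeholders.
  import socket

  HOSTS = ["gdss700.gridpp.rl.ac.uk", "gdss701.gridpp.rl.ac.uk"]

  for host in HOSTS:
      try:
          addr = socket.gethostbyname(host)        # forward lookup
          name = socket.gethostbyaddr(addr)[0]     # reverse lookup
      except socket.error as exc:
          print("%s: lookup failed (%s)" % (host, exc))
          continue
      if name.lower() == host.lower():
          print("%s -> %s -> %s: OK" % (host, addr, name))
      else:
          print("%s -> %s -> %s: MISMATCH" % (host, addr, name))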

Operation news

A modified show_waitspace, which parses stagerd.log rather than the output of printmigrationstatus, is now running on all CASTOR stagers
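
For illustration, a minimal sketch of the log-scraping idea (the log path and the tapepool=/fileSize= line format here are assumptions; the real show_waitspace's parsing will differ):

  #!/usr/bin/env python
  # Sketch: total up data waiting for migration per tape pool by
  # scanning stagerd.log instead of calling printmigrationstatus.
  # The log path and field names below are assumptions.
  import re
  from collections import defaultdict

  PATTERN = re.compile(r'tapepool=(?P<pool>\S+).*?fileSize=(?P<size>\d+)')

  waiting = defaultdict(int)
  with open("/var/log/castor/stagerd.log") as log:
      for line in log:
          match = PATTERN.search(line)
          if match:
              waiting[match.group("pool")] += int(match.group("size"))

  for pool, total in sorted(waiting.items()):
      print("%-20s %10.1f GiB" % (pool, total / 2.0 ** 30))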

A ralreplicas version for repack is now available

New hardware is being ordered for the Facilities headnodes (virtualisation cluster) and disk servers

Plans for next week

GP: Fix-up of the Aquilon SRM profiles:
  1. Move the nscd feature to a sub-directory (task: https://phabricator.gridpp.rl.ac.uk/T2)
  2. Make castor/cron-jobs/srmbed-monitoring part of the castor/daemons/srmbed feature (task: https://phabricator.gridpp.rl.ac.uk/T3)

Long-term projects

Headnode migration to Aquilon - Stager configuration and testing complete. Aquilon profiles for lst and utility nodes compile OK. Replicate the current RAL setup as an intermediate step towards making 'Macro' headnodes.

HA-proxyfication of the CASTOR SRMs

Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.

Draining of 4 x 13 generation disk servers from Atlas that will be deployed on genTape

Draining of 10% of the 14 generation disk servers

Actions

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
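
As a quick preliminary to the full suite, the gfal2 Python bindings can be used to smoke-test the endpoint by hand; a minimal sketch (the SURL is a placeholder, and a valid X509 proxy in the environment is assumed):

  import gfal2

  # Placeholder SURL - substitute a real SRM endpoint and CASTOR path.
  SURL = "srm://srm-example.gridpp.rl.ac.uk/castor/example.ac.uk/test/file"

  ctx = gfal2.creat_context()   # picks up the X509 proxy from the environment
  info = ctx.stat(SURL)         # one SRM stat round-trip
  print("size=%d bytes, mode=%o" % (info.st_size, info.st_mode))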

Miguel to determine whether there is a need for a Nagios test that checks the number of entries in the newrequests table
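
If such a test is wanted, a minimal sketch of what a standard Nagios plugin for it could look like (the thresholds, credentials and DB alias are assumptions, not the real configuration):

  #!/usr/bin/env python
  # Sketch of a Nagios check on the stager DB newrequests backlog.
  # Exit codes follow the Nagios convention: 0 OK, 1 WARN, 2 CRIT, 3 UNKNOWN.
  import sys
  import cx_Oracle

  WARN, CRIT = 1000, 5000             # assumed thresholds
  DSN = "stager_db"                   # assumed TNS alias

  def main():
      try:
          conn = cx_Oracle.connect("nagios_ro", "changeme", DSN)
          cursor = conn.cursor()
          cursor.execute("SELECT COUNT(*) FROM newrequests")
          (count,) = cursor.fetchone()
      except cx_Oracle.DatabaseError as exc:
          print("UNKNOWN: database error: %s" % exc)
          sys.exit(3)
      if count >= CRIT:
          print("CRITICAL: %d entries in newrequests" % count)
          sys.exit(2)
      if count >= WARN:
          print("WARNING: %d entries in newrequests" % count)
          sys.exit(1)
      print("OK: %d entries in newrequests" % count)
      sys.exit(0)

  if __name__ == "__main__":
      main()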

RA to organise a meeting with the Fabric team to discuss outstanding issues with Data Services hardware

Staffing

GP on call

RA on call

GP out on Wed 14/2