RAL Tier1 weekly operations castor 09/02/2018

Revision as of 11:00, 9 February 2018

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

Follow up on gdss762 and fdsdss36

gdss762 (atlasStripInput) crashed again

gdss761 (lhcbDst) crashed and was removed from production

ALICE used more of the batch farm than they are allowed to, which put a large load on the aliceDisk disk servers

Operation news

New hardware is being ordered for Facilities (headnodes and disk servers)

New check_tape_pools version running on the Atlas Stager

Disk space rebalancing on lhcbDst

Plans for next week

Run new check_tape_pools version on all Castor stagers

GP: Fix-up on Aquilon SRM profiles.

Long-term projects

Headnode migration to Aquilon - Stager configuration and testing complete. Ongoing progress with the LSF and utility boxes

HA-proxyfication of the CASTOR SRMs (see the health-probe sketch at the end of this section)

Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.

Draining of the remainder of the 12-generation HW - waiting on the CMS migration to Echo; no draining currently ongoing.

Draining of 4 x 13-generation disk servers from Atlas that will be deployed on genTape

Draining of 10% of the 14-generation disk servers
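
As background to the HA-proxyfication item above, the sketch below shows the sort of simple TCP health probe a proxy layer in front of the SRMs would rely on to drop an unresponsive backend. The hostnames are placeholders and port 8443 is an assumption to be confirmed; this is illustrative, not the agreed design.

 #!/usr/bin/env python
 # Rough sketch of a per-backend SRM health probe, of the kind an HAProxy
 # layer in front of the CASTOR SRMs would depend on. Hostnames are
 # placeholders; 8443 is assumed to be the SRM port.
 import socket
 import sys

 SRM_BACKENDS = ["srm-castor-01.example.rl.ac.uk",
                 "srm-castor-02.example.rl.ac.uk"]   # placeholder hostnames
 SRM_PORT = 8443
 TIMEOUT = 5  # seconds

 def probe(host, port):
     """Return True if a TCP connection to host:port succeeds within TIMEOUT."""
     try:
         sock = socket.create_connection((host, port), TIMEOUT)
         sock.close()
         return True
     except (socket.error, socket.timeout):
         return False

 def main():
     down = [h for h in SRM_BACKENDS if not probe(h, SRM_PORT)]
     for host in SRM_BACKENDS:
         state = "DOWN" if host in down else "UP"
         print("%s:%d %s" % (host, SRM_PORT, state))
     return 1 if down else 0

 if __name__ == "__main__":
     sys.exit(main())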

Actions

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/ (a minimal smoke-test sketch is included at the end of this section)

GP/RA to chase up the Fabric team about getting action on RT197296 (use fdscspr05 as the preprod stager, a replacement for ccse08)

Miguel to determine if there is a need for a Nagios test that checks the number of entries in the newrequests table (see the sketch at the end of this section)

GP to generate a ralreplicas version for repack

RA to organise a meeting with the Fabric team to discuss outstanding issues with Data Services hardware
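
For the GFAL action above, the linked repository's own unit tests remain the authoritative suite; the sketch below is only a minimal gfal2-python smoke test of the copy/stat/delete path, with the SRM endpoint and paths as placeholders rather than real RAL URLs.

 #!/usr/bin/env python
 # Minimal gfal2 smoke test against a CASTOR SRM endpoint (sketch only;
 # the endpoint and paths below are placeholders, not real RAL URLs).
 import sys
 import gfal2

 SRC = "file:///tmp/gfal-smoke-test.dat"                      # local test file
 DST = "srm://srm-example.gridpp.rl.ac.uk/castor/test/smoke"  # placeholder endpoint

 def main():
     ctx = gfal2.creat_context()           # gfal2 context (note: 'creat', not 'create')
     params = ctx.transfer_parameters()    # per-transfer options
     params.overwrite = True               # allow re-running the test
     params.timeout = 300                  # seconds

     ctx.filecopy(params, SRC, DST)        # copy the file in
     info = ctx.stat(DST)                  # stat the remote copy
     print("remote size: %d bytes" % info.st_size)
     ctx.unlink(DST)                       # clean up the test file
     return 0

 if __name__ == "__main__":
     sys.exit(main())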
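
On the newrequests question, the sketch below shows one possible shape for such a Nagios check, assuming the stager database is Oracle and is reachable with cx_Oracle; the DSN, credentials and thresholds are placeholders, not an agreed implementation.

 #!/usr/bin/env python
 # Possible shape of a Nagios check counting rows in the stager's newrequests
 # table. DSN, credentials and thresholds are placeholders (assumptions).
 import sys
 import cx_Oracle

 DSN = "stager-db.example:1521/STAGERDB"   # placeholder connection string
 WARN, CRIT = 1000, 5000                   # example thresholds, to be agreed

 def main():
     conn = cx_Oracle.connect("nagios_ro", "secret", DSN)  # read-only account assumed
     cur = conn.cursor()
     cur.execute("SELECT COUNT(*) FROM newrequests")
     (count,) = cur.fetchone()
     conn.close()

     # Standard Nagios exit codes: 0 OK, 1 WARNING, 2 CRITICAL
     if count >= CRIT:
         print("CRITICAL - %d entries in newrequests" % count)
         return 2
     if count >= WARN:
         print("WARNING - %d entries in newrequests" % count)
         return 1
     print("OK - %d entries in newrequests" % count)
     return 0

 if __name__ == "__main__":
     sys.exit(main())

The main thing the investigation would need to pin down is the warning/critical thresholds and which stager databases the check should run against.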

Staffing

RA on call

GP out on Wed 14/2