RAL Tier1 weekly operations castor 02/02/2018

Latest revision as of 11:12, 2 February 2018

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

gdss762 (atlasStripInput) crashed again

gdss761 (lhcbDst) crashed and removed from prod

Alice used more of the batch farm than they are allowed to, which put a large load on the aliceDisk disk servers

Operation news

New check_tape_pools version running on Atlas Stager

Disk space rebalancing on lhcbDst

Plans for next week

Run new check_tape_pools version on all Castor stagers

GP: Fix-up on Aquilon SRM profiles.

Long-term projects

CASTOR stress test improvement - Script writing in progress; awaiting testing and validation

Headnode migration to Aquilon - Stager configuration and testing complete.

Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.

Draining of remainder of 12 generation HW - waiting on CMS migration to Echo. No more draining ongoing.

Draining of 4 x 13 generation disk servers from Atlas that will be deployed on genTape

Actions

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
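
Until the full gfal2 test suite from that repository is wired up, a minimal smoke test in the same spirit can be run with the gfal2 Python bindings. The sketch below is only illustrative: the SRM endpoint and remote path are hypothetical placeholders, not the real RAL values.

 #!/usr/bin/env python
 # Minimal gfal2 smoke test against a CASTOR SRM endpoint (sketch only).
 # The endpoint and remote path below are hypothetical placeholders.
 import sys
 import gfal2

 SRC = "file:///tmp/gfal-smoke-test.dat"
 DST = "srm://srm-example.gridpp.rl.ac.uk/castor/example.rl.ac.uk/test/gfal-smoke-test.dat"

 def main():
     # Create a small local source file to copy.
     with open("/tmp/gfal-smoke-test.dat", "w") as f:
         f.write("gfal2 smoke test\n")

     ctx = gfal2.creat_context()
     params = ctx.transfer_parameters()
     params.overwrite = True

     ctx.filecopy(params, SRC, DST)   # upload to CASTOR via SRM
     info = ctx.stat(DST)             # stat the remote copy
     print("remote size: %d bytes" % info.st_size)
     ctx.unlink(DST)                  # clean up the test file
     return 0

 if __name__ == "__main__":
     sys.exit(main())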

GP/RA to chase up Fabric team about getting action on RT197296 (use fdscspr05 as the preprod stager - replacement for ccse08)

Miguel to determine if there is a need to have a Nagios test for checking the number of entries in the newrequests table
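
If the test turns out to be wanted, one possible shape is a standard Nagios-style plugin that counts rows in the table and maps thresholds onto exit codes. The sketch below is an assumption-laden example: the credentials, DSN and thresholds are placeholders, and the real stager DB details would come from the stager configuration.

 #!/usr/bin/env python
 # Sketch of a Nagios-style check on the number of rows in the stager
 # newrequests table. Credentials, DSN and thresholds are placeholders.
 import sys
 import cx_Oracle

 WARN = 10000   # assumed warning threshold (rows)
 CRIT = 50000   # assumed critical threshold (rows)

 def main():
     conn = cx_Oracle.connect("stager_ro", "secret", "stagerdb.example.rl.ac.uk/STAGER")
     cur = conn.cursor()
     cur.execute("SELECT COUNT(*) FROM newrequests")
     (count,) = cur.fetchone()
     conn.close()

     if count >= CRIT:
         print("CRITICAL - %d entries in newrequests" % count)
         return 2
     if count >= WARN:
         print("WARNING - %d entries in newrequests" % count)
         return 1
     print("OK - %d entries in newrequests" % count)
     return 0

 if __name__ == "__main__":
     sys.exit(main())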

RA and GP to sit down and evaluate the gfal-copy stress test
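
As a starting point for that evaluation, a rough driver along the lines below (repeated gfal-copy transfers with per-copy timing) could be useful. The endpoint, file size and iteration count are placeholder assumptions, not the parameters of the existing stress test.

 #!/usr/bin/env python
 # Rough gfal-copy stress driver: repeat a transfer and record timings.
 # Endpoint, file size and iteration count are placeholder assumptions.
 import os
 import subprocess
 import time

 N_COPIES = 20
 SRC = "/tmp/gfal-stress-100MB.dat"
 DST_BASE = "srm://srm-example.gridpp.rl.ac.uk/castor/example.rl.ac.uk/test/stress"

 def main():
     # Create a 100 MB source file once.
     with open(SRC, "wb") as f:
         f.write(os.urandom(100 * 1024 * 1024))

     timings = []
     for i in range(N_COPIES):
         dst = "%s/stress-%03d.dat" % (DST_BASE, i)
         start = time.time()
         rc = subprocess.call(["gfal-copy", "-f", "file://" + SRC, dst])
         timings.append((rc, time.time() - start))
         print("copy %03d: rc=%d, %.1f s" % (i, rc, timings[-1][1]))

     ok = [t for (rc, t) in timings if rc == 0]
     if ok:
         print("successful copies: %d/%d, mean %.1f s" % (len(ok), N_COPIES, sum(ok) / len(ok)))

 if __name__ == "__main__":
     main()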

GP to generate a ralreplicas version for repack

RA to organise a meeting with the Production team to discuss the CASTOR callouts spreadsheet

RA to organise a meeting with the Fabric team to discuss outstanding issues with Data Services hardware

Staffing

GP on call