RAL Tier1 weekly operations castor 26/01/2018
Revision as of 10:17, 29 January 2018

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

gdss762 (atlasStripInput) - running RAID verify.

gdss736 (lhcbDst) - Back in production

fdsdss34, 35, 36 - Down following reboot for patching. fdsdss34 and 35 are back in production; 36 is still down.

The ACSSS tape library control server (buxton1) failed and refused to reboot. This created a backlog of files in the canbemigr state, which was eventually processed once the server was back up and running.

Atlas Stager DB load problems continued. GC was not moving and the whole DISKCOPY table had to be recreated as it was fragmented (RT201623: https://helpdesk.gridpp.rl.ac.uk/Ticket/Display.html?id=201623)

Operation news

Patching for Meltdown/Spectre done for castor-functional-test1 and lcgcadm01 boxes

GC accelerated on 12 atlasStripInput disk servers.

check_tape_pools cron and "c stager canbemigr" Nagios test disabled on Atlas stager. Need to reconsider the use of printmigrationstatus as a production monitoring tool due to the DB load it induces (GP to raise a Support ticket)
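
One possible way to keep a canbemigr-style check without the per-poll DB hit would be to run the expensive query out of band and have Nagios read only a cached result. The sketch below is purely illustrative: the printmigrationstatus invocation, cache path and threshold are assumptions, not the production setup.

  #!/usr/bin/env python
  # Hedged sketch: cache the output of an expensive stager query so the Nagios
  # check only reads a file and never queries the stager DB on every poll.
  # The 'printmigrationstatus' invocation, paths and threshold are assumptions,
  # not the production configuration.
  import os
  import subprocess
  import sys
  import time

  CACHE = "/var/cache/castor/migration_status.txt"   # hypothetical cache file
  MAX_AGE = 2 * 3600                                  # accept results up to 2 hours old

  def refresh_cache():
      # Run from a slow cron cycle (e.g. every 2 hours), not from Nagios itself.
      out = subprocess.check_output(["printmigrationstatus"])  # assumed command name
      with open(CACHE, "w") as f:
          f.write(out.decode())

  def nagios_check():
      # Cheap check: inspects only the cached file, so no DB load per poll.
      if not os.path.exists(CACHE) or time.time() - os.path.getmtime(CACHE) > MAX_AGE:
          print("UNKNOWN: migration status cache is stale")
          sys.exit(3)
      backlog = sum(1 for line in open(CACHE) if "canbemigr" in line.lower())
      if backlog > 10000:                             # arbitrary example threshold
          print("CRITICAL: %d files waiting for migration" % backlog)
          sys.exit(2)
      print("OK: %d files waiting for migration" % backlog)
      sys.exit(0)

  if __name__ == "__main__":
      refresh_cache() if "--refresh" in sys.argv else nagios_check()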

Plans for next week

GP: Fix-up on Aquilon SRM profiles.

Long-term projects

CASTOR stress test improvement - Script writing, awaiting testing and validation
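
For illustration only, a minimal stress-test driver could look something like the sketch below, firing concurrent gfal-copy transfers at an SRM endpoint and reporting the success rate. The endpoint, source file and concurrency figures are placeholders; this is not the script under development.

  #!/usr/bin/env python
  # Illustrative stress-test driver: fire N concurrent gfal-copy transfers at an
  # SRM endpoint and report how many succeeded. The endpoint, source file and
  # concurrency figures are placeholders, not the script under development.
  import subprocess
  from concurrent.futures import ThreadPoolExecutor

  ENDPOINT = "srm://srm-example.gridpp.rl.ac.uk/castor/example.rl.ac.uk/test/stress"  # hypothetical
  SOURCE = "file:///tmp/1GB.testfile"   # pre-created local test file (assumed)
  TRANSFERS = 50
  CONCURRENCY = 10

  def one_transfer(i):
      dest = "%s/stress_%03d.dat" % (ENDPOINT, i)
      # gfal-copy -f overwrites any existing destination file.
      return subprocess.call(["gfal-copy", "-f", SOURCE, dest]) == 0

  with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
      results = list(pool.map(one_transfer, range(TRANSFERS)))

  print("%d/%d transfers succeeded" % (sum(results), TRANSFERS))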

Headnode migration to Aquilon - Stager configuration almost complete, awaiting testing.

Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.

Draining of the remainder of the 12 generation HW - waiting on CMS migration to Echo. No draining is currently ongoing.

Actions

RA to check if the disk server setting, changed to bring disk servers back more quickly after a CASTOR shutdown, is still in place

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
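
As a stop-gap before the full suite from the linked repository is wired up, a minimal smoke test with the gfal2 Python bindings could look like the sketch below; the SURL is a placeholder, not a production path.

  #!/usr/bin/env python
  # Minimal gfal2 smoke test against a CASTOR SRM path, as a stop-gap until the
  # full unit-test suite from the linked repository is run. The SURL below is a
  # placeholder, not a real production path.
  import gfal2

  SURL = "srm://srm-example.gridpp.rl.ac.uk/castor/example.rl.ac.uk/prod/test"  # hypothetical

  ctx = gfal2.creat_context()            # note: the binding really is named 'creat_context'
  st = ctx.stat(SURL)                    # basic metadata lookup via SRM
  print("mode=%o size=%d" % (st.st_mode, st.st_size))
  for entry in ctx.listdir(SURL):        # directory listing
      print(entry)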

GP/RA to chase up Fabric team about getting action on RT197296 (use fdscspr05 as the preprod stager - replacement for ccse08)

Staffing

RA back next week