RAL Tier1 weekly operations castor 26/01/2018
Latest revision as of 10:26, 29 January 2018

Draft agenda

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

Operation problems

gdss762 (atlasStripInput) - running RAID verify.

gdss736 (lhcbDst) - Back in production

fdsdss34, 35, 36 - Down following reboot for patching. fdsdss34 and 35 back in prod, 36 still down.

The ACSSS tape library control server (buxton1) failed and refused to reboot. This created a backlog of canbemigrs, which was eventually processed once the server was back up and running.

Atlas Stager DB load problems continued. GC was not moving and the whole DISKCOPY table had to be recreated as it was fragmented (RT201623).
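One standard Oracle approach to recreating a fragmented table is ALTER TABLE ... MOVE followed by rebuilding the indexes the move invalidates. The sketch below (Python/cx_Oracle) only illustrates that idea; the account, DSN and the choice of driving it from a script are assumptions, not a record of what was actually done under RT201623.

  # Hedged sketch: defragment the DISKCOPY table and rebuild its indexes.
  # Account name and DSN are placeholders, not production values.
  import cx_Oracle

  conn = cx_Oracle.connect("stager_admin", "CHANGEME", "stagerdb")  # placeholder credentials/DSN
  cur = conn.cursor()

  # ALTER TABLE ... MOVE rewrites the table into fresh extents, which removes
  # the fragmentation but marks the table's indexes UNUSABLE.
  cur.execute("ALTER TABLE DISKCOPY MOVE")

  # Rebuild whatever the move invalidated.
  cur.execute("SELECT index_name FROM user_indexes "
              "WHERE table_name = 'DISKCOPY' AND status = 'UNUSABLE'")
  for (index_name,) in cur.fetchall():
      cur.execute("ALTER INDEX {0} REBUILD".format(index_name))

  conn.close()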

Operation news

Patching for Meltdown/Spectre done for castor-functional-test1 and lcgcadm01 boxes

GC accelerated on 12 atlasStripInput disk servers.

check_tape_pools cron and "c stager canbemigr" Nagios test disabled on the Atlas stager. Need to reconsider the use of printmigrationstatus as a production monitoring tool due to the DB load it induces (GP to raise a Support ticket).
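If a lighter-weight replacement check is wanted, one option would be a single bounded count against the stager DB rather than the full printmigrationstatus output. The sketch below is an illustration only: the read-only account, the thresholds, the DISKCOPY table/column names and the CANBEMIGR status code are all assumptions that would need checking against the real schema.

  # Hedged sketch of a cheap Nagios-style check for the migration backlog.
  # Schema details (table/column names, status code) and thresholds are assumptions.
  import sys
  import cx_Oracle

  WARN, CRIT = 50000, 200000          # assumed backlog thresholds
  CANBEMIGR_STATUS = 10               # assumed DiskCopy status code for CANBEMIGR

  conn = cx_Oracle.connect("nagios_ro", "CHANGEME", "stagerdb")   # placeholder account/DSN
  cur = conn.cursor()
  cur.execute("SELECT COUNT(*) FROM DISKCOPY WHERE status = :s", s=CANBEMIGR_STATUS)
  backlog = cur.fetchone()[0]
  conn.close()

  if backlog >= CRIT:
      print("CRITICAL: {0} diskcopies waiting for migration".format(backlog))
      sys.exit(2)
  elif backlog >= WARN:
      print("WARNING: {0} diskcopies waiting for migration".format(backlog))
      sys.exit(1)
  print("OK: {0} diskcopies waiting for migration".format(backlog))
  sys.exit(0)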

Plans for next week

GP: Fix-up on Aquilon SRM profiles.

Long-term projects

CASTOR stress test improvement - Script writing, awaiting testing and validation
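For context, the core of such a stress test can be quite small: fire a configurable number of concurrent transfers at the endpoint through the gfal2 Python bindings and record the timings. The sketch below is illustrative only - the endpoint URL, source file and concurrency level are placeholders, not the script actually being written.

  # Hedged sketch: drive N concurrent writes at a CASTOR endpoint and time them.
  # Endpoint, source file and concurrency level are placeholders; no error handling.
  import time
  import threading
  import gfal2

  ENDPOINT = "srm://srm-example.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/test/stress"  # placeholder
  SOURCE = "file:///tmp/stress-1MiB.dat"   # pre-created local test file
  TRANSFERS = 20                           # number of concurrent copies
  durations = [None] * TRANSFERS

  def one_copy(i):
      ctx = gfal2.creat_context()          # one context per worker thread
      start = time.time()
      ctx.filecopy(SOURCE, "{0}/stress-{1}.dat".format(ENDPOINT, i))
      durations[i] = time.time() - start

  threads = [threading.Thread(target=one_copy, args=(i,)) for i in range(TRANSFERS)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()

  print("{0} copies, slowest {1:.1f}s, mean {2:.1f}s".format(
      len(durations), max(durations), sum(durations) / len(durations)))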

Headnode migration to Aquilon - Stager configuration almost complete, awaiting testing.

Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.

Draining of the remainder of the 12 generation HW - waiting on CMS migration to Echo. No draining currently ongoing.

Actions

RA to check if the disk server setting, changed to bring disk servers back more quickly after a CASTOR shutdown, is still in place

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
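Until those tests are wired in, a few calls through the gfal2 Python bindings make a reasonable smoke test against the endpoint. This is only a sketch (it is not the gfal2 unit test suite), and the SURL below is a placeholder rather than a real production path.

  # Hedged sketch: minimal gfal2 smoke test - upload, stat, checksum, delete.
  # The SURL is a placeholder, not a real CASTOR path.
  import gfal2

  SURL = "srm://srm-example.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/test/smoke.dat"  # placeholder

  ctx = gfal2.creat_context()
  ctx.filecopy("file:///tmp/smoke.dat", SURL)      # upload a small local file
  info = ctx.stat(SURL)                            # confirm it landed
  print("size on CASTOR: {0}".format(info.st_size))
  print("adler32: {0}".format(ctx.checksum(SURL, "adler32")))
  ctx.unlink(SURL)                                 # clean up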

GP/RA to chase up Fabric team about getting action on RT197296 (use fdscspr05 as the preprod stager - replacement for ccse08).

Staffing

RA back next week