RAL Tier1 weekly operations castor 12/01/2018


== Draft agenda ==

1. Problems encountered this week

2. Upgrades/improvements made this week

3. What are we planning to do next week?

4. Long-term project updates (if not already covered)

  1. SL5 elimination from CIP and tape verification server
  2. CASTOR stress test improvement
  3. Generic CASTOR headnode setup
  4. Aquilonised headnodes

5. Special topics

6. Actions

7. Anything for CASTOR-Fabric?

8. AoTechnicalB

9. Availability for next week

10. On-Call

11. AoOtherB

== Operation problems ==

Lots of disk servers going out of prod.

Buildup of hanging subrequests/migrationjobs in the ATLAS Stager DB. Possibly the cause of printmigrationstatus running slowly.
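For reference, a minimal sketch of the sort of check that would quantify the buildup, assuming the usual CASTOR stager schema names (SUBREQUEST, MIGRATIONJOB, with CREATIONTIME in epoch seconds) - the DSN/credentials are placeholders and all names should be verified against the deployed schema before use:

<pre>
# Sketch: count long-lived SubRequest/MigrationJob rows in the stager DB.
# Table and column names are assumed from the standard CASTOR stager schema;
# verify against the schema version actually deployed.
import time
import cx_Oracle  # Oracle client bindings for Python

CUTOFF_HOURS = 24  # rows older than this are suspicious

def count_stale(cursor, table, cutoff):
    # Table names come from a fixed tuple below, so formatting is safe here.
    cursor.execute(
        "SELECT count(*) FROM {0} WHERE creationtime < :cutoff".format(table),
        cutoff=cutoff)
    return cursor.fetchone()[0]

def main():
    # Placeholder credentials/DSN for the ATLAS stager DB.
    conn = cx_Oracle.connect("stager_ro", "SECRET", "atlas-stager-db")
    cur = conn.cursor()
    cutoff = time.time() - CUTOFF_HOURS * 3600
    for table in ("SUBREQUEST", "MIGRATIONJOB"):
        print("{0}: {1} rows older than {2}h".format(
            table, count_stale(cur, table, cutoff), CUTOFF_HOURS))
    conn.close()

if __name__ == "__main__":
    main()
</pre>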

xrootd-cms (one of the CMS xrootd redirectors) hitting trouble.
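A quick liveness probe of the redirector, sketched with the official XRootD Python bindings; the hostname/port below are placeholders:

<pre>
# Sketch: ping an xrootd redirector and report whether it answers.
from XRootD import client

def probe(url):
    fs = client.FileSystem(url)
    status, _ = fs.ping(timeout=10)
    if status.ok:
        print("{0}: OK".format(url))
    else:
        print("{0}: FAILED - {1}".format(url, status.message))

if __name__ == "__main__":
    probe("root://xrootd-cms.example.ac.uk:1094")  # placeholder host
</pre>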

Large queue built up in Gen transfermanager on Tuesday.

Meltdown/Spectre: Need to patch CASTOR

gdss745: Complete loss. Need to delete all the diskcopies and clean up (a query to enumerate them is sketched after this list).

gdss736: Out of prod

gdss728: Out of prod

gdss756: Read only mode
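For the gdss745 cleanup, a sketch of how the affected diskcopies could be enumerated before anything is deleted. The joins follow the usual CASTOR stager schema (DISKCOPY -> FILESYSTEM -> DISKSERVER) and the DSN/credentials are placeholders; verify both before running:

<pre>
# Sketch: list diskcopies that lived on a lost disk server.
import cx_Oracle

QUERY = """
SELECT dc.id, dc.castorfile
  FROM diskcopy dc
  JOIN filesystem fs ON dc.filesystem = fs.id
  JOIN diskserver ds ON fs.diskserver = ds.id
 WHERE ds.name = :server
"""

def list_diskcopies(server):
    # Placeholder credentials/DSN for the stager DB.
    conn = cx_Oracle.connect("stager_ro", "SECRET", "atlas-stager-db")
    cur = conn.cursor()
    cur.execute(QUERY, server=server)
    rows = cur.fetchall()
    conn.close()
    return rows

if __name__ == "__main__":
    # Illustrative FQDN for the lost server.
    rows = list_diskcopies("gdss745.gridpp.rl.ac.uk")
    print("{0} diskcopies to clean up".format(len(rows)))
</pre>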

----


Quiet Christmas aside from disk server issues.

Major failure on gdss745 (atlasStripInput), possible loss of the whole server.

gdss756 out of production, expected back in production shortly.

LHCb DB thread count increase worked (?); on Monday, plan to reduce back to the original setting (maybe in 2-3 steps).


== Operation news ==

GP back next week!

New CIP currently under construction, will be ready Real Soon Now.


== Plans for next week ==

Work on cattle headnodes.

Patching for Meltdown/Spectre.
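A patched kernel reports its Meltdown/Spectre status under /sys, so checking nodes before and after the patching campaign can be as simple as the sketch below (on an unpatched kernel the directory simply does not exist):

<pre>
# Sketch: print the kernel's view of the Meltdown/Spectre mitigations.
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def report():
    if not os.path.isdir(VULN_DIR):
        print("no vulnerabilities directory: kernel predates the fixes")
        return
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            print("{0}: {1}".format(name, f.read().strip()))

if __name__ == "__main__":
    report()
</pre>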

== Long-term projects ==

CASTOR stress test improvement - script writing in progress, awaiting testing and validation.
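To illustrate the flavour of script being written (not the actual test), a minimal concurrent-stat load generator using the gfal2 Python bindings; endpoint, path and counts are placeholders:

<pre>
# Sketch: hammer an endpoint with concurrent stat() calls via gfal2.
import threading
import gfal2

ENDPOINT = "root://castor.example.ac.uk/castor/example/some/file"  # placeholder
THREADS = 20
REQUESTS_PER_THREAD = 50

def worker(results, idx):
    ctx = gfal2.creat_context()  # one gfal2 context per thread
    ok = 0
    for _ in range(REQUESTS_PER_THREAD):
        try:
            ctx.stat(ENDPOINT)
            ok += 1
        except gfal2.GError:
            pass  # count only successes; errors are the interesting signal
    results[idx] = ok

if __name__ == "__main__":
    results = [0] * THREADS
    threads = [threading.Thread(target=worker, args=(results, i))
               for i in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("{0}/{1} stats succeeded".format(
        sum(results), THREADS * REQUESTS_PER_THREAD))
</pre>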

Headnode migration to Aquilon - stager configuration almost complete; functional tests pass. Work so far covers Aquilon features but not yet personalities.

Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.

Draining of the remainder of the '12-generation HW - waiting on the CMS migration to Echo. No draining currently ongoing.

== Actions ==

RA to check if the disk server setting, changed to bring disk servers back more quickly after a CASTOR shutdown, is still in place

RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
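Until the full suite is wired up, a single gfal2 copy in the same style can serve as a quick smoke test against CASTOR; both URLs below are placeholders:

<pre>
# Sketch: one gfal2 copy as a CASTOR smoke test.
import gfal2

def copy(src, dst):
    ctx = gfal2.creat_context()
    params = ctx.transfer_parameters()
    params.overwrite = True   # clobber any leftover test file
    params.timeout = 300      # seconds
    ctx.filecopy(params, src, dst)

if __name__ == "__main__":
    copy("file:///etc/hostname",
         "srm://srm.example.ac.uk/castor/example/test/smoke")  # placeholders
    print("copy OK")
</pre>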

GP: Turn CASTOR SRMs & other Aquilonised nodes from clients to servers

GP/RA to chase up the Fabric team about getting action on RT197296 (use fdscspr05 as the preprod stager - replacement for ccse08)

== Staffing ==

GP out until 15th January

RA on call until GP takes over next week.