RAL Tier1 weekly operations castor 19/01/2018
Draft agenda
1. Problems encountered this week
2. Upgrades/improvements made this week
3. What are we planning to do next week?
4. Long-term project updates (if not already covered)
   1. SL5 elimination from CIP and tape verification server
   2. CASTOR stress test improvement
   3. Generic CASTOR headnode setup
   4. Aquilonised headnodes
5. Special topics
6. Actions
7. Anything for CASTOR-Fabric?
8. AoTechnicalB
9. Availability for next week
10. On-Call
11. AoOtherB
Operation problems
gdss762 (atlasStripInput) - running RAID verify.
gdss717 (cmsTape) - broken, but holding no files in CANBEMIGR state (nothing awaiting migration to tape)
gdss776 (lhcbDst) - Memory check & RAID verify
fdsdss34, 35, 36 - Down following reboot for patching. Fabric team aware.
CMS still having trouble with remote xrootd access - two GGUS tickets.
ATLAS - large pileup in the DB: many queries (printmigrationstatus, printdiskcopy) are running very slowly (> 1 day), which is thought to be due to DB tables growing unreasonably large. The findFailedMigrations script also returns a great deal of output - is this related? Adjusting execution plans improved things only marginally. RA and MLF to look at this later.
Lots of disk servers going out of prod.
Buildup of hanging subrequests/migration jobs in the ATLAS Stager DB. Possibly the cause of printmigrationstatus running slowly (see the DB sketch after this list).
xrootd-cms (one of the CMS xrootd redirectors) hitting trouble.
Large queue built up in Gen transfermanager on Tuesday.
Meltdown/Spectre: Need to patch CASTOR
gdss745: Complete loss. Need to delete all the diskcopies and clean up.
gdss736: Out of prod
gdss728: Out of prod
gdss756: Read only mode
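The slow printmigrationstatus/printdiskcopy queries and the buildup of hanging subrequests both point at the ATLAS stager DB. Below is a minimal sketch of the kind of first-look check that could be run there, assuming an Oracle stager DB reachable through cx_Oracle and the usual SubRequest table with STATUS and CREATIONTIME columns; the DSN, account and output format are illustrative placeholders, not the production values, and this is not the actual fix being pursued.

```python
# Minimal sketch: count SubRequest rows per status in the ATLAS stager DB,
# to see how many requests are stuck in non-final states and how old they are.
# Assumes cx_Oracle is installed; DSN and credentials below are placeholders.
import cx_Oracle

DSN = "stager-db.example.ac.uk/ATLASSTG"    # placeholder, not the real TNS entry
USER, PASSWORD = "castor_read", "changeme"  # placeholder read-only account

def count_subrequests_by_status():
    with cx_Oracle.connect(USER, PASSWORD, DSN) as conn:
        cur = conn.cursor()
        # Old rows still sitting in non-final states are the "hanging" ones.
        cur.execute("""
            SELECT status, COUNT(*) AS n, MIN(creationtime) AS oldest
            FROM subrequest
            GROUP BY status
            ORDER BY n DESC
        """)
        for status, n, oldest in cur:
            print(f"status={status:>3}  count={n:>9}  oldest_creationtime={oldest}")

if __name__ == "__main__":
    count_subrequests_by_status()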
Operation news
Patching for Meltdown/Spectre done on both Tier 1 and Facilities (see the verification sketch after this list).
Still some cleanup to do for assorted admin boxes.
New CIP currently under construction, will be ready Real Soon Now.
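As a quick way of confirming that the Meltdown/Spectre patching has actually taken effect on a given box, patched kernels expose their mitigation status under /sys/devices/system/cpu/vulnerabilities/. A minimal sketch of such a check follows; in practice it would be pushed out through whatever fabric tooling is already in use, and running it per host is just an illustration.

```python
# Minimal sketch: report the kernel's Meltdown/Spectre mitigation status.
# Patched kernels expose one file per issue under this sysfs path; on an
# unpatched kernel the directory simply does not exist.
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def report_mitigations():
    if not os.path.isdir(VULN_DIR):
        print("No vulnerabilities directory - kernel predates the fixes, patching needed")
        return
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            print(f"{name:20s} {f.read().strip()}")

if __name__ == "__main__":
    report_mitigations()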
Plans for next week
RA: Skiing.
GP: Fix-up of Aquilon SRM profiles.
GP: Roll out Meltdown/Spectre patching to the remaining unpatched CASTOR machines.
Long-term projects
CASTOR stress test improvement - script writing in progress, awaiting testing and validation (see the sketch after this list).
Headnode migration to Aquilon - Stager configuration almost complete, awaiting testing.
Target: Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.
Draining of the remainder of the '12 generation HW - waiting on CMS migration to Echo; no draining is currently ongoing.
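The stress-test script mentioned above is still being written. As an illustration of the general shape such a script could take, here is a minimal sketch that fires a configurable number of concurrent gfal-copy transfers at a storage endpoint and counts failures; the endpoint URL, file size and concurrency are placeholder assumptions, and this is not the actual RAL script.

```python
# Minimal sketch of a CASTOR stress test: push N concurrent copies of a small
# local file to a storage endpoint with gfal-copy and report failures.
# ENDPOINT and the tunables below are placeholders, not production values.
import os
import subprocess
import tempfile
from concurrent.futures import ThreadPoolExecutor, as_completed

ENDPOINT = "root://castor.example.ac.uk//castor/example.ac.uk/test/stress"  # placeholder
CONCURRENCY = 10   # parallel transfers
TRANSFERS = 100    # total transfers to attempt

def make_test_file(size_mb=10):
    """Create a throwaway local file of roughly size_mb MB and return its path."""
    fd, path = tempfile.mkstemp(prefix="stress_")
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(size_mb * 1024 * 1024))
    return path

def copy_once(src, index):
    """Run a single gfal-copy to the endpoint; return (index, return code)."""
    dst = f"{ENDPOINT}/stressfile_{index}"
    proc = subprocess.run(["gfal-copy", "-f", f"file://{src}", dst],
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return index, proc.returncode

if __name__ == "__main__":
    src = make_test_file()
    failures = 0
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        futures = [pool.submit(copy_once, src, i) for i in range(TRANSFERS)]
        for fut in as_completed(futures):
            index, rc = fut.result()
            if rc != 0:
                failures += 1
                print(f"transfer {index} failed (rc={rc})")
    print(f"{failures}/{TRANSFERS} transfers failed")
    os.remove(src)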
Actions
RA to check if the disk server setting, changed to bring disk servers back more quickly after a CASTOR shutdown, is still in place
RA/BD: Run GFAL unit tests against CASTOR. Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/ (see the sketch after this list).
GP/RA to chase up Fabric team about getting action on RT197296 (use fdscspr05 as the preprod stager, a replacement for ccse08).
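The gfal2 unit tests live in the GitLab repository linked above and are built and run with the project's own test harness. Independently of those, a quick smoke check with the gfal2 Python bindings can confirm that the stack talks to a CASTOR endpoint at all; the sketch below assumes gfal2-python is installed, and the endpoint URL is a placeholder. It is not the gfal2 test suite itself.

```python
# Minimal smoke-test sketch using the gfal2 Python bindings against a CASTOR
# endpoint: stat a namespace path and list its contents. The URL is a
# placeholder assumption, not a real RAL path.
import sys
import gfal2

ENDPOINT = "root://castor.example.ac.uk//castor/example.ac.uk/grid/atlas"  # placeholder

def main():
    ctx = gfal2.creat_context()   # note: 'creat_context' is the library's spelling
    try:
        st = ctx.stat(ENDPOINT)
        print(f"stat OK: mode={oct(st.st_mode)} size={st.st_size}")
        entries = ctx.listdir(ENDPOINT)
        print(f"listdir OK: {len(entries)} entries")
    except gfal2.GError as err:
        print(f"gfal2 error: {err}")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())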
Staffing
GP On call
RA out next week