RAL Tier1 weekly operations castor 10/11/2017
Parent article: https://www.gridpp.ac.uk/wiki/RAL_Tier1_weekly_operations_castor
Draft agenda
1. Problems encountered this week
2. Upgrades/improvements made this week
3. What are we planning to do next week?
   1. Facilities Headnode Replacement
4. Long-term project updates (if not already covered)
   1. SL7 upgrade on tape servers
   2. SL5 elimination from CASTOR functional test boxes and tape verification server
   3. CASTOR stress test improvement
   4. Generic CASTOR headnode setup
5. Special topics
6. Actions
7. Anything for CASTOR-Fabric?
8. AoTechnicalB
9. Availability for next week
10. On-Call
11. AoOtherB
Operation problems
Possible CMS problem on Sunday - no ticket found.
Tape usage plots for the Facilities instance of CASTOR do not match those for Tier1 RT196714 - IN PROGRESS, SEE TICKET
SOLID cannot write into CASTOR - ONGOING
Operation news
All Tier 1 tape servers upgraded to SL7/Aquilon
Facilities headnodes replaced with 'new' hardware and patched to the latest kernel/errata. The new headnodes did not get the correct kernel, so a short-notice reboot was needed on Thursday afternoon to fix this, under cover of another Diamond problem.
Plans for next week
Upgrade LHCb CASTOR SRMs to 2.1.16-18
Patch CASTOR to the November kernel & errata.
Long-term projects
CASTOR stress test improvement
Configure multiple instances of a CASTOR generic headnode
Draining of the remainder of the 12-generation HW is waiting on the CMS migration to Echo
Combined headnodes running on SL7/Aquilon - implement CERN-style 'Macro' headnodes.
Special topics
Patching - consider turning the repeatedly re-used patching plan into a standard procedure.
Actions
RA to check if the disk server setting, changed to bring disk servers back more quickly after a CASTOR shutdown, is still in place
GP to talk to AL about service ownership/handover of the xroot manager boxes
RA/GP: Run the gfal2 unit tests against CASTOR (a minimal sketch follows this list). Get them here: https://gitlab.cern.ch/dmc/gfal2/tree/develop/test/
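The full test suite lives in the repository linked above; as a rough sketch of the kind of check involved, a write/stat/cleanup smoke test using the gfal2 Python bindings might look like the following. The SRM endpoint and path here are hypothetical placeholders, and a valid grid proxy plus the gfal2 Python bindings are assumed.

 # Hedged sketch: gfal2 write/stat/cleanup smoke test against a CASTOR SRM
 # endpoint. Endpoint and path are hypothetical placeholders; assumes a
 # valid grid proxy and the gfal2 Python bindings are installed.
 import gfal2
 
 SURL = ('srm://srm-example.gridpp.rl.ac.uk/'
         'castor/example.rl.ac.uk/test/gfal2-smoke-test.txt')
 
 ctx = gfal2.creat_context()  # 'creat' is the actual API spelling
 
 # Stage a small local file, copy it in, stat it back, then remove it.
 with open('/tmp/gfal2-smoke-test.txt', 'w') as f:
     f.write('gfal2 smoke test\n')
 
 ctx.filecopy('file:///tmp/gfal2-smoke-test.txt', SURL)
 info = ctx.stat(SURL)
 print('size on CASTOR: %d bytes' % info.st_size)
 ctx.unlink(SURL)

A test along these lines would also give a quick reproducer for the SOLID write failure noted under Operation problems.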
Staffing
On call: TBD
GP out on Fri 17/11