Difference between revisions of "Tier1 Operations Report 2015-11-25"
From GridPP Wiki
Revision as of 09:48, 25 November 2015
RAL Tier1 Operations Report for 25th November 2015
Review of Issues during the week 18th to 25th November 2015.
- None
Resolved Disk Server Issues
- GDSS707 (AtlasDataDisk - D1T0) was returned to service yesterday afternoon (24th Nov). It had been out of production since Friday 16th Oct. The server was drained, the CPU swapped, and the server put through a week-long re-acceptance test before being returned to service.
- GDSS720 (AtlasDataDisk - D1T0) was also returned to service yesterday afternoon (24th Nov). It had failed on 9th Nov and was likewise drained, had its CPU swapped, and passed a week-long re-acceptance test before being returned to service.
- GDSS678 (CMSTape - D0T1) failed on Saturday evening (21st Nov) and was returned to production yesterday afternoon (24th Nov). A disk was replaced and the rebuild completed before the server was returned to service. Two files were declared lost to CMS; both were being written at the time the server failed.
Current operational status and issues
- LHCb see a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted against storage at other sites.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. We are likewise working to understand a remaining low level of packet loss seen within part of the Tier1 network.
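As an illustration of the kind of check used when tracking low-level packet loss (a hypothetical sketch, not the actual monitoring in use at RAL), a short Python helper can parse the summary line that `ping` prints after a run:

```python
import re

def packet_loss_percent(ping_output: str) -> float:
    """Extract the packet-loss percentage from the summary line of `ping` output.

    Handles the common Linux iputils format, e.g.
    "100 packets transmitted, 99 received, 1% packet loss, time 99123ms".
    """
    match = re.search(r"([\d.]+)% packet loss", ping_output)
    if match is None:
        raise ValueError("no packet-loss summary found in ping output")
    return float(match.group(1))

# Example with captured output from something like `ping -c 100 <remote-host>`:
sample = "100 packets transmitted, 99 received, 1% packet loss, time 99123ms"
print(packet_loss_percent(sample))  # → 1.0
```

Repeating such a probe over time against hosts on either side of a suspect link is one simple way to localise where an intermittent loss is occurring.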
Ongoing Disk Server Issues
None
Notable Changes made since the last meeting.
- None.
Declared in the GOC DB
- None
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.
- Some detailed internal network re-configurations to enable the removal of the old 'core' switch from our network. This includes changing the way the UKLight router connects into the Tier1 network.
- Implementing a changed algorithm for the draining of worker nodes to make space for multi-core jobs. The new version allows "pre-emptable" jobs to run in the job slots until they are needed.
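The backfill idea behind the new draining algorithm can be sketched as follows (a minimal illustration with hypothetical names; the production batch-system logic is more involved): while a node drains to accumulate slots for a waiting multi-core job, pre-emptable jobs may occupy the otherwise-idle slots and are evicted once enough slots are free.

```python
def backfill_decision(free_slots: int, slots_needed: int, job_is_preemptable: bool) -> str:
    """Decide what to do with a candidate job on a draining worker node.

    While draining, ordinary jobs are refused so that slots accumulate for
    the waiting multi-core job; pre-emptable jobs may use the idle slots in
    the meantime and are evicted when the multi-core job is ready to start.
    """
    if free_slots >= slots_needed:
        return "start-multicore"   # enough slots drained: run the big job
    if job_is_preemptable:
        return "run-preemptable"   # soak up otherwise-idle slots meanwhile
    return "refuse"                # a normal job would stall the drain

print(backfill_decision(8, 8, False))  # → start-multicore
print(backfill_decision(3, 8, True))   # → run-preemptable
print(backfill_decision(3, 8, False))  # → refuse
```

The benefit over plain draining is that the slots freed so far do useful work instead of sitting idle until the full multi-core allocation is available.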
Listing by category:
- Databases:
- Switch LFC/3D to new Database Infrastructure.
- Castor:
- Update SRMs to new version (includes updating to SL6).
- Update disk servers to SL6 (ongoing).
- Update to Castor version 2.1.15.
- Networking:
- Complete changes needed to remove the old core switch from the Tier1 network.
- Make routing changes to allow the removal of the UKLight Router.
- Fabric:
- Firmware updates on remaining EMC disk arrays (Castor, LFC).
Entries in GOC DB starting since the last report.
None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
117846 | Green | Urgent | In Progress | 2015-11-23 | 2015-11-24 | Atlas | ATLAS request- storage consistency checks |
117683 | Green | Less Urgent | In Progress | 2015-11-18 | 2015-11-19 | | CASTOR at RAL not publishing GLUE 2 |
116866 | Yellow | Less Urgent | In Progress | 2015-10-12 | 2015-10-19 | SNO+ | snoplus support at RAL-LCG2 (pilot role) |
116864 | Yellow | Urgent | In Progress | 2015-10-12 | 2015-11-12 | CMS | T1_UK_RAL AAA opening and reading test failing again... |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
18/11/15 | 100 | 100 | 100 | 100 | 96 | 100 | 100 | Single SRM test failure on listing "[SRM_INVALID_PATH] No such file or directory" |
19/11/15 | 100 | 100 | 100 | 100 | 100 | 100 | N/A | |
20/11/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
21/11/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
22/11/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
23/11/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
24/11/15 | 100 | 100 | 100 | 96 | 100 | 97 | 100 | Single SRM Test failure on PUT. Monitoring problem - couldn't find file to copy in. |
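The per-day figures above reflect the fraction of functional tests that passed. As illustrative arithmetic only (the actual test cadence is an assumption here), a single failed test out of roughly 24 hourly tests in a day yields the ~96% seen on 18/11 and 24/11:

```python
def availability(passed: int, total: int) -> int:
    """Daily availability as a whole-number percentage of passed tests."""
    return round(100 * passed / total)

print(availability(23, 24))  # one failed test in a day of hourly tests → 96
print(availability(24, 24))  # all tests passed → 100
```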