RAL Tier1 Operations Report for 6th January 2016
Review of Issues during the fortnight 23rd December 2015 to 6th January 2016.
- On Thursday 10th December there was a significant problem on the Tier1 network. A packet storm was followed by the Tier1 network being disconnected from the site network. The trigger appears to have been the restarting of a particular switch. The details of this are not yet understood.
- There was a problem with the recall from tape of a large number of files for LHCb over the weekend of 11/12/13 Dec. This was caused by poor performance of at least one of the disk servers in the disk cache, combined with a parameter introduced in the Castor 2.1.15 tape servers that delayed the reporting of when files had been read from tape.
Resolved Disk Server Issues
- GDSS665 (AtlasTape) failed on the 21st Dec. It was rebooted and all canbemigr files were migrated to tape. There was a bad disk that was replaced. Returned to production on Christmas Day!
- GDSS620 (GenTape - D0T1) failed on the 22nd Dec. It was rebooted and all canbemigr files were migrated to tape. Following testing it was put back into service on the 24th December. (Note that it has since failed again; see Ongoing Disk Server Issues below.)
- GDSS656 (lhcbRawRdst - D0T1) had a double disk failure on the 23rd Dec. It was removed from service while the disks were replaced and the RAID rebuilt. It was returned to service on the 24th December.
- GDSS675 (CMSTape - D0T1) failed on the afternoon of the 23rd December. A faulty drive was found. It was also returned to production on Christmas Day.
- GDSS770 (AtlasDataDisk - D1T0) crashed on the 31st December. Following checks it was returned to service later that day.
- GDSS710 (CMSDisk - D1T0) failed on the 2nd January. Following checks it was returned to service the following day.
- GDSS648 (LHCb-User - D1T0) was taken out of service on the 4th January as two disks had failed. It was returned to production the following day.
- GDSS707 (AtlasDataDisk - D1T0) was taken out of service on the 4th January as one of the Castor processes was failing to run. Following investigations it was returned to production in read-only mode the next day.
Current operational status and issues
- There is a problem seen by LHCb of a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these failed writes are then attempted to storage at other sites. A recent modification has improved, but not completely fixed, this.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network (a minimal measurement sketch follows this list).
- There is a problem reported by LHCb of a high rate of batch job failures since around the 9th December. The cause is not yet known.
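As context, a minimal sketch of how such intermittent packet loss can be quantified. This is not the Tier1's actual monitoring, and the target hostname is a hypothetical example:

```python
# Minimal sketch: estimate packet loss to a host with the standard Linux
# "ping" tool by parsing its summary line. The hostname is hypothetical.
import re
import subprocess

def packet_loss_percent(host: str, count: int = 100) -> float:
    """Send `count` pings and return the reported packet-loss percentage."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-i", "0.2", host],
        capture_output=True, text=True, check=False,
    )
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    return float(match.group(1)) if match else float("nan")

if __name__ == "__main__":
    print(packet_loss_percent("gridftp.example.gridpp.rl.ac.uk"))
```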
Ongoing Disk Server Issues
- GDSS620 (GenTape - D0T1) failed on the 1st January. This server had also failed recently (see Resolved Disk Server Issues above). Investigations are ongoing.
- GDSS677 (CMSTape - D0T1) failed yesterday evening (5th Jan). It is being worked on.
Notable Changes made since the last meeting.
- The final steps have been taken to remove the old core network switch, which is now off the network.
- A board in the UKLight router has been replaced and another added. Following this, the link between the UKLight router and the RAL border router was doubled from a single to a pair of 10Gbit connections, doubling our data bandwidth over this route.
- In order to ease problems with tape recalls, servers in the LHCbRawRDst service class were converted to use the Linux NOOP IO scheduler (see the sketch below).
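For reference, selecting the NOOP scheduler on a running server is done by writing to the block device's sysfs scheduler file. A minimal sketch follows, assuming a hypothetical device name and an SL-era kernel offering the noop/deadline/cfq schedulers; the actual change on the LHCbRawRDst servers may have been applied differently (e.g. via a boot parameter):

```python
# Minimal sketch: switch a block device's IO scheduler to "noop" via
# sysfs. Requires root. The device name "sda" is a hypothetical example.
import pathlib

def set_noop_scheduler(device: str) -> None:
    sched_file = pathlib.Path(f"/sys/block/{device}/queue/scheduler")
    available = sched_file.read_text()  # e.g. "noop deadline [cfq]"
    if "noop" not in available:
        raise RuntimeError(f"noop scheduler not available for {device}")
    sched_file.write_text("noop")

if __name__ == "__main__":
    set_noop_scheduler("sda")
```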
Declared in the GOC DB
None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.
Listing by category:
- Databases:
  - Switch LFC/3D to new Database Infrastructure.
- Castor:
  - Update SRMs to new version (includes updating to SL6).
  - Update disk servers in tape-backed service classes to SL6 (ongoing).
  - Update to Castor version 2.1.15.
- Networking:
  - Make routing changes to allow the removal of the UKLight Router.
- Fabric:
  - Firmware updates on remaining EMC disk arrays (Castor, LFC).
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-atlas.gridpp.rl.ac.uk, | UNSCHEDULED | WARNING | 01/01/2016 04:30 | 01/01/2016 12:27 | 7 hours and 57 minutes | one of four machines in DNS alias down, so some transfer failures possible |
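The entry above concerns one machine behind a round-robin DNS alias. As an illustration only (not a RAL procedure), a minimal sketch for listing the hosts currently published behind such an alias:

```python
# Minimal sketch: list the IP addresses currently returned for a
# round-robin DNS alias such as srm-atlas.gridpp.rl.ac.uk.
import socket

def alias_members(alias: str):
    """Return the A records currently published for the alias."""
    _, _, addresses = socket.gethostbyname_ex(alias)
    return addresses

if __name__ == "__main__":
    for ip in alias_members("srm-atlas.gridpp.rl.ac.uk"):
        print(ip)
```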
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
118631 | Green | Very Urgent | Waiting Reply | 2016-01-05 | 2016-01-06 | Atlas | RAL-LCG2 'unable-to-connect' destination transfer errors |
118573 | Green | Urgent | In Progress | 2016-01-02 | 2016-01-05 | Atlas | A lot of RAL-LCG2 data transfers fails |
118549 | Green | Urgent | Waiting for Reply | 2015-12-30 | 2016-01-05 | CMS | Volume Idle about 100% of Volume Requested at T1_UK_RAL |
118494 | Green | Urgent | In Progress | 2015-12-23 | 2015-12-24 | CMS | Xrootd problems??? |
118209 | Green | Less Urgent | In Progress | 2015-12-15 | 2015-12-18 | | Enabling CVMFS for the vo.neugrid.eu VO |
118044 | Green | Less Urgent | Waiting Reply | 2015-11-30 | 2016-01-05 | Atlas | gLExec hammercloud jobs failing at RAL-LCG2 since October |
117846 | Green | Urgent | Waiting for Reply | 2015-11-23 | 2015-12-22 | Atlas | ATLAS request- storage consistency checks |
117683 | Green | Less Urgent | In Progress | 2015-11-18 | 2016-01-05 | | CASTOR at RAL not publishing GLUE 2 |
116866 | Amber | Less Urgent | On Hold | 2015-10-12 | 2015-12-18 | SNO+ | snoplus support at RAL-LCG2 (pilot role) |
116864 | Red | Urgent | In Progress | 2015-10-12 | 2015-12-16 | CMS | T1_UK_RAL AAA opening and reading test failing again... |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
23/12/15 | 100 | 100 | 100 | 96 | 100 | 100 | 100 | Single SRM test failure. Could not find the test file. |
24/12/15 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | SRM failure: File deletion failed in token. Error message: __main__.TimeoutException |
25/12/15 | 100 | 100 | 100 | 100 | 100 | 96 | 100 | |
26/12/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
27/12/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
28/12/15 | 100 | 100 | 100 | 100 | 100 | 86 | 100 | |
29/12/15 | 100 | 100 | 100 | 100 | 100 | 0 | 100 | |
30/12/15 | 100 | 100 | 100 | 100 | 100 | 0 | 100 | |
31/12/15 | 100 | 100 | 92 | 96 | 100 | 100 | 100 | CMS: Single SRM failure. Atlas: AtlasDataDisk full |
01/01/16 | 100 | 100 | 6 | 100 | 100 | 55 | 100 | AtlasDataDisk full |
02/01/16 | 100 | 100 | 40 | 100 | 100 | 94 | 100 | AtlasDataDisk full |
03/01/16 | 100 | 100 | 100 | 100 | 100 | 97 | N/A | |
04/01/16 | 96.1 | 100 | 80 | 85 | 85 | 100 | 100 | An internal communication problem affected Castor; the exact cause is unclear. |
05/01/16 | 99.6 | 100 | 68 | 68 | 68 | 91 | N/A | Problem on the Bypass (non-OPN) link. |