
RAL Tier1 Operations Report for 6th January 2016

Review of Issues during the fortnight 23rd December 2015 to 6th January 2016.
  • There were significant batch problems during the first part of this fortnight, which had been getting progressively worse; they were resolved between Christmas and the New Year. The problems were found to be caused partly by excessive load on the Condor components running on the ARC CEs, coming from the new draining algorithm, and partly by a parameter change introduced with a Condor update. This was initially reported at the last meeting as a high rate of batch job failures seen by LHCb since around the 9th December.
  • Over the holiday period we were running with a 2*10Gbit link for the "bypass" route, which is used by data traffic that does not go over the OPN. However, it was found that the ACLs that act as a firewall on this route were not in place. There have been problems this week as we have attempted to reinstate the ACLs while still running with the 2*10Gbit link, with significant problems during last night (5-6 Jan). We have now reverted to a single 10Gbit link for this connection.
  • Over the new year (31/12-01/01) AtlasDataDisk filled up, causing the Atlas SAM tests to fail. There was very high load on the couple of servers that still had some remaining space, which in turn led to read failures as well. The problem was alleviated when Atlas changed their deletion algorithm on the 2nd Jan to preferentially delete larger files first, after which free space became available. (A minimal illustrative sketch of such a largest-first deletion policy follows this list.)
  • There was a problem affecting all Castor instances during the day on Monday 4th January, caused by internal problems within Castor. It lasted roughly the length of the working day but is not yet understood.
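
The "largest files first" deletion policy referred to above can be illustrated with a minimal sketch. It is purely illustrative: it assumes a local directory of files, whereas the real ATLAS deletion machinery operates through the experiment's data-management system, and the function and parameter names below are invented for the example.

    # Illustrative sketch only: free a target amount of space by deleting the
    # largest files first, so the fewest deletions are needed to recover space.
    import os

    def free_space_largest_first(directory, bytes_to_free):
        files = []
        for root, _dirs, names in os.walk(directory):
            for name in names:
                path = os.path.join(root, name)
                files.append((os.path.getsize(path), path))

        freed = 0
        for size, path in sorted(files, reverse=True):  # largest first
            if freed >= bytes_to_free:
                break
            os.remove(path)
            freed += size
        return freed
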
Resolved Disk Server Issues
  • GDSS665 (AtlasTape) failed on the 21st Dec. It was rebooted and all canbemigr files migrated to tape. A faulty disk drive was replaced. System returned to production on Christmas Day!
  • GDSS620 (GenTape - D0T1) failed on the 22nd Dec. It was rebooted and all canbemigr files migrated to tape. Following testing it was put back into service on the 24th December. (But note that it has since failed yet again).
  • GDSS656 (lhcbRawRdst - D0T1) had a double disk failure on the 23rd Dec. It was removed from service while the disks were replaced and the RAID array rebuilt. It was returned to service on the 24th December.
  • GDSS675 (CMSTape - D0T1) failed on the afternoon of 23rd December. A faulty drive was found. Also returned to production on Christmas Day.
  • GDSS770 (AtlasDataDisk - D1T0) crashed on the 31st December. Following checks it was returned to service later that day.
  • GDSS710 (CMSDisk - D1T0) failed on the 2nd January. Following checks it was returned to service the following day.
  • GDSS648 (LHCbUser - D1T0) was taken out of service on the 4th January as two disks had failed. It was returned to production the following day.
  • GDSS707 (AtlasDataDisk - D1T0) was taken out of service on the 4th January as one of the Castor processes was failing to run. Following investigations it was returned to production in read-only mode the next day.
Current operational status and issues
  • There is a problem, seen by LHCb, of a low but persistent rate of failures when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are then attempted to storage at other sites. A recent modification has improved, but not completely fixed, this.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise, we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network.
Ongoing Disk Server Issues
  • GDSS620 (GenTape - D0T1) failed on the 1st January. This server has also failed recently (see above). Investigations are ongoing.
  • GDSS677 (CMSTape - D0T1) failed yesterday evening (5th Jan). It is being worked on.
Notable Changes made since the last meeting.
  • Following problems, the 'bypass' network link has been dropped back to a single 10Gbit link.
  • The new batch draining script has been modified to eliminate the load it was placing on Condor (see the illustrative sketch after this list).
  • The quarterly UPS/Generator load test took place successfully this morning.
  • A power supply was replaced on Monday 4th Jan. in the disk array that hosts the Castor standby databases. This had reported problems over the holiday period.
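
The report does not give the details of the draining-script change, so the sketch below only illustrates one generic way such a script can reduce the load it places on Condor: make a single cached query to the collector each draining cycle (via the HTCondor Python bindings) rather than issuing a condor_status call per worker node. The attribute names and condor_drain are standard HTCondor; the overall structure is an assumption, not the actual RAL modification.

    # Illustrative sketch only: one collector query per draining cycle, reused
    # for all decisions, rather than a per-node condor_status call.
    import subprocess
    import htcondor  # HTCondor Python bindings

    def idle_machines():
        coll = htcondor.Collector()
        ads = coll.query(htcondor.AdTypes.Startd,
                         projection=["Machine", "State", "Activity"])
        return [ad["Machine"] for ad in ads if ad.get("Activity") == "Idle"]

    def drain(machine):
        # condor_drain asks the startd to accept no new jobs and let running
        # jobs retire gracefully.
        subprocess.run(["condor_drain", machine], check=True)
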
Declared in the GOC DB

None

Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update disk servers in tape-backed service classes to SL6 (ongoing).
    • Update to Castor version 2.1.15.
  • Networking:
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-atlas.gridpp.rl.ac.uk, | UNSCHEDULED | WARNING | 01/01/2016 04:30 | 01/01/2016 12:27 | 7 hours and 57 minutes | one of four machines in DNS alias down, so some transfer failures possible
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
118631 | Green | Very Urgent | Waiting Reply | 2016-01-05 | 2016-01-06 | Atlas | RAL-LCG2 'unable-to-connect' destination transfer errors
118573 | Green | Urgent | In Progress | 2016-01-02 | 2016-01-05 | Atlas | A lot of RAL-LCG2 data transfers fails
118549 | Green | Urgent | Waiting for Reply | 2015-12-30 | 2016-01-05 | CMS | Volume Idle about 100% of Volume Requested at T1_UK_RAL
118494 | Green | Urgent | In Progress | 2015-12-23 | 2015-12-24 | CMS | Xrootd problems???
118209 | Green | Less Urgent | In Progress | 2015-12-15 | 2015-12-18 | | Enabling CVMFS for the vo.neugrid.eu VO
118044 | Green | Less Urgent | Waiting Reply | 2015-11-30 | 2016-01-05 | Atlas | gLExec hammercloud jobs failing at RAL-LCG2 since October
117846 | Green | Urgent | Waiting for Reply | 2015-11-23 | 2015-12-22 | Atlas | ATLAS request- storage consistency checks
117683 | Green | Less Urgent | In Progress | 2015-11-18 | 2016-01-05 | | CASTOR at RAL not publishing GLUE 2
116866 | Amber | Less Urgent | On Hold | 2015-10-12 | 2015-12-18 | SNO+ | snoplus support at RAL-LCG2 (pilot role)
116864 | Red | Urgent | In Progress | 2015-10-12 | 2015-12-16 | CMS | T1_UK_RAL AAA opening and reading test failing again...
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
23/12/15 | 100 | 100 | 100 | 96 | 100 | 100 | 100 | Single SRM test failure. Could not find the test file.
24/12/15 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | SRM failure: File deletion failed in token. Error message: __main__.TimeoutException
25/12/15 | 100 | 100 | 100 | 100 | 100 | 96 | 100 |
26/12/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
27/12/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
28/12/15 | 100 | 100 | 100 | 100 | 100 | 86 | 100 |
29/12/15 | 100 | 100 | 100 | 100 | 100 | 0 | 100 |
30/12/15 | 100 | 100 | 100 | 100 | 100 | 0 | 100 |
31/12/15 | 100 | 100 | 92 | 96 | 100 | 100 | 100 | CMS: Single SRM failure. Atlas: AtlasDataDisk full
01/01/16 | 100 | 100 | 6 | 100 | 100 | 55 | 100 | AtlasDataDisk full
02/01/16 | 100 | 100 | 40 | 100 | 100 | 94 | 100 | AtlasDataDisk full
03/01/16 | 100 | 100 | 100 | 100 | 100 | 97 | N/A |
04/01/16 | 96.1 | 100 | 80 | 85 | 85 | 100 | 100 | Internal communication problem (or something) affected Castor.
05/01/16 | 99.6 | 100 | 68 | 68 | 68 | 91 | N/A | Problem on the Bypass (non-OPN) link.