RAL Tier1 Operations Report for 29th June 2016

Review of Issues during the fortnight 15th to 29th June 2016.
  • The problems with the "ACSLS" control software for the tape library crashing, as reported at previous meetings, continued up until last Wednesday (22nd). Various actions were taken to understand the problem and we have worked closely with the vendor (Oracle). Since that date the system has been stable, with no crashes at all for a week. We do have a reduced number of Tier1 tape drives in use, although this has been sufficient to keep up with the existing workload. However, the cause of the problem has not been understood and our investigations continue. We will be changing various configurations of the tape library during the working day with the aim of understanding this problem, returning to what we believe is a known good configuration overnight.
  • At the start of this period we continued to see high load on the OPN link. However, this has eased off for the last week or so.
  • On Friday there was a problem with lcgwms04. A user had submitted many jobs needing large output sandboxes. The user was contacted and responded quickly. It took us a while to clear the jobs already in lcgwms04 and there were some ongoing problems until the start of this week.
  • Yesterday afternoon (Tuesday 28th) there was a problem with the Atlas Castor instance that lasted for a few hours. This was resolved by a restart of processes. We have a suggestion as to the cause of this problem (a resource limit), although it is not conclusive; a generic illustration of checking such a limit is sketched below.
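The suspected resource limit on the Atlas Castor instance has not been identified, so the snippet below is only a generic, illustrative sketch of one way to compare a Linux process's open file descriptor count against its soft limit. It inspects its own process as a placeholder and is not the actual diagnostic used in the investigation.

 #!/usr/bin/env python
 # Illustrative only: compare a process's open file descriptors against its
 # soft "Max open files" limit via /proc. The PID used here is a placeholder
 # (the script inspects itself), not a Castor process.
 import os

 def fd_usage(pid):
     """Return (open_fds, soft_limit) for the given PID on Linux."""
     open_fds = len(os.listdir('/proc/%d/fd' % pid))
     soft_limit = None
     with open('/proc/%d/limits' % pid) as limits:
         for line in limits:
             if line.startswith('Max open files'):
                 soft_limit = int(line.split()[3])
     return open_fds, soft_limit

 if __name__ == '__main__':
     pid = os.getpid()  # placeholder: inspect this script's own process
     used, limit = fd_usage(pid)
     print('PID %d is using %d of %s allowed open files' % (pid, used, limit))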
Resolved Disk Server Issues
  • GDSS748 (AtlasDataDisk - D1T0) crashed on the afternoon of Sunday 19th June. There were problems with the RAID array which were finally fixed by transplanting all the disks into another chassis, which was re-named. The server was put back in service read-only on Tuesday 21st June. It is being drained ahead of sorting out the re-named system.
  • GDSS743 (AtlasDataDisk - D1T0) crashed in the early hours of Monday 20th June. Following two disk replacements it was put back in service the following day, initially read-only. Once the disk rebuilds had all completed it was set back to read/write.
Current operational status and issues
  • There is a problem seen by LHCb of a low but persistent rate of failure when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites. A recent modification has improved, but not completely fixed, this.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise, we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network; a minimal ad-hoc spot check is sketched below.
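Packet loss at RAL is monitored with the perfSONAR infrastructure (see GGUS ticket 121687 below). Purely as an illustration of a quick ad-hoc spot check, and assuming a standard Linux ping command, the sketch below measures the loss rate to a single host; the target hostname is a placeholder, not one of the hosts under investigation.

 #!/usr/bin/env python
 # Quick ad-hoc packet-loss spot check using the system "ping" command.
 # The target below is a placeholder host name, not a real endpoint.
 import re
 import subprocess

 def packet_loss(host, count=20):
     """Ping the host and return the reported packet loss percentage."""
     try:
         output = subprocess.check_output(['ping', '-c', str(count), host],
                                          universal_newlines=True)
     except subprocess.CalledProcessError as exc:
         output = exc.output  # ping exits non-zero if no replies came back
     match = re.search(r'([\d.]+)% packet loss', output)
     return float(match.group(1)) if match else None

 if __name__ == '__main__':
     target = 'remote-host.example.org'  # placeholder target
     print('%s: %s%% packet loss' % (target, packet_loss(target)))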
Ongoing Disk Server Issues
  • None.
Notable Changes made since the last meeting.
  • All access to GenScratch has now been stopped (as announced).
  • ARC-CE02 has been reinstalled on new virtual infrastructure. The space for job sandboxes has been tripled.
  • All access to lcgwms06 has now been stopped, largely completing the decommissioning procedure.
  • The migration of Atlas data from "C" to "D" tapes continues. We have migrated around 600 of the 1300 tapes so far; a short illustration of the progress is sketched below.
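As a short illustration of the migration progress quoted above (using the approximate figures from this report; no completion date is projected because the weekly migration rate is not stated):

 # Approximate Atlas "C" -> "D" tape migration progress, using the round
 # numbers quoted in this report (around 600 of 1300 tapes migrated).
 migrated = 600
 total = 1300
 fraction_done = migrated / float(total)
 print('Migrated %d of %d tapes: about %.0f%% complete, %d tapes remaining'
       % (migrated, total, 100 * fraction_done, total - migrated))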
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
lcgwms06.gridpp.rl.ac.uk SCHEDULED OUTAGE 01/06/2016 11:00 30/06/2016 11:00 29 days Server lcgwms06.gridpp.rl.ac.uk Decommissioning
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
    • Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
    • Migration of data from T10KC to T10KD tapes (Affects Atlas & LHCb data).
  • Networking:
    • Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
  • Fabric
    • Firmware updates on remaining EMC disk arrays (Castor, LFC)
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
arc-ce02.gridpp.rl.ac.uk SCHEDULED OUTAGE 20/06/2016 10:00 27/06/2016 17:00 7 days, 7 hours CE being drained and moved.
All Castor tape. UNSCHEDULED WARNING 14/06/2016 12:00 15/06/2016 14:00 1 day, 2 hours Tape access limited while we investigate a problem with the tape library control software crashing.
lcgwms06.gridpp.rl.ac.uk SCHEDULED OUTAGE 01/06/2016 11:00 30/06/2016 11:00 29 days Server lcgwms06.gridpp.rl.ac.uk Decommissioning
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
121687 Green Less Urgent On Hold 2016-05-20 2016-05-23 packet loss problems seen on RAL-LCG perfsonar
120810 Red Urgent In Progress 2016-04-13 2016-06-24 Biomed Decommissioning of SE srm-biomed.gridpp.rl.ac.uk - forbid write access for biomed users
120350 Green Less Urgent In Progress 2016-03-22 2016-05-06 LSST Enable LSST at RAL
119841 Red Less Urgent On Hold 2016-03-01 2016-04-26 LHCb HTTP support for lcgcadm04.gridpp.rl.ac.uk
117683 Yellow Less Urgent On Hold 2015-11-18 2016-04-05 CASTOR at RAL not publishing GLUE 2
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
15/06/16 100 100 98 100 100 100 100 Single SRM test failure on GET: Unable to issue PrepareToGet request to Castor
16/06/16 100 100 100 100 100 100 100
17/06/16 100 100 100 100 100 100 100
18/06/16 100 100 100 100 100 100 100
19/06/16 100 100 98 100 100 100 100 Single SRM test failure on GET: Unable to issue PrepareToGet request to Castor
20/06/16 100 100 100 100 100 100 100
21/06/16 100 94 100 100 100 98 100 AliEn test failures affected many sites.
22/06/16 100 91 100 100 96 100 100 Alice: AliEn test failures affected many sites; LHCb: Single SRM error on listing: [SRM_INVALID_PATH] No such file or directory
23/06/16 100 100 100 100 100 100 100
24/06/16 100 100 100 100 100 100 100
25/06/16 100 100 100 100 100 100 100
26/06/16 100 100 100 92 100 100 100 Looks spurious. We did fail one CE test - but other CEs were OK.
27/06/16 100 100 100 100 100 100 100
28/06/16 100 100 94 100 100 98 100 Failed some SRM tests early afternoon. Problem with Atlas Castor instance.
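Purely as an illustration of how the fortnight's average availability per column follows from the daily figures, the short script below reproduces the table above (numbers copied verbatim from the table; no other data are assumed) and averages each column.

 # Average availability per column over 15-28 June 2016, using the daily
 # figures from the Availability Report table above.
 COLUMNS = ['OPS', 'Alice', 'Atlas', 'CMS', 'LHCb', 'Atlas HC', 'CMS HC']
 DAILY = [
     ('15/06/16', [100, 100,  98, 100, 100, 100, 100]),
     ('16/06/16', [100, 100, 100, 100, 100, 100, 100]),
     ('17/06/16', [100, 100, 100, 100, 100, 100, 100]),
     ('18/06/16', [100, 100, 100, 100, 100, 100, 100]),
     ('19/06/16', [100, 100,  98, 100, 100, 100, 100]),
     ('20/06/16', [100, 100, 100, 100, 100, 100, 100]),
     ('21/06/16', [100,  94, 100, 100, 100,  98, 100]),
     ('22/06/16', [100,  91, 100, 100,  96, 100, 100]),
     ('23/06/16', [100, 100, 100, 100, 100, 100, 100]),
     ('24/06/16', [100, 100, 100, 100, 100, 100, 100]),
     ('25/06/16', [100, 100, 100, 100, 100, 100, 100]),
     ('26/06/16', [100, 100, 100,  92, 100, 100, 100]),
     ('27/06/16', [100, 100, 100, 100, 100, 100, 100]),
     ('28/06/16', [100, 100,  94, 100, 100,  98, 100]),
 ]
 for index, name in enumerate(COLUMNS):
     values = [figures[index] for _, figures in DAILY]
     print('%-8s : %.2f%% average availability'
           % (name, sum(values) / float(len(values))))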