RAL Tier1 Operations Report for 21st December 2016

Review of Issues during the week 14th to 21st December 2016.
  • There was a problem with the Atlas Frontier service on Wednesday (7th). The excess load was caused by a particular Atlas user. The services on the squid systems needed several restarts through the day and evening.
  • Since Tuesday (13th Dec) we have been seeing high load on CMSTape in Castor, and are failing SAM tests as a result.
Resolved Disk Server Issues
  • GDSS677 (CMSTape - D0T1) failed late evening on 14th Dec. The following day it was put back in service and seven of the eight files awaiting migration to tape were flushed off it. One file was declared lost to CMS.
  • GDSS822 (AtlasDataDisk - D1T0) failed in the early hours of Tuesday 20th Dec. It was returned to service late in the afternoon of the same day. It is unclear why the server had stopped.
  • GDSS749 (AtlasDataDisk - D1T0) was unresponsive during Tuesday evening (20th Dec) and was taken out of production. It was found that a disk had failed, and taking that disk out of the RAID array allowed the server to work normally again (see the sketch below). The server was put back in production later that evening and was left in read-only mode overnight.
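
The report does not say how the failed drive was identified or ejected; on the Tier1 hardware that goes through the RAID card's own tool. As an illustrative stand-in only, the Python sketch below shows the equivalent idea for Linux software (md) RAID, where failed array members are flagged with "(F)" in /proc/mdstat:

    # Illustrative only: the Tier1 disk servers use hardware RAID cards, so the
    # real check goes through the controller's own tool. This shows the
    # equivalent idea for Linux software (md) RAID.
    import re

    def failed_md_members(mdstat_path="/proc/mdstat"):
        """Return the device names of failed array members, e.g. ['sdb1']."""
        failed = []
        with open(mdstat_path) as fh:
            for line in fh:
                # Failed members appear as e.g. 'sdb1[3](F)'.
                failed += re.findall(r"(\w+)\[\d+\]\(F\)", line)
        return failed

    if __name__ == "__main__":
        bad = failed_md_members()
        print("failed RAID members:", ", ".join(bad) if bad else "none")
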
Current operational status and issues
  • There is a problem seen by LHCb: a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these failed writes are then attempted to storage at other sites.
  • We had been reporting a problem with intermittent, low-level, load-related packet loss over external connections. We have been tracking this, particularly since the replacement of the UKLight router. The rate of packet loss has reduced and we now conclude there is no longer a significant problem; loss rates seen are comparable with those at other sites (a sketch of the measurement follows below).
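
The report does not describe the monitoring behind these loss figures. As a hypothetical sketch of the kind of measurement involved, using the system ping and a placeholder host name (not a real monitoring target from the report):

    # Hypothetical sketch: run the system ping against a remote endpoint and
    # parse the packet-loss figure from its summary line.
    import re
    import subprocess

    def packet_loss_percent(host, count=100):
        """Return the percentage packet loss reported by ping, or None."""
        out = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, check=False,
        ).stdout
        match = re.search(r"([\d.]+)% packet loss", out)
        return float(match.group(1)) if match else None

    if __name__ == "__main__":
        # Placeholder host name, for illustration only.
        print(packet_loss_percent("example.remote-site.org", count=20))
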
Ongoing Disk Server Issues
  • GDSS671 (AliceDisk - D1T0) failed on Saturday (17th Dec) with a read-only filesystem. Two failing drives have been identified. (A detection sketch follows below.)
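
A read-only filesystem is the kernel's usual response to unrecoverable disk I/O errors, and it can be spotted from /proc/mounts. A minimal, hypothetical detection sketch (in practice one would filter to the data mounts of interest, since some pseudo-filesystems are legitimately read-only):

    # Minimal sketch: list filesystems currently mounted read-only.
    def readonly_mounts(mounts_path="/proc/mounts"):
        """Return (device, mountpoint) pairs mounted with the 'ro' option."""
        ro = []
        with open(mounts_path) as fh:
            for line in fh:
                device, mountpoint, _fstype, options = line.split()[:4]
                if "ro" in options.split(","):
                    ro.append((device, mountpoint))
        return ro

    if __name__ == "__main__":
        for device, mountpoint in readonly_mounts():
            print(f"read-only: {device} on {mountpoint}")
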
Notable Changes made since the last meeting.
  • On Monday (5th Dec) the bulk of the HPE Worker Nodes (the second tranche of last year's procurement) were reinstalled with SL6 and put into production. This follows a period of testing with another configuration in readiness for CEPH deployment. (This item should have appeared in last week's report.)
  • On Thursday (8th Dec) the small LhcbUser disk pool was merged into the larger LhcbDst pool.
  • FTS database adjustments were made, as requested by ATLAS.
  • (Ongoing at time of meeting: Firmware updates of RAID cards on Clustervision '13 batch of disk servers.)
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
Castor CMS instance SCHEDULED OUTAGE 31/01/2017 10:00 31/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded).
Castor GEN instance SCHEDULED OUTAGE 26/01/2017 10:00 26/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded).
Castor Atlas instance SCHEDULED OUTAGE 24/01/2017 10:00 24/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting Atlas instance. (Atlas stager component being upgraded).
Castor LHCb instance SCHEDULED OUTAGE 17/01/2017 10:00 17/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded).
All Castor (all SRM endpoints) SCHEDULED OUTAGE 10/01/2017 10:00 10/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Upgrade of Nameserver component. All instances affected.
All Castor SCHEDULED OUTAGE 05/01/2017 09:30 05/01/2017 17:00 7 hours and 30 minutes Outage of Castor Storage System for patching
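
These declarations can also be retrieved programmatically from the GOC DB. The sketch below queries the public get_downtime method for the RAL-LCG2 site; the endpoint, parameters and element names follow the GOCDB programmatic interface documentation, but treat them as assumptions to verify against the live service:

    # Sketch: fetch the site's declared downtimes from the GOC DB public
    # programmatic interface and print a one-line summary per entry.
    import urllib.request
    import xml.etree.ElementTree as ET

    URL = ("https://goc.egi.eu/gocdbpi/public/"
           "?method=get_downtime&topentity=RAL-LCG2&ongoing_only=no")

    with urllib.request.urlopen(URL) as resp:
        root = ET.fromstring(resp.read())

    for dt in root.iter("DOWNTIME"):
        print(dt.findtext("SEVERITY"),
              dt.findtext("FORMATED_START_DATE"),
              dt.findtext("FORMATED_END_DATE"),
              dt.findtext("DESCRIPTION"))
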
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update to Castor version 2.1.15. Dates announced via GOC DB for early 2017.
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  • Fabric
    • Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
srm-atlas.gridpp.rl.ac.uk, srm-cms-disk.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk SCHEDULED WARNING 14/12/2016 10:00 14/12/2016 17:00 7 hours Warning while some disk servers have rolling firmware updates and are rebooted. Temporary loss of access to files.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
125 Green Urgent In Progress 2016-12-13 2016-12-13 CMS Transfers failing from T1_UK_RAL_Buffer
125551 Green Urgent In Progress 2016-12-13 2016-12-13 CMS T1_UK_RAL SAM3_SRM Critical > 2h
125480 Green Less Urgent In Progress 2016-12-09 2016-12-12 total Physical and Logical CPUs values
125157 Green Less Urgent In Progress 2016-11-24 2016-12-07 Creation of a repository within the EGI CVMFS infrastructure
124876 Green Less Urgent On Hold 2016-11-07 2016-11-21 OPS [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
124606 Red Top Priority Waiting for Reply 2016-10-24 2016-12-09 CMS Consistency Check for T1_UK_RAL
124478 Green Less Urgent In Progress 2016-11-18 2016-12-13 NA62 Jobs submitted via RAL WMS stuck in state READY forever and ever and ever
117683 Red Less Urgent On Hold 2015-11-18 2016-12-07 CASTOR at RAL not publishing GLUE 2. Plan to revisit week starting 19th.
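
The Green/Amber/Red levels in this table track ticket age. GGUS does not publish the exact thresholds used here, so the values in this sketch are assumptions chosen only so that the examples reproduce the colours in the table above:

    # Sketch: classify an open ticket's level by its age in days. The
    # amber/red thresholds are assumptions, not official GGUS definitions.
    from datetime import date

    def ticket_level(created, today, amber_days=45, red_days=56):
        """Return 'Green', 'Amber' or 'Red' from the ticket's age."""
        age = (today - created).days
        if age >= red_days:
            return "Red"
        if age >= amber_days:
            return "Amber"
        return "Green"

    if __name__ == "__main__":
        today = date(2016, 12, 21)
        print(ticket_level(date(2016, 12, 13), today))  # Green (ticket 125551)
        print(ticket_level(date(2016, 10, 24), today))  # Red   (ticket 124606)
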
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
14/12/16 100 100 100 44 100 N/A 100 Load problem on CMS_Tape causing multiple test failures.
15/12/16 100 100 100 100 100 N/A 100
16/12/16 100 100 100 100 100 N/A 100
17/12/16 100 100 100 99 100 N/A 99 Single SRM test failure: User timeout over
18/12/16 100 100 100 100 100 N/A 100
19/12/16 100 100 100 96 100 N/A 100 Several SRM test failures: User timeout over
20/12/16 100 100 100 93 100 N/A 100 Several SRM test failures: User timeout over
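
Each daily figure is the percentage of that day's availability-test samples that passed, so a handful of SRM test failures shaves a few percent off. A toy example with invented sample counts showing how a value like the 93% for CMS on 20/12/16 can arise:

    # Toy example: daily availability as the rounded percentage of passed
    # test samples. The counts below are invented purely for illustration.
    def availability_percent(passed, total):
        """Percentage of passed availability-test samples, rounded."""
        return round(100.0 * passed / total)

    if __name__ == "__main__":
        print(availability_percent(passed=67, total=72))  # -> 93
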
Notes from Meeting.
  • The meeting did not take place as there were no VO representatives present.