
RAL Tier1 Operations Report for 27th July 2016

Review of Issues during the week 20th to 27th July 2016.
  • There has been saturation of the inbound 10Gbit OPN link at times over the last week. The bypass route to JANET has also shown high traffic volumes.
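As context for how such saturation shows up: a minimal sketch of the utilisation arithmetic applied to two samples of an interface byte counter (e.g. SNMP ifHCInOctets). The counter values, sample interval and helper below are illustrative assumptions, not the Tier1's actual monitoring.

```python
# Illustrative utilisation arithmetic for a 10Gbit link, using two samples
# of an interface octet counter (e.g. SNMP ifHCInOctets). The numbers in
# the example call are made up.

LINK_CAPACITY_BPS = 10e9  # inbound OPN link capacity: 10Gbit/s

def utilisation(octets_t0: int, octets_t1: int, interval_s: float) -> float:
    """Fraction of link capacity used between two counter samples."""
    bits_transferred = (octets_t1 - octets_t0) * 8
    return bits_transferred / (interval_s * LINK_CAPACITY_BPS)

# 3.6e11 octets over 300s is 9.6Gbit/s, i.e. ~96%: close to saturation.
print(f"{utilisation(0, 360_000_000_000, 300):.0%} of capacity")
```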
Resolved Disk Server Issues
  • GDSS650 (LHCbUser, D1T0), which failed on Monday 19th July, was returned to service on Wednesday afternoon (20th). A single file that was being written when the server failed was lost.
  • GDSS634 (AtlasTape, D0T1) crashed on Thursday 21st July and was returned to service on Monday 25th July. This looks like a disk controller failure. Eleven files that were being written when it failed were reported lost to Atlas (a checksum-verification sketch follows this list).
  • GDSS678 (CMSTape D0T1) crashed on Saturday (23rd July). It was returned to service, initially read-only, the following day.
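Where files may have been lost in a crash, a server's contents are typically checked against catalogued checksums on return to service. A minimal sketch of such a check, assuming adler32 checksums (common in grid catalogues); the 'expected' mapping and path are hypothetical stand-ins for the real catalogue:

```python
# Hedged sketch: verify files on a returned disk server against known
# adler32 checksums. The 'expected' dictionary is a hypothetical stand-in
# for checksums held in the Castor name server / experiment catalogue.
import zlib

def adler32_of(path: str) -> str:
    checksum = 1  # adler32 seed value
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            checksum = zlib.adler32(chunk, checksum)
    return f"{checksum & 0xFFFFFFFF:08x}"

expected = {"/exportstage/lhcbuser/somefile": "01e240ab"}  # hypothetical entry
for path, want in expected.items():
    have = adler32_of(path)
    status = "OK" if have == want else "MISMATCH (candidate lost/corrupt file)"
    print(path, have, status)
```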
Current operational status and issues
  • There is a problem, seen by LHCb, of a low but persistent rate of failure when copying the results of batch jobs to Castor. A further problem sometimes occurs when these failed writes are then attempted to storage at other sites. A recent modification has improved, but not completely fixed, this (a retry sketch follows this list).
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. We are likewise working to understand a remaining low level of packet loss seen within part of our Tier1 network.
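For the low-rate copy failures above, a common client-side mitigation is to retry the transfer with backoff rather than fail the job on the first attempt. A hedged sketch of that pattern; the xrdcp command, paths and retry counts are illustrative assumptions, not LHCb's actual tooling:

```python
# Sketch of retry-with-backoff for uploading job output, assuming a
# transfer tool invocable from the shell (xrdcp here, for illustration).
import subprocess
import time

def copy_with_retries(src: str, dst: str, attempts: int = 4) -> bool:
    for attempt in range(attempts):
        result = subprocess.run(["xrdcp", "-f", src, dst])
        if result.returncode == 0:
            return True
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, ... between attempts
    return False

# Hypothetical paths; a real job would use its assigned storage endpoint.
if not copy_with_retries("job_output.dst", "root://castor.example/lhcb/out.dst"):
    raise RuntimeError("upload failed after all retries")
```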
Ongoing Disk Server Issues
  • GDSS675 (CMSTape D0T1) was taken out of service on Tuesday morning, 26th July. It had a second disk failure while the first one was being rebuilt. All files awaiting migration to tape were flushed off the server before it was taken out of service.
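A minimal sketch of the flush step described above, assuming a catalogue dump listing each file with a migrated-to-tape flag (in Castor this information lives in the name server) and a hypothetical copy_off helper:

```python
# Hypothetical catalogue dump: (path, already migrated to tape?)
files = [
    ("/cmstape/run273158/f1.root", True),
    ("/cmstape/run273158/f2.root", False),
]

def copy_off(path: str) -> None:
    # placeholder for the real transfer to another disk server
    print(f"flushing {path} before the server is withdrawn")

# Only files not yet safe on tape need to be moved before withdrawal.
for path, on_tape in files:
    if not on_tape:
        copy_off(path)
```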
Notable Changes made since the last meeting.
  • The migration of Atlas data from "C" to "D" tapes continues. We have migrated over 700 of the 1300 tapes so far.
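Quick progress arithmetic on the figures quoted above:

```python
# Progress arithmetic for the C-to-D tape migration figures quoted above.
migrated, total = 700, 1300
print(f"{migrated / total:.0%} done, {total - migrated} tapes remaining")
# -> 54% done, 600 tapes remaining
```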
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
lcgfts3.gridpp.rl.ac.uk | SCHEDULED | WARNING | 19/07/2016 12:00 | 19/07/2016 13:00 | 1 hour | Upgrade FTS to v3.4.7
lfc.gridpp.rl.ac.uk | SCHEDULED | WARNING | 01/08/2016 12:00 | 01/08/2016 17:00 | 5 hours | RAC Oracle backend migration to new hardware
lfc.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 01/08/2016 09:00 | 01/08/2016 12:00 | 3 hours | RAC Oracle backend migration to new hardware
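For reference, the table above can be reproduced from the GOCDB programmatic interface. The public get_downtime method and topentity parameter exist in the GOCDB PI, but the field names parsed below should be checked against the current schema; this is a hedged sketch, not a supported tool:

```python
# Hedged sketch: list declared downtimes for RAL-LCG2 from the GOCDB PI.
import urllib.request
import xml.etree.ElementTree as ET

URL = ("https://goc.egi.eu/gocdbpi/public/"
       "?method=get_downtime&topentity=RAL-LCG2")

with urllib.request.urlopen(URL) as resp:
    root = ET.fromstring(resp.read())

# Field names assumed from the GOCDB XML schema; verify before relying on them.
for dt in root.findall("DOWNTIME"):
    print(dt.findtext("SEVERITY"),
          dt.findtext("HOSTNAME"),
          dt.findtext("FORMATED_START_DATE"),
          dt.findtext("DESCRIPTION"))
```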


Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Databases:
    • Switch LFC database to use new Database Infrastructure.
  • Castor:
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
    • Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
    • Migration of data from T10KC to T10KD tapes (Affects Atlas & LHCb data).
  • Networking:
    • Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
  • Fabric:
    • Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
  • None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-07-13 | SNO+ | Disk area at RAL
122818 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-07-12 | Atlas | Object Store at RAL
122804 | Green | Less Urgent | Waiting Reply | 2016-07-12 | 2016-07-15 | SNO+ | glite-transfer failure
122364 | Green | Less Urgent | Waiting Reply | 2016-06-27 | 2016-07-15 | | cvmfs support at RAL-LCG2 for solidexperiment.org
121687 | Yellow | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar
120810 | Green | Urgent | In Progress | 2016-04-13 | 2016-06-24 | Biomed | Decommissioning of SE srm-biomed.gridpp.rl.ac.uk - forbid write access for biomed users
120350 | Green | Less Urgent | In Progress | 2016-03-22 | 2016-05-06 | LSST | Enable LSST at RAL
119841 | Red | Less Urgent | On Hold | 2016-03-01 | 2016-04-26 | LHCb | HTTP support for lcgcadm04.gridpp.rl.ac.uk
117683 | Yellow | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2
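The Green/Yellow/Red levels in the table track ticket age. A minimal sketch of such age-based classification; the threshold values below are assumptions for illustration only, not the thresholds actually used in these reports:

```python
from datetime import date

def ticket_level(created: date, today: date) -> str:
    """Classify a ticket by age. Thresholds are assumed, purely illustrative."""
    age = (today - created).days
    if age < 30:
        return "Green"
    if age < 90:
        return "Yellow"
    if age < 180:
        return "Amber"
    return "Red"

# Under these assumed thresholds, ticket 121687 (created 2016-05-20)
# comes out Yellow at the 2016-07-27 meeting, 68 days old.
print(ticket_level(date(2016, 5, 20), date(2016, 7, 27)))
```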
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
20/07/16 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | Single SRM test failure: User Timeout.
21/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
22/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
23/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
24/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
25/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
26/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
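A quick check of the weekly average implied by the daily figures above; only the CMS column departs from 100%:

```python
# Weekly average from the daily availability figures above; only the
# CMS column departs from 100% (98% on 20/07/16).
cms_daily = [98, 100, 100, 100, 100, 100, 100]
print(f"CMS weekly average: {sum(cms_daily) / len(cms_daily):.1f}%")
# -> 99.7%
```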