
RAL Tier1 Operations Report for 30th November 2016

Review of Issues during the week 23rd to 30th November 2016.
  • There was a problem on the Castor 'GEN' instance following the renewal of the host certificates on the SRMs. The renewal process did not replicate the 'subject alternative names' section of the certificates, so the DNS aliases for the GEN SRMs were not included; this caused failures when accessing the SRMs. GGUS tickets were received from both SNO+ and MICE on Monday evening and Tuesday morning (21/22 Nov). However, the lack of an Admin On Duty yesterday meant these tickets were not acted upon until this morning (Wed 23rd), when the problem was quickly resolved. A simple check for this failure mode is sketched below.
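A minimal sketch of a post-renewal check (not the procedure used at RAL): fetch the certificate presented on an SRM alias and list the DNS names in its subject alternative name extension, so a renewal that dropped an alias is caught before users see failures. The alias below is taken from the MICE ticket; the port number and the use of the 'cryptography' package are assumptions.

    import ssl
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    def san_dns_names(host, port):
        """Return the DNS names in the subjectAltName of the presented certificate."""
        # Fetch the certificate without validating it: we want to inspect
        # the certificate even (especially) when it is wrong.
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
        try:
            ext = cert.extensions.get_extension_for_oid(
                ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
        except x509.ExtensionNotFound:
            return []  # the renewal dropped the whole SAN section
        return ext.value.get_values_for_type(x509.DNSName)

    if __name__ == "__main__":
        alias = "srm-mice.gridpp.rl.ac.uk"  # alias from GGUS ticket 125126
        names = san_dns_names(alias, 8443)  # 8443: assumed SRM port
        status = "present" if alias in names else "MISSING"
        print(f"{alias} is {status} in SAN entries: {names}")

Note that an SRM endpoint using GSI may require a client certificate during the handshake; in that case the same inspection can be run against a locally held copy of the certificate file instead.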
Resolved Disk Server Issues
  • GDSS750 (LHCbDst – D1T0) was taken out of service after reporting 'FSProbe' problems on Sunday morning (20th Nov). Two disks were replaced and the server was put back in service the following day.
Current operational status and issues
  • LHCb see a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted against storage at other sites.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. The replacement of the UKLight router appears to have reduced it, but we are allowing more time to pass before drawing any conclusions.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • Additional disk servers have been added to Castor: five extra servers of 100TB each for Alice, and twelve additional servers of 120TB each for LHCb. This enables both an increase in capacity and the withdrawal of some older (smaller capacity) disk servers; the added capacity is tallied in the sketch after this list.
  • There was an intervention on the ECHO Ceph system last week to enable a reconfiguration of its underlying network.
  • LHCb are now writing to the 'D' tapes. The migration of their data from 'C' to 'D' tapes is underway, with around 200 tapes (some 20%) done.
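A back-of-envelope tally of the capacity added above, as a minimal sketch; the per-server figures are nominal raw sizes, and usable space after filesystem and redundancy overheads will be lower.

    # Nominal raw capacity added to Castor (figures from the list above).
    alice_tb = 5 * 100    # 5 servers x 100 TB = 500 TB for Alice
    lhcb_tb = 12 * 120    # 12 servers x 120 TB = 1440 TB (~1.4 PB) for LHCb
    print(f"Alice: +{alice_tb} TB, LHCb: +{lhcb_tb} TB, "
          f"total: +{alice_tb + lhcb_tb} TB")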
Declared in the GOC DB

None

Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Castor:
    • Merge AtlasScratchDisk and LhcbUser into larger disk pools
    • Update to Castor version 2.1.15. Planning to roll out January 2017. (Proposed dates: 10th Jan: Nameserver; 17th Jan: First stager (LHCb); 24th Jan: Stager (Atlas); 26th Jan: Stager (GEN); 31st Jan: Final stager (CMS)).
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  • Fabric
    • Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
lcgfts3.gridpp.rl.ac.uk SCHEDULED WARNING 30/11/2016 11:00 30/11/2016 13:00 2 hours Upgrade of FTS3 service
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
125126 Green Urgent In Progress 2016-11-22 2016-11-23 MICE Problems connecting to srm-mice.gridpp.rl.ac.uk
125116 Green Less Urgent In Progress 2016-11-21 2016-11-23 SNO+ DNS configuration problem
124876 Green Less Urgent On Hold 2016-11-07 2016-11-21 OPS [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
124785 Red Urgent Reopened 2016-11-02 2016-11-09 CMS Configuration updated AAA - CMS Site Name missing
124606 Red Top Priority In Progress 2016-10-24 2016-11-01 CMS Consistency Check for T1_UK_RAL
124487 Green Less Urgent Waiting for Reply 2016-11-18 2016-11-18 Jobs submitted via RAL WMS stuck in state READY forever and ever and ever
122827 Green Less Urgent In Progress 2016-07-12 2016-10-11 SNO+ Disk area at RAL
117683 Red Less Urgent On Hold 2015-11-18 2016-10-05 CASTOR at RAL not publishing GLUE 2 (Updated. There are ongoing discussions with GLUE & WLCG)
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
23/11/16 100 100 100 100 100 N/A 100
24/11/16 100 100 100 100 100 N/A 100
25/11/16 100 100 100 100 100 N/A 100
26/11/16 100 100 100 98 100 N/A 98 Two SRM 'GET' test failures, both with a user timeout error.
27/11/16 100 100 100 100 100 N/A 100
28/11/16 100 100 100 100 100 N/A 98
29/11/16 96.5 100 100 100 100 N/A 99 Central monitoring problem affected other sites too.
Notes from Meeting.
  • Some work is needed in the Castor configuration to separate the storage of files from the different Dirac sites (Durham, Leicester, etc.).