RAL Tier1 Operations Report for 15th February 2017

Review of Issues during the week 8th to 15th February 2017.
  • Since the Castor upgrade we have seen a couple of further problems, which are now mainly fixed:
    • Two VOs, Atlas and LHCb, have seen issues with database resources (the number of open cursors), most recently Atlas on Friday 10th. We still don't understand this. (An illustrative monitoring sketch follows this list.)
    • We had been failing tests for CMS xroot. This was fixed by changing the weighting for xrootd in the Castor transfermanagerd.
    • We had been failing CMS tests for an SRM endpoint defined in the GOC DB but not in production ("srm-cms-disk"). This SRM endpoint was removed from the GOC DB, but then we started failing SAM tests for the remaining CMS SRM endpoint. Eventually the problem was tracked down to a bug in the test; CMS fixed it and we are now back to our 'normal' level of SRM test failures.
  • There was an issue with a network switch connecting some of the ECHO (CEPH) head nodes on Friday (10th Feb). It was resolved when the switch's link was restarted on Tuesday morning.
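
For context on the open-cursor issue noted above, here is a minimal monitoring sketch. It assumes the cx_Oracle Python module and read access to the Oracle v$ views; the connection details, service name and the 80% warning threshold are placeholders, not the production configuration.

  # Illustrative only: compare the busiest session's open-cursor usage against
  # the configured OPEN_CURSORS limit on an Oracle database behind Castor.
  import cx_Oracle

  def cursor_headroom(dsn, user, password):
      conn = cx_Oracle.connect(user, password, dsn)
      try:
          cur = conn.cursor()
          # Per-session limit configured on the database.
          cur.execute("SELECT value FROM v$parameter WHERE name = 'open_cursors'")
          limit = int(cur.fetchone()[0])
          # Highest number of cursors currently open by any single session.
          cur.execute(
              "SELECT MAX(a.value) "
              "FROM v$sesstat a JOIN v$statname b ON a.statistic# = b.statistic# "
              "WHERE b.name = 'opened cursors current'"
          )
          busiest = int(cur.fetchone()[0] or 0)
          return limit, busiest
      finally:
          conn.close()

  if __name__ == "__main__":
      # Placeholder DSN and credentials.
      limit, busiest = cursor_headroom("dbhost.example:1521/SOMEDB", "monitor", "secret")
      print("open_cursors limit = %d, busiest session = %d" % (limit, busiest))
      if busiest > 0.8 * limit:
          print("WARNING: a session is approaching the open_cursors limit")

Tracking these two numbers over time would show whether incidents like the Atlas one on Friday 10th correspond to a single runaway session or a general rise in usage.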
Resolved Disk Server Issues
  • GDSS674 (CMSTape - D0T1) reported problems on Friday evening, 10th Feb. There were no files on the server waiting to go to tape. After being checked over, it was returned to service on Monday (13th).
Current operational status and issues
  • We are still seeing some failures of the CMS SAM tests against the SRM. These affect our CMS availability figures, although the failure rate has been reduced recently. (An illustrative probe sketch follows.)
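
The SAM SRM test essentially performs namespace operations against the endpoint and fails on errors or timeouts. The sketch below is a rough, unofficial approximation using the gfal2 Python bindings; the SFN path and the 30-second threshold are placeholders, and a valid grid proxy is assumed.

  # Rough approximation of an SRM availability probe: stat a path on the
  # endpoint and report failures or slow responses. Not the actual SAM test.
  import time
  import gfal2

  SURL = "srm://srm-cms.gridpp.rl.ac.uk:8443/srm/managerv2?SFN=/placeholder/path"  # placeholder path
  TIMEOUT_S = 30  # illustrative threshold, not the SAM test's timeout

  def probe(surl):
      ctx = gfal2.creat_context()
      start = time.time()
      try:
          ctx.stat(surl)  # issues a namespace query via the SRM protocol
          elapsed = time.time() - start
          return ("OK" if elapsed < TIMEOUT_S else "SLOW"), elapsed
      except gfal2.GError as err:
          return "FAILED: %s" % err, time.time() - start

  if __name__ == "__main__":
      status, elapsed = probe(SURL)
      print("%s after %.1f s" % (status, elapsed))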
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
Declared in the GOC DB


Service Scheduled? Outage/At Risk Start End Duration Reason
Whole site SCHEDULED WARNING 01/03/2017 07:00 01/03/2017 11:00 4 hours Warning on site during network intervention in preparation for IPv6.
All Castor and ECHO storage and Perfsonar. SCHEDULED WARNING 22/02/2017 07:00 22/02/2017 11:00 4 hours Warning on Storage and Perfsonar during network intervention in preparation for IPv6.
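
The declarations above can also be retrieved programmatically from the GOC DB. A small sketch using the public programmatic interface is shown below; the method name and query parameters are the commonly documented ones, but the exact field names in the returned XML should be checked against the current GOC DB PI documentation.

  # List downtimes declared for a site via the public GOC DB programmatic
  # interface. Child elements are printed as-is so no particular XML schema
  # is assumed.
  import xml.etree.ElementTree as ET
  import requests

  GOCDB_PI = "https://goc.egi.eu/gocdbpi/public/"

  def declared_downtimes(site="RAL-LCG2"):
      resp = requests.get(
          GOCDB_PI,
          params={"method": "get_downtime", "topentity": site},
          timeout=30,
      )
      resp.raise_for_status()
      root = ET.fromstring(resp.text)
      for downtime in root.findall("DOWNTIME"):
          yield {child.tag: (child.text or "").strip() for child in downtime}

  if __name__ == "__main__":
      for entry in declared_downtimes():
          print(entry)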
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  • Networking:
    • Enabling IPv6 on the production network.
  • Databases
    • Removal of "asmlib" layer on Oracle database nodes.
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
ECHO: gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk UNSCHEDULED OUTAGE 13/02/2017 00:00 13/02/2017 13:45 13 hours and 45 minutes Problem with switch causing Echo to stop being accessible.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
126533 Green Urgent In Progress 2017-02-10 2017-02-14 Atlas UK RAL-LCG2-ECHO transfer/staging/deletion failures with "Unable to connect to gridftp.echo.stfc.ac.uk"
126532 Green Urgent In Progress 2017-02-09 2017-02-10 Atlas RAL tape staging errors
126184 Green Less Urgent In Progress 2017-01-26 2017-02-07 Atlas Request of inputs for new sites monitoring
124876 Red Less Urgent On Hold 2016-11-07 2017-01-01 OPS [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 Red Less Urgent On Hold 2015-11-18 2016-12-07 CASTOR at RAL not publishing GLUE 2. We looked at this as planned in December (report).
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
08/02/17 100 100 88 100 100 100 99 SRM test failures (timeouts)
09/02/17 100 100 59 100 100 99 100 SRM test failures (timeouts)
10/02/17 100 100 21 100 100 98 100 SRM test failures (timeouts)
11/02/17 100 100 100 100 100 100 100
12/02/17 100 100 100 100 100 100 100
13/02/17 100 100 100 100 100 100 100
14/02/17 100 100 100 86 100 100 86 SRM test failures (timeouts)
Notes from Meeting.
  • None yet