RAL Tier1 Operations Report for 15th November 2017

Review of Issues during the week 9th to 15th November 2017.
  • No significant operational problems to report. However, there were two successful interventions on Castor:
    • Successful patching and rebooting of Castor to pick up latest kernel and errata versions.
    • LHCb needed a new version of the SRM software that respects the user-specified bringOnline timeout rather than ignoring it and defaulting to 4 hours. This upgrade of the LHCb SRM component to CASTOR version 2.1.16-18 was successfully carried out this morning (a sketch of such a request follows this list).
  • The problem with the SAM test results for CMS is known to the CMS/SAM teams at CERN.
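
For reference, the bringOnline timeout in question is the one a client passes along with its tape recall request. A minimal sketch using the gfal2 Python bindings, assuming gfal2 is installed with SRM support; the SURL, pin time and timeout values are illustrative, not the actual LHCb workflow:

    import gfal2

    ctx = gfal2.creat_context()

    # Hypothetical SURL; replace with a real tape-backed file.
    surl = "srm://srm-lhcb.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/lhcb/SOME_FILE"

    # Pin time and timeout (both in seconds) are user-specified. The bug
    # fixed in 2.1.16-18 was that the SRM ignored this timeout and used a
    # 4-hour default instead.
    status, token = ctx.bring_online(surl, 3600, 600, True)  # asynchronous request
    print("bring-online status:", status, "token:", token)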
Current operational status and issues
  • A fault on the site firewall is affecting some specific data flows. Discussions have been taking place with the vendor. The fault affects data that flows through the firewall (such as to/from the worker nodes).
Resolved Disk Server Issues
  • None
Ongoing Disk Server Issues
  • GDSS776 (LHCbDst D1T0) crashed late yesterday afternoon (14th November). Investigations are ongoing.
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • Re-distribution of data in Echo onto the 2015 capacity hardware is now complete, giving 8 PB of usable space in Echo. Some data rebalancing remains to be done before the quotas are raised so the VOs can make use of the extra space (a capacity check is sketched after this list).
  • All Tier1 Castor tape servers have been upgraded to SL7.
  • The "Production" FTS service was enabled for IPv6 (i.e. dual stack enabled) yesterday (14th Nov). (The Test instance had been done last week.)
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
srm-lhcb.gridpp.rl.ac.uk SCHEDULED OUTAGE 15/11/2017 10:00 15/11/2017 10:40 40 minutes LHCb CASTOR SRM Update to 2.1.16-18
All Castor SCHEDULED OUTAGE 14/11/2017 09:30 14/11/2017 13:00 3 hours and 30 minutes Security patching of CASTOR nodes
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Ongoing or Pending - but not yet formally announced:

  • Production FTS migrating to a distributed database for the back-end. (Proposed for next Tuesday, 21st November).

Listing by category:

  • Castor:
    • Update systems (initially tape servers) to use SL7, configured by Quattor/Aquilon.
    • Move to generic Castor headnodes.
  • Echo:
    • Update to the next Ceph version ("Luminous").
  • Networking
    • Extend the number of services on the production network with IPv6 dual stack (done so far for perfSONAR, FTS3, all squids and the CVMFS Stratum-1 servers); a dual-stack check is sketched after this list.
  • Services
    • The Production and "Test" (Atlas) FTS3 services will be merged and will make use of a resilient distributed database.
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
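
A simple way to confirm a service is dual stack at the DNS level is to check that its name resolves to both IPv4 and IPv6 addresses. A minimal sketch using only the Python standard library; the hostname is illustrative:

    import socket

    # Hypothetical service alias; substitute the real dual-stack hosts.
    host = 'lcgfts3.gridpp.rl.ac.uk'

    # getaddrinfo returns one entry per (family, socktype, ...) combination.
    families = {info[0] for info in socket.getaddrinfo(host, 443)}
    print('%s: IPv4=%s IPv6=%s' % (host,
          socket.AF_INET in families, socket.AF_INET6 in families))

Note that an AAAA record only shows the address is published; whether traffic actually flows over IPv6 still depends on routing and the firewall.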
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
131815 Green Urgent In Progress 2017-11-14 2017-11-15 Other solidexperiment.org CASTOR tape copy fails
131815 Green Less Urgent In Progress 2017-11-13 2017-11-15 T2K.Org Extremely long download times for T2K files on tape at RAL
130207 Red Urgent On Hold 2017-08-24 2017-11-13 MICE Timeouts when copying MICE reco data to CASTOR
127597 Red Urgent On Hold 2017-04-07 2017-10-05 CMS Check networking and xrootd RAL-CERN performance
124876 Red Less Urgent On Hold 2016-11-07 2017-11-13 Ops [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 Red Less Urgent On Hold 2015-11-18 2017-11-06 CASTOR at RAL not publishing GLUE 2
Availability Report
Day OPS Alice Atlas CMS LHCb Atlas Echo Comment
08/11/17 100 100 98 N/A 100 100
09/11/17 100 100 100 N/A 100 100
10/11/17 100 100 100 N/A 100 100
11/11/17 100 100 100 N/A 100 100
12/11/17 100 100 100 N/A 100 100
13/11/17 100 100 100 N/A 100 100
14/11/17 85 100 85 N/A 100 100 CASTOR patch update.
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day Atlas HC Atlas HC Echo CMS HC Comment
08/11/17 100 100 46
09/11/17 100 98 100
10/11/17 100 97 100
11/11/17 100 100 100
12/11/17 100 100 100
13/11/17 100 100 100
14/11/17 96 100 39
Notes from Meeting.
  • The Echo team are investigating failure rates for disk drives within Echo.
  • The impact of the problems at the Italian Tier1 site was discussed.