RAL Tier1 Operations Report for 18th January 2017

Review of Issues during the week 11th to 18th January 2017.
  • We have still been seeing SAM SRM test failures for CMS. These are owing to the total load on the instance.
  • LHCb have reported a problem accessing some files - and a GGUS ticket was opened about this. This may now be solved (we are awaiting confirmation). A problem was found with the xroot configuration on one disk server.
  • Some disk errors had been seen on hypervisors in our High Availability Hyper-V 2012 cluster. Errors on two of the network connections supporting the iSCSI links to the disk array were found. These were swapped on Monday (16th Jan) - during an unscheduled 'warning'. However, this has not resolved the problem.
  • The tape migration queues for Atlas, CMS and LHCb were growing from around 6pm on Saturday until Monday morning. It looks like a tape was stuck in one drive and other drives became blocked when asked to mount the same tape.
  • Yesterday (17th Jan) there was a problem with the ALICE Castor instance: many xrootd connections to disk servers but not much activity. At the end of the afternoon the number of ALICE batch jobs was cut back (to 500) as a temporary measure to reduce the load on the instance.
Resolved Disk Server Issues
  • None
Current operational status and issues
  • There is a problem seen by LHCb of a low but persistent rate of failure when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are then attempted to storage at other sites. (A generic sketch of this copy-with-fallback pattern is given below.)
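The pattern involved here is essentially "write the job output to the local storage endpoint, and fall back to storage at another site if that fails". The short Python sketch below illustrates that pattern generically by shelling out to gfal-copy; it is not LHCb's actual mechanism (their jobs use DIRAC's own failover machinery), and the endpoints and paths are placeholders.

# Generic illustration of "copy output locally, fall back to remote storage
# on failure", as described above. Endpoints and paths are placeholders.
import subprocess

LOCAL_FILE = "file:///tmp/job_output.tar.gz"        # placeholder local file
DESTINATIONS = [
    "srm://srm-lhcb.gridpp.rl.ac.uk/...",           # local Castor SRM (path omitted)
    "srm://remote-se.example.org/...",              # fallback SE at another site (placeholder)
]

def upload(src, destinations):
    """Try each destination in turn with gfal-copy; return the one that worked."""
    for dst in destinations:
        result = subprocess.run(["gfal-copy", src, dst])
        if result.returncode == 0:
            return dst
    raise RuntimeError("all copy attempts failed")

if __name__ == "__main__":
    print("stored at", upload(LOCAL_FILE, DESTINATIONS))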
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • Changes made to the publishing of CPU capacity to the information system (GLUE1 & GLUE2). (A query sketch for checking the published values follows this list.)
  • Migration of LHCb data from 'C' to 'D' tapes ongoing. Now a little over 80% done. Around 170 out of the 1000 tapes still to do.
  • The site-BDIIs have now been put fully behind the load balancers.
  • The (internal) Castor "repack" instance was upgraded to Castor version 2.1.15 on Monday (16th). The upgrade of the LHCb stager was ongoing at the time of the meeting.
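The exact change to the CPU-capacity publishing is not detailed above, but a quick way to verify what the information system is now publishing is to query the site BDII directly. The sketch below, in Python with the ldap3 package, reads the GLUE1 and GLUE2 CPU attributes; the BDII hostname is a placeholder (substitute the real site-BDII alias), while RAL-LCG2 is the site's GOC DB name and port 2170 is the standard BDII port.

# Minimal sketch: query a site BDII for the CPU capacity figures published
# in GLUE1 and GLUE2. Requires the ldap3 package; hostname is a placeholder.
from ldap3 import Server, Connection, ALL

BDII = "ldap://site-bdii.example.gridpp.rl.ac.uk:2170"   # placeholder host, standard port

server = Server(BDII, get_info=ALL)
conn = Connection(server, auto_bind=True)                # anonymous bind

# GLUE1: logical/physical CPU counts live on the GlueSubCluster objects.
conn.search("mds-vo-name=RAL-LCG2,o=grid",
            "(objectClass=GlueSubCluster)",
            attributes=["GlueSubClusterName",
                        "GlueSubClusterLogicalCPUs",
                        "GlueSubClusterPhysicalCPUs"])
for entry in conn.entries:
    print("GLUE1:", entry)

# GLUE2: the equivalent figures are on GLUE2ExecutionEnvironment objects.
conn.search("GLUE2GroupID=grid,o=glue",
            "(objectClass=GLUE2ExecutionEnvironment)",
            attributes=["GLUE2ExecutionEnvironmentLogicalCPUs",
                        "GLUE2ExecutionEnvironmentPhysicalCPUs",
                        "GLUE2ExecutionEnvironmentTotalInstances"])
for entry in conn.entries:
    print("GLUE2:", entry)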
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
Castor CMS instance SCHEDULED OUTAGE 31/01/2017 10:00 31/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded).
Castor GEN instance SCHEDULED OUTAGE 26/01/2017 10:00 26/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded).
Castor Atlas instance SCHEDULED OUTAGE 24/01/2017 10:00 24/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting Atlas instance. (Atlas stager component being upgraded).
Castor LHCb instance SCHEDULED OUTAGE 18/01/2017 10:00 18/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded).
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update to Castor version 2.1.15. Dates announced via GOC DB for early 2017 and ongoing.
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
srm-lhcb.gridpp.rl.ac.uk SCHEDULED OUTAGE 18/01/2017 10:00 18/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded).
Most services (not Castor) UNSCHEDULED WARNING 16/01/2017 13:30 16/01/2017 14:30 1 hour Warning on site services during short intervention on system supporting VMs.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
125856 Green Top Priority Waiting Reply 2017-01-06 2017-01-18 LHCb Permission denied for some files
124876 Amber Less Urgent On Hold 2016-11-07 2017-01-01 OPS [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 Red Less Urgent On Hold 2015-11-18 2016-12-07 CASTOR at RAL not publishing GLUE 2. We looked at this as planned in December (report).
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
11/01/17 100 100 100 92 100 N/A 100 SRM test failures - User timeout
12/01/17 100 100 100 93 100 N/A N/A SRM test failures - User timeout
13/01/17 100 100 100 97 100 N/A 100 SRM test failures - User timeout
14/01/17 100 100 100 95 100 N/A 100 SRM test failures - User timeout
15/01/17 100 100 100 100 100 N/A 100
16/01/17 100 100 100 97 100 N/A 100 SRM test failures - User timeout
17/01/17 100 100 100 96 96 N/A 100 LHCb: Single SRM failure on list; CMS: SRM test failures - User timeout
Notes from Meeting.
  • ECHO: The ECHO cluster was successfully rebuilt a couple of weeks ago. The ECHO service is now being regarded as a "Tier2 site". I.e. we will respond to (GGUS) tickets etc. in the usual operational manner - but there will be no out-of-hours call-outs.
  • Following a change to the name of an SE at another site, SNO+ have requested a block renaming of entries in the LFC. In discussion it was first suggested that there be some push-back to see if the site could put the original SE name back.
  • Three out of the four ARC-CEs have been re-installed with larger disks so that job logging information can be kept for longer. The fourth will be rebuilt soon (within a few weeks).
  • Work has been going on at the Edinburgh and HPC Cambridge DiRAC sites in preparation for moving data to us.