RAL Tier1 Operations Report for 18th January 2017

Review of Issues during the week 11th to 18th January 2017.
  • We have still been seeing SAM SRM test failures for CMS. These are caused by the overall load on the system.
  • LHCb have reported a problem accessing some files; GGUS ticket 125856 is open about this.
Resolved Disk Server Issues
  • GDSS665 (LhcbRawRdst - D0T1) failed on Saturday 31st Dec. Two disks in the system were replaced and it was returned to service on Friday 6th Jan.
  • GDSS780 (LHCbDst - D1T0) failed on Thursday 5th Jan. It was returned to service the following day - initially in read-only mode. The BIOS and IPMI firmware were updated.
Current operational status and issues
  • LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then retried against storage at other sites; this retry/failover pattern is sketched below.
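As an illustration of the pattern described above, here is a minimal sketch of a job-output upload that retries the local Castor endpoint a few times and then falls back to storage at another site. The destination paths, the fallback endpoint, and the retry count are invented placeholders (only the hostname srm-lhcb.gridpp.rl.ac.uk comes from the GOC DB entries below); this is not LHCb's actual upload code.

#!/usr/bin/env python3
"""Sketch only: retry an output file to the primary SE, then fail over
to storage elsewhere, mimicking the upload pattern described above."""
import subprocess

# Primary Castor endpoint at RAL, then a fallback at another site.
# The paths and the fallback host are invented placeholders.
ENDPOINTS = [
    "srm://srm-lhcb.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/example/output.dst",
    "srm://se.other-site.example/lhcb/example/output.dst",
]
RETRIES_PER_ENDPOINT = 3  # assumed value


def upload(local_path: str) -> str:
    """Return the destination URL that accepted the file, else raise."""
    for dest in ENDPOINTS:
        for attempt in range(1, RETRIES_PER_ENDPOINT + 1):
            # gfal-copy (from gfal2-utils) copies between grid storage URLs;
            # it needs a valid grid proxy to talk to an SRM endpoint.
            result = subprocess.run(
                ["gfal-copy", f"file://{local_path}", dest],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return dest
            print(f"attempt {attempt}/{RETRIES_PER_ENDPOINT} to {dest} "
                  f"failed: {result.stderr.strip()}")
    raise RuntimeError(f"all endpoints refused {local_path}")


if __name__ == "__main__":
    print("stored at", upload("/tmp/output.dst"))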
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • On Thursday 5th Jan the firmware was updated in the RAID cards in the Viglen '13 batch of disk servers.
  • cvmfs client v2.3.2-1 has been deployed on all worker nodes (a verification sketch follows this list).
  • Increased the number of Castor transfer slots on the three newest disk servers in CMS_Tape.
  • Load balancers have been introduced in front of the Site-BDII systems.
  • The Castor Nameserver has been upgraded to version 2.1.15. This is the first step of the overall Castor 2.1.15 update.
  • Migration of LHCb data from 'C' to 'D' tapes is ongoing and is now a little over 70% done: around 280 of the 1000 tapes remain, i.e. roughly 720 (72%) have been migrated.
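As a cross-check of the cvmfs deployment noted in the list above, a minimal sketch (assuming RPM-based worker nodes; the site's actual verification procedure is not documented here) could confirm the installed client version and probe the configured repositories:

#!/usr/bin/env python3
"""Sketch only, assuming RPM-based worker nodes: check that the cvmfs
package matches the version quoted above and that all configured
repositories can be mounted."""
import subprocess

EXPECTED = "2.3.2-1"  # version quoted in the item above


def package_version() -> str:
    # The CernVM-FS client ships as the 'cvmfs' RPM on such nodes; the
    # release field may carry a dist suffix (e.g. 2.3.2-1.el6).
    out = subprocess.run(
        ["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}", "cvmfs"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()


def repositories_ok() -> bool:
    # 'cvmfs_config probe' tries to mount every configured repository
    # and exits non-zero if any mount fails.
    return subprocess.run(["cvmfs_config", "probe"]).returncode == 0


if __name__ == "__main__":
    version = package_version()
    print("cvmfs", version, "OK" if version.startswith(EXPECTED) else "UNEXPECTED")
    print("repositories", "OK" if repositories_ok() else "FAILED")

Since 'cvmfs_config probe' attempts to mount each repository listed in the client configuration, a zero exit status is a reasonable smoke test after a client upgrade.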
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
Castor CMS instance SCHEDULED OUTAGE 31/01/2017 10:00 31/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded).
Castor GEN instance SCHEDULED OUTAGE 26/01/2017 10:00 26/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded).
Castor Atlas instance SCHEDULED OUTAGE 24/01/2017 10:00 24/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting Atlas instance. (Atlas stager component being upgraded).
Castor LHCb instance SCHEDULED OUTAGE 18/01/2017 10:00 18/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded).
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update to Castor version 2.1.15. Dates announced via GOC DB for early 2017.
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  • Fabric
    • Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
srm-lhcb.gridpp.rl.ac.uk SCHEDULED OUTAGE 18/01/2017 10:00 18/01/2017 16:00 6 hours Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded).
Most services (not Castor) UNSCHEDULED WARNING 16/01/2017 13:30 16/01/2017 14:30 1 hour Warning on site services during short intervention on system supporting VMs.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
125856 Green Top Priority In Progress 2017-01-06 2017-01-10 LHCb Permission denied for some files
125480 Green Less Urgent On Hold 2016-12-09 2016-12-21 total Physical and Logical CPUs values
125157 Green Less Urgent In Progress 2016-11-24 2017-01-03 Creation of a repository within the EGI CVMFS infrastructure
124876 Amber Less Urgent On Hold 2016-11-07 2017-01-01 OPS [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 Red Less Urgent On Hold 2015-11-18 2016-12-07 CASTOR at RAL not publishing GLUE 2. We looked at this as planned in December (report).
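For context on ticket 117683 (GLUE 2 publishing), the sketch below shows a generic, anonymous LDAP query against a site BDII for GLUE 2 storage-service objects. The hostname is an assumed placeholder and the ldap3 library is an assumed choice; the ticket itself does not prescribe these steps.

#!/usr/bin/env python3
"""Sketch only: anonymously query a site BDII for GLUE 2 storage-service
entries, the kind of objects ticket 117683 says Castor is not publishing."""
from ldap3 import ALL, Connection, Server

BDII_HOST = "site-bdii.gridpp.rl.ac.uk"  # assumed placeholder hostname
BDII_PORT = 2170                         # conventional BDII LDAP port

server = Server(BDII_HOST, port=BDII_PORT, get_info=ALL)
conn = Connection(server, auto_bind=True)  # BDIIs accept anonymous binds

# GLUE 2 data is published under the 'o=glue' suffix (GLUE 1 uses 'o=grid').
conn.search(search_base="o=glue",
            search_filter="(objectClass=GLUE2StorageService)",
            attributes=["GLUE2ServiceID", "GLUE2ServiceType"])

if not conn.entries:
    print("no GLUE2StorageService entries found")
for entry in conn.entries:
    print(entry.entry_dn)

An empty result for the Castor endpoints would be consistent with the "not publishing GLUE 2" symptom the ticket describes.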
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
11/01/17 100 100 100 92 100 N/A 100 SRM test failures - User timeout
12/01/17 100 100 100 93 100 N/A N/A SRM test failures - User timeout
13/01/17 100 100 100 97 100 N/A 100 SRM test failures - User timeout
14/01/17 100 100 100 95 100 N/A 100 SRM test failures - User timeout
15/01/17 100 100 100 100 100 N/A 100
16/01/17 100 100 100 97 100 N/A 100 SRM test failures - User timeout
17/01/17 100 100 100  ?? 100 N/A 100
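As a quick worked check of the table above, the CMS column averages roughly 95.7% over the six days with known values (17/01 is still "??" and is skipped); a minimal sketch:

#!/usr/bin/env python3
"""Sketch only: average the daily CMS availability figures listed above
over the days with known values."""

cms_daily = {  # values copied from the table above
    "11/01/17": 92, "12/01/17": 93, "13/01/17": 97,
    "14/01/17": 95, "15/01/17": 100, "16/01/17": 97,
}

mean = sum(cms_daily.values()) / len(cms_daily)
print(f"CMS mean availability over {len(cms_daily)} days: {mean:.1f}%")
# -> CMS mean availability over 6 days: 95.7%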
Notes from Meeting.
  • None yet