RAL Tier1 Operations Report for 7th December 2016

Review of Issues during the week 30th November to 7th December 2016.
  • There were some problems with the CMS Castor instance during the last week. A restart of the "transfermanager" on Friday cleared out a backlog of transfer requests that were not progressing, which enabled the service to work normally.
  • There was a problem on one of the Power Distribution Units feeding a rack in the UPS room during the early hours of Monday morning (5th Dec). This affected two network switches, which in turn affected some core services (including the Top BDII). It was resolved by a member of staff attending during the night.
Resolved Disk Server Issues
  • GDSS726 (CMSDisk - D1T0) reported FSProbe errors on Thursday 1st Dec and was taken out of service. It was returned to service the following day, although the tests did not find any problems. (An illustrative sketch of the kind of check FSProbe performs follows this list.)
  • GDSS747 (AtlasDataDisk - D1T0) also failed on Thursday 1st Dec. Two failed disks were found. The server was returned to service on Monday (5th Dec).
  • GDSS650 (LHCbUser - D1T0) failed on Saturday morning, 3rd Dec. A disk had failed, and the replacement disk also failed; during the RAID rebuild a further disk drive started reporting problems and was also swapped. The server was returned to service yesterday (6th Dec).
  • GDSS701 (LHCbDst - D1T0) was taken out of service on Saturday (3rd Dec) when it reported FSProbe errors while a disk was being replaced. It was returned to service on the 5th Dec.
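Several of the incidents above refer to FSProbe errors. As a rough illustration only (this is not the RAL tool, and the path, block size and timeout below are hypothetical), a filesystem probe of this kind periodically writes a test block to the array, reads it back, and raises an alarm if the data does not match or the operation stalls:

    import os
    import time

    # Hypothetical mount point and settings; the real FSProbe configuration differs.
    PROBE_PATH = "/exportstage/probe/fsprobe.dat"
    BLOCK = os.urandom(1 << 20)          # 1 MiB of random test data
    TIMEOUT_SECONDS = 30

    def probe_once():
        """Write a test block, flush it to disk, read it back and compare."""
        start = time.time()
        with open(PROBE_PATH, "wb") as f:
            f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())         # force the data through to the array
        with open(PROBE_PATH, "rb") as f:
            data = f.read()
        elapsed = time.time() - start
        if data != BLOCK:
            raise RuntimeError("read-back mismatch: possible filesystem corruption")
        if elapsed > TIMEOUT_SECONDS:
            raise RuntimeError(f"probe took {elapsed:.1f}s: filesystem may be stalled")

    if __name__ == "__main__":
        while True:
            probe_once()                 # an alarm/callout would hook in here
            time.sleep(60)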
Current operational status and issues
  • There is a problem seen by LHCb of a low but persistent rate of failure when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are attempted to storage at other sites.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. The replacement of the UKLight router appears to have reduced it; we are continuing to monitor the links to confirm that any remaining error rates are low and typical for this type of wide-area link. (A simple monitoring sketch follows this list.)
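As an illustration of the sort of lightweight check used to track packet loss of this kind (the target host, sample size and threshold below are hypothetical, not the actual RAL monitoring), one can periodically run ping against a remote endpoint and record the loss percentage it reports:

    import re
    import subprocess
    import time

    # Hypothetical endpoint and settings; not the production monitoring target.
    TARGET = "perfsonar.example.org"
    COUNT = 100          # pings per sample
    LOSS_WARN = 0.5      # flag anything above 0.5% loss

    def sample_loss(target: str, count: int) -> float:
        """Run ping and return the packet-loss percentage it reports."""
        out = subprocess.run(
            ["ping", "-c", str(count), "-q", target],
            capture_output=True, text=True, check=False,
        ).stdout
        match = re.search(r"([\d.]+)% packet loss", out)
        return float(match.group(1)) if match else 100.0

    if __name__ == "__main__":
        while True:
            loss = sample_loss(TARGET, COUNT)
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            flag = "WARN" if loss > LOSS_WARN else "ok"
            print(f"{stamp} {TARGET} loss={loss:.2f}% {flag}")
            time.sleep(300)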
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • Nothing particular to report.
Declared in the GOC DB

None

Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Firmware update on Clustervision '13 disk servers. These are distributed as follows: AtlasDataDisk: 12; CMSDisk: 5; LHCbDst: 12.
  • Merge AtlasScratchDisk and LhcbUser into larger disk pools. For LHCbUser this will be done on Thursday 8th Dec.

Listing by category:

  • Castor:
    • Merge AtlasScratchDisk and LhcbUser into larger disk pools
    • Update to Castor version 2.1.15. Planning to roll out January 2017. (Proposed dates: 10th Jan: Nameserver; 17th Jan: First stager (LHCb); 24th Jan: Stager (Atlas); 26th Jan: Stager (GEN); 31st Jan: Final stager (CMS)).
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  • Fabric
    • Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
lcgbdii.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 05/12/2016 01:15 | 05/12/2016 03:09 | 1 hour and 54 minutes | Networking problems affecting part of the Tier-1 services
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
125348 | Green | Top Priority | In Progress | 2016-12-05 | 2016-12-05 | CMS | Request to update phedex
125157 | Green | Less Urgent | In Progress | 2016-11-24 | 2016-12-07 | | Creation of a repository within the EGI CVMFS infrastructure
124876 | Green | Less Urgent | On Hold | 2016-11-07 | 2016-11-21 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
124606 | Red | Top Priority | In Progress | 2016-10-24 | 2016-12-02 | CMS | Consistency Check for T1_UK_RAL
124478 | Green | Less Urgent | In Progress | 2016-11-18 | 2016-11-18 | NA62 | Jobs submitted via RAL WMS stuck in state READY forever and ever and ever
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-12-01 | SNO+ | Disk area at RAL
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-10-05 | | CASTOR at RAL not publishing GLUE 2. Plan to revisit week starting 19th.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
30/11/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
01/12/16 | 100 | 100 | 100 | 99 | 100 | N/A | 100 | Single SRM test failure: User timeout over
02/12/16 | 100 | 100 | 100 | 74 | 100 | N/A | 100 | Block of SRM test failures during the day (User timeout).
03/12/16 | 100 | 100 | 100 | 99 | 100 | N/A | 100 | Single SRM test failure: User timeout over
04/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 |
05/12/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure: User timeout over
06/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 98 |
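Assuming the daily figures above are simple time fractions of a 24-hour day (a simplification; the actual availability calculation has more detail), the lower CMS value on 02/12/16 corresponds to roughly six hours of failing SRM tests:

    # Rough conversion of a daily availability percentage into implied downtime,
    # assuming the figure is a plain time fraction over a 24-hour day.
    def implied_downtime_hours(availability_percent: float) -> float:
        return (100.0 - availability_percent) / 100.0 * 24.0

    print(implied_downtime_hours(74))   # ~6.2 hours (CMS, 02/12/16)
    print(implied_downtime_hours(99))   # ~0.2 hours, i.e. a single failed test window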
Notes from Meeting.
  • Poor CMS job efficiencies were discussed. This follows discussions on raising the number of CMS batch jobs that can run.
  • The firmware updates on the ClusterVision '13 batch of disk servers were discussed. These are in disk-only service classes for CMS, LHCb and Atlas.