RAL Tier1 Operations Report for 24th June 2015

Review of Issues during the week 17th to 24th June 2015.
  • As reported at the last meeting, AtlasDataDisk in Castor became full on the morning of Wed 17th June. Four additional disk servers were added on Wednesday afternoon (17th) and a further four on Friday (19th). Note that one of the initial set of four servers (GDSS763) failed on the Friday (19th).
  • During the last week LHCb reported problems accessing old files in Castor which did not have a stored checksum. (These files have been in this state for some years.) Stored checksums have been retrospectively added to Castor for these cases.
  • Last Wednesday (17th), following a problem flagged up by LHCb, it was realized that we were using incorrect VOMS servers for regenerating the Castor gridmap files. This was fixed that day (see the configuration-check sketch after this list).
  • A problem transferring files from FNAL to us is being investigated.
  • Yesterday, Tuesday 23rd June, there were two network issues. The first was a spontaneous reboot of a core network switch that led to a break in connectivity to the Tier1 for around 8 minutes. The second was a very high rate of traffic around the Tier1 network that lasted for around 45 minutes from 16:00. This is not completely understood, but appears to have been caused by the restart of an old hypervisor, which resulted in more than one copy of a particular Virtual Machine running (see the detection sketch after this list).
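
The gridmap fix mentioned above came down to pointing the regeneration script at the right VOMS servers. The short sketch below illustrates the kind of sanity check involved; it is not the actual Castor tooling, and the configuration file name, its one-host-per-line format and the expected host names are all assumptions made for illustration.

# Hypothetical check: flag configured VOMS hosts that are not in an expected set.
# File name, file format and the expected host list are illustrative assumptions.
EXPECTED_VOMS_HOSTS = {
    "voms2.cern.ch",       # assumed current servers; substitute the real list
    "lcg-voms2.cern.ch",
}

def check_voms_config(path="voms-servers.conf"):
    """Read one VOMS hostname per line and return any unexpected entries."""
    stale = []
    with open(path) as conf:
        for line in conf:
            host = line.strip()
            if not host or host.startswith("#"):
                continue  # skip blank lines and comments
            if host not in EXPECTED_VOMS_HOSTS:
                stale.append(host)
    return stale

if __name__ == "__main__":
    for host in check_voms_config():
        print("Unexpected VOMS server configured: " + host)

A check along these lines, run whenever the gridmap regeneration configuration changes, would catch a stale VOMS endpoint before it affected the generated gridmap files.
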
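
On the duplicate Virtual Machine issue above: one way to detect that condition is to list the active guests on each hypervisor and flag any VM name that is running on more than one host. The sketch below is only illustrative: it assumes libvirt-managed hypervisors reachable over SSH, which may not match the virtualisation platform actually in use, and the hostnames are placeholders.

# Hedged sketch: report any VM name active on more than one hypervisor.
# Assumes libvirt-managed hosts reachable via qemu+ssh; hostnames are examples.
from collections import defaultdict

import libvirt

HYPERVISORS = ["hv01.example.org", "hv02.example.org"]  # placeholder names

def find_duplicate_vms(hosts):
    """Map VM name -> hypervisors, keeping only names active on more than one host."""
    seen = defaultdict(list)
    for host in hosts:
        conn = libvirt.open("qemu+ssh://%s/system" % host)
        try:
            for dom in conn.listAllDomains():
                if dom.isActive():
                    seen[dom.name()].append(host)
        finally:
            conn.close()
    return {name: where for name, where in seen.items() if len(where) > 1}

if __name__ == "__main__":
    for name, where in find_duplicate_vms(HYPERVISORS).items():
        print("VM %s is active on multiple hypervisors: %s" % (name, ", ".join(where)))

Run across all hypervisors after one is restarted, a check like this would flag a duplicated VM before it could generate significant traffic.
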
Resolved Disk Server Issues
  • GDSS711 (CMSDisk - D1T0) failed on Wednesday evening (17th). The server was checked out but no specific fault found. It was returned to service on Friday (19th).
Current operational status and issues
  • We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
  • The post mortem review of the network incident on the 8th April is being finalised.
  • The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
Ongoing Disk Server Issues
  • GDSS763 (AtlasDataDisk - D1T0) failed in the early hours of Friday morning (19th June). This was one of the disk servers added to AtlasDataDisk after it became full. After initial checks it was put back in service read-only, but crashed again. The disks have now been placed in a different server/chassis and it is being drained.
Notable Changes made since the last meeting.
  • On Wednesday (just after last week's meeting) a network test was carried out. This confirmed that non-Tier1 services on our network now route traffic avoiding our (problematic) Tier1 router pair. The test also confirmed the long-standing problem in the primary Tier1 router. This test paves the way for a longer intervention, with the vendor present, to try and get to the bottom of the router problem.
  • Old files in Castor that were missing stored checksums have had these added (see the checksum sketch after this list).
  • Eight additional disk servers have been added to AtlasDataDisk (approaching a Petabyte of extra capacity).
  • The batch job limit for Alice has been completely removed. (It had been set at 6000.)
  • A detailed change to a database procedure was applied to the LHCb Castor stager yesterday (Tuesday 23rd) and to the CMS instance today (Wed 24th). This change significantly speeds up file open times within Castor.
  • Files are being successfully transferred from Durham for DiRAC.
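
On the checksum backfill above: Castor keeps a stored checksum per file (typically ADLER32 in this kind of deployment, though the report does not state the algorithm), so adding one retrospectively means reading each old file once and recording the computed value. The minimal sketch below shows only the generic checksum calculation; the Castor-side step that records the value against the file is site-specific tooling and is not shown.

# Minimal sketch: compute an ADLER32 checksum for a file, reading it in chunks.
# Only the calculation is shown; storing the value in Castor is not covered here.
import sys
import zlib

def adler32_of_file(path, chunk_size=1024 * 1024):
    """Return the ADLER32 of the file at 'path' as an 8-digit hex string."""
    checksum = 1  # adler32 starting value
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            checksum = zlib.adler32(chunk, checksum)
    return "%08x" % (checksum & 0xffffffff)

if __name__ == "__main__":
    print(adler32_of_file(sys.argv[1]))
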
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
arc-ce05.gridpp.rl.ac.uk SCHEDULED OUTAGE 10/06/2015 12:00 08/07/2015 12:00 28 days This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production).
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
    • Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor, Atlas Frontier (LFC done)
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
    • Update disk servers to SL6.
    • Update to Castor version 2.1.15.
  • Networking:
    • Resolve problems with the primary Tier1 router. A roughly half-day outage needs to be scheduled for the vendor to carry out investigations.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
    • Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
    • Make routing changes to allow the removal of the UKLight Router.
    • Cabling/switch changes to the network in the UPS room to improve resilience.
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.


Service Scheduled? Outage/At Risk Start End Duration Reason
Whole site SCHEDULED OUTAGE 17/06/2015 13:45 17/06/2015 15:15 1 hour and 30 minutes During this time window there will be a 15 minute disconnection of the RAL-LCG2 site from the network. This will take place sometime between 13:00 - 13:30 UTC. For this 15-minute period all services will be unavailable. The Castor storage system will be stopped at 12:45 UTC before the network break, and restarted once the 15-minute break is over. The declared time window allows time for Castor and other services to be checked out after the network break. The network outage is for a test of a revised network configuration.
srm-atlas.gridpp.rl.ac.uk, UNSCHEDULED WARNING 17/06/2015 10:00 17/06/2015 17:00 7 hours AtlasDataDisk full. Working to resolve this. Other Atlas areas, and reads from AtlasDataDisk are OK.
arc-ce05.gridpp.rl.ac.uk SCHEDULED OUTAGE 10/06/2015 12:00 08/07/2015 12:00 28 days This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production).
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
114512 Green Less Urgent In Progress 2015-06-12 2015-06-22 Atlas deletion errors for RAL-LCG2
114296 Green Top Priority Waiting Reply 2015-06-12 2015-06-12 LHCb Checksum missing for RAL disk resident file replicas
113910 Green Less Urgent Waiting Reply 2015-05-26 2015-06-23 SNO+ RAL data staging
113836 Green Less Urgent In Progress 2015-05-20 2015-06-24 (none) GLUE 1 vs GLUE 2 mismatch in published queues
112721 Red Less Urgent In Progress 2015-03-28 2015-06-23 Atlas RAL-LCG2: SOURCE Failed to get source file size
109694 Red Urgent Waiting Reply 2014-11-03 2015-06-23 SNO+ gfal-copy failing for files at RAL
108944 Red Less Urgent In Progress 2014-10-01 2015-06-16 CMS AAA access test failing at T1_UK_RAL
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
17/06/15 93.8 94.0 50.0 93.0 96.0 96 98 Atlas: DataDiskFull; All: Tests failed during network outage (planned test).
18/06/15 100 100 100 100 100 97 95
19/06/15 100 100 98.0 100 100 93 98 Single SRM Test failure: Could not open connection to srm-atlas
20/06/15 100 100 98.0 100 100 100 100 Single SRM Test failure: Error trying to locate the file in the disk cache
21/06/15 100 100 100 100 100 96 99
22/06/15 100 100 100 100 100 100 100
23/06/15 100 100 100 95.0 100 87 99 SRM and CE test failures during local network problem.