RAL Tier1 Operations Report for 23rd April 2014

Review of Issues during the week 16th to 23rd April 2014.
  • There have been two maintenance periods on the Primary OPN link this last week: one on the evening of Thursday 17th April, the other on the evening of Tuesday 22nd April. In both cases there was a manual switch-over to the backup OPN link; on the 17th this was done before the intervention period, while on the 22nd the switch was made quickly after the break.
  • Nine files were reported lost to T2K from a server in GenTape that was drained.
Resolved Disk Server Issues
  • GDSS403 (AtlasTape - D0T1) failed on Friday 11th April. At the time we understood there was only one file on it still waiting to go to tape. Following a drive replacement it was put back into service. However, a further 54 files were subsequently found to be unmigrated to tape and corrupt; these files have been declared lost to Atlas.
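For context, corrupt unmigrated files of this kind are normally confirmed by comparing each candidate file's checksum against the value recorded in the Castor name server. Below is a minimal sketch of such a comparison, assuming Adler-32 checksums (the type Castor records) and a plain-text listing of "path expected_hex_checksum" pairs; the listing format and helper names are illustrative only, not the actual procedure used on the server.
  # Minimal sketch: re-checksum files and compare against expected Adler-32 values.
  # Assumes a listing file with lines of the form "<path> <expected_hex_checksum>".
  # Illustrative only - not the actual Castor verification procedure.
  import sys
  import zlib

  def adler32_of_file(path, chunk_size=1 << 20):
      """Compute the Adler-32 checksum of a file, reading it in chunks."""
      value = 1  # Adler-32 starting value
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(chunk_size), b""):
              value = zlib.adler32(chunk, value)
      return value & 0xFFFFFFFF

  def find_mismatches(listing):
      """Return (path, expected, actual) for every file whose checksum differs."""
      bad = []
      with open(listing) as f:
          for line in f:
              path, expected = line.split()
              actual = format(adler32_of_file(path), "08x")
              if actual != expected.lower():
                  bad.append((path, expected, actual))
      return bad

  if __name__ == "__main__":
      for path, expected, actual in find_mismatches(sys.argv[1]):
          print("MISMATCH %s: expected %s, got %s" % (path, expected, actual))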
Current operational status and issues
  • The load-related problems reported for the CMS Castor instance have not been seen for a few weeks. However, work is underway to tackle these problems; in particular, servers with faster network connections will be moved into the disk cache in front of CMS_Tape when they become available.
  • The Castor Team are now able to reproduce the intermittent failures of Castor access via the SRM that have been reported in recent weeks. Understanding of the problem is significantly advanced and further investigations are ongoing using the Castor Preprod instance. Ideas for a workaround are being developed.
  • As reported before, working with Atlas we have somewhat improved the file deletion rate. However, there is still a problem that needs to be understood.
  • Problems with the infrastructure used to host many of our non-Castor services have largely been worked around, although not yet fixed. Some additional migrations of VMs have been necessary.
  • We have now had repeated instances where the OPN link has not cleanly failed over to the backup link during problems with the primary.
  • One of the network uplinks (for the 2012 disk servers) has been running at full capacity. We have a plan to move the switch into the new Tier1 mesh network to alleviate this.
Ongoing Disk Server Issues
  • GDSS566 (AtlasDataDisk - D1T0) failed yesterday evening (22nd April). System out of production while investigations continue.
Notable Changes made this last week.
  • Lcgvo04 (the CMS "VOBOX") was upgraded from EMI-2/SL5 to EMI-3/SL6.
  • The rollout of the new kernel, errata and Condor version across the worker nodes is largely complete.
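A rollout like this can be cross-checked from the pool itself. The sketch below uses the HTCondor Python bindings to summarise the Condor version and kernel release advertised by each worker node; it assumes the standard CondorVersion and UtsnameRelease machine attributes are published, and is only an illustration rather than the check actually used.
  # Minimal sketch: summarise Condor and kernel versions advertised by worker nodes.
  # Assumes the HTCondor Python bindings and that startds publish the standard
  # CondorVersion and UtsnameRelease (kernel release) attributes.
  import htcondor

  coll = htcondor.Collector()  # default collector from the local configuration
  ads = coll.query(htcondor.AdTypes.Startd,
                   projection=["Machine", "CondorVersion", "UtsnameRelease"])

  summary = {}
  for ad in ads:
      key = (ad.get("CondorVersion", "unknown"), ad.get("UtsnameRelease", "unknown"))
      summary.setdefault(key, set()).add(ad.get("Machine"))

  for (condor, kernel), machines in sorted(summary.items()):
      print("%-45s kernel %-20s %3d nodes" % (condor, kernel, len(machines)))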
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
Whole site SCHEDULED WARNING 30/04/2014 10:00 30/04/2014 12:00 2 hours RAL Tier1 site in warning state due to UPS/generator test.
Whole site SCHEDULED OUTAGE 29/04/2014 07:00 29/04/2014 17:00 10 hours Site outage during Network Upgrade.
lcgrbp01.gridpp.rl.ac.uk SCHEDULED OUTAGE 02/04/2014 12:00 01/05/2014 12:00 29 days System being decommissioned. (Replaced by myproxy.gridpp.rl.ac.uk.)
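These declarations can also be retrieved programmatically. The sketch below queries the GOC DB programmatic interface for downtimes affecting RAL-LCG2; the endpoint, parameters and XML element names are quoted from memory and should be treated as assumptions to be checked against the current GOC DB documentation.
  # Minimal sketch: list downtimes declared for RAL-LCG2 via the GOC DB
  # programmatic interface. The URL, parameters and XML element names are
  # assumptions - check the GOC DB documentation before relying on them.
  import urllib.request
  import xml.etree.ElementTree as ET

  URL = "https://goc.egi.eu/gocdbpi/public/?method=get_downtime&topentity=RAL-LCG2"

  with urllib.request.urlopen(URL) as response:
      root = ET.parse(response).getroot()

  for dt in root.findall("DOWNTIME"):
      print(dt.findtext("HOSTNAME"),
            dt.findtext("SEVERITY"),
            dt.findtext("FORMATED_START_DATE"),
            dt.findtext("FORMATED_END_DATE"),
            dt.findtext("DESCRIPTION"))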


Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Databases:
    • Switch LFC/FTS/3D to new Database Infrastructure.
  • Castor:
    • Castor 2.1.14 testing is largely complete, although a new minor version (2.1.14-12) will be released soon.
  • Networking:
    • Move switches connecting recent disk servers batches ('11, '12) onto the Tier1 mesh network.
    • Update the core Tier1 network and change the connection to the site network and the OPN, including:
      • Install new Routing layer for Tier1 & change the way the Tier1 connects to the RAL network. (Scheduled for 29th April)
      • These changes will lead to the removal of the UKLight Router.
  • Fabric:
    • We are phasing out the use of the software server used by the small VOs.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 16th and 23rd April 2014.


Service Scheduled? Outage/At Risk Start End Duration Reason
lcgrbp01.gridpp.rl.ac.uk SCHEDULED OUTAGE 02/04/2014 12:00 01/05/2014 12:00 29 days System being decommissioned. (Replaced by myproxy.gridpp.rl.ac.uk.)
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
103197 Yellow Less Urgent Waiting Reply 2014-04-09 2014-04-09 RAL myproxy server and GridPP wiki
101968 Red Less Urgent On Hold 2014-03-11 2014-04-01 Atlas RAL-LCG2_SCRATCHDISK: One dataset to delete is causing 1379 deletion errors
98249 Red Urgent Waiting Reply 2013-10-21 2014-03-13 SNO+ please configure cvmfs stratum-0 for SNO+ at RAL T1
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
16/04/14 100 100 100 100 100 99 99
17/04/14 100 100 100 100 100 100 99
18/04/14 100 100 100 100 100 96 99
19/04/14 100 100 100 100 100 100 100
20/04/14 100 100 100 100 100 100 100
21/04/14 100 100 100 100 100 100 100
22/04/14 100 100 99.1 100 100 100 100 Single SRM GET test failure.
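As a rough sanity check on figures such as the 99.1% above, a daily availability converts to unavailable minutes as in the back-of-the-envelope calculation below (assuming availability is measured over a full 24-hour day).
  # Back-of-the-envelope: minutes of unavailability implied by a daily
  # availability figure, assuming it is measured over a full 24-hour day.
  def unavailable_minutes(availability_percent):
      return (100.0 - availability_percent) / 100.0 * 24 * 60

  print(unavailable_minutes(99.1))  # ~13 minutes, consistent with the single
                                    # failed SRM GET test reported for 22/04/14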