RAL Tier1 Operations Report for 30th April 2014
Review of Issues during the week 23rd to 30th April 2014.
- There have been problems with "CMSDisk" in Castor caused by it becoming very full.
- There have been problems with large numbers of jobs (from T2K) submitted to the batch system by the WMSs. A batch system parameter (the maximum number of gridftp connections on the ARC CEs) has been increased to try to alleviate this; see the configuration sketch after this list.
- Five files were reported lost to Atlas following the draining of a disk server (GDSS600).
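
For reference, a minimal sketch of the kind of change described above, assuming the limit in question is the gridftpd `maxconnections` option in the ARC CE's `/etc/arc.conf`; the report does not give the exact parameter name or the value chosen, so both are assumptions here:

```ini
# /etc/arc.conf on an ARC CE -- illustrative fragment only.
[gridftpd]
# Raise the cap on concurrent gridftp connections so that bursts of
# WMS-submitted jobs (e.g. the T2K workload) are less likely to be
# refused at the gridftp front end. The value shown is hypothetical.
maxconnections="500"
```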
Resolved Disk Server Issues |
- GDSS566 (AtlasDataDisk - D1T0) failed on the evening of Tuesday (22nd April). No specific problem was identified, but it looks to have been a fault with the disk controller. The server was returned to service on Thursday (24th April). (This server is due to be decommissioned; we will drain it soon.)
- GDSS460 (GenTape - D0T1) failed on Saturday afternoon (26th April). The problem was traced to a faulty power supply. The server was returned to service at the end of Monday afternoon (28th April).
Current operational status and issues
- The load-related problems reported for the CMS Castor instance have not been seen for a few weeks. However, work is underway to tackle these problems; in particular, servers with faster network connections will be moved into the disk cache in front of CMS_Tape when they become available.
- The Castor Team are now able to reproduce the intermittent failures of Castor access via the SRM that have been reported in recent weeks. Understanding of the problem is significantly advanced and further investigations are ongoing using the Castor Preprod instance. Ideas for a workaround are being developed.
- As reported previously, the file deletion rate was somewhat improved by working with Atlas. However, there is still a problem that needs to be understood.
- Problems with the infrastructure used to host many of our non-Castor services have largely been worked around, although not yet fixed. Some additional migrations of VMs have been necessary.
- We have now had repeated instances where the OPN link has not cleanly failed over to the backup link during problems with the primary.
- One of the network uplinks (for the 2012 disk servers) has been running at full capacity. We have a plan to move the switch into the new Tier1 mesh network to alleviate this.
Ongoing Disk Server Issues
- None
Notable Changes made this last week.
- The RAL Tier1 network uplink has been migrated to the new Tier1 router pair.
- WMS nodes (lcgwms04, lcgwms05, lcgwms06) upgraded to EMI-3 update 15.
- L&B nodes (lcglb01, lcglb02) upgraded to EMI-3 update 12.
- EMI-3 update 15 applied to top-BDII nodes (lcgbdii01, lcgbdii03, lcgbdii04).
- EMI v3.15.0 BDII-site applied to lcgsbdii01 and lcgsbdii02 nodes.
- We are starting to test CVMFS Client version 2.1.19.
- Condor has been upgraded from version 8.0.4 to version 8.0.6, and the hyperthreading settings have been altered, with many machines now running at full hyperthreaded capacity (see the configuration sketch below).
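
The hyperthreading change amounts to letting HTCondor advertise logical rather than physical cores. A minimal sketch using the stock HTCondor 8.0 configuration knobs follows; the report does not give RAL's actual settings, so the slot layout below is an assumption:

```ini
# condor_config.local on a worker node -- illustrative only.
# Count logical (hyperthreaded) cores rather than physical cores,
# so the node is used at full hyperthreaded capacity.
COUNT_HYPERTHREAD_CPUS = True
# One partitionable slot covering all cores is a common way to let
# the extra logical cores be carved up on demand.
SLOT_TYPE_1 = cpus=100%
SLOT_TYPE_1_PARTITIONABLE = True
NUM_SLOTS_TYPE_1 = 1
```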
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
lcgui02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 30/04/2014 14:00 | 29/05/2014 13:00 | 28 days, 23 hours | Service being decommissioned. |
lcgrbp01.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 02/04/2014 12:00 | 01/05/2014 12:00 | 29 days | System being decommissioned. (Replaced by myproxy.gridpp.rl.ac.uk.) |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Databases:
  - Switch LFC/FTS/3D to the new database infrastructure.
- Castor:
  - Castor 2.1.14 testing is largely complete, although a new minor version (2.1.14-12) will be released soon.
- Networking:
  - Move the switches connecting the recent disk server batches ('11, '12) onto the Tier1 mesh network.
  - Update the core Tier1 network and change the connection to site and OPN, including:
    - Make routing changes to allow the removal of the UKLight Router.
- Fabric:
  - We are phasing out the use of the software server used by the small VOs.
  - Firmware updates on the remaining EMC disk arrays (Castor, FTS/LFC).
  - There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 23rd and 30th April 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
lcgui02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 30/04/2014 14:00 | 29/05/2014 13:00 | 28 days, 23 hours | Service being decommissioned. |
Whole Site | SCHEDULED | WARNING | 30/04/2014 10:00 | 30/04/2014 12:00 | 2 hours | RAL Tier1 site in warning state due to UPS/generator test. |
Whole Site | SCHEDULED | OUTAGE | 29/04/2014 07:00 | 29/04/2014 15:02 | 8 hours and 2 minutes | Site outage during Network Upgrade. |
lcgrbp01.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 02/04/2014 12:00 | 01/05/2014 12:00 | 29 days | System being decommissioned. (Replaced by myproxy.gridpp.rl.ac.uk.) |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
104896 | Green | Less Urgent | In Progress | 2014-04-25 | 2014-04-25 | | Argus failures with RAL wms |
103197 | Amber | Less Urgent | Waiting Reply | 2014-04-09 | 2014-04-09 | | RAL myproxy server and GridPP wiki |
101968 | Red | Less Urgent | In Progress | 2014-03-11 | 2014-04-01 | Atlas | RAL-LCG2_SCRATCHDISK: One dataset to delete is causing 1379 deletion errors |
98249 | Red | Urgent | Waiting Reply | 2013-10-21 | 2014-04-23 | SNO+ | please configure cvmfs stratum-0 for SNO+ at RAL T1 |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
23/04/14 | 100 | 100 | 100 | 100 | 100 | 63 | 99 | (Atlas renaming files) |
24/04/14 | 100 | 100 | 100 | 100 | 100 | 93 | 99 | |
25/04/14 | 100 | 100 | 96.3 | 100 | 100 | 100 | 95 | SRM Put test errors "ERROR: 'NoneType' object has no attribute 'kill' exceptions.AttributeError" |
26/04/14 | 100 | 100 | 94.9 | 100 | 100 | 100 | 100 | SRM Put test errors "ERROR: 'NoneType' object has no attribute 'kill' exceptions.AttributeError" |
27/04/14 | 100 | 100 | 99.1 | 100 | 100 | 100 | 100 | SRM Put test errors "ERROR: 'NoneType' object has no attribute 'kill' exceptions.AttributeError" |
28/04/14 | 100 | 100 | 99.2 | 100 | 100 | 99 | 99 | One SRM Put test error "User timeout" |
29/04/14 | 66.5 | 66.5 | 65.8 | 66.5 | 65.8 | 100 | 99 | Planned Tier1 Network Intervention |
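
The Atlas SRM "Put" test failures recorded above for the 25th to 27th quote a Python AttributeError. For context, a minimal sketch of how that error signature typically arises in a test wrapper; this is illustrative only and not the actual SAM test code:

```python
import subprocess

def run_probe(cmd):
    """Run an external probe command, cleaning up afterwards.

    Illustrative sketch only -- not the actual SAM test code.
    """
    proc = None
    try:
        # If Popen itself raises (e.g. executable missing), proc stays None.
        proc = subprocess.Popen(cmd)
        return proc.wait()
    finally:
        # An unguarded cleanup such as a bare "proc.kill()" here would raise
        #   AttributeError: 'NoneType' object has no attribute 'kill'
        # whenever Popen failed -- the signature seen in the test errors above.
        if proc is not None and proc.poll() is None:
            proc.kill()
```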