RAL Tier1 Operations Report for 7th May 2014
Review of Issues during the week 30th April to 7th May 2014.
- There have been problems with "CMSDisk" in Castor, caused by it becoming very full.
- There have been problems with large numbers of jobs (from T2K) submitted to the batch system by the WMSs. A batch system parameter (the maximum number of gridftp connections on the ARC CEs) has been increased to try and alleviate this; see the sketch after this list.
- Five files were reported lost to Atlas following the draining of a disk server (GDSS600).
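To make the batch parameter change concrete, here is a minimal sketch, assuming the gridftp connection limit is exposed as a `maxconnections` option in the `[gridftpd]` block of the ARC CE's `arc.conf`; the actual option name and location on the production CEs may differ.

```python
# Minimal sketch: report the gridftp connection limit from an ARC CE
# configuration file. Assumes (not confirmed) that the limit is the
# 'maxconnections' option in the [gridftpd] block of arc.conf.
import re

def gridftpd_maxconnections(path="/etc/arc.conf"):
    """Return the maxconnections value from the [gridftpd] block, or None."""
    section = None
    with open(path) as conf:
        for raw in conf:
            line = raw.strip()
            if line.startswith("[") and line.endswith("]"):
                section = line.strip("[]")
            elif section == "gridftpd":
                match = re.match(r'maxconnections\s*=\s*"?(\d+)"?', line)
                if match:
                    return int(match.group(1))
    return None

if __name__ == "__main__":
    print("gridftpd maxconnections:", gridftpd_maxconnections())
```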
Resolved Disk Server Issues
- GDSS566 (AtlasDataDisk - D1T0) failed on the evening of Tuesday (22nd April). No specific problem was identified, but it looks to have been a fault with the disk controller. The server was returned to service on Thursday (24th April). (This server is due to be decommissioned; we will drain it soon.)
- GDSS460 (GenTape - D0T1) failed on Saturday afternoon (26th April). The problem was traced to a power supply fault. The server was returned to service at the end of Monday afternoon (28th April).
Current operational status and issues
- The load related problems reported for the CMS Castor instance have not been seen for a few weeks. However, work is underway to tackle these problems; in particular, servers with faster network connections will be moved into the disk cache in front of CMS_Tape when they become available.
- The Castor Team are now able to reproduce the intermittent failures of Castor access via the SRM that have been reported in recent weeks. Understanding of the problem is significantly advanced, and further investigations are ongoing using the Castor Preprod instance. Ideas for a workaround are being developed.
- As reported before, working with Atlas has somewhat improved the file deletion rate. However, there is still a problem that needs to be understood.
- Problems with the infrastructure used to host many of our non-Castor services have largely been worked around, although not yet fixed. Some additional migrations of VMs have been necessary.
- We have now had repeated instances where the OPN link has not cleanly failed over to the backup link during problems with the primary.
- One of the network uplinks (for the 2012 disk servers) has been running at full capacity. We have a plan to move the switch into the new Tier1 mesh network to alleviate this.
Ongoing Disk Server Issues
- None
Notable Changes made this last week.
- The RAL Tier1 network uplink has been migrated to the new Tier1 router pair.
- WMS nodes (lcgwms04, lcgwms05, lcgwms06) upgraded to EMI-3 update 15.
- L&B nodes (lcglb01, lcglb02) upgraded to EMI-3 update 12.
- EMI-3 update 15 applied to top-BDII nodes (lcgbdii01, lcgbdii03, lcgbdii04).
- EMI v3.15.0 BDII-site applied to lcgsbdii01 and lcgsbdii02 nodes.
- We are starting to test CVMFS Client version 2.1.19.
- Condor has been upgraded from version 8.0.4 to version 8.0.6, and the hyperthreading settings have been altered, with many machines now running at full hyperthreaded capacity (see the sketch below).
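As a rough cross-check of the hyperthreading change, the sketch below uses the HTCondor Python bindings (available for the 8.0 series) to list the CPU count each worker node advertises to the collector; `condor.example.org` is a placeholder hostname, not the real RAL pool.

```python
# Minimal sketch: list the CPUs advertised by each startd, to see which
# machines now expose their full hyperthreaded core count.
# 'condor.example.org' is a placeholder collector, not the RAL pool.
import htcondor

coll = htcondor.Collector("condor.example.org")
ads = coll.query(htcondor.AdTypes.Startd, "true", ["Machine", "TotalCpus"])

for ad in ads:
    print(ad.get("Machine"), ad.get("TotalCpus"))
```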
Declared in the GOC DB
| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
|---|---|---|---|---|---|---|
| srm-lhcb-tape.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 13/05/2014 08:00 | 13/05/2014 11:00 | 3 hours | Outage of tape system for update of library controller. |
| All Castor (SRM) endpoints | SCHEDULED | WARNING | 13/05/2014 08:00 | 13/05/2014 11:00 | 3 hours | Outage of tape system for update of library controller. |
| lcgui02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 30/04/2014 14:00 | 29/05/2014 13:00 | 28 days, 23 hours | Service being decommissioned. |
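For reference, entries like these can also be pulled programmatically: the sketch below queries the public GOC DB programmatic interface (`get_downtime` method) for the RAL-LCG2 site. The exact XML field names returned by the PI may vary, so treat this as illustrative.

```python
# Minimal sketch: fetch declared downtimes for RAL-LCG2 from the public
# GOC DB programmatic interface and print the key fields of each entry.
import urllib.request
import xml.etree.ElementTree as ET

URL = ("https://goc.egi.eu/gocdbpi/public/"
       "?method=get_downtime&topentity=RAL-LCG2")

with urllib.request.urlopen(URL) as resp:
    root = ET.parse(resp).getroot()

for downtime in root.findall("DOWNTIME"):
    # Field names as documented for the GOC DB PI; may differ by version.
    print(downtime.findtext("HOSTNAME"),
          downtime.findtext("SEVERITY"),
          downtime.findtext("FORMATED_START_DATE"),
          downtime.findtext("FORMATED_END_DATE"))
```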
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Databases:
  - Switch LFC/FTS/3D to new Database Infrastructure.
- Castor:
  - Castor 2.1.14 testing is largely complete, although a new minor version (2.1.14-12) will be released soon.
- Networking:
  - Move the switches connecting the recent disk server batches ('11, '12) onto the Tier1 mesh network.
  - Update the core Tier1 network and change the connection to the site and OPN, including:
    - Make routing changes to allow the removal of the UKLight Router.
- Fabric:
  - We are phasing out the use of the software server used by the small VOs.
  - Firmware updates on the remaining EMC disk arrays (Castor, FTS/LFC).
  - There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 30th April and 7th May 2014.
| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
|---|---|---|---|---|---|---|
| lcgui02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 30/04/2014 14:00 | 29/05/2014 13:00 | 28 days, 23 hours | Service being decommissioned. |
| Whole site | SCHEDULED | WARNING | 30/04/2014 10:00 | 30/04/2014 12:00 | 2 hours | RAL Tier1 site in warning state due to UPS/generator test. |
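The Duration column is simply End minus Start; as a quick worked check with Python's standard datetime module, the lcgui02 entry above reproduces the 28 days, 23 hours figure.

```python
# Quick check that Duration matches Start/End for the lcgui02 outage:
# 30/04/2014 14:00 -> 29/05/2014 13:00 should give 28 days, 23 hours.
from datetime import datetime

FMT = "%d/%m/%Y %H:%M"
start = datetime.strptime("30/04/2014 14:00", FMT)
end = datetime.strptime("29/05/2014 13:00", FMT)

delta = end - start
print(f"{delta.days} days, {delta.seconds // 3600} hours")
# -> 28 days, 23 hours
```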
Open GGUS Tickets (Snapshot during morning of meeting)
| GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
|---|---|---|---|---|---|---|---|
| 104896 | Green | Less Urgent | In Progress | 2014-04-25 | 2014-04-25 | | Argus failures with RAL wms |
| 103197 | Amber | Less Urgent | Waiting Reply | 2014-04-09 | 2014-04-09 | | RAL myproxy server and GridPP wiki |
| 101968 | Red | Less Urgent | In Progress | 2014-03-11 | 2014-04-01 | Atlas | RAL-LCG2_SCRATCHDISK: One dataset to delete is causing 1379 deletion errors |
| 98249 | Red | Urgent | Waiting Reply | 2013-10-21 | 2014-04-23 | SNO+ | please configure cvmfs stratum-0 for SNO+ at RAL T1 |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud. Daily values are availability percentages.
| Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
|---|---|---|---|---|---|---|---|---|
| 30/04/14 | 100 | 100 | 100 | 100 | 100 | 99 | 99 | |
| 01/05/14 | 100 | 100 | 100 | 100 | 100 | 90 | 99 | |
| 02/05/14 | 100 | 100 | 100 | 100 | 100 | 98 | 99 | |
| 03/05/14 | 100 | 100 | 100 | 100 | 100 | 90 | 99 | Some RAL batch problems followed by a problem with Atlas HammerCloud monitoring. |
| 04/05/14 | 100 | 100 | 100 | 100 | 100 | 87 | 99 | Some RAL batch problems followed by a problem with Atlas HammerCloud monitoring. |
| 05/05/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 06/05/14 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
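Since the daily figures are percentages, a weekly mean per column is straightforward; for example, the Atlas HC values above average out as follows.

```python
# Mean of the Atlas HammerCloud daily availability figures (percent)
# from the table above, 30/04/14 to 06/05/14.
atlas_hc = [99, 90, 98, 90, 87, 100, 100]
print(f"Atlas HC weekly mean: {sum(atlas_hc) / len(atlas_hc):.1f}%")
# -> Atlas HC weekly mean: 94.9%
```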