RAL Tier1 Operations Report for 27th May 2015
Review of Issues during the week 20th to 27th May 2015.
Resolved Disk Server Issues
- GDSS649 (LHCbUser - D1T0) failed on Saturday 16th May when the system hung. Following tests, a faulty drive was replaced and the server was returned to service on Monday morning (18th May).
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
- Castor xroot performance problems are being seen by CMS, particularly very long file open times. To eliminate possible causes the CMS AAA xroot redirector was stopped for a while, although it has since been restarted. At least part of the problem is caused by two 'hot' disk servers, and we are in the process of re-distributing the files from the frequently accessed datasets (the job to do this is currently running). A sketch of how such file open times can be measured is given after this list.
- The post-mortem review of the network incident on 8th April is being finalised.
- The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
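A minimal sketch of how the long file open times reported by CMS could be measured by hand is shown below. It assumes the XRootD Python bindings are installed; the redirector host and file path are placeholders rather than the actual CMS AAA or RAL endpoints.

    # Time a single xroot file open (assumes the XRootD Python bindings).
    # The URL is a placeholder - substitute a real redirector and file path.
    import time

    from XRootD import client
    from XRootD.client.flags import OpenFlags

    URL = "root://redirector.example.org//store/data/somefile.root"  # hypothetical

    f = client.File()
    start = time.time()
    status, _ = f.open(URL, OpenFlags.READ)
    elapsed = time.time() - start

    if status.ok:
        print("open succeeded in %.2f s" % elapsed)
        f.close()
    else:
        print("open failed after %.2f s: %s" % (elapsed, status.message))

Repeating the open against individual disk servers rather than the redirector is one way to check whether the delay comes from the redirection step or from a 'hot' server.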
Ongoing Disk Server Issues
Notable Changes made since the last meeting.
- The Castor tape servers are being updated to SL6.
- Last week we reported a problem with a new configuration on a batch of new worker nodes. Most of this batch have now been reset to the usual worker node configuration.
Declared in the GOC DB

Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
cream-ce01, cream-ce02 | SCHEDULED | OUTAGE | 05/05/2015 12:00 | 02/06/2015 12:00 | 28 days | Decommissioning of CREAM CEs (cream-ce01.gridpp.rl.ac.uk, cream-ce02.gridpp.rl.ac.uk).
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- (Ongoing at time of meeting). The Castor standby Oracle database system is being moved to the Atlas building. This is expected to take most of the working day, during which time we are running Castor with a reduced level of backup.
- On Thursday morning, 28th May, there will be a short network intervention to separate some non-Tier1 services off our network so we can more easily investigate the router problems.
- Turn off ARC-CE05. This will leave four ARC CEs (as planned). This fifth CE was set up as a temporary workaround for a specific problem and is no longer required.
- Progressive upgrading of Castor Tape Servers to SL6.
- Upgrade Tier1 Castor Oracle Databases to version 11.2.0.4. Proposed timetable (delayed by one week since last week's report):
- Week 26-28 May: Install software on Database Systems (some 'At Risks')
- Tuesday 2nd June: Switchover and upgrade Neptune (ATLAS and GEN downtime - likely to be around one working day)
- Monday 8th June: Upgrade Neptune's standby (ATLAS and GEN at risk)
- Wednesday 10th June: Switchover Neptune and Pluto, and upgrade Pluto (All Tier1 Castor downtime - likely to be around one working day)
- Tuesday 16th June: Upgrade Pluto's standby (Tier1 at risk)
- Thursday 18th June: Switchover Pluto (All Tier1 Castor downtime - less than a working day)
Listing by category:
- Databases:
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases (Castor, LFC and Atlas Frontier)
- Castor:
- Update SRMs to new version (includes updating to SL6).
- Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
- Update disk servers to SL6.
- Update to Castor version 2.1.15.
- Networking:
- Resolve problems with the primary Tier1 Router.
- Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
- Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
- Make routing changes to allow the removal of the UKLight Router.
- Cabling/switch changes to the network in the UPS room to improve resilience.
- Fabric:
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
cream-ce01, cream-ce02 | SCHEDULED | OUTAGE | 05/05/2015 12:00 | 02/06/2015 12:00 | 28 days | Decommissioning of CREAM CEs (cream-ce01.gridpp.rl.ac.uk, cream-ce02.gridpp.rl.ac.uk).
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
113910 | Green | Urgent | In Progress | 2015-05-26 | 2015-05-26 | SNO+ | RAL data staging
113836 | Green | Less Urgent | In Progress | 2015-05-20 | 2015-05-20 | | GLUE 1 vs GLUE 2 mismatch in published queues
112721 | Green | Less Urgent | In Progress | 2015-03-28 | 2015-05-14 | Atlas | RAL-LCG2: SOURCE Failed to get source file size
109694 | Red | Urgent | In Progress | 2014-11-03 | 2015-05-19 | SNO+ | gfal-copy failing for files at RAL
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-05-26 | CMS | AAA access test failing at T1_UK_RAL
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
20/05/15 | 100 | 100 | 98.0 | 100 | 100 | 100 | 100 | Single SRM Put Test error: Error reading token data header: Connection closed
21/05/15 | 100 | 100 | 100 | 100 | 96.0 | 100 | 100 | Single SRM test failure: [SRM_INVALID_PATH] No such file or directory
22/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 96 |
23/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 |
24/05/15 | 100 | 100 | 100 | 100 | 96.0 | 100 | 100 | Single SRM test failure: [SRM_INVALID_PATH] No such file or directory
25/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
26/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 |