Latest revision as of 10:18, 1 April 2015
RAL Tier1 Operations Report for 1st April 2015
Review of Issues during the fortnight 18th March to 1st April 2015.
- There were problems with the ARGUS server that affected batch work on Wednesday 18th March. This was fixed following a callout in the evening.
- A further investigation into the primary Tier1 router on the morning of Thursday 19th March failed to fix the problem whereby the router fails after a few minutes in service.
- On Friday 20th, a problem was found and fixed that caused gridmap file updates to fail.
- On Thursday 26th March, an uplink cable was replaced on a core switch. This may have fixed connectivity problems reported by the Nebraska T2 in the USA.
- On Friday 27th March, a PDU powering a network switch turned itself off, cutting connectivity to many of the database servers. Staff attended site and reset the PDU. Most services were affected for approximately 2.5 hours (9:30 pm until midnight).
Resolved Disk Server Issues
- None
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
Ongoing Disk Server Issues
- None
Notable Changes made this last week.
- gfal2 and davix rpms are in the process of being updated across the worker nodes.
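A rolling rpm update like this is typically verified per worker node by comparing the installed package version against the target release. A minimal sketch of such a check using `sort -V` for version ordering (the package names come from the report; the version numbers below are hypothetical placeholders, not the actual releases deployed):

```shell
#!/bin/sh
# version_ge INSTALLED TARGET -> exit 0 if INSTALLED >= TARGET.
# sort -V orders version strings numerically; if TARGET sorts first
# (or equal), then INSTALLED is at least TARGET.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Hypothetical installed/target versions for illustration. On a real
# node the installed version would come from e.g.:
#   rpm -q --qf '%{VERSION}' gfal2
if version_ge "2.7.8" "2.7.0"; then
    echo "gfal2 up to date"
fi
if ! version_ge "0.4.0" "0.5.0"; then
    echo "davix needs update"
fi
```

Wrapped in a loop over the worker-node list, a check like this gives a quick report of which nodes still await the new rpms.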
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
---|---|---|---|---|---|---
All Castor (all SRM endpoints) | SCHEDULED | OUTAGE | 08/04/2015 10:00 | 08/04/2015 14:00 | 4 hours | Upgrade of Castor storage to version 2.1.14-15 |
Whole site | SCHEDULED | WARNING | 01/04/2015 07:45 | 01/04/2015 11:00 | 3 hours and 15 minutes | Warning on site for network test/reconfiguration and load test of UPS/generator. |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Separate some non-Tier1 services from our network so that the router problems can be investigated more easily.
- Update Castor to 2.1.14-15 (Proposed date - Wednesday 8th April).
Listing by category:
- Databases:
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4
- Castor:
- Update Castor to 2.1.14-15 (Proposed date - Wednesday 8th April).
- Update SRMs to new version (includes updating to SL6).
- Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
- Networking:
- Resolve problems with primary Tier1 Router
- Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
- Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
- Make routing changes to allow the removal of the UKLight Router.
- Fabric:
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
---|---|---|---|---|---|---
Whole site | SCHEDULED | WARNING | 01/04/2015 07:45 | 01/04/2015 11:00 | 3 hours and 15 minutes | Warning on site for network test/reconfiguration and load test of UPS/generator. |
Whole site | SCHEDULED | WARNING | 19/03/2015 07:45 | 19/03/2015 08:30 | 45 minutes | Warning during tests of Tier1s network router. Possibility of two short (few minute) breaks in connectivity to the RAL Tier1. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
---|---|---|---|---|---|---|---
112721 | Green | Less Urgent | In Progress | 2015-03-28 | 2015-03-30 | Atlas | RAL-LCG2: SOURCE Failed to get source file size |
112713 | Green | Urgent | In Progress | 2015-03-27 | 2015-03-31 | CMS | Please clean up unmerged area - RAL |
111699 | Green | Less Urgent | In Progress | 2015-02-10 | 2015-03-23 | Atlas | gLExec hammercloud jobs keep failing at RAL-LCG2 & RALPP |
109694 | Red | Urgent | In Progress | 2014-11-03 | 2015-03-31 | SNO+ | gfal-copy failing for files at RAL |
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-03-30 | CMS | AAA access test failing at T1_UK_RAL |
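The SNO+ ticket above concerns gfal-copy failures for files at RAL. Reproducing such a report usually means re-running the transfer by hand against the SRM endpoint. A sketch of that, using the standard `gfal-copy` client from gfal2-util; the endpoint hostname and file path below are illustrative assumptions, not taken from the ticket:

```shell
#!/bin/sh
# Rebuild the SRM URL for a failing transfer and retry it by hand.
SRM_HOST="srm-snoplus.gridpp.rl.ac.uk"   # hypothetical endpoint name
SRM_PORT=8443
FILE_PATH="/castor/ads.rl.ac.uk/prod/snoplus/test/file1.dat"  # hypothetical

SRC="srm://${SRM_HOST}:${SRM_PORT}${FILE_PATH}"
DST="file:///tmp/file1.dat"

echo "would run: gfal-copy -t 120 ${SRC} ${DST}"

# Only attempt the copy where the gfal2 client tools (and a valid
# grid proxy) are actually available:
if command -v gfal-copy >/dev/null 2>&1; then
    gfal-copy -t 120 "$SRC" "$DST"
fi
```

The `-t 120` timeout keeps a hung transfer from blocking the test; the error gfal-copy prints (for example on the source stat) is usually the most useful detail to attach to the ticket.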
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
---|---|---|---|---|---|---|---|---
18/03/15 | 100 | 100 | 99 | 92 | 92 | 100 | 100 | CE tests failed owing to ARGUS problem. |
19/03/15 | 100 | 100 | 100 | 96 | 100 | 99 | 98 | Failed single SRM test and had some CE test errors. At time of planned network intervention. |
20/03/15 | 100 | 100 | 100 | 100 | 100 | 98 | 97 | |
21/03/15 | 100 | 100 | 100 | 100 | 100 | 100 | 98 | |
22/03/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
23/03/15 | 100 | 100 | 100 | 100 | 100 | 100 | 96 | |
24/03/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
25/03/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
26/03/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
27/03/15 | 92 | 97 | 92 | 100 | 100 | 89 | 100 | PDU problems caused switch to be without power |
28/03/15 | 97 | 95 | 96 | 100 | 100 | 98 | 82 | PDU problems caused switch to be without power |
29/03/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
30/03/15 | 98 | 100 | 100 | 100 | 100 | 100 | 100 | Single failed file transfer. |
31/03/15 | 100 | 100 | 100 | 100 | 100 | 95 | 100 |