Tier1 Operations Report 2015-02-04
From GridPP Wiki
Revision as of 11:06, 4 February 2015
RAL Tier1 Operations Report for 4th February 2015
Review of Issues during the week 28th January to 4th February 2015.
- A test was carried out on the problematic router during Thursday afternoon (22nd); it failed within a few minutes of taking over as the master. A manual flip back to the other router was then carried out. This caused a 5-minute break in network connectivity to the Tier1.
- There was a problem with the LHCb SRMs on Tuesday 27th Jan. Some processes did not re-connect to the databases following reboots required to pick up security updates.
Resolved Disk Server Issues
- GDSS778 (LHCbDst - D1T0) did not come back after being shut down for a reboot during scheduled work on Castor on Monday (2nd Feb). It was found to have a faulty disk drive. After being checked out it was returned to production during Tuesday afternoon (3rd) in read-only mode. It was put back in full production (read/write) this morning (4th Feb).
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
Ongoing Disk Server Issues
- None
Notable Changes made over the last week.
- The safety checking of the electrical power circuits in the machine room has been completed.
- The migration of all data off T10000A & B media has been completed.
- On Friday (23rd Jan) the FTS3 service was upgraded to version 3.2.31.
- On Monday (26th Jan) some redundant Castor stager schemas were cleaned up.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor (All SRMs) | SCHEDULED | WARNING | 11/02/2015 08:30 | 11/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems. |
Castor Atlas and GEN instances | SCHEDULED | WARNING | 04/02/2015 08:30 | 04/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems. |
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Investigate problems on the primary Tier1 router. Discussions with the vendor are ongoing.
- Track appropriate security updates.
- Move of connection for CERN Backup link on Tuesday 3rd Feb.
Listing by category:
- Databases:
- Application of Oracle PSU patches to database systems.
- A new database (Oracle RAC) has been set up to host the Atlas 3D database. This is updated from CERN via Oracle GoldenGate. This system is yet to be brought into use. (Currently Atlas 3D/Frontier still uses the OGMA database system, although this was also changed to update from CERN using Oracle GoldenGate.)
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4
- Castor:
- Update SRMs to new version (includes updating to SL6).
- Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
- Update Castor to 2.1-14-latest.
- Networking:
- Resolve problems with primary Tier1 Router
- Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
- Make routing changes to allow the removal of the UKLight Router.
- Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
- Fabric:
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Atlas and GEN Castor instances. | SCHEDULED | WARNING | 04/02/2015 08:30 | 04/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems. |
All Castor (all SRMs). | SCHEDULED | OUTAGE | 02/02/2015 10:00 | 02/02/2015 12:21 | 2 hours and 21 minutes | Castor Storage System Stop for updates and reboots. |
srm-atlas.gridpp.rl.ac.uk | SCHEDULED | WARNING | 28/01/2015 09:00 | 28/01/2015 12:03 | 3 hours and 3 minutes | Warning while patching Castor disk servers |
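The Duration column in the GOC DB tables above is simply the difference between the Start and End timestamps. As an illustrative sketch (not part of the report, and not how GOC DB itself computes it), this reproduces the listed durations:

```python
from datetime import datetime

def downtime_duration(start: str, end: str) -> str:
    """Return a GOC DB-style duration string from DD/MM/YYYY HH:MM timestamps."""
    fmt = "%d/%m/%Y %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    hours, remainder = divmod(int(delta.total_seconds()), 3600)
    minutes = remainder // 60
    return f"{hours} hours and {minutes} minutes"

# The Castor outage on 2nd Feb from the table above:
print(downtime_duration("02/02/2015 10:00", "02/02/2015 12:21"))
# 2 hours and 21 minutes
```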
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
111477 | Green | Less Urgent | In Progress | 2015-01-29 | 2015-02-02 | CMS | RAL Staging Test for Run2 |
111347 | Green | Urgent | In Progress | 2015-01-22 | 2015-02-03 | CMS | T1_UK_RAL Consistency Check (January 2015) |
111120 | Green | Less Urgent | Waiting Reply | 2015-01-12 | 2015-01-22 | Atlas | large transfer errors from RAL-LCG2 to BNL-OSG2 |
109694 | Red | Urgent | On hold | 2014-11-03 | 2015-01-20 | SNO+ | gfal-copy failing for files at RAL |
108944 | Red | Urgent | In Progress | 2014-10-01 | 2015-01-28 | CMS | AAA access test failing at T1_UK_RAL |
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2015-01-28 | Atlas | BDII vs SRM inconsistent storage capacity numbers |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
28/01/15 | 100 | 100 | 98.9 | 100 | 100 | 100 | 99 | Single SRM test failure. Could not open connection to srm-atlas.gridpp.rl.ac.uk |
29/01/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
30/01/15 | 100 | 100 | 100 | 100 | 100 | 98 | n/a | |
31/01/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
01/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
02/02/15 | 97.5 | 100 | 92 | 93 | 94 | 100 | 48 | Castor outage for system reboots to pick up security patch. |
03/02/15 | 100 | 100 | 100 | 73 | 96 | 100 | 85 | CMS: CE tests failed overnight. LHCb: Single SRM test failure.
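As a quick cross-check (not part of the report), the daily figures in the availability table average out over the seven days as below; this is only an illustrative sketch using the table's own numbers:

```python
# Daily availability (%) per VO from the table above, 28/01/15 - 03/02/15.
daily = {
    "OPS":   [100, 100, 100, 100, 100, 97.5, 100],
    "Alice": [100, 100, 100, 100, 100, 100, 100],
    "Atlas": [98.9, 100, 100, 100, 100, 92, 100],
    "CMS":   [100, 100, 100, 100, 100, 93, 73],
    "LHCb":  [100, 100, 100, 100, 100, 94, 96],
}

# Simple arithmetic mean over the seven days, rounded to one decimal place.
weekly = {vo: round(sum(v) / len(v), 1) for vo, v in daily.items()}
print(weekly)
# {'OPS': 99.6, 'Alice': 100.0, 'Atlas': 98.7, 'CMS': 95.1, 'LHCb': 98.6}
```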