Revision as of 11:33, 18 February 2015
RAL Tier1 Operations Report for 18th February 2015
Review of Issues during the week 11th to 18th February 2015.
- On Monday morning the Tier1 data link (the link between the Tier1 routers and the UKLight router) failed and all data transfers stopped. The problem lasted for an hour, from 09:00 to 10:00.
- A single file loss was reported to Atlas. This was picked up by the regular checksum validation process.
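The checksum validation mentioned above can be illustrated with a minimal sketch. This assumes adler32 (the checksum algorithm commonly used for grid file integrity); the catalogue value and helper names are illustrative, not the actual validation tooling:

```python
import zlib

def adler32_of_file(path, chunk_size=1024 * 1024):
    """Compute the adler32 checksum of a file, reading in chunks
    so large disk-server files are not loaded into memory at once."""
    value = 1  # adler32 seed value
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            value = zlib.adler32(chunk, value)
    return format(value & 0xFFFFFFFF, "08x")

def validate(path, catalogue_checksum):
    """Flag a mismatch between the on-disk file and the catalogue
    entry (hypothetical catalogue value passed in as a hex string)."""
    return adler32_of_file(path) == catalogue_checksum
```

A file whose recomputed checksum differs from its catalogue entry would be reported as corrupt or lost, as happened here.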
Resolved Disk Server Issues
- GDSS656 (LHCbRawRdst - D1T0) failed on Thursday afternoon (12th Feb) when a disk drive failed while the RAID array was being verified. It was returned to service on the Saturday morning (14th).
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
Ongoing Disk Server Issues
- None
Notable Changes made this last week.
- Condor update to version 8.2.7 applied to CEs and other internal Condor systems.
- Starting test deployments of Condor 8.2.7 and cvmfs-client 2.1.20 on worker nodes.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor (All SRMs) | SCHEDULED | WARNING | 11/02/2015 08:30 | 11/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems. |
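Declared downtimes such as the one above can also be retrieved programmatically from the GOC DB, which returns XML listings of downtimes per site. The element names and response shape below are an assumption based on its `get_downtime` method, so treat this as a sketch rather than the exact interface:

```python
import xml.etree.ElementTree as ET

# Example response fragment in the general shape of a GOC DB downtime
# listing (element names are an assumption, values taken from the
# declaration above; START/END are 11/02/2015 08:30 and 15:00 UTC
# as unix timestamps).
SAMPLE = """\
<results>
  <DOWNTIME>
    <SEVERITY>WARNING</SEVERITY>
    <DESCRIPTION>Castor services At Risk during application of regular patches to back end database systems.</DESCRIPTION>
    <START_DATE>1423643400</START_DATE>
    <END_DATE>1423666800</END_DATE>
  </DOWNTIME>
</results>
"""

def list_downtimes(xml_text):
    """Return (severity, description) pairs from a downtime listing."""
    root = ET.fromstring(xml_text)
    return [(d.findtext("SEVERITY"), d.findtext("DESCRIPTION"))
            for d in root.findall("DOWNTIME")]
```

In practice the XML would come from an HTTP query against the GOC DB programmatic interface rather than an inline string.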
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Investigate problems on the primary Tier1 router. Discussions with the vendor are ongoing.
- Physical move of 2011 disk server racks to make space for new delivery.
Listing by category:
- Databases:
- Application of Oracle PSU patches to database systems (ongoing)
- A new database (Oracle RAC) has been set up to host the Atlas 3D database. This is updated from CERN via Oracle GoldenGate. This system is yet to be brought into use. (Currently Atlas 3D/Frontier still uses the OGMA database system, although this was also changed to update from CERN using Oracle GoldenGate.)
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4
- Castor:
- Update SRMs to new version (includes updating to SL6).
- Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
- Update Castor to 2.1-14-latest.
- Networking:
- Resolve problems with primary Tier1 Router
- Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
- Make routing changes to allow the removal of the UKLight Router.
- Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
- Fabric
- Physical move of 2011 disk server racks to make space for new delivery.
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor (all SRM endpoints). | SCHEDULED | WARNING | 11/02/2015 08:30 | 11/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
111699 | Green | Less Urgent | In Progress | 2015-02-10 | 2015-02-11 | Atlas | gLExec hammercloud jobs keep failing at RAL-LCG2 & RALPP |
111120 | Green | Less Urgent | Waiting Reply | 2015-01-12 | 2015-02-09 | Atlas | large transfer errors from RAL-LCG2 to BNL-OSG2 |
109694 | Red | Urgent | On hold | 2014-11-03 | 2015-01-20 | SNO+ | gfal-copy failing for files at RAL |
108944 | Red | Urgent | In Progress | 2014-10-01 | 2015-02-10 | CMS | AAA access test failing at T1_UK_RAL |
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2015-02-09 | Atlas | BDII vs SRM inconsistent storage capacity numbers |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
11/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 98 | |
12/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
13/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
14/02/15 | 100 | 100 | 100 | 100 | 100 | 98 | 99 | |
15/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 97 | |
16/02/15 | 100 | 100 | 95.0 | 96.0 | 96.0 | 100 | 96 | Break in network connectivity for data path |
17/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 |
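As a quick cross-check of the table above, the weekly averages follow directly from the daily figures (the single dip on 16/02 corresponds to the break in network connectivity). A minimal sketch:

```python
# Daily availability (%) for 11th-17th February 2015, taken from the
# table above for the three VOs whose figures varied.
availability = {
    "Atlas": [100, 100, 100, 100, 100, 95.0, 100],
    "CMS":   [100, 100, 100, 100, 100, 96.0, 100],
    "LHCb":  [100, 100, 100, 100, 100, 96.0, 100],
}

def weekly_average(values):
    """Mean of the daily availability figures, rounded to 2 dp."""
    return round(sum(values) / len(values), 2)

for vo, days in availability.items():
    print(vo, weekly_average(days))
# Atlas 99.29, CMS 99.43, LHCb 99.43
```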