RAL Tier1 Operations Report for 7th January 2015
Review of Issues during the three weeks 17th December 2014 to 7th January 2015.
- On Sunday 21st December there was a problem with the LHCb Castor instance when the transfer manager processes became unresponsive. These were restarted by the on-call team.
- On Christmas Eve (24/12) there was a networking problem that affected the Tier1. The primary of our router pair stopped working. However, the problem was not seen by the secondary router and no automated switchover took place. The primary router was manually restarted, triggering the failover, and connectivity was restored. An outage of 70 minutes was declared. Following some discussion the active router was flipped back to the primary during the afternoon to leave us in a resilient configuration for the holidays.
- During Christmas Day evening, at around 21:30, the above problem recurred. Staff attended on site and connectivity was restored at around 01:00 on Boxing Day morning. This time the primary router would not restart. Connectivity has been through the secondary router since this incident with no resilience. The problems with the primary router are being followed up and there may be a hardware fault.
Resolved Disk Server Issues
- None
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
Ongoing Disk Server Issues
- GDSS757 (CMSDisk D1T0) failed on Saturday 3rd January. It is undergoing tests.
Notable Changes made this last week.
- A significant number of systems had kernel updates applied during the days before Christmas following a security advisory.
- On Tuesday 6th Jan the Castor GEN instance headnodes were successfully upgraded to SL6.
- The quarterly UPS/Generator load test took place successfully this morning (Wednesday 7th January).
- Condor updated to version 8.2.6 on the CEs (a quick version check is sketched below).
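A minimal sketch of confirming the running version on a CE after such an update. It assumes the HTCondor command-line tools are installed and on the PATH; the `$CondorVersion:` string is the usual format of `condor_version` output, but verify against your installation.

```python
# Hedged sketch: confirm the HTCondor version on a CE by parsing
# `condor_version` output. Assumes the HTCondor CLI is installed.
import re
import subprocess

out = subprocess.run(["condor_version"], capture_output=True, text=True).stdout
match = re.search(r"\$CondorVersion:\s*(\S+)", out)
print(match.group(1) if match else "condor_version output not recognised")
# Expected after the update above: 8.2.6
```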
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | WARNING | 20/01/2015 08:30 | 22/01/2015 18:00 | 2 days, 9 hours and 30 minutes | Warning during safety checks on power circuits in machine room. Testing carried out during working hours on each day. |
Whole site. | SCHEDULED | WARNING | 13/01/2015 18:30 | 15/01/2015 18:00 | 1 day, 23 hours and 30 minutes | Warning during safety checks on power circuits in machine room. Testing carried out during working hours on each day. |
Whole site. | SCHEDULED | WARNING | 07/01/2015 10:00 | 07/01/2015 12:00 | 2 hours | Warning on site during quarterly UPS/Generator load test. |
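The Duration column is simply the elapsed time between the Start and End timestamps. A minimal illustrative sketch (the helper below is not part of any GOC DB tooling) that reproduces it:

```python
# Reproduce the Duration column from the Start/End timestamps above.
# Purely illustrative; pluralisation of "days"/"hours" is ignored.
from datetime import datetime

FMT = "%d/%m/%Y %H:%M"  # timestamp format used in the table

def duration(start: str, end: str) -> str:
    """Elapsed time between two table timestamps as days/hours/minutes."""
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    hours, rem = divmod(delta.seconds, 3600)
    return f"{delta.days} days, {hours} hours and {rem // 60} minutes"

print(duration("20/01/2015 08:30", "22/01/2015 18:00"))
# -> 2 days, 9 hours and 30 minutes
```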
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- The rollout of the RIP protocol to the Tier1 routers still has to be completed. A software patch from the vendors will be applied to the Tier1 routers in due course. (This was scheduled for 6th Jan but the problems encountered with one of the routers have delayed this.)
- Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room: Tue-Thu 13-15 January & Tue-Thu 20-22 January. There are some systems that need to be re-powered in preparation for this work.
Listing by category:
- Databases:
- A new database (Oracle RAC) has been set up to host the Atlas 3D database. This is updated from CERN via Oracle GoldenGate. This system is yet to be brought into use. (Currently Atlas 3D/Frontier still uses the OGMA database system, although this was also changed to update from CERN using Oracle GoldenGate.)
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4
- Castor:
- Update Castor headnodes to SL6 (Nameservers remain to be done).
- Update SRMs to new version (includes updating to SL6).
- Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
- Update Castor to 2.1-14-latest; this depends on SL6 being deployed.
- Networking:
- Move switches connecting the 2011 disk servers batches onto the Tier1 mesh network.
- Make routing changes to allow the removal of the UKLight Router.
- Enable the RIP protocol for updating routing tables on the Tier1 routers.
- Fabric:
- Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes finished; migration of GEN from 'A' to 'D' tapes to follow.)
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
- There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during January.
Entries in GOC DB starting between the 17th December 2014 and 7th January 2015.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | WARNING | 07/01/2015 10:00 | 07/01/2015 12:00 | 2 hours | Warning on site during quarterly UPS/Generator load test. |
Castor GEN instance | SCHEDULED | OUTAGE | 06/01/2015 10:00 | 06/01/2015 12:57 | 2 hours and 57 minutes | OS upgrade (SL6) on headnodes for CMS GEN instance |
Whole site | UNSCHEDULED | OUTAGE | 25/12/2014 21:30 | 26/12/2014 01:00 | 3 hours and 30 minutes | All Services unavailable due to network problem. (Retrospective entry). |
Whole site | UNSCHEDULED | OUTAGE | 24/12/2014 11:30 | 24/12/2014 12:40 | 1 hour and 10 minutes | All Services off air due to network problem. (Retrospective entry). |
srm-atlas, srm-cms-disk, srm-cms, srm-lhcb-tape, srm-lhcb | SCHEDULED | WARNING | 23/12/2014 10:00 | 23/12/2014 12:00 | 2 hours | Due to Kernel patching of EGI ADV 20141217, the Atlas, CMS and LHCb Castor headnodes will be rebooted. There may be short breaks in services |
arc-ce01, arc-ce02, arc-ce03, arc-ce04, arc-ce05, cream-ce01, cream-ce02 | SCHEDULED | OUTAGE | 23/12/2014 09:00 | 23/12/2014 13:00 | 4 hours | Due to Kernel patching of EGI ADV 20141217, the RAL tier1 batch farm worker nodes will need to be rebooted. |
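For reference, entries like these can also be retrieved from the GOC DB's public programmatic interface rather than read off the web pages. The sketch below is hedged: the `get_downtime` method exists in the public GOCDB PI, but the exact parameter names (`topentity`, `startdate`, `enddate`) and XML field names should be checked against the current GOCDB documentation before use.

```python
# Hedged sketch: query the public GOCDB programmatic interface for RAL-LCG2
# downtimes in the reporting window. Method and parameter names follow the
# public GOCDB PI as remembered; verify against current documentation.
import urllib.request
import xml.etree.ElementTree as ET

URL = ("https://goc.egi.eu/gocdbpi/public/?method=get_downtime"
       "&topentity=RAL-LCG2&startdate=2014-12-17&enddate=2015-01-07")

with urllib.request.urlopen(URL) as resp:
    root = ET.fromstring(resp.read())

# Field names (SEVERITY, CLASSIFICATION, DESCRIPTION) are assumptions
# about the PI's XML output; adjust to the actual schema if different.
for dt in root.findall("DOWNTIME"):
    print(dt.findtext("SEVERITY"), dt.findtext("CLASSIFICATION"),
          dt.findtext("DESCRIPTION"))
```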
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
110776 | Green | Urgent | Waiting for Reply | 2014-12-15 | 2014-12-17 | CMS | Phedex Node Name Transition |
110605 | Green | Less Urgent | On Hold | 2014-12-08 | 2014-12-12 | ops | [Rod Dashboard] Issues detected at RAL-LCG2 (srm-cms-disk.gridpp.rl.ac.uk) |
110382 | Green | Less Urgent | In Progress | 2014-11-26 | 2015-01-07 | N/A | RAL-LCG2: please reinstall your perfsonar hosts(s)
109712 | Red | Urgent | In Progress | 2014-10-29 | 2014-11-27 | CMS | Glexec exited with status 203; ... |
109694 | Red | Urgent | On Hold | 2014-11-03 | 2014-12-18 | SNO+ | gfal-copy failing for files at RAL
108944 | Red | Urgent | Waiting for Reply | 2014-10-01 | 2015-01-07 | CMS | AAA access test failing at T1_UK_RAL
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2014-12-15 | Atlas | BDII vs SRM inconsistent storage capacity numbers |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
17/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
18/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
19/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
20/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
21/12/14 | 100 | 100 | 100 | 100 | 90.9 | 100 | 100 | Large number of pending requests in Castor. Transfer Managers unresponsive and restarted. |
22/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
23/12/14 | 83.3 | 83.3 | 83.3 | 83.3 | 83.3 | 97 | 100 | Batch farm having kernel updates applied. |
24/12/14 | 97.0 | 100 | 95.9 | 91.8 | 98.2 | 100 | 96 | Problem with primary Tier1 Router. |
25/12/14 | 90.7 | 100 | 90.1 | 92.3 | 88.9 | 98 | N/A | Problem with primary Tier1 Router. Late evening. |
26/12/14 | 89.5 | 71.2 | 94.9 | 85.7 | 90.7 | 100 | 100 | Problem with primary Tier1 Router. Continuation of yesterday's problem. |
27/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
28/12/14 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
29/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 98 | |
30/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
31/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
01/01/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
02/01/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
03/01/15 | 100 | 100 | 100 | 100 | 96.8 | 100 | 99 | Single SRM test failure: Error listing file: No such file or directory. |
04/01/15 | 100 | 100 | 100 | 100 | 99.0 | 100 | 100 | Previous test was just before midnight. |
05/01/15 | 100 | 100 | 100 | 100 | 100 | 100 | ? | |
06/01/15 | 100 | 100 | 100 | 100 | 95.8 | 100 | ? | Single SRM test failure: User timeout over |
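As a worked check of the arithmetic behind these figures (assuming a day's availability is simply the fraction of the 24 hours the tests passed), the 83.3% recorded for 23/12/14 matches the four-hour batch-farm outage in the GOC DB entries above:

```python
# Worked check: a 4-hour outage (the 23/12 kernel-update window, 09:00-13:00)
# out of a 24-hour day, assuming availability = uptime fraction of the day.
outage_hours = 4.0
availability = 100 * (24 - outage_hours) / 24
print(f"{availability:.1f}%")  # -> 83.3%
```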