RAL Tier1 Operations Report for 8th October 2014
Review of Issues during the week 1st to 8th October 2014.
- As reported last week, a problem with one of the site BDIIs caused failures of SAM tests against the CEs through Tuesday/Wednesday last week (30/9 - 01/10).
- On Monday (6th October) performance problems were experienced with Castor, particularly affecting Atlas. It was found that the battery for the memory cache in the disk array hosting the Castor databases had failed. This resulted in much slower performance of the disk array, which in turn impacted Castor performance. A reconfiguration was made to lighten the load on the disk array and restore performance. One of the two databases (Neptune, which hosts the Atlas and GEN SRM databases) was moved to the standby database system. This required an outage of the Castor Atlas and GEN instances which lasted around 2 hours. The standby is, under normal operation, kept up to date using Oracle 'Data Guard', which replicates the updates applied to the primary database onto the standby (a minimal lag-check sketch follows this list). At the time of writing this report we are awaiting an engineer to come and fix the battery problem.
- Two files were declared lost to ALICE, both from AliceDisk. These were picked up by the Castor checksum checker.
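The lag-check sketch referred to above is given here. It is a minimal illustration only, assuming Python with the cx_Oracle client and a monitoring account that can read the standby's v$dataguard_stats view; the DSN, username and password are placeholders rather than real production values, and this is not the monitoring actually run at RAL.

```python
# Minimal sketch: report Data Guard transport/apply lag on a standby database.
# Assumptions: cx_Oracle is installed and the account can SELECT from
# v$dataguard_stats; the DSN and credentials below are placeholders.
import cx_Oracle

def report_dataguard_lag(dsn="neptune-standby.example:1521/NEPTUNE",
                         user="monitor", password="changeme"):
    """Print the transport and apply lag reported by a Data Guard standby."""
    conn = cx_Oracle.connect(user, password, dsn)
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT name, value, time_computed "
            "FROM v$dataguard_stats "
            "WHERE name IN ('transport lag', 'apply lag')"
        )
        for name, value, computed in cur:
            # 'value' is an interval string such as '+00 00:00:03'; a value
            # that keeps growing means the standby is falling behind.
            print(f"{name:14s} {value}  (computed {computed})")
    finally:
        conn.close()

if __name__ == "__main__":
    report_dataguard_lag()
```

A small and stable 'apply lag' is what makes a switchover like Monday's safe; a steadily growing value would mean the standby could not be relied on as an up-to-date copy of the primary.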
Resolved Disk Server Issues
- GDSS707 (AtlasDataDisk - D1T0) failed around 5am on Sunday morning (5th Oct). It was restarted and tested but no fault was found. The system was returned to service this morning (8th Oct).
- GDSS720 was taken out of service during the afternoon of Wednesday 1st October as there were performance problems accessing files on the server via Castor. It was returned to service the following morning. No configuration problems were found.
Current operational status and issues
- The problems reported last week with the Atlas Frontier systems were caused by the performance of the disk array that is being used temporarily by the Cronos back-end database. An alternative, faster disk array is being tested.
- The Castor databases are currently running in a temporary configuration while we await an engineer to come and fix the problem with the battery powering the cache on a disk array.
Ongoing Disk Server Issues
- None.
Notable Changes made this last week.
- On Monday (6th October) access to the two cream CEs (cream-ce01, cream-ce02) was modified to only accept ALICE, dteam and ops jobs plus snoplus.snolab.ca.
- Oracle patches (PSU) applied to the production Neptune database (Castor Atlas & GEN) on Thursday (2nd October).
- ALICE have successfully tested running batch work through the ARC CEs (see the submission sketch after this list).
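As a rough illustration of the kind of test mentioned in the last item, the sketch below submits a trivial job to one of the ARC CEs using the standard ARC client. It assumes the arcsub/arcstat/arcget tools, a valid grid proxy and the appropriate VO authorisation are already in place; the xRSL job description and helper function are illustrative placeholders, not the actual ALICE test workload.

```python
# Minimal sketch: submit a trivial test job to an ARC CE with arcsub.
# Assumptions: ARC client tools and a valid proxy are available; the xRSL
# below is a placeholder job, not the real ALICE batch workload.
import os
import subprocess
import tempfile

XRSL = ('&(executable="/bin/hostname")(jobname="arc-ce-smoke-test")'
        '(stdout="stdout.txt")(stderr="stderr.txt")')

def submit_test_job(ce="arc-ce01.gridpp.rl.ac.uk"):
    """Submit a trivial job to the given ARC CE and return arcsub's output."""
    with tempfile.NamedTemporaryFile("w", suffix=".xrsl", delete=False) as f:
        f.write(XRSL)
        xrsl_path = f.name
    try:
        # On success arcsub prints a line containing the job ID URL.
        result = subprocess.run(["arcsub", "-c", ce, xrsl_path],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()
    finally:
        os.unlink(xrsl_path)

if __name__ == "__main__":
    print(submit_test_job())
```

The job ID printed by arcsub can then be checked with arcstat and its output retrieved with arcget once the job has finished.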
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce02, arc-ce03, arc-ce04 | SCHEDULED | WARNING | 09/10/2014 10:00 | 09/10/2014 11:00 | 1 hour | Upgrade of ARC CEs to version 4.2.0 |
All Castor (all SRM endpoints) | UNSCHEDULED | WARNING | 07/10/2014 15:00 | 08/10/2014 17:00 | 1 day, 2 hours | Extending warning while databases running with degraded RAID battery |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- The rollout of the RIP protocol to the Tier1 routers still has to be completed.
- First quarter 2015: Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room.
Listing by category:
- Databases:
  - Apply latest Oracle patches (PSU) to the production database systems (Castor, LFC). (Underway).
  - A new database (Oracle RAC) has been set up to host the Atlas3D database. This is updated from CERN via Oracle GoldenGate.
  - Switch LFC/3D to new Database Infrastructure.
- Castor:
  - Update Castor headnodes to SL6.
  - Fix discrepancies that were found in some of the Castor database tables and columns. (The issue has no operational impact.)
- Networking:
  - Move the switches connecting the 2011 disk server batches onto the Tier1 mesh network.
  - Make routing changes to allow the removal of the UKLight Router.
  - Enable the RIP protocol for updating routing tables on the Tier1 routers.
- Fabric:
  - Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes underway; migration of GEN from 'A' to 'D' tapes to follow.)
  - Firmware updates on the remaining EMC disk arrays (Castor, FTS/LFC).
  - There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room (expected first quarter 2015).
Entries in GOC DB starting between the 1st and 8th October 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor (all SRM endpoints) | UNSCHEDULED | WARNING | 07/10/2014 15:00 | 08/10/2014 17:00 | 1 day, 2 hours | Extending warning while databases running with degraded RAID battery |
arc-ce01.gridpp.rl.ac.uk | SCHEDULED | WARNING | 07/10/2014 10:00 | 07/10/2014 11:00 | 1 hour | Upgrade of ARC CE to version 4.2.0 |
All Castor (all SRM endpoints) | UNSCHEDULED | WARNING | 06/10/2014 17:00 | 07/10/2014 15:00 | 22 hours | At risk on all Castor instances due to problems with RAID cache battery on Castor DB RAID arrays |
All Castor (all SRM endpoints) | UNSCHEDULED | OUTAGE | 06/10/2014 16:00 | 06/10/2014 16:22 | 22 minutes | Extending downtime to switch over dataguard for SRM databases |
srm-alice.gridpp.rl.ac.uk, srm-atlas.gridpp.rl.ac.uk, srm-biomed.gridpp.rl.ac.uk, srm-dteam.gridpp.rl.ac.uk, srm-hone.gridpp.rl.ac.uk, srm-ilc.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk, srm-mice.gridpp.rl.ac.uk, srm-minos.gridpp.rl.ac.uk, srm-na62.gridpp.rl.ac.uk, srm-pheno.gridpp.rl.ac.uk, srm-snoplus.gridpp.rl.ac.uk, srm-superb.gridpp.rl.ac.uk, srm-t2k.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 06/10/2014 14:30 | 06/10/2014 16:00 | 1 hour and 30 minutes | Downtime to switch to failover databases because of hardware failure |
Whole Site | SCHEDULED | WARNING | 01/10/2014 10:00 | 01/10/2014 12:00 | 2 hours | RAL Tier1 site in warning state due to UPS/generator test. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
108944 | Green | Urgent | In Progress | 2014-10-01 | 2014-10-01 | CMS | AAA access test failing at T1_UK_RAL |
108845 | Green | Urgent | In Progress | 2014-09-27 | 2014-10-07 | Atlas | RAL-LCG2: Source connection timeout plus globus_ftp_client error |
108546 | Amber | Less Urgent | In Progress | 2014-09-16 | 2014-09-22 | Atlas | RAL-LCG2_HIMEM_SL6: production jobs failed |
107935 | Red | Less Urgent | In Progress | 2014-08-27 | 2014-10-07 | Atlas | BDII vs SRM inconsistent storage capacity numbers |
107880 | Red | Less Urgent | Waiting Reply | 2014-08-26 | 2014-10-08 | SNO+ | srmcp failure |
106324 | Red | Urgent | In Progress | 2014-06-18 | 2014-09-23 | CMS | pilots losing network connections at T1_UK_RAL |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
01/10/14 | 67.6 | 100 | 100 | 95.5 | 100 | 100 | n/a | Multiple CE test failures. These appear to be due to a problem on one of our site BDIIs. (The problem started the previous day.)
02/10/14 | 100 | 100 | 98.1 | 100 | 100 | 97 | 100 | Two SRM test failures. Both "could not open connection to srm-atlas.gridpp.rl.ac.uk". One approximately coincidental with Oracle PSU upgrade. |
03/10/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
04/10/14 | 100 | 100 | 100 | 100 | 100 | 100 | n/a | |
05/10/14 | 100 | 100 | 99.3 | 100 | 100 | 100 | n/a | Single SRM test failure |
06/10/14 | 100 | 100 | 78.9 | 100 | 100 | 81 | 91 | Battery failure in the disk array holding the Castor databases led to poor database performance.
07/10/14 | 100 | 100 | 99.1 | 100 | 100 | 99 | 100 | Single SRM test failure |