Latest revision as of 11:09, 22 October 2014
RAL Tier1 Operations Report for 22nd October 2014
Review of Issues during the week 15th to 22nd October 2014.
- A problem reported last week, with a user filling up the spool areas on the ARC CEs, has been fixed. The user was contacted (and responded), and in addition an automated process that cleans up the large log files has been put in place.
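An automated log cleanup of this kind is typically run from cron. The following is a hypothetical sketch only: the spool path, file pattern, schedule, and 7-day retention are illustrative assumptions, not the actual Tier1 configuration.

```shell
# Hypothetical cron entry: every night at 03:00, delete ARC CE log files
# older than 7 days from the spool area. The path, pattern, and retention
# period below are assumptions for illustration only.
# m  h  dom mon dow  command
0 3 * * * find /var/spool/arc/grid -type f -name '*.log' -mtime +7 -delete
```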
Resolved Disk Server Issues
- GDSS648 (LHCbUser - D1T0) failed on Monday afternoon (13th Oct). It was returned to service on 16th October after its network interface was replaced. It was initially in readonly mode as a precaution. This morning (22nd) it was reverted to full (read & write) operation.
- GDSS720 (AtlasDataDisk - D1T0) has been completely drained pending more invasive investigations of the fault seen.
Current operational status and issues
- There have been problems with three of the four ARC CEs since the end of last week. The process of tidying up and reporting completed jobs is taking a long time. ARC-CE01 has been declared down and is being drained as part of the investigations.
Ongoing Disk Server Issues
- GDSS763 (AtlasDataDisk - D1T0) failed on Friday (17th) - the third time in around a month. It was returned to production on Monday (20th) in readonly mode. It is being drained ahead of further investigations.
Notable Changes made this last week.
- The production FTS3 service was upgraded to version 3.2.29-1 on Monday (20th Oct).
- Yesterday, Tuesday 21st, an Oracle update was applied to the OGMA (Atlas 3D) database system. This morning it was switched to update the Atlas 3D information from CERN using Oracle GoldenGate rather than Oracle Streams. This update was done during a planned Atlas test. However, it transpired that another site had load issues with their Frontier/database system, and the updated OGMA/Frontier system was returned to service very quickly after the change was completed. (This alleviates a problem whereby the replacement database system for OGMA ("Cronos") was not performing well enough, owing to limitations of the disk array being used temporarily within that system.)
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce01.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 20/10/2014 13:45 | 23/10/2014 17:00 | 3 days, 3 hours and 15 minutes | Investigating problems with the ARC CE. |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- The Oracle regular "PSU" patches will be applied to the Pluto Castor standby database system on Monday (27th Oct) and to the Pluto production database on Wednesday (29th Oct). This is expected to be a transparent intervention. The 'Pluto' database hosts the Castor Nameserver as well as the CMS and LHCb stager databases.
- The rollout of the RIP protocol to the Tier1 routers still has to be completed.
- First quarter 2015: Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room.
Listing by category:
- Databases:
- Apply latest Oracle patches (PSU) to the production database systems (Castor, LFC). (Underway).
- A new database (Oracle RAC) has been set up to host the Atlas3D database. This is updated from CERN via Oracle GoldenGate.
- Switch LFC/3D to new Database Infrastructure.
- Castor:
- Update Castor headnodes to SL6.
- Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
- Networking:
- Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
- Make routing changes to allow the removal of the UKLight Router.
- Enable the RIP protocol for updating routing tables on the Tier1 routers.
- Fabric
- Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes underway; migration of GEN from 'A' to 'D' tapes to follow.)
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
- There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room (Expected first quarter 2015).
Entries in GOC DB starting between the 15th and 22nd October 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce01.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 20/10/2014 13:45 | 23/10/2014 17:00 | 3 days, 3 hours and 15 minutes | Investigating problems with the ARC CE. |
lcgwms06.gridpp.rl.ac.uk, | UNSCHEDULED | OUTAGE | 15/10/2014 12:00 | 15/10/2014 16:37 | 4 hours and 37 minutes | Problems following EMI update |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
109399 | Green | Less Urgent | In Progress | 2014-10-17 | 2014-10-21 | | [Rod Dashboard] Issues detected at RAL-LCG2 (CE problems) |
109360 | Green | Less Urgent | Waiting for Reply | 2014-10-20 | 2014-10-21 | SNO+ | Nagios tests failing at RAL |
109329 | Green | Less Urgent | In Progress | 2014-10-14 | 2014-10-14 | CMS | access to lcgvo04.gridpp.rl.ac.uk |
109276 | Green | Less Urgent | In Progress | 2014-10-11 | 2014-10-13 | CMS | Submissions to RAL FTS3 REST interface are failing for some users |
109267 | Green | Urgent | Waiting for Reply | 2014-10-10 | 2014-10-16 | CMS | possible trouble accessing pileup dataset |
108944 | Green | Urgent | In Progress | 2014-10-01 | 2014-10-17 | CMS | AAA access test failing at T1_UK_RAL |
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2014-10-15 | Atlas | BDII vs SRM inconsistent storage capacity numbers |
107880 | Red | Less Urgent | In Progress | 2014-08-26 | 2014-10-15 | SNO+ | srmcp failure |
106324 | Red | Urgent | On Hold | 2014-06-18 | 2014-10-13 | CMS | pilots losing network connections at T1_UK_RAL |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
15/10/14 | 100 | 100 | 66.2 | 68.7 | 100 | 100 | 100 | Problem on ARC-CEs (started previous day). Plus, for Atlas, single SRM Put test failure "HANDLING TIMEOUT" |
16/10/14 | 100 | 100 | 91.6 | 92.0 | 100 | 100 | 100 | CERN power cut (plus one Atlas SRM Put failure: HANDLING TIMEOUT) |
17/10/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
18/10/14 | 100 | 100 | 99.2 | 100 | 100 | 100 | 100 | Single SRM Put Test failure: User timeout. |
19/10/14 | 100 | 100 | 100 | 100 | 100 | 0 | 100 | |
20/10/14 | 100 | 100 | 99.0 | 100 | 100 | 100 | 100 | Single SRM Put Test failure: could not open connection to srm-atlas.gridpp.rl.ac.uk |
21/10/14 | 100 | 100 | 100 | 91.6 | 100 | 100 | 99 | Problems with the ARC CEs. |