RAL Tier1 Operations Report for 13th May 2015
Review of Issues during the week 6th to 13th May 2015.
- At the end of last week the Atlas queue ANALY_RAL_SL6 was being set offline and then back online a few times per day by Atlas. This was traced to a configuration problem on a batch of new worker nodes. These nodes are temporarily disabled pending a fix.
- Problems with Castor xroot response for CMS have been very acute; in particular, the time required to open files is often excessively long (a file-open timing sketch follows this list). Work is ongoing to understand this problem. It is clear that many files are located on two particularly 'hot' disk servers, which may account for most of the effect, although this is not the only cause. To eliminate possible causes, the CMS AAA xroot redirector was stopped on Friday (8th May).
- As reported last week, the network latency difference between the two directions over the OPN and the balancing of the paired link to the UKLight router have been fixed and have remained good this last week. However, intermittent, low-level, load-related packet loss over the OPN to CERN is still being seen and tracked.
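The file-open latency mentioned above can be measured directly by timing individual xroot opens against the Castor endpoint. The sketch below is a minimal illustration only, assuming the pyxrootd (XRootD Python) bindings are installed; the redirector host and file paths are placeholders, not the real RAL/CMS ones, and this is not the monitoring actually used on site.

```python
#!/usr/bin/env python
"""Time xroot file opens against an xrootd/Castor endpoint.

Minimal sketch using the pyxrootd bindings; the endpoint and file list
below are placeholders, not real RAL/CMS paths.
"""
import time

from XRootD import client
from XRootD.client.flags import OpenFlags

REDIRECTOR = "root://xrootd.example.ac.uk"  # placeholder endpoint
TEST_FILES = [
    "/castor/example.ac.uk/cms/store/test/file1.root",  # placeholder paths
    "/castor/example.ac.uk/cms/store/test/file2.root",
]


def time_open(url):
    """Return (seconds taken, status) for a single xroot file open."""
    f = client.File()
    start = time.time()
    status, _ = f.open(url, OpenFlags.READ)
    elapsed = time.time() - start
    if status.ok:
        f.close()
    return elapsed, status


if __name__ == "__main__":
    for path in TEST_FILES:
        url = REDIRECTOR + "/" + path
        elapsed, status = time_open(url)
        outcome = "ok" if status.ok else "FAILED: %s" % status.message
        print("%6.2fs  %s  (%s)" % (elapsed, path, outcome))
```

Running a check of this kind against a sample of files held on the suspected 'hot' disk servers versus files elsewhere would show whether the long open times follow the servers or the redirector.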
Resolved Disk Server Issues
- None
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
- Castor xroot performance problems seen by CMS - particularly in file open times.
- The post mortem review of the network incident on the 8th April is being finalised.
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- gfal2 and davix rpms are in the process of being updated across the worker nodes; the update should be completed this week (a version-check sketch follows this list).
- A start has been made on updating the Castor tape servers to SL6.
- On Thursday (7th May) the production FTS3 was upgraded to version 3.2.33 and gfal to 2.9.1.
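The gfal2/davix rollout can be tracked by comparing installed rpm versions across the batch farm. The sketch below is an illustration only: the node list is a placeholder, the package names 'gfal2' and 'davix' are assumptions about the local rpm naming, and passwordless ssh from the admin host is assumed; it is not the configuration-management tooling actually used for the update.

```python
#!/usr/bin/env python
"""Report installed gfal2/davix rpm versions across a set of worker nodes.

Sketch only: the node list is a placeholder and the package names are
assumptions; assumes passwordless ssh from the admin host to each node.
"""
import subprocess

WORKER_NODES = ["wn0001.example.ac.uk", "wn0002.example.ac.uk"]  # placeholders
PACKAGES = ["gfal2", "davix"]


def query_node(node):
    """Return 'rpm -q' output for the packages of interest on one node."""
    cmd = ["ssh", "-o", "BatchMode=yes", node, "rpm", "-q"] + PACKAGES
    try:
        out = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as exc:
        # rpm exits non-zero if a package is missing; keep its output anyway
        out = exc.output
    return out.decode().strip()


if __name__ == "__main__":
    for node in WORKER_NODES:
        print("== %s ==" % node)
        print(query_node(node))
```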
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
cream-ce01, cream-ce02 | SCHEDULED | OUTAGE | 05/05/2015 12:00 | 02/06/2015 12:00 | 28 days | Decommissioning of CREAM CEs (cream-ce01.gridpp.rl.ac.uk, cream-ce02.gridpp.rl.ac.uk). |
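The downtimes shown in the table above can also be retrieved programmatically from the GOC DB. The sketch below is a hedged illustration: it assumes the public GOCDB programmatic interface at goc.egi.eu exposes a 'get_downtime' method taking a 'topentity' site parameter and returning XML with DOWNTIME/SEVERITY/DESCRIPTION/START_DATE/END_DATE elements; those method, parameter and element names should be checked against the current GOCDB PI documentation before relying on them.

```python
#!/usr/bin/env python
"""Fetch downtimes declared for RAL-LCG2 from the public GOCDB interface.

Assumption: the GOCDB programmatic interface exposes a public 'get_downtime'
method with a 'topentity' parameter and returns XML whose element names
match those used below; verify against the GOCDB PI documentation.
"""
import xml.etree.ElementTree as ET

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2

URL = ("https://goc.egi.eu/gocdbpi/public/"
       "?method=get_downtime&topentity=RAL-LCG2")


def fetch_downtimes(url=URL):
    """Return a list of (severity, description, start, end) tuples."""
    root = ET.fromstring(urlopen(url).read())
    downtimes = []
    for dt in root.findall("DOWNTIME"):
        downtimes.append((
            dt.findtext("SEVERITY"),
            dt.findtext("DESCRIPTION"),
            dt.findtext("START_DATE"),
            dt.findtext("END_DATE"),
        ))
    return downtimes


if __name__ == "__main__":
    for severity, desc, start, end in fetch_downtimes():
        print("%s  %s - %s  %s" % (severity, start, end, desc))
```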
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Turn off ARC-CE05. This will leave four ARC CEs (as planned). The fifth was set up as a temporary workaround for a specific problem and is no longer required.
- Progressive upgrading of Castor Tape Servers to SL6.
- Upgrade Tier1 Castor Oracle Databases to version 11.2.0.4. Proposed timetable (delayed by one week since last week's report):
  - Week 26-28 May: Install software on Database Systems (some 'At Risks')
  - Tuesday 2nd June: Switchover and upgrade Neptune (ATLAS and GEN downtime)
  - Monday 8th June: Upgrade Neptune's standby (ATLAS and GEN at risk)
  - Wednesday 10th June: Switchover Neptune and Pluto, and upgrade Pluto (All Tier1 Castor downtime)
  - Tuesday 16th June: Upgrade Pluto's standby (Tier1 at risk)
  - Thursday 18th June: Switchover Pluto (All Tier1 Castor downtime)
Listing by category:
- Databases:
  - Switch LFC/3D to new Database Infrastructure.
  - Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases (Castor, LFC and Atlas Frontier).
- Castor:
  - Update SRMs to new version (includes updating to SL6).
  - Update the Oracle databases behind Castor to version 11.2.0.4. Will require some downtimes (see above).
  - Update disk servers to SL6.
  - Update to Castor version 2.1.15.
- Networking:
  - Separate some non-Tier1 services off our network so that the router problems can be investigated more easily.
  - Resolve problems with the primary Tier1 router.
  - Enable the RIP protocol for updating routing tables on the Tier1 routers (install patch to router software).
  - Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40 Gbit.
  - Make routing changes to allow the removal of the UKLight router.
  - Cabling/switch changes to the network in the UPS room to improve resilience.
- Fabric:
  - Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
lcgfts3.gridpp.rl.ac.uk | SCHEDULED | WARNING | 07/05/2015 11:00 | 07/05/2015 12:00 | 1 hour | Update of Production FTS3 Server to version 3.2.33 |
cream-ce01, cream-ce02 | SCHEDULED | OUTAGE | 05/05/2015 12:00 | 02/06/2015 12:00 | 28 days | Decommissioning of CREAM CEs (cream-ce01.gridpp.rl.ac.uk, cream-ce02.gridpp.rl.ac.uk). |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
113591 | Green | Top Priority | Waiting Reply | 2015-05-07 | 2015-05-12 | Atlas | RAL-LCG2 DATATAPE, all transfers are failing at destination |
113320 | Green | Urgent | In Progress | 2015-04-27 | 2015-05-12 | CMS | Data transfer issues between T1_UK_RAL_MSS, T1_UK_RAL_Buffer and T1_UK_RAL_Disk |
112866 | Green | Less Urgent | In Progress | 2015-04-02 | 2015-04-07 | CMS | Many jobs are failed/aborted at T1_UK_RAL |
112819 | Green | Less Urgent | In Progress | 2015-04-02 | 2015-04-07 | SNO+ | ArcSync hanging |
112721 | Green | Less Urgent | In Progress | 2015-03-28 | 2015-04-16 | Atlas | RAL-LCG2: SOURCE Failed to get source file size |
109694 | Red | Urgent | In Progress | 2014-11-03 | 2015-05-13 | SNO+ | gfal-copy failing for files at RAL |
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-05-11 | CMS | AAA access test failing at T1_UK_RAL |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
06/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
07/05/15 | 100 | 100 | 100 | 100 | 100 | 97 | 88 | |
08/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 92 | |
09/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
10/05/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
11/05/15 | 100 | 100 | 100 | 100 | 96.0 | 100 | 98 | Single SRM test failure on list. (HANDLING TIMEOUT) |
12/05/15 | 100 | 100 | 98.0 | 100 | 100 | 100 | 99 | A single SRM test failure. ([SRM_INVALID_PATH] No such file or directory) |
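As a rough illustration of how a single test failure maps onto the figures above: if the SAM tests run roughly hourly, one failed period out of 24 gives about 96%, close to the LHCb figure for 11th May. The production numbers come from the full SAM/ARGO availability computation, which the toy sketch below does not reproduce; it is only a hedged illustration of the fraction-of-passing-periods idea, with the hourly test cadence an assumption.

```python
#!/usr/bin/env python
"""Toy illustration of a daily availability figure.

Not the real SAM/ARGO algorithm: this simply treats availability as the
fraction of (roughly hourly) test periods that passed during the day.
"""


def daily_availability(results):
    """results: list of booleans, one per test period (True = test passed)."""
    if not results:
        return 0.0
    return 100.0 * sum(1 for ok in results if ok) / len(results)


if __name__ == "__main__":
    # 24 hourly periods with a single failure
    one_failure = [True] * 23 + [False]
    print("%.1f%%" % daily_availability(one_failure))  # prints 95.8%
```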