From GridPP Wiki
Latest revision as of 12:58, 28 October 2015
RAL Tier1 Operations Report for 28th October 2015
Review of Issues during the week 21st to 28th October 2015.
- Early on Saturday morning (24/10/2015), arc-ce02 developed problems, probably caused by problems with the underlying hypervisor. Some other, less critical, machines on the same hypervisor were also affected.
- On Monday evening (26/10/2015) there were problems with the WMS servers. A user was submitting jobs with very large output files, which were filling up the working partition on the machines.
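The filling working partition described above is the sort of condition a simple disk-usage check can flag before services start failing. A minimal sketch, assuming a hypothetical mount point and threshold (the real WMS sandbox path is not given in this report):

```shell
#!/bin/sh
# Hypothetical sketch: warn when a partition passes a usage threshold.
# PARTITION and THRESHOLD are illustrative assumptions, not the real WMS config.
PARTITION="${PARTITION:-/var}"
THRESHOLD="${THRESHOLD:-90}"   # percent used

# df -P prints one stable, POSIX-format line per filesystem;
# field 5 of the data line is the "Use%" figure.
usage=$(df -P "$PARTITION" | awk 'NR==2 {gsub("%", "", $5); print $5}')

if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: $PARTITION is ${usage}% full - check for oversized job output"
fi
```

A check like this could be run from cron and fed into whatever alerting the site already operates.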
Resolved Disk Server Issues
- gdss637 (AtlasTape - D0T1) was returned to production on Thursday 22nd Oct.
- gdss657 (LHCb_RAW - D0T1) was returned to production on Monday 26th Oct.
Current operational status and issues
- LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are then attempted to storage at other sites.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise, we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network.
- Long-standing CMS issues. The two items that remain are CMS Xroot (AAA) redirection and file open times. Work on the Xroot redirection is ongoing, with a new server having been added in recent weeks. File open times using Xroot remain slow, but this is a less significant problem.
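Tracking the kind of low-level packet loss described above usually starts with periodic active probes. A minimal sketch, assuming a hypothetical target host (the actual monitored endpoints are not named in this report):

```shell
#!/bin/sh
# Hypothetical sketch: sample packet loss to one target with ping.
# HOST is an illustrative assumption, not a real monitored endpoint.
HOST="${HOST:-example.gridpp.rl.ac.uk}"
COUNT=20

# Pull the "N% packet loss" figure out of the ping summary line.
loss=$(ping -c "$COUNT" -q "$HOST" 2>/dev/null \
    | grep -oE '[0-9.]+% packet loss' | cut -d% -f1)

echo "${HOST}: ${loss:-unknown}% packet loss"
```

Run regularly against a fixed set of internal and external targets, the logged figures make intermittent loss much easier to correlate with load.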
Ongoing Disk Server Issues
- gdss707 (AtlasDataDisk - D1T0) has been out of production since Friday (16th Oct). The server is currently undergoing testing by the fabric team.
- gdss665 (AtlasTape - D0T1) failed on Sat (24th Oct). This server is still with the fabric team.
- gdss663 (AtlasTape - D0T1) failed on Sun (25th Oct). This server is still with the fabric team.
- gdss664 (AtlasTape - D0T1) has had a number of issues since the weekend. This morning (28th Oct) we decided to remove it from production so that it can be investigated.
Notable Changes made since the last meeting.
- None
Declared in the GOC DB
- None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.
- Some detailed internal network re-configurations to enable the removal of the old 'core' switch from our network. This includes changing the way the UKLight router connects into the Tier1 network.
Listing by category:
- Databases:
  - Switch LFC/3D to new Database Infrastructure.
- Castor:
  - Update SRMs to new version (includes updating to SL6).
  - Update disk servers to SL6 (ongoing).
  - Update to Castor version 2.1.15.
- Networking:
  - Complete changes needed to remove the old core switch from the Tier1 network.
  - Make routing changes to allow the removal of the UKLight Router.
- Fabric:
  - Firmware updates on remaining EMC disk arrays (Castor, LFC).
Entries in GOC DB starting since the last report.
- None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
---|---|---|---|---|---|---|---
117171 | | very urgent | waiting for reply | 2015-10-24 | 2015-10-27 | LHCb | Aborted pilots on arc-ce02.gridpp.rl.ac.uk
116866 | Green | Less Urgent | On Hold | 2015-10-12 | 2015-10-19 | SNO+ | snoplus support at RAL-LCG2 (pilot role)
116864 | Green | Urgent | In Progress | 2015-10-12 | 2015-10-26 | CMS | T1_UK_RAL AAA opening and reading test failing again...
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
21/10/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
22/10/15 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | SRM test failure on PUT. (__main__.TimeoutException) |
23/10/15 | 100 | 100 | 92 | 100 | 100 | 100 | 100 | (Four SRM test failures. Three on ‘GET’, one on ‘PUT’. All: “__main__.TimeoutException” |
24/10/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
25/10/15 | 100 | 100 | 100 | 100 | 100 | 97 | 98 | |
26/10/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
27/10/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |