Tier1 Operations Report 2016-11-16
From GridPP Wiki
Latest revision as of 10:53, 17 November 2016
RAL Tier1 Operations Report for 16th November 2016
Review of Issues during the week 9th to 16th November 2016.
- There was a problem with one of the Atlas Frontier launchpads on Monday 14th. All three such systems were updated to the newer release (3.5.22).
Resolved Disk Server Issues
- GDSS713 (CMS Disk - D1T0) crashed on Sunday morning. It was returned to service the following day - initially read-only. Our checks did not find any fault on the server.
Current operational status and issues
- LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
- The intermittent, low-level, load-related packet loss that has been seen over external connections is still being tracked. The replacement of the UKLight router appears to have reduced this - but we are allowing more time to pass before drawing any conclusions.
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- There was a short (one to two hour) interruption to tape mounts while the system that runs the tape library control software was swapped on Tuesday morning (15th Nov).
- LHCb are now writing to the 'D' tapes. The migration of their data from 'C' to 'D' tapes is underway, with around 100 tapes (some 10%) migrated so far.
Declared in the GOC DB
None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
  - Merge AtlasScratchDisk and LhcbUser into larger disk pools
  - Update to Castor version 2.1.15. Planning to roll out January 2017. (Proposed dates: 10th Jan: Nameserver; 17th Jan: First stager (LHCb); 24th Jan: Stager (Atlas); 26th Jan: Stager (GEN); 31st Jan: Final stager (CMS)).
  - Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Fabric:
  - Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk | SCHEDULED | OUTAGE | 16/11/2016 10:30 | 16/11/2016 14:30 | 4 hours | Upgrading backend network behind Echo Storage service |
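As a quick sanity check, the declared duration can be confirmed from the start and end times in the row above (a minimal sketch; the GOC DB timestamps are day/month/year as shown in the table):

```python
from datetime import datetime

# GOC DB times above use day/month/year and a 24-hour clock
FMT = "%d/%m/%Y %H:%M"

start = datetime.strptime("16/11/2016 10:30", FMT)
end = datetime.strptime("16/11/2016 14:30", FMT)

hours = (end - start).total_seconds() / 3600
print(f"Declared outage lasts {hours:g} hours")  # agrees with the Duration column
```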
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
124876 | Green | Less Urgent | On Hold | 2016-11-07 | 2016-11-15 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
124785 | Red | Urgent | Reopened | 2016-11-02 | 2016-11-09 | CMS | Configuration updated AAA - CMS Site Name missing |
124606 | Yellow | Very Urgent | In Progress | 2016-10-24 | 2016-11-01 | CMS | Consistency Check for T1_UK_RAL |
122827 | Green | Less Urgent | Waiting for Reply | 2016-07-12 | 2016-10-11 | SNO+ | Disk area at RAL |
120350 | Yellow | Less Urgent | In Progress | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-10-05 | | CASTOR at RAL not publishing GLUE 2 (Updated. There are ongoing discussions with GLUE & WLCG) |
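For reference, the age of each open ticket at the time of the snapshot can be computed from the creation dates in the table (an illustrative sketch, assuming a snapshot date of 16th November 2016):

```python
from datetime import date

SNAPSHOT = date(2016, 11, 16)  # morning of the meeting

# Creation dates transcribed from the table above
created = {
    124876: date(2016, 11, 7),
    124785: date(2016, 11, 2),
    124606: date(2016, 10, 24),
    122827: date(2016, 7, 12),
    120350: date(2016, 3, 22),
    117683: date(2015, 11, 18),
}

ages = {gid: (SNAPSHOT - d).days for gid, d in created.items()}
for gid, age in sorted(ages.items(), key=lambda kv: -kv[1]):
    print(f"GGUS {gid}: open {age} days")
```

The oldest ticket (117683, the GLUE 2 publishing issue) has been open for roughly a year at this snapshot.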
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
09/11/16 | 100 | 100 | 100 | 100 | 100 | 92 | N/A | |
10/11/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
11/11/16 | 100 | 100 | 100 | 100 | 100 | 100 | N/A | |
12/11/16 | 100 | 100 | 100 | 99 | 100 | 100 | N/A | Single SRM test failure: User timeout over |
13/11/16 | 100 | 100 | 100 | 100 | 100 | 100 | 97 | |
14/11/16 | 100 | 100 | 100 | 100 | 100 | 100 | 93 | |
15/11/16 | 100 | 100 | N/A | 100 | 100 | 100 | 100 | (Atlas tests not reporting). |
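For a rough weekly summary, the daily figures above can be averaged per VO (a minimal sketch; values transcribed from the table, with the N/A Atlas entry on the 15th excluded from its average):

```python
# Daily availability percentages transcribed from the table above;
# None marks the N/A Atlas entry on 15/11/16.
availability = {
    "OPS":   [100, 100, 100, 100, 100, 100, 100],
    "Alice": [100, 100, 100, 100, 100, 100, 100],
    "Atlas": [100, 100, 100, 100, 100, 100, None],
    "CMS":   [100, 100, 100, 99, 100, 100, 100],
    "LHCb":  [100, 100, 100, 100, 100, 100, 100],
}

def weekly_mean(values):
    """Average over the days that actually reported a figure."""
    reported = [v for v in values if v is not None]
    return round(sum(reported) / len(reported), 2)

for vo, days in availability.items():
    print(f"{vo}: {weekly_mean(days)}%")
```

On these figures only CMS dips below 100% for the week (99.86%), from the single SRM test failure on the 12th.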
Notes from Meeting.
- There was a discussion about the Tier1's response to the CVE-2016-5195 security issue. In particular, the batch system stopped accepting new work until the patch was received and applied; storage remained available throughout.
- At present Durham is the only Dirac site from which we have received production data. Test data has been received from the Leicester site.