Latest revision as of 14:58, 7 September 2016
RAL Tier1 Operations Report for 7th September 2016
Review of Issues during the fortnight 24th August to 9th September 2016.
- The intermittent packet loss that was reported across part of the Tier1 network has not been seen for the last week. The cause was not understood, but the replacement of a transceiver on 24th August correlates with the improvement. (This note updated after the meeting).
- On Friday evening 24th August there was a problem with the squids that are used for cvmfs. This in turn caused problems for the cvmfs clients on many worker nodes; the problem was cleaned up during the following day. (A probe of the kind sketched after this list would show the fault as the worker nodes saw it.)
- There was a problem with xroot traffic into the RAL Tier1 for two hours this morning. One of the steps taken during the tightening of access controls on the data path had to be reverted.
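For context on the squid item above: worker-node cvmfs clients fetch all repository data through the site squids, which is why a squid fault shows up as cvmfs failures across many worker nodes at once. Below is a minimal probe sketch in Python; the squid hostname is a made-up placeholder (the report does not name the real squids) and the manifest URL is only an illustrative example of the standard /cvmfs/<repo>/.cvmfspublished path.

```python
import urllib.request

# Placeholder names for illustration only; they are not taken from the report.
PROXY = "http://squid.example.gridpp.rl.ac.uk:3128"   # site squid, standard port
MANIFEST = "http://cvmfs-stratum-one.cern.ch/cvmfs/atlas.cern.ch/.cvmfspublished"

def probe_via_squid(proxy: str, url: str, timeout: float = 10.0) -> bool:
    """Fetch a cvmfs repository manifest through the site squid.

    A failure here is roughly what the cvmfs clients on the worker nodes
    would have seen while the squids were misbehaving.
    """
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy})
    )
    try:
        with opener.open(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("squid path healthy:", probe_via_squid(PROXY, MANIFEST))
```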
Resolved Disk Server Issues
- GDSS776 (LHCbDst - D1T0) failed with a read-only file system on Thursday 1st September. It was put back in service the following day, initially read-only as a RAID rebuild was still taking place. 16 files that were being written when the server crashed were lost.
Current operational status and issues
- There is a problem, seen by LHCb, of a low but persistent rate of failure when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted against storage at other sites.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
Ongoing Disk Server Issues
- GDSS665 (AtlasTape - D0T1) failed with a read-only filesystem on Sunday 4th September. All files that were awaiting migration have now been copied to tape. The server is still under investigation.
Notable Changes made since the last meeting.
- A number of services have been moved to the Hyper-V 2012 infrastructure.
- Access controls for network traffic coming in via the bypass link have been tightened (see the reachability sketch below).
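This access-control tightening is what briefly broke xroot traffic into the Tier1 (see the review of issues above). After such a change, even a very simple external reachability check of the data path catches that class of problem quickly. The sketch below is illustrative only and not the monitoring actually used at RAL; the hostname is a placeholder and only the standard xrootd port (1094) is assumed.

```python
import socket

# Placeholder endpoint for illustration; the report does not name the real host.
HOST = "xrootd.example.gridpp.rl.ac.uk"
PORT = 1094  # standard xrootd port

def xroot_port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the xrootd port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(f"{HOST}:{PORT} reachable: {xroot_port_open(HOST, PORT)}")
```

A plain TCP connect does not prove the xroot service is healthy, but it is enough to show whether an access-control change has cut the path off entirely.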
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor that uses tape | SCHEDULED | WARNING | 13/09/2016 08:00 | 13/09/2016 17:00 | 9 hours | Maintenance on Tape Library. Tape access for read will stop. Writes will be buffered on disk and flushed to tape after the maintenance has completed. |
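The reason column above describes the usual behaviour of a tape-backed (D0T1) service class during library maintenance: recalls from tape stop, while new writes land in the disk buffer and are migrated to tape once the library returns. The toy Python sketch below only illustrates that buffering pattern; it is not Castor code and all names are invented.

```python
class TapeBackedBuffer:
    """Toy model of the buffered-write behaviour described above."""

    def __init__(self):
        self.disk_buffer = []   # files written but not yet migrated to tape
        self.on_tape = []
        self.library_up = True

    def write(self, filename):
        # Writes keep working during maintenance: they land on disk first.
        self.disk_buffer.append(filename)

    def recall(self, filename):
        # Reading a tape-resident file needs the library.
        if not self.library_up:
            raise RuntimeError("tape library in maintenance: recalls stopped")
        return filename in self.on_tape

    def flush_to_tape(self):
        # After maintenance: migrate everything buffered on disk.
        self.on_tape.extend(self.disk_buffer)
        self.disk_buffer.clear()


buf = TapeBackedBuffer()
buf.library_up = False
buf.write("run1234.raw")   # still succeeds, buffered on disk
buf.library_up = True
buf.flush_to_tape()
print(buf.on_tape)         # ['run1234.raw']
```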
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Preventative Maintenance on the Tape Libraries. Tuesday 13th September.
Listing by category:
- Castor:
  - Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  - Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
  - Migration of LHCb data from T10KC to T10KD tapes.
- Networking:
  - Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
- Fabric:
  - Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce01.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 25/08/2016 10:00 | 02/09/2016 09:59 | 7 days, 23 hours and 59 minutes | ARC-CE01 being drained ahead of a reconfiguration and move to run on different infrastructure. |
srm-biomed.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 04/08/2016 14:00 | 05/09/2016 14:00 | 32 days | Storage for BIOMED is no longer supported since the removal of the GENScratch storage area. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
123504 | Green | Less Urgent | In Progress | 2016-08-19 | 2016-08-23 | T2K | proxy expiration |
123403 | Green | Less Urgent | Waiting Reply | 2016-08-15 | 2016-08-17 | | FTS gets a SIGSEGV during a transfer |
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-08-22 | SNO+ | Disk area at RAL |
122364 | Green | Less Urgent | On Hold | 2016-06-27 | 2016-08-23 | | cvmfs support at RAL-LCG2 for solidexperiment.org |
121687 | Red | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar |
120350 | Yellow | Less Urgent | Waiting Reply | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2 |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
24/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
25/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
26/08/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
27/08/16 | 100 | 100 | 100 | 100 | 96 | N/A | 100 | Single SRM error on listing: [SRM_INVALID_PATH] No such file or directory |
28/08/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
29/08/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
30/08/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
31/08/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
01/09/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure because of a user timeout error |
02/09/16 | 100 | 100 | 100 | 96 | 100 | N/A | 100 | Two SRM test failures because of user timeout errors |
03/09/16 | 100 | 100 | 100 | 94 | 100 | N/A | 100 | Single SRM test failure because of a user timeout error, but the next test did not run for a while. |
04/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
05/09/16 | 100 | 100 | 100 | 98 | 100 | N/A | 100 | Single SRM test failure because of a user timeout error |
06/09/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
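A closing note on the CMS and LHCb dips in the table above: each corresponds to one or two failed SRM tests, and the reported percentage reflects how long the service counted as down before the next successful test (hence the lower 94% on 03/09/16, when the next test did not run for a while). The sketch below shows that time-weighted calculation under the assumption of hourly tests; the real availability algorithm used by the dashboards may differ.

```python
from datetime import datetime, timedelta

def daily_availability(results, day_start, day_end):
    """Time-weighted availability: a failed test counts the service as down
    until the next successful test (a simplifying assumption)."""
    down = timedelta()
    fail_start = None
    for when, passed in sorted(results):
        if not passed and fail_start is None:
            fail_start = when
        elif passed and fail_start is not None:
            down += when - fail_start
            fail_start = None
    if fail_start is not None:
        down += day_end - fail_start
    return 100.0 * (1 - down / (day_end - day_start))

if __name__ == "__main__":
    day = datetime(2016, 8, 27)
    # Hourly tests with one failure at 10:00, recovered by the 11:00 test.
    results = [(day + timedelta(hours=h), h != 10) for h in range(24)]
    print(round(daily_availability(results, day, day + timedelta(days=1)), 1))
    # -> 95.8, i.e. the ~96% seen for a single failed test with prompt recovery
```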