Tier1 Operations Report 2018-01-24
From GridPP Wiki
Latest revision as of 14:26, 24 January 2018
RAL Tier1 Operations Report for 24th January 2018
Review of Issues during the week 18th January 2018 to 24th January 2018
- Patching for Spectre and Meltdown took place last week (e.g. an outage of Castor for a few hours last Wednesday, 17th Jan).
- Problems since last week with the Atlas Castor instance: there has been a high rate of deletion requests, and the back-end stager database has been struggling to keep up with the total load (reads, writes and deletes).
- A prolonged downtime was required for CASTOR/ATLAS. The issue is primarily the increased load on the Atlas stager DB. Extensive investigation has revealed it to be linked with garbage collection: the Castor/DB team discovered that the diskcopy table is fragmented and has to be recreated. For this the stager daemon has to be shut down, hence the declared extended downtime.
- One of the BMS (Building Management System) units in the R89 machine room has been intermittently faulty, causing some system restarts. This system manages the pumps; in one case the chillers failed to restart. It was planned to replace a faulty card in the system this morning (24th Jan), which should have taken around 30 minutes. However, a test yesterday with one of the pumps running 'stand-alone' failed. During the start-up procedure the remaining pumps ramped up to balance the water flow rate and compensate for the loss of one pump. This caused a power surge to pump 4, which tripped, damaging the fuse carrier. This is now being investigated and a new date for the BMS board swap will be planned.
Current operational status and issues
- Ongoing security patching.
Resolved Castor Disk Server Issues
- gdss736 (lhcbDst - D1T0) – rebuilt and back in production (RO)
- gdss776 (lhcbDst - D1T0) - Failed Wednesday afternoon (17th). Returned to service on Friday.
Ongoing Castor Disk Server Issues
- gdss717 (CMSTape - D0T1) – multiple drive failure
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- Security patching done or underway.
- The WMS service has been declared as not in production in the GOC DB.
- Updating Echo CEPH to the "Luminous" version is underway. The service will continue to operate during this intervention.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-atlas.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 22/01/2018 22:00 | 23/01/2018 10:00 | 12 hours | urgent fixes needed on Oracle DB backend - extension |
srm-atlas.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 22/01/2018 17:45 | 22/01/2018 22:00 | 4 hours and 15 minutes | urgent fixes needed on Oracle DB backend |
srm-alice.gridpp.rl.ac.uk, srm-atlas.gridpp.rl.ac.uk, srm-biomed.gridpp.rl.ac.uk, srm-cert.gridpp.rl.ac.uk, srm-cms-disk.gridpp.rl.ac.uk, srm-cms.gridpp.rl.ac.uk, srm-dteam.gridpp.rl.ac.uk, srm-ilc.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk, srm-mice.gridpp.rl.ac.uk, srm-minos.gridpp.rl.ac.uk, srm-na62.gridpp.rl.ac.uk, srm-pheno.gridpp.rl.ac.uk, srm-preprod.gridpp.rl.ac.uk, srm-snoplus.gridpp.rl.ac.uk, srm-solid.gridpp.rl.ac.uk, srm-t2k.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 17/01/2018 10:00 | 17/01/2018 13:00 | 3 hours | Outage of Castor Storage to apply Security patches. |
lcglb01.gridpp.rl.ac.uk, lcglb02.gridpp.rl.ac.uk, lcgwms04.gridpp.rl.ac.uk, lcgwms05.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 12/01/2018 10:00 | 19/01/2018 12:00 | 7 days, 2 hours | WMS Decommissioning RAL Tier1 |
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-atlas.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 23/01/2018 15:45 | 24/01/2018 14:00 | 22 hours and 15 minutes | emergency downtime of Castor Atlas while rebuilding some database tables |
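The Duration column in the GOC DB tables above follows directly from the Start and End timestamps. A minimal sketch (the function name and wording are illustrative, not part of any GOC DB tooling):

```python
from datetime import datetime

def downtime_duration(start: str, end: str) -> str:
    """Render a downtime duration from GOC DB-style timestamps.

    Timestamps use the DD/MM/YYYY HH:MM convention of the tables above;
    the wording mirrors the Duration column ("4 hours and 15 minutes").
    """
    fmt = "%d/%m/%Y %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    total_minutes = int(delta.total_seconds() // 60)
    days, rem = divmod(total_minutes, 24 * 60)
    hours, minutes = divmod(rem, 60)
    parts = []
    if days:
        parts.append(f"{days} day{'s' if days != 1 else ''}")
    if hours:
        parts.append(f"{hours} hour{'s' if hours != 1 else ''}")
    if minutes:
        parts.append(f"{minutes} minute{'s' if minutes != 1 else ''}")
    return " and ".join(parts) if parts else "0 minutes"

# Entries from the tables above:
print(downtime_duration("22/01/2018 17:45", "22/01/2018 22:00"))  # 4 hours and 15 minutes
print(downtime_duration("23/01/2018 15:45", "24/01/2018 14:00"))  # 22 hours and 15 minutes
```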
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Ongoing or Pending - but not yet formally announced:
- Update to next CEPH version ("Luminous"). Ongoing.
Listing by category:
- Castor:
- Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done)
- Move to generic Castor headnodes.
- Echo:
- Update to next CEPH version ("Luminous"). Ongoing.
- Networking
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
- Internal
- DNS servers will be rolled out within the Tier1 network.
- Infrastructure
- Testing of power distribution boards in the R89 machine room is being scheduled for some time in late July / early August. The effect of this on our services is being discussed.
Open GGUS Tickets (Snapshot during morning of meeting)
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject |
---|---|---|---|---|---|---|---|
117683 | none | on hold | less urgent | 18/11/2015 | 03/01/2018 | Information System | CASTOR at RAL not publishing GLUE 2 |
124876 | ops | on hold | less urgent | 07/11/2016 | 13/11/2017 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
127597 | cms | on hold | urgent | 07/04/2017 | 05/10/2017 | File Transfer | Check networking and xrootd RAL-CERN performance |
132589 | lhcb | in progress | very urgent | 21/12/2017 | 24/01/2018 | Local Batch System | Killed pilots at RAL |
132712 | other | in progress | less urgent | 04/01/2018 | 23/01/2018 | Other | support for the hyperk VO (RAL-LCG2) |
132802 | cms | in progress | urgent | 11/01/2018 | 24/01/2018 | CMS_AAA WAN Access | Low HC xrootd success rates at T1_UK_RAL |
132830 | cms | reopened | very urgent | 12/01/2018 | 24/01/2018 | CMS_AAA WAN Access | Reading issues T1_UK_RAL |
132844 | atlas | in progress | urgent | 14/01/2018 | 19/01/2018 | Storage Systems | UK RAL-LCG2 DATADISK transfer errors "DESTINATION OVERWRITE srm-ifce err:" |
132935 | atlas | in progress | less urgent | 18/01/2018 | 22/01/2018 | Storage Systems | UK RAL-LCG2: deletion errors |
Availability Report
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment |
---|---|---|---|---|---|---|---|
17/01/18 | 87.15 | 100 | 100 | 100 | 100 | 100 | |
18/01/18 | 100 | 100 | 100 | 100 | 98 | 100 | |
19/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
20/01/18 | 100 | 92 | 100 | 100 | 100 | 100 | |
21/01/18 | 100 | 0 | 100 | 100 | 100 | 100 | |
22/01/18 | 100 | 49 | 65 | 100 | 89 | 100 | |
23/01/18 | 100 | 94 | 0 | 100 | 94 | 100 |
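The daily figures above come from the experiments' availability tests. As a rough sanity check only, the fraction of a day not covered by a declared outage window can be computed as below; this is a simplified model (the function is illustrative, and real availability is derived from SAM/monitoring test results, so the numbers will not match the table exactly):

```python
from datetime import datetime, timedelta

def day_availability(day: str, outage_start: str, outage_end: str) -> float:
    """Percentage of a calendar day falling outside one outage window.

    Simplified model only; timestamps use the DD/MM/YYYY HH:MM
    convention of the GOC DB tables above.
    """
    fmt = "%d/%m/%Y %H:%M"
    start_of_day = datetime.strptime(day + " 00:00", fmt)
    end_of_day = start_of_day + timedelta(days=1)
    # Clip the outage window to the day, then measure the overlap.
    o_start = max(datetime.strptime(outage_start, fmt), start_of_day)
    o_end = min(datetime.strptime(outage_end, fmt), end_of_day)
    overlap = max((o_end - o_start).total_seconds(), 0.0)
    return round(100.0 * (1 - overlap / 86400.0), 2)

# The Atlas outage of 22/01/2018 17:45 - 22:00 overlaps 22/01 by 4h15m:
print(day_availability("22/01/2018", "22/01/2018 17:45", "22/01/2018 22:00"))  # 82.29
```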
Hammercloud Test Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_UCORE, Template 841); CMS HC = CMS HammerCloud
Day | Atlas HC | CMS HC | Comment |
---|---|---|---|
17/01/18 | 92 | 100 | |
18/01/18 | 100 | 100 | |
19/01/18 | 100 | 99 | |
20/01/18 | 100 | 99 | |
21/01/18 | 100 | 100 | |
22/01/18 | 59 | 100 | |
23/01/18 | 100 | 99 |
Notes from Meeting.
- There was a discussion around performance of the Tier1 when accessing off-site data from the worker nodes. This data path goes through the site firewall, which imposes some restrictions.