RAL Tier1 Operations Report for 10th January 2018
Review of Issues during the fortnight 21st December 2017 to 3rd January 2018
- Network connectivity issues on Stack 9 in the UPS room overnight Thursday/Friday 21/22 December affected some internal systems (mainly monitoring). A member of staff attended overnight; a faulty transceiver was found to be the cause and was replaced. External services were unaffected.
- Operations over the Christmas and New Year holiday were generally stable, although not completely quiet for the on-call team. There were some Castor disk server failures, and staff attended site over the holiday to replace failed disk drives. Our monitoring flagged problems with LHCb SAM tests, although these turned out to be caused by a problem with an LHCb certificate.
- The number of Atlas batch jobs being run is lower than expected. The batch (Condor) scheduling will be examined to understand and improve this; a starting-point query is sketched below.
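As a first step in that investigation, a minimal sketch using the HTCondor Python bindings to tally Atlas job states on the schedd. Running this where the `htcondor` module can reach the local schedd, and matching Atlas jobs on the AccountingGroup attribute, are assumptions about the local setup.

```python
# Minimal sketch: tally Atlas job states via the HTCondor Python bindings.
# Matching Atlas jobs on AccountingGroup is an assumption about local naming.
import htcondor

STATES = {1: "Idle", 2: "Running", 5: "Held"}

schedd = htcondor.Schedd()  # default local schedd
counts = {}
for ad in schedd.query('regexp("atlas", AccountingGroup)', ["JobStatus"]):
    label = STATES.get(ad.get("JobStatus"), "Other")
    counts[label] = counts.get(label, 0) + 1
print(counts)  # e.g. {'Running': 1200, 'Idle': 350}
```

A large Idle count alongside free multicore slots would point at the negotiator/scheduling side rather than at job supply.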
Current operational status and issues
- None
Resolved Castor Disk Server Issues
- GDSS688 (cmsDisk - D1T0) is back in production.
- GDSS743 (atlasStripInput - D1T0) is back in production.
- GDSS757 (cmsDisk - D1T0) is back in production.
Ongoing Castor Disk Server Issues
- GDSS756 (cmsDisk - D1T0) is not in production.
Limits on concurrent batch system jobs
- CMS multicore jobs: 550
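A quick way to check usage against that cap, again via the HTCondor Python bindings. Identifying "multicore" as RequestCpus > 1 and CMS jobs via the accounting group are both assumptions about how jobs are labelled at this site.

```python
# Sketch: count running CMS multicore jobs against the 550-job cap above.
# "Multicore" == RequestCpus > 1 with a CMS accounting group: both are
# assumptions about local job-labelling conventions.
import htcondor

CMS_MULTICORE_CAP = 550
schedd = htcondor.Schedd()
running = schedd.query(
    'JobStatus == 2 && RequestCpus > 1 && regexp("cms", AccountingGroup)',
    ["ClusterId"])
print("CMS multicore running: %d / %d" % (len(running), CMS_MULTICORE_CAP))
```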
Notable Changes made since the last meeting
- The ATLAS quota on Echo was increased by 500 TB on 2nd January; it now stands at 4.6 PB.
- A round of Linux kernel patching of back-end database systems has been completed.
Entries in GOC DB starting since the last report
No downtime was scheduled in the GOC DB between 2017-12-20 and 2018-01-03.
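That window can be cross-checked against the GOCDB programmatic interface. A hedged sketch, assuming the public `get_downtime` method and its `topentity`/`windowstart`/`windowend` parameters as documented for the GOCDB PI; verify against the current PI documentation before relying on it.

```python
# Sketch: list RAL-LCG2 downtimes in the reporting window via the GOCDB PI.
# Method and parameter names are assumptions; check the GOCDB PI docs.
import requests
import xml.etree.ElementTree as ET

resp = requests.get(
    "https://goc.egi.eu/gocdbpi/public/",
    params={
        "method": "get_downtime",
        "topentity": "RAL-LCG2",
        "windowstart": "2017-12-20",
        "windowend": "2018-01-03",
    },
    timeout=30)
root = ET.fromstring(resp.content)
for dt in root.iter("DOWNTIME"):
    print(dt.get("ID"), dt.findtext("SEVERITY"), dt.findtext("DESCRIPTION"))
```

An empty result set is consistent with the statement above that no downtime was scheduled.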
Declared in the GOC DB
- None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Ongoing or Pending - but not yet formally announced:
Listing by category:
- Castor:
- Update systems to use SL7, configured by Quattor/Aquilon (tape servers done).
- Move to generic Castor headnodes.
- Echo:
- Update to next CEPH version ("Luminous").
- Networking:
- Extend the number of services on the production network with IPv6 dual stack (done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers); a quick dual-stack check is sketched after this list.
- Services:
- Internal:
- DNS servers will be rolled out within the Tier1 network.
- Infrastructure:
- Testing of power distribution boards in the R89 machine room is being scheduled for some time in March. The effect of this on our services is being discussed.
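As referenced in the Networking item above, a quick dual-stack check for any service host, using only the Python standard library. The hostname below is a placeholder, not a real service alias.

```python
# Sketch: report whether a host publishes both IPv4 (A) and IPv6 (AAAA)
# addresses, i.e. is dual stack. The hostname below is a placeholder.
import socket

def is_dual_stack(host, port=443):
    families = {info[0] for info in socket.getaddrinfo(host, port)}
    return socket.AF_INET in families and socket.AF_INET6 in families

print(is_dual_stack("perfsonar.example.ac.uk"))  # hypothetical hostname
```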
Open GGUS Tickets (Snapshot during morning of meeting)
| Ticket-ID | Type | VO | Notified Site | Resp. Unit | Status | Priority | Creation | Last Update | ToI | Subject |
|---|---|---|---|---|---|---|---|---|---|---|
| 132748 | USER | ops | RAL-LCG2 | NGI_UK | in progress | less urgent | 2018-01-08 14:45 | 2018-01-09 09:58 | Operations | [Rod Dashboard] Issues detected at RAL-LCG2 |
| 132712 | USER | other | RAL-LCG2 | NGI_UK (assigned to lcg-support@gridpp.rl.ac.uk) | in progress | less urgent | 2018-01-04 16:22 | 2018-01-08 13:35 | Other | support for the hyperk VO (RAL-LCG2) |
| 132589 | TEAM | lhcb | RAL-LCG2 | NGI_UK | in progress | very urgent | 2017-12-21 06:45 | 2018-01-03 15:21 | Local Batch System | Killed pilots at RAL |
| 131815 | USER | t2k.org | RAL-LCG2 | NGI_UK | in progress | less urgent | 2017-11-13 14:42 | 2017-12-01 19:30 | Storage Systems | Extremely long download times for T2K files on tape at RAL |
| 127597 | USER | cms | RAL-LCG2 | NGI_UK (assigned to lcg-support@gridpp.rl.ac.uk; shared with sexton@fnal.gov) | on hold | urgent | 2017-04-07 10:34 | 2017-10-05 09:14 | File Transfer | Check networking and xrootd RAL-CERN performance |
| 124876 | USER | ops | RAL-LCG2 | NGI_UK (assigned to lcg-support@gridpp.rl.ac.uk) | on hold | less urgent | 2016-11-07 12:06 | 2017-11-13 16:55 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
| 117683 | USER | none | RAL-LCG2 | NGI_UK (assigned to lcg-support@gridpp.rl.ac.uk) | on hold | less urgent | 2015-11-18 11:36 | 2018-01-03 15:26 | Information System | CASTOR at RAL not publishing GLUE 2 |
Availability Report
All figures are availability percentages.

| Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment |
|---|---|---|---|---|---|---|---|
| 03/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 04/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 05/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 06/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 07/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 08/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 09/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
Hammercloud Test Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud.
| Day | Atlas HC | Atlas HC Echo | CMS HC | Comment |
|---|---|---|---|---|
| 20/12/17 | 100 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 21/12/17 | 98 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 22/12/17 | 100 | 0 | 98 | Atlas HC Echo - No test run in time bin |
| 23/12/17 | 100 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 24/12/17 | 0 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 25/12/17 | 100 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 26/12/17 | 100 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 27/12/17 | 100 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 28/12/17 | 100 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 29/12/17 | 100 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 30/12/17 | 93 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 31/12/17 | 100 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 01/01/18 | 100 | 0 | 100 | Atlas HC Echo - No test run in time bin |
| 02/01/18 | 100 | 0 | 100 | Atlas HC Echo - No test run in time bin |
Notes from Meeting
- The CMS quota on Echo will be increased from 250 TB to 500 TB; CMS will then do more stress testing.
- MICE have probably taken their final data, so the outstanding problems with the rate of transfer of data into Castor will no longer be relevant.