Revision as of 08:19, 25 June 2018
RAL Tier1 Operations Report for 25th June 2018
Review of Issues during the week 18th June to the 25th June 2018.
- 12/6/18: There was an internal network problem, which took around three hours to resolve. This seemed to mostly affect Facilities rather than Tier-1 services. The concern is not that a piece of hardware failed, but that traffic does not always switch seamlessly to the backup links (especially for IPv6).
- 19/6/18 09:00 - 13:00: IPv6 was not available. Fabric attempted a route switch, which failed. Traffic should have dropped back to IPv4 but didn't. This issue needs to be raised with Networking/Fabric to resolve it once and for all.
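The missing behaviour in the 19/6/18 incident was the fall-back from IPv6 to IPv4. As a minimal sketch of what that fall-back looks like at the application level (the function name and structure here are illustrative, not part of any RAL service):

```python
import socket

def connect_with_fallback(host, port, timeout=5.0):
    """Try an IPv6 connection first; if that fails, drop back to IPv4.

    This is the behaviour the routing layer should have provided
    transparently during the 19/6/18 outage.
    """
    last_err = None
    for family in (socket.AF_INET6, socket.AF_INET):
        try:
            infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        except socket.gaierror as err:
            last_err = err  # no address of this family; try the next one
            continue
        for _, socktype, proto, _, addr in infos:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            try:
                sock.connect(addr)
                return sock  # caller is responsible for closing
            except OSError as err:
                last_err = err
                sock.close()
    raise OSError(f"no route to {host}:{port} over IPv6 or IPv4") from last_err
```

Modern resolvers and transfer tools implement a variant of this ("Happy Eyeballs"); the point of the incident is that the network layer must not silently break it.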
Current operational status and issues
- None.
Resolved Castor Disk Server Issues
- gdss746 - ATLAS - atlasStripInput - d1t0. Currently in intervention.
- gdss685 - ATLAS - atlasTape - d1t0. Currently in intervention.
Ongoing Castor Disk Server Issues
- gdss776 - LHCb - LHCb_FAILOVER,LHCb-Disk - d1t0. Currently in intervention.
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- None.
Entries in GOC DB starting since the last report.
Declared in the GOC DB
- None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
- Update systems to use SL7 and configured by Quattor/Aquilon. (Tape servers done)
- Move to generic Castor headnodes.
- Networking
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
- Internal
- DNS servers will be rolled out within the Tier1 network.
- Infrastructure
- Testing of power distribution boards in the R89 machine room is being scheduled for some time late July / Early August. The effect of this on our services is being discussed.
Open GGUS Tickets (Snapshot taken during morning of the meeting).
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
135455 | cms | in progress | less urgent | 31/05/2018 | 04/06/2018 | File Transfer | Checksum verification at RAL | EGI |
135293 | ops | on hold | less urgent | 23/05/2018 | 04/06/2018 | Operations | [Rod Dashboard] Issues detected at RAL-LCG2 | EGI |
134685 | dteam | in progress | less urgent | 23/04/2018 | 11/06/2018 | Middleware | please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 | EGI |
124876 | ops | on hold | less urgent | 07/11/2016 | 13/11/2017 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI |
GGUS Tickets Closed Last Week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
135740 | none | solved | urgent | 20/06/2018 | 20/06/2018 | File Access | CVMFS issue on RAL for /cvmfs/dune.opensciencegrid.org/ | EGI |
135723 | lhcb | solved | top priority | 19/06/2018 | 19/06/2018 | File Transfer | lcgfts3 FTS server fails all transfers | WLCG |
135711 | cms | solved | urgent | 18/06/2018 | 22/06/2018 | CMS_Central Workflows | T1_UK_RAL production jobs failing | WLCG |
135516 | cms | closed | urgent | 06/06/2018 | 22/06/2018 | CMS_Facilities | T1_UK_RAL WN-analysis and HammerCloud failures | WLCG |
135133 | cms | closed | urgent | 15/05/2018 | 19/06/2018 | CMS_Data Transfers | Likely corrupted File at T1_UK_RAL | WLCG |
134703 | cms | closed | urgent | 23/04/2018 | 21/06/2018 | CMS_Data Transfers | Transfer failing from RAL_Disk | WLCG |
127597 | cms | closed | urgent | 07/04/2017 | 21/06/2018 | File Transfer | Check networking and xrootd RAL-CERN performance | EGI |
Availability Report
Target Availability for each site is 97.0% | Red <90% | Orange <97% |
Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments |
---|---|---|---|---|---|---|---|
2018-06-18 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-06-19 | 99 | 99 | 99 | 100 | 100 | 100 | |
2018-06-20 | 100 | 100 | 96 | 100 | 100 | 100 | |
2018-06-21 | 100 | 100 | 93 | 100 | 100 | 100 | |
2018-06-22 | 100 | 100 | 85 | 100 | 100 | 100 | |
2018-06-23 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-06-24 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-06-25 | 100 | 100 | 100 | 100 | 100 | 100 |
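The key above maps each daily availability figure to a cell colour. Following the stated thresholds (Red < 90%, Orange < 97%, against the 97.0% target), the rule can be sketched as (function name is illustrative):

```python
def availability_colour(value, target=97.0, red_threshold=90.0):
    """Map a daily availability percentage to the report's cell colour.

    Per the key above: Red below 90%, Orange below the 97.0% target,
    otherwise no highlight.
    """
    if value < red_threshold:
        return "red"
    if value < target:
        return "orange"
    return "none"
```

So the CMS figure of 85 on 2018-06-22 is red, while 93 and 96 fall in the orange band under this key.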
Hammercloud Test Report
Target Availability for each site is 97.0% | Red <90% | Orange <97% |
Day | Atlas HC | CMS HC | Comment |
---|---|---|---|
2018/06/11 | 97 | 72 | |
2018/06/12 | 97 | 100 | |
2018/06/13 | 95 | 77 | |
2018/06/14 | 92 | 45 | |
2018/06/15 | 85 | 73 | |
2018/06/16 | 95 | 100 | |
2018/06/17 | 97 | 66 | |
2018/06/18 | - | - |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
Notes from Meeting.
- Notes from meeting of 20th June:
- LHCb - A number of drive failures resulted in 8 files lost: 4 not on the name server and 4 just plain lost.
- An issue with files that appear to produce a different checksum depending on where the checksum is calculated, which is plain weird. This was thought to be a RAID issue, but investigations have suggested this is not the case.