RAL Tier1 Operations Report for 9th July 2018
Review of Issues during the week 25th June to 2nd July 2018.
Current operational status and issues
- None.
Resolved Castor Disk Server Issues
- None.
Ongoing Castor Disk Server Issues
- None.
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- None.
Entries in GOC DB starting since the last report.
Declared in the GOC DB
- No downtime scheduled in the GOC DB for the next two weeks.
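For reference, scheduled downtimes can also be checked programmatically. A minimal sketch, assuming the public GOCDB programmatic interface and its get_downtime method; the XML element names should be verified against the live GOCDB schema:

```python
# Query the GOCDB programmatic interface for downtimes affecting RAL-LCG2.
# Assumes the public PI endpoint and its get_downtime method; element names
# in the reply should be checked against the live GOCDB schema.
import urllib.request
import xml.etree.ElementTree as ET

GOCDB_PI = "https://goc.egi.eu/gocdbpi/public/?method=get_downtime&topentity=RAL-LCG2"

with urllib.request.urlopen(GOCDB_PI, timeout=30) as resp:
    tree = ET.parse(resp)

for dt in tree.getroot().findall("DOWNTIME"):
    # findtext returns None for any element name that does not match the schema
    print(dt.findtext("SEVERITY"), dt.findtext("DESCRIPTION"),
          dt.findtext("START_DATE"), dt.findtext("END_DATE"))
```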
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
  - Update systems to use SL7, configured by Quattor/Aquilon (tape servers done).
  - Move to generic Castor headnodes.
- Networking:
  - Extend the number of services on the production network with IPv6 dual stack (done for perfSONAR, FTS3, all squids and the CVMFS Stratum-1 servers); a verification sketch follows this list.
- Internal:
  - DNS servers will be rolled out within the Tier1 network.
- Infrastructure:
  - There is a scheduled three-day period (24th-26th July) during which RAL Tier-1 will be undertaking server room circuit breaker testing. Although these tests are not expected to affect our services, given their nature all Tier-1 production services should be considered "At Risk" for the duration. All VOs are asked to take the possibility of an unexpected outage into account for any high-priority or critical jobs they may wish to run during this period.
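As referenced in the Networking item above, dual-stack status can be spot-checked by asking the resolver for both A (IPv4) and AAAA (IPv6) records. A minimal sketch; the hostname below is an illustrative placeholder, not a definitive list of the affected services:

```python
# Spot-check IPv4/IPv6 dual-stack DNS for a list of service hostnames.
# The hostname below is an illustrative placeholder only.
import socket

HOSTS = ["lcgfts3.gridpp.rl.ac.uk"]  # placeholder; substitute the real service names

def has_records(host: str, family: socket.AddressFamily) -> bool:
    """Return True if the resolver returns at least one address of this family."""
    try:
        return bool(socket.getaddrinfo(host, None, family))
    except socket.gaierror:
        return False

for host in HOSTS:
    v4 = has_records(host, socket.AF_INET)   # A records
    v6 = has_records(host, socket.AF_INET6)  # AAAA records
    print(f"{host}: IPv4={v4} IPv6={v6} dual-stack={v4 and v6}")
```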
Open GGUS Tickets (snapshot taken during the morning of the meeting).
| Request id | Affected VO | Status | Priority | Date of creation (DD/MM/YYYY) | Last update (DD/MM/YYYY) | Type of problem | Subject | Scope |
|---|---|---|---|---|---|---|---|---|
| 134685 | dteam | in progress | less urgent | 23/04/2018 | 09/07/2018 | Middleware | please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 | EGI |
| 124876 | ops | reopened | less urgent | 07/11/2016 | 28/06/2018 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI |
GGUS Tickets Closed Last Week
| Request id | Affected VO | Status | Priority | Date of creation (DD/MM/YYYY) | Last update (DD/MM/YYYY) | Type of problem | Subject | Scope |
|---|---|---|---|---|---|---|---|---|
| 136002 | cms | solved | urgent | 09/07/2018 | 09/07/2018 | CMS_Facilities | T1_UK_RAL SE Xrootd read failure | WLCG |
| 135940 | cms | solved | urgent | 04/07/2018 | 06/07/2018 | CMS_Data Transfers | Transfers failing to RAL_Disk - no data available | WLCG |
| 135901 | ops | verified | less urgent | 03/07/2018 | 06/07/2018 | Operations | [Rod Dashboard] Issue detected : org.bdii.GLUE2-Validate@site-bdii.gridpp.rl.ac.uk | EGI |
| 135822 | cms | solved | very urgent | 26/06/2018 | 05/07/2018 | CMS_Central Workflows | File Read Problems for Production at T1_UK_RAL | WLCG |
| 135740 | none | closed | urgent | 20/06/2018 | 04/07/2018 | File Access | CVMFS issue on RAL for /cvmfs/dune.opensciencegrid.org/ | EGI |
| 135711 | cms | closed | urgent | 18/06/2018 | 06/07/2018 | CMS_Central Workflows | T1_UK_RAL production jobs failing | WLCG |
| 135455 | cms | closed | less urgent | 31/05/2018 | 09/07/2018 | File Transfer | Checksum verification at RAL | EGI |
| 135342 | ops | verified | less urgent | 27/05/2018 | 02/07/2018 | Operations | [Rod Dashboard] Issue detected : egi.eu.lowAvailability-/RAL-LCG2@RAL-LCG2_Availability | EGI |
| 135293 | ops | verified | less urgent | 23/05/2018 | 02/07/2018 | Operations | [Rod Dashboard] Issues detected at RAL-LCG2 | EGI |
Availability Report
Target availability for each site is 97.0% (red: <90%, orange: <97%). A banding sketch follows the table.
| Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments |
|---|---|---|---|---|---|---|---|
| 2018-06-25 | 98 | 98 | 99 | 100 | 100 | 100 | |
| 2018-06-26 | 100 | 100 | 98 | 100 | 100 | 100 | |
| 2018-06-27 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 2018-06-28 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 2018-06-29 | 100 | 100 | 99 | 100 | 100 | 100 | |
| 2018-06-30 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 2018-07-01 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 2018-07-02 | 100 | 100 | 100 | 100 | 100 | 100 | |
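For completeness, a minimal sketch of the colour banding used in these reports, with the thresholds stated above (red below 90%, orange below the 97% target, green otherwise):

```python
# Map a daily availability figure (percent) onto the report's colour bands:
# red < 90, orange < 97 (the target), green otherwise.
def availability_band(percent: float, target: float = 97.0, red_below: float = 90.0) -> str:
    if percent < red_below:
        return "red"
    if percent < target:
        return "orange"
    return "green"

assert availability_band(98) == "green"
assert availability_band(96) == "orange"
assert availability_band(58) == "red"
```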
HammerCloud Test Report
Target availability for each site is 97.0% (red: <90%, orange: <97%).
| Day | Atlas HC | CMS HC | Comment |
|---|---|---|---|
| 2018-06-25 | 100 | 96 | |
| 2018-06-26 | 0 | 100 | 0: no test run in time bin |
| 2018-06-27 | 0 | 58 | 0: no test run in time bin |
| 2018-06-28 | 0 | 100 | 0: no test run in time bin |
| 2018-06-29 | 0 | 92 | 0: no test run in time bin |
| 2018-06-30 | 100 | 98 | |
| 2018-07-01 | 99 | 100 | |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
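As the comments in the table note, a 0 in the HammerCloud columns is a "no test run in time bin" sentinel rather than a genuine 0% result, so any weekly summary should exclude those bins instead of averaging them in. A minimal sketch of that distinction, using the Atlas HC column above:

```python
# Average HammerCloud scores while treating 0 as the "no test run in time bin"
# sentinel (per the report key) rather than as a genuine 0% result.
SENTINEL = 0

atlas_hc = [100, 0, 0, 0, 0, 100, 99]  # Atlas HC column from the table above

valid = [score for score in atlas_hc if score != SENTINEL]
print(f"bins with tests: {len(valid)}/{len(atlas_hc)}, "
      f"mean over valid bins: {sum(valid) / len(valid):.1f}")
```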
Notes from the Meeting.
- GGUS ticket this morning regarding xrootd: failing requests were hitting the external gateway, which they should not (a read-test sketch follows these notes).
- Lots of transfers show similar failures. All failing requests hitting the xrootd servers come from CMS; none come from Atlas.
- There is not enough resource to handle the requests.
- 1. Requests that talk to the internal gateway may be redirected to the external one, though this is not confirmed.
- 2. The mapping within the node/Docker container is failing, i.e. a bad configuration somewhere; however, TB does not think this is our fault, as Atlas runs fine with the same configuration.
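A read-test sketch for reproducing the failures discussed above, assuming the XRootD client tools (xrdcp) are installed; the gateway hostnames and test file path are hypothetical placeholders, not the site's actual endpoints:

```python
# Attempt the same read through two XRootD gateways and report which fails.
# Requires the XRootD client (xrdcp). The hostnames and test path below are
# hypothetical placeholders; substitute the site's real endpoints.
import subprocess

GATEWAYS = {
    "internal": "xrootd-internal.example.ac.uk",  # placeholder
    "external": "xrootd-external.example.ac.uk",  # placeholder
}
TEST_FILE = "/store/test/readtest.root"  # placeholder path

for label, host in GATEWAYS.items():
    url = f"root://{host}/{TEST_FILE}"
    # -f overwrites the local target; output is captured for reporting
    result = subprocess.run(["xrdcp", "-f", url, "/dev/null"],
                            capture_output=True, text=True, timeout=120)
    status = "ok" if result.returncode == 0 else f"FAILED ({result.stderr.strip()[:80]})"
    print(f"{label} gateway {host}: {status}")
```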