RAL Tier1 Operations Report for 27th November 2018
Review of Issues during the week 20th November to the 27th November 2018.
- PPD Tier-2 reverted its switch to IPv6. This resolved a variety of problems for the Tier-2, some of which had been blamed on the Tier-1 (e.g. FTS service failures).
- We believe the CMS AAA issues have been resolved. We will close the tickets but are continuing to look at ways of building more resilience into the service. The change that appears to have fixed the problems was made on 23rd November: it reduced the chunk size requested from Echo from 64MB to 4MB, so small data requests are served much faster, at the cost of a slight (~10%) reduction in performance for large data requests (a toy model of this trade-off is sketched after this list). This change was made only to the CMS AAA service. Since then the SAM tests have all been passing, and 90% of the CMS AAA Hammer Cloud test jobs have been passing, which is extremely good. The Hammer Cloud tests involve jobs at other sites requesting data from RAL; there can be a significant failure rate that has nothing to do with the site, and anything above a 70% success rate is considered a pass. With the new chunk size in place, the throughput across the proxy machines is much more balanced.
- The problem reported by NA62 the previous week, when they could not recall data in a timely manner, was the result of a forgotten cron job on the new system. This cron job assigns new media to tape pools as they run short (a sketch of its logic follows this list). The ATLAS tape pool ran out of tapes and a backlog of 200,000 files built up before this was noticed. The tape system prioritises writing to tape over recalls (to ensure data is safe), so once the problem was fixed the next ~48 hours were dominated by clearing this backlog.
- Since 22nd November, SAM tests against the (old, tape-only) CMS Castor instance appear to be "missing" from the reports (and in some plots appear to indicate 100% failure). If we check the actual results, they are passing. We do not currently understand the issue; the migration to the new endpoint is only a week away.
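The chunk-size trade-off above can be illustrated with a toy latency/bandwidth model. This is a sketch only: the per-chunk overhead and throughput figures below are assumptions chosen to show the shape of the effect, not measurements from the RAL proxies, and the model simply assumes the proxy fetches whole chunks from Echo for each read.

```python
# Toy model of the proxy chunk-size trade-off described above.
# LATENCY_S and BANDWIDTH are illustrative assumptions, not measured
# values from the CMS AAA proxies or Echo.
import math

LATENCY_S = 0.001        # assumed per-chunk request overhead to Echo (s)
BANDWIDTH = 500 * 2**20  # assumed sustained throughput (bytes/s)
MiB = 2**20

def service_time(request_bytes: int, chunk_bytes: int) -> float:
    """Time to satisfy a read, assuming whole chunks are fetched from Echo."""
    chunks = math.ceil(request_bytes / chunk_bytes)
    fetched = chunks * chunk_bytes  # the proxy pulls whole chunks
    return chunks * LATENCY_S + fetched / BANDWIDTH

for req in (1 * MiB, 1024 * MiB):
    t64 = service_time(req, 64 * MiB)
    t4 = service_time(req, 4 * MiB)
    print(f"{req // MiB:5d} MiB read: 64 MiB chunks {t64:6.3f}s, "
          f"4 MiB chunks {t4:6.3f}s")
```

With these assumed numbers, a 1 MiB read drops from ~0.13s to ~0.01s (it no longer pulls a whole 64 MiB chunk), while a 1 GiB read slows by roughly 10% from the extra per-chunk round trips, matching the behaviour described above.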
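For the NA62/tape item, here is a minimal sketch of the logic the forgotten cron job performs: top up any tape pool whose supply of writable media runs short. The pool names, threshold, and tape-system stand-ins are illustrative assumptions; the report does not describe the actual CASTOR tooling.

```python
#!/usr/bin/env python3
# Hypothetical sketch of the cron job described above. Pool names, the
# threshold, and the in-memory "tape system" are illustrative assumptions.

MIN_FREE_TAPES = 10  # assumed threshold before new media are assigned

# Stand-in for a query of the tape system: pool -> writable tapes left.
pool_free = {"atlasTape": 3, "cmsTape": 25, "lhcbTape": 12}
unassigned_media = ["CT" + str(n).zfill(4) for n in range(100)]  # blank tapes

def assign_new_media(pool: str, count: int) -> None:
    """Move blank tapes into the pool (a real job would call the tape system)."""
    for _ in range(count):
        tape = unassigned_media.pop()
        pool_free[pool] += 1
        print(f"assigned {tape} to {pool}")

# The cron body: top up any pool running short. Without this running,
# a pool can silently exhaust its tapes and recalls back up behind writes.
for pool, free in pool_free.items():
    if free < MIN_FREE_TAPES:
        assign_new_media(pool, MIN_FREE_TAPES - free)
```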
Current operational status and issues
- NTR
Resolved Castor Disk Server Issues
Machine | VO | DiskPool | dxtx | Comments |
---|---|---|---|---|
- | - | - | - | - |
Ongoing Castor Disk Server Issues
Machine | VO | DiskPool | dxtx | Comments |
---|---|---|---|---|
- | - | - | - | - |
Limits on concurrent batch system jobs.
- None currently enforced.
Notable Changes made since the last meeting.
- NTR
Entries in the GOC DB starting since the last report.
Service | ID | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|---|
- | - | - | - | - | - | - | - |
Declared in the GOC DB
Service | ID | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|---|
CASTOR | 26338 | Yes | Outage | 20/11/2018 | 20/11/2018 | 3Hrs | CASTOR out as part of Oracle Patch Installation (Neptune and Pluto environments) |
- No ongoing downtime
- No downtime scheduled in the GOC DB for the next 2 weeks
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot taken during the morning of the meeting).
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
138406 | cms | in progress | urgent | 21/11/2018 | 21/11/2018 | CMS_Data Transfers | Stuck transfers from RAL_Buffer | WLCG |
138361 | t2k.org | in progress | less urgent | 19/11/2018 | 19/11/2018 | Other | RAL-LCG2: t2k.org LFC to DFC transition | EGI |
138356 | cms | in progress | urgent | 19/11/2018 | 20/11/2018 | CMS_Data Transfers | Transfers failing from T1_UK_RAL_Buffer to T1_UK_RAL_Disk | WLCG |
138327 | cms | in progress | urgent | 16/11/2018 | 20/11/2018 | CMS_Data Transfers | RAL FTS reporting connection issue with many hosts | WLCG |
138033 | atlas | in progress | urgent | 01/11/2018 | 15/11/2018 | Other | singularity jobs failing at RAL | EGI |
137897 | enmr.eu | waiting for reply | less urgent | 23/10/2018 | 20/11/2018 | Accounting | enmr.eu accounting at RAL | EGI |
137822 | lhcb | in progress | top priority | 18/10/2018 | 14/11/2018 | File Transfer | FTS server seems in bad state. | WLCG |
137650 | cms | in progress | top priority | 09/10/2018 | 19/11/2018 | CMS_AAA WAN Access | Low HC xrootd success rates at T1_UK_RAL | WLCG |
GGUS Tickets Closed Last week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
138331 | cms | solved | urgent | 16/11/2018 | 19/11/2018 | CMS_Data Transfers | Possible expired proxy at RAL | WLCG |
138315 | cms | solved | urgent | 15/11/2018 | 19/11/2018 | CMS_Data Transfers | Transfers failing from T2_US_Wisconsin to T1_UK_RAL_Disk | WLCG |
138218 | cms | solved | urgent | 09/11/2018 | 14/11/2018 | CMS_Data Transfers | Transfers failing from RAL_Buffer to TIFR | WLCG |
138007 | snoplus.snolab.ca | closed | less urgent | 30/10/2018 | 19/11/2018 | File Access | RAL drops connection after 180s of downloading | EGI |
138002 | cms | closed | top priority | 30/10/2018 | 19/11/2018 | CMS_Data Transfers | Issues with RAL FTS | WLCG |
137994 | cms | closed | urgent | 30/10/2018 | 19/11/2018 | CMS_Data Transfers | Transfers failing between RAL and T1_FR_CCIN2P3_Disk | WLCG |
137942 | cms | closed | urgent | 25/10/2018 | 19/11/2018 | CMS_Data Transfers | Failing transfers via IPv6 between T1_UK_RAL and T1_DE_KIT | WLCG |
137153 | t2k.org | verified | urgent | 12/09/2018 | 20/11/2018 | Data Management - generic | LFC entry has file size 0, prevents registering of additional replicas | EGI |
Availability Report
Target Availability for each site is 97.0%. Key: Red <90%; Orange <97%.
Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments |
---|---|---|---|---|---|---|---|
2018-11-14 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-11-15 | 100 | 100 | 100 | 100 | 96 | 100 | |
2018-11-16 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-11-17 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-11-18 | 100 | 100 | 35 | 100 | 100 | 100 | CMS problem was the CE tests not running, see GGUS#138351. |
2018-11-19 | 100 | 100 | 45 | 100 | 100 | 100 | CMS problem was the CE tests not running, see GGUS#138351. |
2018-11-20 | 100 | 100 | 56 | 100 | 100 | 87 | CMS problem was the CE tests not running, see GGUS#138351. |
HammerCloud Test Report
Target Availability for each site is 97.0%. Key: Red <90%; Orange <97%.
Day | Atlas HC | CMS HC | Comment |
---|---|---|---|
2018-11-06 | 98 | 100 | |
2018-11-07 | 100 | 99 | |
2018-11-08 | 100 | 99 | |
2018-11-09 | 100 | 98 | |
2018-11-10 | 100 | 96 | |
2018-11-11 | 100 | 92 | |
2018-11-12 | 100 | 99 | |
2018-11-13 | 100 | 99 |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
Notes from Meeting.