
Revision as of 07:53, 27 November 2018

RAL Tier1 Operations Report for 27th November 2018

Review of Issues during the week 20th November to the 27th November 2018.
  • PPD Tier-2 reverted its switch to IPv6. This resolved a variety of problems for the Tier-2, some of which were being blamed on the Tier-1 (e.g. FTS service failures).
  • We believe the CMS AAA issues have been resolved. We will close the tickets but are continuing to look at ways of building more resilience into the service. The change that appears to have fixed the problems was made on the 23rd November: it reduced the chunk size being requested from Echo from 64MB to 4MB, meaning that small data requests would be served much faster, with a slight (~10%) reduction in performance for large data requests (a rough illustration of this trade-off is sketched below this list). These changes were only made to the CMS AAA service. Since then the SAM tests have all been passing. 90% of the CMS AAA Hammer Cloud test jobs have been passing, which is extremely good. The Hammer Cloud tests involve jobs at other sites requesting data from RAL; there can be a significant failure rate that is nothing to do with the site, and anything above a 70% success rate is considered a pass. The throughput on the proxy machines is much more balanced with the new chunk size in place.
  • The problem reported by NA62 in the previous week, when they couldn’t recall data in a timely manner, was the result of a forgotten cron job on the new system. This cron assigns new media to tape pools as they run short (a sketch of this kind of check follows below this list). The ATLAS tape pool ran out of tapes and a 200,000-file backlog built up before this was noticed. The tape system prioritises writing to tape above recalls (to ensure data is safe), so once the problem was fixed the next ~48 hours were dominated by clearing this backlog.
  • Since 22nd November, SAM tests against the (old, tape-only) CMS Castor instance appear to be "missing" from the reports (and in some plots appear to indicate 100% failure). If we check the actual results, they are passing. We do not currently understand the issue; the migration to the new endpoint is only a week away.
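The effect of the chunk-size change can be put into rough numbers. The sketch below is illustrative only and is not RAL tooling; the read sizes (a 1MB sparse read, a 2GB sequential read) and the assumption that whole chunks are always fetched are our own, chosen just to show the trade-off:

  # Rough illustrative sketch (not RAL tooling) of the Echo chunk-size trade-off.
  OLD_CHUNK = 64 * 1024**2   # 64MB chunks requested from Echo before the change
  NEW_CHUNK = 4 * 1024**2    # 4MB chunks requested after 23rd November

  def bytes_fetched(read_size, chunk):
      """Bytes pulled from Echo for one contiguous read, assuming whole
      chunks are always fetched (a simplifying assumption)."""
      n_chunks = -(-read_size // chunk)      # ceiling division
      return n_chunks * chunk

  small = 1 * 1024**2    # hypothetical 1MB sparse analysis read
  large = 2 * 1024**3    # hypothetical 2GB sequential read

  print(bytes_fetched(small, OLD_CHUNK) // bytes_fetched(small, NEW_CHUNK))  # 16
  print(-(-large // NEW_CHUNK), "vs", -(-large // OLD_CHUNK), "requests")    # 512 vs 32

  # A small read previously pulled ~16x more data from Echo than needed, while a
  # large read now takes ~16x more requests - consistent with much faster small
  # reads at the cost of a modest (~10%) hit on large sequential transfers.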

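For context, the forgotten cron performs the kind of periodic top-up check sketched below. This is a hypothetical illustration only: free_tapes_in_pool() and assign_new_media() are invented placeholders for the real tape-system commands, and the pool names and threshold are assumptions rather than the actual configuration:

  # Hypothetical sketch of the check the forgotten cron job performs: top up a
  # tape pool with new media whenever its free-tape count runs short.  The
  # helper functions, pool names and threshold are all invented placeholders.
  TAPE_POOLS = ["atlasTape", "cmsTape", "lhcbTape"]   # illustrative names only
  MIN_FREE_TAPES = 10                                 # assumed threshold

  def free_tapes_in_pool(pool):
      # Placeholder: in reality this would query the tape system for the pool.
      return 0

  def assign_new_media(pool, count):
      # Placeholder: in reality this would add blank tapes to the pool.
      print("would assign %d new tape(s) to pool %s" % (count, pool))

  def top_up_pools():
      for pool in TAPE_POOLS:
          free = free_tapes_in_pool(pool)
          if free < MIN_FREE_TAPES:
              assign_new_media(pool, MIN_FREE_TAPES - free)

  if __name__ == "__main__":
      # Run regularly from cron so a pool (e.g. the ATLAS one) cannot silently
      # run out of tapes and let a recall/migration backlog build up.
      top_up_pools()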

Current operational status and issues
  • NTR
Resolved Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
- - - - -
Ongoing Castor Disk Server Issues
Machine VO DiskPool dxtx Comments
gdss788 WLCG - gridTape d0t1


Limits on concurrent batch system jobs.
  • None currently enforced.
Notable Changes made since the last meeting.
  • All non-LHC VOs were migrated to the new consolidated Castor tape instance (on Thursday 15th November).
Entries in GOC DB starting since the last report.
Service ID Scheduled? Outage/At Risk Start End Duration Reason
- - - - - - - -
Declared in the GOC DB
Service ID Scheduled? Outage/At Risk Start End Duration Reason
CASTOR 26338 Yes Outage 20/11/2018 20/11/2018 3Hrs CASTOR out as part of Oracle Patch Installation (Neptune and Pluto environments)
  • No ongoing downtime
  • No downtime scheduled in the GOCDB for next 2 weeks
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot taken during morning of the meeting).

Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
138406 cms in progress urgent 21/11/2018 21/11/2018 CMS_Data Transfers Stuck transfers from RAL_Buffer WLCG
138361 t2k.org in progress less urgent 19/11/2018 19/11/2018 Other RAL-LCG2: t2k.org LFC to DFC transition EGI
138356 cms in progress urgent 19/11/2018 20/11/2018 CMS_Data Transfers Transfers failing from T1_UK_RAL_Buffer to T1_UK_RAL_Disk WLCG
138327 cms in progress urgent 16/11/2018 20/11/2018 CMS_Data Transfers RAL FTS reporting connection issue with many hosts WLCG
138033 atlas in progress urgent 01/11/2018 15/11/2018 Other singularity jobs failing at RAL EGI
137897 enmr.eu waiting for reply less urgent 23/10/2018 20/11/2018 Accounting enmr.eu accounting at RAL EGI
137822 lhcb in progress top priority 18/10/2018 14/11/2018 File Transfer FTS server seems in bad state. WLCG
137650 cms in progress top priority 09/10/2018 19/11/2018 CMS_AAA WAN Access Low HC xrootd success rates at T1_UK_RAL WLCG
GGUS Tickets Closed Last week
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
138331 cms solved urgent 16/11/2018 19/11/2018 CMS_Data Transfers Posible expired proxy at RAL WLCG
138315 cms solved urgent 15/11/2018 19/11/2018 CMS_Data Transfers Transfers failing from T2_US_Wisconsin to T1_UK_RAL_Disk WLCG
138218 cms solved urgent 09/11/2018 14/11/2018 CMS_Data Transfers Transfers failing from RAL_Buffer to TIFR WLCG
138007 snoplus.snolab.ca closed less urgent 30/10/2018 19/11/2018 File Access RAL drops connection after 180s of downloading EGI
138002 cms closed top priority 30/10/2018 19/11/2018 CMS_Data Transfers Issues with RAL FTS WLCG
137994 cms closed urgent 30/10/2018 19/11/2018 CMS_Data Transfers Transfers failing between RAL and T1_FR_CCIN2P3_Disk WLCG
137942 cms closed urgent 25/10/2018 19/11/2018 CMS_Data Transfers Failing transfers via IPv6 between T1_UK_RAL and T1_DE_KIT WLCG
137153 t2k.org verified urgent 12/09/2018 20/11/2018 Data Management - generic LFC entry has file size 0, prevents registering of additional replicas EGI

Availability Report

Target Availability for each site is 97.0% (Red <90%, Orange <97%)
Day Atlas Atlas-Echo CMS LHCB Alice OPS Comments
2018-11-14 100 100 100 100 100 100
2018-11-15 100 100 100 100 96 100
2018-11-16 100 100 100 100 100 100
2018-11-17 100 100 100 100 100 100
2018-11-18 100 100 35 100 100 100 CMS problem was the CE tests not running, see GGUS#138351.
2018-11-19 100 100 45 100 100 100 CMS problem was the CE tests not running, see GGUS#138351.
2018-11-20 100 100 56 100 100 87 CMS problem was the CE tests not running, see GGUS#138351.
Hammercloud Test Report
Target Availability for each site is 97.0% (Red <90%, Orange <97%)
Day Atlas HC CMS HC Comment
2018-11-06 98 100
2018-11-07 100 99
2018-11-08 100 99
2018-11-09 100 98
2018-11-10 100 96
2018-11-11 100 92
2018-11-12 100 99
2018-11-13 100 99

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Notes from Meeting.