RAL Tier1 Operations Report for 31st May 2017

Review of Issues during the week 24th to 31st May 2017.
  • A problem of ARP poisoning on the Tier1 network has been affecting some monitoring used by Echo. The cause is now believed to be understood and a fix (a setting change in the OPN Router) has been applied.
  • A fault on the site firewall is disrupting some specific data flows. It was first investigated in connection with videoconferencing problems, and it is expected to affect data that flows through the firewall (such as traffic to/from worker nodes).
  • Atlas carried out a large deletion test of files in Echo last week. Overall the results were pleasing, with the bulk of the files deleted successfully and in a reasonable timescale. Some (one to two thousand) files initially failed to delete; most of these were cleaned up manually, leaving a handful for debugging purposes, and the system subsequently deleted even those remaining files without manual intervention. (A retry sketch follows this list.)
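The handful of deletions that initially fail in this way can be retried by hand with the gfal2 Python bindings. This is a minimal sketch only, assuming a valid grid proxy; the gridftp endpoint and file paths below are invented placeholders, not taken from the report:

    #!/usr/bin/env python
    # Minimal sketch: retry failed deletions against Echo's gridftp gateway
    # using the gfal2 Python bindings. Endpoint and paths are placeholders.
    import gfal2

    FAILED = [
        "gsiftp://gridftp.echo.stfc.ac.uk/atlas/datadisk/example/file1",
        "gsiftp://gridftp.echo.stfc.ac.uk/atlas/datadisk/example/file2",
    ]

    def retry_deletions(urls, attempts=3):
        ctx = gfal2.creat_context()  # 'creat' (sic) is the gfal2 spelling
        still_failing = []
        for url in urls:
            for i in range(attempts):
                try:
                    ctx.unlink(url)  # remove the replica
                    break
                except gfal2.GError as err:
                    if i == attempts - 1:
                        still_failing.append((url, str(err)))
        return still_failing

    if __name__ == "__main__":
        for url, err in retry_deletions(FAILED):
            print("still failing: %s (%s)" % (url, err))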
Resolved Disk Server Issues
  • GDSS773 (LHCbDst - D1T0) crashed on Sunday (21st May). It was returned to service on Thursday morning (25th), although no fault was found during the diagnostic testing.
  • GDSS804 (LHCbDst - D1T0) was taken out of production on Tuesday 23rd as it was showing memory errors. However, the memory tests then failed to find anything and it was returned to service the following day.
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM, and these are affecting our CMS availability figures. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to revisit this issue now that Castor has been upgraded. (A sketch for probing the SRM by hand follows this list.)
  • LHCb Castor performance has been OK since the 2.1.16 update, although this was not under any high load. A load test (mimicking the stripping/merging campaign) was carried out last Wednesday (24th May). This was successful in that the specific performance/timeout problem seen before the Castor 2.1.16 upgrade did not recur. The main limitation encountered within Castor was load on the older disk servers in the instance. Following the 2.1.16 upgrade and this successful test, the item is being removed from the list of ongoing issues.
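When chasing the SRM "user timeout" failures by hand, a single timed stat against the endpoint reproduces what the SAM probe does. A minimal sketch, assuming a valid grid proxy and the gfal2 Python bindings; the SURL is an invented example, not a real path:

    #!/usr/bin/env python
    # Minimal sketch: time one SRM stat call to spot "user timeout"
    # behaviour by hand. The SURL is a placeholder.
    import time
    import gfal2

    SURL = "srm://srm-cms.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/cms/store/example"

    ctx = gfal2.creat_context()
    start = time.time()
    try:
        info = ctx.stat(SURL)
        print("stat OK in %.1fs (size=%d bytes)" % (time.time() - start, info.st_size))
    except gfal2.GError as err:
        print("stat FAILED after %.1fs: %s" % (time.time() - start, err))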
Ongoing Disk Server Issues
  • GDSS658 (AtlasScratchDisk - D1T0) crashed yesterday afternoon (30th May). It is still undergoing tests.
Limits on concurrent batch system jobs.
  • CMS Multicore: 550 (a sketch for checking usage against this cap follows)
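A minimal sketch of how usage against this cap might be checked, assuming an HTCondor batch system (which RAL ran at the time) and its Python bindings; the classad constraint used to pick out CMS multicore jobs is an assumption, not the site's actual expression:

    #!/usr/bin/env python
    # Minimal sketch: count running CMS multicore jobs against the cap.
    # The constraint below is a guess at how such jobs are tagged.
    import htcondor

    LIMIT = 550
    schedd = htcondor.Schedd()  # local schedd
    constraint = 'JobStatus == 2 && x509UserProxyVOName == "cms" && RequestCpus > 1'
    running = schedd.query(constraint, ["ClusterId"])
    print("CMS multicore running: %d / %d" % (len(running), LIMIT))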
Notable Changes made since the last meeting.
  • CMS Castor and GEN instances updated to Castor version 2.1.16-13 and the associated SRMs also upgraded to version 2.1.16.
  • For LHCb Castor the xroot manager was installed on the (LHCb) stager. This resolves a problem introduced by version 2.1.16 (and reported last week) that affected LHCb where a TURL returned by the SRM did not always work when used for xroot access owing to an incorrect hostname.
  • CEs are being migrated to use the load balancers in front of the argus service.
  • Batch access has been enabled for LIGO and the MICE pilot role.
  • A parameter change was made in the OPN Router to stop it providing proxy ARP information. (A verification sketch follows this list.)
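One way to confirm that change took effect is to ARP for an address outside the local subnet: a reply means the router is still proxying. A minimal sketch with scapy (needs root; the addresses are placeholders):

    #!/usr/bin/env python
    # Minimal sketch: probe for proxy ARP by ARPing an off-subnet address.
    # Any reply suggests proxy ARP is still enabled on the router.
    from scapy.all import ARP, Ether, srp

    OFF_SUBNET_IP = "192.0.2.10"  # example address, not on the local LAN

    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=OFF_SUBNET_IP),
        timeout=2, verbose=False,
    )
    if answered:
        for _, reply in answered:
            print("proxy ARP reply from %s" % reply[ARP].hwsrc)
    else:
        print("no reply: proxy ARP appears to be off")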
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Castor GEN instance (srm-alice, srm-biomed, srm-dteam, srm-ilc, srm-mice, srm-na62, srm-pheno, srm-snoplus, srm-t2k) | SCHEDULED | OUTAGE | 31/05/2017 10:00 | 31/05/2017 15:00 | 5 hours | Upgrade of Castor GEN instance to version 2.1.16.
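Declared downtimes such as the one above can also be pulled programmatically from the public GOC DB interface. A minimal sketch; the method and parameter names follow the GOCDB PI as generally documented and should be treated as assumptions:

    #!/usr/bin/env python
    # Minimal sketch: list ongoing declared downtimes for RAL-LCG2 from
    # the GOC DB programmatic interface (parameter names assumed).
    import xml.etree.ElementTree as ET
    import requests

    URL = "https://goc.egi.eu/gocdbpi/public/"
    params = {"method": "get_downtime", "topentity": "RAL-LCG2",
              "ongoing_only": "yes"}

    resp = requests.get(URL, params=params, timeout=30)
    resp.raise_for_status()
    for dt in ET.fromstring(resp.content).findall("DOWNTIME"):
        print(dt.findtext("SEVERITY"), dt.findtext("DESCRIPTION"))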
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links.

Listing by category:

  • Castor:
    • Move to generic Castor headnodes.
    • Merge AtlasScratchDisk into larger Atlas disk pool.
  • Networking
    • Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links.
    • Enable the first services on the production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar is already working over IPv6.)
  • Services
    • Put the argus systems behind load balancers to improve resilience.
    • The production FTS needs updating; the new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.) A REST query sketch follows this list.
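For reference, a minimal sketch of querying an FTS3 server over the REST interface that replaces SOAP; the host name and proxy path are placeholders, and the endpoint layout is an assumption based on the FTS3 REST API:

    #!/usr/bin/env python
    # Minimal sketch: list this user's transfer jobs via FTS3's REST API.
    # Host, port and proxy path are placeholders.
    import requests

    FTS = "https://lcgfts3.gridpp.rl.ac.uk:8446"  # example endpoint
    PROXY = "/tmp/x509up_u500"  # grid proxy: cert and key in one file

    resp = requests.get(FTS + "/jobs", cert=PROXY,
                        verify="/etc/grid-security/certificates", timeout=30)
    resp.raise_for_status()
    for job in resp.json():
        print(job.get("job_id"), job.get("job_state"))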
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Castor GEN instance (srm-alice, srm-biomed, srm-dteam, srm-ilc, srm-mice, srm-na62, srm-pheno, srm-snoplus, srm-t2k) | SCHEDULED | OUTAGE | 31/05/2017 10:00 | 31/05/2017 15:00 | 5 hours | Upgrade of Castor GEN instance to version 2.1.16.
srm-cms-disk.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 25/05/2017 10:00 | 25/05/2017 13:03 | 3 hours and 3 minutes | Upgrade of CMS Castor instance to version 2.1.16.
Open GGUS Tickets (Snapshot during morning of meeting)
  • The GGUS system was unavailable during the meeting, so there is no report on open GGUS tickets. (The GGUS service was back by the end of the morning.)
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud. All availability figures are percentages.

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment
24/05/17 | 100 | 100 | 100 | 97 | 100 | 100 | 99 | 99 | 100 | SRM Test failures (user timeout)
25/05/17 | 100 | 100 | 100 | 87 | 100 | 100 | 100 | 100 | 98 | SRM Test failures (user timeout)
26/05/17 | 100 | 100 | 98 | 98 | 100 | 100 | 100 | 100 | 100 | SRM Test failures. Atlas: one error "User belonging to VO not authorized to access space token ATLASDATADISK"; CMS: user timeout.
27/05/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | 100 | SRM Test failures (user timeout)
28/05/17 | 100 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | 100 | SRM Test failures (user timeout)
29/05/17 | 100 | 100 | 100 | 98 | 100 | 100 | 98 | 100 | 100 | SRM Test failures (user timeout)
30/05/17 | 100 | 100 | 100 | 94 | 100 | 100 | 92 | 100 | 100 | SRM Test failures (user timeout)
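As a quick cross-check of the week, the CMS column above can be averaged; a trivial sketch with the values transcribed from the table:

    #!/usr/bin/env python
    # Minimal sketch: weekly mean of the CMS availability column above.
    cms = [97, 87, 98, 98, 97, 98, 94]  # 24/05/17 .. 30/05/17
    print("CMS weekly average: %.1f%%" % (sum(cms) / float(len(cms))))  # 95.6%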
Notes from Meeting.
  • Steps are being taken towards CMS and LHCb testing their use of Echo.
  • Alice ran a lot of Monte Carlo jobs across the grid over the last weekend. However, the CPU efficiency of these jobs at RAL was poor. This is being investigated.
  • The Edinburgh Dirac site is transferring data in production, currently at around 2 to 4 TB per day. There is a total of around 400 TB to transfer in this tranche (roughly 100 to 200 days at the current rate).
  • The upgrading of the Castor SRMs means that we no longer have any external services using SL5.