RAL Tier1 Operations Report for 15th March 2017

Review of Issues during the week 8th to 15th March 2017.
  • There was a very large FTS transfer queue for Atlas during the middle of last week. This was resolved by Atlas.
  • There was a problem with the Microsoft Hyper-V 2012 hypervisor cluster on Thursday afternoon, 9th March. This cluster hosts many service VMs. Two of the five nodes appear to have updated a particular component and attempted to move VMs to other nodes. It took a few hours for the system to recover. This affected a number of services, including BDIIs, FTS nodes and CEs; however, the resilience built into these services meant that the operational effect was small.
  • We have an ongoing problem with the SRM SAM tests for Atlas, which fail a lot of the time. We have confirmed this is not affecting Atlas operationally; it is just the tests that fail. We still have a GGUS ticket open with Atlas, as the test itself appears to be problematic.
  • There was a problem with the squid systems on Sunday (12th Mar): under high load they hit a resource limit. This is being resolved by increasing one of the OS parameters (see the max_filedesc change under Notable Changes below).
  • Yesterday (Tuesday 14th) there were problems with Castor during a reconfiguration of the back-end database nodes. A GGUS ticket was received from LHCb - other VOs were also affected.
Resolved Disk Server Issues
  • GDSS689 (AtlasDataDisk - D1T0) reported 'fsprobe' errors and was taken out of production last Wednesday (8th March). It was returned to service on Friday (10th) having had two disks replaced.
  • GDSS623 (GenTape - D0T1) had one partition go read-only on Friday evening, 10th Mar. It was put back in service read-only on Sunday (12th) so that the files awaiting migration to tape could be drained off.
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These affect our CMS availability figures, although the failure rate is lower than it was a few weeks ago.
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • Atlas Pilot (Analysis) 1500
  • CMS Multicore 460
Notable Changes made since the last meeting.
  • Increased the max_filedesc parameter on the squids to enable them to better cope with high load (see the sketch after this list).
  • ECHO: Two additional 'MON' boxes are being set up, bringing the total to five. The existing three can cope with normal activity, but the additional ones would speed up recoveries and starts. Two additional gateway nodes are also being set up (also bringing the total to five), which will improve access bandwidth.
  • The first of two chillers for the R89 machine room air-conditioning has been replaced.
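
As noted above, the squid change raised a file-descriptor related setting. Below is a minimal Python sketch of the OS-side check that goes with such a change; the target value is a hypothetical example, as the figure actually applied to the RAL squids is not given in this report.

    import resource

    # Hypothetical target value; the figure actually used on the RAL squids
    # is not stated in the report.
    REQUIRED_FDS = 16384

    def fd_limit_ok(required: int = REQUIRED_FDS) -> bool:
        """Check the soft per-process file-descriptor limit against a target."""
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        print(f"soft limit={soft}, hard limit={hard}, required={required}")
        return soft >= required

    if __name__ == "__main__":
        if not fd_limit_ok():
            print("Soft limit below target: raise the OS limit before "
                  "increasing the squid file-descriptor setting.")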
Declared in the GOC DB

None

Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Update Castor SRMs. Propose LHCb SRMs first - target date 22nd March.
  • Chiller replacement - work ongoing.
  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6.
    • Bring some newer disk servers ('14 generation) into service, replacing some older ('12 generation) servers.
  • Databases
    • Removal of "asmlib" layer on Oracle database nodes. (Ongoing)
  • Networking
    • Enable first services on production network with IPv6 once addressing scheme agreed.
  • Infrastructure:
    • Two of the chillers supplying the air-conditioning for the R89 machine room will be replaced.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Whole site | UNSCHEDULED | OUTAGE | 09/03/2017 12:10 | 09/03/2017 13:52 | 1 hour and 42 minutes | Problems with virtualisation infrastructure, some services degraded or unavailable
Whole site | SCHEDULED | WARNING | 08/03/2017 07:00 | 08/03/2017 11:00 | 4 hours | Warning on site during network intervention in preparation for IPv6.
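
The quoted durations follow directly from the start and end timestamps; a minimal, illustrative check in Python:

    from datetime import datetime

    FMT = "%d/%m/%Y %H:%M"

    def downtime_duration(start: str, end: str) -> str:
        """Return the length of a GOC DB entry as 'Xh Ym' from its timestamps."""
        delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
        hours, remainder = divmod(int(delta.total_seconds()), 3600)
        return f"{hours}h {remainder // 60}m"

    print(downtime_duration("09/03/2017 12:10", "09/03/2017 13:52"))  # 1h 42m
    print(downtime_duration("08/03/2017 07:00", "08/03/2017 11:00"))  # 4h 0m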
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
126905 | Green | Less Urgent | In Progress | 2017-03-02 | 2017-03-02 | solid | Finish commissioning cvmfs server for solidexperiment.org
126184 | Yellow | Less Urgent | In Progress | 2017-01-26 | 2017-02-07 | Atlas | Request of inputs for new sites monitoring
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-03-02 | | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 842); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | Atlas HC ECHO | CMS HC | Comment
08/03/17 | 100 | 100 | 60 | 96 | 100 | 75 | 196 | 100 | Atlas: Ongoing problems with SRM test (Atlas Castor restarted to try and fix this - but no effect); CMS - timeouts in SRM tests.
09/03/17 | 100 | 100 | 35 | 96 | 100 | 88 | 100 | 100 | Atlas: Ongoing problems with SRM test; CMS - timeouts in SRM tests.
10/03/17 | 100 | 100 | 94 | 95 | 100 | 96 | 100 | 100 | Atlas: Ongoing problems with SRM test; CMS - timeouts in SRM tests.
11/03/17 | 100 | 100 | 92 | 199 | 100 | 99 | 100 | 100 | Atlas: Ongoing problems with SRM test; CMS - timeouts in SRM tests.
12/03/17 | 100 | 100 | 96 | 196 | 96 | 97 | 94 | 100 | Atlas: Ongoing problems with SRM test; CMS - timeouts in SRM tests.
13/03/17 | 100 | 100 | 96 | 197 | 100 | 99 | 100 | 100 | Atlas: Ongoing problems with SRM test; CMS - timeouts in SRM tests.
14/03/17 | 100 | 100 | 92 | 84 | 100 | 92 | 100 | 93 | Atlas & CMS - problems during work on back-end Castor databases.
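
The figures above come from the WLCG SAM and HammerCloud test frameworks. As a rough, heavily simplified illustration of where such a daily percentage comes from (this is not the actual ARGO/SAM availability algorithm, which combines specific service metrics), a day's figure can be thought of as the fraction of monitored time during which the relevant tests were passing:

    def daily_availability(ok_minutes: float, monitored_minutes: float) -> float:
        """Toy availability: percentage of monitored time in an OK state.

        Illustrative simplification only; not the official WLCG computation.
        """
        if monitored_minutes <= 0:
            raise ValueError("no monitored time in the period")
        return round(100.0 * ok_minutes / monitored_minutes, 1)

    # Roughly 14.4 hours OK out of 24 monitored hours gives 60%, comparable
    # to the Atlas figure reported above for 08/03/17.
    print(daily_availability(14.4 * 60, 24 * 60))  # -> 60.0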
Notes from Meeting.
  • None yet.