RAL Tier1 Operations Report for 6th July 2016

Review of Issues during the week 29th June to 6th July 2016.
  • The problems with the "ACSLS" control software for the tape library crashing, as reported at previous meetings, continued up until last Wednesday (22nd June). Various actions were taken to understand the problem and we have worked closely with the vendor (Oracle). Since that date the system has been stable, with no crashes at all for a week. We do have a reduced number of Tier1 tape drives in use, although this has been sufficient to keep up with the existing workload. However, the cause of the problem has not been understood and our investigations continue. We will be changing various configurations of the tape library during the working day with the aim of understanding this problem, returning to what we believe is a known good configuration overnight.
  • At the start of this period we continued to see high load on the OPN link. However, this has eased off over the last week or so.
  • On Friday there was a problem with lcgwms04. A user had submitted many jobs needing large output sandboxes. The user was contacted and responded quickly. It took us a while to clear jobs already in lcgwms04 and there were some ongoing problems until the start of this week.
  • On Tuesday afternoon (28th June) there was a problem with the Atlas Castor instance that lasted for a few hours. This was resolved by a restart of processes. We have a suggestion as to the cause of this problem (a resource limit), although it is not conclusive.
Resolved Disk Server Issues
  • GDSS748 (AtlasDataDisk - D1T0) crashed on Sunday 19th June. There were problems with the RAID array, which were finally fixed by transplanting all the disks into another chassis that was re-named. The server was put back in service read-only on Tuesday 21st June. It is being drained ahead of sorting out the re-named system.
  • GDSS743 (AtlasDataDisk - D1T0) crashed in the early hours of Monday 20th June. Following two disk replacements it was put back in service the following day, initially read-only. Once the disk rebuilds had all completed it was set back to read/write.
Current operational status and issues
  • There is a problem, seen by LHCb, of a low but persistent rate of failures when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are attempted to storage at other sites. A recent modification has improved, but not completely fixed, this.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network (a minimal probe sketch follows this list).
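For illustration only: the production packet-loss monitoring here is done with perfSONAR, so the Python below is just a minimal sketch of the kind of quick check behind a loss figure, and the target hostname is a placeholder, not one of our probes.

    #!/usr/bin/env python3
    # Minimal packet-loss probe: ping a host and parse the loss figure.
    # Illustration only -- real monitoring at RAL uses perfSONAR.
    import re
    import subprocess

    def packet_loss(host, count=100):
        """Ping `host` `count` times and return the reported loss percentage."""
        result = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, check=False,
        )
        match = re.search(r"([\d.]+)% packet loss", result.stdout)
        return float(match.group(1)) if match else None

    loss = packet_loss("example.gridpp.rl.ac.uk")  # placeholder target
    print("packet loss: %s%%" % loss)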
Ongoing Disk Server Issues
  • None.
Notable Changes made since the last meeting.
  • Updated dteam VOMS server information on various services (a verification sketch follows this list).
  • The migration of Atlas data from "C" to "D" tapes continues. We have migrated around 650 of the 1300 tapes so far, i.e. we are around half way.
  • Successful UPS/Generator load test this morning.
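As a minimal sketch of how the dteam VOMS update might be verified, the Python below checks that a set of assumed VOMS endpoints accept TCP connections. The hostnames and port are assumptions (the report does not list the new servers), not confirmed values.

    #!/usr/bin/env python3
    # Check that assumed dteam VOMS endpoints accept TCP connections.
    # Hostnames and port are illustrative assumptions, not confirmed values.
    import socket

    VOMS_ENDPOINTS = [
        ("voms.hellasgrid.gr", 15004),   # assumed dteam VOMS server
        ("voms2.hellasgrid.gr", 15004),  # assumed dteam VOMS server
    ]

    def reachable(host, port, timeout=5):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in VOMS_ENDPOINTS:
        state = "OK" if reachable(host, port) else "UNREACHABLE"
        print("%s:%d %s" % (host, port, state))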
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
    • Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
    • Migration of data from T10KC to T10KD tapes (Affects Atlas & LHCb data).
  • Networking:
    • Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
  • Fabric
    • Firmware updates on remaining EMC disk arrays (Castor, LFC)
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
lcgwms06.gridpp.rl.ac.uk SCHEDULED OUTAGE 01/06/2016 11:00 30/06/2016 11:00 29 days Server lcgwms06.gridpp.rl.ac.uk Decommissioning
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
122364 Green Less Urgent On Hold 2016-06-27 2016-06-29 cvmfs support at RAL-LCG2 for solidexperiment.org
121687 Yellow Less Urgent On Hold 2016-05-20 2016-05-23 packet loss problems seen on RAL-LCG perfsonar
120810 Green Urgent In Progress 2016-04-13 2016-06-24 Biomed Decommissioning of SE srm-biomed.gridpp.rl.ac.uk - forbid write access for biomed users
120350 Green Less Urgent In Progress 2016-03-22 2016-05-06 LSST Enable LSST at RAL
119841 Red Less Urgent On Hold 2016-03-01 2016-04-26 LHCb HTTP support for lcgcadm04.gridpp.rl.ac.uk
117683 Yellow Less Urgent On Hold 2015-11-18 2016-04-05 CASTOR at RAL not publishing GLUE 2
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
29/06/16 100 100 100 100 100 100 100
30/06/16 100 100 100 100 100 100 100
01/07/16 100 100 100 100 100 100 100
02/07/16 100 100 100 100 100 100 100
03/07/16 100 100 100 100 100 100 100
04/07/16 100 100 100 100 100 100 100
05/07/16 100 100 100 100 100 100 100