From GridPP Wiki

Revision as of 14:03, 13 February 2018

RAL Tier1 Operations Report for 14th February 2018

Review of Issues during the week 7th February 2018 to 14th February 2018
  • We had an incident with our disk servers that handle tape recalls/migrations on our Gen instance. A network misconfiguration was introduced after they had been physically moved to a new rack location last Thursday. As a result, tape transfers were not running over the weekend; the issue was resolved on Monday morning. We are holding a post-mortem review of this incident with a view to improving the relevant processes and reducing the risk of this problem happening again.
Current operational status and issues
  • The problem of data flows through the site firewall being restricted is still present. New firewall kit has been purchased and delivered and is awaiting deployment.
Resolved Castor Disk Server Issues
  • None
Ongoing Castor Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
  • ALICE jobs 750. (Attempt to re-balance the farm.)
Notable Changes made since the last meeting.
  • The upgrade of Echo to the latest Ceph release ("Luminous") was carried out successfully during the second half of last week. This was done as a rolling update with the service available throughout.
  • Move to IPv6 connectivity - currently on 10Gbit links - to share the higher bandwidth (40Gbit) links used by IPv4. This was approved at the CAB meeting (13/2/18) and will be implemented on 21/2/2018.
  • ALICE VOBOXes (lcgvo07 and lcgvo09) and the LHCb VOBOX (lcgvo10) are now dual stack IPv4/IPv6.
  • The Hyper-K VO has been enabled on the batch farm.
  • Atlas have moved their FTS transfers from our "test" FTS service to the production one. This change was made at our request and effectively consolidates us on a single production FTS service.
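As an illustrative check of the dual-stack change above, a host that is fully dual stack should resolve to both IPv4 (A) and IPv6 (AAAA) records. The sketch below uses "localhost" as a stand-in; the real VOBOX FQDNs (e.g. for lcgvo07) are not given in this report and would need to be substituted:

```python
import socket

def resolved_families(host):
    """Return the set of address families (AF_INET / AF_INET6) that `host` resolves to."""
    return {info[0] for info in socket.getaddrinfo(host, None)}

# "localhost" stands in for a VOBOX FQDN (hypothetical here); a dual-stack
# host should yield both AF_INET and AF_INET6 entries.
fams = resolved_families("localhost")
print("dual stack" if {socket.AF_INET, socket.AF_INET6} <= fams else "single stack")
```

The same check can be run against any of the services listed later as IPv6 dual stack (Perfsonar, FTS3, the squids, the CVMFS Stratum-1 servers).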
Entries in GOC DB starting since the last report.
  • None
Declared in the GOC DB
  • No downtime scheduled in the GOC DB between 2018-02-07 and 2018-02-14
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Ongoing or Pending - but not yet formally announced:

  • Move to IPv6 connectivity - currently on 10Gbit links - to share the higher bandwidth (40Gbit) links used by IPv4.

Listing by category:

  • Castor:
    • Update systems to use SL7 and configure them via Quattor/Aquilon. (Tape servers done.)
    • Move to generic Castor headnodes.
  • Networking
    • Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
    • Move to IPv6 connectivity - currently on 10Gbit links - to share the higher bandwidth (40Gbit) links used by IPv4.
    • Replacement (upgrade) of RAL firewall.
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
  • Infrastructure
    • Testing of power distribution boards in the R89 machine room is being scheduled for late July / early August. The effect of this on our services is being discussed.
Open GGUS Tickets (Snapshot during morning of meeting)
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
117683 none on hold less urgent 18/11/2015 03/01/2018 Information System CASTOR at RAL not publishing GLUE 2 EGI
124876 ops on hold less urgent 07/11/2016 13/11/2017 Operations [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk EGI
127597 cms on hold urgent 07/04/2017 29/01/2018 File Transfer Check networking and xrootd RAL-CERN performance EGI
132589 lhcb in progress very urgent 21/12/2017 31/01/2018 Local Batch System Killed pilots at RAL WLCG
133399 atlas in progress less urgent 09/02/2018 12/02/2018 File Transfer Transfers to RAL-LCG2-ECHO fail with "Address already in use" WLCG
133421 snoplus.snolab.ca in progress urgent 12/02/2018 12/02/2018 File Transfer Failed Transfers since Friday 10:00 EGI
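The open-ticket snapshot above can be tallied by status in a few lines; the ticket data below is transcribed directly from the table and the helper itself is just an illustration:

```python
from collections import Counter

# (request id, vo, status) transcribed from the open-ticket snapshot above.
open_tickets = [
    (117683, "none", "on hold"),
    (124876, "ops", "on hold"),
    (127597, "cms", "on hold"),
    (132589, "lhcb", "in progress"),
    (133399, "atlas", "in progress"),
    (133421, "snoplus.snolab.ca", "in progress"),
]

by_status = Counter(status for _, _, status in open_tickets)
print(by_status["on hold"], by_status["in progress"])  # prints "3 3"
```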
GGUS Tickets Closed Last week
Request id Affected vo Status Priority Date of creation Last update Type of problem Subject Scope
132802 cms closed urgent 11/01/2018 09/02/2018 CMS_AAA WAN Access Low HC xrootd success rates at T1_UK_RAL WLCG
132935 atlas closed less urgent 18/01/2018 07/02/2018 Storage Systems UK RAL-LCG2: deletion errors WLCG
133092 atlas closed top priority 29/01/2018 12/02/2018 Other RAL FTS server in troubles WLCG
133425 ops verified less urgent 12/02/2018 13/02/2018 Operations [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-sw-csh-ops@arc-ce02.gridpp.rl.ac.uk EGI
Availability Report
Day ALICE ATLAS ATLAS-ECHO CMS LHCb OPS Comment
2018-02-07 100 100 100 100 100 100
2018-02-08 100 100 100 100 100 100
2018-02-09 100 100 100 100 100 100
2018-02-10 91 100 100 100 100 100
2018-02-11 99 100 100 100 100 100
2018-02-12 79 98 100 100 100 100
2018-02-13 100 100 100 100 100 100
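For reference, the daily figures above can be collapsed into a weekly average per VO; a small sketch with the numbers transcribed from the table:

```python
# Daily availability (%) for 2018-02-07 .. 2018-02-13, transcribed from the table above.
availability = {
    "ALICE":      [100, 100, 100, 91, 99, 79, 100],
    "ATLAS":      [100, 100, 100, 100, 100, 98, 100],
    "ATLAS-ECHO": [100] * 7,
    "CMS":        [100] * 7,
    "LHCb":       [100] * 7,
    "OPS":        [100] * 7,
}

# Weekly mean availability per VO, rounded to one decimal place.
weekly = {vo: round(sum(days) / len(days), 1) for vo, days in availability.items()}
print(weekly["ALICE"])  # prints 95.6
```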
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud

Day Atlas HC CMS HC Comment
2018/02/07 100 100
2018/02/08 98 100
2018/02/09 100 100
2018/02/10 90 100
2018/02/11 100 100
2018/02/12 94 100
2018/02/13 96 100
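A quick way to pick out the dips in the Atlas HammerCloud figures above (scores transcribed from the table; the 95% cut-off is an arbitrary choice for illustration, not an official threshold):

```python
# Atlas HC daily scores transcribed from the table above.
atlas_hc = {
    "2018/02/07": 100, "2018/02/08": 98, "2018/02/09": 100,
    "2018/02/10": 90, "2018/02/11": 100, "2018/02/12": 94,
    "2018/02/13": 96,
}

THRESHOLD = 95  # arbitrary illustrative cut-off
dips = [day for day, score in atlas_hc.items() if score < THRESHOLD]
print(dips)  # prints ['2018/02/10', '2018/02/12']
```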
Notes from Meeting.
  • It was noted that one morning this week Echo was delivering data to the worker nodes at around 15 GBytes/sec.
  • The MICE experiment is coming to an end. Future access to MICE data was briefly discussed.
  • It was noted that there are plans to connect the RAL network to JANET at 100Gbit.