RAL Tier1 Operations Report for 11th November 2015

Review of Issues during the week 4th to 11th November 2015.
  • We have been investigating the behaviour of some batch jobs, as there is a low level of failures that is not yet understood.
  • We are investigating a problem that has affected some Castor disk servers' ability to make DNS lookups (a minimal illustrative check is sketched below).
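The following is a minimal sketch, purely for illustration, of the kind of forward/reverse DNS check that can be run on an affected server. The hostname and address used here are placeholders; this is not the actual diagnostic procedure used at RAL.

  #!/usr/bin/env python3
  # Illustrative DNS health check -- placeholder names, not the RAL procedure.
  import socket

  HOSTNAME = "disk-server.example.org"   # placeholder, not a real Castor node
  ADDRESS = "192.0.2.10"                 # placeholder (TEST-NET-1 address)

  def forward_lookup(host):
      """Name -> address lookup; returns the addresses or the error message."""
      try:
          return sorted({info[4][0] for info in socket.getaddrinfo(host, None)})
      except socket.gaierror as err:
          return "lookup failed: %s" % err

  def reverse_lookup(addr):
      """Address -> name lookup; returns the name or the error message."""
      try:
          return socket.gethostbyaddr(addr)[0]
      except (socket.herror, socket.gaierror) as err:
          return "lookup failed: %s" % err

  print("forward:", forward_lookup(HOSTNAME))
  print("reverse:", reverse_lookup(ADDRESS))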
Resolved Disk Server Issues
  • gdss664 (AtlasTape - D0T1) was removed from service on the 28th Oct. The system was having problems running some network commands, which were resolved by a reboot. Following the replacement of a failing disk and a disk controller firmware update, the server was re-tested before being put back into service on Wednesday afternoon (4th Nov).
Current operational status and issues
  • LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
  • The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise, we have been working to understand a remaining low level of packet loss seen within part of our Tier1 network (an illustrative sampling approach is sketched below).
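As a rough illustration only (the target host and sample size are placeholders, and this is not the monitoring actually deployed on the Tier1 network), low-level loss on a path can be tracked by periodically sampling ping statistics:

  #!/usr/bin/env python3
  # Illustrative packet-loss sampler -- placeholder target, not the Tier1 monitoring.
  import re
  import subprocess

  TARGET = "remote-site.example.org"  # placeholder host
  COUNT = 100                         # pings per sample

  def sample_loss(host, count=COUNT):
      """Run ping and return the reported packet-loss percentage, or None."""
      result = subprocess.run(
          ["ping", "-q", "-c", str(count), host],
          capture_output=True, text=True)
      match = re.search(r"([\d.]+)% packet loss", result.stdout)
      return float(match.group(1)) if match else None

  loss = sample_loss(TARGET)
  print("%s: %s" % (TARGET, "no result" if loss is None else "%.1f%% loss" % loss))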
Ongoing Disk Server Issues
  • gdss707 (AtlasDataDisk - D1T0) has been out of production since Friday (16th Oct). The server was drained and is currently undergoing testing with fabric.
  • gdss720 (AtlasDataDisk - D1T0) failed on Monday evening (9th Nov). The error appears to be in the CPU. The server is being drained ahead of the CPU being swapped.
  • Both gdss663 (AtlasTape - D0T1) and gdss676 (CMSTape - D0T1) were taken out of service early this morning (11th Nov). In both cases the servers were not responding correctly to DNS lookups, similar to the problem seen recently on gdss664. The problem is under investigation. There are currently no migration candidates on either server.
  • gdss654 (LHCbRawRDst - D0T1) was taken out of service this morning. A second disk was found to have failed while a previously replaced disk was being rebuilt.


Notable Changes made since the last meeting.
  • Two tape servers are now running Castor 2.1.15.
  • Another set (of four) Castor D0T1 disk servers has been upgraded to SL6.
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.
  • Some detailed internal network re-configurations to enable the removal of the old 'core' switch from our network. This includes changing the way the UKLight router connects into the Tier1 network.

Listing by category:

  • Databases:
    • Switch LFC/3D to new Database Infrastructure.
  • Castor:
    • Update SRMs to new version (includes updating to SL6).
    • Update disk servers to SL6 (ongoing)
    • Update to Castor version 2.1.15.
  • Networking:
    • Complete changes needed to remove the old core switch from the Tier1 network.
    • Make routing changes to allow the removal of the UKLight Router.
  • Fabric:
    • Firmware updates on remaining EMC disk arrays (Castor, LFC)
Entries in GOC DB starting since the last report.
  • None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency     | State   | Creation   | Last Update | VO   | Subject
116866  | Green | Less Urgent | On Hold | 2015-10-12 | 2015-10-19  | SNO+ | snoplus support at RAL-LCG2 (pilot role)
116864  | Green | Urgent      | On Hold | 2015-10-12 | 2015-11-04  | CMS  | T1_UK_RAL AAA opening and reading test failing again...
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day      | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
04/11/15 | 100 | 100   | 98    | 100 | 100  | 92       | 100    | Single SRM test failure "could not open connection to srm-atlas.gridpp.rl.ac.uk:8443"
05/11/15 | 100 | 100   | 100   | 100 | 100  | 93       | 100    |
06/11/15 | 100 | 100   | 100   | 100 | 100  | 90       | 100    |
07/11/15 | 100 | 100   | 100   | 100 | 100  | 97       | 100    |
08/11/15 | 100 | 100   | 100   | 100 | 100  | 100      | N/A    |
09/11/15 | 100 | 100   | 100   | 100 | 100  | 93       | 100    |
10/11/15 | 100 | 100   | 100   | 100 | 100  | 100      | 100    |