Revision as of 14:47, 1 November 2017
RAL Tier1 Operations Report for 1st November 2017
Review of Issues during the week 25th October to 1st November 2017.
- At last week's meeting we reported that CMS had been switched over to xrootd.echo.stfc.ac.uk, meaning that CMS jobs used xrootd.echo.stfc.ac.uk as the primary means of accessing local data via xrootd. That change was reverted but has since been re-applied. A further issue was that long-lived CMS jobs needed to complete before the change was picked up by all CMS batch jobs.
Resolved Disk Server Issues
Current operational status and issues
- There is a problem on the site firewall that is affecting some specific data flows. Discussions have been taking place with the vendor. This affects data that flows through the firewall (such as to/from the worker nodes).
Ongoing Disk Server Issues
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
- Re-distribution of data in Echo onto the 2015 capacity hardware is ongoing. This is expected to complete in a few weeks.
- CMS batch switched to use Echo as the primary means of accessing local data via xrootd. If data is not found there, access fails over to Castor.
- The Echo gateways have had a parameter change so that the GridFTP gateways make better use of memory. This will allow the number of connections to each gateway server to be increased.
- A start has been made on updating Castor tape servers to SL7.
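The CMS failover behaviour described above (Echo as the primary source of local data, Castor as the fallback) can be sketched as follows. This is an illustrative sketch only: the Castor URL and the `opener` helper are hypothetical stand-ins, not the production CMS configuration.

```python
# Primary endpoint is taken from the report; the fallback URL is a
# hypothetical placeholder, not a real Castor redirector name.
PRIMARY = "root://xrootd.echo.stfc.ac.uk"
FALLBACK = "root://castor.example"

def read_local_data(path, opener):
    """Try Echo first; on any failure, fall back to Castor.

    `opener` stands in for an xrootd client open call and is
    expected to raise an exception on failure.
    """
    try:
        return opener(PRIMARY + path)
    except Exception:
        return opener(FALLBACK + path)
```

A caller would pass in whatever open routine the job framework uses; the point is only that Echo failures do not fail the job outright but redirect the read to Castor.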
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Ongoing or Pending - but not yet formally announced:
- Re-distribute the data in Echo onto the 2015 capacity hardware. (Ongoing)
- Update the LHCb Castor SRMs so as to be able to configure timeouts.
Listing by category:
- Castor:
- Update systems (initially tape servers) to use SL7 and configured by Quattor/Aquilon.
- Move to generic Castor headnodes.
- Echo:
- Re-distribute the data in Echo onto the remaining 2015 capacity hardware.
- Update to the next Ceph version ("Luminous").
- Networking:
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, all squids and the CVMFS Stratum-1 servers).
- Services:
- The Production and "Test" (Atlas) FTS3 services will be merged and will make use of a resilient distributed database.
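For the IPv6 dual-stack item above, a quick way to check whether a given service host resolves over IPv6 is to query for AAAA records via the standard resolver. This is a generic sketch, not a site tool; any hostname passed in is the caller's choice.

```python
import socket

def has_ipv6(host):
    """Return True if `host` resolves to at least one IPv6 address."""
    try:
        return bool(socket.getaddrinfo(host, None, socket.AF_INET6))
    except socket.gaierror:
        return False
```

Running this against each production service host would show which ones are already reachable as dual stack (as reported done for Perfsonar, the squids, and the CVMFS Stratum-1 servers).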
Entries in GOC DB starting since the last report.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
131299 | Green | Urgent | In Progress | 2017-10-24 | 2017-10-24 | CMS | T1_UK_RAL HammerCloud failures
131213 | Green | Urgent | Waiting for Reply | 2017-10-19 | 2017-10-23 | CMS | Issues with fallback requests
130949 | Green | Urgent | In Progress | 2017-10-06 | 2017-10-23 | CMS | Transfers failing to T1_UK_RAL_Disk
130207 | Amber | Urgent | On Hold | 2017-08-24 | 2017-10-11 | MICE | Timeouts when copying MICE reco data to CASTOR
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-10-05 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2.
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment
25/10/17 | 100 | 100 | 98 | 99 | 100 | 100 | SRM Test failures.
26/10/17 | 100 | 100 | 98 | 99 | 100 | 100 | SRM Test failures.
27/10/17 | 100 | 100 | 98 | 99 | 100 | 100 | SRM Test failures.
28/10/17 | 100 | 100 | 98 | 100 | 100 | 100 | SRM Test failures.
29/10/17 | 100 | 100 | 98 | 100 | 100 | 100 | SRM Test failures.
30/10/17 | 100 | 100 | 100 | 100 | 100 | 100 |
31/10/17 | 100 | 100 | 98 | 91 | 100 | 100 | Atlas: Single SRM test failure; CMS: CE test failures caused by xroot data access problems to Echo.
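As an informal cross-check, the daily figures in the availability table can be averaged over the week. This is a simple unweighted mean of the published daily numbers, not the official WLCG availability algorithm (which weights by hour and treats unknown periods differently).

```python
# Daily availability figures copied from the table above
# (25th to 31st October 2017).
daily = {
    "OPS":        [100, 100, 100, 100, 100, 100, 100],
    "Alice":      [100, 100, 100, 100, 100, 100, 100],
    "Atlas":      [98, 98, 98, 98, 98, 100, 98],
    "CMS":        [99, 99, 99, 100, 100, 100, 91],
    "LHCb":       [100, 100, 100, 100, 100, 100, 100],
    "Atlas Echo": [100, 100, 100, 100, 100, 100, 100],
}

# Unweighted weekly mean per VO, rounded to one decimal place.
weekly = {vo: round(sum(v) / len(v), 1) for vo, v in daily.items()}
```

This puts Atlas and CMS each at roughly 98.3% for the week, with the other VOs at 100%.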
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | Atlas HC | Atlas HC Echo | CMS HC | Comment
25/10/17 | 100 | 94 | 100 |
26/10/17 | 96 | 100 | 100 |
27/10/17 | 100 | 98 | 100 |
28/10/17 | 100 | 100 | 100 |
29/10/17 | 100 | 100 | 100 |
30/10/17 | 100 | 100 | 99 |
31/10/17 | 100 | 100 | 99 |