Revision as of 13:02, 9 February 2017
RAL Tier1 Operations Report for 8th February 2017
Review of Issues during the fortnight 25th January to 8th February 2017.
- There was also a problem with ALICE after the 'GEN' upgrade. ALICE require a special version of the xroot component for Castor. Checks that the xroot component would install under 2.1.15 had been made, but a newer version was needed. Once this had been provided there was a further ALICE-specific configuration error that had to be tracked down. This caused a significant loss of availability for ALICE (ALICE tests failed between the 26th and 30th January).
- Since the Castor upgrade we have seen a couple of further problems:
- There has been a problem with the LHCb instance: a database resource (the number of cursors) becomes exhausted, and we have had to restart the service to clear stuck transfers (on 1st Feb). A similar operation was carried out for Atlas (on 31st Jan).
- We have been failing tests for CMS xroot redirection. This appears to have started a couple of days after the CMS stager upgrade and is not yet understood.
- We are also failing CMS tests for an SRM endpoint defined in the GOC DB but not in production ("srm-cms-disk"). This should not have tests running against it and needs following up with CMS. Even though this test should not matter, we would like to understand why it has stopped working after the Castor upgrade.
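These SRM and xrootd failures can be reproduced outside the SAM framework. Below is a minimal sketch (not the SAM test itself) using the gfal2 Python bindings to stat a path through each endpoint; the srm-cms-disk hostname is taken from this report, while the xrootd redirector host and both file paths are placeholders that would need replacing with real CMS namespace entries.

```python
# Hedged sketch: probe the problem endpoints with the gfal2 Python bindings.
# srm-cms-disk.gridpp.rl.ac.uk is named in this report; the xrootd host and
# both file paths are placeholders, not real production values.
import gfal2

ctx = gfal2.creat_context()

endpoints = [
    "srm://srm-cms-disk.gridpp.rl.ac.uk/cms/test/file",     # placeholder path
    "root://cms-xrootd.example.rl.ac.uk//store/test/file",  # placeholder host and path
]

for url in endpoints:
    try:
        info = ctx.stat(url)  # stat via the matching gfal2 protocol plugin
        print("OK   {0} (size {1} bytes)".format(url, info.st_size))
    except gfal2.GError as err:
        print("FAIL {0}: {1}".format(url, err.message))
```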
Resolved Disk Server Issues
- GDSS780 (LHCbDst - D1T0) crashed on 25th Jan. It was returned to service later that day after a memory swap-around.
- GDSS687 (AtlasDataDisk - D1T0) was removed from production on 27th January when it was found to have two faulty disk drives. It was returned to service on the 30th after the drive replacements.
- GDSS776 (LHCbDst - D1T0) crashed on 3rd Feb. It was returned to service on the evening of the same day after being checked. Five files were lost from the time of the crash.
Current operational status and issues
- There is a problem seen by LHCb of a low but persistent rate of failure when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are attempted to storage at other sites.
- We have been seeing a rate of failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. A correction to the list of CMS 'services' being tested is helping with the resulting availability measure.
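Since one of the failing tests targets an endpoint that is declared in the GOC DB but not in production, it may help to list exactly what the GOC DB publishes for the site. A rough sketch, assuming the public GOC DB programmatic interface (goc.egi.eu/gocdbpi) and its get_service_endpoint method; the XML element names below follow that interface but should be verified:

```python
# Hedged sketch: list SRM endpoints the GOC DB declares for RAL-LCG2 and show
# their production/monitoring flags. Assumes the public GOC DB PI; element
# names (SERVICE_ENDPOINT, HOSTNAME, IN_PRODUCTION, NODE_MONITORED) should be
# double-checked against the interface documentation.
import urllib.request
import xml.etree.ElementTree as ET

GOCDB_QUERY = ("https://goc.egi.eu/gocdbpi/public/"
               "?method=get_service_endpoint&sitename=RAL-LCG2&service_type=SRM")

with urllib.request.urlopen(GOCDB_QUERY) as resp:
    root = ET.parse(resp).getroot()

for ep in root.findall("SERVICE_ENDPOINT"):
    print("{0}: in_production={1} monitored={2}".format(
        ep.findtext("HOSTNAME"),
        ep.findtext("IN_PRODUCTION"),
        ep.findtext("NODE_MONITORED")))
```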
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- Castor 2.1.15 updates carried out on the GEN and LHCb stagers.
- Migration of LHCb data from 'C' to 'D' tapes has been completed. All Tier1 data is now on T10KD tapes.
- CMS PhEDEx debug transfers switched from CASTOR to Ceph. (This had been tried previously but the change was reverted.)
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | WARNING | 01/03/2017 07:00 | 01/03/2017 11:00 | 4 hours | Warning on site during network intervention in preparation for IPv6. |
All Castor and ECHO storage and Perfsonar. | SCHEDULED | WARNING | 22/02/2017 07:00 | 22/02/2017 11:00 | 4 hours | Warning on Storage and Perfsonar during network intervention in preparation for IPv6. |
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Merge AtlasScratchDisk into larger Atlas disk pool.
Listing by category:
- Castor:
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Networking:
- Enabling IPv6 onto production network.
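As a simple readiness check ahead of these interventions, one can confirm that a given service host publishes an AAAA record and accepts IPv6 TCP connections. A minimal sketch; the host and port below are placeholders, not endpoints named in this report:

```python
# Hedged sketch: check that a host resolves over IPv6 and accepts a TCP
# connection on the given port. HOST and PORT are placeholders.
import socket

HOST, PORT = "service.example.rl.ac.uk", 443  # placeholder endpoint

try:
    addrs = socket.getaddrinfo(HOST, PORT, socket.AF_INET6, socket.SOCK_STREAM)
except socket.gaierror as err:
    print("No AAAA record / IPv6 address for {0}: {1}".format(HOST, err))
else:
    for family, socktype, proto, _, sockaddr in addrs:
        s = socket.socket(family, socktype, proto)
        s.settimeout(5)
        try:
            s.connect(sockaddr)
            print("IPv6 connect OK: {0} port {1}".format(sockaddr[0], PORT))
        except OSError as err:
            print("IPv6 connect failed to {0}: {1}".format(sockaddr[0], err))
        finally:
            s.close()
```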
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
IPv6 testbed nodes | UNSCHEDULED | OUTAGE | 01/02/2017 07:30 | 01/02/2017 12:00 | 4 hours and 30 minutes | RAL IPv6 testbed network intervention |
srm-cms-disk.gridpp.rl.ac.uk, srm-cms.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 31/01/2017 10:00 | 31/01/2017 11:46 | 1 hour and 46 minutes | Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded). |
srm-alice.gridpp.rl.ac.uk, | UNSCHEDULED | OUTAGE | 27/01/2017 16:00 | 30/01/2017 14:00 | 2 days, 22 hours | Continuing problems with Alice SRM xrootd access |
srm-alice.gridpp.rl.ac.uk, | UNSCHEDULED | OUTAGE | 27/01/2017 12:00 | 27/01/2017 16:00 | 4 hours | Continuing problems with Alice SRM xrootd access |
srm-alice.gridpp.rl.ac.uk, | UNSCHEDULED | OUTAGE | 26/01/2017 16:00 | 27/01/2017 12:00 | 20 hours | Problems for alice storage after Castor upgrade at RAL |
Castor GEN instance | SCHEDULED | OUTAGE | 26/01/2017 10:00 | 26/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded). |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
126376 | Green | Urgent | In Progress | 2017-02-05 | 2017-02-08 | CMS | SAM3 CE & SRM test failures at T1_UK_RAL |
126296 | Green | Urgent | Waiting Reply | 2017-02-01 | 2017-02-06 | CMS | SAM SRM test errors at T1_UK_RAL |
126184 | Green | Less Urgent | In Progress | 2017-01-26 | 2017-02-07 | Atlas | Request of inputs for new sites monitoring |
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-12-07 | | CASTOR at RAL not publishing GLUE 2. We looked at this as planned in December (report).
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 844); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
25/01/17 | 100 | 100 | 100 | 100 | 100 | 99 | 100 | |
26/01/17 | -1 | 63 | 92 | 100 | 100 | 100 | 100 | ALICE: GEN Castor stager 2.1.15 upgrade; Atlas: Checks timed out. |
27/01/17 | 100 | 0 | 100 | 100 | 100 | 100 | 100 | Alice specific problems after the Castor GEN upgrade. |
28/01/17 | 100 | 0 | 100 | 100 | 100 | 100 | 100 | Alice specific problems after the Castor GEN upgrade. |
29/01/17 | 100 | 0 | 100 | 100 | 100 | 98 | 98 | Alice specific problems after the Castor GEN upgrade. |
30/01/17 | 100 | 32 | 86 | 100 | 100 | 90 | 100 | Atlas: Could not open connection to srm-atlas.gridpp.rl.ac.uk; Alice specific problems after the Castor GEN upgrade. |
31/01/17 | -1 | 100 | 100 | 100 | 100 | 89 | 100 | |
01/02/17 | 100 | 100 | 100 | 100 | 85 | 100 | N/A | SRM test failures to list file. |
02/02/17 | 100 | 100 | 100 | 100 | 100 | 100 | 97 | |
03/02/17 | 100 | 100 | 100 | 100 | 96 | 98 | 100 | SRM test failures to list file. |
04/02/17 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | Could not open connection to srm-atlas.gridpp.rl.ac.uk |
05/02/17 | 100 | 100 | 90 | 100 | 100 | 100 | 100 | Checks timed out. |
06/02/17 | 100 | 100 | 85 | 100 | 100 | 99 | 100 | Checks timed out. |
07/02/17 | 100 | 100 | 100 | 100 | 100 | 99 | 100 |
Notes from Meeting.
- ECHO: Atlas have run some production jobs that have stored outputs in ECHO.
- LHCb are awaiting the upgrade of the Castor SRMs in order to be able to use a newer ssl library.
- MICE will resume data taking in about a week's time. This will be a run of around three weeks duration. They still need to confirm their data processing works with Castor 2.1.15.
- VO Dirac: There have been successful test transfers from the Edinburgh and Cambridge HPC Dirac sites. This means we have now had successful transfers from four out of the five Dirac sites.
- Raja reported that the long-standing issue of local LHCb jobs failing to write into Castor has been resolved. Since the Castor update the error rate has dropped to around one job per day (out of around 15,000 jobs per day). This is similar to the rate seen at other Tier1 sites. This issue is therefore now considered fixed and will be dropped from this report.