RAL Tier1 Operations Report for 8th February 2017

Review of Issues during the fortnight 25th January to 8th February 2017.
  • There was a problem with ALICE after the 'GEN' upgrade. ALICE require a special version of the xroot component for Castor. Checks had been made that the xroot component would install under 2.1.15, but a newer version turned out to be needed. Once this had been provided, a further ALICE-specific configuration error had to be tracked down. This caused a significant loss of availability for ALICE (tests failed between the 26th and 30th January).
  • Since the Castor upgrade we have seen a couple of further problems:
    • There has been a problem with the LHCb instance: a database resource (the number of open cursors) becomes exhausted, and we have had to restart the service to clear stuck transfers (on 1st Feb). A similar operation was carried out for Atlas (on 31st Jan). A sketch of how this can be monitored is given after this list.
    • We have been failing tests for CMS xroot redirection. This appears to have started a couple of days after the CMS stager upgrade and is not yet understood.
    • We are also failing CMS tests for an SRM endpoint defined in the GOC DB but not in production ("srm-cms-disk"). This should not have tests running against it and needs following up with CMS. Even though this test should not matter, we would like to understand why it stopped working after the Castor upgrade.
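
The following is a minimal, hedged sketch of how the cursor exhaustion noted above could be watched for. It assumes the stager database is Oracle (the report mentions Oracle database nodes elsewhere) and that the cx_Oracle Python module is available on a host with database access; the account, password and DSN are placeholders, not the production configuration.

  # Illustrative sketch only: report per-session open-cursor counts against the
  # configured OPEN_CURSORS limit on an Oracle database. Connection details are
  # placeholders; querying v$ views needs suitable privileges.
  import cx_Oracle

  conn = cx_Oracle.connect("monitor_user", "password", "stager-db-host/STAGERDB")
  cur = conn.cursor()

  # Configured per-session cursor limit.
  cur.execute("SELECT value FROM v$parameter WHERE name = 'open_cursors'")
  limit = int(cur.fetchone()[0])

  # Sessions currently holding the most open cursors.
  cur.execute("""
      SELECT sid, COUNT(*) AS n_cursors
      FROM v$open_cursor
      GROUP BY sid
      ORDER BY n_cursors DESC
  """)

  print("open_cursors limit: %d" % limit)
  for sid, n_cursors in cur.fetchmany(10):
      print("session %s holds %d cursors (%.0f%% of limit)"
            % (sid, n_cursors, 100.0 * n_cursors / limit))

  conn.close()

A session approaching the open_cursors limit would be a candidate for the kind of service restart described above.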
Resolved Disk Server Issues
  • GDSS780 (LHCbDst - D1T0) crashed on 25th Jan. It was returned to service later that day after a memory swap-around.
  • GDSS687 (AtlasDataDisk - D1T0) was removed from production on 27th January when it was found to have two faulty disk drives. It was returned to service on the 30th after the drive replacements.
  • GDSS776 (LHCbDst - D1T0) crashed on 3rd Feb. It was returned to service on the evening of the same day after being checked. Five files were lost from the time of the crash.
Current operational status and issues
  • There is a problem, seen by LHCb, of a low but persistent rate of failures when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are then attempted to storage at other sites.
  • We have been seeing a rate of failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. A correction to the list of CMS 'services' being tested is helping with the resulting availability measure.
Ongoing Disk Server Issues
  • None
Notable Changes made since the last meeting.
  • Castor 2.1.15 updates carried out on the GEN and CMS stagers.
  • Migration of LHCb data from 'C' to 'D' tapes has been completed. All Tier1 data is now on T10KD tapes.
  • CMS PhEDEx debug transfers switched from Castor to Ceph. (This had been tried previously but the change was reverted.)
Declared in the GOC DB


Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Whole site | SCHEDULED | WARNING | 01/03/2017 07:00 | 01/03/2017 11:00 | 4 hours | Warning on site during network intervention in preparation for IPv6.
All Castor and ECHO storage and Perfsonar | SCHEDULED | WARNING | 22/02/2017 07:00 | 22/02/2017 11:00 | 4 hours | Warning on Storage and Perfsonar during network intervention in preparation for IPv6.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  • Networking:
    • Enabling IPv6 onto production network.
  • Databases:
    • Removal of "asmlib" layer on Oracle database nodes.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
IPv6 testbed nodes | UNSCHEDULED | OUTAGE | 01/02/2017 07:30 | 01/02/2017 12:00 | 4 hours and 30 minutes | RAL IPv6 testbed network intervention
srm-cms-disk.gridpp.rl.ac.uk, srm-cms.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 31/01/2017 10:00 | 31/01/2017 11:46 | 1 hour and 46 minutes | Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded).
srm-alice.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 27/01/2017 16:00 | 30/01/2017 14:00 | 2 days, 22 hours | Continuing problems with Alice SRM xrootd access
srm-alice.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 27/01/2017 12:00 | 27/01/2017 16:00 | 4 hours | Continuing problems with Alice SRM xrootd access
srm-alice.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 26/01/2017 16:00 | 27/01/2017 12:00 | 20 hours | Problems for alice storage after Castor upgrade at RAL
Castor GEN instance | SCHEDULED | OUTAGE | 26/01/2017 10:00 | 26/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded).
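
For completeness, a hedged sketch of how entries like those above can be pulled automatically from the GOC DB. The endpoint, method and parameter names below are assumptions based on the public GOCDB programmatic interface and should be checked against the current API documentation, as should the element names in the returned XML.

  # Hedged sketch: list downtimes declared for RAL-LCG2 via the public GOCDB
  # programmatic interface. URL, query parameters and XML element names are
  # assumptions and should be verified against the GOCDB API documentation.
  import urllib.request
  import xml.etree.ElementTree as ET

  URL = "https://goc.egi.eu/gocdbpi/public/?method=get_downtime&topentity=RAL-LCG2"

  with urllib.request.urlopen(URL) as response:
      root = ET.parse(response).getroot()

  for dt in root.findall("DOWNTIME"):
      print(dt.get("CLASSIFICATION"),   # e.g. SCHEDULED / UNSCHEDULED
            dt.findtext("SEVERITY"),    # e.g. OUTAGE / WARNING
            dt.findtext("DESCRIPTION"))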
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
126376 | Green | Urgent | In Progress | 2017-02-05 | 2017-02-08 | CMS | SAM3 CE & SRM test failures at T1_UK_RAL
126296 | Green | Urgent | Waiting Reply | 2017-02-01 | 2017-02-06 | CMS | SAM SRM test errors at T1_UK_RAL
126184 | Green | Less Urgent | In Progress | 2017-01-26 | 2017-02-07 | Atlas | Request of inputs for new sites monitoring
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-12-07 | | CASTOR at RAL not publishing GLUE 2. We looked at this as planned in December (report).
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 844); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
25/01/17 | 100 | 100 | 100 | 100 | 100 | 99 | 100 |
26/01/17 | -1 | 63 | 92 | 100 | 100 | 100 | 100 | ALICE: GEN Castor stager 2.1.15 upgrade; Atlas: Checks timed out.
27/01/17 | 100 | 0 | 100 | 100 | 100 | 100 | 100 | Alice specific problems after the Castor GEN upgrade.
28/01/17 | 100 | 0 | 100 | 100 | 100 | 100 | 100 | Alice specific problems after the Castor GEN upgrade.
29/01/17 | 100 | 0 | 100 | 100 | 100 | 98 | 98 | Alice specific problems after the Castor GEN upgrade.
30/01/17 | 100 | 32 | 86 | 100 | 100 | 90 | 100 | Atlas: Could not open connection to srm-atlas.gridpp.rl.ac.uk; Alice specific problems after the Castor GEN upgrade.
31/01/17 | -1 | 100 | 100 | 100 | 100 | 89 | 100 |
01/02/17 | 100 | 100 | 100 | 100 | 85 | 100 | N/A | SRM test failures to list file.
02/02/17 | 100 | 100 | 100 | 100 | 100 | 100 | 97 |
03/02/17 | 100 | 100 | 100 | 100 | 96 | 98 | 100 | SRM test failures to list file.
04/02/17 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | Could not open connection to srm-atlas.gridpp.rl.ac.uk
05/02/17 | 100 | 100 | 90 | 100 | 100 | 100 | 100 | Checks timed out.
06/02/17 | 100 | 100 | 85 | 100 | 100 | 99 | 100 | Checks timed out.
07/02/17 | 100 | 100 | 100 | 100 | 100 | 99 | 100 |
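
To put the ALICE outage in context, the small worked example below averages the daily figures from the table over the fortnight. A value of -1 means no data and is excluded; this simple mean is for illustration only and is not the official WLCG availability calculation.

  # Worked example: average the daily availability figures from the table above
  # (25/01/17 to 07/02/17). A value of -1 means "no data" and is excluded.
  daily = {
      "OPS":   [100, -1, 100, 100, 100, 100, -1, 100, 100, 100, 100, 100, 100, 100],
      "Alice": [100, 63, 0, 0, 0, 32, 100, 100, 100, 100, 100, 100, 100, 100],
      "Atlas": [100, 92, 100, 100, 100, 86, 100, 100, 100, 100, 98, 90, 85, 100],
      "CMS":   [100] * 14,
      "LHCb":  [100, 100, 100, 100, 100, 100, 100, 85, 100, 96, 100, 100, 100, 100],
  }

  for vo, values in daily.items():
      valid = [v for v in values if v >= 0]
      print("%-5s %5.1f%% over %d days" % (vo, sum(valid) / len(valid), len(valid)))

The drop in the Alice figure is dominated by the xroot problem on the 26th to 30th January described in the review section above.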
Notes from Meeting.
  • ECHO: Atlas have run some production jobs that have stored outputs in ECHO.
  • LHCb are awaiting the upgrade of the Castor SRMs in order to be able to use a newer ssl library.
  • MICE will resume data taking in about a week's time. This will be a run of around three weeks' duration. They still need to confirm their data processing works with Castor 2.1.15.
  • VO Dirac: There have been successful test transfers from the Edinburgh and Cambridge HPC Dirac sites. This means we have now had successful transfers from four out of the five Dirac sites.
  • Raja reported that the long-standing issue of local LHCb jobs failing to write into Castor has been resolved. Since the Castor update the error rate has dropped to around one job per day (out of around 15,000 jobs per day). This is similar to the rate seen at other Tier1 sites. This issue is therefore now considered fixed and will be dropped from this report.