Difference between revisions of "Tier1 Operations Report 2016-11-30"
From GridPP Wiki
Latest revision as of 14:23, 30 November 2016
RAL Tier1 Operations Report for 30th November 2016
Review of Issues during the week 23rd to 30th November 2016. |
- It was found that some worker nodes (around 10% of them on Monday) were being put offline owing to clock drift. This was traced to a problem within the NTP daemon and was fixed by a configuration change.
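To illustrate the kind of health check involved, the sketch below shows a drift test of the sort that could take a node out of service. The 500 ms threshold and the "offline"/"online" outcome are assumptions for illustration only; the actual test and the NTP configuration fix were internal to the site.

```shell
# Hypothetical drift check: decide whether a node's clock offset is acceptable.
check_drift() {
    # $1: measured clock offset in milliseconds (may be negative)
    offset=$1
    abs=${offset#-}              # absolute value via prefix-strip of the sign
    if [ "$abs" -gt 500 ]; then  # assumed threshold, not the site's real one
        echo offline
    else
        echo online
    fi
}

# In production the offset would come from the NTP daemon, e.g. the selected
# peer's offset column from: ntpq -pn
check_drift 12     # → online
check_drift -750   # → offline
```

A daemon whose clock has drifted past the threshold would be flagged before the batch system schedules further work onto the node.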
Resolved Disk Server Issues |
- None.
Current operational status and issues |
- There is a problem, seen by LHCb, of a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
- The intermittent, low-level, load-related packet loss that has been seen over external connections is still being tracked. The replacement of the UKLight router appears to have reduced this - but we are allowing more time to pass before drawing any conclusions.
Ongoing Disk Server Issues |
- None.
Notable Changes made since the last meeting. |
- Maintenance was carried out on the UPS and generator in R89 yesterday.
- There was a restart test of the ECHO Ceph system yesterday; this was done to understand how best to carry out a restart and to set up appropriate operating procedures.
- LHCb are writing to the 'D' tapes. The migration of their data from 'C' to 'D' tapes is underway, with around 300 tapes (some 30%) done.
- An update to the FTS3 service (to version 3.5.7) has taken place this morning.
- The number of CMS multicore jobs allowed to run has been increased, as the previous limit was a little too low. (This is a further increase on top of that made around a month ago.)
Declared in the GOC DB |
None
Advanced warning for other interventions |
The following items are being discussed and are still to be formally scheduled and announced. |
Pending - but not yet formally announced:
- Firmware update on Clustervision '13 disk servers. These are distributed as follows: AtlasDataDisk: 12; CMSDisk: 5; LHCbDst: 12.
- Merge AtlasScratchDisk and LhcbUser into larger disk pools - Possible date Thursday 8th Dec.
Listing by category:
- Castor:
- Merge AtlasScratchDisk and LhcbUser into larger disk pools
- Update to Castor version 2.1.15. Planning to roll out January 2017. (Proposed dates: 10th Jan: Nameserver; 17th Jan: First stager (LHCb); 24th Jan: Stager (Atlas); 26th Jan: Stager (GEN); 31st Jan: Final stager (CMS)).
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Fabric
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report. |
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
lcgfts3.gridpp.rl.ac.uk, | SCHEDULED | WARNING | 30/11/2016 11:00 | 30/11/2016 13:00 | 2 hours | Upgrade of FTS3 service |
Open GGUS Tickets (Snapshot during morning of meeting) |
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
125157 | Green | Less Urgent | Waiting for Reply | 2016-11-24 | 2016-11-29 | | Creation of a repository within the EGI CVMFS infrastructure
124876 | Green | Less Urgent | On Hold | 2016-11-07 | 2016-11-21 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
124606 | Red | Top Priority | In Progress | 2016-10-24 | 2016-11-24 | CMS | Consistency Check for T1_UK_RAL |
124478 | Green | Less Urgent | In Progress | 2016-11-18 | 2016-11-18 | | Jobs submitted via RAL WMS stuck in state READY forever and ever and ever
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-10-11 | SNO+ | Disk area at RAL |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-10-05 | | CASTOR at RAL not publishing GLUE 2 (Updated. There are ongoing discussions with GLUE & WLCG)
Availability Report |
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
23/11/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
24/11/16 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
25/11/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
26/11/16 | 100 | 100 | 100 | 98 | 100 | 100 | 98 | Two SRM 'GET' test failures; both user timeout errors.
27/11/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
28/11/16 | 100 | 100 | 100 | 100 | 100 | 100 | 98 | |
29/11/16 | 96.5 | 100 | 100 | 100 | 100 | 100 | 99 | Central monitoring problem affected other sites too. |
Notes from Meeting. |
- The number of CMS multicore jobs we are running has been slightly increased. We are seeing some low job efficiencies (around 30% yesterday, up to around 55% today); the cause is not understood. It also looks like two disk servers in CMSDisk are under particular load; we will chase up which files on these servers are being accessed.
- MICE: Data taking continues until mid-December. There is an outstanding need to standardise the ownership of MICE files in Castor; initially some writes were done under a particular user's ID. MICE will provide a list of files to correct.
- Dirac: We expect around 700TB of data from each site. Ongoing work is needed in the Castor configuration to ensure separation of the files from the different Dirac sites.
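As a sketch of the MICE ownership clean-up, the snippet below turns a file list into the corresponding ownership-change commands as a dry run, so the output can be reviewed before anything is executed. The `nschown` Castor nameserver command, the `mice:mice` owner, and the example paths are assumptions for illustration, not the agreed procedure.

```shell
# Hypothetical dry run: read Castor paths (one per line) on stdin and print
# the ownership-fix command for each; nothing is executed.
gen_ownership_fixes() {
    while IFS= read -r path; do
        [ -n "$path" ] || continue                 # skip blank lines
        printf 'nschown mice:mice %s\n' "$path"    # assumed target owner
    done
}

# Usage with a hypothetical two-line list of the kind MICE would provide:
printf '%s\n' /castor/example/mice/run1.dat /castor/example/mice/run2.dat |
    gen_ownership_fixes
```

Reviewing the generated commands before running them keeps the clean-up auditable against the list MICE provides.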