Revision as of 11:43, 18 January 2017
RAL Tier1 Operations Report for 18th January 2017
Review of Issues during the week 11th to 18th January 2017.
- We have still been seeing SAM SRM test failures for CMS. These are owing to the total load on the instance.
- LHCb reported a problem accessing some files, and a GGUS ticket was opened about this. A problem was found with the xroot configuration on one disk server; this may now be solved (we are awaiting confirmation).
- Some disk errors had been seen on hypervisors in our High Availability Hyper-V 2012 cluster. Errors were found on two of the network connections supporting the iSCSI links to the disk array, and these connections were swapped on Monday (16th Jan) during an unscheduled 'warning'. However, this has not resolved the problem.
- The tape migration queues for Atlas, CMS and LHCb grew from around 6pm on Saturday until Monday morning. It looks like a tape was stuck in one drive, and other drives became blocked when asked to mount the same tape.
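The head-of-line blocking described in the last item can be sketched with a toy model. This is an illustration only: the tape names and the single-stuck-tape simplification are invented, and this is not how the Castor tape robot actually schedules mounts.

```python
from collections import deque

def mount_requests_served(requests, stuck_tape):
    """Toy model of a tape library: mount requests for the tape that is
    stuck in a drive cannot be served and pile up, while requests for
    other tapes proceed normally (hypothetical simplification)."""
    served, blocked = [], deque()
    for tape in requests:
        if tape == stuck_tape:
            blocked.append(tape)   # cannot mount: this tape is stuck in a drive
        else:
            served.append(tape)    # other tapes mount as usual
    return served, list(blocked)

# Three requests for the (invented) stuck tape T100 queue up behind it,
# while the other mounts go through.
served, blocked = mount_requests_served(
    ["T100", "T200", "T100", "T300", "T100"], stuck_tape="T100")
```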
Resolved Disk Server Issues
- None
Current operational status and issues
- There is a problem, seen by LHCb, of a low but persistent rate of failure when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- Changes made to the publishing of CPU capacity to the information system (GLUE1 & GLUE2).
- Migration of LHCb data from 'C' to 'D' tapes is ongoing and is now a little over 80% done, with around 170 of the 1000 tapes still to do.
- The site-BDIIs have been put fully behind the load balancers.
- The (internal) Castor "repack" instance was upgraded to Castor version 2.1.15 on Monday (16th). The upgrade of the LHCb stager is ongoing at the time of the meeting.
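As a minimal sketch of what the GLUE2 capacity publishing involves, the snippet below sums the logical-CPU counts from a small LDIF fragment of the kind a site-BDII serves. `GLUE2ComputingManagerTotalLogicalCPUs` is the standard GLUE2 attribute name, but the entry names and values here are invented for illustration, not RAL's actual figures.

```python
# Invented two-entry GLUE2 LDIF fragment (not RAL's real published data).
SAMPLE_LDIF = """\
dn: GLUE2ManagerID=manager1,o=glue
GLUE2ComputingManagerTotalLogicalCPUs: 8000

dn: GLUE2ManagerID=manager2,o=glue
GLUE2ComputingManagerTotalLogicalCPUs: 4000
"""

def total_logical_cpus(ldif_text):
    """Sum the logical-CPU counts over all computing-manager entries
    in an LDIF dump, as a quick sanity check on published capacity."""
    total = 0
    for line in ldif_text.splitlines():
        if line.startswith("GLUE2ComputingManagerTotalLogicalCPUs:"):
            total += int(line.split(":", 1)[1])
    return total

print(total_logical_cpus(SAMPLE_LDIF))  # 12000 for the sample above
```

In practice such a dump would come from querying the site-BDII with an LDAP client rather than from an inline string.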
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Castor CMS instance | SCHEDULED | OUTAGE | 31/01/2017 10:00 | 31/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded). |
Castor GEN instance | SCHEDULED | OUTAGE | 26/01/2017 10:00 | 26/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded). |
Castor Atlas instance | SCHEDULED | OUTAGE | 24/01/2017 10:00 | 24/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting Atlas instance. (Atlas stager component being upgraded). |
Castor LHCb instance | SCHEDULED | OUTAGE | 18/01/2017 10:00 | 18/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded). |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending, but not yet formally announced:
- Merge AtlasScratchDisk into larger Atlas disk pool.
Listing by category:
- Castor:
- Update to Castor version 2.1.15. Dates announced via GOC DB for early 2017 and ongoing.
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 18/01/2017 10:00 | 18/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded). |
Most services (not Castor) | UNSCHEDULED | WARNING | 16/01/2017 13:30 | 16/01/2017 14:30 | 1 hour | Warning on site services during short intervention on system supporting VMs. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
125856 | Green | Top Priority | Waiting Reply | 2017-01-06 | 2017-01-18 | LHCb | Permission denied for some files
124876 | Amber | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-12-07 | | CASTOR at RAL not publishing GLUE 2. We looked at this as planned in December (report).
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
11/01/17 | 100 | 100 | 100 | 92 | 100 | N/A | 100 | SRM test failures - User timeout |
12/01/17 | 100 | 100 | 100 | 93 | 100 | N/A | N/A | SRM test failures - User timeout |
13/01/17 | 100 | 100 | 100 | 97 | 100 | N/A | 100 | SRM test failures - User timeout |
14/01/17 | 100 | 100 | 100 | 95 | 100 | N/A | 100 | SRM test failures - User timeout |
15/01/17 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
16/01/17 | 100 | 100 | 100 | 97 | 100 | N/A | 100 | SRM test failures - User timeout |
17/01/17 | 100 | 100 | 100 | 96 | 96 | N/A | 100 | LHCb: Single SRM failure on list; CMS: SRM test failures - User timeout |
Notes from Meeting.
- None yet