Operations Bulletin Latest


Bulletin archive


Week commencing Monday 4th February 2019
Task Areas
General updates

Tuesday 5th February

  • The EGI RP/RC A/R report for January 2019 is available. UKI is fine overall; Birmingham and RAL-LCG2 may wish to examine their results.
  • A reminder: please could everyone think about their WLCG (and beyond) engagements and commitments and put anything of note in the wiki table here.
  • Notes from yesterday's WLCG ops meeting are available. Issues at RAL noted by LHCb.
  • It is CMS week this week.
  • In case you missed it, here is Elena's ATLAS update:
    • https://ggus.eu/index.php?mode=ticket_info&ticket_id=138033: Singularity jobs failing at RAL. A job that Alessandra submitted still has a problem with the home directory. RAL is trying to fix it.
    • A rack at Lancaster (and the two storage nodes in it) lost power over the weekend, which caused problems for ATLAS jobs. This was fixed on Monday.
    • Raul asked for SRM-less ATLAS transfers for Brunel. Elena is changing the transfer configuration in AGIS, but the jobs are still failing. Peter will look into this.
    • There was a discussion on lightweight ATLAS Grid sites at the ADC weekly meeting last week. Sheffield, Cambridge, Brunel, Durham, Birmingham and Sussex should become diskless sites.
  • There is an ongoing discussion about LSST jobs on GridPP resources that may be of wider interest. Their VOMS server was down, which raised the question of whether LSST should use the GridPP VOMS or whether another VOMS server should be set up in the UK that reads its data from the SLAC instance.
  • The 9th DIRAC Users Workshop will be held in London, 14th-17th May 2019. Here is the registration link.
  • December's WLCG A/R final report is available via this link.
  • The A/R figures for January 2019 are now available, and updates have been requested from three sites.
  • Simon G asks: Vac with an HTCondor VM to support local batch jobs? Has anyone done it?
  • EGI has started an HTCondor integration process and captures progress here: https://ggus.eu/index.php?mode=ticket_info&ticket_id=139377.
  • End of Support for CREAM-CE: The CREAM working group has announced that official support for the CREAM-CE component will cease at the end of the EOSC-hub project, i.e. in Dec 2020. The CREAM working group will be providing full support until the end of 2019, including one minor release already scheduled. During 2020 only security updates will be released.

Tuesday 29th January

  • Site information for ATLAS Sites Jamboree

Tuesday 22nd January

  • There was a GDB last week: Agenda.
  • ATLAS would like to start a more forceful migration to CentOS7 and have the vast majority of resources, if not all, migrated by June 1. Please track your status in our wiki.
  • The latest from the WLCG ops meeting yesterday can be found here.


WLCG Operations Coordination - Agendas Wiki Page

Monday 10th December

  • A WLCG ops coordination meeting was held on Thursday 6th: minutes.
    • IPv6 deployment update
    • CentOS / CC7 timelines for lxbatch and lxplus
    • Container WG update
    • CREAM CE end of support is Dec 2020
  • Next meeting 7th February 2019.

Tuesday 4th December

Tuesday 30th October

  • The ops meeting is shifting to the 8th November.

Tuesday 16th October

  • There was an ops coordination meeting on 11th October: Agenda/slides | Minutes.
  • Next meeting is on 1st November.


Monday 8th October

  • There will be an Ops Coordination meeting this Thursday 12th: Agenda.

Monday 17th September

  • There was a WLCG ops coordination meeting last Thursday. The minutes are now available and the presentations uploaded to this agenda page.
  • Highlights:
    • A report on EOS incidents, improvements and plans
    • DPM Storage Resource Reporting deployment TF introduction
    • CMS CRIC is deployed in production
    • Sites should upgrade perfSONAR to v4.1 on CentOS 7
    • The next meeting is planned for 11th October.



Tier-1 - Status Page

5th February 2019. The report for the Experiments Liaison Meeting (04/02/2019) is here.

  • CPU efficiencies have improved for CMS (>80%), although they are still fluctuating a lot. ATLAS is still at 60-70% efficiency; the ATLAS liaison has investigated, and the fluctuations look mostly like the result of a different mix of job types, especially failed jobs and group production, which have lower efficiency. The overall efficiency is similar to, or perhaps a bit better than, this time last year.
  • The system drive in a disk server for LHCb failed on Thursday afternoon. This was a 14 generation machine (dual purpose for Ceph). The operating system had been put on the SSD (to leave the other disks for capacity), which is attached to the underside of the motherboard… Fabric is going to perform open-heart surgery today to install another disk.
  • The disk buffer in front of our new Castor tape instance almost filled up. We don't yet know the exact cause, but on 25th January (after several months of working perfectly) the garbage-collection daemon stopped working quickly and properly, clearing only a few files an hour. We have been manually wiping files from the tape buffer to keep space clear while we understand the problem.
  • While we were investigating the full buffer we found that NA62 has been writing files to the "disk" endpoint on wlcgTape. This endpoint does not get written to tape and was designed for a small number of functional test files (e.g. SAM tests, which get copied in and immediately deleted). There are ~197k files using up 11TB of space which, as things stand, will never be migrated to tape (and, if they are not used, will eventually be deleted). Most of these files were written in the last few weeks. The Tier-1 manager has started an urgent conversation with NA62 to find out how important these files are.
  • ARC-CE04 has stopped working again. We are not sure whether this is related to the number of LHCb jobs submitted to this CE. We have rolled out an updated version of the software to arc-ce05 for testing, which should fix the problem; at the very least it will mean that the ARC developers will need to look at the error. Unfortunately it is likely to break backward compatibility with some VOs. It would be desirable if we could get LHCb to submit their jobs more evenly across our CEs (currently the ratio is 0:25:25:50 across arc-ce0[1-4] respectively).

Storage & Data Management - Agendas/Minutes

Wed 05 Dec

  • Rucio update - towards a recommended data management thing?
  • IRIS needs a data lake? or distributed filesystem?

Wed 28 Nov

  • Round table update!

Wed 21 Nov

  • DPM sites: we are no longer recommending against upgrading to DOME - unless you're using YAIM
  • More operational issues on IPv6 routed networks

Wed 14 Nov

  • Interesting operational issue switching ATLAS from SRM to xroot
  • Do we have a "data management solution" and if so, what does it look like?

Wed 07 Nov

  • More "data lake" and documentation
  • Progress with caching but need to watch access patterns


Tier-2 Evolution - GridPP JIRA

Monday 14th January

  • Separate problems with LHCb and ATLAS VM definitions being investigated


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 6th February


Tuesday 30th January

Tuesday 24th Oct


Monday 16th January

  • The discussion topic for next week will be accounting comparisons. Please note Alessandra's comments last week.

Monday 14th November

  • Alessandra has written an FAQ on extracting numbers from ATLAS and APEL while avoiding the SSB.

Monday 26th September

  • A problem with the APEL Pub and Sync tests developed last Tuesday and was resolved on Wednesday. This had a temporary impact on the accounting portal.

Documentation - KeyDocs

Tuesday 5th Feb 2019: the certificate of voms-02.pd.infn.it (used by enmr.eu) has changed.

New DN: /DC=org/DC=terena/DC=tcs/C=IT/L=Frascati/O=Istituto Nazionale di Fisica Nucleare/CN=voms-02.pd.infn.it, New CA_DN: /C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3

Please check approved VOs: https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs
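
For sites that support enmr.eu this means updating the .lsc file for that server. A minimal sketch of the updated file is below, assuming the standard vomsdir layout (the path /etc/grid-security/vomsdir/enmr.eu/voms-02.pd.infn.it.lsc is the usual convention; check your own site's setup). The first line is the new host DN and the second the new CA DN, both taken from the announcement above:

  /DC=org/DC=terena/DC=tcs/C=IT/L=Frascati/O=Istituto Nazionale di Fisica Nucleare/CN=voms-02.pd.infn.it
  /C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3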

Tuesday 18th September 2018

  • We need to flag documents that are obsolete and update our monitoring!
  • Changes to Approved VOs: the LZ record at Imperial is now showing up in the portal (it had been inserted as a special case until now).


Tuesday 11th June 2018

  • Steve Jones is updating the User Guide with various mods that have cropped up since Tom Whyntie left (pending permission to edit GridPP website docs...).
  • Approved VOs now contains two new SKA VOMS servers (at Oxford and Imperial, as well as Manchester): https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs


Tuesday 8th May 2018

  • Updating GridPP website certificate to avoid Chrome warnings

General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas Indico schedule

Friday 1st February

Thursday 17th January EGI OMB meeting

  • CREAM CE to be out of support by Dec 2020
  • More effort needed to fix the HTCondor CE accounting issue

Monday 14th January 2019

  • No UK-specific issues were mentioned.
  • The IPv6 readiness plans are going to be summarised at the OMB.

Thursday 15 Nov 2018

  • Interesting discussion about UMD Product ID card
  • Information System/BDII future
    • Discussion about the UK BDII plan and timeline; monitoring could be an issue

Monday 08 October 2018

  • ARGO notifications are working, so sites can subscribe by enabling notifications in the GOCDB

Monday 17th September

  • There was an EGI ops meeting on 10th September. The agenda page is here.

Tuesday 24 July

  • EGI OPS meeting on 16 July 2018, meeting agenda [1]
  • UMD 4.7.0 release
    • It is recommended not to update CREAM for now because of an issue with canl-java; a fix is in progress: https://ggus.eu/index.php?mode=ticket_info&ticket_id=136074

  • Do sites require MPI for CentOS7? Is any site willing to support MPI for the grid?
  • No meeting in August, next meeting on 10 Sep


Monday 2nd July

Monday 11th June

  • Meeting agenda : [2]
  • Upcoming releases relevant to UK sites
    • gfal2 major release (2.5.4)
    • ARC 5.4.2
  • Yearly review of information in GOCDB. UK ticket status is in progress. Should each site update the ticket?
  • Sites can enable ARGO notification through GOCDB. It can be done at site level or at the service level
  • Final call for WMS decommission. No UK WMS is in the list.
  • WebDAV probes will be added to the ARGO_MON_CRITICAL profile, which means that WebDAV unavailability will count towards A/R figures. All UK endpoints are passing the test at the moment (a quick way to check an endpoint yourself is sketched after this list).
  • Next meeting 9th July
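
For sites that want a quick local sanity check of their WebDAV door ahead of the probes counting towards A/R, here is a minimal sketch in Python. It is not the ARGO probe itself, just a simple PROPFIND with an X.509 proxy; the hostname, VO path and proxy location are hypothetical and should be replaced with your own:

  import requests  # assumes the python-requests package is available

  # Hypothetical values - replace with your own endpoint, VO path and proxy.
  url = "https://se01.example.ac.uk/dpm/example.ac.uk/home/ops/"
  proxy = "/tmp/x509up_u500"  # a valid grid proxy (certificate and key in one file)

  # A WebDAV PROPFIND with Depth: 1 lists the directory; a healthy
  # endpoint normally answers 207 Multi-Status.
  response = requests.request(
      "PROPFIND",
      url,
      cert=proxy,
      verify="/etc/grid-security/certificates",  # IGTF CA certificate directory
      headers={"Depth": "1"},
      timeout=30,
  )
  print(response.status_code)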

Monday 14th May

  • Next meeting June 11th

Tuesday 24th April

  • The next EGI OMB is on 3rd May

Tuesday 6th March


Monitoring - Links MyWLCG

Tuesday 4th July

  • There were a number of useful links provided in the monitoring talks at the WLCG workshop in Manchester - especially those in the Wednesday sessions.

Monday 13th February

  • This category is pretty much inactive. Are there any topics under "monitoring" that anyone wants reported at this ops meeting? If not we will remove this section from the regular updates area of the bulletin and just leave the main links.

Tuesday 1st December


Tuesday 16th June

  • F Melaccio & D Crooks decided to add an FAQ section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


On-duty - Dashboard ROD rota

Tuesday 5th February

  • Birmingham's decommissioning of its SRM and BDII is still going on, so tickets are on hold.
  • A few availability tickets are on hold.
  • Lancaster has a WebDAV ticket on hold, which seems to be an effect of the DOME rollout.

Tuesday 14th August

  • A couple of new availability tickets (QMUL and Lancaster), both for well-publicised reasons. Otherwise quiet. AM on shift this week.


Rollout Status WLCG Baseline

Monday 20th November



Historical References


Security - Incident Procedure Policies Rota

Tuesday 29th January

  • Current status


Services - PerfSonar production dashboard |PerfSonar development dashboard | OSG Networking and perfSONAR pages | GridPP VOMS

- This includes notice of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere (cross-check the Tier-1 update).

Monday 10th September

Monday 2nd July

Monday 30th April

Monday 19th February

Please could sites upgrade their perfSONAR hosts to CentOS7. Instructions are here. Current OS versions are here. A quick way to check what your hosts are currently running is sketched below.

  • Duncan has recreated the UK perfSONAR mesh. Link here!
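
A minimal sketch of checking the OS release and installed toolkit version across your perfSONAR hosts, assuming SSH access and the standard perfsonar-toolkit package name (the hostnames are hypothetical):

  import subprocess

  # Hypothetical hostnames - replace with your site's perfSONAR nodes.
  hosts = ["ps-bandwidth.example.ac.uk", "ps-latency.example.ac.uk"]

  for host in hosts:
      # Report the OS release and installed toolkit version for each node.
      cmd = ["ssh", host, "cat /etc/redhat-release; rpm -q perfsonar-toolkit"]
      result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
      print(host)
      print(result.stdout.strip() or result.stderr.strip())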


Tickets

Monday 4th February 2019, 14.30 GMT
41 Open UK Tickets this month.

NGI
139506 (4/2)
The NGI got a ticket regarding Birmingham's availability figures, which are thrown by the decommissioning of their SE. We need to formulate a response, but we should perhaps ask for an A/R recomputation for January for the site. Assigned (4/2)

OXFORD
139431 (30/1)
A request from CMS to update the site's site-local-config. Being looked at. In progress (31/1)

138647 (3/12/18)
Ticket tracking the t2k DFC migration at Oxford. Kashif has supplied the best file dump that he can without DOME installed. Daniela has asked the VO if they can enact a "clean slate" solution at Oxford to make life easier for all. In progress (31/1)

131615 (3/11/17)
Oxford's IPv6 ticket. Kashif has kept this up to date, with some semi-positive news - things are moving in the right direction, however slowly. On Hold (7/1)

BRISTOL
139410 (30/1)
CMS ticket for transfer failures from Florida to the site. Investigation suggests that this might be an IPv6 issue. In progress (4/2)

131613 (3/11/17)
Bristol's IPv6 ticket. Good progress here, but more holes needed to be poked in the site's v6 firewall. We'll need to check the PS mesh (still all grey for Bristol's v6 endpoints at time of writing). In progress (4/2)

BIRMINGHAM
137801 (17/10/18)
Ticket tracking the decommissioning of the Birmingham DPM. The node was removed from the gocdb and switched off last week. I can't remember how long these tickets need to be kept open - I should look that up really. Just remember to keep your logs for 90 days, Mark! In progress (30/1)

138894 (17/12/18)
This ROD ticket for the decommissioned SE might have hit a problem - Mark removed the server from the gocdb but there's still an alarm on the dashboard... On Hold (9/1)

138244 (12/11/18)
Meanwhile, since the old DPM was killed off completely, the Birmingham availability/reliability figures have started to fix themselves. On Hold (1/2)

131612 (3/11/17)
Birmingham's v6 ticket. Some good news just before Christmas, hopefully Mark will be able to start dual-stacking once he's cleared his plate a bit. On Hold (24/12/18)

GLASGOW
131611 (3/11/17)
Only the v6 ticket at Glasgow. Last update (today) was a request for info from the v6 ticket watchers. In progress (4/12/18)

EDINBURGH
139240 (21/1)
An LHCb ticket about jobs failing, tracked to a "black hole" node that was taken offline. The last update was waiting on the VO to confirm whether the problem has gone away, which they were having trouble doing due to having "issues" at the time. If there's no word from LHCb soon then I would close this ticket. In progress (22/1)

138243 (12/1/18)
An availability ticket. I'm a little confused as to why there's still an alarm on the dashboard, as the argo page looks to my eyes like the site has had >85% availability over the last 30 days (only one non-100% day). On Hold (1/2)

131610 (3/11/17)
ECDF's v6 ticket. Some positive news back in early December, the ticket could do with an update. In progress (4/12/18)

DURHAM
131609 (3/11/17)
Another site with just the v6 ticket. Last update was the start of December, any news from your network team at all? On Hold (4/12/18)

SHEFFIELD
138649 (3/12/18)
Sheffield's t2k DFC migration ticket. The site's status is the same as Oxford's, and it was included in Daniela's query to t2k in that ticket. In progress (9/1)

131608 (3/11/17)
Sheffield's v6 ticket. In great need of an update. In progress (30/10)

MANCHESTER
131607 (3/11/17)
Only the v6 ticket at Manchester too. Things were looking good towards the end of last year, any news? In progress (27/11/18)

LIVERPOOL
139411 (30/1)
A request from Biomed querying whether they still need to use the -s option to use the site's space token (note that they're still using lcg tools). John replied that currently this is still the case, but in the DOME future it won't be (due to quotatokens being applied to a directory). On Hold (1/2)

138648 (3/12/18)
Liverpool's t2k DFC migration ticket. Unlike the other two sites Liverpool is planning on migrating to DOME soonish, so they might not require a "clean slate solution". On Hold (18/12/18)

131606 (3/11/17)
Liverpool's v6 ticket. The last report had the networking team looking at this in the New Year (so now-ish) to dual-stack the storage, whilst the perfsonars are happily dual-stacked already. Please update the ticket once you know more (which will hopefully be soon-ish). In Progress (5/12/18)

LANCASTER
137996 (30/10/18)
A ROD ticket for an http test failure caused by DPM not handling http file moves quite right. Waiting on an updated version of DPM to get into epel - I will ask the devs today how that's going. On Hold (14/1)

UCL
139101 (8/1)
A ROD ticket for APEL publishing test failures. Ben has called Andrew McNab in for help installing things. In Progress (30/1)

RHUL
131603 (7/11/17)
Just the v6 ticket at RHUL too. Simon confirms that there's been no news on this front. In progress (23/1)

QMUL
139430 (30/1)
Another CMS ticket to update the site-local-config. Daniela has sorted it and has asked CMS to confirm. Waiting for reply (4/2)

139097 (7/1)
LHCb was seeing data transfer problems, but this was a while ago. Dan has asked whether the problems persist. Waiting for reply (30/1)

138364 (19/11/18)
QM's t2k DFC migration ticket. Dan was ready to do the data-moving bit, and just asked for confirmation of what needed to be done. Is the move underway, Dan? In progress (16/1)

134573 (17/4/18)
CMS request to install singularity. Dan is rolling this into the move to C7, which was in the testing phase last November. Any recent news? On Hold (5/11/18)

IMPERIAL
139454 (31/1)
A ticket from a t2k user having trouble accessing post-DFC-migration data at RALPP - which for reasons had to be routed to Imperial. Daniela can't spot any problems, so it looks like a user-side issue, although it might be worth checking the t2k.org .lsc files at RALPP. Assigned (should be something else) (31/1)

138359 (19/11/18)
Daniela runs such a tight ship at IC that she has to assign other issues to her site - this is the DFC migration master ticket. On Hold (22/1)

BRUNEL
139344 (28/1)
CMS transfer failures at Brunel. The storage is working fine, but it looks like some files that CMS thinks should be at Brunel aren't there, with no explanation of where they went. It's being investigated. In progress (4/2)

100IT still have a ticket open: 137306 (last update 16/1)

TIER 1
138361 (19/11/18)
The Tier 1's t2k DFC migration ticket. The ticket looks done with; we're just waiting on t2k to confirm that things are okay. That seems to be a little unclear, but it might be a VO-side problem. In progress (31/1)

138665 (4/12/18)
The original MICE LFC ticket, on hold whilst the request below is sorted out.

139476 (1/2)
With the MICE LFC dead in the water this is the request for a dump to migrate to the DFC. In progress (4/2)

139306 (24/1)
A request from Duncan to upgrade the RAL perfsonar hosts (and fix some configs). In progress (29/1)

138891 (17/12)
A ROD availability ticket that looks a bit off - John thinks this is due to invalid tests being run and has opened a counter ticket: 139198. According to that ticket, the test in question is due to be removed this week. On Hold (16/1)

139477 (1/2)
A ROD ticket for a couple of sickly ARC CEs. One node is fixed, the other was already on the naughty step for having a high load (possibly from the A-REX slapd process), and it's being poked and prodded. In progress (4/2)

138500 (26/11/18)
CMS transfers from T2_PL_SWIERK failing. File transfer experts were about to be called in, and the ticket is now On Hold. Is it going to be a tough one to debug? On Hold (30/1)

138033 (1/11/18)
ATLAS ticket for Singularity job failures at RAL. Still lots of back and forth here, with great efforts from James and Alessandra. In progress (31/1)

139414 (30/1)
LHCb jobs seg-faulting. It appears these errors all occurred on VMs, and now that those VMs have passed on, the errors have disappeared too. As there's no easy way to proceed (VM necromancy isn't a thing, AFAIK), it looks like this one can be closed. In progress (4/2)

Tools - MyEGI Nagios

Monday 20th November

Tuesday 18th July

  • Following our ops discussion last week, Steve will focus his tests on supporting the GridPP DIRAC area and decommission the other tests.


VOs - GridPP VOMS VO IDs Approved VO table

Monday 20th November

  • Tom Whyntie has requested (and been granted) access to the GridPP VO to get some pipelines working for large-scale processing and analysis of MRI scans associated with the UK Biobank project.
  • All VOs on the incubation page are being prompted for updates by the end of November (required input for the OC documents).
  • QMUL (Steve L) is following up on the biomed MoU. GridPP want to be cited in research papers for the support our resources/sites provide.


Site Updates




Meeting Summaries
Project Management Board - MembersMinutes Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) - Agenda. The meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier-1 report further up this page.

WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved, and the grid is filled again

• MC15 is expected to start soon; it is waiting for physics validation, and evgen testing is underway and close to being finalised. Simulation is expected to be broadly similar to MC14, with no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

• Rucio dumps are available.

Dark data cleaning

• Lost-files declaration: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• WebDAV PanDA functional tests with HammerCloud are ongoing

Monitoring

• Links: Main page | DDM Accounting | Space | Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months, T2 sites performing below 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A