Operations Bulletin 130616


Bulletin archive


Week commencing Monday 13th June 2016
Task Areas
General updates

Monday 6th June

Monday 30th May

  • Discussion in WLCG regarding the use of the information system (vs GOCDB); the Glasgow experience. Impacts on other VOs?
  • Tier-2 Hardware Survey. The target date was the end of May.
  • Sam: edg-mkgridmap, VOMS and RHEL/CentOS 6.8. It seems there is a solution with edg-mkgridmap-4.0.3.
  • "Digital Infrastructures for Research 2016" (DI4R) conference: The DI4R 2016 will be held in Kraków on 28-30 September 2016. The conference will be hosted by ACC Cyfronet AGH and organised jointly by EGI, EUDAT, GÉANT, OpenAIRE and RDA Europe.
  • Pre-GDB - IPv6 workshop - Tuesday 7th June 2016.
  • Andrew L: HTCondor Machine/Job Features testing? (See the reading sketch after this list.)
  • 2nd Developers@CERN Forum - Python at CERN. 30th/31st May. Vidyo available.
  • OMB meeting last Thursday 24th. Topics:
    • Central monitoring (2 instances since March. Switchover planned 1st July.)
    • EGI Engage update
    • Indigo-DataCloud
    • Monitoring of cloud services
    • Security update
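
For context on the Machine/Job Features item above: on sites that implement MJF, the batch system exports the MACHINEFEATURES and JOBFEATURES environment variables, each pointing at a directory of small key files. A minimal Python sketch of how a payload might read them follows; the key names in the comments (e.g. hs06, wall_limit_secs) are illustrative and sites may publish a different set.

  import os

  def read_features(env_var):
      """Read all key files from the directory named by an MJF environment variable."""
      directory = os.environ.get(env_var)
      if not directory or not os.path.isdir(directory):
          return {}
      features = {}
      for name in os.listdir(directory):
          path = os.path.join(directory, name)
          if os.path.isfile(path):
              with open(path) as handle:
                  features[name] = handle.read().strip()
      return features

  # Machine-level values (e.g. hs06, total_cpu) and job-level values
  # (e.g. wall_limit_secs, allocated_cpu), where the site provides them.
  print("Machine features:", read_features("MACHINEFEATURES"))
  print("Job features:", read_features("JOBFEATURES"))
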
WLCG Operations Coordination - Agendas | Wiki Page

Tuesday 7th June

  • There was a WLCG ops coordination meeting last Thursday: Agenda | Minutes
    • some of May's events:
    • The decision to create a WLCG Accounting Task Force was approved at the May 24th MB.
    • PIC published the SIR on May 18th covering the T10KD problems they had been experiencing since last December.
    • Web info around certificates was updated to reflect changes around the OSG PKI.
    • GGUS Release on June 1st. Release notes available.

Tuesday 24th May

  • There was a middleware readiness meeting on 18th May. Agenda | Minutes
  • The next WLCG ops coordination meeting will be on 2nd June (focus monitoring consolidation).

Monday 1st May

  • There was a WLCG ops coordination meeting last Thursday. Agenda | Minutes.
    • SL5 decommissioning (EGI, April 30, 2016). SL5 'ops' service probes will get CRITICAL. The whole retirement process is tracked here.
    • A new Traceability and Isolation working group will have a dedicated session at the May pre-GDB.
    • A new Task Force on accounting review is being prepared to start addressing accounting issues.
    • A detailed review of the Networking and Transfers WG was done. This includes a status report, ongoing actions and future R&D projects. More details in the agenda slides.
Tier-1 - Status Page

Tuesday 7th June

A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting are here.

  • There have been problems with the RAL Tier1 tape library. There was an intervention on Wednesday (1st June) during which some components were replaced. However, one of these had a fault that led to a short circuit. The library was taken down from Thursday (2nd) to Friday (3rd) for checks to be made and there was no tape access during this time. At the end of Friday afternoon the service was restored. We also have an ongoing problem with the control software crashing. This does not have a big impact on the service at the moment. However, we have announced (via the GOC DB) an intervention today while we investigate it.
  • "GEN Scratch" storage in Castor will be decommissioned. This has been announced via an EGI broadcast. Write Access to this area was disabled last week - ahead of full withdrawal in a few weeks time.
  • One of our three WMS systems (lcgwms06) is being decommissioned. It was stopped from receiving new work last week (on the 1st June).
  • Reminder: HEP SYSMAN at RAL 21-23 June. The first day is a security training day. Please register ASAP! See: http://hepwww.rl.ac.uk/sysman/june2016/main.html
Storage & Data Management - Agendas/Minutes

Wednesday 01 June

  • LSST experiences with using GridPP - what is the minimal effort to get started? Are the data features useful/known/used?
  • Lots of interesting submissions to CHEP
  • IPv6 - time to showcase our work? Can we still show interesting IPv6 work in a post-ewanian infrastructure?

Wednesday 18 May

  • EGI data hub; see home page for the presentation download.

Wednesday 04 May

  • GridPP/ATLAS input for GDB next week
  • Future evolution of T2s...

Wednesday 27 Apr

  • Report from DPM collaboration meeting
  • GridPP as an einfrastructure (update)

Wednesday 20 Apr

  • Report from GridPP36. Closer to understanding what a future T2 looks like.
  • Report from DataCentreWorld/CloudExpo: the usual bigger and better data centre servers and networks, combined with growing cloud application adoption and exciting new IoT devices.
Tier-2 Evolution - GridPP JIRA

Mon 6 Jun

  • GridPP VM vs DIRAC version mismatch found and worked around.

Friday 29 Apr

  • Vac 01.00 released.

Tuesday 5 Apr

  • Vac 00.22 release (01.00 pre-release) ready. Includes Squid-on-hypervisor config in Puppet module.
  • Aim to do Vac 01.00 release immediately after Pitlochry.

Tuesday 22 Mar

  • GridPP DIRAC SAM tests now have gridpp and vo.northgrid.ac.uk pages
  • Some pheno jobs have been run on VAC.Manchester.uk
  • ATLAS cloud init VMs now running at Liverpool too

Tuesday 15 Mar

  • Vac 00.21 deployed on some or all of the machines at all 5 production Vac sites
  • Oxford Vac-in-a-Box set up in production
  • ATLAS pilot factory at CERN configured for our ATLAS cloud init VMs
  • ATLAS cloud init VMs running production jobs at Manchester and Oxford; being enabled at other sites
  • vo.northgrid.ac.uk DIRAC jobs running in GridPP DIRAC VMs: this should work for any GridPP DIRAC VO, since the VO is a parameter of the VM configuration (see the submission sketch after this list).
  • Multipayload LHCb pilot scripts tested in VMs: same pilot scripts can be used on VM or batch sites in multiprocessor slots.
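
As a pointer for the multi-VO item above, here is a minimal sketch of how a user in any GridPP DIRAC VO might submit a test job through the DIRAC Python API. It assumes a DIRAC UI is installed and a valid VOMS proxy for the VO exists; the exact submission method name varies between DIRAC releases (submit vs submitJob).

  # Minimal GridPP DIRAC test job - a sketch, not a site-specific recipe.
  from DIRAC.Core.Base import Script
  Script.parseCommandLine(ignoreErrors=True)  # initialise DIRAC before the API imports

  from DIRAC.Interfaces.API.Dirac import Dirac
  from DIRAC.Interfaces.API.Job import Job

  job = Job()
  job.setName("gridpp-vm-test")
  job.setExecutable("/bin/hostname")  # trivial payload, just to see where it lands
  job.setCPUTime(300)

  dirac = Dirac()
  print(dirac.submitJob(job))  # older releases use dirac.submit(job)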


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 9th February

  • 4th Feb: The data from the APEL summariser that was fixed yesterday has now propagated through the data pipeline and the Accounting Portal views and the Sync and Pub tests are all working again.
  • Sheffield is slightly behind other sites (but looks normal) and so is QMUL.

Tuesday 24th November

  • Slight delay for Sheffield.

Tuesday 3rd November

  • APEL delay (normal state) Lancaster and Sheffield.

Tuesday 20th October

The WLCG MB decided to create a Benchmarking Task Force led by Helge Meinhard (see talk).

Documentation - KeyDocs

Friday 27th May

  • New approved VOs published: icecube and fermilab.

https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs

Tuesday 17th May

  • TW is implementing a use-case summary matrix based on these use cases:
    • CERN@school - LUCID (full workflow with DIRAC)
    • GalDyn (full workflow with DIRAC)
    • PRaVDA (full workflow with DIRAC)
    • LSST (full workflow with Ganga + DIRAC)
    • EUCLID (full workflow at the RAL T1)
    • DiRAC (data storage/transfer)
    • SNO+ (data transfer/networking)

Wednesday 4th May

Introduction added to explain _why_ CGROUPS are desired when using ARC/Condor.

https://www.gridpp.ac.uk/wiki/Enable_Cgroups_in_HTCondor
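
To illustrate what cgroups buy us here: when HTCondor places each job in its own cgroup, the kernel can account for and limit the memory and CPU of the whole job (all of its processes) rather than relying on per-process limits. The sketch below, assuming the usual cgroup v1 layout with the memory controller mounted at /sys/fs/cgroup/memory, shows how a job could read the memory limit applied to its own cgroup.

  # Sketch: find this process's memory cgroup and print its limit (cgroup v1 layout assumed).

  def memory_cgroup_path():
      """Return the memory-controller cgroup path for the current process, or None."""
      with open("/proc/self/cgroup") as handle:
          for line in handle:
              _, controllers, path = line.strip().split(":", 2)
              if "memory" in controllers.split(","):
                  return path
      return None

  path = memory_cgroup_path()
  if path:
      limit_file = "/sys/fs/cgroup/memory" + path + "/memory.limit_in_bytes"
      with open(limit_file) as handle:
          print("Memory limit for this cgroup:", handle.read().strip(), "bytes")
  else:
      print("No memory cgroup found (memory controller not in use for this process).")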


Tuesday 26th April

Statement on recent history of LSST VOMS records. To be discussed at Ops Meeting.

  • Feb 16: 3 x VOMS servers, port 15003, names: voms.fnal.gov (DigiCert-Grid, admin), voms1.fnal.gov (DigiCert-Grid, not admin), voms2.fnal.gov (opensciencegrid, admin).
  • Apr 18: 1 x VOMS server, port 15003, names: voms.fnal.gov (opensciencegrid, admin). Caused by a security hitch that led to the omission of some data. GGUS: https://ggus.eu/?mode=ticket_info&ticket_id=120925. Fixed.
  • Apr 21: 3 x VOMS servers, port 15003, names: voms.fnal.gov (opensciencegrid, admin), voms1.fnal.gov (opensciencegrid, not admin), voms2.fnal.gov (opensciencegrid, admin).

So, similar to how it was in February, but the DNs (and CA DNs) of two of the certificates have changed. Other things to note:

Please discuss.

General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas

Monday 9th May

  • There was an EGI ops meeting today.
    • SL5 status was reviewed. Services remaining: the WMS/LB being decommissioned at Glasgow; the Lancaster BDII, scheduled for upgrade; the Oxford SAM/VO Nagios, which is still on SL5 (plans are in place across all NGIs to move to a central ARGO service); and the RAL T1 Castor SRM systems (the SRM upgrade is waiting on the Castor upgrade).


Monday 18th April


Monitoring - Links MyWLCG

Tuesday 1st December


Tuesday 16th June

  • F Melaccio & D Crooks decided to add a FAQs section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Tuesday 31st May

  • EFDA-JET: The EFDA admin's certificate has expired. He has applied for a new certificate.
  • Brunel: Brunel is running two site BDIIs and this is confusing the EGI status dashboard. A ticket has been opened.
  • RHUL: There were some network issues and it seems they have not been resolved yet.
  • Liverpool, Sussex and ECDF have availability tickets.

Tuesday 15th March

  • A normal week with a few alarms, which were fixed in time. Birmingham has a low-availability ticket. ECDF has a ticket on hold as it is testing an ARC CE, and putting this CE in downtime might affect proper job tests from ATLAS.


Rollout Status WLCG Baseline

Tuesday 7th December

  • Raul reports: validation of site BDII on Centos 7 done.

Tuesday 15th September

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.


References


Security - Incident Procedure Policies Rota

Monday 13th June

  • CRITICAL risk vulnerability concerning iperf3 used in perfSONAR


Tuesday 7th June

  • Reminder - HEP SYSMAN Security training Tues Jun 21st
  • EGI SVG advisory for 'critical' risk vulnerability in one product
  • EGI SVG advisory for 'high' risk vulnerability in Nova API for OpenStack

Tuesday 31st May

  • 'Low' Risk Vulnerability concerning Panda Pilot factory payload verification
  • 'Low' risk vulnerability concerning DIRAC Pilot factory payload verification
  • Arbitrary file overwrite in one product.
  • Configuration issue in one product.
  • Reminder - HEP SYSMAN Security training Tues Jun 21st

Tuesday 24th May

  • Nothing to report

The EGI security dashboard.


Services - PerfSonar dashboard | GridPP VOMS

- This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)

Tuesday 10th May

  • Next LHCOPN and LHCONE meeting: Helsinki (FI) 19-20 of September 2016: Agenda.

Tuesday 22nd March

Tuesday 8th December

  • Given the recent network issues and the role of GridPP DIRAC, there are plans to have a slave DNS for gridpp.ac.uk at Imperial and hopefully the T1 too. Andrew will seek an update to the whois records and name servers once the other host sites are confirmed (a quick delegation check is sketched below).
  • Looking at the network problems this week will be of interest. Duncan supplied this link and Ewan the one for the dual stack instance.
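
Once the additional slave name servers are in place, a quick way to confirm the published delegation is to query the NS records for gridpp.ac.uk. A small sketch using dnspython (the package must be installed; releases from 2.0 onwards use resolve() instead of query()):

  # Sketch: list the NS records currently published for gridpp.ac.uk (requires dnspython).
  import dns.resolver

  answers = dns.resolver.query("gridpp.ac.uk", "NS")  # dns.resolver.resolve() on dnspython >= 2.0
  for record in answers:
      print("NS:", record.target)
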
Tickets

Monday 6th June 2016, 14.00 BST
35 Open UK Tickets this month.

NGI
121987
The NGI's very urgent response times for May weren't up to par - I dug into the reason why and updated the ticket. In progress (6/6) Update - solved once explanation given

119995 (7/3)
Uncertified NGS sites to clear up - Jeremy has been on it. In Progress (17/5)

SUSSEX
118289 (10/12/15)
Pilot ticket - Jeremy M thought he had got it, but Daniela's tests say otherwise (although the errors look like the CE playing up). In progress (26/5)

121797 (26/5)
Sno+ dirac jobs failing at Sussex - looks like the same problem as above. No word from the site - I'll poke Jeremy M offline. Assigned (26/5)

120735 (11/4)
Low availability ROD ticket. Hopefully Sussex will have a clear month. On Hold (6/6)

OXFORD
121641 (18/5)
Wrong capacities reported in REBUS - this ball has landed in Oxford's court, with Pete G looking at the SE publishing. Assigned (I set it In Progress) (3/6)

121924 (2/6)
An interesting ticket from Duncan, concerning a drop in perfsonar throughput performance at Oxford. Still just Assigned (2/6)

BRISTOL
120455 (29/3)
Validation of a new HTCondor CE at Bristol by CMS. At last check Bristol were testing the CERN accounting daemon, but that was a few weeks ago. Any news? In progress (9/5)

121989 (6/6)
Super-fresh ROD ticket (.glexec.CREAMCE-JobSubmit tests). Assigned (6/6) Update - solved 10 minutes after I wrote this.

BIRMINGHAM
121125 (28/4)
Missing ATLAS SE dumps. At last check Matt W was having troubles with xrdcp-ing the dumps into his DPM, Alessandra suggested that others succeeded using rfcp (with the caveat that rfcp might not be around much longer). In progress (1/6)

GLASGOW
120135 (11/3)
HTTP support ticket. Any news? An update would be nice, no matter how vacuous. On holding the ticket would be even better. In progress (7/4) Update - Solved, tests are green.

121929 (2/6)
Glasgow's SE "not working" for biomed - which it isn't meant to support - but biomed support was still being published. Gareth is sorting that out, and will close the ticket once the biomed references are purged. All good. In progress (3/6)

120351 (22/3)
Enabling LSST at Glasgow. I'll repeat Alessandra's question in the ticket - any news? On hold (5/5)

EDINBURGH
121465 (11/5)
ROD Availability ticket, just waiting for time to pass. On hold (31/5)

121990 (6/6)
BDII issues caused a few ROD test failures - Marcus fixed things around lunchtime, hopefully they heal up soon (to answer Marcus's question, I believe BDII changes typically take 2-3 hours to fully propagate). In progress (6/6) Update - can likely be closed, tests are green again.
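
(For reference, one way to see what a site BDII is publishing right now, and hence whether a change has propagated, is an anonymous LDAP query against port 2170. A sketch using python-ldap; the hostname is a placeholder and the GLUE1 "o=grid" base is the usual convention.)

  # Sketch: list the service endpoints a site BDII currently publishes (requires python-ldap).
  import ldap

  con = ldap.initialize("ldap://site-bdii.example.ac.uk:2170")  # placeholder hostname
  results = con.search_s("o=grid", ldap.SCOPE_SUBTREE,
                         "(objectClass=GlueService)",
                         ["GlueServiceType", "GlueServiceEndpoint"])
  for dn, attrs in results:
      print(attrs.get("GlueServiceType"), attrs.get("GlueServiceEndpoint"))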

120004 (7/3)
ROD test failures for the ARCHER test CE. The last update has Andy asking if the ticket could be put on "long term hold" - that is, unless someone can manually edit the GOCDB to set this service to "Monitored=N, Production=Y". On hold (24/5)

SHEFFIELD
121991 (6/6)
A fresh ROD ticket, srm tests were failing. Elena freed up some space and things should be good now. Waiting for reply (6/6) Update - tests are passing, another for the solved pile?

LIVERPOOL
121759 (25/5)
Another Availability ROD ticket. John identified the cause as likely due to the DPM publishing problems after a cert upgrade (that's got me too before). Just needs time to heal these wounds now. On hold (27/5) Update- actually the ticket isn't on hold, but it should be...hint...

RHUL
121575 (16/5)
Yet another availability ticket (May was not kind to the UK). Likely needs On Holding whilst the metrics "fix" themselves - if the underlying problems have passed. In progress (16/5)

QMUL
120352 (22/3)
LSST support at QM. Dan reports today that LSST should be enabled on three CEs. Nice one. Waiting for testing. (6/6)

120204 (15/3)
LHCb problems, due to an issue submitting jobs to dual-stack CEs from CERN. The referenced issue (120586) has had a priority bump and a few extra parties cc'd in, so hopefully there will be some movement on it. On Hold (25/4)

BRUNEL
121573 (16/5)
ROD BDII issue ticket - possibly due to multiple site BDIIs (although I was under the same impression as Daniela). Kashif has opened a related ticket (121760), which Raul commented on today to ask for confirmation that the issues are related. On hold (27/5)

121813 (27/5)
Brunel failing CMS validation - likely due to cvmfs playing up on two nodes. The ticket has turned into a conversation on CMS site settings, and seems to be chugging along fine. In progress (6/6)

EFDA-JET
121899 (1/6)
Low availability JET ticket. Assigned (1/6)

121837 (30/5)
JET SE not working for biomed. I thought that JET stopped supporting Biomed a while ago, I'll need to check my notes. Assigned (30/5)

100IT
121189 (2/5)
A ticket I don't understand! Waiting for reply (16/5)

TIER 1
119841 (1/3)
HTTP support at the Tier 1 - on hold awaiting dev support. On Hold (26/4)

121687 (20/5)
Another perfsonar performance ticket from Duncan. A router that could be the cause is due to be replaced, things will be looked into in more detail after. On hold (23/5)

121894 (1/6)
A request for the Tier 1's plans to deploy a "LHCOPN IPv6 Peering, incl. dualstack Perfsonar". The upcoming router replacement is a blocker for this. In progress (1/6)

120810 (13/4)
Biomed requiring a bit of extra reassurance during the decommissioning of their volume. In progress (20/6)

120350 (22/3)
Enabling LSST at RAL. Things were looking good, but it looks like progress stalled rolling out the VO to the workers (aka the hard bit). Any news? In progress (6/5)

121322 (10/5)
A Sno+ user having trouble accessing files at the Tier 1. Whilst the issue appears to be fixed for the example file, the user lists a few more files, a subset of which they still have trouble downloading. Reopened (3/6)

117683 (18/11)
Castor not publishing GLUE 2. Awaiting some background dev work. Any news? On hold (5/4)

DECOMMISSIONING TICKETS
120664 - GEN Scratch disk pool at the Tier 1.
121258 - WMSes & LB at the Tier 1 (I previously misadvertised this as being a Glasgow decommissioning ticket, thus revealing to everyone my secret - that I don't actually properly read every ticket).
All handled perfectly.

Tools - MyEGI Nagios

Tuesday 5th April 2016

Oxford had a scheduled network warning, so the active Nagios instance was moved from Oxford to Lancaster. I am not planning to move it back to Oxford for the time being.


Tuesday 26th Jan 2016

One of the message brokers was in downtime for almost three days. The Nagios probes pick up a random message broker and failover is not working, so a lot of ops jobs hung for a long time. It is a known issue and unlikely to be fixed, as SAM Nagios is on its last legs. Monitoring is moving to ARGO and many things are not clear at the moment.

Monday 30th November

  • The SAM/ARGO team has created a document describing the availability/reliability calculation in the ARGO tool (the standard formulae are sketched below).
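
A sketch of the usual EGI-style definitions that the document formalises (the ARGO document itself is authoritative; "unknown" time is excluded from both denominators):

  \mathrm{Availability} = \frac{T_{\mathrm{up}}}{T_{\mathrm{total}} - T_{\mathrm{unknown}}},
  \qquad
  \mathrm{Reliability} = \frac{T_{\mathrm{up}}}{T_{\mathrm{total}} - T_{\mathrm{scheduled\;downtime}} - T_{\mathrm{unknown}}}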


Tuesday 6 Oct 2015

Moved the Gridppnagios instance back to Oxford from Lancaster. It was something of a double whammy, as both sites went down together. Fortunately the Oxford site was partially working, so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours, but there was no effect on EGI availability/reliability. Sites can have a look at https://mon.egi.eu/myegi/ss/ for A/R status.

Tuesday 29 Sep 2015

Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st Oct, and this may be extended depending on the situation. VO-Nagios was also unavailable for two days, but we restarted it yesterday as it runs on a VM. VO-Nagios uses the Oxford SE for its replication test, so it is failing those tests. I am looking to change to some other SE.

VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority for enabling/supporting our joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
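
A trivial way for sites to confirm they have picked up the new port is a TCP connectivity check against each server. A minimal Python sketch (this only checks that port 15503 is open, not that VOMS itself is working):

  # Sketch: confirm the updated SNOPLUS.SNOLAB.CA VOMS port (15503) is reachable.
  import socket

  for host in ("voms02.gridpp.ac.uk", "voms03.gridpp.ac.uk"):
      try:
          socket.create_connection((host, 15503), timeout=5).close()
          print(host, "port 15503 reachable")
      except socket.error as err:
          print(host, "port 15503 NOT reachable:", err)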

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 23rd February

  • For January:

ALICE: All okay.

ATLAS: RHUL 89%:89%, Lancaster 0%:0%

CMS: RALPP 80%:80%

LHCb: RALPP 77%:77%

  • Site responses:
    • RHUL: The largest problem was related to the SRM. The DPM version was upgraded and it took several weeks to get it working again (13 Jan onwards). Several short-lived occurrences of running out of space on the SRM for non-ATLAS VOs. For around 3 days (15-17 Jan) the site suffered from a DNS configuration error by their site network manager which removed their SRM from the DNS, causing external connections such as tests and transfers to fail. For one day (25 Jan) the site network was down for upgrade to the 10Gb link to JANET. Some unexpected problems occurred extending the interruption from an hour to a day. The link has been successfully commissioned.
    • Lancaster: The ASAP metric for Lancaster for January is 97.5%. There is a particular problem with the ATLAS SAM tests, relating to the path name being too long, which does not affect the site's production and analysis activity. A re-calculation has been performed.
    • RALPP: Both CMS and LHCb low figures are due to specific CMS jobs overloading the site SRM head node. The jobs should have stopped now.



Meeting Summaries
Project Management Board - Members | Minutes | Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) - Agenda. The meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier1 report farther up this page.

WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved and the grid is filled again

• MC15 is expected to start soon, waiting for physics validation; evgen testing is underway and close to being finalised. Simulation is expected to be broadly similar to MC14; no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

• Lost files declaration: only DDM ops can issue lost files declarations for now; cloud support needs to file a ticket.

• Webdav panda functional tests with Hammercloud are ongoing

Monitoring

Main page

DDM Accounting

space

Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months the T2 sites performing below 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A