Operations Bulletin 150615


Bulletin archive


Week commencing 8th June 2015
Task Areas
General updates

Tuesday 9th June

Tuesday 25th May

  • VO updates:
    • LIGO issue looked like a DNS resolution problem. Sam is following up this week. It was noted that DIRAC errors are truncated; trying a non-IC disk was suggested. Tom reported that the CERN@school-enabled disks (QMUL-disk, GLASGOW-disk and LIVERPOOL-disk) worked on every network tried with various instances of the GridPP CernVM with the DIRAC UI installed... so the issue lies elsewhere.
    • LZ: Andy W: Met Paolo (Edinburgh) from LZ to walk through the basics (in parallel with LZ computing on the Grid being established). AW joined the GridPP VO to assist.
    • The gridpp-support list at jiscmail.ac.uk has been created.
  • Friday WMS issue?: svr024.gla.scotgrid.ac.uk has been removed from the Nagios configuration. Most CEs quickly returned from 'unknown' state, and others came back to normal in the next round of tests.
  • Tier-1: problems with the secondary database system for Castor - resolved quickly.

Tuesday 19th May

  • There was a GDB last week. The summary is available.
  • The summary of the pre-GDB about batch systems is available.
  • GridPP contacts for other VOs established (these are a current priority). Contacts expected to provide weekly updates on progress and status.
    • DIRAC: Jens Jensen (-> Brian Davies?) – vo being created
    • LIGO: Catalin Condurache – vo being created
    • LOFAR: George Ryall
    • LSST: Alessandra Forti
    • LZ: David Colling
    • UKQCD: Jeremy Coles
WLCG Operations Coordination - Agendas

Tuesday 9th June

  • There was a WLCG ops coordination meeting on Thursday 4th June; the minutes are available.
  • News: Next WLCG workshop Jan or Feb 2016 (24/01-06/02). Hosting call open.
  • Baselines: New gfal2 (2.9.2) includes FTS 3.2.33 related bug fix.
  • MW issues: a new version of a Globus library (globus-gssapi-gsi-11.16-1) released in EPEL testing breaks at least MyProxy and FTS (GGUS:114076). A possible yum workaround is sketched after this list.
  • T0/T1 services: CERN LFCs to be switched off on 22nd June. Various dCache updates.
  • T0 news: NTR
  • T1 feedback: NTR
  • T2 feedback: NTR
  • ALICE: Smooth Run-2 start.
  • ATLAS: Smooth start. Rucio good. mc15 (mcore) running well. First week of high-level trigger reprocessing done. T0 WNs slow, perhaps due to an OpenStack update. Issue with the FTS bring-online daemons.
  • CMS: Digi-Reco going well. T0 cores quota increased. Bad transfers from T0 being investigated.
  • LHCb: T1 request for RAW data to go to same tape set where possible.
  • gLExec: NTR
  • RFC proxies: Tried on sam-alice-preprod and led to unstable proxies. Fix needed.
  • M/J features: NTR
  • MW readiness: MW readiness app moved to production node. Next meeting 17th June @ 3pm BST.
  • Multicore: ATLAS goal is for 80% of production resources to be multicore.
  • IPv6: NTR
  • Squid mon/HTTP proxy discovery: Slow progress but picking up.
  • Network & transfer metrics WG: perfSONAR update given at the LHCOPN-LHCONE meeting. Meshes now stable. RAL issues with latency and bandwidth. A bug was found in parsing trace path results. Network incidents process defined. Next meeting 10th June (to cover WLCG meshes, the proximity service and FTS performance).
  • HTTP Deployment TF: Second meeting was on 3rd June.
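
For sites that have the EPEL testing repository enabled, one way to avoid picking up the broken Globus package until a fixed build appears is a yum exclude. This is a minimal sketch only, assuming a stock epel-testing repo file on an EL6 host; the file path and repo id may differ locally:

  # /etc/yum.repos.d/epel-testing.repo (illustrative excerpt)
  [epel-testing]
  name=Extra Packages for Enterprise Linux 6 - Testing - $basearch
  enabled=1
  gpgcheck=1
  # Hold back the globus-gssapi-gsi build reported broken in GGUS:114076
  exclude=globus-gssapi-gsi*

Removing the exclude line once a fixed build reaches the repository restores normal updates.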


Tier-1 - Status Page
  • A reminder that there is a weekly Tier-1 experiment liaison meeting.
  • The agenda follows this format:
    • 1. Summary of Operational Status and Issues
    • 2. Highlights/summary of the Tier1 Monday operations meeting (Grid Services; Fabric; CASTOR and Other)
    • 3. Experiment plans and operational issues (CMS; ATLAS; LHCb; ALICE and Others)
    • 4. Special presentations
    • 5. Actions
    • 6. Highlights for Operations Bulletin Latest
    • 7. AoB

Tuesday 8th June

  • We have announced an outage on Wednesday 17th June. There will be a 15-minute break in connectivity to the site between 2.00 and 2.30 (BST), when all services will be unavailable. Castor will be shut down just before this and restarted afterwards. This is to enable a test of a change to our network configuration that was made a couple of weeks ago.
Storage & Data Management - Agendas/Minutes

Wednesday 27 May

  • Working on troubleshooting DIRAC data for/with LIGO (not to be confused with DiRAC or with any of the other things called DiRAC)
  • Working on setting up DiRAC at Tier 1 (not to be confused with DIRAC or Dirac or with any other thing called Dirac)
  • New secret user support list!

Tuesday 18th May

Tuesday 21st April

  • Has there been any Tier-1 contact with DiRAC?
  • Proposal to set up an 'other VOs' users list. GridPP-Users is too tied to WLCG projects.

Wednesday 15 April

  • Backing up data from DiRAC to GridPP (tape)
  • More case studies on supporting non-LHC VOs on GridPP: we have a lot of capable infrastructure, but non-LHC VOs tend to have less regimented data models, so more case studies would probably help.


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 9th June

  • Delay noted for Sheffield

Tuesday 26th May

  • Delay noted for Sheffield.

Tuesday 12th May

  • Issues noted with sync for Brunel, Liv, ECDF (see EGI ticket 113473). Message broker issues (memory related) are likely the underlying EGI problem.
  • Need to check on VAC sync publishing.


Documentation - KeyDocs

Tuesday 9th June

LSST voms2 records are not present in VOID cards yet. As a workaround, a temporary note of the actual values has been added to the LSST section of Approved VOs.

https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs

General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Tuesday 21st April

  • The Approved VOs document has been updated to take account of changes to the Ops Portal VOID cards. For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503. Sites that support SNOPLUS.SNOLAB.CA should ensure that their configuration conforms to these settings: Approved VOs
  • KeyDocs still need updating following the agreements reached at the last core ops meeting.
  • New section in the Wiki called "Project Management Pages". The idea is to cluster all Self-Edited Site Tracking Tables in here. Sites should keep entries in Current Activities up to date. Once a Self-Edited Site Tracking Table has served its purpose, the PM will move it to the Historical Archive or otherwise dispose of the table.
Interoperation - EGI ops agendas

Tuesday 21st April

  • There was an EGI ops meeting on Monday 20th.
  • David updated the UK SL5 response.
  • Please review the agenda/minutes.

Monday 9th March

  • The agenda for February's EGI ops meeting is here. Minutes are here.
    • APEL 1.4.0
      • Added Month and Year columns to primary key of CloudSummaries table in cloud schema.
    • DPM-Xrootd 3.5.2 is in EPEL stable - this is the first version of the component compatible with xrootd4
    • gLExec-wn - v. 1.2.3: lcmaps-plugins-c-pep 1.3.0-1 & mkgltempdir 0.0.5-1
      • "The lcmaps-plugins-c-pep-1.3.0-1 preferably needs the argus-pep-api-c-2.3.0. This version will be released into EMI & UMD repositories in a near future."
    • UMD 3.11.0 released on 16.02.2015, UMD 3.11.1 released on 4.03.2015
    • lcg-CA 1.62 noted, with an intention to broadcast these releases as they occur rather than monthly.
    • EGI looking at the decommissioning of SL5, possibly by end of 2015, as a byproduct of adding CentOS 7 to UMD. NGIs to make a note if extended SL5 support is required.
    • Vincenzo Spinoso has joined EGI Ops team from NGI_IT. Vincenzo will chair EGI Ops.
    • Next meeting is April 20th.


Monitoring - Links MyWLCG

Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Monday 8th June

  • The EGI repository has now come back, so the ARC alarms have cleared, but the site availabilities probably need to be corrected.
  • Still getting intermittent (on/off) BDII alarms for a variety of sites.

Monday 11th May

  • Rota responses awaited from Andrew and Daniela.
  • Please upload the handover summary to the bulletin.


Rollout Status WLCG Baseline

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.

Tuesday 17th March

  • Daniela has updated the [https://www.gridpp.ac.uk/wiki/Staged_rollout_emi3 EMI-3 testing table]. Please check it is correct for your site. We want a clear view of where we are contributing.
  • There is a middleware readiness meeting this Wednesday. Would be good if a few site representatives joined.
  • Machine job features solution testing. Fed back that we will only commence tests if more documentation is made available. This puts the HTCondor solution on hold until after CHEP. Is there interest in testing other batch systems? Raul mentioned SLURM. There are also SGE and Torque.


Security - Incident Procedure Policies Rota

Tuesday 9th June


Tuesday 18th May

  • EGI SVG and CSIRT Advisory ("Critical/Low?"): VENOM QEMU vulnerability (CVE-2015-3456).
  • Issue with VM appliance - image ships with ...
  • EGI SVG Advisory 'High' Risk - Dirac SQL injection vulnerability [EGI-SVG-2014-7553]
  • IGTF is about to release an update to the trust anchor repository (1.64)

Tuesday 12th May


Services - PerfSonar dashboard | GridPP VOMS

- This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)

Tuesday 12th May

  • LHCOPN & LHCONE joint meeting at LBL June 1st & 2nd. Agenda taking shape.

Tuesday 31st March

Tuesday 10th March

  • From the recent WLCG meeting, two slides (1 & 2) give the direction of the network monitoring and metrics progress: integration of perfSONAR event types into experiment monitoring and an architecture for data to get from RSV probes to client. Components described on slide 3.
  • The next LHCOPN and LHCONE joint meeting will take place on Monday 1st and Tuesday 2nd of June 2015 in Berkeley (US) (hosted by LBL and ESnet).
Tickets

Monday 8th June 2015, 15.00 BST
21 Open Tickets this week.

TIER 1
113914 (26/5)
Sno+ had problems at the Tier 1 where jobs failed whilst uploading data, believed to be due to an incorrect VOInfoPath. The issue could not be replicated, and the VOInfoPath advertised is correct. Very confusing, as I assume it all worked at some point before! In progress (2/6)

113910 (26/5)
Another Sno+ ticket, concerning lcg-cp timeouts whilst staging data from tape. Matt M has asked for advice on the best practice for doing this, or whether Sno+ would be better off just upping their timeouts. Brian has given some advice on using the "bringonline" command, but is himself unsure of the best way of seeing which files are currently in a VO's cache. Not much news since. In progress (28/5)
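
For reference, a minimal sketch of what a tape recall request looks like via the gfal2 Python bindings (gfal2-python); the SURL below is a placeholder, the pin-lifetime and timeout values are illustrative, and this is not necessarily the exact approach Brian had in mind:

  import gfal2

  # Placeholder SURL - substitute a real SNO+ file on the Tier 1 Castor endpoint.
  surl = "srm://srm-snoplus.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/snoplus/example.root"

  ctx = gfal2.creat_context()
  # bring_online(surl, pin_lifetime_seconds, timeout_seconds, async);
  # with async=False the call blocks until the file is staged to disk or the
  # timeout expires, so large batches are better requested asynchronously.
  ctx.bring_online(surl, 86400, 3600, False)

Simply upping the lcg-cp timeouts, the alternative raised in the ticket, avoids the extra step but leaves the copy blocked while the recall is queued.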

114004 (31/5)
Atlas transfers fail due to the "bring-online" timeout being exceeded. Brian spotted a problem with file timestamps mismatching, but no news on this ticket since. In progress (1/6)

BRUNEL
114006 (31/5)
APEL accounting oddness at Brunel, noticed by the APEL team. After much to-and-fro-ing John noticed that multiple CEs were using the same SubmitHost, and thus overwriting each other's sync records. Something to watch out for. In progress (7/6)

IMPERIAL
114157 (8/6)
There's been some debate on the atlas lists about this ticket, a classic "not enough space at the site" ticket. Rising above the indignation over being ticketed for this, Daniela has offered a couple of TB to give some space, and pointed out that IC have some atlas data outside space tokens, which could be used to expand the tokens if cleaned up. Waiting for reply (8/6)

Tools - MyEGI Nagios

Tuesday 09 June 2015

  • ARC CEs were failing the Nagios test because of the non-availability of the EGI repository (the Nagios test compares the CA version against the EGI repo). The problem started on 5th June, when one of the IP addresses behind the web server stopped responding; it went away in approximately 3 hours, then started again on 6th June and was finally fixed on 8th June. No reason was given in any of the tickets opened regarding this outage.

Tuesday 17th February

  • Another period where message brokers were temporarily unavailable was seen yesterday. Any news on the last follow-up?

Tuesday 27th January

  • An unscheduled outage of the EGI message broker (GRNET) caused a short-lived disruption to GridPP site monitoring (jobs failed) last Thursday 22nd January. We suspect BDII caching meant there was no immediate failover to stomp://mq.cro-ngi.hr:6163/ from stomp://mq.afroditi.hellasgrid.gr:6163/.
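
For context, client-side failover between the two brokers normally just means handing the client library both endpoints. A minimal sketch using the stomp.py library; the topic name is made up for illustration and this is not the actual Nagios/SAM configuration:

  import stomp

  # Both brokers named in the item above; the client tries them in order, so
  # if the GRNET broker is down it should fall back to the Croatian one.
  conn = stomp.Connection(host_and_ports=[
      ('mq.afroditi.hellasgrid.gr', 6163),
      ('mq.cro-ngi.hr', 6163),
  ])
  conn.connect(wait=True)
  conn.subscribe(destination='/topic/grid.monitoring.results', id=1, ack='auto')

The suspected problem sits one layer up: the broker list itself is discovered via the BDII, so a cached entry pointing only at the failed broker defeats this kind of failover.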


VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • Enabling and supporting our joining communities is a current priority.

Tuesday 5th May

  • We have a number of VOs to be removed. A dedicated follow-up meeting has been proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
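
For sites checking their configuration, the change shows up in the vomses entries for these two servers. A sketch of the expected format follows; the DN field is a placeholder, not the real host certificate subject:

  # e.g. /etc/vomses/snoplus.snolab.ca-voms02.gridpp.ac.uk
  "snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15503" "/C=UK/O=.../CN=voms02.gridpp.ac.uk" "snoplus.snolab.ca"
  # and likewise port 15503 in the corresponding entry for voms03.gridpp.ac.uk

Sites managing VO configuration via YAIM or a configuration management system should make the equivalent change there rather than editing the files directly.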

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST is now enabled at 3 sites. No LSST-specific ('own') CVMFS repository yet.
Site Updates

Tuesday 24th February

  • Next review of status today.

Tuesday 27th January

  • Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster
  • Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.

Tuesday 2nd December

  • Multicore status. Queues available (63%)
    • YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
    • NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)
  • According to our table for cloud/VMs (26%)
    • YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
    • NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)
  • GridPP DIRAC jobs successful (58%)
    • YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
    • NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)
  • IPv6 status
    • Allocation - 42%
    • YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
    • NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex (11)
    • Dual stack nodes - 21%
    • YES: Brunel; IC; QMUL; Oxford (4)
    • NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex, RAL T1 (15)


Tuesday 21st October

  • High loads seen in xroot by several sites: Liverpool and RAL T1... and also Bristol (see Luke's TB-S email on 16/10 for questions about changes to help).

Tuesday 9th September

  • Intel announced the new generation of Xeon based on Haswell.



Meeting Summaries
Project Management Board - Members Minutes Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda. The meeting takes place on Vidyo.

Wednesday 10th June 2015 Operations report

  • Ongoing investigation into Castor performance issues for CMS.
  • The second tranche of 2014 Worker Node purchases has been put into production.
  • There is a short outage announced for next Wednesday (17th June) to test the recent change in the network routing and to confirm whether the problem with the Tier1 network router can still be reproduced.
WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved and the grid is filled again.

• MC15 is expected to start soon, waiting for physics validations; evgen testing is underway and close to being finalised. Simulation is expected to be broadly similar to MC14, with no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run-2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

• Rucio dumps are available.

Dark data cleaning

• Lost files declaration: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• Webdav panda functional tests with Hammercloud are ongoing

Monitoring

Main page

DDM Accounting

space

Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months the T2 sites performing below 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A