Operations Bulletin Latest


Bulletin archive


Week commencing Monday 7th November 2016
Task Areas
General updates

Monday 7th November

Monday 31st October

  • Please register for the HEPSYSMAN meeting next week. Agenda.
  • The APEL team has scheduled a down time for 10:30am - 12:30pm (UTC) on the 1st November. This is to allow us to upgrade our machines with a new kernel.
  • There is an extension of the paper call for the International Symposium on Grids and Clouds (ISGC) 2017.
  • The next WLCG GDB is on 9th November. The agenda can be found here.
  • GridPP website: AM informs us the web server VM and the hypervisor are now running shiny new kernels. He undertook some preliminary checks and the pages, Wiki, database etc all look ok, but we should inform him of any issues observed.
  • Thread related to: Change to Approved VOs (and RPMs). What were the conclusions?


Monday 24th October

  • Message to GridPP CB regarding Tier-2 hardware spend allocations. GridPP will use the second table.
  • Minutes are available from today's WLCG ops meeting.
  • The APEL team have notified us that the APEL Accounting Repository has been doing some internal processing of its cloud data; to let this run at maximum speed, the summarising of grid sites has been suspended. This is reflected in the portal not being updated and in the APEL Pub tests showing several days since your site last published.
  • HEPiX took place last week. See the detailed agenda for more information.
  • Request for a Grid Engine site to test a new implementation of the machine job features plug-in.


Monday 17th October


Monday 26th September

WLCG Operations Coordination - Agendas Wiki Page

Monday 7th November

Tuesday 25th October

Monday 3rd October

  • There was a WLCG ops coordination meeting last week: Agenda. (Good to review in the ops meeting).

Monday 26th September

Monday 19th September


Tier-1 - Status Page

Tuesday 8th November A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting here

  • Still some mopping up after CVE-2016-5195
  • The CVMFS Stratum0 server has been replaced with newer hardware.
  • Intervention by Oracle on the Tier1 tape library went OK last Wednesday.
  • Owing to staff availability, the upgrade of Castor to version 2.1.15 is being scheduled to take place in January.
Storage & Data Management - Agendas/Minutes

Wednesday 09 Nov

  • Storage related issues from hepsysman?
  • ...

Wednesday 02 Nov

  • Big picture: an attempt to explain where GridPP fits with other things such as "UKT0" and other infrastructures

Wednesday 26 Oct

  • Feedback from WLCG/HEPiX - don't miss it!

Wednesday 19 Oct

  • Long list of loose ends - accounting and information systems, IPv6 surprising successes

Wednesday 12 Oct

  • Initial impressions from WLCG workshop and CHEP-so-far
  • Coming events where GridPP storage-and-data-management could be, will be, or should be (re)presented.


Tier-2 Evolution - GridPP JIRA

Tuesday 8 November

  • Agreement with ETF about how to implement SAM probes for VMs, initially for LHCb. (We will use an external service provided by LHCb which the probes inside the VMs will report to.)

Monday 24th October

  • Started HTCvcm (HTCondor Vacuum VM), using ATLAS VMs as the starting point, to provide a generic HTCondor client VM that will connect to HTCondor pools run by the local site, experiments, or larger sites.
  • Merging LHCb multipayload VM code into DIRAC Pilots repo.
  • CernVM team updated CernVM to use kernel with fix for CVE-2016-5195 ("DirtyCOW")

Monday 17th October

  • Validation of APEL accounting of VM resources and VM-only sites has been completed.
  • From 4th October: A lightweight sites questionnaire for WLCG sites has been circulated. The aim is to get to a "matrix" of approaches that sites can choose from, depending on criteria that are covered in the questionnaire.

Tue 10 Oct

  • Vac-in-a-Box 00.34 supports Vac 01.00 itself rather than pre-release (note that upgrading to 01.00 requires a reboot due to network layout changes)
  • "Vacuum Platform" specification published as HSF-TN-2016-04

Wed 05 Oct


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Monday 26th September

  • A problem with the APEL Pub and Sync tests developed last Tuesday and was resolved on Wednesday. This had a temporary impact on the accounting portal.

Tuesday 14th June

  • GridPP accounting switched to use the 'new' EGI accounting portal.
  • APEL delays from UK sites look about 'normal' (i.e. delays are typical).

Tuesday 9th February

  • 4th Feb: The data from the APEL summariser that was fixed yesterday has now propagated through the data pipeline and the Accounting Portal views and the Sync and Pub tests are all working again.
  • Sheffield is slightly behind other sites (but looks normal) and so is QMUL.


Documentation - KeyDocs

Tue 1 Nov

Publishing tutorial updated to use new wording for various measurements.

https://www.gridpp.ac.uk/wiki/Publishing_tutorial#Accounting_transmissions


Tue 20th Sept The GridPP Approved VOs document now has a link to RPM versions of the VOMS records. They are available for now via the VOMS RPMS Yum Repository. The latest version, which is consistent with the YAIM records in the Approved VOs doc, is 1.0-1. The plan is that when VO records change, the Approved VOs doc version will be incremented, and RPMs of the changed VOs (only those) will be released carrying the same version stamp as the document. Thus a site that upgrades to "latest" will get the records compatible with the newest version of the GridPP Approved VOs document.
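
As a sketch of how a site might consume these RPMs (assumptions: the repo file name and baseurl below are placeholders - take the real values from the VOMS RPMS Yum Repository page; only the package name gridpp-voms-dteam comes from the listing below):

[sjones@hep169]$ cat /etc/yum.repos.d/gridpp-voms.repo   # hypothetical repo definition
[gridpp-voms]
name=GridPP VOMS records
baseurl=https://example.gridpp.ac.uk/voms-rpms/
enabled=1
gpgcheck=0
[sjones@hep169]$ yum install gridpp-voms-dteam           # install the dteam VO records
[sjones@hep169]$ yum update 'gridpp-voms-*'              # later, pick up records matching the newest Approved VOs doc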

Note: A typical RPM contains files as follows:

[sjones@hep169]$ rpm -qlp gridpp-voms-dteam-1.0-1.noarch.rpm 
/etc/grid-security/vomsdir/dteam
/etc/grid-security/vomsdir/dteam/voms.hellasgrid.gr.lsc
/etc/grid-security/vomsdir/dteam/voms2.hellasgrid.gr.lsc
/etc/vomses/dteam-voms.hellasgrid.gr
/etc/vomses/dteam-voms2.hellasgrid.gr
/root/vo_xml/dteam.xml

The vomsdir (lsc) files (which list the DNs and CA DNs of acceptable certificates) and the vomses files (which give the coordinates of the VOMS servers of the various VOs) are provided in the normal locations, as if they had been created by YAIM. No other features of YAIM are facilitated by these RPMs. Thus they are useful for migrating away from YAIM, but they do not provide all of YAIM's functions, such as setting SW dirs or other ENV vars etc.
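
For illustration, the two kinds of files look roughly as below. This is a minimal sketch: the DN strings and port number are placeholders, not the real hellasgrid values, so check the installed files rather than copying these.

[sjones@hep169]$ # an .lsc file holds two lines: the VOMS server host DN, then its CA DN
[sjones@hep169]$ cat /etc/grid-security/vomsdir/dteam/voms.hellasgrid.gr.lsc
/C=XX/O=Example/CN=voms.hellasgrid.gr
/C=XX/O=Example/CN=Example CA
[sjones@hep169]$ # a vomses file holds one line: "alias" "host" "port" "server host DN" "vo"
[sjones@hep169]$ cat /etc/vomses/dteam-voms.hellasgrid.gr
"dteam" "voms.hellasgrid.gr" "15000" "/C=XX/O=Example/CN=voms.hellasgrid.gr" "dteam"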

Tue 6th Sept Benchmarking procedure. Contains instructions for ARC/Condor, CREAM/Torque, VAC. Needs to be updated for use with other systems.

https://www.gridpp.ac.uk/wiki/Benchmarking_procedure

Mon 1st Aug The LZ VO is now up to date in the portal, and will be updated in Approved VOs automatically from now on. Sites supporting LZ are advised to read the LZ VOMS settings section of https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs (which is between LSST and MAGIC!)

Tue 26th July

Elena has provided VOMS info for DUNE. I'm maintaining it by hand at present, and similarly for LZ.

Both should be present and correct in the Operations Portal, but are not.

https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs


General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas

Monday 7th November

  • There was an EGI Ops meeting today: agenda
  • UMD 3.14.5 released today
    • VOMS 3.5.0, which makes RFC proxies the default for voms-proxy-init (a quick way to check the resulting proxy type is sketched just after this list)
  • UMD 4.3.0 'October' release, release candidate ready, to be released by end of this week, including:
    • ARC, GFAL2, XROOT, Davix, dCache, ARGUS, Gridsite, edg-mkgrid, umd-release for CentOS7
  • Please start using UMD4/SL6 or UMD4/CentOS7 instead of UMD3/SL6, and please don't use EMI3 any more.
    • (think there may be a campaign around this soon)
  • Downtimes due to the vulnerability CVE-2016-5195: request an A/R recomputation
    • All the resource centres that were affected by the vulnerability CVE-2016-5195 and that declared a downtime between 2016-10-20 16:00 UTC and 2016-10-31 18:00 UTC are invited to request a recomputation of A/R figures for the days in which the downtime was ongoing.
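
A quick way to check what the new voms-proxy-init default gives you (a hedged sketch: the exact wording of the 'type' line varies between VOMS client versions, so treat the output shown as indicative only):

$ voms-proxy-init -voms dteam          # with VOMS 3.5.0 this should create an RFC proxy by default
$ voms-proxy-info -all | grep -i type  # inspect the proxy type
type      : RFC3820 compliant impersonation proxy
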
Monitoring - Links MyWLCG

Tuesday 1st December


Tuesday 16th June

  • F Melaccio & D Crooks decided to add a FAQs section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Monday 17th October

  • Mostly quiet. We've got six outstanding tickets, four of which have been there for a while. There's one new ticket against Liverpool's ARC-CEs. The final one is there purely to silence the availability alarm at EFDA-JET until the decommissioning process is complete.

Monday 19th September

  • Fairly quiet week, with just the usual suspects.
  • ROD responses received.

Monday 22nd August

  • Unusually quiet week. Nothing significant.
  • Portal very slow at times though.
  • New rota meet-o-matic request circulated - ROD team members please respond this week!

Monday 28th June

  • The ROD rota needs to be updated.
Rollout Status WLCG Baseline

Tuesday 7th December

  • Raul reports: validation of the site BDII on CentOS 7 is done.

Tuesday 15th September

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.


References


Security - Incident Procedure Policies Rota

Tuesday 8th November

  • UK now showing clear of both CVE-2016-5195 (D.Cow) and EGI-SVG-2016-11476 (canl-c) on EGI monitoring.
  • Talks at Hepsysman [1]
    • Matt - soundings for interest in UK pakiti instance [2]
    • David - overview of authentication using Federated Identities [3]
  • GDB tomorrow (9th Nov) "Facilitating campus and grid security teams working on the same threats - Romain & Liviu" [4]
  • EGI Security Policy Group last week [5] See SPG Drafts - [6]
    • Top Level Security Policy
    • EGI AAI CheckIn Service. Example of FIM - David's hepsysman talk, see also DI4R presentation [7]
    • Policies around FedCloud - responsibility as root user etc.

Tuesday 1st November

  • Dirty COW vulnerability - CVE-2016-5195
    • Small number of sites still showing in the EGI monitoring after the deadline. Please check the OPS Portal and acknowledge ticket promptly if you get one with explanation and plan for update.
  • The Dutch cybersecurity center (CERT of Dutch government) just published its annual threat assessment report. It gives a very good overview of trends in threats and actors, highly recommended reference material!
  • EGI Security Policy Group meeting this week [8]

Tuesday 25th October

  • Dirty COW vulnerability - CVE-2016-5195
    • Sites are asked to act to mitigate this as soon as possible - see the advisory. Hopefully by the time the meeting comes we'll have more information on an SL fix (SL7 is already available); when this arrives, sites will have 7 days to update. Sites not able, or not wanting, to apply mitigation before official patches are available have the option to go into downtime, without penalty of loss of availability (until 3 days after official patches are available, as agreed by EGI Operations).
  • EGI-SVG-2016-11476 (canl-c)
    • One or two sites still popping up on the monitoring each week.

Monday 17th October

  • Due to problems with pattern-matching filters, Pakiti was not complaining about some instances until recently. This was in connection with vulnerability EGI-SVG-2016-11476.
  • There was an EGI Trust Anchor release, 1.78-1. Please upgrade by 2016.10.18 at your earliest convenience, and check the release notes for more details.
  • FedCloud Sites have received a 'Heads Up'.
  • We are down to a few SL5 services.

Tuesday 4th October

  • Sites not upgrading for EGI-SVG-2016-11476 have now been ticketed by EGI CSIRT. Although WNs are not thought to be vulnerable, sites are asked to upgrade them as an indicator of compliance elsewhere. No UK sites have been ticketed _BUT_ it looks like the monitoring only works through CREAM CEs, so there may be ARC CE installations that would show "vulnerable" if/when the monitoring is fixed. Please check.
  • Some random stuff from DI4R
    • Keynotes (again) on European Open Science Cloud and Human Brain Project
    • Good run through of pilots/plans for "Enabling federated login to WLCG" [9]
    • Anybody developing software might like to look at OSG's Rob Quick's presentation on the Software Assurance Marketplace SWAMP, a free software QA tool which can help improve code quality and security.
    • Summary of the "WISE people take action on Security" workshop
    • High level stuff on activities around procurement of infrastructure from public cloud providers [10]
    • Bruce Becker gave a good/amusing/thoughtful lightning talk on managing distributed infrastructure in Africa [11]


The EGI security dashboard.


Services - PerfSonar production dashboard | PerfSonar development dashboard | GridPP VOMS

- This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)

Tuesday 25th October

  • Duncan has recreated the UK perfSONAR mesh. Link here!


Monday 19th September

  • UK eScience CA - certificate issuance problems. Jens reported that on the 15th a partial but significant database corruption occurred on the signing system for the CA. Data was restored from (offline) backups, but the rebuild was not correctly configured.
  • A large number of site admins and other GridPP supporters appeared to be suspended from the dteam VO last week. “During a planned upgrade operation of VOMS service, a system malfunction occurred. As a result, some users received false notification about membership expiration. We are in contact with the software development team in order to identify the cause.”


Tickets

Monday 7th November 2016, 22.00 GMT
34 Open UK Tickets this month.

AFS TICKETS (?!)
A bunch of tickets landed on our doorsteps concerning AFS - which is odd, as they were all meant for the respective Tier 3s.

124805 (3/11) AFS being blocked at RALPP, Chris has requested some changes to the RAL firewall but there's no hurry. In progress (3/11)
124823 (3/11) Birmingham's AFS ticket. Mark is unblocking UDP port 7001 in the Birmingham firewalls. In progress (4/11)
124821 (3/11) Glasgow's copy of the AFS ticket - Gareth points out that it isn't really a site problem, but has kindly passed the information on. On Hold (3/11)
124822 (3/11) Manchester's AFS ticket, again the University firewall is the likely culprit. In progress (4/11)
124819 (3/11) And Liverpool's AFS ticket... John also thinks it's the University blocking UDP. In progress (3/11)
124816 (3/11) Finally RHUL's AFS ticket. Simon forwarded it to the Tier 3. In progress (7/11)

Andrew has suggested in the Manchester ticket that these shouldn't have been submitted - being a Tier 3 and not a Tier 2 problem. I'd suggest that sites would be fine solving the ticket after passing on the details to their local counterparts, perhaps leaving contact details in the solution.

SUSSEX
124614 (24/10)
Low availability ticket, nothing exciting here. On Hold (26/10)

122772 (11/7)
xrootd/webdav ticket from atlas. Although no progress is expected the ticket could do with an update (even a null one). On Hold (26/7)

RALPP
124684 (27/10)
Another availability ticket. On Hold (3/11)

OXFORD
124487 (17/10)
IPv6 had fallen over on the Oxford perfsonar - Kashif has fixed things and Duncan has put Oxford back into the mesh, so with any luck this ticket can be closed soon. In progress (3/11) Update - solved this morning.

121924 (2/6)
Another perfsonar ticket for Oxford, this one regarding a drop in throughput. How are things looking? On Hold (10/8)

BRISTOL
124796 (3/11)
CMS ticket about Bristol being moved to the waiting room. Has it snuck under the Monday radar? Assigned (3/11)

BIRMINGHAM
122771 (11/7)
atlas xrootd/webdav ticket. Mark has tried to get xroot to work but things don't seem to be playing ball. Mark will try again when he has time - let the rest of us know if you need help Mark! In progress (25/10)

GLASGOW
120351 (22/3)
Enabling LSST at Glasgow. The VO was enabled but things weren't working - any news? Although I'm not sure any of us have had much spare time over the last month! On hold (5/10)

124862 (6/11)
atlas deletions having problems at Glasgow where the DPM is playing up, but the good fight is being fought and it looks like things are nearly sorted. In progress (7/11)

122378 (28/6)
perfsonar at Glasgow - David has reinstalled the nodes and Duncan has meshed Glasgow up, things are looking good. In progress (7/11) Update - some last minute teething troubles, but David has given everything what is hopefully one last kick.

124052 (25/9)
LHCB ticket about incorrect running/waiting jobs being published on the Glasgow ARCs. Is the plan to wait for an official release with the fix in? On hold (26/9)

EDINBURGH
124758 (1/11)
A low availability ticket for ECDF. Looks like it was caused by Ops jobs getting caught in the queues. This might have been enough to clear the alarm; if not, it's likely you'll need to put the ticket on hold until it clears itself. In progress (4/11)

QMUL
124556 (20/10)
Biomed ticketed QM over a pair of CEs not working for them - which is expected as they're in the middle of being de/re-commissioned. On Hold (20/10)

IMPERIAL
124241 (5/10)
The IC WMS not working for a na62 user. NA62 is in the process of moving to Dirac, so the ticket is being kept on hold for reference; Daniela has however found a possibly related bug that could be the root cause of this issue. On hold (7/11)

BRUNEL
124428 (13/10)
CMS transfers failing after someone cut the Brunel network link. Hopefully IPv4 traffic will be restored fully later this week. This is completely out of the site's hands, but an interesting study of how Tier 2s need their phat network pipes these days. In progress (5/11) Update - Raul solved the ticket; the extra emergency network plumbing seems to have solved the congestion problems.

100IT - has an availability ticket, 124511.

TIER ONE
124876 (7/11)
Nagios gridftp failures for the new Echo interface - which shouldn't really be tested, but needs to be set to production for atlas tests. Sounds similar to the CDF/Archer problem? Maybe PRODUCTION=Y, MONITORING=N needs to be a reinstated option? In progress (7/11) Alastair has added some extra input to the issue - perhaps though the ticket should be waiting for reply now. I'm not entirely sure if the questions posed were rhetorical or not.

124606 (24/10)
CMS consistency check, delayed by the usual consistency checker being on leave - hopefully another team member knows the invocations. In progress (1/11)

124785 (2/11)
CMS have noted that the two xroot servers at RAL need an extra config added (or something - CMS ways appear eldritch and arcane to my atlas-tempered worldview). The ticket has been acknowledged but no news. In progress (2/11)

120350 (22/3)
Enabling LSST at RAL. Test jobs failed here too. Maybe the 3 sites that had LSST payloads fail (the Tier 1, Glasgow and Lancaster) could put their heads together with a site where the jobs work and take some of the pressure off Alessandra? On Hold (12/9)

122827 (12/7)
SNO+ disk space query, upended by the departure of Matt M. David, his replacement, has provided his details and is waiting to see what the next batch of MC looks like - so this ticket should probably go On Hold. Waiting for reply (21/10)

121687 (20/5)
Packet loss for the Tier 1 Perfsonar. Brian notes the replacement of the UK Light router seems to have improved the picture somewhat, as hopefully will moving the perfsonar host within the network infrastructure. Waiting to see how that pans out. In progress (26/10)

124877 (7/11)
Nagios tests failing for one of the Tier 1 ARCs - being looked at, and it's a very fresh ticket. In Progress (7/11)

124478 (17/10)
Another WMS ticket from na62. This one got confused as ideally it needed help from WMS devs. Hopefully the na62 move to Dirac will render this ticket moot as well. On hold (1/11)

123504 (19/8)
T2K proxy expiration problem ticket. This ticket really should just be closed with the departure of Jon Perkin and the rumoured likelihood that t2k will be quiet on the grid for a while. Waiting for reply (28/10)

117683 (18/11/2015)
Glue 2 publishing for Castor. Jens provided an update last month (thanks Jens!), citing the lack of resources to commit to this - but there is a promising prototype (still far from production ready, but better than nothing). On hold (5/10)

FINALLY, IN MEMORY OF EFDA-JET
Despite being decommissioned it still has two tickets:
122198 - Decommissioning ticket, waiting for 90 days to pass before the site and ticket can be closed (end of this month).
124237 - poor Gordon found himself in the ridiculous situation of having to ticket a decommissioned site for low availability in order to stop the ROD dashboard alarming. There's a life lesson in there somewhere, but I don't want to dwell on it too much.

Tools - MyEGI Nagios

13th September 2016


19th July

Both instances of gridppnagios, at Oxford and Lancaster, have been decommissioned.

12th July 2016

The central ARGO monitoring service started on 1st July. All grid resources are monitored through two Nagios instances:

https://argo-mon.egi.eu/nagios/

https://argo-mon2.egi.eu/nagios/

They have the same interface as gridppnagios. Alarms from these instances go to the Operational Dashboard.

http://argo.egi.eu/ is a web interface which provides availability/reliability figures and site status. It is the equivalent of the old MyEGI interface, with some additional services.

I am planning to decommission both instances of gridppnagios in the coming weeks. I have stopped Nagios and httpd on both instances, so they will not send tests to grid resources in the UK. I will also decommission storage-monit.physics.ox.ac.uk, which was only used for the storage replication test.

We will keep vo-nagios.physics.ox.ac.uk running until we get a replacement for vo-monitoring.


Monday 13th June

  • Active Nagios instance moved to Lancaster

Tuesday 5th April 2016

Oxford had a scheduled network warning, so the active Nagios instance was moved from Oxford to Lancaster. I am not planning to move it back to Oxford for the time being.


Tuesday 26th Jan 2016

One of the message brokers was in downtime for almost three days. The Nagios probes pick a random message broker and failover is not working, so a lot of ops jobs hung for a long time. It's a known issue and unlikely to be fixed, as SAM Nagios is on its last legs. Monitoring is moving to ARGO and many things are not clear at the moment.

Monday 30th November

  • The SAM/ARGO team has created a document describing the availability/reliability calculation in the ARGO tool.
VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority for enabling/supporting our joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
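
For sites maintaining these records by hand, the change is just the port field in the relevant vomses entries, roughly as in this sketch (the file name and DN string below are illustrative placeholders, not the real values):

# /etc/vomses/snoplus.snolab.ca-voms02.gridpp.ac.uk - before
"snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15003" "/C=UK/O=eScience/.../CN=voms02.gridpp.ac.uk" "snoplus.snolab.ca"
# after (updated port)
"snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15503" "/C=UK/O=eScience/.../CN=voms02.gridpp.ac.uk" "snoplus.snolab.ca"
# and likewise for voms03.gridpp.ac.uk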

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 23rd February

  • For January:

ALICE: All okay.

RHUL: 89%:89%
Lancaster: 0%:0%
RALPP: 80%:80%
RALPP: 77%:77%

  • Site responses:
    • RHUL: The largest problem was related to the SRM. The DPM version was upgraded and it took several weeks to get it working again (13 Jan onwards). Several short-lived occurrences of running out of space on the SRM for non-ATLAS VOs. For around 3 days (15-17 Jan) the site suffered from a DNS configuration error by their site network manager which removed their SRM from the DNS, causing external connections such as tests and transfers to fail. For one day (25 Jan) the site network was down for upgrade to the 10Gb link to JANET. Some unexpected problems occurred extending the interruption from an hour to a day. The link has been successfully commissioned.
    • Lancaster: The ASAP metric for Lancaster for January is 97.5%. There is a particular problem with ATLAS SAM tests, related to the path name being too long, which does not affect site activity in production and analysis. A re-calculation has been performed.
    • RALPP: Both CMS and LHCb low figures are due to specific CMS jobs overloading the site SRM head node. The jobs should have stopped now.



Meeting Summaries
Project Management Board - Members Minutes Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda. The meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier1 report further up this page.

WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved and the grid is filled again

• MC15 is expected to start soon, pending physics validation; evgen testing is underway and close to being finalised. Simulation is expected to be broadly similar to MC14, with no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

• Lost files declaration: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• Webdav panda functional tests with Hammercloud are ongoing

Monitoring

• Main page
• DDM Accounting
• Space
• Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months the Tier-2 sites performing below 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A