Operations Bulletin Latest

Bulletin archive


Week commencing Monday 4th April 2016
Task Areas
General updates

Monday 4th April


Tuesday 22nd March


WLCG Operations Coordination - Agendas Wiki Page

Tuesday 22nd March

  • There was a WLCG ops coordination meeting last Thursday.
    • T0 & T1s requested to check 2016 pledges attached to the agenda.
    • The Multicore TF accomplished its mission. Its twiki remains as a documentation source.
    • The gLExec TF also completed. Support will continue. Its twiki is up-to-date.
  • There was a WLCG Middleware Readiness meeting last Wednesday.

Tuesday 15th March

Tuesday 1st March

  • The next WLCG Ops Coord meeting originally planned for this Thursday 3rd March (1st Thursday of the month) will be postponed since it clashes with the ATLAS S&C and CMS Data Management 2-day workshop.
  • MJF technical note and Torque/PBS implementation available.
Tier-1 - Status Page

Tuesday 5th April

A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting here

  • The Castor 2.1.15 update is waiting on the resolution of a problem around memory usage by the Oracle database behind Castor. In the meantime we will carry out the (separate) update of the Castor SRMs to version 2.14.
  • "GEN Scratch" storage in Castor will be decommissioned.
  • Catalin is organizing the CernVM Users Workshop at RAL on the 6-8 June 2016.
  • We have had a couple of network issues in the last period. The first one (Thursday 24 - Friday 25 March) only affected data transfers in/out. The second, on the evening of Tuesday 30th March between around 18:30 and 21:00, affected access to all services.
  • Eight disk servers, each with 111TBytes storage, have been deployed to AtlasdataDisk.
  • A load balancer (a pair of systems running "HAProxy") has been introduced in front of the "test" FTS3 instance which is used by Atlas. At present this just handles some 20% of the requests to the service. It will gradually ramp up to handle all requests.
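
For illustration, a minimal sketch of the kind of backend definition each HAProxy node in such a pair might carry is shown below. The hostnames are placeholders and port 8446 (the usual FTS3 REST port) is an assumption; TCP passthrough is assumed so that client X.509 authentication still terminates on the FTS3 servers themselves.

  frontend fts3_rest
      bind *:8446
      mode tcp
      option tcplog
      default_backend fts3_servers

  backend fts3_servers
      mode tcp
      balance roundrobin
      # Placeholder FTS3 hosts; 'check' enables basic TCP health checking
      server fts3-a fts3-test-01.example.ac.uk:8446 check
      server fts3-b fts3-test-02.example.ac.uk:8446 check

The gradual ramp-up from roughly 20% to all requests would then be controlled outside HAProxy, for example by changing which alias clients resolve.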
Storage & Data Management - Agendas/Minutes

Wednesday 30 Mar

Wednesday 16 Mar

  • Oxford's new shiny DPM and others - if tests fail intermittently, surely the tests are wrong?
  • MySQL optimisation. Who is afraid of 100GB?

Monday 14th March

  • Topics that we would want covered at the next DPM Collaboration Meeting

Wednesday 02 Mar

  • GridPP as a data infrastructure - towards the science DMZ?

Wednesday 24 Feb

  • Snoplus data model. Looks a lot like DiRAC? (so far?)
  • The write up is in the minutes!

Wednesday 17 Feb

  • Report from the secret ATLAS meeting on Monday
    • Sites to run cache with RUCIO and ARC CEs?
    • Dynafed or RUCIO redirect?
  • Which goals if any should we set for achieving interesting things to be reported at GridPP36?
Tier-2 Evolution - GridPP JIRA

Tuesday 5 Apr

  • Vac 00.22 release (01.00 pre-release) ready. Includes Squid-on-hypervisor config in Puppet module.
  • Aim to do Vac 01.00 release immediately after Pitlochry.

Tuesday 22 Mar

  • GridPP DIRAC SAM tests now have gridpp and vo.northgrid.ac.uk pages
  • Some pheno jobs have been run on VAC.Manchester.uk
  • ATLAS cloud init VMs now running at Liverpool too

Tuesday 15 Mar

  • Vac 00.21 deployed on all/some machines at all 5 production Vac sites
  • Oxford Vac-in-a-Box set up in production
  • ATLAS pilot factory at CERN configured for our ATLAS cloud init VMs
  • ATLAS cloud init VMs running production jobs at Manchester and Oxford; being enabled at other sites
  • vo.northgrid.ac.uk DIRAC jobs running in GridPP DIRAC VMs: should work for any GridPP DIRAC VO, since a parameter to the VM config.
  • Multipayload LHCb pilot scripts tested in VMs: same pilot scripts can be used on VM or batch sites in multiprocessor slots.

Monday 29 Feb

  • Vac 00.21 released, with new MJF
  • EGI operations MB presentation positively received


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 9th February

  • 4th Feb: The data from the APEL summariser that was fixed yesterday has now propagated through the data pipeline and the Accounting Portal views and the Sync and Pub tests are all working again.
  • Sheffield is slightly behind other sites (but looks normal) and so is QMUL.

Tuesday 24th November

  • Slight delay for Sheffield.

Tuesday 3rd November

  • APEL delay (normal state) for Lancaster and Sheffield.

Tuesday 20th October

The WLCG MB decided to create a Benchmarking Task force led by Helge Meinhard - see talk

Documentation - KeyDocs

Tuesday 23rd February

  • Tom has converted (most of) the DIRAC tutorial on job submission to work with the GridPP DIRAC service. You can find the guide here.
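
To give a flavour of what the tutorial covers, a minimal job submission through the DIRAC Python API might look like the sketch below. This is illustrative only (not taken from the guide) and assumes a working DIRAC client installation and a valid proxy; the executable and job name are arbitrary.

  #!/usr/bin/env python
  # Minimal GridPP DIRAC job submission sketch (illustrative only).
  from DIRAC.Core.Base import Script
  Script.parseCommandLine()  # initialise the DIRAC client environment

  from DIRAC.Interfaces.API.Dirac import Dirac
  from DIRAC.Interfaces.API.Job import Job

  j = Job()
  j.setName("gridpp_hello")
  j.setExecutable("/bin/echo", arguments="Hello from the GridPP DIRAC service")

  result = Dirac().submitJob(j)  # returns an S_OK/S_ERROR style dictionary
  print(result)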


Tuesday 9th February

  • Guidelines for using the DIRAC command line tools and the DIRAC File Catalog metadata functionality have been added to the UserGuide.

Tuesday 12th January

  • The VO ID cards (and hence the Yaim records) for CDF, PLANCK, SUPERBVO, LSST, MAGIC and ENMR have changed a bit. Sites that support any of these may want to have a look. See the GridPP approved VOs page.

General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas

Monday 14th March


Monitoring - Links MyWLCG

Tuesday 1st December


Tuesday 16th June

  • F Melaccio & D Crooks decided to add an FAQ section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Tuesday 15th March

  • A normal week with few alarms, all fixed in time. Birmingham has a low availability ticket. ECDF has a ticket on hold as it concerns the test ARC CE, and putting this CE in downtime might affect proper job tests from ATLAS.


Tuesday 16th February

  • Team membership discussed at yesterday's PMB. We will need to look to the larger GridPP sites for more support.
Rollout Status WLCG Baseline

Tuesday 7th December

  • Raul reports: validation of the site BDII on CentOS 7 is done.

Tuesday 15th September

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.


References


Security - Incident Procedure Policies Rota

Tuesday 5th April

Tuesday 22nd March

  • Some sites still appearing as vulnerable to SVG:Advisory-SVG-CVE-2016-1950
  • EGI CSIRT broadcast relating to compromised FedCloud VM instance. Does not appear to affect UK.
  • WLCG MW Readiness Minutes, see: On the deployment of Pakiti in WLCG production
  • The IGTF will release a regular update to the trust anchor repository (1.73) - for distribution ON OR AFTER March 28th

Tuesday 15th March

The EGI security dashboard.


Services - PerfSonar dashboard | GridPP VOMS

- This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)

Tuesday 22nd March

Tuesday 8th December

  • Given the recent network issues and the role of GridPP DIRAC, there are plans to have a slave DNS for gridpp.ac.uk at Imperial and hopefully the T1 too (a sketch of what such a secondary zone declaration looks like follows this list). Andrew will seek an update to the whois records and name servers once the other host sites are confirmed.
  • Looking at the network problems this week will be of interest. Duncan supplied this link and Ewan the one for the dual stack instance.
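
On the slave DNS item above: a secondary zone for gridpp.ac.uk on a BIND name server would look roughly like the named.conf fragment below. The master address and file path are placeholders, not the real GridPP values.

  zone "gridpp.ac.uk" {
      type slave;
      masters { 192.0.2.53; };        // placeholder address of the primary name server
      file "slaves/gridpp.ac.uk.db";  // local copy of the transferred zone
  };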
Tickets

Monday 4th April 2016, 14.00 BST
26 Open UK Tickets this month.

NGI
119995 (7/3)
Uncertified site ticket for the UK - Jeremy is on the case, and there appears to be no need to rush. In progress (4/4)

120588 (4/4)
A fresh ticket, saying we have achieved insufficient "Quality of Support performance" - we had an average of a 1.4 day response time for very urgent tickets during March.

I've looked into this using the GGUS report viewer and I believe we're being accused of a crime we only technically committed (if I'm looking at things right). We only had 2 "very urgent" tickets in this period, and for one of them the site forgot to put it In Progress, so it had an erroneous response time of two and a half days. Averaged with the single other very urgent ticket (roughly (2.5 + 0.3)/2 ≈ 1.4 days), this pushed our average response time over 1 day. Poor statistics is a right blimmer. I've updated the ticket - which was solved whilst I wrote the report.

The take home from this - please remember to set your tickets In Progress! It does actually matter (kinda).

SUSSEX
118337 (14/12/15)
Sussex Storage down for Sno+ - I assume this is still the case? Jeremy M replied a while ago but no news since. On Hold (15/2)

117894 (23/11/15)
One of the last Atlas Consistency Checking tickets - in a similar state to the former. On Hold (25/1). Update - solved by Alessandra; they can make do without it for Sussex.

118289 (10/12/15)
gridpp pilots at Sussex - again, no news. On Hold (25/1)

I was supposed to poke the Sussex tickets before Easter but local things came up - I will prod them after tomorrow's meeting if we don't get a chance to discuss them during.

RALPP
118628 (5/1)
LZ support at RALPP. Chris tried to roll out the LZ-friendly test version of ARC to a production server but hit a roadblock and had to roll back. Chris is waiting on the fix to go out into the proper repositories, and is interested to see how things fare on a test CentOS 7/UMD4 ARC CE he has brewing (no pun intended). On hold (22/3)

120282 (18/3) Atlas HTTP taskforce ticket. Chris has asked that the tests be re-aimed at another, less-loaded server. Waiting for reply (1/4)

OXFORD
120019 (7/3)
A CMS ticket asking the Oxford T3 to change its xrootd federation subscription. Ewan was the chap who first responded to this ticket, but it has been quiet since - it needs some attention. In progress (7/3)

117892 (23/11/15)
The other holdout of the Atlas Storage Consistency Checking tickets, and again in a similar state. In progress (24/3)

120345 (22/3)
An ATLAS ticket asking Oxford to update their xrootd monitoring settings. Kashif battled this issue with Ilija's help, and with luck it can be closed. In progress (31/3)

BIRMINGHAM
119957 (4/3)
A ROD availability ticket after their SE DB crisis, just waiting for the alarms to go green. On hold (31/3)

GLASGOW
117706 (19/11/15)
Pheno (and other?) pilots at Glasgow. Gareth reports that they should have their new identity management system up and running soon (if it arrived on time). On Hold (23/3)

118052 (30/11/15)
ATLAS HTTP Taskforce ticket. Reopened just before Easter after tests started failing with TLS issues. Reopened (24/3)

120351 (22/3)
The first of a few 'enable LSST' tickets - on hold until the new identity management system is up and running. On hold (23/3)

120135 (11/3)
I'm not entirely sure why you chaps got a second http TF ticket, but you have (for a slightly different issue). In progress (1/4)

EDINBURGH
120004 (7/3)
ROD ticket for the test ARC CE fronting ARCHER, where tests fail as expected. I remember years ago being among many who couldn't think of a good reason to keep the "Production=yes, Monitoring=no" option, so they got rid of it - but it would perfectly apply here. How long can the ROD keep this ticket on hold before the dashboard self-destructs? On hold (29/3)

SHEFFIELD
118764 (12/1)
Another HTTP TF ticket. Elena kicked the services a while ago, but no news since (and the tests are still not passing by the looks of things). In progress (24/2)

114460 (18/6/15)
gridpp pilots at Sheffield. Did you get round to having a look at this? In progress (29/2)

MANCHESTER
120430 (24/3)
Ticket tracking the setup of Manchester for IceCube glideins (the coolest of VOs...). It opens with a request to the Manchester site admins to enable their user (looks like just the one pilot DN), but there has been no reply - perhaps the Mancunians missed that the ticket has been assigned to them. Assigned (24/3)

LANCASTER
120412 (24/3)
Atlas deletion errors at Lancaster - caused by a few files badly drained back in 2014. I'm trying to figure out a clever, database-y way of listing all the files on these long-gone servers (the best I've got so far is `select * from Cns_file_replica where host like 'fal-pygrid-%';`, but of course the dpns mapping isn't that straightforward). Expect a cry for help soon! In progress (4/4)
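
As a very rough starting point (a sketch only, not a tested solution), the dpns mapping can be recovered by walking the parent_fileid links in the name-server database. This assumes the standard DPM/DPNS schema (Cns_file_replica and Cns_file_metadata in the cns_db database) and read-only credentials, and keeps the illustrative 'fal-pygrid-%' host pattern; the hostname and password are placeholders.

  #!/usr/bin/env python
  # Sketch: list DPNS paths for replicas recorded on long-gone disk servers.
  import pymysql

  conn = pymysql.connect(host="dpm-headnode.example.ac.uk", user="dpm_ro",
                         password="CHANGEME", db="cns_db")
  cur = conn.cursor()

  def dpns_path(fileid):
      """Rebuild the namespace path by walking parent_fileid up to the root."""
      parts = []
      while True:
          cur.execute("SELECT parent_fileid, name FROM Cns_file_metadata "
                      "WHERE fileid = %s", (fileid,))
          row = cur.fetchone()
          if row is None:
              return None              # orphaned entry, path not recoverable
          parent, name = row
          if parent == 0:              # reached the namespace root
              break
          parts.append(name)
          fileid = parent
      return "/" + "/".join(reversed(parts))

  cur.execute("SELECT fileid, host, sfn FROM Cns_file_replica "
              "WHERE host LIKE %s", ("fal-pygrid-%",))
  for fileid, host, sfn in cur.fetchall():
      print("%s %s %s" % (dpns_path(fileid), host, sfn))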

RHUL
119509 (12/2)
Sno+ job directories being cleaned up prematurely. It looks like this problem could have been transient - Matt M submitted some test jobs and didn't see the problem, and is re-testing with some proper work. Hopefully those tests completed okay. In progress (22/3)

QMUL
120352 (22/3)
Request to enable LSST at QM. Dan has asked for a reminder after/during GRIDPP36. On hold (24/3)

120204 (15/3)
LHCb is having issues with some of the QM CEs. The reasons for this are unclear - pilots stopped around the start of March and the problem persisted at last check. In progress (17/3)

THE TIER 1
117683 (18/11/15)
CASTOR not publishing GLUE2. It's being worked on in people's spare time - any recent news? If not, maybe progress is slow enough to warrant on-holding the ticket. In progress (17/2)

119841 (1/3)
HTTP TF ticket, this time for LHCB. Proxy functionality isn't working (although regular cert/key pair access is okay) - this functionality was never turned on and is being looked into. In progress (22/3)

120350 (22/3)
Request to enable LSST at the Tier 1. Daniela notes that the Tier 1 will likely hit the same problem as RALPP for LZ (118628), Andrew L concurs. Pool accounts have been requested, things chug along nicely. In progress (22/3)

Tools - MyEGI Nagios

Tuesday 5th April 2016

Oxford had a scheduled network warning, so the active Nagios instance was moved from Oxford to Lancaster. I am not planning to move it back to Oxford for the time being.


Tuesday 26th Jan 2016

One of the message brokers was in downtime for almost three days. The Nagios probes pick a random message broker and failover is not working, so a lot of ops jobs hung for a long time. It's a known issue and unlikely to be fixed as SAM Nagios is on its last legs. Monitoring is moving to ARGO and many things are not clear at the moment.

Monday 30th November

  • The SAM/ARGO team has created a document describing the availability/reliability calculation in the ARGO tool.
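
For orientation (the ARGO document is the authoritative reference), the usual EGI definitions are along these lines:

  Availability = T_up / (T_total - T_unknown)
  Reliability  = T_up / (T_total - T_unknown - T_scheduled_downtime)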


Tuesday 6 Oct 2015

Moved the Gridppnagios instance back to Oxford from Lancaster. It was a kind of double whammy as both sites went down together. Fortunately the Oxford site was partially working so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours but there was no effect on EGI availability/reliability. Sites can have a look at https://mon.egi.eu/myegi/ss/ for A/R status.

Tuesday 29 Sep 2015

Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st Oct and this may be extended depending on the situation. The VO Nagios was also unavailable for two days but we restarted it yesterday as it runs on a VM. The VO Nagios uses an Oxford SE for its replication test so it is failing those tests; I am looking to change to some other SE.

VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority for enabling/supporting our joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
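
For sites checking their own configuration, the port appears in the client-side vomses entry for the VO; with the new port an entry would look something like the line below (the host certificate DN shown is a placeholder, not the real subject):

  "snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15503" "/C=UK/O=eScience/OU=placeholder/CN=voms02.gridpp.ac.uk" "snoplus.snolab.ca"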

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 23rd February

  • For January:

ALICE: All okay.

RHUL: 89% : 89%
Lancaster: 0% : 0%
RALPP: 80% : 80%
RALPP: 77% : 77%

  • Site responses:
    • RHUL: The largest problem was related to the SRM. The DPM version was upgraded and it took several weeks to get it working again (13 Jan onwards). Several short-lived occurrences of running out of space on the SRM for non-ATLAS VOs. For around 3 days (15-17 Jan) the site suffered from a DNS configuration error by their site network manager which removed their SRM from the DNS, causing external connections such as tests and transfers to fail. For one day (25 Jan) the site network was down for upgrade to the 10Gb link to JANET. Some unexpected problems occurred extending the interruption from an hour to a day. The link has been successfully commissioned.
    • Lancaster: The ASAP metric for Lancaster for January is 97.5%. There is a particular problem with ATLAS SAM tests, related to the path name being too long, which doesn’t affect the site’s production and analysis activity. A re-calculation has been performed.
    • RALPP: Both CMS and LHCb low figures are due to specific CMS jobs overloading the site SRM head node. The jobs should have stopped now.



Meeting Summaries
Project Management Board - MembersMinutes Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) - Agenda. The meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier1 report farther up this page.

WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved and the grid is filled again

• MC15 is expected to start soon, waiting for physics validation; evgen testing is underway and close to finalised. Simulation is expected to be broadly similar to MC14, with no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

• Lost-file declaration: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• WebDAV PanDA functional tests with HammerCloud are ongoing

Monitoring

Main page

DDM Accounting

space

Deletion

ASAP

  • ASAP (ATLAS Site Availability Performance) is in place. Every 3 months the T2 sites performing below 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A