Operations Bulletin Latest

From GridPP Wiki
Revision as of 13:43, 18 April 2016

Bulletin archive


Week commencing Monday 18th April 2016
Task Areas
General updates

Monday 18th April

  • GridPP36 took place last week. Speakers are requested to upload their talks!
  • Winnie: Do you NFS-mount /etc/grid-security/vomsdir?
  • Luke: Kernel 4.5 on SL 6 WNs
  • Luke: Accounting for 'local' groups
  • There is a HEPiX in DESY Zeuthen this week.
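On Winnie's vomsdir question, a quick way to check is to look at the filesystem type of the directory. A minimal sketch, assuming GNU stat is available (the fallback path and messages are illustrative, not from the bulletin):

```shell
#!/bin/sh
# Report the filesystem type of the vomsdir; falls back to /etc when
# the grid-security directory does not exist on this host.
VOMSDIR=/etc/grid-security/vomsdir
[ -d "$VOMSDIR" ] || VOMSDIR=/etc
FSTYPE=$(stat -f -c %T "$VOMSDIR")
case "$FSTYPE" in
  nfs*) echo "vomsdir is NFS-mounted ($FSTYPE)" ;;
  *)    echo "vomsdir is local ($FSTYPE)" ;;
esac
```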


Monday 4th April


Tuesday 22nd March


WLCG Operations Coordination - Agendas Wiki Page

Monday 18th April

  • MW Readiness WG achievements October 2015 - March 2016 - link to MB.

Tuesday 22nd March

  • There was a WLCG ops coordination meeting last Thursday.
    • T0 & T1s requested to check 2016 pledges attached to the agenda.
    • The Multicore TF accomplished its mission. Its twiki remains as a documentation source.
    • The gLExec TF also completed. Support will continue. Its twiki is up-to-date.
  • There was a WLCG Middleware Readiness meeting last Wednesday.

Tuesday 15th March

Tuesday 1st March

  • The next WLCG Ops Coord meeting originally planned for this Thursday 3rd March (1st Thursday of the month) will be postponed since it clashes with the ATLAS S&C and CMS Data Management 2-day workshop.
  • MJF technical note and Torque/PBS implementation available.
Tier-1 - Status Page

Tuesday 5th April

A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting here

  • Castor 2.1.15 update is waiting. This is pending resolution of a problem around memory usage by the Oracle database behind Castor. In the meantime we will carry out the (separate) update of the Castor SRMs to version 2.14.
  • "GEN Scratch" storage in Castor will be decommissioned.
  • Catalin is organizing the CernVM Users Workshop at RAL on the 6-8 June 2016.
  • We have had a couple of network issues in the last period. The first one (Thursday 24 - Friday 25 March) only affected data transfers in/out. The second, on the evening of Tuesday 30th March between around 18:30 and 21:00, affected access to all services.
  • Eight disk servers, each with 111 TBytes of storage, have been deployed to AtlasdataDisk.
  • A load balancer (a pair of systems running "HAProxy") has been introduced in front of the "test" FTS3 instance which is used by Atlas. At present this just handles some 20% of the requests to the service. It will gradually ramp up to handle all requests.
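The gradual ramp-up could be implemented in several ways (e.g. in DNS, or in front of the balancer); one common approach is server weights inside the HAProxy backend itself. The sketch below is an assumption about the setup with placeholder hostnames and ports, not the actual RAL configuration:

```
# Hypothetical haproxy backend: ~20% of FTS3 requests take the new path
backend fts3_test
    balance roundrobin
    server fts3-a fts3-a.example.ac.uk:8446 check weight 80
    server fts3-b fts3-b.example.ac.uk:8446 check weight 20
```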
Storage & Data Management - Agendas/Minutes

Wednesday 30 Mar

Wednesday 16 Mar

  • Oxford's new shiny DPM and others - if tests fail intermittently, surely the tests are wrong?
  • MySQL optimisation. Who is afraid of 100GB?

Monday 14th March

  • Topics that we would want covered at the next DPM Collaboration Meeting

Wednesday 02 Mar

  • GridPP as a data infrastructure - towards the science DMZ?

Wednesday 24 Feb

  • Snoplus data model. Looks a lot like DiRAC? (so far?)
  • The write up is in the minutes!

Wednesday 17 Feb

  • Report from the secret ATLAS meeting on Monday
    • Sites to run cache with RUCIO and ARC CEs?
    • Dynafed or RUCIO redirect?
  • Which goals if any should we set for achieving interesting things to be reported at GridPP36?
Tier-2 Evolution - GridPP JIRA

Tuesday 5 Apr

  • Vac 00.22 release (01.00 pre-release) ready. Includes Squid-on-hypervisor config in Puppet module.
  • Aim to do Vac 01.00 release immediately after Pitlochry.

Tuesday 22 Mar

  • GridPP DIRAC SAM tests now have gridpp and vo.northgrid.ac.uk pages
  • Some pheno jobs have been run on VAC.Manchester.uk
  • ATLAS cloud init VMs now running at Liverpool too

Tuesday 15 Mar

  • Vac 00.21 deployed on all/some machines at all 5 production Vac sites
  • Oxford Vac-in-a-Box set up in production
  • ATLAS pilot factory at CERN configured for our ATLAS cloud init VMs
  • ATLAS cloud init VMs running production jobs at Manchester and Oxford; being enabled at other sites
  • vo.northgrid.ac.uk DIRAC jobs running in GridPP DIRAC VMs: should work for any GridPP DIRAC VO, since the VO is a parameter of the VM configuration.
  • Multipayload LHCb pilot scripts tested in VMs: same pilot scripts can be used on VM or batch sites in multiprocessor slots.

Monday 29 Feb

  • Vac 00.21 released, with new MJF
  • EGI operations MB presentation positively received


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 9th February

  • 4th Feb: The data from the APEL summariser that was fixed yesterday has now propagated through the data pipeline and the Accounting Portal views and the Sync and Pub tests are all working again.
  • Sheffield is slightly behind other sites (but looks normal) and so is QMUL.

Tuesday 24th November

  • Slight delay for Sheffield.

Tuesday 3rd November

  • APEL delay (normal state) Lancaster and Sheffield.

Tuesday 20th October

  • The WLCG MB decided to create a Benchmarking Task Force led by Helge Meinhard; see talk

Documentation - KeyDocs

Tuesday 23rd February

  • Tom has converted (most of) the DIRAC tutorial on job submission to work with the GridPP DIRAC service. You can find the guide here.


Tuesday 9th February

  • Guidelines for using the DIRAC command line tools and the DIRAC File Catalog metadata functionality have been added to the UserGuide.

Tuesday 12th January

  • The VOID cards (and hence the Yaim records) for CDF, PLANCK, SUPERBVO, LSST, MAGIC and ENMR have changed a bit. Sites that support any of these may want to have a look. See the GridPP approved VOs page.

General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas

Monday 14th March


Monitoring - Links MyWLCG

Tuesday 1st December


Tuesday 16th June

  • F Melaccio & D Crooks decided to add a FAQs section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Tuesday 15th March

  • A normal week with a few alarms, which were fixed in time. Birmingham has a low-availability ticket. ECDF has a ticket on hold as it is testing an ARC CE, and putting this CE in downtime might affect proper job tests from ATLAS.


Tuesday 16th February

  • Team membership discussed at yesterday's PMB. We will need to look to the larger GridPP sites for more support.
Rollout Status WLCG Baseline

Tuesday 7th December

  • Raul reports: validation of site BDII on Centos 7 done.

Tuesday 15th September

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.


References


Security - Incident Procedure Policies Rota

Tuesday 5th April

Tuesday 22nd March

  • Some sites still appearing as vulnerable to SVG:Advisory-SVG-CVE-2016-1950
  • EGI CSIRT broadcast relating to compromised FedCloud VM instance. Does not appear to affect UK.
  • WLCG MW Readiness Minutes, see: On the deployment of Pakiti in WLCG production
  • The IGTF will release a regular update to the trust anchor repository (1.73) - for distribution ON OR AFTER March 28th

Tuesday 15th March

The EGI security dashboard.


Services - PerfSonar dashboard | GridPP VOMS

- This includes notice of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)

Tuesday 22nd March

Tuesday 8th December

  • Given the recent network issues and role of GridPP DIRAC, there are plans to have a slave DNS for gridpp.ac.uk at Imperial and hopefully the T1 too. Andrew will seek an update to the whois records and name servers once the other host sites are confirmed.
  • Looking at the network problems this week will be of interest. Duncan supplied this link and Ewan the one for the dual stack instance.
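Adding a secondary ("slave") name server for gridpp.ac.uk amounts to a few lines of BIND configuration on the host site's servers. This is a generic sketch with a placeholder master address, not the actual Imperial/T1 setup:

```
// Hypothetical BIND stanza for a secondary copy of the gridpp.ac.uk zone
zone "gridpp.ac.uk" {
    type slave;
    masters { 192.0.2.1; };          // placeholder: the primary's address
    file "slaves/gridpp.ac.uk.db";   // local copy of the zone data
};
```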
Tickets

Link to all the UK Tickets

Tools - MyEGI Nagios

Tuesday 5th April 2016

Oxford had a scheduled network warning, so the active Nagios instance was moved from Oxford to Lancaster. I am not planning to move it back to Oxford for the time being.


Tuesday 26th Jan 2016

One of the message brokers was in downtime for almost three days. The Nagios probes pick a random message broker and failover is not working, so a lot of ops jobs hung for a long time. It is a known issue and unlikely to be fixed, as SAM Nagios is on its last legs. Monitoring is moving to ARGO and many things are not clear at the moment.

Monday 30th November

  • The SAM/ARGO team has created a document describing Availability reliability calculation in ARGO tool.
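The usual EGI/WLCG availability and reliability definitions, which the ARGO document formalises, can be sketched as follows (the formulas here are the commonly quoted ones, not copied from the document):

```python
# Availability = known uptime / known time;
# Reliability additionally excludes scheduled downtime from the denominator.
def availability(up, total, unknown=0.0):
    known = total - unknown
    return up / known if known else 0.0

def reliability(up, total, unknown=0.0, scheduled=0.0):
    known = total - unknown - scheduled
    return up / known if known else 0.0

# Illustrative 30-day month: 648h up, 24h scheduled downtime, 48h unknown
print(round(availability(648, 720, unknown=48), 3))               # → 0.964
print(round(reliability(648, 720, unknown=48, scheduled=24), 3))  # → 1.0
```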


Tuesday 6 Oct 2015

Moved the Gridppnagios instance back to Oxford from Lancaster. It was something of a double whammy, as both sites went down together. Fortunately the Oxford site was partially working, so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours but there was no effect on EGI availability/reliability. Sites can have a look at https://mon.egi.eu/myegi/ss/ for A/R status.

Tuesday 29 Sep 2015

Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st Oct, and this may be extended depending on the situation. VO-Nagios was also unavailable for two days, but we restarted it yesterday as it runs on a VM. VO-Nagios uses the Oxford SE for its replication test, so it is failing those tests. I am looking to change to some other SE.

VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority for enabling/supporting our joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
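Sites supporting SNOPLUS.SNOLAB.CA would see the change in their /etc/vomses entries. A sketch with the new port (the DN fields are placeholders, not the real certificate subjects):

```
"snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15503" "<voms02 host DN>" "snoplus.snolab.ca"
"snoplus.snolab.ca" "voms03.gridpp.ac.uk" "15503" "<voms03 host DN>" "snoplus.snolab.ca"
```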

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 23rd February

  • For January:

ALICE: All okay.

RHUL 89%:89% Lancaster 0%:0%

RALPP: 80%:80%

RALPP: 77%:77%

  • Site responses:
    • RHUL: The largest problem was related to the SRM. The DPM version was upgraded and it took several weeks to get it working again (13 Jan onwards). Several short-lived occurrences of running out of space on the SRM for non-ATLAS VOs. For around 3 days (15-17 Jan) the site suffered from a DNS configuration error by their site network manager which removed their SRM from the DNS, causing external connections such as tests and transfers to fail. For one day (25 Jan) the site network was down for upgrade to the 10Gb link to JANET. Some unexpected problems occurred extending the interruption from an hour to a day. The link has been successfully commissioned.
    • Lancaster: The ASAP metric for Lancaster for January is 97.5%. There is a particular problem with ATLAS SAM tests, which does not affect site activity in production and analysis; it relates to the path name being too long. A re-calculation has been performed.
    • RALPP: Both CMS and LHCb low figures are due to specific CMS jobs overloading the site SRM head node. The jobs should have stopped now.



Meeting Summaries
Project Management Board - Members Minutes Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda. Meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier1 report farther up this page.

WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved and the grid is filled again

• MC15 is expected to start soon, waiting for physics validations; evgen testing is underway and close to finalised. Simulation is expected to be broadly similar to MC14, with no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

• Lost files declaration: only DDM ops can issue lost-files declarations for now; cloud support needs to file a ticket.

• Webdav panda functional tests with Hammercloud are ongoing

Monitoring

• Main page
• DDM Accounting
• space
• Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months, the T2 sites performing below 80% are reported to the International Computing Board.
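The reporting rule above amounts to a simple threshold filter; a sketch (Lancaster's 97.5% is from this bulletin, the other site names and figures are invented for illustration):

```python
# Quarterly ASAP figures per T2 site (hypothetical except Lancaster).
asap = {"Lancaster": 0.975, "SiteX": 0.78, "SiteY": 0.81}

# Sites below the 80% mark get reported to the International Computing Board.
below = sorted(site for site, score in asap.items() if score < 0.80)
print(below)  # → ['SiteX']
```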


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A