Operations Bulletin Latest

Bulletin archive


Week commencing 27th July 2015
Task Areas
General updates

Tuesday 28th July

  • GGUS suffered an unscheduled downtime over the weekend.

Tuesday 21st July

  • Various requests (e.g. via EGI) to express interest in "Compute Resources for PanCancer Analysis of Whole Genomes" call.
  • MM raised a question about remote loading of Grid files for SNO+ (email 20th July).
  • From Winnie: running YAIM on a CREAM-CE does not update the files under /var/lib/bdii/gip/ldif/. Should it?
  • From Catalin (16th July): The decommissioning of the 'gridpp.ac.uk' CVMFS space is about halfway through, and it has come to my attention that sites are defining the CVMFS_REPOSITORIES variable at WN level. Whilst this variable was mandatory with cvmfs v2.0.X, this is not the case with v2.1.Y. Site admins are asked to remove the 'gridpp.ac.uk' repository names from it, except for the regional UK VO repositories (londongrid, scotgrid, southgrid, northgrid); see the sketch after this list.
  • /cvmfs/gridpp-vo help. Down to the repository? Is more documentation needed here?
  • Atlas ADC meeting containing discussion of the future of cloud support is at 4.30 on the 21st of July, and sites are welcome to attend.
  • For those who missed it: observation of a new particle with LHCb (arxiv.org/abs/1507.03414)! Also see the press release.
  • Minutes of the last EGI operations meeting are now available.
  • Process for uploading files to CVMFS.
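
As a rough illustration of Catalin's request (a hedged sketch, not an official procedure): the snippet below filters 'gridpp.ac.uk' entries out of CVMFS_REPOSITORIES while keeping the regional UK VO repositories. The config file location and the assumption that the regional repository names contain the strings listed above may not match your site, so review before applying anything.

    # Hedged sketch only: the file location and repository naming are assumptions.
    import re

    CONFIG = "/etc/cvmfs/default.local"   # typical WN location; may differ per site
    KEEP = ("londongrid", "scotgrid", "southgrid", "northgrid")  # regional UK VO repos

    with open(CONFIG) as fh:
        lines = fh.readlines()

    new_lines = []
    for line in lines:
        match = re.match(r'\s*CVMFS_REPOSITORIES=["\']?([^"\'\n]*)', line)
        if match:
            repos = [r.strip() for r in match.group(1).split(",") if r.strip()]
            kept = [r for r in repos
                    if not r.endswith("gridpp.ac.uk")          # drop gridpp.ac.uk entries...
                    or any(region in r for region in KEEP)]    # ...except regional VO repos
            line = 'CVMFS_REPOSITORIES="%s"\n' % ",".join(kept)
        new_lines.append(line)

    # Print the proposed file for review rather than writing it back automatically.
    print("".join(new_lines))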

Tuesday 14th July

  • There was a GDB at CERN last Wednesday (agenda; minutes).
  • Very worthwhile reviewing Romain's talk from the GDB on threats.
  • Raja noted: It looks like some certificates have problems when submitted to CEs running RHEL5-variant OSes. Does this need further investigation/work?
  • Matt reports that the EMI WN tarball now contains the gfal CLI utilities.


Tuesday 7th July

  • Reminder: on July 17 a 4 hour long EGI Federated Cloud tutorial will be organised in London (at SAP near Heathrow). It's a free event, part of a 3-day long software carpentry workshop.
  • Q2 15 quarterly reports are now due!
  • There is a UKNGI ticket on multicore accounting that still requires some sites to act.
  • Proposal for a DPM workshop 16-17th of November or possibly 19-20th. Any comments?
  • The draft T2 reliability and availability figures for June have been produced.
    • ALICE: All okay.
    • ATLAS: All okay.
    • CMS: Bristol (89%:100%) - so just under the cut-off.
    • LHCb: Sheffield: (64%:64%); RALPP: (83%:83%).
  • EGI Community Forum 2015: Call for Participation open! Deadline: 12 July
  • A summary of a recent ARGUS collaboration meeting is now available.
  • There is a WLCG GDB this week.


WLCG Operations Coordination - Agendas

Monday 20th July

  • There was a WLCG ops coordination meeting last Thursday 16th July (agenda; minutes).
  • News: Next WLCG workshop in Lisbon 1-3 February 2016 (DPHEP event 3-4 February).
  • New task force: Future of the Information System (IS). OSG may stop publishing to the BDII. An eGroup has been created. Meetings TBD. REBUS issues will be the first focus.
  • Middleware: NTR
  • T0 news: CVMFS incident written up (led to CMS job failures) - caused by symlink issue.
  • T1 feedback: Concerns about FTS3-Rucio bugs being filed against sites.
  • T2 feedback: NTR
  • ALICE: CERN LSF cap removed (15k-> 27k+ jobs reached). CNAF tape upgrade to Xrootd 4.1.3.
  • ATLAS: Data taking back at a reduced rate. Grid full (up to 200k slots). Lost files due to a Rucio/FTS race condition in May/June now fixed, but file checks are ongoing.
  • CMS: High-demand DIGI-RECO starting. T0 now creates MINIAOD. CVMFS bug encountered (link in SITECONF dir).
  • LHCb: NTR.
  • gLExec: NTR
  • RFC proxies: NTR
  • Machine/Job features: NTR
  • Middleware readiness: Volunteer sites active. Polling sites for interest in testing CentOS7/SL7 MW. Pakiti-client v3.0.1 available.
  • Multicore: Accounting check. UK sites now resolved.
  • IPv6: NTR
  • Squid monitoring & HTTP proxy discovery: NTR
  • Network and transfer metrics: NTR
  • HTTP deployment: 3rd meeting on 15th July. Functional validation of storage will use a shared SAM probe in experiment instances. For access monitoring, a UDP stream compatible with the xrootd f-stream and publication of JSON messages were the two solutions presented.
  • Actions:
    • ATLAS MC sites to review cap (80% T2 prod to MC). T2 is 50:50 analysis and production.
    • CMS asks all T1 and T2 sites to provide Space Monitoring information to complete the picture on space usage at the sites. Please have a look at the general description and also some instructions for site admins.


Tier-1 - Status Page

Tuesday 14th July. A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting are here.

  • RAL Open Days went very well.
  • We are extending the number of worker nodes running a test configuration (these obtain the grid middleware via cvmfs).
  • There will be an intervention on our faulty router on Tuesday 4th August. This is announced in the GOC DB.
Storage & Data Management - Agendas/Minutes

Wednesday 08 July

  • Huge backlog of ATLAS data from Glasgow waiting to go to RAL, and oddly varying performance numbers - investigating
  • How physics data is like your Windows 95 games

Wednesday 01 July

  • Feedback on CMS's proposal for listing contents of storage
  • Simple storage on expensive RAIDed disks vs complicated storage on el cheapo or archive drives?

Wednesday 24 June

  • Heard about the INDIGO-DataCloud project, an H2020 project in which STFC is participating
  • Data transfers, theory and practice
    • Somewhat clunky tools to set up but perform well when they run
    • Will continue to work on recommendations/overview document
    • Worth having recommendations/experiences for different audiences - (potential) users, decision makers, techies


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Monday 20th July

  • Oxford is publishing 0 cores from a CREAM CE today. Maybe they forgot to switch one off. Check here.

Tuesday 14th July

  • QMUL and Sheffield appear to be a week behind with their publishing.
  • Please check your multicore publishing status (especially those sites mentioned in June).

Tuesday 16th June

  • Sites in the region are not publishing accounting by number of cores.
    • "0" core submission hosts:
    • ce3.dur.scotgrid.ac.uk
    • ce4.dur.scotgrid.ac.uk
    • cetest02.grid.hep.ph.ic.ac.uk
    • hepgrid5.ph.liv.ac.uk
    • hepgrid6.ph.liv.ac.uk
    • hepgrid97.ph.liv.ac.uk
    • svr009.gla.scotgrid.ac.uk
    • t2ce06.physics.ox.ac.uk

Tuesday 9th June

  • Delay noted for Sheffield


Documentation - KeyDocs

Tuesday 23rd June

  • Reminder that documents need reviewing!

Tuesday 9th June

LSST voms2 records are not present in VOID cards yet. As a workaround, a temporary note of the actual values has been added to the LSST section of Approved VOs.

https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs

General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Tuesday 21st April

  • The Approved VOs document has been updated to take account of changes to the Ops Portal VOID cards. For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503. Sites that support SNOPLUS.SNOLAB.CA should ensure that their configuration conforms to these settings: Approved VOs
  • KeyDocs still need updating since agreements reached at last core ops meeting.
  • New section in the wiki called "Project Management Pages". The idea is to cluster all Self-Edited Site Tracking Tables in here. Sites should keep entries in Current Activities up to date. Once a Self-Edited Site Tracking Table has served its purpose, the PM will move it to the Historical Archive or otherwise dispose of the table.

Interoperation - EGI ops agendas

Monday 13th July

  • SR updates (small because it's summer):
      • gfal2 2.9.1
      • storm 1.11.9
      • srm-ifce 1.23.1....
      • gfal2-python 1.8.1
    • In Verification
      • gfal2-plugin-xrootd 0.3.4
  • Accounting
    • [John Gordon] "Of the WLCG sites we now have 97%+ of cpu reported with cores. I expect you all saw my recent email to GDB naming 16 sites. If one German and one Spanish site and the four Russians start publishing we will jump to 99%+"
    • New list of sites needing to update multicore accounting being prepared this evening (Monday) by Vincenzo
  • SL5 decommissioning date March 2016;
  • Next meeting 10th August

Monday 15th June

  • There was an EGI operations meeting today: agenda.
  • New action for the NGIs: please start tracking which sites are still using SL5 services (how many services and, for each, whether it is still needed on SL5 and whether upgrades of the SL5 services are expected). A wiki page has been provided to record updates. It would also be interesting to understand who is using Debian.


Monitoring - Links MyWLCG

Tuesday 16th June

  • F Melaccio & D Crooks decided to add an FAQ section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Monday 13th July

  • Quiet week, once the (unofficial) ARC fix was implemented on the Nagios side.
  • Low/best-efforts ROD cover for 1-2 weeks after this week.

Monday 6th July

  • DB: Between the fake ARC alarms and the fake BDII alarms it is hard to see the real alarms.

Monday 22nd June

  • Generally quiet. Some 'GLUE2' errors were ticketed. We tried letting these go to see if they would clear, but in some cases the time the error had been outstanding kept building up. It is unclear whether GLUE2 is used anywhere.


Rollout Status WLCG Baseline

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.

Tuesday 17th March

  • Daniela has updated the EMI-3 testing table (https://www.gridpp.ac.uk/wiki/Staged_rollout_emi3). Please check it is correct for your site. We want a clear view of where we are contributing.
  • There is a middleware readiness meeting this Wednesday. Would be good if a few site representatives joined.
  • Machine/job features solution testing: we fed back that we will only commence tests if more documentation is made available. This puts the HTC solution on hold until after CHEP. Is there interest in testing other batch systems? Raul mentioned SLURM; there are also SGE and Torque.


Security - Incident Procedure Policies Rota

Tuesday 21st July

  • Most exchanges this week were about statement wording.

Monday 13th July

  • EGI SVG/CSIRT **Update** OpenSSL release on 9th July - CVE-2015-1793

Monday 29th June

  • EUGridPMA have announced a new set of CA RPMs. Based on this IGTF release, a new set of CA RPMs has been packaged for EGI. Sites are requested to upgrade within the next seven days, at their earliest convenience. Once this period is over, SAM will raise critical errors on CA tests if old CAs are still detected. A quick way to check the locally installed version is sketched after this list.
  • The next security team meeting is this Wednesday 1st July.
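
As a small illustrative check (the assumption being that the relevant meta-package is ca-policy-egi-core, the usual EGI trust-anchor bundle; adjust if your site installs something else), the snippet below just reports the locally installed version so it can be compared with the newly announced release.

    # Sketch: report the installed EGI trust-anchor meta-package version via rpm.
    import subprocess

    PACKAGE = "ca-policy-egi-core"   # assumed package name; verify for your site

    try:
        out = subprocess.check_output(
            ["rpm", "-q", "--qf", "%{NAME}-%{VERSION}-%{RELEASE}\n", PACKAGE])
        print(out.decode().strip())
    except subprocess.CalledProcessError:
        print("%s does not appear to be installed" % PACKAGE)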


Services - PerfSonar dashboard | GridPP VOMS

- This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)

Tuesday 14th July

  • GridPP35 in September will have a part-focus on networking and IPv6. This will include a review of where sites are with their deployment. Please try to firm up dates for your IPv6 availability between now and September, and please update the GridPP IPv6 status table (a basic dual-stack DNS check is sketched below).
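
As a basic, unofficial starting point for the dual-stack column of that table, the snippet below checks whether a host already resolves to both IPv4 and IPv6 addresses; the hostname is a placeholder.

    # Sketch: does a host have both A (IPv4) and AAAA (IPv6) DNS records?
    import socket

    def address_families(host, port=443):
        """Return the set of address families that DNS resolution yields for host."""
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        return {info[0] for info in infos}

    host = "se01.example.ac.uk"   # hypothetical service host; substitute your own
    families = address_families(host)
    print("%s - IPv4: %s, IPv6: %s"
          % (host, socket.AF_INET in families, socket.AF_INET6 in families))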

Tuesday 23rd June

  • GridPP issued a position statement regarding LHCONE.
    • ...Concerning LHCONE for both T1 and T2. The high level summary is that the UK is not in favour, as within the UK we have no explicit need for LHCONE for any reason of T1 capacity planning, but to implement it involves additional complexity and possibly cost. The current system works fine and we therefore see no overriding reason to remove T1-T1 transit via LHCOPN. ...The UK is sensitive to the “collective” needs of the community, and as a general statement we would always seek to address any legitimate request agreed by the WLCG MB in order to play our role in meeting international expectations.

Tuesday 12th May

  • LHCOPN & LHCONE joint meeting at LBL June 1st & 2nd. Agenda taking shape.

Tuesday 31st March

Tickets

Monday 27th July 2015, 16.10 BST

Only 20 UK tickets this week, and many are on hold for the summer holidays. I pruned a few tickets, but none strike me as needing urgent action, so this will be brief.

Mandatory UK GGUS Link
Nothing to see here really, but maybe I'm missing something? Nit-picking, I see:

114381 (publishing problems at Durham) could still do with an update - not sure if work is progressing offline on the issue.

The SNO+ ticket 115165 looks like it might be of interest to others - in it Matt M asks about tape functionality in the gfal2 tools. Brian has updated the ticket, clueing us in about gfal-xattr.
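
For anyone who wants to try what Brian describes before reading the ticket, here is a minimal sketch using the gfal2 Python bindings (the CLI equivalent being gfal-xattr). The SURL is a placeholder, and 'user.status' is the attribute gfal2 normally uses to expose file locality on SRM endpoints; the ticket remains the authoritative recipe.

    # Sketch: query the tape/disk locality of a file with the gfal2 Python bindings.
    import gfal2

    surl = "srm://se.example.ac.uk/dpm/example.ac.uk/home/snoplus.snolab.ca/some/file"  # placeholder

    ctx = gfal2.creat_context()
    try:
        # Typically ONLINE, NEARLINE or ONLINE_AND_NEARLINE for tape-backed SRM storage.
        print("Locality: %s" % ctx.getxattr(surl, "user.status"))
    except gfal2.GError as err:
        print("gfal2 error: %s" % err)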

UK T'other VO Nagios
A few failures here at the time of writing - although only one, at Brunel, seems to be more than a few hours old (dc2-grid-66.brunel.ac.uk is failing pheno CE tests).

Let me know if I missed aught!

Tools - MyEGI Nagios

Tuesday 09 June 2015

  • ARC CEs were failing the Nagios test because of the non-availability of the EGI repository; the Nagios test compares the CA version against the EGI repo. This started on 5th June, when one of the IP addresses behind the web server stopped responding. The problem went away in approximately 3 hours but started again on 6th June, and was finally fixed on 8th June. No reason was given in any of the tickets opened regarding this outage.

Tuesday 17th February

  • Another period where message brokers were temporarily unavailable seen yesterday. Any news on the last follow-up?

Tuesday 27th January

  • An unscheduled outage of the EGI message broker (GRNET) caused a short-lived disruption to GridPP site monitoring (jobs failed) last Thursday 22nd January. We suspect BDII caching meant there was no immediate failover to stomp://mq.cro-ngi.hr:6163/ from stomp://mq.afroditi.hellasgrid.gr:6163/.
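
Purely as a generic illustration of client-side failover between the two brokers named above (the production SAM/Nagios setup discovers brokers via the BDII, and the destination here is a placeholder), a stomp.py sketch:

    # Generic stomp.py sketch: list both brokers so the client can fail over between them.
    import stomp

    brokers = [("mq.afroditi.hellasgrid.gr", 6163),   # primary broker from the bulletin
               ("mq.cro-ngi.hr", 6163)]               # fallback broker from the bulletin

    conn = stomp.Connection(host_and_ports=brokers)
    conn.set_listener("", stomp.PrintingListener())   # just print whatever arrives
    conn.connect(wait=True)
    conn.subscribe(destination="/topic/example.monitoring", id=1, ack="auto")  # placeholder topic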


VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority for enabling/supporting our joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503 (a quick check for stale client configuration is sketched below).
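
A small illustrative check (not an official GridPP script) for client configuration that may still carry the old port: it scans the usual vomses locations, whose paths and layout may differ at your site.

    # Sketch: find SNOPLUS.SNOLAB.CA vomses entries still using the old port 15003.
    import glob
    import shlex

    OLD_PORT, NEW_PORT = "15003", "15503"
    HOSTS = ("voms02.gridpp.ac.uk", "voms03.gridpp.ac.uk")

    # /etc/vomses may be a single file or a directory of per-VO files.
    for path in glob.glob("/etc/vomses") + glob.glob("/etc/vomses/*"):
        try:
            with open(path) as fh:
                for lineno, line in enumerate(fh, 1):
                    fields = shlex.split(line)    # vomses entries are quoted strings
                    if len(fields) >= 3 and fields[1] in HOSTS and fields[2] == OLD_PORT:
                        print("%s:%d still uses port %s (should be %s)"
                              % (path, lineno, OLD_PORT, NEW_PORT))
        except OSError:
            pass   # skip directories and unreadable files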

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 24th February

  • Next review of status today.

Tuesday 27th January

  • Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster
  • Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.

Tuesday 2nd December

  • Multicore status. Queues available (63%)
    • YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
    • NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)
  • According to our table for cloud/VMs (26%)
    • YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
    • NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)
  • GridPP DIRAC jobs successful (58%)
    • YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
    • NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)
  • IPv6 status
    • Allocation - 42%
    • YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
    • NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex (11)
  • Dual stack nodes - 21%
    • YES: Brunel; IC; QMUL; Oxford (4)
    • NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex; RAL T1 (15)
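
For reference, the percentages quoted above are just the YES counts over the 19 GridPP sites listed in each case; a few lines to reproduce them:

    # Reproduce the survey percentages: YES count / 19 sites, to the nearest percent.
    yes_counts = {"Multicore queues": 12, "Cloud/VMs": 5, "GridPP DIRAC jobs": 11,
                  "IPv6 allocation": 8, "Dual stack nodes": 4}
    for area, yes in yes_counts.items():
        print("%-18s %2d/19 = %.0f%%" % (area, yes, 100.0 * yes / 19))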


Tuesday 21st October

  • High loads seen in xroot by several sites: Liverpool and RAL T1... and also Bristol (see Luke's TB-SUPPORT email of 16/10 for questions about changes to help).

Tuesday 9th September

  • Intel announced the new generation of Xeon based on Haswell.



Meeting Summaries
Project Management Board - Members Minutes Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) - Agenda. The meeting takes place on Vidyo.

Wednesday 8th July 2015 Operations report

  • Lots of preparation for the RAL Open Days. These start today (8th) and culminate in the public day on Saturday (11th).
  • Intervention on faulty router being prepared for 4th August.
WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved and the grid is filled again

• MC15 is expected to start soon, waiting for physics validations; evgen testing is underway and close to being finalised. Simulation is expected to be broadly similar to MC14, with no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

• Lost files declaration: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• WebDAV PanDA functional tests with HammerCloud are ongoing

Monitoring

• Links: Main page, DDM Accounting, space, Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months, the T2 sites performing below 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A