Operations Bulletin Latest

From GridPP Wiki

Bulletin archive


Week commencing Monday 18th January 2016
Task Areas
General updates

Tuesday 19th January

  • There was a GDB last week. Agenda.
  • Additional material can now be found in the HEPSYSMAN meeting and GANGA workshop agenda pages.
  • Monday's ops update is available here.
  • As circulated to TB-SUPPORT: The T2 WLCG A/R results for December are now available (the figures below are availability : reliability).
    • ALICE. All okay.
    • ATLAS.
      • QMUL: 71%: 76%
      • RHUL: 89%: 95%
      • Lancaster: 0%: 0%
    • CMS
      • RALPP: 89%: 89%
    • LHCb
      • Lancaster: 81%: 96%
      • RALPP: 87%: 87%


Tuesday 12th January

  • Reminder: WLCG workshop registration closes on 22nd.
  • GDB this Wednesday with a security focus.


Tuesday 5th January

Tuesday 15th December

  • Simon: Raised a question about switch monitoring.
  • WMSes:
    • How many WMS servers do you have in production?
    • How many and which VOs are enabled?
    • Which VOs are using the service most?
    • If possible, can you provide the number of jobs submitted per month (and per VO) through your instances during the last year (Dec 2014 - Nov 2015)?
  • Notes from Thursday's operations meeting.
  • T2 reliability & availability reports for November 2015, with corrections applied.
  • Govind: Process for dealing with lost files.
  • Janet issued a statement regarding the DDoS last week.


WLCG Operations Coordination - AgendasWiki Page

Tuesday 19th January

  • The next WLCG MW readiness group meeting will be on 27th January at 15:00 UTC. Agenda.
  • There is an ops coordination meeting this Thursday at 14:30 UTC: Agenda.


Tuesday 12th January

  • There was an ops coordination meeting last week. Minutes
  • Approach for configuring batch systems (e.g. setting up mem limits).
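As pure illustration of the kind of setting under discussion (this snippet is an assumption, not taken from the meeting's recommendations), a site running HTCondor might cap job memory along these lines:

```
# Hypothetical HTCondor configuration: put jobs on hold once their resident
# set size exceeds the memory they requested (ResidentSetSize is in KiB,
# RequestMemory in MiB, hence the factor of 1024).
SYSTEM_PERIODIC_HOLD = (ResidentSetSize =!= UNDEFINED) && \
                       (ResidentSetSize > 1024 * RequestMemory)
SYSTEM_PERIODIC_HOLD_REASON = "Job exceeded requested memory"
```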

Tuesday 5th January

  • Ops coordination meeting on 17th December. Minutes.
  • The next meeting is on 7th January. Agenda.

Tuesday 7th December

  • There was a WLCG operations coordination meeting last Thursday. Minutes | Agenda.
    • Please register for the WLCG workshop.
    • News: Red Hat is now fixing the openldap crash issue affecting the Top BDII and ARC-CE. Stay tuned.
    • T0: LSF 9 software deployed on all worker nodes
    • T1: CC-IN2P3 & PIC: Globus host certificate validation changes
    • T2: NTR
    • ALICE: heavy ion run has been smooth from the grid perspective
    • ATLAS: Tier-0 performance in terms of events/second reconstructed across the whole cluster is quite low (a few tens of Hz); huge I/O wait has been observed on the Wigner spinning-disk nodes. A full reprocessing campaign is planned to start on 14th December.
    • CMS: CMS Tier-0 workflows are driving some CERN OpenStack hardware to its limits: GGUS:118056.
    • LHCb: Significant MC generation incoming. MC simulation workflows have been executed successfully on commercial clouds, on both DBCE (up to 600 simultaneous jobs running) and Azure.
    • glexec: NTR
    • M/J features: Ongoing discussions clarifying key/value pairs. Next steps to review experience with implementations and installations, and update in view of technical note discussions.
    • HTTP TF: NTR
    • Info sys future: the Future Use Cases document is now ready in the WLCG Document Repository (PDF). Looking at an information system owned by WLCG (an interesting idea). Starting to prepare a roadmap to GLUE 2.0.
    • IPv6: VOMS still does not work with IPv6. ARGUS not really tested but no problem expected as Java has good IPv6 support.
    • MW readiness: Virtual meeting on 2nd December. Verification workflows in progress listed here.
    • MC: NTR
    • Network/transfer metrics: NTR
    • RFC proxies: CMS have switched test pilot factories to RFC proxies.
    • Squid monitoring/HTTP discovery: NTR


Monday 30th November


Tier-1 - Status Page

Tuesday 12th January A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting are here.

  • We are investigating why LHCb batch jobs sometimes fail to write results back to Castor (and they sometimes fail to write remotely as well). A recent change has improved, but not fixed, this problem.
  • We plan to migrate ATLAS and LHCb data from T10KC (5TB) to T10KD (8TB) generation tapes. Details are yet to be finalized, but the migration is likely to start with ATLAS.
  • We have seen some problems with the CMSTape instance. Two out of the five disk servers in CMSTape were taken out of service on Monday (11th) owing to double disk failures. Some other parameter changes were made to optimise throughput. Service being kept under review.
  • All Castor disk servers are now running SL6. Most (those in disk backed service classes) had been done some months ago.
Storage & Data Management - Agendas/Minutes

Wednesday 13 Jan

  • "Diskless" Tier 2 testing to go ahead anyway at Oxford; Bristol may also be interested
  • Need feedback from VOs on catalogue format and location before rolling out to remaining sites

Wednesday 06 Jan

  • Merry and happy. Apart from Glasgow losing power, most T2s came through the hols relatively unscathed
  • Key member of T2 storageanddatamanagement supportologists leaving *sad face*
  • Excellent DiRAC progress over hols (thanks, Lydia!), now need Leicester started.
  • More generally, GridPP as a data (only) infrastructure, bring your own compute. Good case, but you need to bring your own catalogue (more or less), too, which may not suit everyone.
  • Will a future T2 be built on CEPH? Not likely...

Wednesday 16 Dec

  • Sam presented at and reports from DPM workshop
  • Last preparations/coordination before the hols

Wednesday 09 Dec

  • Filesystem dump at Cambridge successful, no problems, awaiting feedback from ATLAS
  • T2C testing at Oxford "nearly ready", also need ATLAS to pick up the gauntlet now (CMS postponed till 2016)
Tier-2 Evolution - GridPP JIRA

Tuesday 19 Jan

  • Vcycle now supports the EC2 API, and is being prepared for testing with OpenNebula at the RAL Tier-1
  • Next Vac release (00.20) being tested with LHCb production. This removes the need to have an internal NFS server and supports VMs using Cloud Init.
  • Some work last month on revised ATLAS VMs; hoping to converge with VMs run at CERN and on HLT.
  • Vac-in-a-Box updated for Vac 00.20 and NFS-less operation, and numerous feature requests (e.g. bulk adding of hypervisors)

Tuesday 8 Dec

  • Liverpool has set up Vac with 126 VM slots. LHCb production MC and GridPP SAM tests running.
  • GridPP DIRAC VMs now working with new dirac.gridpp.ac.uk service: needed an ad-hoc configuration value adding (/Resources/Computing/CEDefaults/VirtualOrganization=gridpp) for the pilots due to the multi-VO support (GRIDPP-9)
  • For rollout, would like to set up a JIRA component for each site (several exist already.) Will be mailing sites to get the site contacts involved before we add each site.
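For reference, the ad-hoc value mentioned in the second bullet sits in the DIRAC configuration tree roughly as follows (a sketch of the path given above; a real server configuration will contain much more in these sections):

```
Resources
{
  Computing
  {
    CEDefaults
    {
      # Needed for pilot matching under multi-VO support (GRIDPP-9)
      VirtualOrganization = gridpp
    }
  }
}
```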

Tuesday 1 Dec

  • Vac (0.20pre) now provides contextualization, metadata (EC2/OpenStack), and Machine/Job Features via HTTP rather than via ISO image and NFS (GRIDPP-27). This should allow VMs expecting to be run by OpenStack to work on Vac without modification.
  • CernVM image signature checking, and APEL Sync record generation (GRIDPP-10) are now also in Vac 0.20pre.

Tuesday 24 Nov

  • Cloud Init demonstrated with modified version of (old) GridPP DIRAC VMs
  • Cloud Init support in Vac 0.20pre (GRIDPP-27)
  • Progress on VMs for new GridPP DIRAC service: multi-VO config currently preventing matching. (GRIDPP-9)

Tuesday 17 Nov

  • New depo.gridpp.ac.uk service for uploading files via HTTPS
  • ATLAS VMs now upload log files to depo.gridpp.ac.uk for debugging (GRIDPP-24)


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 24th November

  • Slight delay for Sheffield.

Tuesday 3rd November

  • APEL delay (normal state) Lancaster and Sheffield.

Tuesday 20th October The WLCG MB decided to create a Benchmarking Task Force led by Helge Meinhard; see talk.

Documentation - KeyDocs

Tuesday 12th January

  • The VO ID cards (and hence the YAIM records) for CDF, PLANCK, SUPERBVO, LSST, MAGIC and ENMR have changed a bit. Sites that support any of these may want to take a look. See the GridPP approved VOs page.
  • WLCG Information System Evolution Task Force is drafting refined definitions for LOG_CPU and PHYS_CPU, as well as the benchmark/calibration process. Progress is documented in this agenda:
https://indico.cern.ch/event/471965/

In particular, sites should note the 'BenchmarkingProcess.txt' (attached to agenda) which lays out in general terms how to run benchmark instances to obtain maximum throughput, and the GridPP Publishing Tutorial (https://www.gridpp.ac.uk/wiki/Publishing_tutorial) which WLCG propose to adopt (with some modifications.)
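As a sketch of how a site could eyeball the values it publishes (this is illustrative, not the task force's tooling), the attributes in question can be pulled out of an LDIF dump, e.g. one produced by an ldapsearch against a site BDII. The sample record below is invented:

```python
# Illustrative LDIF dump; the dn and values are made up for this example.
ldif = """\
dn: GlueSubClusterUniqueID=example-ce.example.ac.uk,mds-vo-name=resource,o=grid
GlueSubClusterPhysicalCPUs: 1200
GlueSubClusterLogicalCPUs: 9600
GlueHostProcessorOtherDescription: Cores=8,Benchmark=10.5-HEP-SPEC06
"""

def glue_cpu_counts(ldif_text):
    """Return the CPU-related GLUE attribute values found in an LDIF dump."""
    wanted = ("GlueSubClusterPhysicalCPUs",
              "GlueSubClusterLogicalCPUs",
              "GlueHostProcessorOtherDescription")
    found = {}
    for line in ldif_text.splitlines():
        for attr in wanted:
            if line.startswith(attr + ":"):
                found[attr] = line.split(":", 1)[1].strip()
    return found

print(glue_cpu_counts(ldif))
```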

Tuesday 1st December

  • Sixt and hone have been removed from the GridPP list.

Tuesday 24th November Steve J: problems with the voms server at FNAL, voms.fnal.gov, are resolved. The Approved VOs document has been updated with the newest records for the affected VOs: CDF, DZERO, LSST. Also note changes to the CA_DN for PLANCK and CDF. https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs

Friday 6th Nov, 2015

SteveJ: Advice to admins published about a common GSS error, globus_gsi_callback_module: Could not verify credential etc.

https://www.gridpp.ac.uk/wiki/Security_system_errors_and_workarounds

Tuesday 20th October, 2015

Approved VOs document updated with temporary section for LZ

Tuesday 29th September Steve J: problems with the voms server at FNAL, voms.fnal.gov, have been detected; I will resolve them soon and may issue an update to Approved VOs, alerting sites via TB-SUPPORT should that occur. Approved VOs potentially affected are CDF, DZERO, LSST. Please do not act yet.


General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas

Monday 11th January

  • A meeting on Monday 11th. Agenda. For the meeting link see Indico.

Tuesday 15th December

There was a meeting yesterday; the agenda is here: https://wiki.egi.eu/wiki/Agenda-14-12-2015

  • New meetings calendar: https://indico.egi.eu/indico/categoryDisplay.py?categId=32
  • New summary page: https://wiki.egi.eu/wiki/Operations_Meeting
  • UMD Preview repository
  • UMD releases
  • Decommissioning of dCache 2.6
  • Decommissioning of SL5
    • SL5 services must be decommissioned by the end of April 2016; a broadcast was sent in December, and probes will raise warnings from February 2016 to help with the decommissioning
  • APEL on SL5
  • WMS Usage
  • New CE/batch system accounting integration
  • Raised a question on clarifying the future of the VO Nagios in a centralised ARGO world


Monitoring - Links MyWLCG

Tuesday 1st December


Tuesday 16th June

  • F Melaccio & D Crooks decided to add a FAQs section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Monday 11th January

  • There was a problem with the dashboard during the week, where alarms wouldn't clear even though they had cleared in Nagios. The portal people are aware of this.
  • The current alarms in Manchester (vomsserver) are thought to be fixed - the dashboard just hasn't caught up.

Monday 14th December

  • Janet issue affected ROD dashboard access making it almost unusable.
  • GGUS was updated on Wednesday (09/12/15) and was in downtime for two hours. The update did not go smoothly and the downtime was extended. The GGUS interface did not work through the dashboard for most of the morning on Wednesday.
  • There were quite a few alarms throughout the week - most disappeared without intervention.
  • Three open tickets remain against QMUL, RHUL and Lancaster.


Rollout Status WLCG Baseline

Tuesday 7th December

  • Raul reports: validation of the site BDII on CentOS 7 is done.

Tuesday 15th September

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.


References


Security - Incident Procedure Policies Rota

Tuesday 12th January

Tuesday 15th December

  • No updates applicable for the UK
  • News from elsewhere: EGI CSIRT is now suspending multiple sites following CVE-2015-7181/2/3, largely due to a simple failure to acknowledge the notification, which was all that was required.

Tuesday 8th December

  • EGI SVG Advisory 'Low' RISK - OpenSSL announcement on 3rd December.
  • Keep OSes up-to-date.

Tuesday 2nd December

Tuesday 24th November

  • Call on NGIs to participate in "Security Threat Risk Assessment - with Cloud Focus" work.
  • Check Pakiti for CVE-2015-7183 issues.

Tuesday 17th November

  • Advisory-SVG-2015-CVE-2015-7183, issued 06/11/2015: a few UK sites show as unpatched in EGI monitoring. WNs, as tested by the monitoring, may be less vulnerable than affected middleware services, but they can be taken as an indication of general site readiness, and sites are encouraged to check their status.

The EGI security dashboard.


Services - PerfSonar dashboard | GridPP VOMS

- This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)

Tuesday 8th December

  • Given the recent network issues and the role of GridPP DIRAC, there are plans to have a slave DNS for gridpp.ac.uk at Imperial, and hopefully at the T1 too. Andrew will seek an update to the whois records and name servers once the other host sites are confirmed.
  • The network problems this week will be worth a look. Duncan supplied this link, and Ewan the one for the dual-stack instance.

Tuesday 6th October

Tuesday 14th July

  • GridPP35 in September will have a part focus on networking and IPv6. This will include a review of where sites are with their deployment. Please try to firm up dates for your IPv6 availability between now and September. Please update the GridPP IPv6 status table.


Tickets

Monday 18th January 2016, 14.00 GMT
49(!!) Open UK Tickets this week

NGI
118930 (18/1)
The NGI received a ticket concerning incorrect or missing glue information for the Tier 1, Brunel, Imperial, Liverpool, Durham, Glasgow, Bristol, Oxford and RALPP. The variables in question are GlueSubClusterPhysicalCPUs, GlueSubClusterLogicalCPUs and GlueHostProcessorOtherDescription. There are some extra instructions in the ticket - it would be nice if we didn't have to create child tickets (hint hint...).

ATLAS CONSISTENCY CHECKS (10 tickets)
Progress, or at least non-exciting but reassuring updates, on these. Birmingham and Glasgow tickets could do with an update (even if it's a "nothing to see here").

The QMUL ticket had an update providing feedback that might be useful to others too:
https://ggus.eu/?mode=ticket_info&ticket_id=117880

HTTP TF (5 tickets)
ECDF, Manchester, Sheffield and Glasgow are on the HTTP TF list - although no tickets are stale at the moment.

TIER 1 RECOMMENDATIONS
118809 (12/1) An interesting ticket asking the T0 and T1s to fill in a questionnaire on configuring batch job memory limits - the Tier 1 have done their bit and the ticket has been put on hold awaiting feedback.

GLASGOW
118732 (9/1)
This ticket has become confusing - ATLAS want a dump of files "lost" at Glasgow that, by the looks of it, never actually made it to the site in the first place... Waiting for reply (15/1)

TIER 1 DUPLICATES
Are these three CMS tickets the same (or similar, or related) issues - or am I just getting my wires crossed?
118494 (23/12/15)
116864 (12/10/15)
118722 (8/1)

CAN BE CLOSED (I THINK)
IC - 118162 (lfc ticket)
QM - 118839 (ATLAS multicore job failures - it doesn't look like the problem persists).

NEARLY THERE:
Lancaster - 118637 (squid misconfiguration hammering the stratum-0)
Birmingham - 118155 (biomed SE use - biomed now think they deleted all data at Birmingham).
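The Lancaster squid ticket above doesn't give details, but one common cause of a site squid hammering an upstream stratum is a cache that never stores the large CVMFS objects, so every request goes upstream. A purely illustrative squid.conf fragment with the commonly recommended settings (values are assumptions, not taken from the ticket):

```
# Illustrative squid.conf fragment for a CVMFS/Frontier site proxy.
# If maximum_object_size is left at a small default, large catalogue
# files are never cached and get re-fetched from upstream every time.
maximum_object_size 1024 MB
cache_mem 128 MB
maximum_object_size_in_memory 128 KB
cache_dir ufs /var/spool/squid 50000 16 256
```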

Tools - MyEGI Nagios

Monday 30th November

  • The SAM/ARGO team has created a document describing the availability/reliability calculation in the ARGO tool.
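As a reminder of the basic definitions (a simplified sketch; the ARGO document covers the full algorithm, including service flavours and unknown states):

```python
# Simplified WLCG/EGI-style definitions, for illustration only.
def availability(up_hours, total_hours):
    """Fraction of the reporting period the service was up."""
    return up_hours / total_hours

def reliability(up_hours, total_hours, scheduled_down_hours):
    """As availability, but with scheduled downtime excluded from the period."""
    return up_hours / (total_hours - scheduled_down_hours)

# A 720-hour month with 36 hours of downtime, 18 of them scheduled:
print(round(availability(684, 720), 3))       # 0.95
print(round(reliability(684, 720, 18), 3))    # 0.974
```

This is why reliability is normally at least as high as availability, as in the A/R tables earlier in this bulletin.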


Tuesday 6 Oct 2015

Moved the Gridppnagios instance back to Oxford from Lancaster. It was something of a double whammy, as both sites went down together. Fortunately the Oxford site was partially working, so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours, but there was no effect on EGI availability/reliability. Sites can look at https://mon.egi.eu/myegi/ss/ for A/R status.

Tuesday 29 Sep 2015

Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st October, and this may be extended depending on the situation. The VO Nagios was also unavailable for two days, but we restarted it yesterday as it runs on a VM. The VO Nagios uses the Oxford SE for its replication test, so it is failing those tests; I am looking to switch to another SE.

VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority for enabling/supporting our joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
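For sites maintaining client-side vomses files by hand, the updated entries would look roughly like this (the DNs below are hypothetical placeholders, not the servers' real certificate subjects):

```
# Hypothetical vomses entries for snoplus.snolab.ca with the new 15503 ports.
"snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15503" "/C=UK/O=eScience/CN=voms02.gridpp.ac.uk" "snoplus.snolab.ca"
"snoplus.snolab.ca" "voms03.gridpp.ac.uk" "15503" "/C=UK/O=eScience/CN=voms03.gridpp.ac.uk" "snoplus.snolab.ca"
```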

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 24th February

  • Next review of status today.

Tuesday 27th January

  • Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster
  • Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.

Tuesday 2nd December

  • Multicore status. Queues available (63%)
    • YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
    • NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)
  • According to our table for cloud/VMs (26%)
    • YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
    • NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)
  • GridPP DIRAC jobs successful (58%)
    • YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
    • NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)
  • IPv6 status
    • Allocation - 42%
    • YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
    • NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex
  • Dual stack nodes - 21%
    • YES: Brunel; IC; QMUL; Oxford (4)
    • NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex, RAL T1 (15)


Tuesday 21st October

  • High loads seen in xroot by several sites: Liverpool and RALT1... and also Bristol (see Luke's TB-S email on 16/10 for questions about changes to help).

Tuesday 9th September

  • Intel announced the new generation of Xeon based on Haswell.



Meeting Summaries
Project Management Board - MembersMinutes Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda Meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier-1 report further up this page.

WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved, and the grid is filled again

• MC15 is expected to start soon, waiting for physics validation; evgen testing is underway and close to being finalised. Simulation is expected to be broadly similar to MC14; no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

• Lost-file declarations: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• Webdav panda functional tests with Hammercloud are ongoing

Monitoring

Main page | DDM Accounting | space | Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months, the T2 sites performing below 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A