Operations Bulletin 171212

From GridPP Wiki

Bulletin archive


Week commencing 10th December 2012
Task Areas
General updates

Monday 10th December

  • Two GridPP sites are not currently publishing UserDNs.
  • Multi-core biomed jobs seen at a few sites.
  • The November T2 availability/reliability report contained some errors. Sites have until 14th December to request a re-computation.
  • There is a GDB this Wednesday; the agenda is now available.
  • Experiment plans/directions for Long Shutdown 1 (LS1) were outlined in a recent WLCG meeting.
  • Minutes from the most recent WLCG Ops Coordination Team review are available.


Monday 3rd December

  • The 2nd DPM Community Workshop is taking place this Monday and Tuesday. It includes tutorial sessions on DM-Lite.
  • Following on from the GridPP29 discussions, there was a GridPP/UK cloud kick-off meeting last Friday. If your site is already running a cloud of some description, then please let David know. The core group is defined in the slides. There is a jiscmail mailing list if you want to follow/contribute to this work: GRIDPP-CLOUD.
  • An agenda is now available for December's GDB.
  • Last week a subset of the PMB plus Neasan and Chris discussed project impact and knowledge exchange topics. This will be important in future proposals. We would like to find/produce some VO exemplars from non-HEP areas. Please let Jeremy and Chris know if you have at your site a community that may (significantly) benefit from using the Grid and have an interest in piloting a project.
Tier-1 - Status Page

Tuesday 11th December

  • We have obtained enough replacement parts (especially power supplies) to restore resilience at the crucial points exposed by the power incident of 20th November.
  • Test of UPS/diesel today (10:00).
  • The roll-out of the over-commit of batch jobs to make use of hyperthreading was completed. However, problems were encountered with the MAUI scheduler and the maximum number of job slots it could manage. We have partly pulled back the level of over-commit and rolled out a MAUI parameter change that should fix this. If all is well we will re-apply the full planned over-commit for hyperthreading.
  • Other items:
    • Ongoing investigation into asymmetric data rates seen to remote sites.
    • Roll-out of the over-commit of batch jobs to make use of hyperthreading is ongoing.
    • Test instance of FTS version 3 now available and being tested by Atlas & CMS.
Storage & Data Management - Agendas/Minutes

Wednesday 5 Dec

  • DPM EMI upgrades:
    • Future DPM support now better understood (DMLite)
    • Brunel still to try dCache migration
  • ATLAS Jamboree next week, ATLAS want to change all their filenames... (by 2014)
  • How we are doing Big Data(tm)


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 30th October

  • Storage availability in the SL pages has been affected by a number of sites being asked by ATLAS to retire the ATLASGROUPDISK space token while the SUM tests were still testing it as critical. The availability will be corrected manually once the month ends. Sites affected to different degrees are RHUL, CAM, BHAM, SHEF and MAN.

Friday 28th September

  • Tier-2 pledges to WLCG will be made shortly. The situation is fine unless there are significant equipment retirements coming up.
  • See Steve Lloyd's GridPP29 talk for the latest on the GridPP accounting.

Wednesday 6th September

  • Sites should check the ATLAS page reporting the HS06 coefficient because, according to the latest statement from Steve, that is what is going to be used. The Atlas Dashboard coefficients are averages over time.

I am going to suggest using the ATLAS production and analysis numbers given in HS06 directly, rather than using CPU seconds and trying to convert them ourselves as we have been doing. There doesn't seem to be any robust way of doing it any more, so we may as well use the ATLAS numbers, which are the ones they are checking against pledges etc. anyway. If the conversion factors are wrong then we should get them fixed in our BDIIs. No doubt there will be a lively debate at GridPP29!
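The conversion being argued against here looks roughly like this (a minimal sketch; the function name is illustrative, and a real per-core HS06 coefficient would come from the site's published accounting figures, not the hard-coded example value):

```python
def cpu_seconds_to_hs06_hours(cpu_seconds, hs06_per_core):
    """Convert raw CPU time to HS06-hours using a site's per-core
    HS06 coefficient -- the hand-maintained conversion the text
    suggests replacing with ATLAS's own HS06 numbers."""
    return cpu_seconds / 3600.0 * hs06_per_core

# Example with an illustrative coefficient of 10 HS06 per core:
# 2 hours of CPU on such a core would be reported as 20 HS06-hours.
print(cpu_seconds_to_hs06_hours(7200, 10.0))
```

The fragility is visible even in the sketch: every site needs its own accurate `hs06_per_core`, which is exactly the per-site factor that has proven hard to keep correct.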

Documentation - KeyDocs

Tuesday 4th December

KeyDocs monitoring status: Grid Storage(7/0) Documentation(3/0) On-duty coordination(3/0) Staged rollout(3/0) Ticket follow-up(3/0) Regional tools(3/0) Security(3/0) Monitoring(3/0) Accounting(3/0) Core Grid services(3/0) Wider VO issues(3/0) Grid interoperation(3/0) Cluster Management(1/0) (brackets show total/missing)

Thursday, 29th November

The Approved VOs document has been updated to automatically contain a table that lays out the resource requirements for each VO, as well as the maximum. We need to discuss whether this is useful - it seems that the majority of WN software requirements are passed around by word of mouth etc. Should this be formalized? Please see

   https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#VO_Resource_Requirements

This table will be kept up to date with a regular process that syncs it with the CIC Portal, should it prove to be useful.


Tuesday 6th November

  • Do we need the Approved VOs document to set out the software needs for the VOs?

Tuesday 23rd October

KeyDocs monitoring status: Grid Storage(7/0) Documentation(3/0) On-duty coordination(3/0) Staged rollout(3/0) Ticket follow-up(3/0) Regional tools(3/0) Security(3/0) Monitoring(3/0) Accounting(3/0) Core Grid services(3/0) Wider VO issues(3/0) Grid interoperation(3/0) Cluster Management(1/0) (brackets show total/missing)

Thursday 26th July

All the "site update pages" have been reconfigured from a topic oriented structure into a site oriented structure. This is available to view at https://www.gridpp.ac.uk/wiki/Separate_Site_Status_Pages#Site_Specific_Pages

Please do not edit these pages yet - any changes would be lost when we refine the template. Any comments gratefully received, contact: sjones@hep.ph.liv.ac.uk

Interoperation - EGI ops agendas

Monday 3rd December


Monday 5th November

  • There was an EGI ops meeting today.
  • UMD 2.3.0 in preparation. Release due 19 November, freeze date 12 November.
  • EMI-2 updates: DPM/LFC and VOMS - bugfixes, and glue 2.0 in DPM.
  • EGI have a list of sites considered unresponsive or having insufficient plans for the middleware migration. The one UK site mentioned has today updated their ticket again with further information.
  • In general an upgrade plan cannot extend after the end of 2012.
  • A dCache probe was rolled into production yesterday; alarms should appear on the security dashboard in the next 24 hours.
  • CSIRT is taking over from COD on migration ticketing. By next Monday the NGIs with problematic sites will be asked to contact the sites, asking them to register a downtime for their unsupported services.
  • Problems with WMS in EMI-2 (update 4) - WMS version 3.4.0. Basically, it can get proxy interaction with MyProxy a bit wrong. The detail is at GGUS 87802, and there exist a couple of workarounds.



Monitoring - Links MyWLCG

Monday 2nd July

  • DC has almost finished an initial ranking. This will be reviewed by AF/JC and discussed at 10th July ops meeting

Wednesday 6th June

  • Ranking continues. Plan to have a meeting in July to discuss good approaches to the plethora of monitoring available.
  • Glasgow dashboard now packaged and can be downloaded here.
On-duty - Dashboard ROD rota

Friday 7th December - KM

  • Very quiet, nothing to report.

Sunday 2nd December - SP

  • Phenomenally quiet week - with one exception, all I ever saw was alarms that had already gone green!
  • One case open at the moment - ECDF, which is clicking over into the 'ticket them' window today - so that'll need some action either today or tomorrow morning.

Friday 23rd November - DB

  • Despite the meltdown at RAL, nothing exciting to report. The Imperial WMS got stuck at some point (you'll see an accumulation of "job cancelled" messages due to jobs failing to run). Simon managed to reproduce the bug https://ggus.eu/tech/ticket_show.php?ticket=88831 (in real life it was the Durham CE that caused the problem). If this happens again, let Daniela know and she will check the WMS.

Monday 19th November - AM

  • Good week overall. No UK-wide problems. Several sites still with (upgrade-)related planned downtimes. Only one outstanding ticket (Durham) and no alarms left open over the weekend.

Monday 12th November

  • Birmingham is in downtime until further notice as the university is operating on emergency power.
  • Durham has an open ticket.
  • A lot of Nagios test failures because of the power failure at the Tier 1, but everything is now back to normal.


Rollout Status WLCG Baseline

Tuesday 6th November

References


Security - Incident Procedure Policies Rota

Monday 3rd December

  • One critical alarm for a legacy CREAMCE-gLite-32 service which is already in downtime. World-writable directory warning; the site will raise it with ATLAS.

Monday 22nd October

  • Last week's UK security activity was very much business as usual; there are a lot of alarms in the dashboard for UK sites, but for most of the week they only related to the gLite 3.2 retirement.

Friday 12th October

  • The main activity over the last week has been due to new Nagios tests for obsoleted glite middleware and classic SE instances. Most UK sites have alerts against them in the security dashboard and the COD has ticketed sites as appropriate. Several problems have been fixed already, though it seems that the dashboard is slow to notice the fixes.


Services - PerfSonar dashboard | GridPP VOMS

Tuesday 20th November

  • Reminder for sites to add perfSONAR services in GOCDB.
  • VOMS upgraded at Manchester. No reported problems. Next step to do the replication to Oxford/Imperial.

Monday 5th November

  • perfSONAR service types are now defined in GOCDB.
  • Reminder that the gridpp VOMS will be upgraded next Wednesday.

Thursday 18th October

  • VOMS sub-group meeting on Thursday with David Wallom to discuss the NGS VOs. Approximately 20 will be supported on the GridPP VOMS. The intention is to go live with the combined (upgraded) VOMS on 14th November.
  • The Manchester-Oxford replication has been successfully tested. Imperial to test shortly.


Tickets

Monday 10th December 2012 15.00 GMT. 29 tickets this week.

I haven't seen any sign of a fresh wave of "Unsupported Glite Software" tickets, but the ROD team have started issuing tickets for security alarms in the dashboard. Two of the three sites ticketed are tarball reliant (I wonder how Lancaster dodged the bullet? Only one of our clusters is running the tarball prototype). For your records the current tarball ticket is https://ggus.eu/ws/ticket_info.php?ticket=81496.

A lot of sites did the workers at the same time as the CEs, so with luck there shouldn't be too many more tickets showing up on this front.

  • NGI

https://ggus.eu/ws/ticket_info.php?ticket=89350 (10/12)
Bristol and ECDF aren't publishing UserDN, so the NGI got ticketed. In progress (10/12)

  • GLASGOW

https://ggus.eu/ws/ticket_info.php?ticket=89221 (5/12)
This ticket concerning enmr.eu accounting at Glasgow seems to have trailed off into a debate between two VO members, so I think it can be closed. In progress (5/12)

  • BIRMINGHAM

https://ggus.eu/ws/ticket_info.php?ticket=89129 (3/12)
High atlas prod job failure rates at Birmingham. Thought to be caused by the transition to EMI2 workers interacting with the atlas local area. Despite being all EMI now, and having completely reinstalled the workers, the ghosts of gLite past still linger somehow and the atlas software validation jobs won't update. Everyone who should be involved is involved, but something for others to watch out for. Waiting for reply (10/10)

  • LIVERPOOL

https://ggus.eu/ws/ticket_info.php?ticket=89374 (10/12)
Fusion is having trouble getting its files. A very unhelpful split ticket without a notified site. I think it might be a problem at Liverpool (.py.liv.ac.uk servers), but feel free to punt it elsewhere if it ain't. Assigned (10/12)

https://ggus.eu/ws/ticket_info.php?ticket=88761 (22/11)
lhcb jobs clogging the Liverpool network pipes. As a ticket that was submitted by Liverpool and wound up going back to Liverpool, it could easily end up in limbo. Both the site and lhcb have taken steps to stop this happening again, so I think it can be closed. In progress (4/12)

No other tickets caught my eye.

Tools - MyEGI Nagios

Tuesday 13th November

  • Noticed two issues during the Tier 1 power cut. The SRM and direct CREAM submission tests use the top BDII defined in the Nagios configuration to query the resource, and these tests started to fail because the RAL top BDII was not accessible. The configuration doesn't use BDII_LIST, so I cannot define more than one BDII; I am looking into how to make this more robust.
  • The Nagios web interface was not accessible to a few users because GOCDB was down. It is a bug in SAM-nagios and I have opened a ticket.

Site availability has not been affected by this issue because Nagios sends a warning alert when it cannot find a resource through the BDII.
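A BDII_LIST-style fallback over several top BDIIs could be sketched as follows (a minimal illustration, not the SAM-Nagios implementation; the hostnames are hypothetical and the actual query is passed in as a function):

```python
# Sketch: instead of a single hard-coded top BDII, try each endpoint in a
# list until one answers. Hostnames below are illustrative placeholders.
BDII_LIST = ["lcg-bdii.example.org:2170", "bdii-backup.example.org:2170"]

def query_with_fallback(query, endpoints=BDII_LIST):
    """Call query(endpoint) for each endpoint in turn and return the
    first successful result; raise only if every endpoint fails."""
    last_error = None
    for endpoint in endpoints:
        try:
            return query(endpoint)
        except Exception as exc:  # e.g. connection refused, timeout
            last_error = exc
    raise RuntimeError("all BDII endpoints failed") from last_error
```

The design point is simply that a list of endpoints with ordered fallback removes the single point of failure the power cut exposed.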


Wednesday 17th October

Monday 17th September

  • Current state of Nagios is now on this page.

Monday 10th September

  • Discussion needed on which Nagios instance is reporting for the WLCG (metrics) view.



VOs - GridPP VOMS VO IDs Approved VO table

Tue 4th December

Thursday 29 November

Tuesday 27 November

  • VOs supported at sites page updated
    • now lists number of sites supporting a VO, and number of VOs supported by a site.
    • Linked to by Steve Lloyd's pages


Tuesday 23 October

  • A local user wants to get onto the grid and set up his own UI. Do we have instructions?


Site Updates

Monday 5th November

  • SUSSEX: Site working on enabling of ATLAS jobs.


Meeting Summaries
Project Management Board - Members Minutes Quarterly Reports

Monday 1st October

  • ELC work


Tuesday 25th September

  • Reviewing pledges.
  • Q2 2012 review
  • Clouds and DIRAC
GridPP ops meeting - Agendas Actions Core Tasks

Tuesday 21st August - link Agenda Minutes

  • TBC


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda EVO meeting

Wednesday 12th December

  • Operations report
  • Operations generally calmer this week (since the power problems). However, there were significant problems with the Atlas SRM today (12th Dec).
  • Successful test of the UPS / Diesel Generator on Tuesday (11th Dec).
  • Roll-out of over-commit of worker nodes for hyperthreading and upgrade of WNs to EMI-2 completed.
  • The Tier1 team is checking whether any VO requires the AFS client on the worker nodes.
WLCG Grid Deployment Board - Agendas MB agendas

October meeting Wednesday 10th October




NGI UK - Homepage CA

Wednesday 22nd August

  • Operationally few changes - VOMS and Nagios changes on hold due to holidays
  • Upcoming meetings Digital Research 2012 and the EGI Technical Forum. UK NGI presence at both.
  • The NGS is rebranding to NES (National e-Infrastructure Service)
  • EGI is looking at options to become a European Research Infrastructure Consortium (ERIC). (Background document.)
  • Next meeting is on Friday 14th September at 13:00.
Events

WLCG workshop - 19th-20th May (NY) Information

CHEP 2012 - 21st-25th May (NY) Agenda

GridPP29 - 26th-27th September (Oxford)

UK ATLAS - Shifter view News & Links

Thursday 21st June

  • Over the last few months ATLAS have been testing their job recovery mechanism at RAL and a few other sites. This is something that was 'implemented' before but never really worked properly. It now appears to be working well, allowing jobs to finish even if the SE is down or unstable when the job finishes.
  • Job recovery works by writing the output of the job to a directory on the WN should writing the output to the SE fail. Subsequent pilots will check this directory and try again for a period of 3 hours. If you would like to have job recovery activated at your site, you need to create a directory to which (atlas) jobs can write. I would also suggest that this directory has some form of tmpwatch enabled on it which clears up files and directories older than 48 hours. Evidence from RAL suggests that it's normally only 1 or 2 jobs that are ever written to the space at a time, and the space used is normally less than a GB; I have not observed more than 10GB being used. Once you have created this space, please email atlas-support-cloud-uk at cern.ch with the directory (and your site!) and we can add it to the ATLAS configurations. We can switch off job recovery at any time if it does cause a problem at your site. Job recovery would only be used for production jobs, as users complain if they have to wait a few hours for things to retry (even if it would save them time overall...)
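The suggested tmpwatch-style clean-up of the recovery directory could be sketched like this (an illustration only, not the actual site cron job; the 48-hour cut-off follows the text above and the directory path is whatever the site created):

```python
import os
import time

def clean_recovery_dir(path, max_age_hours=48):
    """Remove files older than max_age_hours from the job-recovery
    directory and prune any subdirectories left empty -- roughly what
    a tmpwatch cron job on the WN would do. Returns the removed files."""
    cutoff = time.time() - max_age_hours * 3600
    removed = []
    # Walk bottom-up so emptied subdirectories can be removed as we go.
    for root, dirs, files in os.walk(path, topdown=False):
        for name in files:
            full = os.path.join(root, name)
            if os.path.getmtime(full) < cutoff:
                os.remove(full)
                removed.append(full)
        for name in dirs:
            full = os.path.join(root, name)
            if not os.listdir(full):
                os.rmdir(full)
    return removed
```

In practice sites would just add the recovery directory to an existing tmpwatch/cron setup; the sketch only makes the age-based policy concrete.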
UK CMS

Tuesday 24th April

  • Brunel will be trialling CVMFS this week, will be interesting. RALPP doing OK with it.
UK LHCb

Tuesday 24th April

  • Things are running smoothly. We are going to run a few small-scale tests of new codes; these will also run at T2s, with one UK T2 involved. Then we will soon launch a new reprocessing of all data from this year. A CVMFS update from last week fixes cache corruption on WNs.
UK OTHER

Thursday 21st June - JANET6

  • JANET6 meeting in London (agenda)
  • Spend of order £24M for strategic rather than operational needs.
  • Recommendations to BIS shortly
  • Requirements: bandwidth, flexibility, agility, cost, service delivery - reliability & resilience
  • Core presently 100Gb/s backbone. Looking to 400Gb/s and later 1Tb/s.
  • Reliability limited by funding not ops so need smart provisioning to reduce costs
  • Expecting a 'data deluge' (ITER; EBI; EVLBI; JASMIN)
  • Goal of dynamic provisioning
  • Looking at ubiquitous connectivity via ISPs
  • Contracts were 10 years for connection and 5 years for transmission equipment.
  • Current native capacity 80 channels of 100Gb/s per channel
  • Fibre procurement for next phase underway (standard players) - 6400km fibre
  • Transmission equipment also at tender stage
  • Industry engagement - Glaxo case study.
  • Extra requirements: software coding, security, domain knowledge.
  • Expect genome data usage to explode in 3-5yrs.
  • Licensing is a clear issue
To note

Tuesday 26th June