Operations Bulletin 231212


Bulletin archive


Week commencing 17th December 2012
Task Areas
General updates

Tuesday 18th December

  • There is now an updated/final T2 availability/reliability report for November from WLCG.
  • For those wanting a better insight into EGI operations priorities take a look at today's OMB agenda.


Monday 17th December

  • Add yourself to the Janet UK community on the EVO system via this link if you have not already done so.
  • There was a GDB last Wednesday. Matt's notes are in the wiki. The GDB meeting summary can also be referenced.
  • There was an ATLAS T1/2/3 jamboree last Monday and Tuesday.
  • A reminder that sites need to update their VOMS configuration for voms.gridpp.ac.uk for the NES/NGS VOs.

Monday 10th December

  • Two GridPP sites are not currently publishing UserDNs.
  • Multi-core biomed jobs seen at a few sites.
  • The November T2 availability/reliability report had some errors. Sites have until the 14th to request a re-computation.
  • There is a GDB this Wednesday with the following agenda.
  • Experiment plans/directions for Long Shutdown 1 (LS1) were outlined in a recent WLCG meeting.
  • Minutes from the most recent WLCG Ops Coordination Team review are available.


Tier-1 - Status Page

Tuesday 18th December

  • Test of UPS/diesel last Tuesday (11th Dec) was successful.
  • There was a hardware failure of one of the main site routers at around 06:45 this morning (18th Dec). At the time of writing this report the network has been restored and we are checking out the Tier1.
  • The roll-out of the over-commit of batch jobs to make use of hyperthreading has been completed. The final step had to be re-applied after modifying a MAUI scheduler parameter to cope with the increased number of job slots.
  • Other items:
    • Ongoing investigation into asymmetric data rates seen to remote sites.
    • Test instance of FTS version 3 now available and being tested by Atlas & CMS.
  • Tier1 Plans for the holiday period on the blog.
Storage & Data Management - Agendas/Minutes

Wednesday 5 Dec

  • DPM EMI upgrades:
    • Future DPM support now better understood (DMLite)
    • Brunel still to try dCache migration
  • ATLAS Jamboree next week, ATLAS want to change all their filenames... (by 2014)
  • How we are doing Big Data(tm)


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 30th October

  • Storage availability in SL pages has been affected by a number of sites being asked by ATLAS to retire the ATLASGROUPDISK space token while the SUM tests were still testing it as critical. The availability will be corrected manually once the month ends. Sites affected in different degrees are RHUL, CAM, BHAM, SHEF and MAN.

Friday 28th September

  • Tier-2 pledges to WLCG will be made shortly. The situation is fine unless there are significant equipment retirements coming up.
  • See Steve Lloyd's GridPP29 talk for the latest on the GridPP accounting.

Wednesday 6th September

  • Sites should check the ATLAS page reporting the HS06 coefficient because, according to the latest statement from Steve, that is what is going to be used. The Atlas Dashboard coefficients are averages over time.

I am going to suggest using the ATLAS production and analysis numbers given in HS06 directly, rather than using CPU seconds and trying to convert them ourselves as we have been doing. There doesn't seem to be any robust way of doing it any more, so we may as well use the ATLAS numbers, which are the ones they are checking against pledges etc. anyway. If the conversion factors are wrong then we should get them fixed in our BDIIs. No doubt there will be a lively debate at GridPP29!
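To make the concern concrete, here is a minimal sketch of the hand conversion being argued against. The per-core HS06 factor and job numbers are made-up illustrative values, not GridPP figures; the point is only that a wrong published factor silently skews the local estimate away from the ATLAS-reported numbers.

```python
# Illustrative only: the CPU-seconds -> HS06 conversion done locally,
# using an example benchmark factor (not a real site value).

def cpu_secs_to_hs06_hours(cpu_secs, hs06_per_core):
    """Convert raw CPU seconds to HS06-hours using a per-core benchmark factor."""
    return cpu_secs * hs06_per_core / 3600.0

# A million CPU seconds on cores benchmarked at 10 HS06 each:
local_estimate = cpu_secs_to_hs06_hours(1_000_000, 10.0)

# If the BDII-published factor is wrong (say 8 instead of 10), the local
# estimate drifts 20% from what ATLAS reports against pledges:
wrong_estimate = cpu_secs_to_hs06_hours(1_000_000, 8.0)
drift = abs(local_estimate - wrong_estimate) / local_estimate  # 0.2
```

Using the ATLAS-published HS06 numbers directly sidesteps this whole class of error.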

Documentation - KeyDocs

Tuesday 4th December

KeyDocs monitoring status: Grid Storage(7/0) Documentation(3/0) On-duty coordination(3/0) Staged rollout(3/0) Ticket follow-up(3/0) Regional tools(3/0) Security(3/0) Monitoring(3/0) Accounting(3/0) Core Grid services(3/0) Wider VO issues(3/0) Grid interoperation(3/0) Cluster Management(1/0) (brackets show total/missing)

Thursday, 29th November

The Approved VOs document has been updated to automatically contain a table that lays out the resource requirements for each VO, as well as the maximum. We need to discuss whether this is useful - it seems that the majority of WN software requirements are passed around by word of mouth etc. Should this be formalized? Please see

   https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#VO_Resource_Requirements

This table will be kept up to date with a regular process that syncs it with the CIC Portal, should it prove to be useful.


Tuesday 6th November

  • Do we need the Approved VOs document to set out the software needs for the VOs?

Tuesday 23rd October

KeyDocs monitoring status: Grid Storage(7/0) Documentation(3/0) On-duty coordination(3/0) Staged rollout(3/0) Ticket follow-up(3/0) Regional tools(3/0) Security(3/0) Monitoring(3/0) Accounting(3/0) Core Grid services(3/0) Wider VO issues(3/0) Grid interoperation(3/0) Cluster Management(1/0) (brackets show total/missing)

Thursday 26th July

All the "site update pages" have been reconfigured from a topic oriented structure into a site oriented structure. This is available to view at https://www.gridpp.ac.uk/wiki/Separate_Site_Status_Pages#Site_Specific_Pages

Please do not edit these pages yet - any changes would be lost when we refine the template. Any comments gratefully received, contact: sjones@hep.ph.liv.ac.uk

Interoperation - EGI ops agendas

Tuesday 18th December

  • Update coming from today's meeting....!

Monday 3rd December


Monday 5th November

  • There was an EGI ops meeting today.
  • UMD 2.3.0 in preparation. Release due 19 November, freeze date 12 November.
  • EMI-2 updates: DPM/LFC and VOMS - bugfixes, and glue 2.0 in DPM.
  • EGI have a list of sites considered unresponsive or having insufficient plans for the middleware migration. The one UK site mentioned has today updated their ticket again with further information.
  • In general an upgrade plan cannot extend beyond the end of 2012.
  • A dCache probe was rolled into production yesterday; alarms should appear on the security dashboard in the next 24 hours.
  • CSIRT is taking over from COD on migration ticketing. By next Monday the NGIs with problematic sites will be asked to contact the sites, asking them to register a downtime for their unsupported services.
  • Problems with WMS in EMI-2 (update 4) - WMS version 3.4.0. Basically, it can get proxy interaction with MyProxy a bit wrong. The detail is at GGUS 87802, and there exist a couple of workarounds.



Monitoring - Links MyWLCG

Monday 2nd July

  • DC has almost finished an initial ranking. This will be reviewed by AF/JC and discussed at 10th July ops meeting

Wednesday 6th June

  • Ranking continues. Plan to have a meeting in July to discuss good approaches to the plethora of monitoring available.
  • Glasgow dashboard now packaged and can be downloaded here.
On-duty - Dashboard ROD rota

Friday 7th December - KM

  • Very quiet, nothing to report.

Sunday 2nd December - SP

  • Phenomenally quiet week - with one exception, all I ever saw were alarms that had already gone green!
  • One case open at the moment - ECDF, which is clicking over into the 'ticket them' window today - so that'll need some action either today or tomorrow morning.

Friday 23rd November - DB

  • Despite the meltdown at RAL, nothing exciting to report. The Imperial WMS got stuck at some point (you'll see an accumulation of "job cancelled due to failed to run" messages). Simon managed to reproduce the bug https://ggus.eu/tech/ticket_show.php?ticket=88831 (in real life it was the Durham CE that caused the problem). If this happens again, let Daniela know and she will check the WMS.

Monday 19th November - AM

  • Good week overall. No UK-wide problems. Several sites still with (upgrade-)related planned downtimes. Only one outstanding ticket (Durham) and no alarms left open over the weekend.

Monday 12th November

  • Birmingham is in downtime till further notice as the university is operating on emergency power.
  • Durham has an open ticket.
  • A lot of Nagios test failures because of the power failure at the Tier1, but everything is now back to normal.


Rollout Status WLCG Baseline

Tuesday 6th November

References


Security - Incident Procedure Policies Rota

Monday 3rd December

  • One critical alarm for a legacy CREAMCE-gLite-32 service which is already in downtime. World-writable directory warning; the site will raise it with ATLAS.

Monday 22nd October

  • Last week's UK security activity was very much business as usual; there are a lot of alarms in the dashboard for UK sites, but for most of the week they only related to the gLite 3.2 retirement.

Friday 12th October

  • The main activity over the last week has been due to new Nagios tests for obsoleted glite middleware and classic SE instances. Most UK sites have alerts against them in the security dashboard and the COD has ticketed sites as appropriate. Several problems have been fixed already, though it seems that the dashboard is slow to notice the fixes.


Services - PerfSonar dashboard | GridPP VOMS

Tuesday 20th November

  • Reminder for sites to add perfSONAR services in GOCDB.
  • VOMS upgraded at Manchester. No reported problems. Next step to do the replication to Oxford/Imperial.

Monday 5th November

  • perfSONAR service types are now defined in GOCDB.
  • Reminder that the gridpp VOMS will be upgraded next Wednesday.

Thursday 18th October

  • VOMS sub-group meeting on Thursday with David Wallom to discuss the NGS VOs. Approximately 20 will be supported on the GridPP VOMS. The intention is to go live with the combined (upgraded) VOMS on 14th November.
  • The Manchester-Oxford replication has been successfully tested. Imperial to test shortly.


Tickets

Monday 17th December 15:00 GMT

33 tickets this week. Due to the GGUS outage time was a little short for me to go over the tickets in as much depth as I'd like to have, nor put in any delightful holiday puns. Next year!

Please can everyone make extra effort to put any tickets to bed this week.

  • NGI

https://ggus.eu/ws/ticket_info.php?ticket=89350 (10/12)
User DN publishing (or lack thereof) at ECDF & Bristol. I believe both sites are publishing DNs now, and that neither destroyed APEL by trying to republish their data, so I think this ticket can be closed? In progress (12/12)

  • Out of date middleware tickets:

Sheffield: https://ggus.eu/ws/ticket_info.php?ticket=89478 - Can close their ticket by the looks of it.

WNs:
GLASGOW: https://ggus.eu/ws/ticket_info.php?ticket=89369 - Can you please try to post plans this week?
IC: https://ggus.eu/ws/ticket_info.php?ticket=89368 - Waiting on the tarball.
LANCASTER: https://ggus.eu/ws/ticket_info.php?ticket=89476 - Waiting on the tarball too. I had better get my backside in gear!
DURHAM: https://ggus.eu/ws/ticket_info.php?ticket=89370 - Plan to upgrade during their overhaul.
ECDF: https://ggus.eu/ws/ticket_info.php?ticket=89356 - Waiting on the (SL6, just to be different) tarball.

DPM:
UCL: https://ggus.eu/ws/ticket_info.php?ticket=89477 - Ben will contact the storage group for help soon. Needs a plan.
BIRMINGHAM: No ticket, but Mark is battling bravely against whatever's causing his upgrade failures. At this point I'd suggest exorcising the machine room, to get rid of the Ghosts of Upgrades Past.
Update - Birmingham do have a ticket now: https://ggus.eu/ws/ticket_info.php?ticket=89572

  • TIER-1

https://ggus.eu/ws/ticket_info.php?ticket=89733 (17/12)
Chris has spotted some bad information coming from the Tier-1 top BDII. Assigned as I was writing this (17/12)

  • DURHAM

https://ggus.eu/ws/ticket_info.php?ticket=89731 (17/12)
enmr.eu are having jobs stall at Durham. Can you please have a gander before the holidays? Assigned (17/12)

  • BRUNEL

https://ggus.eu/ws/ticket_info.php?ticket=89415 (10/12)
Of interest - an hone user's jobs failed to send work back to a DESY SE (perhaps an SL6 compatibility problem with his jobs?). The user has asked to use a few hundred GB of space at Brunel to stage his output to, and Raul has permitted it. In progress (could be set to waiting-for-reply pending the user's experience) (14/12)

  • LIVERPOOL

https://ggus.eu/ws/ticket_info.php?ticket=89374 (10/12)
Did you chaps have a chance to look at this fusion ticket? Is there actually a problem at Liverpool? In progress (11/12)

https://ggus.eu/ws/ticket_info.php?ticket=88761 (22/11)
The LHCb jobs that clogged the Liverpool network. Steps have been taken to stop this happening again, so I reckon we can put this one to bed. In progress (23/11)

  • BIRMINGHAM

https://ggus.eu/ws/ticket_info.php?ticket=89129 (3/12)
ATLAS job failures seem to have abated thanks to Mark's efforts. This ticket can be closed by the looks of it. In Progress (17/12)

  • IC

https://ggus.eu/ws/ticket_info.php?ticket=89105 (1/12)
t2k were having problems renewing proxies via the IC WMSes. Daniela implemented a hopeful fix, and Stephen Burke suggested that the change of behaviour was due to the switch to EMI MyProxy servers. In Progress (can be set to waiting-for-reply to see how the t2k jobs fare) (15/12)

  • LANCASTER

https://ggus.eu/ws/ticket_info.php?ticket=88628 (20/11)
t2k were having trouble running their SW jobs at Lancaster. The current problem is getting the WMS to submit to one CE; expect a plea for help to TB-SUPPORT soon. In progress (17/12)

Tools - MyEGI Nagios

Tuesday 13th November

  • Two issues were noticed during the Tier1 power cut. The SRM and direct CREAM submission tests use the top BDII defined in the Nagios configuration to query for resources, so they started failing when the RAL top BDII became inaccessible. The configuration does not use BDII_LIST, so more than one BDII cannot be defined; I am looking into how to make this more robust.
  • The Nagios web interface was not accessible to a few users because GOCDB was down. This is a bug in SAM-nagios and I have opened a ticket.

Site availability has not been affected by this issue, because Nagios only raises a warning alert when it cannot find a resource through the BDII.
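The failover behaviour being asked for can be sketched simply: try each top-level BDII in a list rather than relying on a single configured host. This is an illustrative sketch, not the SAM/Nagios implementation; the hostnames and probe function are assumptions.

```python
# Minimal sketch: probe a list of top BDIIs and use the first reachable one.
# Hostnames are examples only; 2170 is the standard BDII LDAP port.
import socket

BDII_LIST = ["lcg-bdii.gridpp.rl.ac.uk", "topbdii.example.org"]  # hypothetical
BDII_PORT = 2170

def first_reachable_bdii(hosts, port=BDII_PORT, timeout=5.0):
    """Return the first BDII host accepting TCP connections, or None."""
    for host in hosts:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host
        except OSError:
            continue  # host down or unresolvable; try the next one
    return None
```

With something like this, the loss of a single top BDII would degrade to a slower lookup rather than a test failure.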


Wednesday 17th October

Monday 17th September

  • Current state of Nagios is now on this page.

Monday 10th September

  • Discussion needed on which Nagios instance is reporting for the WLCG (metrics) view.



VOs - GridPP VOMS VO IDs Approved VO table

Mon 17th December

Tue 4th December

Thursday 29 November

Tuesday 27 November

  • VOs supported at sites page updated
    • now lists number of sites supporting a VO, and number of VOs supported by a site.
    • Linked to by Steve Lloyd's pages
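The two counts the updated page shows can be derived from a simple site-to-VOs mapping, sketched below. The site and VO names are examples only, not the page's actual data.

```python
# Sketch: derive "VOs supported by a site" and "sites supporting a VO"
# from a site -> supported-VOs mapping (example data, not real).
from collections import Counter

site_vos = {
    "UKI-NORTHGRID-LANCS-HEP": {"atlas", "t2k.org", "snoplus"},
    "UKI-LT2-IC-HEP":          {"atlas", "cms", "t2k.org"},
    "UKI-SCOTGRID-GLASGOW":    {"atlas", "lhcb"},
}

# Number of VOs supported by each site:
vos_per_site = {site: len(vos) for site, vos in site_vos.items()}

# Number of sites supporting each VO:
sites_per_vo = Counter(vo for vos in site_vos.values() for vo in vos)
```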


Tuesday 23 October

  • A local user wants to get on the grid and to set up his own UI. Do we have instructions?


Site Updates

Monday 5th November

  • SUSSEX: Site working on enabling of ATLAS jobs.


Meeting Summaries
Project Management Board - MembersMinutes Quarterly Reports

Monday 1st October

  • ELC work


Tuesday 25th September

  • Reviewing pledges.
  • Q2 2012 review
  • Clouds and DIRAC
GridPP ops meeting - Agendas Actions Core Tasks

Tuesday 21st August - link Agenda Minutes

  • TBC


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda EVO meeting

Wednesday 19th December

  • Operations report
  • Outage of Tier1 on Tuesday (18th Dec) following the failure of a core network router at RAL.
  • Post mortem report for Power Incident on 20th November now available here
  • Plans for cover over Christmas & New year holiday on Tier1 Blog here
  • The Tier1 team is checking whether any VO requires the AFS client on the worker nodes.
WLCG Grid Deployment Board - Agendas MB agendas

October meeting Wednesday 10th October




NGI UK - Homepage CA

Wednesday 22nd August

  • Operationally few changes - VOMS and Nagios changes on hold due to holidays
  • Upcoming meetings Digital Research 2012 and the EGI Technical Forum. UK NGI presence at both.
  • The NGS is rebranding to NES (National e-Infrastructure Service)
  • EGI is looking at options to become a European Research Infrastructure Consortium (ERIC). (Background document.)
  • Next meeting is on Friday 14th September at 13:00.
Events

WLCG workshop - 19th-20th May (NY) Information

CHEP 2012 - 21st-25th May (NY) Agenda

GridPP29 - 26th-27th September (Oxford)

UK ATLAS - Shifter view News & Links

Thursday 21st June

  • Over the last few months ATLAS have been testing their job recovery mechanism at RAL and a few other sites. This is something that was 'implemented' before but never really worked properly. It now appears to be working well, allowing jobs to finish even if the SE is down or unstable when the job finishes.
  • Job recovery works by writing the output of the job to a directory on the WN should it fail when writing the output to the SE. Subsequent pilots will check this directory and retry for a period of 3 hours. If you would like job recovery activated at your site, you need to create a directory to which (ATLAS) jobs can write. I would also suggest that this directory has some form of tmpwatch enabled on it which clears up files and directories older than 48 hours. Evidence from RAL suggests that it is normally only 1 or 2 jobs that are ever written to the space at a time, and the space used is normally less than a GB; I have not observed more than 10GB being used. Once you have created this space, email atlas-support-cloud-uk at cern.ch with the directory (and your site!) and we can add it to the ATLAS configurations. We can switch off job recovery at any time if it does cause a problem at your site. Job recovery would only be used for production jobs, as users complain if they have to wait a few hours for things to retry (even if it would save them time overall...)
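The retry-and-cleanup behaviour described above can be sketched as follows. This is a toy model under the stated assumptions (3-hour retry window, 48-hour tmpwatch-style cleanup), not the actual ATLAS pilot code; the function names are illustrative.

```python
# Sketch of the recovery-directory sweep a pilot might perform:
# retry recent failed outputs, purge anything older than the cleanup age.
import os
import time

RETRY_WINDOW = 3 * 3600    # pilots keep retrying a file for 3 hours
CLEANUP_AGE = 48 * 3600    # anything older than 48 hours is purged

def sweep_recovery_dir(recovery_dir, transfer_to_se, now=None):
    """Retry recent outputs via transfer_to_se(path); delete stale files."""
    now = time.time() if now is None else now
    for name in os.listdir(recovery_dir):
        path = os.path.join(recovery_dir, name)
        age = now - os.path.getmtime(path)
        if age > CLEANUP_AGE:
            os.remove(path)       # stale: give up and reclaim the space
        elif age <= RETRY_WINDOW and transfer_to_se(path):
            os.remove(path)       # transfer succeeded; clean up
        # between 3h and 48h: leave in place until the cleanup sweep
```

A real deployment would rely on tmpwatch (or similar) for the 48-hour cleanup rather than the pilot itself.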
UK CMS

Tuesday 24th April

  • Brunel will be trialling CVMFS this week, which will be interesting. RALPP doing OK with it.
UK LHCb

Tuesday 24th April

  • Things are running smoothly. We are going to run a few small-scale tests of new codes. This will also run at T2s, with one UK T2 involved. Then we will soon launch a new reprocessing of all data from this year. A CVMFS update from last week fixes cache corruption on WNs.
UK OTHER

Thursday 21st June - JANET6

  • JANET6 meeting in London (agenda)
  • Spend of order £24M for strategic rather than operational needs.
  • Recommendations to BIS shortly
  • Requirements: bandwidth, flexibility, agility, cost, service delivery - reliability & resilience
  • Core presently 100Gb/s backbone. Looking to 400Gb/s and later 1Tb/s.
  • Reliability limited by funding not ops so need smart provisioning to reduce costs
  • Expecting a 'data deluge' (ITER; EBI; EVLBI; JASMIN)
  • Goal of dynamic provisioning
  • Looking at ubiquitous connectivity via ISPs
  • Contracts are 10 yrs for connection and 5 yrs for transmission equipment.
  • Current native capacity 80 channels of 100Gb/s per channel
  • Fibre procurement for next phase underway (standard players) - 6400km fibre
  • Transmission equipment also at tender stage
  • Industry engagement - Glaxo case study.
  • Extra requirements: software coding, security, domain knowledge.
  • Expect genome data usage to explode in 3-5yrs.
  • Licensing is a clear issue
To note

Tuesday 26th June