Operations Bulletin 031212



Week commencing 26th November 2012
Task Areas
General updates

Monday 26th November

  • A reminder of the repository hosted at Manchester.
  • Please note that the SARoNGS CA certificate expires on the 30th Nov (this Friday).
  • There was a WLCG Operations Coordination Team meeting last Thursday. Note that a gLExec test has been added to the EGI ROC_OPERATORS profile. A push for enablement will likely start in December or January.
  • Brian circulated some tables showing (ATLAS) WAN performance.


Tuesday 20th November

  • The WLCG T2 reliability/availability report for October is now final.
  • There was a GDB last week. Ewan's report is in the wiki.
  • There is an ongoing EGI Operations Management Board meeting.
  • The DTEAM VO membership service is currently provided by VOMRS (AUTH). VOMRS has been unsupported since 01-10-2012; the service is ready to migrate to VOMS at the end of November.
Tier 1

Tuesday 27th November

  • Major problem last Tuesday when an over-voltage was applied to equipment on the UPS during planned electrical work. There was a significant loss of PDUs and power supplies. Services were restored around 48 hours later, although batch capacity and the number of operational tape drives are still reduced. Discussions are underway about when a load test of the diesel generator can take place.
  • We are in the process of upgrading the worker nodes to EMI-2.
  • Other items:
    • Ongoing investigation into asymmetric data rates seen to remote sites.
    • A small start has been made in rolling out the over-commit of batch jobs to make use of hyperthreading.
    • A test instance of FTS version 3 is now available and being tested by ATLAS & CMS.
Storage & Data Management - Agendas/Minutes

Wednesday 10th October

  • DPM EMI upgrades:
    • 9 sites need to upgrade from gLite 3.2
  • QMUL asking for FTS settings to be increased to fully test Network link.
  • Initial discussion on how Brunel might upgrade its SE and decommission its old SE.
  • Classic SE support, both for new SEs and the plan to remove the current publishing of the classic SE endpoint.


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 30th October

  • Storage availability in SL pages has been affected by a number of sites being asked by ATLAS to retire the ATLASGROUPDISK space token while the SUM tests were still testing it as critical. The availability will be corrected manually once the month ends. Sites affected in different degrees are RHUL, CAM, BHAM, SHEF and MAN.

Friday 28th September

  • Tier-2 pledges to WLCG will be made shortly. The situation is fine unless there are significant equipment retirements coming up.
  • See Steve Lloyd's GridPP29 talk for the latest on the GridPP accounting.

Wednesday 6th September

  • Sites should check the ATLAS page reporting the HS06 coefficient because, according to the latest statement from Steve, that is what is going to be used. The Atlas Dashboard coefficients are averages over time.

I am going to suggest using the ATLAS production and analysis numbers given in HS06 directly, rather than using CPU seconds and trying to convert them ourselves as we have been doing. There doesn't seem to be any robust way of doing the conversion any more, so we may as well use the ATLAS numbers, which are the ones checked against pledges etc. anyway. If the conversion factors are wrong then we should get them fixed in our BDIIs. No doubt there will be a lively debate at GridPP29!
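
As a minimal sketch of the conversion we have been doing ourselves (illustrative only; the per-core rating and numbers below are made up, not published site figures):

    # Sketch of the CPU-seconds -> HS06-hours conversion done by hand so far.
    # hs06_per_core is the site's per-core HEPSPEC06 rating as published in
    # its BDII; the example value below is illustrative.
    def cpu_secs_to_hs06_hours(cpu_secs, hs06_per_core):
        """Convert raw CPU seconds into HS06-hours."""
        return cpu_secs * hs06_per_core / 3600.0

    # Example: one million CPU seconds on cores rated at 10 HS06 each.
    print(cpu_secs_to_hs06_hours(1_000_000, 10.0))  # ~2777.8 HS06-hours

If the conversion factor in the BDII is wrong, it is exactly this multiplication that goes wrong, which is why using the ATLAS-published HS06 numbers directly avoids the problem.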

Documentation - KeyDocs

Thursday, 29th November

The Approved VOs document has been updated to automatically contain a table that lays out the resource requirements for each VO, as well as the maximum. We need to discuss whether this is useful: it seems that the majority of WN software requirements are passed around by word of mouth etc. Should this be formalised? Please see

   https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#VO_Resource_Requirements

This table will be kept up to date by a regular process that syncs it with the CIC Portal, should it prove useful.
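
A rough sketch of such a sync process, assuming the operations portal's public VO ID card XML export (the URL and the XML element/field names here are assumptions for illustration, not a confirmed interface):

    # Fetch VO ID cards from the CIC/Operations Portal and emit a wiki table.
    # FEED_URL and the XML element names are assumed for illustration.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://operations-portal.egi.eu/xml/voIDCard/public/all/true"

    def fetch_vo_rows(url=FEED_URL):
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        rows = []
        for card in root.iter("IDCard"):          # assumed element name
            name = card.get("Name", "?")
            ram = card.findtext(".//RAMPerCore", default="?")   # assumed field
            scratch = card.findtext(".//Scratch", default="?")  # assumed field
            rows.append((name, ram, scratch))
        return rows

    def wiki_table(rows):
        lines = ['{| class="wikitable"', '! VO !! RAM/core !! Scratch']
        for name, ram, scratch in rows:
            lines.append('|-')
            lines.append('| %s || %s || %s' % (name, ram, scratch))
        lines.append('|}')
        return '\n'.join(lines)

Run from cron, the output could simply replace the table section of the wiki page.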


Tuesday 6th November

  • Do we need the Approved VOs document to set out the software needs for the VOs?

Tuesday 23rd October

KeyDocs monitoring status: Grid Storage(7/0) Documentation(3/0) On-duty coordination(3/0) Staged rollout(3/0) Ticket follow-up(3/0) Regional tools(3/0) Security(3/0) Monitoring(3/0) Accounting(3/0) Core Grid services(3/0) Wider VO issues(3/0) Grid interoperation(3/0) Cluster Management(1/0) (brackets show total/missing)

Thursday 26th July

All the "site update pages" have been reconfigured from a topic oriented structure into a site oriented structure. This is available to view at https://www.gridpp.ac.uk/wiki/Separate_Site_Status_Pages#Site_Specific_Pages

Please do not edit these pages yet - any changes would be lost when we refine the template. Any comments gratefully received, contact: sjones@hep.ph.liv.ac.uk

Interoperation - EGI ops agendas

Monday 5th November

  • There was an EGI ops meeting today.
  • UMD 2.3.0 in preparation. Release due 19 November, freeze date 12 November.
  • EMI-2 updates: DPM/LFC and VOMS - bugfixes, and glue 2.0 in DPM.
  • EGI have a list of sites considered unresponsive or having insufficient plans for the middleware migration. The one UK site mentioned has today updated their ticket again with further information.
  • In general an upgrade plan cannot extend beyond the end of 2012.
  • A dCache probe was rolled into production yesterday; alarms should appear on the security dashboard within the next 24 hours.
  • CSIRT is taking over from COD on migration ticketing. By next Monday the NGIs with problematic sites will be asked to contact the sites, asking them to register a downtime for their unsupported services.
  • Problems with the WMS in EMI-2 (update 4), WMS version 3.4.0: basically, it can get the proxy interaction with MyProxy wrong. The details are in GGUS 87802, and a couple of workarounds exist.



Monitoring - Links MyWLCG

Monday 2nd July

  • DC has almost finished an initial ranking. This will be reviewed by AF/JC and discussed at the 10th July ops meeting.

Wednesday 6th June

  • Ranking continues. Plan to have a meeting in July to discuss good approaches to the plethora of monitoring available.
  • Glasgow dashboard now packaged and can be downloaded here.
On-duty - Dashboard ROD rota

Friday 23rd November - DB

  • Despite the meltdown at RAL, nothing exciting to report. The Imperial WMS got stuck at some point (you'll see an accumulation of "job cancelled" / "failed to run" messages). Simon managed to reproduce the bug: https://ggus.eu/tech/ticket_show.php?ticket=88831 (in real life it was the Durham CE that caused the problem). If this happens again, let Daniela know and she will check the WMS.

Monday 19th November - AM

  • Good week overall. No UK-wide problems. Several sites still have upgrade-related planned downtimes. Only one outstanding ticket (Durham) and no alarms left open over the weekend.

Monday 12th November

  • Birmingham is in downtime until further notice as the university is operating on emergency power.
  • Durham has an open ticket.
  • There were a lot of Nagios test failures because of the power failure at the Tier 1, but everything is now back to normal.


Rollout Status WLCG Baseline

Tuesday 6th November

References


Security - Incident Procedure Policies Rota

Monday 22nd October

  • Last week's UK security activity was very much business as usual; there are a lot of alarms in the dashboard for UK sites, but for most of the week they only related to the gLite 3.2 retirement.

Friday 12th October

  • The main activity over the last week has been due to new Nagios tests for obsoleted glite middleware and classic SE instances. Most UK sites have alerts against them in the security dashboard and the COD has ticketed sites as appropriate. Several problems have been fixed already, though it seems that the dashboard is slow to notice the fixes.

Tuesday 25th September


Services - PerfSonar dashboard | GridPP VOMS

Tuesday 20th November

  • Reminder for sites to add perfSONAR services in GOCDB.
  • VOMS upgraded at Manchester. No reported problems. Next step to do the replication to Oxford/Imperial.

Monday 5th November

  • perfSONAR service types are now defined in GOCDB.
  • Reminder that the gridpp VOMS will be upgraded next Wednesday.

Thursday 18th October

  • VOMS sub-group meeting on Thursday with David Wallom to discuss the NGS VOs. Approximately 20 will be supported on the GridPP VOMS. The intention is to go live with the combined (upgraded) VOMS on 14th November.
  • The Manchester-Oxford replication has been successfully tested. Imperial to test shortly.


Tickets

Monday 26th November 14.30 GMT

35 open UK tickets today.

I had to set a few tickets "in progress" this week. Remember: if a VO reopens a ticket and you go back to re-fix the issue, the ticket should be set back to "in progress".

A few trends this week: t2k are on a software update spree (I know that at Lancaster there were problems due to us not having installed the packages listed on the CIC portal), Biomed continue to give their tickets the silent treatment, and there seem to be a few LHCb tickets about the place.

  • Unsupported gLite software:

BRISTOL: https://ggus.eu/ws/ticket_info.php?ticket=87472 (17/10) In progress (23/11)
The CREAM CE & worker nodes are EMI-2. This only leaves the APEL box and, I believe (the hard one), the Bristol StoRM box. Good stuff though.

EDINBURGH: https://ggus.eu/ws/ticket_info.php?ticket=87171 (10/10) In progress (21/11)
"Pretty much done with our EMI deployment". Great news; once you're done you can close this ticket, it's not being done for us any more.

MANCHESTER: https://ggus.eu/ws/ticket_info.php?ticket=87467 (17/10) On hold (5/11)
Manchester have the final push of upgrades planned for this week. Good luck!

UCL: https://ggus.eu/ws/ticket_info.php?ticket=87468 (17/10) On hold (1/11)
Things are quiet on the UCL front.

  • QMUL

https://ggus.eu/ws/ticket_info.php?ticket=88822 (23/11)
This LHCb ticket, concerning 99999 values in the Max CPU time information publishing, might have slipped under the QM radar. Assigned (23/11)

A similar problem was seen at Lancaster: https://ggus.eu/ws/ticket_info.php?ticket=88772
Sheffield solved their ticket: https://ggus.eu/ws/ticket_info.php?ticket=88781. A sketch of a check for this publishing issue is below.
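
As a rough illustration (not part of any ticket), one could query a CE's resource BDII for the published GlueCEPolicyMaxCPUTime and flag the 99999-style sentinel values that upset LHCb. The third-party ldap3 module and the commented hostname are assumptions for the sketch:

    # Query a CE's resource BDII (anonymous LDAP, port 2170, GLUE 1 schema)
    # and flag suspect Max CPU time values. Requires the ldap3 module.
    from ldap3 import Server, Connection, ALL

    def check_max_cpu_time(host, port=2170):
        conn = Connection(Server(host, port=port, get_info=ALL), auto_bind=True)
        conn.search("mds-vo-name=resource,o=grid",
                    "(objectClass=GlueCE)",
                    attributes=["GlueCEUniqueID", "GlueCEPolicyMaxCPUTime"])
        for entry in conn.entries:
            value = str(entry.GlueCEPolicyMaxCPUTime)
            if value.startswith("99999"):
                print("%s publishes suspect MaxCPUTime %s"
                      % (entry.GlueCEUniqueID, value))

    # check_max_cpu_time("ce1.example.ac.uk")  # hypothetical CE hostname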

https://ggus.eu/ws/ticket_info.php?ticket=86306 (22/9)
Hard-to-kill LHCb pilots on the QMUL CREAM. LHCb sent out another "to purge" list on the 21st. The corresponding ticket from Chris to the CREAM developers (https://ggus.eu/tech/ticket_show.php?ticket=87891) was cheekily closed because Chris couldn't provide log snippets right away (due to CREAM's overactive log rotation). In progress (21/11)

BTW Sheffield has seen this problem too in the last week: https://ggus.eu/ws/ticket_info.php?ticket=88719

  • LIVERPOOL

https://ggus.eu/ws/ticket_info.php?ticket=88761 (22/11)
Technically this is a "from the UK" ticket. LHCb jobs have been swamping the Liverpool WAN; cutting back the number of running LHCb jobs has alleviated the problem somewhat. Is the cause of the sudden LHCb bandwidth consumption still thought to be simply a surge of LHCb work while ATLAS were quiet? In progress (23/11)

  • LANCASTER

https://ggus.eu/ws/ticket_info.php?ticket=88628 (20/11)
t2k were having problems installing their software on one of our clusters, largely because our latest node image did not take t2k into account. However, the latest problem is a crash when building ROOT, which has us stumped. Apparently Oxford see this too, so expect a poke from me soon in an attempt to scavenge a solution. In progress (26/11)

  • RHUL

https://ggus.eu/ws/ticket_info.php?ticket=88417 (11/11)
ATLAS squid problems: any luck getting the firewall opened for your new squid configuration? In progress (20/11)

The solved tickets and "tickets from the UK" have largely merged into the site tickets this week.

Tools - MyEGI Nagios

Tuesday 13th November

  • Two issues were noticed during the Tier 1 power cut. The SRM and direct CREAM submission tests use the top BDII defined in the Nagios configuration to query for resource information, so they started to fail when the RAL top BDII became inaccessible. The tests do not use BDII_LIST, so more than one BDII cannot be defined; I am looking into how to make this more robust (a sketch of one possible fallback is below).
  • The Nagios web interface was not accessible to a few users because GOCDB was down. This is a bug in SAM-Nagios and I have opened a ticket.

Site availability has not been affected by the BDII issue because Nagios sends only a warning alert when it cannot find a resource through the BDII.
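
A sketch of the kind of fallback being considered, trying each endpoint of a BDII_LIST-style list in turn until one answers (the hostnames are examples, and the third-party ldap3 module is an illustrative choice, not what SAM-Nagios itself uses):

    # Try a list of top BDIIs in order and query the first one that responds.
    from ldap3 import Server, Connection

    TOP_BDIIS = ["lcg-bdii.gridpp.rl.ac.uk",          # example hostnames
                 "topbdii.grid.hep.ph.ic.ac.uk"]

    def query_first_working_bdii(search_filter, attributes):
        last_error = None
        for host in TOP_BDIIS:
            try:
                conn = Connection(Server(host, port=2170, connect_timeout=10),
                                  auto_bind=True, receive_timeout=30)
                conn.search("o=grid", search_filter, attributes=attributes)
                return conn.entries
            except Exception as exc:  # this BDII is down; try the next one
                last_error = exc
        raise RuntimeError("no top BDII reachable: %s" % last_error)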


Wednesday 17th October

Monday 17th September

  • Current state of Nagios is now on this page.

Monday 10th September

  • Discussion needed on which Nagios instance is reporting for the WLCG (metrics) view



VOs - GridPP VOMS VO IDs Approved VO table

Thursday 29 November

Tuesday 27 November

  • VOs supported at sites page updated
    • now lists number of sites supporting a VO, and number of VOs supported by a site.
    • Linked to by Steve Lloyd's pages


Tuesday 23 October

  • A local user wants to get onto the grid and set up his own UI. Do we have instructions?

Monday 15th October

  • SNO+ jobs now work at Dresden (https://ggus.eu/ws/ticket_info.php?ticket=86741), but there has got to be a better way.
  • Discussion with SNO+ about their requirements; discussions started on the following topics:
    • Robot certificates and hardware keys
    • FCR
    • Managing storage: how to avoid users filling up the space

Monday 8th October

  • SNO+ had problems with the EMI-2 WN and Ganga, caused by formatting changes in EMI-2 command output.
  • Now fixed by Mark Slater (8 hours to install the EMI-2 WN and 20 minutes to fix Ganga).
  • SNO+ jobs don't work at Dresden: https://ggus.eu/ws/ticket_info.php?ticket=86741
  • A draft e-mail warning "non-LHC VOs" about upcoming updates has been sent to the ops list. Comments please.


Site Updates

Monday 5th November

  • SUSSEX: Site working on enabling of ATLAS jobs.


Meeting Summaries
Project Management Board - MembersMinutes Quarterly Reports

Monday 1st October

  • ELC work


Tuesday 25th September

  • Reviewing pledges.
  • Q2 2012 review
  • Clouds and DIRAC
GridPP ops meeting - Agendas Actions Core Tasks

Tuesday 21st August - link Agenda Minutes

  • TBC


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda EVO meeting

Wednesday 14th November

  • Operations report
  • The main issue was the power cut at RAL on Wednesday 7th Nov. The backup power via a diesel generator did not work. Core services (top BDII, FTS) were returned to service at the end of that afternoon (although a subsequent problem with the FTS service meant it was down overnight). All services (including Castor & batch) were back around 14:00 the next day. It is known that if there is another power cut the diesel generator will not automatically cut in; work is scheduled for Tuesday 20th Nov to try to resolve this.
WLCG Grid Deployment Board - Agendas MB agendas

October meeting Wednesday 10th October




NGI UK - Homepage CA

Wednesday 22nd August

  • Operationally few changes - VOMS and Nagios changes on hold due to holidays
  • Upcoming meetings Digital Research 2012 and the EGI Technical Forum. UK NGI presence at both.
  • The NGS is rebranding to NES (National e-Infrastructure Service)
  • EGI is looking at options to become a European Research Infrastructure Consortium (ERIC); see the background document.
  • Next meeting is on Friday 14th September at 13:00.
Events

WLCG workshop - 19th-20th May (NY) Information

CHEP 2012 - 21st-25th May (NY) Agenda

GridPP29 - 26th-27th September (Oxford)

UK ATLAS - Shifter view News & Links

Thursday 21st June

  • Over the last few months ATLAS have been testing their job recovery mechanism at RAL and a few other sites. This is something that was 'implemented' before but never really worked properly. It now appears to be working well, allowing jobs to finish even if the SE is down or unstable when the job finishes.
  • Job recovery works by writing the output of the job to a directory on the WN should writing the output to the SE fail. Subsequent pilots will check this directory and retry for a period of 3 hours. If you would like to have job recovery activated at your site, you need to create a directory to which (ATLAS) jobs can write. I would also suggest that this directory has some form of tmpwatch enabled on it which clears up files and directories older than 48 hours. Evidence from RAL suggests that it is normally only 1 or 2 jobs that are ever written to the space at a time, and the space used is normally less than a GB; I have not observed more than 10GB being used. Once you have created this space, please email atlas-support-cloud-uk at cern.ch with the directory (and your site!) and we can add it to the ATLAS configuration. We can switch off job recovery at any time if it causes a problem at your site. Job recovery is only used for production jobs, as users complain if they have to wait a few hours for things to retry (even if it would save them time overall...). A minimal cleanup sketch is below.
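
A minimal sketch of the suggested tmpwatch-style cleanup, assuming a placeholder recovery path (run from cron, e.g. hourly):

    # Remove files and directories under the job-recovery area that are
    # older than 48 hours. RECOVERY_DIR is a placeholder path.
    import os
    import shutil
    import time

    RECOVERY_DIR = "/data/atlas/jobrecovery"  # placeholder path
    MAX_AGE = 48 * 3600                       # 48 hours, in seconds

    def purge_old(root=RECOVERY_DIR, max_age=MAX_AGE):
        cutoff = time.time() - max_age
        for name in os.listdir(root):
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                if os.path.isdir(path):
                    shutil.rmtree(path, ignore_errors=True)
                else:
                    os.remove(path)

    if __name__ == "__main__":
        purge_old()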
UK CMS

Tuesday 24th April

  • Brunel will be trialling CVMFS this week, which will be interesting. RALPP are doing OK with it.
UK LHCb

Tuesday 24th April

  • Things are running smoothly. We are going to run a few small-scale tests of new code; these will also run at T2s, with one UK T2 involved. We will then soon launch a new reprocessing of all of this year's data. A CVMFS update from last week fixes cache corruption on WNs.
UK OTHER

Thursday 21st June - JANET6

  • JANET6 meeting in London (agenda)
  • Spend of order £24M for strategic rather than operational needs.
  • Recommendations to BIS shortly
  • Requirements: bandwidth, flexibility, agility, cost, service delivery - reliability & resilience
  • Core presently 100Gb/s backbone. Looking to 400Gb/s and later 1Tb/s.
  • Reliability limited by funding not ops so need smart provisioning to reduce costs
  • Expecting a 'data deluge' (ITER; EBI; EVLBI; JASMIN)
  • Goal of dynamic provisioning
  • Looking at ubiquitous connectivity via ISPs
  • Contracts are 10 years for connection and 5 years for transmission equipment.
  • Current native capacity 80 channels of 100Gb/s per channel
  • Fibre procurement for next phase underway (standard players) - 6400km fibre
  • Transmission equipment also at tender stage
  • Industry engagement - Glaxo case study.
  • Extra requirements: software coding, security, domain knowledge.
  • Expect genome data usage to explode in 3-5yrs.
  • Licensing is a clear issue
To note

Tuesday 26th June