Operations Bulletin 261112


Bulletin archive


Week commencing 19th November 2012
Task Areas
General updates

Tuesday 20th November

  • The WLCG T2 reliability/availability report for October is now final.
  • There was a GDB last week. Ewan's report is in the wiki.
  • There is an ongoing EGI Operations Management Board meeting.
  • The DTEAM VO membership service is currently provided by VOMRS (hosted at AUTH). VOMRS has been unsupported since 01-10-2012. The service should be ready for migration to VOMS at the end of November.

Tuesday 20th November

  • Investigations are going on today into the electrical systems, including the failure of the diesel generator to take over during the site power outage a couple of weeks ago.
  • We have had a string of operational issues: some of the Castor stagers (ATLAS & GEN) are consuming memory, which led to an outage of the ATLAS Castor instance for around four hours over the weekend; we are investigating an intermittent problem seen as timeouts in CMS Castor; and a network switch stack that mainly connects some older systems failed last night.
  • Investigation is ongoing into asymmetric data rates seen to remote sites.
  • A small start has been made in rolling out the over-commit of batch jobs to make use of hyperthreading.
  • Now rolling out EMI-2 SL5 WNs.
  • Test instance of FTS version 3 now available. Non-LHC VOs that use the existing service have been enabled on it and we are looking for one of these VOs to test it.
Storage & Data Management - Agendas/Minutes

Wednesday 10th October

  • DPM EMI upgrades:
    • 9 sites need to upgrade from gLite 3.2
  • QMUL asking for FTS settings to be increased to fully test its network link.
  • Initial discussion on how Brunel might upgrade its SE and decommission its old SE.
  • Classic SE support discussed, both for new SEs and the plan to remove the current publishing of the classic SE endpoint.


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 30th October

  • Storage availability in the SL pages has been affected by a number of sites being asked by ATLAS to retire the ATLASGROUPDISK space token while the SUM tests were still treating it as critical. The availability will be corrected manually once the month ends. The sites affected, to differing degrees, are RHUL, CAM, BHAM, SHEF and MAN.

Friday 28th September

  • Tier-2 pledges to WLCG will be made shortly. The situation is fine unless there are significant equipment retirements coming up.
  • See Steve Lloyd's GridPP29 talk for the latest on the GridPP accounting.

Wednesday 6th September

  • Sites should check the ATLAS page reporting the HS06 coefficients because, according to the latest statement from Steve, that is what is going to be used. The ATLAS Dashboard coefficients are averages over time.

I am going to suggest using the ATLAS production and analysis numbers given in HS06 directly, rather than using CPU seconds and trying to convert them ourselves as we have been doing. There no longer seems to be any robust way of doing the conversion, so we may as well use the ATLAS numbers, which are the ones checked against pledges etc. anyway. If the conversion factors are wrong then we should get them fixed in our BDIIs. No doubt there will be a lively debate at GridPP29!
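For context, the home-grown conversion referred to above is essentially the arithmetic sketched below; the benchmark coefficient is a made-up example value, and any error in a site's published HS06-per-core figure feeds straight into the accounted work, which is the robustness problem.

    # Illustrative sketch of the CPU-seconds -> HS06-hours conversion GridPP
    # has been doing; the benchmark figure used here is a made-up example.
    def cpu_seconds_to_hs06_hours(cpu_seconds, hs06_per_core):
        """Scale raw CPU time by the per-core HS06 benchmark and convert to hours."""
        return cpu_seconds * hs06_per_core / 3600.0

    # Example: one day of CPU on a core benchmarked at 10 HS06 (hypothetical value)
    print(cpu_seconds_to_hs06_hours(86400, 10.0))  # -> 240.0 HS06-hours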

Documentation - KeyDocs

Tuesday 6th November

  • Do we need the Approved VOs document that sets out the software needs for the VOs?

Tuesday 23rd October

KeyDocs monitoring status: Grid Storage(7/0) Documentation(3/0) On-duty coordination(3/0) Staged rollout(3/0) Ticket follow-up(3/0) Regional tools(3/0) Security(3/0) Monitoring(3/0) Accounting(3/0) Core Grid services(3/0) Wider VO issues(3/0) Grid interoperation(3/0) Cluster Management(1/0) (brackets show total/missing)

Thursday 26th July

All the "site update pages" have been reconfigured from a topic oriented structure into a site oriented structure. This is available to view at https://www.gridpp.ac.uk/wiki/Separate_Site_Status_Pages#Site_Specific_Pages

Please do not edit these pages yet - any changes would be lost when we refine the template. Any comments gratefully received, contact: sjones@hep.ph.liv.ac.uk

Interoperation - EGI ops agendas

Monday 5th November

  • There was an EGI ops meeting today.
  • UMD 2.3.0 in preparation. Release due 19 November, freeze date 12 November.
  • EMI-2 updates: DPM/LFC and VOMS - bugfixes, and glue 2.0 in DPM.
  • EGI have a list of sites considered unresponsive or having insufficient plans for the middleware migration. The one UK site mentioned has today updated their ticket again with further information.
  • In general an upgrade plan cannot extend beyond the end of 2012.
  • A dCache probe was being rolled into production yesterday; alarms should appear on the security dashboard in the next 24 hours.
  • CSIRT is taking over from COD on migration ticketing. By next Monday the NGIs with problematic sites will be asked to contact the sites, asking them to register a downtime for their unsupported services.
  • Problems with the WMS in EMI-2 (update 4) - WMS version 3.4.0. Basically, it can get the proxy interaction with MyProxy a bit wrong. The details are in GGUS 87802, and there are a couple of workarounds.


Monday 8th October

  • COD are about to launch monitoring tickets for 'out of support' services (i.e. gLite 3.2), for removal by the end of the month. (They seem to have missed some gLite 3.2 CREAM CEs, however - we need to make sure we don't.)
  • EMI updates. EMI-2, expected today or so.

DPM 1.8.4 (yay! but let it filter through staged rollout a bit...); LB and WMS 3.4 (both with security updates); UI and WN (including 32-bit libs and a few other dependencies).

  • Tarballs were raised: Tiziana raised the need for an EMI-2 tarball before the gLite 3.2 releases are retired.
  • Staged rollout: the ARC 2.0.0 clients are in the production repositories due to the emi-ui being in production. (Don't think that affects anyone in the UK.)
  • Released today: BDII Core and GFAL/lcgUtils.
  • Products in staged rollout: WMS 3.3.8; CREAM 1.13.4 (due to a mismatch between EMI and UMD versions, this is 1.13.5 in UMD)
  • It has been noted that there are a number of products without early adopters (EAs) in EMI-2: EMIR, Pseudonymity, WNoDeS, GridSAM and OGSA-DAI. These will not be included in UMD unless there is an EA and demand from NGIs. There are also a few products with no EA in EMI-2 but with one in EMI-1, and these are expected to move to EMI-2 at some point: CLUSTER, CREAM-LSF. (VOMS was listed, but the EA was present and pointed out that they are already on EMI-2.)
  • Unsupported services on 8th October EGI list.


Monitoring - Links MyWLCG

Monday 2nd July

  • DC has almost finished an initial ranking. This will be reviewed by AF/JC and discussed at the 10th July ops meeting.

Wednesday 6th June

  • Ranking continues. Plan to have a meeting in July to discuss good approaches to the plethora of monitoring available.
  • Glasgow dashboard now packaged and can be downloaded here.
On-duty - Dashboard ROD rota

Monday 19th November - AM

  • Good week overall. No UK-wide problems. Several sites still with upgrade-related planned downtimes. Only one outstanding ticket (Durham) and no alarms left open over the weekend.

Monday 12th November

  • Birmingham is in downtime till further notice as the university is operating on emergency power.
  • Durham has an open ticket.
  • A lot of Nagios test failures because of the power failure at the Tier-1, but everything is now back to normal.

Monday 5th November


Friday 19th October

  • Many sites continue to have planned downtimes for EMI upgrades, with knock-on effects on other local services (eg SRM -> WN Rep alarms). Changeover back to Oxford GridPP Nagios midweek, with some caching weirdness (reloading the dashboard produced one of two different sets of results!) which quickly went away.

Friday 12th October

  • There is a new ROD newsletter available from EGI.
  • As of this week, John Walsh will no longer be contributing to the ROD work. Many thanks to John for his input over the years!


Rollout Status WLCG Baseline

Tuesday 6th November

References


Security - Incident Procedure Policies Rota

Monday 22nd October

  • Last week's UK security activity was very much business as usual; there are a lot of alarms in the dashboard for UK sites, but for most of the week they only related to the gLite 3.2 retirement.

Friday 12th October

  • The main activity over the last week has been due to new Nagios tests for obsoleted glite middleware and classic SE instances. Most UK sites have alerts against them in the security dashboard and the COD has ticketed sites as appropriate. Several problems have been fixed already, though it seems that the dashboard is slow to notice the fixes.

Tuesday 25th September


Services - PerfSonar dashboard

Tuesday 20th November

  • Reminder for sites to add perfSONAR services in GOCDB.
  • VOMS upgraded at Manchester. No reported problems. Next step is to do the replication to Oxford/Imperial.

Monday 5th November

  • perfSONAR service types are now defined in GOCDB.
  • Reminder that the gridpp VOMS will be upgraded next Wednesday.

Thursday 18th October

  • VOMS sub-group meeting on Thursday with David Wallom to discuss the NGS VOs. Approximately 20 will be supported on the GridPP VOMS. The intention is to go live with the combined (upgraded) VOMS on 14th November.
  • The Manchester-Oxford replication has been successfully tested. Imperial to test shortly.


Tickets

Monday 19th November 14.30 GMT
29 open UK tickets this week. Thanks for all your hard work making my job easier :-)

  • Unsupported gLite software.

MANCHESTER: https://ggus.eu/ws/ticket_info.php?ticket=87467 (17/10) On hold (5/11)
Just the DPM left to go now I think, which is scheduled for next week.

ECDF: https://ggus.eu/ws/ticket_info.php?ticket=87171 (10/10) In progress (7/11)
Opened support tickets to help deal with their issues:
https://ggus.eu/tech/ticket_show.php?ticket=88284 (CREAM cancelling jobs - in progress)
https://ggus.eu/ws/ticket_info.php?ticket=88285 (CREAM SGE information publishing - solved)
https://ggus.eu/tech/ticket_show.php?ticket=88286 (Argus problems - on hold)

EFDA-JET: https://ggus.eu/ws/ticket_info.php?ticket=87169 (10/10) In progress (9/11)
I think JET are all upgraded, but they are having Nagios issues and might need a hand from someone. Not sure why this ticket hasn't closed though. Update: solved.

BRISTOL: https://ggus.eu/ws/ticket_info.php?ticket=87472 (17/10) In progress (19/11)
They have an EMI-2 test CE up and running, so things are looking good.

BRUNEL: https://ggus.eu/ws/ticket_info.php?ticket=87469 (17/10) In progress (5/11)
The site is on EMI-2, so I don't understand why this ticket didn't auto-close. Worth manually solving it (unless you have some hidden gLite about the place). Update: solved.

UCL: https://ggus.eu/ws/ticket_info.php?ticket=87468 (17/10) On hold (1/11)
Ben hoped to have an EMI CREAM by the 9th; not sure whether that target was reached.

SHEFFIELD: Elena has closed their ticket after upgrading everything.

  • NGI/VOMS

https://ggus.eu/ws/ticket_info.php?ticket=88546 (16/11)
Setting up a new "epic" VO (that's actually what they're calling themselves). Debate on the finer points of naming - the original suggestion was "epic.gridpp.ac.uk", but we need to decide on some precedent for future VO naming. Andy McNab suggests the VO registers its own domain name and uses that. In progress (16/11)

https://ggus.eu/ws/ticket_info.php?ticket=88395 (9/11)
David Meredith asks the NGI (i.e. us) if there are any objections to deleting the "UKI-Local-MAN-HEP" site-that-never-was from the GOCDB. Waiting for reply (13/11)

  • RHUL

https://ggus.eu/ws/ticket_info.php?ticket=88417 (11/11)
Alastair would like to know what you have in your squid ACLs/customize.sh to help debug the squid problems at RHUL. In progress (12/11)

  • GLASGOW

https://ggus.eu/ws/ticket_info.php?ticket=88376 (8/11)
Biomed ticketed Glasgow with a problem on one of their CEs, but they have neglected to reply to Sam's question of the 9th. Still waiting for reply (9/11)

  • DURHAM

https://ggus.eu/ws/ticket_info.php?ticket=86242 (20/9)
Another example of biomed's silence when asked a question. Waiting for reply (5/11)

  • BIRMINGHAM

https://ggus.eu/ws/ticket_info.php?ticket=88262 (6/11)
How are Birmingham's power problems coming along? On hold (9/11) Update: downtime extended.

  • RALPP

https://ggus.eu/ws/ticket_info.php?ticket=88099 (3/11)
Transfer errors have continued, although they've changed in nature. The last update was from Wahid on Thursday; any word from the site? They've been quiet on this one, which suggests that they're not getting alerts. In progress (16/11)

  • OXFORD

https://ggus.eu/ws/ticket_info.php?ticket=86106 (14/9)
Low ATLAS sonar rates to BNL. Brian asked if you could check whether the bad rates also apply to direct globus-url-copy transfers. Have you had a chance to have a bash at this? In progress (6/11)

  • Discussion point (mainly for ATLAS): in https://ggus.eu/ws/ticket_info.php?ticket=86334 Wahid and Brian had an exchange about the usefulness of tickets for tracking very long-standing issues - the main grumble for Wahid seemed to be the constant presence of ECDF on the daily resume because of this ticket. I agree with Brian that tickets are the tool to track issues, but the constant noise created by this ticket (which isn't going anywhere fast) is a nuisance. Maybe the resume needs to start ignoring certain classes of tickets (for example low-priority or on-hold ones).
  • Tickets from the UK:

https://ggus.eu/tech/ticket_show.php?ticket=87891
Chris's ticket concerning the hard-to-kill LHCb jobs.
https://ggus.eu/ws/ticket_info.php?ticket=87264
Daniela's ticket concerning the enormous number of entries showing up in /var/glite/log, with some interesting input from Daniela. QMUL & Lancaster also see this problem; if you do too, please add your voice (to coax a swifter fix).

If you have any tickets you'd like the spotlight on, please let me know. Update: Daniela has been schooling biomed in https://ggus.eu/ws/ticket_info.php?ticket=88489 and would like feedback from other sites, although I think that her and Sam's suggestions are perfectly sensible.

Tools - MyEGI Nagios

Tuesday 13th November

  • Two issues were noticed during the Tier-1 power cut. The SRM and direct CREAM submission tests use the top BDII defined in the Nagios configuration to query for resource information; these tests started to fail because the RAL top BDII was not accessible. The configuration does not use BDII_LIST, so more than one BDII cannot be defined. I am looking into how to make this more robust (a rough sketch of one possible fallback approach is below).
  • The Nagios web interface was not accessible to a few users because GOCDB was down. This is a bug in SAM-Nagios and I have opened a ticket.

Site availability has not been affected by this issue because Nagios only raises a warning alert when it cannot find a resource through the BDII.
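As an illustration only, a fallback along the following lines could try several top BDIIs in turn. The hostnames and the probe logic here are assumptions made for the sketch, not the actual SAM-Nagios implementation.

    import ldap  # python-ldap

    # Hypothetical list of top BDIIs to try in order; today the Nagios
    # configuration only holds a single host, which is the weakness.
    TOP_BDIIS = ["topbdii1.example.org", "topbdii2.example.org"]

    def query_top_bdii(ldap_filter, attrs=None):
        """Return results from the first top BDII that answers."""
        last_error = None
        for host in TOP_BDIIS:
            try:
                conn = ldap.initialize("ldap://{0}:2170".format(host))
                conn.set_option(ldap.OPT_NETWORK_TIMEOUT, 10)
                return conn.search_s("o=grid", ldap.SCOPE_SUBTREE,
                                     ldap_filter, attrs)
            except ldap.LDAPError as err:
                last_error = err  # remember the failure and try the next host
        raise last_error

    # Example: look up published SRM services.
    # services = query_top_bdii("(GlueServiceType=SRM)")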


Wednesday 17th October

Monday 17th September

  • Current state of Nagios is now on this page.

Monday 10th September

  • Discussion needed on which Nagios instance is reporting for the WLCG (metrics) view



VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 23 October

  • A local user wants to get on the grid and set up his own UI. Do we have instructions?

Monday 15th October

  • SNO+ jobs now work at Dresden (https://ggus.eu/ws/ticket_info.php?ticket=86741), but there has to be a better way.
  • Discussion with SNO+ about their requirements started on the following topics:
    • Robot certificates and hardware keys
    • FCR
    • Managing storage - how to avoid users filling up the space

Monday 8th October

  • SNO+ had problems with the EMI-2 WN and Ganga due to formatting changes in EMI-2 command output.
  • Now fixed by Mark Slater (8 hours to install the EMI-2 WN and 20 minutes to fix Ganga).
  • SNO+ jobs don't work at Dresden: https://ggus.eu/ws/ticket_info.php?ticket=86741
  • A draft e-mail warning "non-LHC VOs" about upcoming updates has been sent to the ops list. Comments please.


Site Updates

Monday 5th November

  • SUSSEX: Site working on enabling ATLAS jobs.


Meeting Summaries
Project Management Board - Members Minutes Quarterly Reports

Monday 1st October

  • ELC work


Tuesday 25th September

  • Reviewing pledges.
  • Q2 2012 review
  • Clouds and DIRAC
GridPP ops meeting - Agendas Actions Core Tasks

Tuesday 21st August - link Agenda Minutes

  • TBC


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda EVO meeting

Wednesday 14th November

  • Operations report
  • Main issue was the power cut at RAL on Wednesday 7th Nov. The backup power via a diesel generator did not work. Core services (TopBDII, FTS) were returned to service at the end of that afternoon (although there was a subsequent problem with the FTS service that meant it was down overnight). All services (including Castor & Batch) were back around 14:00 the next day. It is known that if there is another power cut the diesel generator will not automatically cut in. Work is scheduled for Tuesday 20th Nov to try and resolve this.
WLCG Grid Deployment Board - Agendas MB agendas

October meeting Wednesday 10th October




NGI UK - Homepage CA

Wednesday 22nd August

  • Operationally few changes - VOMS and Nagios changes on hold due to holidays
  • Upcoming meetings Digital Research 2012 and the EGI Technical Forum. UK NGI presence at both.
  • The NGS is rebranding to NES (National e-Infrastructure Service)
  • EGI is looking at options to become a European Research Infrastructure Consortium (ERIC). (Background document.)
  • Next meeting is on Friday 14th September at 13:00.
Events

WLCG workshop - 19th-20th May (NY) Information

CHEP 2012 - 21st-25th May (NY) Agenda

GridPP29 - 26th-27th September (Oxford)

UK ATLAS - Shifter view News & Links

Thursday 21st June

  • Over the last few months ATLAS have been testing their job recovery mechanism at RAL and a few other sites. This is something that was 'implemented' before but never really worked properly. It now appears to be working well, allowing jobs to finish even if the SE is down or unstable when the job finishes.
  • Job recovery works by writing the output of the job to a directory on the WN should the job fail to write its output to the SE. Subsequent pilots will check this directory and try again for a period of 3 hours. If you would like job recovery activated at your site you need to create a directory that (ATLAS) jobs can write to. I would also suggest that this directory has some form of tmpwatch enabled on it which clears up files and directories older than 48 hours (a minimal cleanup sketch is below). Evidence from RAL suggests that normally only 1 or 2 jobs are ever written to the space at a time and the space used is normally less than a GB; I have not observed more than 10GB being used. Once you have created this space, please email atlas-support-cloud-uk at cern.ch with the directory (and your site!) and we can add it to the ATLAS configuration. We can switch off job recovery at any time if it does cause a problem at your site. Job recovery would only be used for production jobs, as users complain if they have to wait a few hours for things to retry (even if it would save them time overall...).
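For sites that prefer not to rely on tmpwatch, a minimal cron-run cleanup along the lines below would achieve the same 48-hour clear-out. The directory path is a placeholder; substitute whatever recovery area you register with ATLAS.

    import os
    import shutil
    import time

    # Hypothetical location of the ATLAS job-recovery area; use whatever
    # directory you registered with atlas-support-cloud-uk.
    RECOVERY_DIR = "/data/atlas/jobrecovery"
    MAX_AGE_SECONDS = 48 * 3600  # clear anything older than 48 hours

    def clean_old_entries(top):
        """Remove files and directories under 'top' not modified for 48 hours."""
        now = time.time()
        for name in os.listdir(top):
            path = os.path.join(top, name)
            if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
                if os.path.isdir(path):
                    shutil.rmtree(path, ignore_errors=True)
                else:
                    os.remove(path)

    if __name__ == "__main__":
        clean_old_entries(RECOVERY_DIR)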
UK CMS

Tuesday 24th April

  • Brunel will be trialling CVMFS this week, which will be interesting. RALPP is doing OK with it.
UK LHCb

Tuesday 24th April

  • Things are running smoothly. We are going to run a few small-scale tests of new codes; these will also run at T2s, with one UK T2 involved. We will then soon launch a new reprocessing of all data from this year. There was a CVMFS update last week; it fixes cache corruption on WNs.
UK OTHER

Thursday 21st June - JANET6

  • JANET6 meeting in London (agenda)
  • Spend of order £24M for strategic rather than operational needs.
  • Recommendations to BIS shortly
  • Requirements: bandwidth, flexibility, agility, cost, service delivery - reliability & resilience
  • Core presently 100Gb/s backbone. Looking to 400Gb/s and later 1Tb/s.
  • Reliability limited by funding not ops so need smart provisioning to reduce costs
  • Expecting a 'data deluge' (ITER; EBI; EVLBI; JASMIN)
  • Goal of dynamic provisioning
  • Looking at ubiquitous connectivity via ISPs
  • Contracts: 10 years for connection and 5 years for transmission equipment.
  • Current native capacity 80 channels of 100Gb/s per channel
  • Fibre procurement for next phase underway (standard players) - 6400km fibre
  • Transmission equipment also at tender stage
  • Industry engagement - Glaxo case study.
  • Extra requirements: software coding, security, domain knowledge.
  • Expect genome data usage to explode in 3-5yrs.
  • Licensing is a clear issue
To note

Tuesday 26th June