Past Ticket Bulletins 2015
'''Monday 14th December 2015, 13.00 GMT'''<br />
 +
37 Open Tickets to end the year on.
 +
 +
Separating out the atlas storage consistency check tickets, I don't think there's going to be much more done in 2015 on them from either end. Sites with tickets still open for this are: Sussex, Oxford, Birmingham, Glasgow, Sheffield, RHUL, QM, Imperial, Brunel and the Tier 1 (who are almost done). If you're not almost done then you'll want to On Hold the ticket (and knock the urgency down to "less urgent" whilst you're at it).
 +
 +
Onto the other tickets, site by site:
 +
 +
'''SUSSEX'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118337 118337] (14/12)<br />
 +
Some accidental not-reading-downtime-notifications by Sno+, I suspect this ticket can be closed. In progress (14/12)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117739 117739] (20/11)<br />
 +
PRODDISK deletion at Sussex - as they're in the middle of an SE move, this will want to be on hold! Unless you're just not moving the proddisk directories? In progress (25/11)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118289 118289] (10/12)<br />
 +
Pilot roles at Sussex - the ticket only just rolled into the site, but if you can you might want to roll this into your downtime if it's going smoothly (and if you're having a smooth downtime, can you tell me what that's like?). Assigned (14/12)
 +
 +
'''RALPP'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117711 117711] (19/11)<br />
 +
Pheno Pilot Role ticket. At last word the new groups "just" needed to be configured on the middleware. Any joy? In progress (19/11)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117740 117740] (20/11)<br />
 +
The deletion of proddisk at RALPP has snowballed into a good clearout of the atlas area. Chris asked this morning if atlasdatadisk/step09 can be deleted too, and if Brian's had a chance to look at the filelists he provided. Waiting for reply (14/12)
 +
 +
'''BRISTOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118311 118311] (12/12)<br />
 +
CMS SAM3 tests were failing at Bristol. A tweak of their site configs solved the problem and they're both green (although Lukasz was surprised that they were throwing SRM warnings at a site with no SRM). In progress (looks like it can be closed) (14/12)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118299 118299] (10/12)<br />
 +
CMS transfer timeouts on job stageout - has this one snuck past the Bristol ticket-sentries? Assigned (11/12)
 +
 +
'''BIRMINGHAM'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118155 118155] (4/12)<br />
 +
Biomed have used up too much space on Birmingham's SE - but need help finding the dark data and asked for a dump. We could co-opt the atlas tools to make a dump for them? No news for a while. In progress (4/12)
 +
 +
'''GLASGOW'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117706 117706] (19/11)<br />
 +
Pheno pilots at Glasgow - Gareth has updated that they're having an overhaul of identity management at the site and will roll support for the "Dirac VOs" into that work. On Hold (2/12)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118052 118052] (30/11)<br />
 +
An atlas ticket about http support at Glasgow's endpoints. Sam noticed that this is likely a problem with some older dmlite versions on some pools, and is looking into a non-nuclear solution. In progress (1/12)
 +
 +
'''ECDF'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95303 95303] (1/7/13)<br />
 +
Tarball glexec ticket - as usual see the Lancaster update, as this is my bad. On hold (18/5)
 +
 +
'''SHEFFIELD'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114460 114460] (18/6)<br />
 +
Pilot roles at Sheffield - Elena is trying to roll this out this week. Any joy? In progress (1/12)
 +
 +
'''LANCASTER'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118305 118305] (11/12)<br />
 +
Low availability ticket for Lancaster after the [http://www.bbc.co.uk/news/uk-england-lancashire-35020049 events] of last week. We're just waiting for the numbers to pick up now. On hold (11/12)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95299 95299] (1/7/13)<br />
 +
Tarball glexec. We have a new, updated build process but have had no time to test the end product - we have to build lcas/lcmaps alongside glexec to cope with different paths. On Hold (2/11)
 +
 +
'''RHUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118236 118236] (9/12)<br />
 +
Atlas "no route to host" errors due to a downed pool node - I think Govind has happy news about that disk pool so hopefully this ticket can be closed. In progress (11/12) ''Update -solved, looks like a transient problem.''
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118295 118295] (11/12)<br />
 +
SE not working for Pheno when they use lcg-tools, although they can get stuff with uberftp. Odd. Assigned (11/12)
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117723 117723] (19/11)<br />
 +
Pilot roles for pheno - these were rolled out but weren't behaving (normal pheno jobs are fine). Any joy? In progress (23/11)
 +
 +
'''IMPERIAL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118162 118162] (4/12)<br />
 +
LFC at Imperial not working right for users from NA62, which coincided with the attack on JANET. DAMMIT. The LFC restarted and Janusz ran a test which worked fine the day after. No word back from the user. In progress (9/12)
 +
 +
'''BRUNEL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117710 117710] (19/11)<br />
 +
Pheno pilots at Brunel - Raul was hoping to get to this. Any luck? On hold (22/11)
 +
 +
'''100IT'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116358 116358] (22/9)<br />
 +
Included for completeness - a "missing image" ticket for 100IT. There was a recent update so there's some progress here. In progress (7/12)
 +
 +
'''THE TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116866 116866] (12/10)<br />
 +
Snoplus (and other) pilot roles at the Tier 1. Daniela is throwing dirac test jobs at the Tier 1 to try to debug the issues. A lot of effort going into the issue. In progress (9/12)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118044 118044] (30/11)<br />
 +
Atlas glexec hammerclouds failing at the Tier 1. It was noted that the failures occurred about the time of a config change, but the glexec-guru was unavailable. In progress (7/12)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117683 117683] (18/11)<br />
 +
Castor not publishing glue2. Stephen Burke has offered a hand with the task. In progress (19/11)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118338 118338] (14/12)<br />
 +
Janusz, with his mice (mouse?) hat on, cannot copy data from the Imperial SE to Castor via the FTS, but can do uploads/downloads using the regular tools. Early days of the issue. In progress (14/12)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116864 116864] (12/10)<br />
 +
CMS AAA xroot test failures. The ticket submitter replied to some queries with quite a lot of information that hopefully helps get to the bottom of this. In progress (20/11)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118130 118130] (3/12)<br />
 +
Technically a ticket to the whole atlas UK cloud I think - atlas transfers throughout the UK have a large number of errors. Brian has been investigating and asked the FTS experts to help. Waiting for Reply (11/12)
 +
 +
'''''Update:''''' [https://ggus.eu/?mode=ticket_info&ticket_id=118345 118345] (14/12)<br />
 +
Jeremy has asked the Tier 1 about how well utilised their WMS currently is - Catalin has been put forward to comment.
 +
 +
'''Monday 7th December 2015, 12.30 GMT'''<br />
 +
Apologies from Matt - we're picking ourselves up from the weekend's events up here in Lancaster.
 +
 +
31 Open UK tickets, 13 of these are atlas consistency checking tickets.
 +
 +
Link to all the [http://tinyurl.com/nwgrnys '''GGUS tickets'''].
 +
 +
Of general interest is this atlas ticket: [https://ggus.eu/?mode=ticket_info&ticket_id=118130 118130]<br />
 +
Where atlas have noticed large numbers of transfer and staging errors for the cloud.
 +
 +
Full review next week!
 +
 +
'''Monday 30th November 2015, 15.30 GMT'''
 +
 +
33 Open Tickets this week. 14 are for the same Atlas request.
 +
 +
'''GAZILLION ATLAS CONSISTENCY TICKETS'''<br />
 +
I won't go over all these - but ECDF, BIRMINGHAM and IMPERIAL are still just "assigned". As Ewan did with Oxford's ticket, feel free to knock down the "Priority" for these tickets - these requests aren't urgent at all! A few sites are done already, a few are on-holding their tickets till later.
 +
 +
'''RALPP'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117740 117740] (20/11)<br />
 +
Atlas request to delete proddisk data which digressed a bit (after the original issue was confirmed as solved) but then stalled. Last check Brian asked for a list of files in ./users. In progress (23/11)
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116864 116864] (12/10)<br />
 +
CMS xrootd tests failing. No news since CMS replied on the 20th. Are things churning along in the background? In progress (20/11)
 +
 +
'''''Forgot about this one:'''''<br />
 +
'''GLASGOW'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=118052 118052] (30/11)<br />
 +
Atlas, for the "WLCG HTTP Deployment Task Force", have ticketed Glasgow after their SE showed up on this [https://etf-atlas-dev.cern.ch/etf/check_mk/index.py?start_url=%2Fetf%2Fcheck_mk%2Fview.py%3Fview_name%3Dallhosts http probe page]. I notice a few other UK sites in the red on that page. In progress (1/12)
 +
 +
I think that's it for "exciting" tickets - as always let me know if that's not the case.
 +
 +
Not much news on the [https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 Other VO nagios] - it's looking pleasantly plain.
 +
 +
'''Monday 23rd November 2015, 15.00 GMT'''<br />
 +
There were just 18 Open UK Tickets - now we're up to 37...
 +
 +
'''ATLASPRODDISK Deletion''' (20/11)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117740 117740] (RALPP)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117739 117739] (Sussex)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117737 117737] (Birmingham)
 +
 +
Brian (as an agent of atlas) has asked sites to delete the data from proddisk and remove the tokens. Chris has sorted it already for RALPP and the ticket has mutated to a generic atlas cleanout. No news on the other two.
 +
 +
'''PILOT LIGHT AT THE END OF THE TUNNEL'''<br />
 +
In another round of pilot wrangling by Daniela, sites have been asked to implement pilots for the Pheno VO (or, where they are enabled, find out why they're not working!):
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117706 117706] (GLASGOW)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117710 117710] (Brunel)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117711 117711] (RALPP)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117723 117723] (QMUL)
 +
 +
For completeness the older pilot tickets:<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114460 114460] (Sheffield)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116866 116866] (TIER 1)
 +
The Tier 1 is at the "debugging" stage, no news from Sheffield though.
 +
 +
'''HOT OFF THE TICKET PRESS'''<br />
 +
Atlas have ticketed everyone and their grandmother asking to implement the automated storage consistency check stuff. Which I think is a little unfair as I'm unconvinced of the maturity of the tools after spending a day with them last week.
 +
 +
I've updated Lancaster's ticket to see if we've set this up right during our mucking about last week: [https://ggus.eu/index.php?mode=ticket_info&ticket_id=117883 117883]<br />
 +
''Update'' - Atlas have replied to say things are looking good at Lancaster, so we just need a few more tweaks (such as using Alessandra's updated and polished version of the [https://gitlab.cern.ch/atlas-adc-ddm-support/dark_data/blob/master/dpm_dump.py dpm_dump.py] script and creating dumps that are only a day old) and we're done.
 +
 +
'''EVEN HOTTER OFF THE PRESS'''<br />
 +
Brunel have been asked by CMS to upgrade their DPM to 1.8.10 in [https://ggus.eu/?mode=ticket_info&ticket_id=117922 117922] - however Raul reports that things aren't working all that well, suggesting CMS need to have a natter with the DPM devs.
 +
 +
'''Monday 16th November 2015, 15.45 GMT'''<br />
 +
Only 17 Open UK tickets this week.
 +
 +
'''LSST FRIDAY 13th TEETHING TROUBLES'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117585 117585] (Liverpool)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117586 117586] (Oxford)<br />
 +
Daniela has been testing out the LSST Dirac pilots - Ewan's fighting the good fight at Oxford getting his ARC to work, maybe Liverpool's problem has a similar root cause (we should be so lucky)?
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116866 116866] (12/10)<br />
 +
In a similar vein (and thanks again to Daniela for the effort with preparing our pilots) is this ticket about getting "other" pilots enabled at the Tier 1 - particularly with a view to these tests: http://www.gridpp.ac.uk/php/gridpp-dirac-sam.php?action=viewlcg Any news? On hold (6/11)
 +
 +
'''ECDF(-RDF)'''<br />
 +
I think this new atlas ticket:<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117606 117606] (15/11)<br />
 +
is a duplicate of this current atlas ticket:<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117447 117447] (8/11)<br />
 +
So I suspect it can be closed!
 +
 +
I see from [https://ggus.eu/?mode=ticket_info&ticket_id=117642 117642] that the other [https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 VO nagios] has been recently rebooted which might explain the lot of failures I see, I'll check again before the meeting.
 +
 +
 +
'''Tuesday 10th November 2015, 10.20 GMT'''<br />
 +
Down to 13 Open tickets this week!
 +
 +
All the UK can be found [http://tinyurl.com/nwgrnys '''here'''].
 +
 +
Not very exciting - the ECDF-RDF ticket [https://ggus.eu/?mode=ticket_info&ticket_id=117447 117447] (atlas complaining about stale file handle error messages - but it's for the "RDF" so not urgent?) might have slipped past Andy's sentries yesterday, and the two remaining pilot role tickets for Sheffield and the Tier 1 ([https://ggus.eu/?mode=ticket_info&ticket_id=114460 114460] and [https://ggus.eu/?mode=ticket_info&ticket_id=116866 116866]) could really do with updates - with the latter Daniela notes that the gridpp pilot roles are needed for testing purposes.
 +
 +
Anything I've missed?
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''Other VO Nagios'''] - no persistent trouble anywhere (that matters to us) - a few ARC CE submit troubles over the last hour or so with some RALPP CEs but I don't think that's ought to worry about.
 +
 +
'''Monday 2nd November 2015, 13.30 GMT'''<br />
 +
22 Open UK Tickets this week. First Monday of the Month, so all the tickets get looked at, however run of the mill they are.
 +
 +
First, the link to all the UK [http://tinyurl.com/nwgrnys '''tickets'''].
 +
 +
'''SUSSEX'''<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116915 116915] (14/10)<br />
 +
Low availability Ops ticket. On holded whilst the numbers soothe themselves. On Hold (23/10)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116865 116865] (12/10)<br />
 +
Sno+ job submission failures. Not much on this ticket since it was set In Progress. Looks like an argus problem. How goes things at Sussex before Matt RB moves on? (We'll miss you Matt!). In progress (20/10)
 +
 +
'''RALPP'''<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117261 117261] (28/10)<br />
 +
Atlas jobs failing with stage out failures. Federico notices that the failures are due to odd errors - "file already existing", and that things seem to be calming themselves. He's at a loss as to what RALPP can do. Checking the panda link suggests the errors are still there today. Waiting for reply (29/10)
 +
 +
'''BRISTOL'''<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116775 116775] (6/10)<br />
 +
Bristol's CMS glexec ticket. It looks like the solution is to have more cms pool accounts (which of course requires time to deploy). In progress (28/10)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117303 117303] (30/10)<br />
 +
CMS, not Highlander fans, don't seem to believe that There can be only One (glexec ticket). Poor old Bristol seem to be playing whack-a-mole with duplicate tickets. Is there a note that can be left somewhere to stop this happening? Assigned (30/10)
 +
 +
'''ECDF'''<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=95303 95303] (Long long ago)<br />
 +
Edinburgh's (and indeed Scotgrid's) only ticket is this tarball glexec ticket. A bit more on this later. On hold (18/5)
 +
 +
'''SHEFFIELD'''<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=114460 114460] (18/6)<br />
 +
Gridpp (and other) VO pilot roles at Sheffield. No news for a while, snoplus are trying to use pilot roles now for dirac so this is becoming very relevant. In progress (9/10)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116560 116560] (30/9)<br />
 +
Sno+ jobs failing, likely due to too many being submitted to the 10 slots that Sno+ has. Maybe a WMS scheduling problem - Stephen B has given advice. Elena asked if the problem persisted a few weeks ago. Waiting for reply (12/10)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116967 116967] (17/10)<br />
 +
A ROD availability ticket, on hold as per SOP. On hold (20/10)
 +
 +
'''LANCASTER'''<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116478 116478] (28/9)<br />
 +
Another availability ticket. Autumn was not kind to many of us! On hold (8/10)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116882 116882] (13/10)<br />
 +
Enabling pilot snoplus users at Lancaster. Shouldn't have been a problem, but turned into a bit of a comedy/tragedy of errors by yours truly mucking up. Hopefully fixed now- thanks to Daniela for her patience. In progress (2/11)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=95299 95299] (Far far away)<br />
 +
glexec tarball ticket. There's been a lot of communication with the glexec devs about this - the hopefully last hurdle is sorting out the RPATHs for the libraries. It's not a small hurdle though... On hold (2/11)
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117151 117151](23/10)<br />
 +
A ticket about jumbo frame problems, submitted to QM. After Dan provided some education the user replied that he only sees this problem at two atlas sites, but he is contacting the network admins at his institution to see if the issue is at their end. On hold (29/10)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117011 117011] (19/10)<br />
 +
ROD ticket for glue-validate errors. Went away for a while after Dan re-yaimed his site bdii, but possibly back again. Daniela suggests re-running the glue-validate test. Reopened (2/11)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116689 116689] (6/10)<br />
 +
Another ROD ticket, where Ops glexec test jobs are seemingly timing out for QM (this is the ticket Daniela mentioned on the ops mailing list). Dan noted that with the cluster half full tests were passing, suggesting some kind of load correlation (but as he also notes - what's getting loaded and causing the problem - Batch, CE or WNs?). Kashif reckons the argus server, and suggests a handy glexec time test which he posted. In progress (2/11)
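Kashif's exact test isn't reproduced in the ticket summary here, but a minimal sketch of timing a glexec identity switch on a WN looks something like the below (assuming the standard GLEXEC_CLIENT_CERT mechanism; the proxy paths are made-up examples). If the argus server is the culprit, the wall time should balloon when the cluster is busy.
<pre>
# Rough glexec timing test on a worker node (illustrative paths only).
export X509_USER_PROXY=/tmp/pilot_proxy        # the pilot's own proxy
export GLEXEC_CLIENT_CERT=/tmp/payload_proxy   # proxy of the user to switch to
time /usr/sbin/glexec /usr/bin/id
</pre>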
 +
 +
'''BRUNEL'''<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117324 117324] (2/11)<br />
 +
A fresh looking ROD ticket - Raul had to restart the BDII and hopefully that got it. In progress (2/11)
 +
 +
'''100IT'''<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116358 116358] (22/9)<br />
 +
Missing Image at 100IT. 100IT have asked for more details, no news since. Waiting for reply (19/10)
 +
 +
'''THE TIER 1'''<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116866 116866] (12/10)<br />
 +
Snoplus pilot enablement (not actually a word) at the Tier 1. New accounts were being requested after some internal discussion. On hold (19/10)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116864 116864] (12/10)<br />
 +
CMS AAA tests failing (the submitter notes "again..."). There are some oddities with other sites, which might be remote problems, but Andrew notes that previous manual fixes have been overwritten which likely explains why problems came back. In progress (does it need to be waiting for a reply?) (26/10)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117171 117171] (24/10)<br />
 +
LHCB had problems with an arc CE that was misbehaving for everyone. Things were fixed, and this ticket can now be closed. Waiting for reply (can be closed) (27/10)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117277 117277] (30/10)<br />
 +
Atlas have spotted "bring online timeout has been exceeded" errors. This appears to be a mixture of problems adding up, such as a number of broken disk nodes and heavy write access by atlas. In progress (2/11)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117248 117248] (28/10)<br />
 +
I believe this is related to the discussion on tb-support: the ticket asks for new host certs meeting the specified requirements to be obtained for the RAL SRMs. Jens was on it, and the new certs are ready to be deployed. In progress (30/10)
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''Other VO Nagios'''] - some badness at Sussex, but they have a ticket open for that.
 +
 +
'''Monday 26th October 2015, 15.30 GMT'''<br />
 +
26 Open UK Tickets this week. Not many seem all that exciting though.
 +
 +
The link to all the UK [http://tinyurl.com/nwgrnys '''tickets'''].
 +
 +
The few (two) tickets that really caught my eye are:
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117151 117151] (23/10)<br />
 +
This ticket is quite interesting, mainly for Dan schooling the submitter. QM received a ticket complaining that their jumbo frames were breaking stuff - it doesn't look like the problem is at QM though. Naught wrong with the ticket handling. Waiting for reply (23/10)
 +
 +
'''BRISTOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=117065 117065] (20/10)<br />
 +
Bristol have a CMS glexec error ticket that looks very similar to an existing one ([https://ggus.eu/?mode=ticket_info&ticket_id=116683 116683]), which is in turn spookily similar to a ticket being worked on by the Bristol admins ([https://ggus.eu/?mode=ticket_info&ticket_id=116775 116775]). At the very least I would say that if the two problems are different one would likely obfuscate the other. Is this a case of over-keen shifters submitting tickets without checking? I'd be tempted to close one or both of [https://ggus.eu/?mode=ticket_info&ticket_id=116683 116683] and this one ([https://ggus.eu/?mode=ticket_info&ticket_id=117065 117065]). Tell them I said it was okay to[1]. In progress (21/10)
 +
 +
[1] It probably is okay to.
 +
 +
There are also 4 availability tickets, all On Hold waiting for 30 days or so to pass.
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''Other VO Nagios'''] looks clean at time of writing.
 +
 +
'''Monday 19th October 2015, 14.30 BST'''<br />
 +
28 Open UK Tickets today.
 +
 +
'''UCL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116920 116920] (14/10)<br />
 +
UCL have an availability ticket, and Andrew M wonders what can be done for them, as a VAC site, to stop getting these sorts of tickets. Assigned (15/10) ''Update - Alessandra has updated and On-Holded the ticket. Thanks!''
 +
 +
'''SUSSEX'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116865 116865] (12/10)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116915 116915] (14/10)<br />
 +
Sussex have a Sno+ ticket and a ROD ticket that don't seem to have been looked at since they were submitted last week. Both just Assigned.
 +
 +
'''LIVERPOOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116918 116918] (14/10)<br />
 +
Another ROD ticket (Invalid glue), I think this one has snuck past the Liver-Lad's watch. Assigned (14/10)
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116782 116782] (7/10)<br />
 +
Another ROD ticket. Dan looks like he's tracked down why one of his CEs is misbehaving (MaxStartups in sshd_config). Looks like this ticket at least can be closed - Gareth confirms that the test is green. Waiting for reply (19/10)
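For anyone else bitten by this: MaxStartups caps the number of concurrent unauthenticated SSH connections, and once the cap is hit sshd starts refusing new ones, which can look like a randomly flaky service. A sketch of the directive (illustrative values, not necessarily what Dan set):
<pre>
# /etc/ssh/sshd_config (illustrative values)
# start:rate:full - randomly drop 30% of new unauthenticated connections
# once 20 are pending, and drop all of them at 100.
MaxStartups 20:30:100
</pre>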
 +
 +
'''SHEFFIELD'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116560 116560] (30/9)<br />
 +
Stephen B has added some more information to try to figure out why Sno+ jobs are flooding Sheffield's 10 Sno+ slots. Elena is not sure if the problem persists though. Waiting for reply (12/10)
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116864 116864] (12/10)<br />
 +
It looks like this CMS AAA test problem has resolved itself - Federica asks if you chaps at RAL changed anything? Looks like the ticket can be closed. In progress (15/10)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116866 116866] (12/10)<br />
 +
Enabling Sno+ pilots at the Tier 1. Only the LHC VOs had pilot roles enabled at the Tier 1, Andrew was going to discuss how best to make these changes. As with a similar issue at Lancaster - probably best to do it for all the VOs that will be using Dirac. In progress (13/10)
 +
 +
'''Monday 12th October 2015, 14.30 BST'''<br />
 +
23 Open UK Tickets this week. Just a light review.
 +
 +
'''OXFORD'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116752 116752] (1/10)<br />
 +
Oxford's CMS Phedex renewal ritual ticket. Chris B kindly offered in his ticket for RALPP to officiate this (un)holy task for Oxford, so I advise some communication between you chaps and him - which may well be going on, but it ain't in the ticket! Assigned (6/10)
 +
 +
'''BRISTOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116683 116683] (5/10)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116775 116775] (6/10)<br />
 +
CMS have, by the looks of it, thrown a pair of duplicate tickets Bristol's way. How rude! Lukasz has rightfully suggested closing one of them (I suggest 116683 - Winnie's put a good reply in t'other one). The underlying problem appears to be a shortage of pool accounts - what is the recommended number of accounts per VO these days? In progress.
 +
 +
'''SHEFFIELD'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114460 114460] (18/6)<br />
 +
Sheffield pilot role tickets. Daniela has pointed out that as Sno+ have started using Sheffield in earnest they really could do with pilot roles enabled. In progress (9/10)
 +
 +
'''ECDF'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116812 116812] (8/10)<br />
 +
LHCB asked Andy to clean up the $LOGIN_POST_SCRIPT for LHCB at his site, removing an "export CMTEXTRATAGS="host-slc5"" line. Naught wrong with the ticket, but I liked how lhcb did some good debugging, and there's something about a profile script problem that reminds me of a simpler time... In progress (9/10)
 +
 +
[http://tinyurl.com/nwgrnys '''A link to the rest of the tickets for completeness.''']
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''The other VO nagios.''']<br />
 +
Looks okay, some (known I think) errors at QM, and Sheffield seems to have a few SRM errors going on since Saturday.
 +
 +
 +
'''Monday 5th October 2015, 14.15 BST'''
 +
 +
22 Open UK Tickets this month, all of them, Site by Site:
 +
 +
'''SUSSEX'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116136 116136] (9/9)<br />
 +
Sussex got a snoplus ticket for a high number of job failures, although simple test jobs ran okay. Matt asks if the problem persists, the reply was a resounding "not sure". In progress (think about closing) (21/9)
 +
 +
'''RALPP'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116652 116652](1/10)<br />
 +
A ticket from CMS, about some important Phedex ritual that must occur on the 3rd of November, when the stars are right. The ticket needs some confirmation and feedback, plus the nomination of one site acolyte to receive the DBParam secrets from CMS - but the ticket only got assigned to sites this morning. Assigned (5/10)
 +
 +
'''BRISTOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116651 116651](1/10)<br />
 +
Same as the RALPP ticket, Winnie has volunteered Dr Kreczko for the task. In progress (5/10)
 +
 +
'''ECDF'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95303 95303](Long, long ago)<br />
 +
glexec ticket. On hold (18/5)
 +
 +
'''DURHAM'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116576 116576](1/10)<br />
 +
Atlas ticket asking Durham to delete all files outside of the datadisk path. Oliver asks what this means for the other tokens (I think they can be sacrificed to feed datadisk, but Brian et al can confirm that). Waiting for reply (5/10)
 +
 +
'''SHEFFIELD'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116560 116560](30/9)<br />
 +
Sno+ jobs having trouble at Sheffield. Looks like a proxy going stale problem as only 10 Sno+ jobs at a time can run at Sheffield. Matt M asks if/how the WMS can be notified to stop sending jobs in such a case. In progress (30/9)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114460 114460](18/6)<br />
 +
Gridpp Pilot roles. No news on this for a while, after the last attempt seemed to not quite work. In progress (30/7)
 +
 +
'''MANCHESTER'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116585 116585](1/10)<br />
 +
Biomed ticketed Manchester with problems from their VO nagios box - which Alessandra points out is due to there being no spare cycles for biomed to run on. Assigned (can be put on hold or closed?) (1/10)
 +
 +
'''LIVERPOOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116082 116082](7/9)<br />
 +
A classic Rod Availability ticket. On Hold (7/9)
 +
 +
'''LANCASTER''' (a little embarrassing that my own site has the most tickets)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116478 116478] (28/9)<br />
 +
Another availability ticket, this time for Lancaster (which has been through the wars in September). Still trying to dig our way out, but even the Admin's broke. On hold (5/10)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116676 116676] (5/10)<br />
 +
Another ROD ticket, Lancaster's not quite out of the woods. We think WMS access is somewhat broken. We have no idea about the sha2 error. In progress (5/10)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116366 116366] (22/9)<br />
 +
Sno+ spotted malloc errors at Lancaster. The problems seemed to survive one batch of fixes, but I asked again if they still see problems after running a good number of jobs over the weekend. Waiting for reply (5/10)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95299 95299] (In a galaxy far, far away)<br />
 +
glexec ticket. This was supposed to be done last week, after I had figured out [https://www.gridpp.ac.uk/wiki/RelocatableGlexec "the formula"] - but then last week happened. On hold (5/10)
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115959 115959] (31/8)<br />
 +
LHCB job errors at QM, with a 70% pilot failure rate on ce05. Dan couldn't see where things are breaking (only that the CE wasn't publishing to APEL- and asks if this is the cause of the problem?) Waiting for reply (5/10)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116662 116662] (5/10)<br />
 +
LHCB job failures on ce05 - almost certainly a duplicate of [https://ggus.eu/?mode=ticket_info&ticket_id=115959 115959], but it might have some useful information in it. Assigned (probably can be closed as a duplicate) (5/10)
 +
 +
'''IMPERIAL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116650 116650] (1/10)<br />
 +
Imperial's invitation to the CMS Phedex DBParam ritual. Daniela's on it, as well as the other CMS sites. On hold (5/10)
 +
 +
'''BRUNEL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116649 116649] (1/10)<br />
 +
Brunel's ticket for the great DBParam alignment of 2015. On hold (5/10)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116455 116455] (28/9)<br />
 +
A CMS request to change the xrootd monitoring configs. Did you get round to doing this last week Raul? In progress (29/9)
 +
 +
'''EFDA-JET'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115448 115448] (3/8)<br />
 +
Biomed having trouble tagging the jet CE. The Jet admins think this is the same underlying issue as their other ticket
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115496 115496]. In progress (25/9)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115496 115496] (5/8)<br />
 +
Biomed unable to remove files from the jet SE. There are clues that suggest that some dns oddness is the cause, but it's not clear. In progress (18/9)
 +
 +
'''100IT'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116358 116358] (22/9)<br />
 +
Ticket complaining about a missing image at the site. Some to and fro, the ball is back in the site's court. In progress (2/10)
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=116618 116618] (1/10)<br />
 +
The Tier 1's CMS DBParam ritual ticket. In progress (5/10)
 +
 +
Let me know if I missed ought.
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''T'OTHER VO NAGIOS''']<br />
 +
At time of writing things look a bit rough at QM, Liverpool (just getting over their downtime) and for Sno+ at Sheffield (likely related to their ticket).
 +
 +
 +
Matt's on holiday until the 29th, so he's being replaced with links or any update Jeremy is kind enough to provide.
 +
 +
[http://tinyurl.com/nwgrnys '''CAST YOUR GAZE UPON THE UK'S GGUS TICKETS HERE'''].
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''FEAST YOUR EYES ON THE T'OTHER VO NAGIOS STATUS HERE''']
 +
 +
"Normal" service will resume in October. I'll leave y'all anticipating that lovely review of all the tickets.
 +
 +
'''Monday 14th September 2015, 15.00 BST'''<br />
 +
 +
Yet another brief ticket review - it's been a tad busy! Hopefully normal service will resume, err, next month. My apologies.
 +
 +
Only 20 Open UK tickets this week.
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115805 115805] - This RAL ticket from Sno+ looks like it can be closed, with there not actually being a problem with the site, or a feature that RAL would really want to implement.
 +
 +
Keeping on the Tier 1, does this ticket concerning the FTS3 certificates ([https://ggus.eu/?mode=ticket_info&ticket_id=115290 115290]) have anyone looking into it yet?
 +
 +
Are these two QMUL LHCB tickets duplicates, or related? [https://ggus.eu/?mode=ticket_info&ticket_id=115959 115959] & the more recent [https://ggus.eu/?mode=ticket_info&ticket_id=116153 116153]
 +
 +
This Sussex ticket looks like it could do with someone taking a look - still just "assigned" since the 9th - [https://ggus.eu/?mode=ticket_info&ticket_id=116136 116136]
 +
 +
Finally, an interesting VAC ticket for Oxford - "no space left on device" for atlas jobs - [https://ggus.eu/?mode=ticket_info&ticket_id=116123 116123]. Bit of an odd one!
 +
 +
'''Monday 7th September 2015, 15.00 BST'''<br />
 +
20 Open UK Tickets this week.
 +
Due to Interesting Downtimes (apologies for reusing that pun!) yet another fairly light review. But not much is going on in ticketland.
 +
 +
[http://tinyurl.com/nwgrnys THE GGUS TICKETS IN ALL THEIR GLORY]<br />
 +
The Sno+ ticket [https://ggus.eu/?mode=ticket_info&ticket_id=115805 115805] is interesting: Sno+ are looking at monitoring jobs submitted via WMS through the port 9000 URLs, but the RAL WMS behaves differently from the Glasgow one and doesn't let others with the same roles look at the links. Sno+ are still "developing" their grid infrastructure with the WMS in mind by the looks of it.
 +
 +
The pilot role at Sheffield ticket [https://ggus.eu/?mode=ticket_info&ticket_id=114460 114460] could still do with some attention, or an update.
 +
 +
Bristol had a ticket from CMS that looks interesting - [https://ggus.eu/?mode=ticket_info&ticket_id=115883 115883]. The CMS SAM3 tests are confused by Bristol having an SRM-less SE endpoint. Waiting for reply after things were clarified.
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 T'Other VO Nagios Page]<br />
 +
A few "The CREAM service cannot accept jobs at the moment" style errors at QM, but they're only a few hours old. Otherwise looking alright beyond the usual noise.
 +
 +
Of course with these light reviews I could well be missing something, so feel free to let me know - sites or VO representatives.
 +
 +
'''Monday 24th August 2015, 15.45 BST'''<br />
 +
 +
21 Open UK tickets this week, most being looked at or are understood. There will be no ticket update from Matt next week (1st September) either, as he will be flapping about during a local downtime.
 +
 +
[http://tinyurl.com/nwgrnys YE OLDE GGUS TICKET LINK]<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114460 114460] (gridpp pilots at Sheffield) could do with an update. The Liverpool ticket discussed last week ([https://ggus.eu/?mode=ticket_info&ticket_id=114248 114248]) has received an update from the user saying that the ticket can be closed. As Steve mentioned last week the underlying issue is still very much there, but I don't think this ticket is a suitable banner for us to fight that battle under.
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 T'Other VO Nagios Page]<br />
 +
As usual nothing to see here at time of writing, sites are doing a grand job of working for the monitored VOs.
 +
 +
And that's all folks! See you in Liverpool.
 +
 +
 +
'''Monday 17th August 2015, 15.00 BST'''<br />
 +
29 Open UK Tickets this week.
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114992 114992] (10/7)<br />
 +
It looks like this CMS transfer problem ticket can be closed after a user update last week, which reported enabling multiple streams solved the initial failures. In progress (13/8)
 +
 +
'''EFDA-JET'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115613 115613] (assigned) ''Update - looks like this was a temporary problem, and the ticket can be closed.''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115448 115448] (in progress, but empty)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115496 115496] (in progress, but might not be a site problem).<br />
 +
Jet have 3 biomed tickets, 2 of which are looking a little neglected.
 +
 +
'''CAMBRIDGE'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115655 115655] (12/8)<br />
 +
John rightfully asks why a lengthy but properly scheduled downtime sets off a ROD alarm. Other than that, it'll be worth "on-holding" this ticket whilst the unrighteous red flag fades. In progress (17/8) ''Update - On holded''
 +
 +
'''LIVERPOOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114248 114248] (10/6)<br />
 +
This Sno+ ticket looks like it needs some chasing up, no news for nearly 2 months. In progress (21/7) ''Update - Steve commented on this on TB-SUPPORT''
 +
 +
'''Monday 10th August 2015, 15.00 BST'''<br />
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 T'Other VO Nagios] looks alright at time of writing.
 +
 +
24 Open UK tickets this week.
 +
 +
''Lots of activity on GGUS since yesterday. Most positive, but I see the number of tickets at EFDA-JET increasing - all three from biomed.''
 +
 +
'''Tier 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115512 115512] (5/8)<br />
 +
An interesting ticket - where a banned user is still banned after moving to LHCB from Biomed (no, he's not called Heinz). In Progress (6/8) ''Update - Andrew has asked that the ticket be reassigned to the argus devs as the pepd logs are showing oddness. Waiting for reply now.''
 +
 +
'''Bristol'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115565 115565] (7/8)<br />
 +
Bristol's phedex agents are down, and have been for a few days. I might have dreamt this, but thought that the Bristol Phedex service might not be hosted at Bristol, especially after RALPP had a similar ticket ([https://ggus.eu/?mode=ticket_info&ticket_id=115566 115566]) at the same time.  Assigned (7/8) ''Update - solved''
 +
 +
'''Manchester'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115504 115504] (ROD Ticket) ''Solved'' <br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115399 115399] (Wiki Ticket) ''Still Open'' <br />
 +
Both these tickets look like they can be closed.
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115525 115525] (5/8)<br />
 +
Atlas deletion errors after a disk server fell over - nothing wrong with the ticket handling, but Alessandra brings up a point that always niggles me - the emphasis on the total number of transaction errors and not the number of affected unique files. In progress (8/8)
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114573 114573] (23/6)<br />
 +
LHCB ticket sparked by those IPv6 problems. Still no word from Vladimir; Raja - could you comment? I suspect there's been plenty of room for LHCB jobs in QM's (and everyone else who mainlines atlas jobs) queues this weekend. Waiting for reply (21/7)
 +
 +
'''Monday 3rd August 2015, 14.30 BST'''<br />
 +
 +
23 Open tickets this month, full review time.
 +
 +
'''''Newish this morning'''''<br />
 +
As discussed on TB-SUPPORT, a few sites have been getting "I can't lcg-tag your CE" tickets from Biomed. The Liverpool ticket [https://ggus.eu/?mode=ticket_info&ticket_id=115449 115449], solved and verified, was the flagship of these issues. Brunel ([https://ggus.eu/?mode=ticket_info&ticket_id=115445 115445]) and EFDA-JET ([https://ggus.eu/?mode=ticket_info&ticket_id=115448 115448]) also have tickets about this.
 +
 +
 +
'''Sno+ "glite-wms-job-status warning"''' (3/8)<br />
 +
Glasgow: [https://ggus.eu/?mode=ticket_info&ticket_id=115435 115435]<br />
 +
Tier 1:  [https://ggus.eu/?mode=ticket_info&ticket_id=115434 115434]<br />
 +
Matt M submitted these tickets to Glasgow and the Tier 1 after having trouble with a proportion of Sno+ jobs. Both are being looked at- definitely worth collaborating on this one.
 +
 +
'''Wiki-leak...'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115399 115399](31/7)<br />
 +
Jeremy noticed that the wiki didn't work for him on Friday - but it seems to work for Jeremy, Alessandra and myself now. As Jeremy notes the ticket can be closed, but out of interest did anyone else spot any problems? In progress (3/8)
 +
 +
'''Spare the ROD...'''<br />
 +
'''LIVERPOOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115433 115433] (3/8)<br />
 +
Some CE problems noticed on the dashboard for the Liver-lads - who might be in mourning. Assigned (3/8) ''Update - aaannd Solved by upping gridftp max connections''
 +
 +
'''RALPP & UCL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114764 114764]<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114851 114851]<br />
 +
Both of these are "availability" alarm tickets, on-holding until they clear. I hope RALPP managed to get a re-computation for their unfair failures (IGTF-1.65 problems on ARC).
 +
 +
'''Sno+'d Under'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115387 115387] (30/7)<br />
 +
I'm uncertain if this Sno+ ticket, probably somewhat related to Matt M's recent thread on TB-SUPPORT and concerning xrootd access, is meant for the Tier 1 or RALPP. Assigned (3/8)
 +
 +
'''First Tier Problems.'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115417 115417] (2/8)<br />
 +
LHCB spotted a number of nodes with cvmfs problems at the Tier 1, which the RAL team had already jumped on and repaired this morning. They wonder if the problem persists. Waiting for reply (3/8)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115290 115290] (28/7)<br />
 +
An FTS problem requiring some special CA magic to solve, but the current CA-wizard isn't about. On hold (29/7)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113836 113836] (20/5)<br />
 +
Glue 1 vs Glue 2 queue mismatches. Work on perfecting cluster publishing for ARC CEs is ongoing, but the ticket could either do with an update or on-holding. In progress (24/6)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114992 114992] (10/7)<br />
 +
CMS transfers failing between RAL and, err, TAMU in the US. Assigned to RAL, where Brian has been investigating and Andrew has posed a good question, asking if the user has considered managing the transfers with FTS. Quiet on the user side. In progress (21/7)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108944 108944] (1/10/2014)<br />
 +
One of the tickets from the before times, about CMS AA access tests. It has become a long and confusing saga, but Gareth rescued it with a handy summary of the issue in his last update. How goes the battle? In progress (17/7)
 +
 +
'''Oxford Squid is red.'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115230 115230] (24/7)<br />
 +
Which might be the colour Ewan's seeing right now! The ticket is reopened, with a comment from Alessandra that the current recommendation is to allow all CERN addresses, asking if this is something Oxford could do. Reopened (3/8) ''Update - solved after a squid restart. It's not the end of this ordeal, but Ewan would like to tackle it in a different arena than an Oxford ticket.''
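For reference, "allow all CERN addresses" in squid terms is just an ACL on the CERN source ranges - a sketch is below; the ranges shown are illustrative, so check the current Frontier/CVMFS squid documentation for the definitive list.
<pre>
# squid.conf sketch - permit the CERN address ranges (illustrative values).
acl cern_src src 128.142.0.0/16 188.184.0.0/15
http_access allow cern_src
</pre>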
 +
 +
'''Mavaricks and Gooses - GridPP Pilot roles.'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114485 114485] '''Bristol'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114460 114460] '''Sheffield'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114442 114442] '''RALPP'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114441 114441] '''RHUL'''<br />
 +
Daniela hopped right back into the pilot seat after getting back from her holidays. Bristol and RALPP are looking good, Sheffield and RHUL are still in the Danger Zone - RHUL in particular were having troubles with argus and could do with some working configs from elsewhere to compare and contrast with their own.<br />
 +
''Update - RHUL are looking better, just a few queue permission tweaks to go by the looks of it.''
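On the "working configs to compare with" front, the relevant bit of an argus setup is usually just a permit rule for the pilot FQAN in the simplified policy language. A rough sketch from memory (not RHUL's or anyone else's actual policy - the FQAN and obligation below are the usual ones, but do check them against your own pap-admin output):
<pre>
resource ".*" {
    obligation "http://glite.org/xacml/obligation/local-environment-map" {
    }
    action ".*" {
        rule permit { pfqan="/gridpp/Role=pilot" }
    }
}
</pre>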
 +
 +
 +
'''My shame - tarball glexec'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95303 95303] '''ECDF'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95299 95299] '''Lancaster'''<br />
 +
The tarball glexec tickets. Actually this is likely to become a defunct (or at least different) problem at Edinburgh with their SL7 move. Lancaster has a plan - we plan to deploy *something* (amazing plan there Matt) during our next big reinstall in September. Between now and then I have a test CE, cluster and most importantly some time.
 +
 +
'''Pot-luck tickets (or those I couldn't group).'''
 +
 +
'''Durham'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114381 114381] (16/6)<br />
 +
A tiny fraction of jobs publishing 0 cores used. Looks to be a slurm oddity. Oliver upgraded their CEs to ARC5 last week and hopes this has fixed things. Fingers crossed! In progress (29/7)
 +
 +
'''Lancaster'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=100566 100566] (27/1/2014)<br />
 +
My other shame - Lancaster's poor perfsonar performance. It's being worked on. On Hold (should be back in progress soon) (30/7)
 +
 +
'''Liverpool'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114248 114248] (21/7)<br />
 +
Sno+ production problems at Liverpool, probably due to a lack of space in the shared area. Things are back in Sno+'s court, with the submitter consulting the Sno+ gurus (I think). In progress (21/7)
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114573 114573] (23/6)<br />
 +
LHCB job submission problems due to the known-about dual-stacking problems. Waiting for input from LHCB for a while now; things look okay at QM now, but at last check LHCB jobs still weren't running for some reason. Waiting for reply (21/7)
 +
 +
That's all folks!
 +
 +
'''Monday 27th July 2015, 16.10 BST'''<br />
 +
 +
Only 20 UK Tickets this week, and many are on hold for summer holidays. I pruned a few tickets, but none are striking me as needing urgent action, so this will be brief.
 +
 +
[http://tinyurl.com/nwgrnys Mandatory UK GGUS Link]<br />
 +
Nothing to see here really, but maybe I'm missing something? Nit-picking I see:
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114381 114381] (publishing problems at Durham) could still do with an update - not sure if work is progressing offline on the issue.
 +
 +
The Snoplus ticket [https://ggus.eu/?mode=ticket_info&ticket_id=115165 115165] looks like it might be of interest for others - in it Matt M asks about tape-functionality in gfal2 tools. Brian has updated the ticket clueing us in about gfal-xattr.
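For anyone else wondering how to poke at tape status from the gfal2 side, a minimal sketch (the SURL is a placeholder, not a real Sno+ path; user.status is the extended attribute the SRM plugin exposes for file locality):
<pre>
# List a file's extended attributes, then read its locality.
gfal-xattr srm://some-se.example.ac.uk/castor/example.ac.uk/prod/snoplus/somefile
gfal-xattr srm://some-se.example.ac.uk/castor/example.ac.uk/prod/snoplus/somefile user.status
# Expect something like ONLINE, NEARLINE or ONLINE_AND_NEARLINE for tape-backed storage.
</pre>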
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 UK T'other VO Nagios]<br />
 +
A few failures here at time of writing - although only one at Brunel seems to be more than a few hours old (dc2-grid-66.brunel.ac.uk is failing pheno CE tests).
 +
 +
Let me know if I missed ought!
 +
 +
'''Monday 20th July 2015, 14.30 BST'''<br />
 +
27 Open UK Tickets this week.
 +
 +
'''Brunel'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115113 115113] (17/7)<br />
 +
This Brunel Ops ticket (otherwise okay) is a good reminder that when removing CEs from the GOCDB for your site make sure to get all the services, or else you just might end up still monitored (and thus end up ticketed!). Reopened (can probably be closed soon) (20/7) ''Update - ticket solved''
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114006 114006] (31/5)<br />
 +
This ticket about problems with Brunel's accounting figures was looking promisingly close to being solved (at least to my layman's eyes) 3 weeks ago. Any word offline? In progress (30/6)
 +
 +
'''Lancaster'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114845 114845] (6/7)<br />
 +
LHCB jobs were failing at Lancaster a fortnight ago, but things should have been fixed quite promptly. Are they still broken? Waiting for reply (14/7)
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114573 114573] (23/6)<br />
 +
A similar case for this LHCB ticket for QM (part of their ongoing dual-stack saga). Waiting for reply (13/7)
 +
 +
Note: The related issue [https://ggus.eu/index.php?mode=ticket_info&ticket_id=115017 115017] has been "solved" pending further testing.
 +
 +
'''Sheffield'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114649 114649] (26/6)<br />
 +
This Sno+ ticket has been in "Waiting for Reply" for a little while with no word from the user (who isn't Matt M). Could we poke Sno+ through another channel about this? Waiting for reply (6/7)
 +
 +
'''Liverpool'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114248 114248] (10/6)<br />
 +
We could do with finding out from Sno+ the state of play at Liverpool too, although we might have to wait until Steve is back from his hols later this week to field any replies. In progress (17/6)
 +
 +
'''Durham'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114381 114381] (16/6)<br />
 +
Ticket concerning the small percentage of Durham jobs that aren't publishing their core count (probably a slurm oddity). Now that Oliver's back from holiday has he had time to look at this? On Hold (19/6)
 +
 +
'''Tier 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113836 113836] (20/5)<br />
 +
I suspect that whilst the work described chugs along in the background we should consider on-holding this ticket. In progress (24/6)
 +
 +
'''UCL'''
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=114746 114746] (30/6)<br />
 +
Ben has been battling getting his DPM working, and spotted an interesting problem where SELinux was blocking the httpd from accessing mysql. It's nice to see someone not just switching SELinux off (like I have a habit of doing...). In progress (20/7)
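Not necessarily what Ben did, but the usual non-nuclear fix for that class of denial is to flip the relevant SELinux boolean rather than turning SELinux off wholesale; a sketch:
<pre>
# Confirm the denial in the audit log, then let httpd talk to the database.
ausearch -m avc -ts recent | grep httpd
setsebool -P httpd_can_network_connect_db on
</pre>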
 +
 +
'''ECDF'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=115003 115003] (12/7)<br />
 +
Andy is having some problems on a test SE at ECDF - he seems to be suffering a series of unfortunate errors. Maybe the storage group could help? In progress (17/7)
 +
 +
'''GridPP Pilot Roles.'''<br />
 +
Bristol are ready for testing, Govind discovered a possible bug in argus that needs a bit more testing at RHUL. Things seem quiet at RALPP and Sheffield, and Brunel too.
 +
 +
''Supplemental - In the region multicore publishing ticket ([https://ggus.eu/?mode=ticket_info&ticket_id=114233 114233]) only Oxford have a CE still appearing to publish 0 cores - but I thought this CE was Old-Yeller'ed?''
 +
 +
'''Tuesday 14th July 2015, 9.30 BST'''
 +
 +
Lazy update today, due to some fun and games at Lancaster yesterday.
 +
 +
[http://tinyurl.com/nwgrnys UK GGUS Tickets]
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 Other VO Nagios]
 +
 +
'''Tickets that pop:'''
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114952 114952] and [https://ggus.eu/?mode=ticket_info&ticket_id=114951 114951] are both atlas frontier tickets at RALPP and Oxford, both have been reopened - although the underlying issues seem different. The Oxford ticket is similar to one at RAL ([https://ggus.eu/?mode=ticket_info&ticket_id=114957 114957]), which looks to be caused by an unannounced change in IP for some important atlas squids (AIUI - speed reading the tickets this morning).
 +
 +
'''QM IPv6 Woes'''<br />
 +
Followers of the atlas uk lists will have noticed some heroic attempts to diagnose and repair problems at QM which appear to be someone else's fault.
 +
LHCB's ticket to QM on the matter: [https://ggus.eu/?mode=ticket_info&ticket_id=114573 114573]<br />
 +
Dan's ticket concerning the "rogue routes": [https://ggus.eu/index.php?mode=ticket_info&ticket_id=115017 115017]
 +
 +
'''GridPP Pilot Roles'''<br />
 +
Durham, Bristol, Sheffield, Brunel and RHUL still have open tickets about this. Bristol are working on it, as are Durham - Oliver's ready for their setup to be tested again (Puppet overwrote his last changes!). Not much recent news from the other three.
 +
 +
That's all from me folks, let me know if I missed ought!
 +
 +
'''Monday 6th July 2015, 14.00 BST''' <br />
 +
30 Open UK Tickets this month. Looking at them all!
 +
 +
'''NGI'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114233 114233] (10/6)<br />
 +
The UK not publishing core counts at all sites. Some progress, but at last check John G couldn't see a change for Oxford or Glasgow. In progress (30/6) ''Update - Glasgow seems to be okay after de-creaming, checking the July list we have t2ce6 at Oxford, ce3 and ce4 at Durham (see their ticket) and cetest02 at IC (but that node has test in its hostname!).''
 +
 +
'''RALPP'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114442 114442] (18/6)<br />
 +
Gridpp Pilot role ticket. Accounts need to be created, but no word for a few weeks. In progress (19/6)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114764 114764] (1/7)<br />
 +
Ticket tracking (false) availability issues, created to appease COD - the problem was caused by a broken CA rpm release for Arc CEs. Kashif has created a counter-ticket, [https://ggus.eu/index.php?mode=ticket_info&ticket_id=114742 114742]. Gordon's sage advice is to submit a recalculation request once the issue is fixed. Assigned (1/7)
 +
 +
'''BRISTOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114485 114485] (19/6)<br />
 +
Bristol's gridpp pilot role ticket. No news, could do with an update really. In progress (22/6)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114426 114426] (18/6)<br />
 +
CMS AAA reading test problems. The Bristol admins have transferred data to their new shiny SE and have asked CMS to test again. No word since. Waiting for reply (30/6)
 +
 +
'''EDINBURGH'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95303 95303] (1/7/13...)<br />
 +
Tarball glexec ticket, now 2 years old. After a really promising burst the last 6 weeks haven't seen any progress, due to a lot of other "normal" tarball work taking up the time. Sorry! On hold (18/5)
 +
 +
'''DURHAM'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114536 114536] (22/6)<br />
 +
Durham's gridpp pilot role ticket. Not acknowledged yet - is Oliver back? Assigned (22/6)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114765 114765] (1/7)<br />
 +
See RALPP ticket 114764. Assigned (1/7)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114727 114727] (30/6)<br />
 +
Catalin ticketed that a number of SW_DIR variables at Durham are still pointing to the old school .gridpp.ac.uk cvmfs space. Assigned (30/6)
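For anyone wanting to do a quick audit of their own WNs, a little sketch along these lines (run inside a grid job) would flag any VO_<VO>_SW_DIR variables still pointing at the old .gridpp.ac.uk namespace - the variable naming convention is standard, but which VOs matter is site-specific:
<pre>
#!/usr/bin/env python
# Sketch: run inside a grid job to flag VO software-area variables that
# still point at the retired *.gridpp.ac.uk cvmfs namespace.
# The VO_<VONAME>_SW_DIR convention is standard; the string being checked
# for is just the old namespace mentioned in the ticket.
import os

stale = sorted(
    (name, value)
    for name, value in os.environ.items()
    if name.startswith("VO_") and name.endswith("_SW_DIR")
    and ".gridpp.ac.uk" in value
)

if stale:
    print("SW_DIR variables still on the old cvmfs space:")
    for name, value in stale:
        print("  %s = %s" % (name, value))
else:
    print("no VO_*_SW_DIR variables point at .gridpp.ac.uk")
</pre>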
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114381 114381] (16/6)<br />
 +
John G ticketed Durham over a small percentage of jobs being published as "zero core". Looks like a SLURM timeout problem, although a fix isn't obvious. Put on the back burner whilst Oliver is on holiday. On Hold (19/6)
 +
 +
'''SHEFFIELD'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114649 114649] (26/6)<br />
 +
A ticket from a Sno+ user about not being able to access software using the Sheffield CEs. Acknowledged but no news. In progress (26/6) ''Update - Elena can't find anything wrong, cvmfs seems to be working fine. Perhaps a problem with the environment?''
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114460 114460] (18/6)<br />
 +
Sheffield's gridpp pilot role ticket. Did you get round to rolling them out? In progress (19/6)
 +
 +
'''MANCHESTER'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114444 114444] (18/6)<br />
 +
LHCB ticket concerning the DPM's SRM not returning checksum information. On hold whilst a related ticket is being looked at ([https://ggus.eu/index.php?mode=ticket_info&ticket_id=111403 111403]). On Hold (22/6)
 +
 +
'''LIVERPOOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114248 114248] (10/6)<br />
 +
Another Sno+ ticket, about grid production jobs failing at Liverpool. AIUI caused by Sno+ running out of space on the shared pool. At last check Steve posted the usage information for Sno+ but no word since (and Steve's off on his hols). In progress (17/6)
 +
 +
'''LANCASTER'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114845 114845] (6/7)<br />
 +
LHCB pilots failing at Lancaster. Looks like a simple node misconfiguration, hopefully fixed, waiting to see if it is. On hold (6/7)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95299 95299] (1/7/2013)<br />
 +
glexec ticket - see Edinburgh description. On hold (15/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=100566 100566] (27/1)<br />
 +
Bad bandwidth performance at Lancaster. Hoping that IPv6 will shake things up a bit so pushing that. On hold (18/5)
 +
 +
'''UCL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114746 114746] (30/6)<br />
 +
SRM-put failures ROD ticket. No news at all. Assigned (30/6)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114851 114851] (6/7)<br />
 +
Low availability ROD ticket, related to above. Assigned (6/7)
 +
 +
'''RHUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114441 114441] (18/6)<br />
 +
Another GridPP pilot role ticket. Pilots rolled out, but something isn't quite right and they're not working - Govind is looking again. In progress (6/7)
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114573 114573] (23/6)<br />
 +
LHCB ticket about two out of three QM CEs not responding for them. Dan spotted that the broken CEs were dual-stacked and the working one wasn't. The ticket seems to have trailed off into some confusion over who needs to do some testing where. I agree with Dan that whoever does it needs to be someone with LHCB credentials! The waters still seem muddied. In progress (1/7)
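For what it's worth, a crude way for anyone (LHCB credentials or not) to check whether a dual-stacked CE is reachable over both address families is something like the sketch below - hostname and port are placeholders, and this only tests raw connectivity, not the middleware on top:
<pre>
#!/usr/bin/env python
# Sketch: try a plain TCP connect to a CE over every address family its
# hostname resolves to, to separate "the IPv6 path is broken" from
# "the service is down". Hostname and port are placeholders.
import socket

HOST = "ce.example.ac.uk"   # placeholder CE hostname
PORT = 8443                 # placeholder service port

for family, socktype, proto, _, addr in socket.getaddrinfo(
        HOST, PORT, socket.AF_UNSPEC, socket.SOCK_STREAM):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    sock = socket.socket(family, socktype, proto)
    sock.settimeout(10)
    try:
        sock.connect(addr)
        print("%s %s: connect OK" % (label, addr[0]))
    except socket.error as err:
        print("%s %s: FAILED (%s)" % (label, addr[0], err))
    finally:
        sock.close()
</pre>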
 +
 +
'''IC'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114737 114737] (30/6)<br />
 +
The IC voms wasn't updating properly, due to what I can only infer from the ticket to be "SSL/mysql madness". Simon and Robert have been heroically battling this one - it's a good read. On hold (3/7)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114379 114379] (16/6)<br />
 +
Sam's ticket about SE support in Dirac. Sam will shortly try testing things out on the new Dirac to see how it fares. In progress (6/7)
 +
 +
'''BRUNEL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114447 114447] (18/6)<br />
 +
Brunel's gridpp pilot ticket. Being worked on, with one CE with the pilots enabled. In progress (26/6)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114006 114006] (31/5)<br />
 +
A ticket from APEL, about Brunel under-reporting the number of jobs they are doing. Turned out to be a problem with Arc, which Raul upgraded to the fixed 5.0 version. The APEL team deleted the sync records, but no word since. In progress (30/6)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114850 114850] (6/7)<br />
 +
Another APEL ticket, likely the fallout of the previous one - it looks like GAP publishing has been left on for the Brunel CREAM CEs. Assigned (6/7) ''Update - solved''
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114786 114786] (2/7)<br />
 +
Low availability ticket - see RALPP ticket 114764 - probably could do with On holding. In progress (2/7) ''Update - Onholded''
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113910 113910] (26/5)<br />
 +
Sno+ data staging problems. Brian gave some advice on how the large VOs do data staging from tape, and has asked if Sno+ still has problems. Matt M might still be on leave though. Waiting for reply (23/6)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108944 108944] (1/10/14)<br />
 +
CMS AAA problems, which eventually brought to light a problem with super-hot datasets that was alleviated (I think). Despite an update to castor that improved performance, the last batch of tests didn't show improved results. No news since. In progress (17/6)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113836 113836] (20/5)<br />
 +
Glue mismatch problems at RAL. Working on getting "many-Arcs" to correctly publish. In progress (24/6)
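While that's ongoing, a quick way to see what a CE or site BDII is actually publishing is an ldapsearch against the GLUE2 branch - a minimal sketch below; the BDII hostname is a placeholder and the attributes are just the ones I'd glance at first:
<pre>
#!/usr/bin/env python
# Sketch: dump a few GLUE2 ComputingShare attributes from a site BDII.
# Port 2170 and base "o=glue" are the usual GLUE2 conventions; the BDII
# hostname is a placeholder and the attribute list is just a starting point.
import subprocess

BDII = "site-bdii.example.ac.uk"   # placeholder

cmd = [
    "ldapsearch", "-x", "-LLL",
    "-H", "ldap://%s:2170" % BDII,
    "-b", "o=glue",
    "(objectClass=GLUE2ComputingShare)",
    "GLUE2ComputingShareRunningJobs",
    "GLUE2ComputingShareWaitingJobs",
    "GLUE2ComputingShareMaxRunningJobs",
]
output = subprocess.check_output(cmd)
print(output.decode() if isinstance(output, bytes) else output)
</pre>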
 +
 +
'''Monday 29th June 2015, 14.30 BST'''<br />
 +
 +
Looking at the [https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''"Other VO" Nagios''']. <br />
 +
Things look generally alright, but Durham look like they need to update their CA rpms - that might have to wait until Oliver is back from leave.
 +
 +
'''Tarballs:'''<br />
 +
I don't think this affects many, but there was a ticket to produce a new version of the WN tarball (which is done): [https://ggus.eu/index.php?mode=ticket_info&ticket_id=114574 114574]<br />
 +
Although AFAICS there is no urgent need to upgrade tarball WNs.
 +
 +
26 UK Tickets, although not many stand out.
 +
 +
'''Gridpp VO Pilot tickets:'''<br />
 +
Largely doing alright. With Oliver away the Durham ticket hasn't been looked at yet. Sheffield and Bristol's tickets could do with an update (or on-holding if there's going to be a delay). The RHUL ticket has been reopened as their deployment of the pilot roles hasn't quite worked out.
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114573 114573] (23/6)<br />
 +
LHCB having trouble with two out of three QM CEs. Dan notes that the two "broken" CEs have been recently dual-stacked, and asks if this could be the problem. The answer is a resounding "maybe", and Raja asks if the problems could be duplicated by others using lxplus. Waiting for reply (24/6)
 +
 +
'''IMPERIAL/DIRAC'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114379 114379] (16/6)<br />
 +
Sam's ticket trying to get SE support with Dirac...er, spruced up. Daniela has asked if the tests can be redone with the "new" dirac. Waiting for reply (22/6)
 +
 +
Let me know if I missed any tickets.
 +
'''Monday 22nd June 2015, 14.00 BST'''
 +
 +
35 Open UK Tickets this week (!!!)
 +
 +
'''GridPP Pilot Role'''<br />
 +
A dozen of them are from Daniela (who painstakingly submitted them all) concerning getting the gridpp (and other) pilot role enabled on the site in question's CEs.
 +
 +
An example of one of these tickets is:<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=114440  114440] (Lancaster's smug solved ticket).
 +
 +
Ticketed sites are: Durham, Bristol, Cambridge, Glasgow, ECDF (who are also having general gridpp VO support problems), EFDA-JET (looking solved), Oxford, Liverpool, Sheffield, Brunel, RALPP and RHUL. Most tickets are being worked on fine, but the Bristol and Liverpool ones were still just in an "assigned" state at time of writing. <br />
 +
''Update - good progress on this, just one ticket left "assigned". Cambridge are done, as are JET (ticket needs to be closed). Oxford and Manchester are ready to have their new setups tried out, with Oxford kindly road-testing glexec for the pilot roles. Good stuff.''
 +
 +
'''Core Count Publishing'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114233 114233] (10/6)<br />
 +
Of the sites mentioned in this ticket (Durham[1], IC, Liverpool, Glasgow, Oxford) who *hasn't* had a go at changing their core count publishing? I know Oxford have. Daniela had a pertinent question about publishing for VMs, which John answered. In progress (17/6)
 +
 +
[1] Durham have another ticket on this which may explain their lack of core count publishing:
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114381 114381] (16/6)
 +
 +
'''DIRAC'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114379 114379] (16/6)<br />
 +
Sam S opened this ticket after having trouble accessing the majority of SEs over Dirac, following some discussion around this last week. Sam acknowledges that this could be a site problem, not a DIRAC problem, but you gotta start somewhere (he worded that point more eloquently). Daniela has posted her latest and greatest DIRAC setup gubbins for Sam to try out. Another, unrelated, point is the names missing from Sam's list - for example I'm pretty sure Lancaster should support gridpp VO storage but I've forgotten to roll it out! Waiting for reply (22/6)
 +
 +
'''LIVERPOOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114248 114248] (10/6)<br />
 +
Final ticket today, and another one discussed last week in the storage meeting. Steve's explanation of why (and how) Sno+ would need to start using space tokens was fantastically well worded, in a way that won't spook easily startled users. David is digesting the information, but it will likely need to wait for Matt M's return before we'll see progress. In progress (16/6)
 +
 +
'''MANCHESTER'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114444 114444](18/6)<br />
 +
I told a pork pie when I said that was the last ticket - this one caught my eye. A ticket from lhcb over files not having their checksums stored on Manchester's DPM. A link was given to another ticket at CBPF for a similar issue which got the DPM devs involved ([https://ggus.eu/index.php?mode=ticket_info&ticket_id=111403 111403]) - although Andrew McNab was already subscribed to the ticket. In progress (19/6)
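As a stopgap while checksums are missing from the SRM, it's easy enough to compute the ADLER32 of a local copy of a file and compare it with what the catalogue thinks - a small sketch below (the zero-padded 8-hex-digit form is, as far as I know, what the experiments quote):
<pre>
#!/usr/bin/env python
# Sketch: compute an ADLER32 checksum in the form the experiments usually
# quote it (8 lower-case hex digits, zero padded), for comparing a local
# copy of a file against whatever the catalogue or SRM reports.
import sys
import zlib

def adler32(path, blocksize=1024 * 1024):
    value = 1  # standard adler32 seed
    with open(path, "rb") as handle:
        while True:
            block = handle.read(blocksize)
            if not block:
                break
            value = zlib.adler32(block, value)
    return "%08x" % (value & 0xffffffff)

if __name__ == "__main__":
    print(adler32(sys.argv[1]))
</pre>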
 +
 +
'''Monday 15th June 2015, 14.15 BST'''<br />
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 Other VO nagios:]<br />
 +
ce05.esc.qmul.ac.uk and hepgrid11.ph.liv.ac.uk seem to be having a spot of bother for multiple VOs, and hepgrid2.ph.liv.ac.uk seems to be starting to have trouble too.
 +
 +
19 Open UK Tickets this week.
 +
 +
'''NGI'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114233 114233] (10/6)<br />
 +
John Gordon ticketed the NGI (as well as others) about some sites in the UK not publishing core counts with their APEL numbers (or more precisely, having submission hosts at those sites not publishing). Following the link it looks like Imperial, Liverpool, Durham, Glasgow and Oxford are on this list, so I've listed the submission hosts reporting "0" core jobs below to help people clear up their rogues! If you've only fixed things in the last fortnight you'd still show up on this list. In progress (15/6)
 +
 +
"0" core job submission hosts:<br />
 +
ce3.dur.scotgrid.ac.uk<br />
 +
ce4.dur.scotgrid.ac.uk<br />
 +
cetest02.grid.hep.ph.ic.ac.uk<br />
 +
hepgrid5.ph.liv.ac.uk<br />
 +
hepgrid6.ph.liv.ac.uk<br />
 +
hepgrid97.ph.liv.ac.uk<br />
 +
svr009.gla.scotgrid.ac.uk<br />
 +
t2ce06.physics.ox.ac.uk
 +
 +
'''Tier 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113914 113914] (26/5)<br />
 +
Sno+ file copying ticket. Matt M is away on his hols, but Dave Auty has taken over his duties and reports that this problem seems to have gone away - it can probably be closed. In progress (9/6)
 +
 +
'''Liverpool'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114248 114248] (10/6)<br />
 +
Another Sno+ ticket here from David concerning job failures. Nothing wrong with the ticket handling, but I thought that David's errors in submitting test jobs were worth documenting, as they were very understandable. David has since asked if the Sno+ job failures are linked to Sno+ nagios test failures at Liverpool. In progress (12/6)
 +
 +
'''Imperial'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114157 114157] (8/6)<br />
 +
After Simon and Daniela have cleared up the atlas dark data and expanded their Space Tokens using the space freed up there still seems to be some confusion in Rucio, disagreeing with the SRM numbers. In progress (10/6)
 +
 +
'''Oxford'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114208 114208] (9/6)<br />
 +
Oxford being ticketed for UKI-SOUTHGRID-OX-HEP_IPV6TEST failing connection tests, tests for which Oxford should not be getting ticketed afaics. I remember this being mentioned in the Thursday cloud meeting, but I'm ashamed to say I wasn't paying attention. Were any conclusions drawn/decisions made? In progress (10/6)
 +
 +
'''Manchester'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114153 114153] (7/6)<br />
 +
Atlas transfer failures from Manchester. Errors are still occurring as of yesterday, any news? In progress (14/6)
 +
 +
 +
'''Monday 8th June 2015, 15.00 BST'''<br />
 +
21 Open Tickets this week.
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113914 113914] (26/5)<br />
 +
Sno+ had problems at the Tier 1 where jobs failed whilst uploading data, believed to be due to an incorrect VOInfoPath. Attempts to replicate the issue have failed, and the VOInfoPath advertised is correct. Very confusing, as I assume it all worked at some point before! In progress (2/6)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113910 113910] (26/5)<br />
 +
Another Sno+ ticket, concerning lcg-cp timeouts whilst data-staging from tape. Matt M has asked for advice on the best practice for doing this, or if Sno+ would be better off just upping their timeouts. Brian has given some advice on using the "bringonline" command, but is himself unsure of the best way of seeing which files are currently in a VO's cache. Not much news since. In progress (28/5)
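On the bring-online front, a rough sketch of what staging a file via the gfal2 Python bindings looks like as I understand them - the method names, argument order and return values are from memory and the SURL is a placeholder, so please check against the gfal2 documentation before trusting it:
<pre>
#!/usr/bin/env python
# Sketch: ask the SRM to stage a file from tape with the gfal2 Python
# bindings before copying it. The call names, argument order and return
# values here are my recollection of the API, not gospel, and the SURL
# is a placeholder - check the gfal2 docs before relying on this.
import time
import gfal2

SURL = "srm://srm-host.example.ac.uk/path/to/file"  # placeholder
PINTIME = 3600     # seconds to keep the file pinned on disk once staged
TIMEOUT = 28800    # how long we are prepared to wait for the stage overall

ctx = gfal2.creat_context()
status, token = ctx.bring_online(SURL, PINTIME, TIMEOUT, True)  # async request
while status == 0:  # my reading: 0 means "request queued, not online yet"
    time.sleep(300)
    status = ctx.bring_online_poll(SURL, token)
print("bring-online finished with status %s - file should now be on disk" % status)
</pre>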
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114004 114004] (31/5)<br />
 +
Atlas transfers fail due to the "bring-online" timeout being exceeded. Brian spotted a problem with file timestamps mismatching, but no news on this ticket since. In progress (1/6)
 +
 +
'''BRUNEL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114006 114006] (31/5)<br />
 +
APEL accounting oddness at Brunel, noticed by the APEL team. After much to-and-fro-ing John noticed that multiple CEs were using the same SubmitHost, and thus overwriting each other's sync records. Something to watch out for. In progress (7/6)
 +
 +
'''IMPERIAL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=114157 114157] (8/6)<br />
 +
There's been some debate on the atlas lists about this ticket, a classic "not enough space at the site" ticket. Rising above the indignation at being ticketed for this, Daniela has offered a couple of TB to give some space, and pointed out that IC have some atlas data outside space tokens, and that this could be used to expand the tokens if cleaned up. Waiting for reply (8/6)
 +
 +
 +
'''Tuesday 26th May 2015'''<br />
 +
Matt's on leave until the 8th of June. But he's replaceable with handy links... 23 tickets today:
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''Other VO Nagios''']<br />
 +
 +
[http://tinyurl.com/nwgrnys '''UK NGI GGUS tickets''']
 +
 +
'''Monday 18th May 2015, 14.30 BST'''<br />
 +
Full review this week.
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''Other VO Nagios''']<br />
 +
At time of writing I see problems with test jobs at Brunel for pheno and Liverpool for a number of VOs (see Sno+ ticket for probable cause and fix at Liverpool).
 +
 +
22 Open UK Tickets this week. Going site-by-site:
 +
 +
'''APEL/NGI'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113473 113473] (4/5)<br />
 +
Missing accounting data for April for some sites. Raul is discussing things for Brunel in the ticket, although they have republished. I think it's only ECDF left to republish their April data. In progress (16/5)
 +
 +
'''OXFORD'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113482 113482] (26/4)<br />
 +
Loss of accounting data for Oxford needing an APEL republish. The Oxford guys republished, but there is some confusion with the resulting numbers. Discussion is ongoing; John G is currently looking at the records. In progress (14/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113650 113650] (11/5)<br />
 +
CMS glideins failing at Oxford. The original problem was with a config tweak being left out of the cvmfs setup, but the ticket has been reopened citing problems persisting on the ARC CE (the CREAM appears to be fixed). Reopened (16/5)
 +
 +
'''GLASGOW'''
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113095 113095] (17/4)<br />
 +
ROD ticket about batch system BDII failures, left open to avoid unnecessary ticket filing. Gareth noted that the full migration to ARC and HTCondor, which should see the end of these issues, will hopefully be completed by the end of June. On Hold (12/5)
 +
 +
'''ECDF'''
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95303 95303] (31/7/13)<br />
 +
Somehow left this one out of the e-mail update. Edinburgh's glexec ticket, dependent on the tarball. I put in my tuppence worth today with my tarball hat on. On hold (18/5)
 +
 +
'''SHEFFIELD'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113769 113769] (18/5)<br />
 +
LHCB see a cvmfs problem at Sheffield. Elena has probably fixed the problem (restarted the sssd), just waiting to see if it all pans out. In progress (18/5)
 +
 +
'''MANCHESTER'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113744 113744] (15/5)<br />
 +
For the VOMS rather than the site: Jens' request for the creation of the DiRAC VO, vo.dirac.ac.uk. In progress (18/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113692 113692] (13/5)<br />
 +
A request from pheno to add support for their new cvmfs area at Manchester and, as I understand it, to support them in a new "form" (pheno.egi.eu). In progress (13/5)
 +
 +
'''LIVERPOOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113742 113742] (15/5)<br />
 +
Sno+ noticed their nagios failures at Liverpool. Rob reckons this was a problem with the DPM BDII service certificate not being updated (that's bitten me too), and fixed things this morning. Let's see how that goes. In progress (18/5)
 +
 +
'''LANCASTER'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95299 95299] (1/7/13!)<br />
 +
Lancaster's vintage glexec ticket. An update on this - after having a roundtuit session last week I was building glexec for different paths. It still needs some testing to make sure it works properly. There definitely won't, however, be a one-size-fits-all tarball solution. On hold (15/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=100566 100566] (27/1/14)<br />
 +
Only the crustiest old tickets for us at Lancaster! Poor perfsonar performance. Sadly didn't get a roundtuit on this one - we're pushing to get these nodes dual stacked, as Ewan had pointed out that it would be interesting to see if IPv6 tests also saw this issue. On hold (18/5)
 +
 +
'''UCL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113721 113721] (14/5)<br />
 +
The only UCL ticket, this is an EGI "low availability" ticket. However Daniela notes that the plots are on the rise, so things are looking alright. Probably want to "On Hold" it but otherwise not much to be done. In progress (14/5)
 +
 +
'''IMPERIAL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113743 113743] (15/5)<br />
 +
A ticket from Durham concerning the Imperial DIRAC instance's settings for their site. Daniela hopes to get it fixed soon. In progress (15/5)
 +
 +
'''100IT'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112948 112948] (10/4)<br />
 +
CA certificate update at 100IT leading to a discussion of other authentication based failures. David has asked for voms information after posting his configs. In progress (13/5)
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113035 113035] (14/4)<br />
 +
Ticket tracking the decommissioning of the Tier 1 CREAM CEs. I think things are just about done now, this ticket can soon be closed. In progress (11/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=109694 109694] (28/10/14)<br />
 +
Sno+ gfal-copy ticket. Brian reports that the Tier 1 is upgrading gfal2 on their WNs, and notes that there's a lot of active debugging work going on in the area. As he eloquently puts it "situation is quite fluid". In progress (13/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108944 108944] (1/10/14)<br />
 +
CMS AAA tests failing at the Tier 1. There's been a lot of work on this, deploying then trying to get the new xrootd director configured. New problems have cropped up, and are under investigation. In progress (11/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112721 112721] (28/3)<br />
 +
Atlas transfer failures ("failed to get source file size"). Tracked to a odd double transfer error, possibly introduced in one of the recent "upgrades". Brian has been declaring these files as bad, and a workaround or solution is being thought about. In progress (14/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113705 113705] (13/5)<br />
 +
Atlas transfer failures from RAL tape. Checksum failures, which Brian tracked down to the checksums not being of a type Castor supports. Brian has asked if this can be changed at the CERN FTS or in rucio. Waiting for reply (14/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113748 113748] (16/5)<br />
 +
Another atlas transfer ticket, but as the error indicates there's no space left at the Brunel space token being transferred to, Elena has noted that this isn't a site problem and told the submitter to put in a JIRA ticket instead. Waiting for reply, but probably can be just closed (16/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112866 112866] (7/4)<br />
 +
Lots of cms job failures at RAL. This has been traced to some super-hot files, and mitigation is being looked into. A candidate for perhaps On Holding, depending on the time frame of a workaround. In progress (13/5)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113320 113320] (27/4)<br />
 +
CMS data transfer issues. I'm not actually too sure what's going on. There are files that need invalidating, which seems to be the root of the evil befalling transfers. The issue is being actively worked on though. In progress (18/5)
 +
 +
'''Monday 11th May 2015, 14.10 BST'''<br />
 +
22 Open UK Tickets this week.
 +
 +
'''TIER 1'''<br />
 +
There are a few tickets at the Tier 1 that are set "In Progress" but haven't received an update yet this month:<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108944 108944] (CMS AAA Tests, 30/4)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112721 112721] (Atlas Transfer problems, 16/4)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=109694 109694] (SNO+ gfal copy trouble, 15/4)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112866 112866] (CMS job failures, 7/4)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112819 112819] (SNO+ arcsync troubles, 20/4)
 +
 +
'''Other Tier 1 Tickets''' (sorry to be picking on you guys!)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111699 111699] (10/2)<br />
 +
Atlas glexec hammercloud test jobs at the Tier 1. It appears to be working, but a batch of test jobs failed because they couldn't find the "mkgltempdir" utility on some nodes ("slot1_5@lcg1742.gridpp.rl.ac.uk" and "slot1_4@lcg1739.gridpp.rl.ac.uk"). In progress (4/5)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=113320 113320] (27/4)<br />
 +
Maybe repeating what Daniela is going to say in the CMS update - trouble with CMS data transfers within RAL. It's under investigation, but it looks like the files in question will need to be invalidated - even if it's just to paint a clearer picture. In progress (10/5)
 +
 +
'''APEL REPUBLISHING'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113473 113473] <br />
 +
At last update Brunel, Liverpool, Edinburgh, Birmingham and Oxford need to republish still. Oxford have their own ticket about it due to complications ([https://ggus.eu/?mode=ticket_info&ticket_id=113482 113482]).
 +
 +
'''UCL Tickets''' - Ben is starting to move to close these, some are going to be "unsolved".
 +
 +
'''GLASGOW'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113095 113095] (17/4)<br />
 +
Andrew asks if the timeframe for the move to Condor can be added to this ticket, for the ROD team's information. On Hold (7/4)
 +
 +
'''100IT'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112948 112948] (10/4)<br />
 +
No news on this 100IT ticket for a while. In progress (27/4)
 +
 +
'''Friday 1st May'''<br />
 +
The Bank Holiday weekend might muck up plans for a Ticket review this week. Just in case, some links!
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 Other VO Nagios page.]<br />
 +
 +
[http://tinyurl.com/l784bbg UK GGUS Tickets]
 +
 +
Hope you all have a nice weekend!
 +
 +
A quick check of the [https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 Other VO Nagios] page.
 +
 +
26 Open UK tickets this week.
 +
 +
'''ITWO Decommissioning'''<br />
 +
Three of the tickets are to the VOMS sites (Manchester, Oxford, IC), concerning the decommissioning of the ITWO VO. Just an FYI to y'all.
 +
 +
'''ECDF'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113293 113293] (26/4)<br />
 +
There was an APEL problem last month where a lot of sites needed to republish their data for the month. I think Edinburgh are the only UK site that suffered this problem, but another FYI ticket. Assigned (26/4) '' And solved''
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113181 113181](21/4)<br />
 +
Atlas production jobs not running at ECDF. Andy noticed that analysis jobs were running fine, and believes that this might be a problem scheduling pilots in time. Perhaps a multicore issue if this is only affecting production jobs. In progress (22/4) ''Update - solved''
 +
 +
'''ATLAS GLEXEC HAMMERCLOUD PROBLEMS'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111699 111699] (TIER 1)<br />
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=111703 111703] (RALPP)<br />
 +
It was discovered that there was a problem in the test code, so the ball is very much in atlas' court for this one. The problem has been fixed and the tests are being rebuilt and resubmitted.
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112721 112721] (28/3)<br />
 +
ATLAS FTS failures to RAL. A rucio issue causing double-transfers has been discovered ([https://its.cern.ch/jira/browse/ATLDDMOPS-4939 here]), which would explain the behaviour seen. No news since this revelation. In progress (16/4)
 +
 +
There are a number of other Tier 1 tickets that could do with either an update or On Holding
 +
 +
'''Monday 20th April 2015, 14.30 BST'''<br />
 +
24 Open UK tickets this week, only a light review. ''Update - down to 20 open tickets as of this morning''
 +
 +
'''NGI/TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113150 113150] (20/4)<br />
 +
Fresh in - the NGI has been ticketed to change the regional VO from emi.argus to ngi.argus in the gocdb. Seems a bit pedantic, but hey! I assigned it to the NGI ops, and notified RAL as keepers of the regional argus. Assigned (20/4) ''Update - solved''
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113035 113035] (14/4)<br />
 +
Just for people's interest, the ticket tracking the decommissioning of the last of the RAL CREAM CEs. In progress (14/4)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112819 112819] (2/4)<br />
 +
A SNO+ ticket I must have somehow missed last week, concerning SNO+'s manual renewing of proxies on ARC machines. Matt M has noticed that ArcSync occasionally hangs rather than timing out smoothly (although he later notes that he doesn't see the initial problems working from a different network). I'm thinking that this should be redirected at the arc devs, but I don't think they have a GGUS support group (I could be wrong, I'm well behind on the ARC curve). In progress (7/4)
 +
 +
'''EDINBURGH'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113110 113110] (17/4)<br />
 +
Looks like this atlas low transfer efficiency ticket can be closed. Waiting for reply (20/4) ''Update - solved''
 +
 +
'''GLASGOW'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=113095 113095] (17/4)<br />
 +
ROD ticket for some BDII misreporting at Glasgow. The botheration seems to be ephemeral in nature, the blunders passing with the abating of their batch system's burden. This ticket can probably be solved. In progress (17/4)
 +
 +
 +
'''Monday 13th April 2015, 14.00 BST'''<br />
 +
24 Open tickets this week - going over all of them this week, site by site.
 +
 +
''Fresh in this morning - [https://ggus.eu/?mode=ticket_info&ticket_id=113010 113010] and [https://ggus.eu/?mode=ticket_info&ticket_id=113011 113011] - Sno+ tickets concerning the RAL and Glasgow WMSes not updating job statuses.''
 +
 +
'''RALPP'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111703 111703](11/2)<br />
 +
Atlas glexec hammercloud tests failing. There's been a lot of waiting on atlas to build new HC jobs. The most recent exchange (delayed due to Easter) was asking about SELinux - but no news since the 1st. In progress (1/4)
 +
 +
'''BIRMINGHAM'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112875 112875](7/4)<br />
 +
Low availability ROD ticket. Availability is crawling back up, just need it to go green. On hold (13/4)
 +
 +
'''GLASGOW'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112967 112967](10/4)<br />
 +
Another ROD ticket for bdii errors at Glasgow. Gareth has been doing everything right investigating this. Kashif recommended ticketing the midmon unit, but Gareth has spotted that the errors correspond to high load on their ARC CE - so it might be a site problem after all - Gareth asks for clarification. Waiting for reply (13/4)
 +
 +
'''EDINBURGH'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95303 95303] (1/7/13)<br />
 +
Tarball glexec ticket. No news (sorry). End of April I believe was the "deadline" I set for having this made. On Hold (9/3)
 +
 +
'''LANCASTER'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=100566 100566] (27/1/14)<br />
 +
Lancaster's poor perfsonar performance. I don't quite believe what I was seeing with the tests I performed, so I'm aiming to rerun them. On hold (13/4)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95299 95299] (1/7/13)<br />
 +
Lancaster's tarball glexec ticket. Same as ECDF. On hold (9/3)
 +
 +
'''BRUNEL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112966 112966] (13/3)<br />
 +
A ROD cream job submit ticket, freshly assigned this afternoon. It's a bit mean of me to bring notice to it. Assigned (13/4) ''And POW, Raul closed this after kicking torque into shape - solved''
 +
 +
'''100IT'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112948 112948] (10/4)<br />
 +
100IT needed to upgrade to the latest CA release. They've done this, but there are still authentication problems. In progress (13/4)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108356 108356] (10/9/14)<br />
 +
Deploying vmcatcher at 100IT. After David's questions fell on deaf ears for a while, it has been advised that the ticket be closed as this issue will be dealt with elsewhere. Whether or not it is to be "solved" or "unsolved" is open to debate! In progress (can possibly be closed) (13/4)
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108944 108944] (1/10/14)<br />
 +
CMS AAA tests failing at RAL. After a lot of work and new xrootd redirectors problems persist. It's looking to be a problem that needs the CASTOR and/or xrootd devs to look at. In progress (30/3)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112713 112713] (27/3)<br />
 +
CMS asking to clean up the "unmerged area". Andrew conjured up a list of files and asked if they could be deleted - CMS responded with a "yes please then close the ticket". Has the deed been done? In progress (31/3)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=109694 109694] (28/10/14)<br />
 +
The Sno+ gfal copy ticket. Matt M still sees gfal-copy hang for files at RAL when he uses the GUID (SURL works). A Castor oddity perhaps? Matt asks a question about what problems like this (coupled with the move away from lcg tools) will mean for VOs that rely on the LFC. In progress (31/3)
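As a possible stopgap for the GUID-versus-SURL behaviour, one could resolve the GUID to its replicas through the LFC first and then copy by SURL - a hedged sketch below, assuming lcg-lr is still on the UI and that LFC_HOST points at the right catalogue (the GUID, destination and LFC host are all placeholders):
<pre>
#!/usr/bin/env python
# Sketch: work around copies hanging on a GUID by resolving the GUID to its
# replica SURLs via the LFC (lcg-lr) and then copying the first SURL with
# gfal-copy. Assumes lcg-lr is installed and LFC_HOST is appropriate;
# the GUID, LFC host and destination are placeholders.
import os
import subprocess

GUID = "guid:00000000-0000-0000-0000-000000000000"   # placeholder GUID
DEST = "file:///tmp/staged_copy"                      # placeholder destination
LFC = "lfc.example.ac.uk"                             # placeholder LFC host

env = dict(os.environ, LFC_HOST=LFC)
output = subprocess.check_output(["lcg-lr", GUID], env=env)
replicas = output.decode().split() if isinstance(output, bytes) else output.split()
if not replicas:
    raise SystemExit("no replicas registered for %s" % GUID)

print("copying %s" % replicas[0])
subprocess.check_call(["gfal-copy", replicas[0], DEST])
</pre>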
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112977 112977] (10/3)<br />
 +
CMS high job failure rate at RAL. Related to 112896 (below) - the jobs all want that file! In progress (13/3)
 +
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=112896 112896] (9/4)<br />
 +
CMS dataset access problems - caused by over a million access attempts on a single file over an 18 hour period. Andrew L comments that CMS needs to have a think about how they access pileup datasets. In progress (9/4)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111699 111699] (10/2)<br />
 +
Tier 1 counterpart to 111703. A new HC stress test was submitted near the end of March, but no news on how it did. In progress (23/3)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112866 112866] (7/4)<br />
 +
A different "lots of CMS job failures" ticket. Again a "hot file" seems to be the root cause. In progress (7/4)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112721 112721] (28/3)<br />
 +
An atlas file access ticket, seemingly caused by some odd FTS behaviour. No answers to Shaun's question about this odd occurrence or much noise at all till today. Waiting for reply (13/4)
 +
 +
'''UCL'''<br />
 +
UCL has 6 tickets - 4 just "assigned". I'll just list them in the interests of brevity.<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112371 112371] (ROD low availability, On Hold)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112841 112841] (atlas 0% transfer efficiency, assigned)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112873 112873] (ROD srm put failures, assigned)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95298 95298] (glexec ticket, on hold)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112722 112722] (atlas checksum timeouts, in progress)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112966 112966] (ROD job submit failures, assigned)
 +
 +
'''Tuesday 7th April'''
 +
* 20 open tickets. [http://tinyurl.com/l784bbg Link to GGUS].
 +
 +
'''Monday 23rd March 2015, 15.30 GMT'''<br />
 +
19 Open tickets this week.
 +
 +
'''BIRMINGHAM'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112550 112550] (23/3)<br />
 +
A ticket fresh off the ROD dashboard - the Birmingham CREAMs aren't being matched ("BrokerHelper: no compatible resources"). Matt W has double checked their setup and can't spot anything wrong - they've been running "normal" atlas/lhcb etc jobs fine over the last few weeks. Any advice appreciated. In progress (23/3)
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112495 112495] (21/3)<br />
 +
MICE problems running jobs at RAL, which Andrew L discovered coincided with WMS problems that he fixed. Probably should be in "Waiting for Reply/Seeing if the problem's evaporated". In progress (23/3)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112350 112350] (14/3)<br />
 +
The cause of this Sno+ ticket, about a recent user not being able to access files due to not being in the gridmap, has been discovered. As Robert F sagely pointed out, the latest version of the mkgridmap rpm is required to talk to the voms server. Just waiting on the time to get it updated now. In progress (17/3)
 +
 +
'''100IT'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108356 108356] (10/9/14)<br />
 +
This 100IT ticket is still waiting for a reply (since mid-January). The question needs to be answered by someone familiar with the technologies and terminologies? Is anyone up on vmcatcher? Anyone know what other channel to pass David's query onto? Waiting for reply (15/1)
 +
 +
'''Monday 16th March 2015, 15.30 GMT'''<br />
 +
 +
16 Open UK tickets this week. Half Red, Half Green.
 +
 +
'''RALPP and TIER 1 glexec HC tickets'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111703 111703] (RALPP)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111699 111699] (TIER 1)<br />
 +
No news on these tickets since it was expected that a new stable HC job would be released last Tuesday. All very quiet.
 +
 +
'''OXFORD'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112011 112011](25/2)<br />
 +
Similar for this CMS glexec ticket - no news after Kashif asked for some more information way back. Waiting for reply (27/2)
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=109694 109694](28/10/14)<br />
 +
The Sno+ gfal copying ticket. A lot of people are working on this, and attempts to recreate the problems seem to occasionally be devolving into "did we get this complicated command right?". At some point it might be necessary to get the gfal devs involved (are there gfal devs?). Waiting for reply (11/3)
 +
 +
Also there's the poor 100IT ticket, still waiting for a reply. The JET ticket also needs wrapping up - I put a reminder in at the end of it.
 +
 +
 +
'''Monday 9th March 2015, 15.00 GMT'''<br />
 +
From last week's crusty ticket round up:
 +
 +
'''Tier 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=109694 109694] - 28/10/14<br />
 +
Matt M has managed to reclaim his tickets after a certificate change orphaned his old ones. Progress has resumed. Duncan has asked Matt to retry his failing tests with a simple copy to local disk example.
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108944 108944] - 1/10/14<br />
 +
CMS AAA access at RAL. Andrew posed a question to the CMS xroot experts last week - if you have their details it might be a good idea to involve them in the ticket.
 +
 +
'''100IT'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108356 108356] - 10/9/14<br />
 +
This VMCatcher ticket is still stuck waiting for a reply. Deafening silence for our 100IT colleagues.
 +
 +
'''EFDA-JET'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=97485 97485] - 21/9/13<br />
 +
The Jet LHCB ticket is in the state of being wrapped up. LHCB have been removed from the local configs, and the site has been removed from LHCB's. I believe that this ticket can be terminated.
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=110353 110353] - 25/11/14<br />
 +
Dan's managed to get webdav working on one SE, but not t'other. Very strange, but Dan is investigating (see also [https://ggus.eu/?mode=ticket_info&ticket_id=111942 111942]).
 +
 +
No movement on the three glexec tickets (none expected on the two tarball ones in the last week though). The Lancaster perfsonar ticket is still waiting on another batch of local tweaks (and I still need to make sense of what I'm seeing). Matt RB closed Sussex's perfsonar ticket though - nice one.
 +
 +
'''The "Normal" tickets:'''
 +
 +
Atlas gLexec Hammercloud failures (RALPP and Tier 1)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111699 111699] (Tier 1)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111703 111703] (RALPP)<br />
 +
These tests were waiting on a new stable job release being made - this has been delayed (hopefully out tomorrow).
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111856 111856] (19/2)<br />
 +
This LHCB ticket about stalled jobs looks like it can be closed (LHCB no longer see a problem). ''Update - set to solved, the jobs were being killed for using too much memory.''
 +
 +
'''OXFORD'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=112011 112011] (25/2)<br />
 +
A CMS user saw glexec failures on some nodes - Kashif asked for some more information but there has been no reply. I'd consider giving the user till the end of the week then closing the ticket if there's still no word. Waiting for reply (27/2)
 +
 +
'''Tuesday 3rd March'''
 +
* [http://tinyurl.com/nwgrnys A link to GGUS open tickets] for checking status directly.
 +
 +
Concentrating on pre-2015 tickets this week in an attempt to Spring Clean the UK ggus presence. I will review these again next week - can people please take a look at these tickets if they're owned by them (or if they think they can help!).
 +
 +
'''SUSSEX - 26/11/14'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=110389 110389]<br />
 +
This is a perfsonar ticket - the initial request (reinstalling the perfsonar node) has been done a while ago but things weren't quite right. Matt RB did some soothing to this last week and asks if he's missed anything - I put it to waiting for reply this morning. Waiting for reply (24/2)
 +
 +
'''QMUL - 25/11/14'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=110353 110353]<br />
 +
Atlas wanting https access on QM's SE. Dan's been working on this nicely, carefully testing each stage of his rollout. The end is in sight here. In progress (17/2)
 +
 +
'''TIER 1 - 28/10/14'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=109694 109694]<br />
 +
This is a SNO+ ticket about getting gfal tools working for the Tier 1 - with the new version out Brian has tested it correctly (and I saw a related thread on lcg-rollout) - but no word from Sno+. Who is wrangling the other VOs in these post-Walker times?  Waiting for reply (24/2)
 +
 +
'''TIER 1 - 1/10/14'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108944 108944]<br />
 +
A CMS ticket about AAA tests at the Tier 1. This ticket is being actively worked on, with a new xrootd redirector at RAL and problems with the EU redirectors mucking things up. No problems that I can see. Waiting for reply (2/3)
 +
 +
'''100IT - 10/9/14'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108356 108356]<br />
 +
Getting VMcatcher stuff to work at 100IT. This ticket seems to keep stalling due to lack of documentation or replies from the submitters. Waiting for reply (19/1)
 +
 +
'''LANCASTER - 27/1/14'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=100566 100566]<br />
 +
Lancaster's poor perfsonar performance. Being poked and prodded on and off over the last year, but the problems remain a mystery - Ewan's lending of an iperf endpoint has helped out greatly though; waiting on yet another network tweak. On Hold (23/2)
 +
 +
'''EFDA-JET - 21/9/13'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=97485 97485]<br />
 +
LHCB job failures at EFDA-JET. The causes of this remain a mystery, and it is the first ticket on my "to be set to unsolved" list.
 +
 +
'''UCL - 1/7/13'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95298 95298]<br />
 +
UCL's glexec ticket. Ben's been working hard at this recently, but keeps hitting show stoppers - the latest being a performance problem possibly due to the VM he's running argus on. On hold (19/2)
 +
 +
'''ECDF and LANCASTER - 1/7/13'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95303 95303] (ECDF)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95299 95299] (Lancaster)<br />
 +
glexec for the tarball. This is *still* waiting on the tarball glexec, which is again waiting on me, which is waiting on me magicking some extra tarball development time. Will be reviewed by the end of March.
 +
 +
 +
'''Monday 23rd February 2015, 15.00 GMT'''<br />
 +
Only 15 Open UK tickets this week. Feel free to bring up any ticket-based issues of your own on this quiet week.
 +
 +
'''RALPP'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111703 111703](11/2)<br />
 +
This atlas glexec hammercloud ticket is looking a little quiet - no update for a while. If it's continuing offline or waiting on input could it at least be put On Hold? In progress (11/2)
 +
(The Tier-1 version of this ticket, 111699, seems to be chugging along fine - there might be useful snippets in there).
 +
 +
'''100IT'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111333 111333](22/1)<br />
 +
The Cloud accounting probe ticket was reopened, asking if 100IT have ticketed apel support (I assume contacting them via other means would work too), as otherwise the new cloud accounting won't be properly republished. Reopened (20/2)
 +
 +
 +
'''IMPERIAL''' (but not really their issue)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111872 111872](20/2)<br />
 +
Tom opened a ticket after another cern@school user had troubles using the IC SE - there have been some problems with the newer versions of the dirac UI. Sometimes it's better to go Vintage! Although after trying, Simon and Daniela haven't been able to reproduce the failure - perhaps something's up with the user's UI? Waiting for reply (23/2)
 +
 +
'''SUSSEX'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=110389 110389](26/11/14)<br />
 +
Sussex's Perfsonar ticket. I know Matt RB has put the ticket On Hold and is very busy - is there any news/anything we can do to help? On Hold (21/1)
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=109694 109694](28/10/14)<br />
 +
Sno+ gfal copy problems at RAL. Brian informs us that the latest version of the gfal tools works for him and has asked if they work for Matt M. and Co. Did you get these packages out of epel/epel-testing or somewhere else Brian? Waiting for reply (18/2)
 +
 +
'''Monday 16th February 2015, 14.30 GMT'''<br />
 +
Only 19 open UK tickets today.
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111120 111120](12/1)<br />
 +
An atlas ticket concerning transfer failures between RAL and BNL. Brian mentioned last week that the lack of recent failures is due to atlas not attempting to transfer any older data recently. Perhaps this could do with being put into the ticket (and the ticket being put On Hold, or prodded some more)? Waiting for reply (29/1)
 +
 +
Also
 +
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=111800 111800] (17/2)<br />
 +
ARC CE issues at RAL detected.
 +
 +
'''RALPP'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111703 111703](11/2)<br />
 +
Atlas running glexec hammerclouds - having trouble at RALPP (and RAL, see 111699). The glexec experts have gotten involved on this one, and asked to take a peek at a proxy - not sure about anyone else, but I'd feel a tad uncomfortable sharing proxies, even with known and trusted experts as in this case. Am I being overly paranoid? Either way the ticket has gone a bit quiet. In progress (11/2)
 +
 +
'''100IT'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108356 108356] & [https://ggus.eu/?mode=ticket_info&ticket_id=111333 111333]<br />
 +
Both the 100IT tickets are in Waiting for reply - the oldest one for quite a while - David asked a question a while back and no answer. The newest one asks if the 100IT logs made it to apel safely - I think what David has to do is submit a ticket with this question to the apel support team - have I got the right end of the stick?
 +
 +
'''Biomed Tickets at Manchester and Imperial'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111356 111356] & [https://ggus.eu/?mode=ticket_info&ticket_id=111357 111357]<br />
 +
FYI There's a note at the bottom of both of these tickets that the version of CREAM that should fix this has been delayed until the end of February(ish).<br />
 +
''Update - I read those updates wrong - the cream update has been released and these tickets have been (perhaps erroneously) closed.''
 +
 +
 +
 +
'''Monday 9th February 2015, 15.00 GMT'''
 +
 +
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 Other VO Nagios Results]<br />
 +
At the time of writing the only site showing red that isn't suffering an understood problem was RALPP, with org.nordugrid.ARC-CE-submit and SRM-submit test failures for gridpp, pheno, t2k and southgrid for both its CEs and its SE. The failures are between 1 and 12 hours old, so it doesn't seem to be a persistent failure, but it does seem to be quite consistent. They all seem to be failing with "Job submission failed... arcsub exited with code 256: ...ERROR: Failed to connect to XXXX(IPv4):443 .... Job submission failed, no more possible targets". Anyone seen something like this before?
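If anyone wants a quick sanity check from their own UI, something like the sketch below at least separates "can't reach port 443 at all" from "ARC itself is unhappy" - the hostname is a placeholder and it deliberately skips certificate verification:
<pre>
#!/usr/bin/env python
# Sketch: crude reachability probe for an ARC CE's HTTPS interface - can we
# open a TCP connection and complete a TLS handshake on port 443?
# No certificate verification is done; this is a network check, not a
# security check. The hostname is a placeholder.
import socket
import ssl

HOST = "arc-ce.example.ac.uk"   # placeholder CE hostname
PORT = 443

try:
    sock = socket.create_connection((HOST, PORT), timeout=15)
    tls = ssl.wrap_socket(sock)  # old-style API, fine for a quick probe
    print("TLS handshake OK, peer certificate is %d bytes"
          % len(tls.getpeercert(binary_form=True)))
    tls.close()
except (socket.error, ssl.SSLError) as err:
    print("FAILED: %s" % err)
</pre>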
 +
 +
Only 20 Open UK tickets this week.
 +
 +
'''Biomed tickets:'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111356 111356] (Manchester)<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111357 111357] (Imperial)<br />
 +
Biomed have linked both these tickets as children of 110636, being worked on by the cream blah team. AFAIKS no sign of Cream 1.16.5 just yet.
 +
 +
'''TIER 1'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111347 111347] (22/1)<br />
 +
CMS consistency checks for January 2015. It looks like everything that was asked of RAL has been done by RAL, so hopefully this can be successfully closed. In progress (3/2)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111120 111120] (12/1)<br />
 +
Another ticket, this time concerning a period of Atlas transfer failures between RAL and BNL, that looks like it can be closed as the failures seem to have stopped (and might well have been at the BNL end). Waiting for reply (22/1)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108944 108944] (1/10/14)<br />
 +
CMS AAA test failures at RAL. Federica can't connect to the new xrootd service according to the error messages. No news for a while. In progress (29/1)
 +
 +
'''100IT'''
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=108356 108356]<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111333 111333]<br />
 +
Both of these 100IT tickets are looking a bit crusty - the first is waiting for advice, the second was just put "In progress".
 +
 +
'''QMUL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=110353 110353] (25/11/14)
 +
Dan has set up se02.esc.qmul.ac.uk to test out the latest https-accessible version of storm for dteam and atlas. As a cherry on top this node is also IPv6 enabled. I'm not sure if Dan wants others in the UK to "give it a go"? In progress (6/2)
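If anyone does fancy giving it a go, a rough sketch of poking an https-enabled StoRM endpoint with a grid certificate from Python is below - the URL, port and path are placeholders (and requests needs an unencrypted copy of your key, so handle with care):
<pre>
#!/usr/bin/env python
# Sketch: simple authenticated GET against an https/webdav storage endpoint
# using a grid user certificate. The URL, port and path are placeholders;
# requests needs to be installed and the key must be unencrypted.
import requests

URL = "https://se02.esc.qmul.ac.uk:8443/dteam/"        # placeholder URL/port/path
CERT = ("/home/user/.globus/usercert.pem",              # user certificate
        "/home/user/.globus/userkey-nopass.pem")        # unencrypted key copy
CA_DIR = "/etc/grid-security/certificates"              # standard IGTF CA directory

response = requests.get(URL, cert=CERT, verify=CA_DIR)
print(response.status_code)
print(response.text[:500])
</pre>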
 +
 +
'''LANCASTER'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=100566 100566] (27/1/14)<br />
 +
(Blatantly scrounging for advice) Trying to figure out why Lancaster's perfsonar is under-performing. Ewan kindly gave us access to an iperf endpoint and it's been very useful in characterising some of the weirdness - although I'm still confused. Ewan also gave us a bunch of suggestions for testing that have been useful - next stop, window sizes. If anyone else wants to throw advice my way, all wisdom donations are thankfully accepted. My advice for others is to be careful trying to connect to the default iperf port on a working DPM pool node.... In Progress (9/2)
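For the record, the window-size sweep I have in mind is roughly the sketch below - it assumes classic iperf (v2) on both ends, the target host is a placeholder, and the window values are just a first guess:
<pre>
#!/usr/bin/env python
# Sketch: run iperf against a friendly endpoint with a range of TCP window
# sizes and keep the output for comparison. Assumes classic iperf (v2) with
# its -c/-w/-t options; the target host is a placeholder.
import subprocess

TARGET = "iperf.example.ac.uk"   # placeholder endpoint
WINDOWS = ["64K", "256K", "1M", "4M"]

for window in WINDOWS:
    print("=== TCP window %s ===" % window)
    output = subprocess.check_output(
        ["iperf", "-c", TARGET, "-w", window, "-t", "30"])
    print(output.decode() if isinstance(output, bytes) else output)
</pre>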
 +
 +
'''Monday 2nd February 2015, 14.00 GMT'''<br />
 +
22 Open UK tickets this month.
 +
 +
'''SUSSEX'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=110389 110389] (26/11/14)<br />
 +
A perfsonar ticket for Sussex. Their perfsonar has been reinstalled, but needs soothing. Matt has informed us that this might have to wait a few weeks due to other issues. On Hold (21/1)
 +
 +
'''RALPP'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=110536 110536] (2/12/14)<br />
 +
MICE job failures at RALPP - it looked like they were dying due to running out of memory. The queues have been tweaked to give MICE more, but no word from the MICE if this has solved the problem. Waiting for reply (12/1)
 +
 +
'''BRISTOL'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=110365 110365] (25/11/14)<br />
 +
Another perfsonar ticket. Again the node is reinstalled, just not quite working right. Winnie is waiting for news from the other sites in a similar boat. In progress (maybe On Hold it?) (20/1)
 +
 +
'''EDINBURGH'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111118 111118] (12/1)<br />
 +
ECDF "low availability" ticket - just waiting for the silly alarm to clear. Daniela submitted a ticket about this foolish alarm a while ago - [https://ggus.eu/?mode=ticket_info&ticket_id=107689 107689]. On Hold (19/1)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=95303 95303] (1/7/13)<br />
 +
glexec tarball ticket. With my tarball hat on - still no positive news on this front - it's beginning to look like this can't be done but we're having one last go. Sorry! On Hold (19/12)
 +
 +
'''MANCHESTER'''<br />
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=110225 110225] (18/11/14)<br />
 +
Change of VO Manager for helios-vo.eu. It looks like this ticket is being held up at the user end a lot. I'm not sure there's anything we can do as it involves outside CAs. On Hold (20/1)
 +
 +
[https://ggus.eu/?mode=ticket_info&ticket_id=111356 111356] (23/1)<br />
 +
One of Manchester's CEs not working for biomed, due to problems with the new CREAM/old WMS communication. Alessandra gave biomed some sage advice, but I suspect this ticket will need to be prodded soon to get a response from biomed (who I agree should use a newer WMS and close it). On Hold (26/1)
 +
 +
'''LANCASTER'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=111547 111547] (2/2)<br />
I'm reporting on a ticket that I submitted to myself today. I'm not sure what that says about the world. Anyway - a ticket to track the decommissioning of one of Lancaster's CEs, as we try to do it all proper like. On Hold (2/2)

[https://ggus.eu/?mode=ticket_info&ticket_id=100566 100566] (21/1/14)<br />
Lancaster's perfsonar ticket, which I sadly let reach its first birthday. I've been prodding this offline - does anyone have the address for a regular, open iperf endpoint I could borrow? On Hold (9/1)

[https://ggus.eu/?mode=ticket_info&ticket_id=95299 95299] (1/7/13)<br />
Lancaster's tarball glexec ticket, same as the ECDF one. On hold (26/1)

'''UCL'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=95299 95299] (1/7/13)<br />
UCL's glexec ticket. They've been having trouble getting it to behave, and at last check Ben was off ill - probably due to dealing with glexec :-) On Hold (20/1)

'''QMUL'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=110353 110353] (25/11/14)<br />
Atlas asking for QM's storage to be made available via https. Waiting on a production-ready STORM that can provide this - Dan is trying it out on his testbed se02.esc.qmul.ac.uk, which still needs tweaking. In progress (28/1)

'''IMPERIAL'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=111357 111357] (23/1)<br />
One of the IC CEs not working for biomed. Similar to the Manchester ticket, Daniela points to ticket 110635 and is waiting on an EMI release to fix it (due out imminently AIUI). On Hold (28/1)

'''EFDA-JET'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=97485 97485] (21/9/13)<br />
Jet's LHCB job failure ticket. I'm afraid I haven't been able to chase this up (partly due to only ever remembering on the first Monday of the month) - there's been no news for a while. On Hold (1/10/14)

'''100IT'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=111333 111333] (22/1)<br />
A ticket to 100IT and the NGI to get the cloud accounting probe upgraded. I notified 100IT, but forgot to reassign the ticket - thanks to Jeremy for doing it. Assigned (2/2)

[https://ggus.eu/?mode=ticket_info&ticket_id=108356 108356] (10/9/14)<br />
Getting VMcatcher working at 100IT. David from 100IT has asked for some answers on which "glancepush" to use, but no reply for a while. Waiting for reply (19/1)

'''TIER 1'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=111477 111477] (29/1)<br />
CMS would like to run some staging tests to warm up for Run2. The Tier 1 warned CMS of today's outage and they're happy to proceed tomorrow (the 3rd) - I think they'd like a response. In progress (30/1)

[https://ggus.eu/?mode=ticket_info&ticket_id=107935 107935] (27/8/14)<br />
A ticket regarding inconsistent BDII and SRM storage numbers. Waiting on a fix from the developers regarding read-only disk accounting (I think); Brian is still on the case. Stephen B let us know that Maria the ticket submitter is on maternity leave, and asks in her stead if the numbers are expected to align now. On hold (28/1)

[https://ggus.eu/?mode=ticket_info&ticket_id=111120 111120] (12/1)<br />
An atlas ticket about a large number of data transfer errors seen between RAL and BNL. Brian reckoned that this was due to shallow checksums on the old data being transferred, but had trouble looking at the BNL FTS. Regardless, the ADCoS shifter hadn't seen any errors for a week and suggests the ticket can be closed. Waiting for reply (29/1)

[https://ggus.eu/?mode=ticket_info&ticket_id=108944 108944] (1/10/14)<br />
CMS AAA test problems at RAL. After setting up a new xrootd box the test failures have changed in nature, but sadly they're still failures. In progress (29/1)

[https://ggus.eu/?mode=ticket_info&ticket_id=111347 111347] (22/1)<br />
CMS Consistency Check for RAL, January 2015 edition. Filelists were generated, orphan files were identified, then purged. Just need to know what CMS want to do next. Waiting for reply (26/1)

[https://ggus.eu/?mode=ticket_info&ticket_id=109694 109694] (28/10/14)<br />
Sno+ ticket concerning gfal tool problems, waiting on the new release to come out (middle of this month I believe). If you don't want to wait that long then I believe the 2.8 gfal2 tools can be found in the fts3 repo at last check. On hold (20/1)

'''Monday 26th January 2015, 14.15 GMT'''<br />
''Back after being forgotten about by me:''<br />
[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''Other VO Nagios Status:''']

At the time of writing I see:<br />
'''Imperial:''' gridpp VO job submission errors (but only 34 minutes old so probably naught to worry about).<br />
'''Brunel:''' gridpp VO jobs aborted (one of these is 94 days old, so might be something to worry about).<br />
'''Lancaster:''' pheno failures (I can't see what's wrong, but this CE only has 10 days left to live).<br />
'''Sussex:''' snoplus failures (but I think Sussex is in downtime).<br />
'''RALPP:''' A number of failures across a number of CEs, all a few hours old. An SE problem?<br />
'''Sheffield:''' gridpp VO job submission failure, but only 6 hours old.
And of course the srm-$VONAME failures at the Tier 1, which are caused by incompatibility between the tests and Castor AIUI. Things are generally looking good.

22 Open UK Tickets this week.<br />
'''NGI/100IT'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=111333 111333] (22/1)<br />
The NGI has been asked to upgrade the cloud accounting probe, and then notify our (only at the moment) cloud site to republish their accounting. Not entirely sure what this entails or who this falls on, so I assigned it to NGI-OPERATIONS (and also noticed that 100IT isn't on the "notify site" list - odd). Assigned (22/1)

'''TIER 1'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=108944 108944] (1/10/14)<br />
CMS AAA test failures. Andrew Lahiff reported last week that the Tier 1 is building a replacement xrootd box, which is currently being prepared. If that will take a while can the ticket be put on hold? In progress (19/1)

'''QMUL'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=110353 110353] (25/11/14)<br />
An atlas ticket, asking for http access at QMUL. The QM chaps were waiting on a production-ready Storm that could handle this, and are preparing to test one out. This is another ticket that looks like it might need to be put On Hold (will leave that up to you chaps - there's a big difference between "slow and steady" progress and "no progress for a while"). In progress (21/1)

'''RHUL'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=111355 111355] (23/1)<br />
A dteam ticket concerning http access to RHUL's SE. Although the initial observation about the SE certificate being expired was incorrect (the expiry date was reported as 5/1/15, which to be fair I would read as the 5th of January and not the 1st of May!) there still is some underlying problem here with intermittent test failures. This ticket also raises the question of under what context these tests are being conducted - anyone know, or shall we ask the submitter? In progress (26/1)

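On the certificate confusion - the notAfter field in the certificate itself is unambiguous, so a quick look like the sketch below sidesteps any 5/1/15-style date arguments. The hostname, port and CA directory are placeholders/assumptions, not details from the ticket:

<pre>
# Print a storage endpoint's host certificate expiry date straight from the
# certificate, avoiding ambiguous d/m/y vs m/d/y renderings.
# The hostname, port and CA path below are placeholders.
import socket
import ssl

host, port = "se1.pp.rhul.ac.uk", 8443

# Validate against the usual grid CA directory so getpeercert() is populated.
ctx = ssl.create_default_context(capath="/etc/grid-security/certificates")
ctx.check_hostname = False   # only after the expiry date, not a full hostname check

with socket.create_connection((host, port), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

print("notAfter:", cert["notAfter"])   # e.g. 'May  1 12:00:00 2015 GMT' - unambiguous
</pre>
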
'''BIOMED PROBLEMS:'''<br />
'''Manchester:''' [https://ggus.eu/?mode=ticket_info&ticket_id=111356 111356] (23/1)<br />
'''Imperial:''' [https://ggus.eu/?mode=ticket_info&ticket_id=111357 111357] (23/1)<br />
Biomed are having job problems, looking to be caused by using crusty old WMSes to communicate with these sites' shiny up-to-date CEs. According to ticket 110635 a cream-side fix should be out by the end of January (CREAM 1.16.5), although Alessandra suggests that Biomed should try to use newer, working WMSes - or Dirac instead!
 
'''Monday 19th January 2015, 14.30 GMT'''<br />
 
23 Open UK Tickets this week.

Latest revision as of 11:05, 4 January 2016

Monday 14th December 2015, 13.00 GMT
37 Open Tickets to end the year on.

Separating out the atlas storage consistency check tickets, I don't think there's going to me much more done in 2015 on them from either end. Sites with tickets still open for this are: Sussex, Oxford, Birmingham, Glasgow, Sheffield, RHUL, QM, Imperial, Brunel and the Tier 1 (who are almost done). If you're not almost done then you'll want to On Hold the ticket (and knock the urgency down to "less urgent" whilst you're at it).

Onto the other tickets, site by site:

SUSSEX
118337 (14/12)
Some accidental not-reading-downtime-noifications by Sno+, I suspect this ticket can be closed. In progress (14/12)

117739 (20/11)
PRODDISK deletion at Sussex - as they're in the middle of a SE move this will want to be on hold! Unless you're just not moving the proddisk directories? In progress (25/11)

118289 (10/12)
Pilots roles at Sussex- the ticket only just rolled into the site but if you can you might want to roll this into your downtime if it's going smoothly (and if you're having a smooth downtime, can you tell me what that's like?). Assigned (14/12)

RALPP
117711 (19/11)
Pheno Pilot Role ticket. At last word the new groups "just" needed to be configured on the middleware. Any joy? In progress (19/11)

117740 (20/11)
The deletion of proddisk at RALPP has snowballed into a good clearout of the atlas area. Chris asked this morning if atlasdatadisk/step09 can be deleted too, and if Brian's had a chance to look at the filelists he provided. Waiting for reply (14/12)

BRISTOL
118311 (12/12)
CMS SAM3 tests were failing at Bristol. A tweak of their site configs solved the problem and they're both green (although Lukasz was surprised that they were throwing SRM warnings at a site with no SRM). In progress (looks like it can be closed) (14/12)

118299 (10/12)
CMS transfer timeouts on job stageout - has this one snuck past the Bristol ticket-sentries? Assigned (11/12)

BIRMINGHAM
118155 (4/12)
Biomed have used up too much space on Birmingham's SE - but need help finding the dark data and asked for a dump. We could co-opt the atlas tools to make a dump for them? No news for a while. In progress (4/12)

GLASGOW
117706 (19/11)
Pheno pilots at Glasgow -Gareth has updated that they're having a overhaul of identity management at the site and will roll supporting the "Dirac VOs" into that work. On Hold (2/12)

118052 (30/11)
An atlas ticket about http support at Glasgow's endpoints. Sam noticed that this is likely a problem with some older dmlite versions on some pools, and is looking into a non-nuclear solution. In progress (1/12)

ECDF
95303 (1/7/13)
Tarball glexec ticket - as usual see the Lancaster update, as this is my bad. On hold (18/5)

SHEFFIELD
114460 (18/6)
Pilot roles at Sheffield - Elena is trying to roll this out this week. Any joy? In progress (1/12)

LANCASTER
118305 (11/12)
Low availability ticket for Lancaster after the events of last week. We're just waiting for the numbers to pick up now. On hold (11/12)

95299 (1/7/13)
Tarball glexec. We have a new, updated build process but have had no time to test the end product - we have to build lcas/lcmaps alongside glexec to cope with different paths. On Hold (2/11)

RHUL
118236 (9/12)
Atlas "no route to host" errors due to a downed pool node - I think Govind has happy news about that disk pool so hopefully this ticket can be closed. In progress (11/12) Update -solved, looks like a transient problem.

118295 (11/12)
SE not working for Pheno when they use lcg-tools, although they can get stuff with uberftp. Odd. Assigned (11/12)

QMUL
117723 (19/11)
Pilot roles for pheno - these were rolled out but weren't behaving (normal pheno jobs are fine). Any joy? In progress (23/11)

IMPERIAL
118162 (4/12)
LFC at Imperial not working right for users from NA62, which coincided with the attack on JANET. DAMMIT. The LFC restarted and Janusz ran a test which worked fine the day after. No word back from the user. In progress (9/12)

BRUNEL
117710 (19/11)
Pheno pilots at Brunel - Raul was hoping to get to this. Any luck? On hold (22/11)

100IT
116358 (22/9)
Included for completeness - a "missing image" ticket for 100IT. There was a recent update so there's some progress here. In progress (7/12)

THE TIER 1
116866 (12/10)
Snoplus (and other) pilot roles at the Tier 1. Daniela is throwing dirac test jobs at the Tier 1 to try to debug the issues. A lot of effort is going into this one. In progress (9/12)

118044 (30/11)
Atlas glexec hammerclouds failing at the Tier 1. It was noted that the failures occurred about the time of a config change, but the glexec-guru was unavailable. In progress (7/12)

117683 (18/11)
Castor not publishing glue2. Stephen Burke has offered a hand with the task. In progress (19/11)

118338 (14/12)
Janusz, with his mice (mouse?) hat on, cannot copy data from the Imperial SE to Castor via the FTS, but can do uploads/downloads using the regular tools. Early days of the issue. In progress (14/12)

116864 (12/10)
CMS AAA xroot test failures. The ticket submitter replied to some queries with quite a lot of information that hopefully helps get to the bottom of this. In progress (20/11)

118130 (3/12)
Technically a ticket to the whole atlas UK cloud I think - atlas transfers throughout the UK have a large number of errors. Brian has been investigating and asked the FTS experts to help. Waiting for Reply (11/12)

Update: 118345 (14/12)
Jeremy has asked the Tier 1 about how well utilised their WMS currently is - Catalin has been put forward to comment.

Monday 7th December 2015, 12.30 GMT
Apologies from Matt - we're picking ourselves up from the weekend's events up here in Lancaster.

31 Open UK tickets, 13 of these are atlas consistency checking tickets.

Link to all the GGUS tickets.

Of general interest is this atlas ticket: 118130, where atlas have noticed large numbers of transfer and staging errors for the cloud.

Full review next week!

Monday 30th November 2015, 15.30 GMT

33 Open Tickets this week. 14 are for the same Atlas request.

GAZILLION ATLAS CONSISTENCY TICKETS
I won't go over all these - but ECDF, BIRMINGHAM and IMPERIAL are still just "assigned". As Ewan did with Oxford's ticket, feel free to turn down the "Priority" on these tickets - these requests aren't urgent at all! A few sites are done already; a few are On Holding their tickets till later.

RALPP
117740 (20/11)
Atlas request to delete proddisk data which digressed a bit (after the original issue was confirmed as solved) but then stalled. At last check Brian asked for a list of files in ./users. In progress (23/11)

TIER 1
116864 (12/10)
CMS xrootd tests failing. No news since CMS replied on the 20th. Are things churning along in the background? In progress (20/11)

Forgot about this one:
GLASGOW
118052 (30/11)
Atlas, for the "WLCG HTTP Deployment Task Force" have ticket Glasgow after their SE showed up on this http probe page. I notice a few other UK sites in the red on that page. In progress (1/12)

I think that's it for "exciting" tickets - as always let me know if that's not the case.

Not much news on the Other VO nagios - it's looking pleasantly plain.

Monday 23rd November 2015, 15.00 GMT
There were just 18 Open UK Tickets - now we're up to 37...

ATLASPRODDISK Deletion (20/11)
117740 (RALPP)
117739 (Sussex)
117737 (Birmingham)

Brian (as an agent of atlas) has asked sites to delete the data from proddisk and remove the tokens. Chris has sorted it already for RALPP and the ticket has mutated to a generic atlas cleanout. No news on the other two.

PILOT LIGHT AT THE END OF THE TUNNEL
In another round of pilot wrangling by Daniela, sites have been asked to implement pilots for the Pheno VO (or, where they are enabled, find out why they're not working!): 117706 (Glasgow)
117710 (Brunel)
117711 (RALPP)
117723 (QMUL)

For completeness the older pilot tickets:
114460 (Sheffield)
116866 (TIER 1) The Tier 1 is at the "debugging" stage, no news from Sheffield though.

HOT OFF THE TICKET PRESS
Atlas have ticketed everyone and their grandmother asking to implement the automated storage consistency check stuff, which I think is a little unfair as I'm unconvinced of the maturity of the tools after spending a day with them last week.

I've updated Lancaster's ticket to see if we've set this up right during our mucking about last week: 117883
Update - Atlas have replied to say things are looking good at Lancaster, so we just need a few more tweaks (such as using Alessandra's updated and polished version of the dpm_dump.py script and creating dumps that are only a day old) and we're done.

EVEN HOTTER OFF THE PRESS
Brunel have been asked by CMS to upgrade their DPM to 1.8.10 in 117922 - however Raul reports that things aren't working all that well, suggesting CMS need to have a natter with the DPM devs.

Monday 16th November 2015, 15.45 GMT
Only 17 Open UK tickets this week.

LSST FRIDAY 13th TEETHING TROUBLES
117585 (Liverpool)
117586 (Oxford)
Daniela has been testing out the LSST Dirac pilots - Ewan's fighting the good fight at Oxford getting his ARC to work, maybe Liverpool's problem has a similar root cause (we should be so lucky)?

TIER 1
116866 (12/10)
In a similar vein (and thanks again to Daniela for the effort with preparing our pilots) is this ticket about getting "other" pilots enabled at the Tier 1 - particularly with a view to these tests: http://www.gridpp.ac.uk/php/gridpp-dirac-sam.php?action=viewlcg Any news? On hold (6/11)

ECDF(-RDF)
I think this new atlas ticket:
117606 (15/11)
is a duplicate of this current atlas ticket:
117447 (8/11)
So I suspect it can be closed!

I see from 117642 that the other VO nagios has been recently rebooted, which might explain the spate of failures I see - I'll check again before the meeting.


Tuesday 10th November 2015, 10.20 GMT
Down to 13 Open tickets this week!

All the UK tickets can be found here.

Not very exciting - the ECDF-RDF ticket 117447 (atlas complaining about stale file handle error messages - but it's for the "RDF" so not urgent?) might have slipped past Andy's sentries yesterday, and the two remaining pilot role tickets for Sheffield and the Tier 1 (114460 and 116866) could really do with updates - with the latter Daniela notes that the gridpp pilot roles are needed for testing purposes.

Anything I've missed?

Other VO Nagios - no persistent trouble anywhere (that matters to us) - a few ARC CE submit troubles over the last hour or so with some RALPP CEs but I don't think that's ought to worry about.

Monday 2nd November 2015, 13.30 GMT
22 Open UK Tickets this week. First Monday of the Month, so all the tickets get looked at, however run of the mill they are.

First, the link to all the UK tickets.

SUSSEX
116915 (14/10)
Low availability Ops ticket. On holded whilst the numbers soothe themselves. On Hold (23/10)

116865 (12/10)
Sno+ job submission failures. Not much on this ticket since it was set In Progress. Looks like an argus problem. How are things going at Sussex before Matt RB moves on? (We'll miss you Matt!) In progress (20/10)

RALPP
117261 (28/10)
Atlas jobs failing with stage out failures. Federico notices that the failures are due to odd errors - "file already existing" - and that things seem to be calming themselves. He's at a loss as to what RALPP can do. Checking the panda link suggests the errors are still there today. Waiting for reply (29/10)

BRISTOL
116775 (6/10)
Bristol's CMS glexec ticket. It looks like the solution is to have more cms pool accounts (which of course requires time to deploy). In progress (28/10)

117303 (30/10)
CMS, not Highlander fans, don't seem to believe that There can be only One (glexec ticket). Poor old Bristol seem to be playing whack-a-mole with duplicate tickets. Is there a note that can be left somewhere to stop this happening? Assigned (30/10)

ECDF
95303 (Long long ago)
Edinburgh's (and indeed Scotgrid's) only ticket is this tarball glexec ticket. A bit more on this later. On hold (18/5)

SHEFFIELD
114460 (18/6)
Gridpp (and other) VO pilot roles at Sheffield. No news for a while, snoplus are trying to use pilot roles now for dirac so this is becoming very relevant. In progress (9/10)

116560 (30/9)
Sno+ jobs failing, likely due to too many being submitted to the 10 slots that Sno+ has. Maybe a WMS scheduling problem - Stephen B has given advice. Elena asked if the problem persisted a few weeks ago. Waiting for reply (12/10)

116967 (17/10)
A ROD availability ticket, on hold as per SOP. On hold (20/10)

LANCASTER
116478 (28/9)
Another availability ticket. Autumn was not kind to many of us! On hold (8/10)

116882 (13/10)
Enabling pilot snoplus users at Lancaster. Shouldn't have been a problem, but turned into a bit of a comedy/tragedy of errors by yours truly mucking up. Hopefully fixed now- thanks to Daniela for her patience. In progress (2/11)

95299 (Far far away)
glexec tarball ticket. There's been a lot of communication with the glexec devs about this - the hopefully last hurdle is sorting out the RPATHs for the libraries. It's not a small hurdle though... On hold (2/11)

QMUL
117151 (23/10)
A ticket about jumbo frame problems, submitted to QM. After Dan provided some education the user replied that he only sees this problem at two atlas sites, but he is contacting the network admins at his institution to see if it is their end. On hold (29/10)

117011 (19/10)
ROD ticket for glue-validate errors. Went away for a while after Dan re-yaimed his site bdii, but possibly back again. Daniela suggests re-running the glue-validate test. Reopened (2/11)

116689 (6/10)
Another ROD ticket, where Ops glexec test jobs are seemingly timing out for QM (this is the ticket Daniela mentioned on the ops mailing list). Dan noted that with the cluster half full tests were passing, suggesting some kind of load correlation (but as he also notes - what's getting loaded and causing the problem - Batch, CE or WNs?). Kashif reckons it's the argus server, and suggests a handy glexec time test, which he posted. In progress (2/11)

BRUNEL
117324 (2/11)
A fresh looking ROD ticket - Raul had to restart the BDII and hopefully that got it. In progress (2/11)

100IT
116358 (22/9)
Missing Image at 100IT. 100IT have asked for more details, no news since. Waiting for reply (19/10)

THE TIER 1
116866 (12/10)
Snoplus pilot enablement (not actually a word) at the Tier 1. New accounts were being requested after some internal discussion. On hold (19/10)

116864 (12/10)
CMS AAA tests failing (the submitter notes "again..."). There are some oddities with other sites, which might be remote problems, but Andrew notes that previous manual fixes have been overwritten which likely explains why problems came back. In progress (does it need to be waiting for a reply?) (26/10)

117171 (24/10)
LHCB had problems with an arc CE that was misbehaving for everyone. Things were fixed, and this ticket can now be closed. Waiting for reply (can be closed) (27/10)

117277 (30/10)
Atlas have spotted "bring online timeout has been exceeded" errors. This appears to be a mixture of problems adding up, such as a number of broken disk nodes and heavy write access by atlas. In progress (2/11)

117248 (28/10)
I believe related to the discussion on tb-support, this ticket requests that new SRM host certs that meet the requirements specified be requested for the RAL SRMs. Jens was on it, and the new certs are ready to be deployed. In progress (30/10)

Other VO Nagios - some badness at Sussex, but they have a ticket open for that.

Monday 26th October 2015, 15.30 GMT
26 Open UK Tickets this week. Not many seem all that exciting though.

The link to all the UK tickets.

The few (two) tickets that really caught my eye are:

QMUL
117151 (23/10)
This ticket is quite interesting, mainly for Dan schooling the submitter. QM received a ticket complaining that their jumbo frames were breaking stuff - it doesn't look like the problem is at QM though. Naught wrong with the ticket handling. Waiting for reply (23/10)

BRISTOL
117065 (20/10)
Bristol have a CMS glexec error ticket that looks very similar to an existing one (116683), which is in turn spookily similar to a ticket being worked on by the Bristol admins (116775). At the very least I would say that if the two problems are different one would likely obfuscate the other. Is this a case of over-keen shifters submitting tickets without checking? I'd be tempted to close one or both of 116683 and this one (117065). Tell them I said it was okay to[1]. In progress (21/10)

[1] It probably is okay to.

There are also 4 availability tickets, all On Hold waiting for 30 days or so to pass.

Other VO Nagios looks clean at time of writing.

Monday 19th October 2015, 14.30 BST
28 Open UK Tickets today.

UCL
116920 (14/10)
UCL have an availability ticket, and Andrew M wonders what can be done for them as a VAC site to stop getting this sort of ticket. Assigned (15/10) Update - Alessandra has updated and On-Holded the ticket. Thanks!

SUSSEX
116865 (12/10)
116915 (14/10)
Sussex have a Sno+ ticket and a ROD ticket that don't seem to have been looked at since they were submitted last week. Both just Assigned.

LIVERPOOL
116918 (14/10)
Another ROD ticket (Invalid glue) - I think this one has snuck past the Liver-Lads' watch. Assigned (14/10)

QMUL
116782 (7/10)
Another ROD ticket - Dan looks like he's tracked down why one of his CEs is misbehaving (MaxStartups in sshd_config). Looks like this ticket at least can be closed; Gareth confirms that the test is green. Waiting for reply (19/10)

SHEFFIELD
116560 (30/9)
Stephen B has added some more information to try to figure out why Sno+ jobs are flooding Sheffield's 10 Sno+ slots. Elena is not sure if the problem persists though. Waiting for reply (12/10)

TIER 1
116864 (12/10)
It looks like this CMS AAA test problem has resolved itself - Federica asks if you chaps at RAL changed anything? Looks like the ticket can be closed. In progress (15/10)

116866 (12/10)
Enabling Sno+ pilots at the Tier 1. Only the LHC VOs had pilot roles enabled at the Tier 1, Andrew was going to discuss how best to make these changes. As with a similar issue at Lancaster - probably best to do it for all the VOs that will be using Dirac. In progress (13/10)

Monday 12th October 2015, 14.30 BST
23 Open UK Tickets this week. Just a light review.

OXFORD
116752 (1/10)
Oxford's CMS Phedex renewal ritual ticket. Chris B kindly offered in his ticket for RALPP to officiate this (un)holy task for Oxford, so I advise some communication between you chaps and him - which may well be going on, but it ain't in the ticket! Assigned (6/10)

BRISTOL
116683 (5/10)
116775 (6/10)
CMS have by the looks of it thrown a pair of duplicate tickets Bristol's way. How rude! Lukasz has rightfully suggested closing one of them (I suggest 116683. Winnie's put a good reply in t'other one). The underlying problem appears to be a shortage of pool accounts - what is the recommended number of accounts for VOs these days? In progress.

SHEFFIELD
114460 (18/6)
Sheffield pilot role tickets. Daniela has pointed out that as Sno+ have started using Sheffield in earnest they really could do with pilot roles enabled. In progress (9/10)

ECDF
116812 (8/10)
LHCB asked Andy to clean up the $LOGIN_POST_SCRIPT for LHCB at his site, removing an "export CMTEXTRATAGS="host-slc5"" line. Naught wrong with the ticket, but I liked how lhcb did some good debugging, and there's something about a profile script problem that reminds me of a simpler time... In progress (9/10)

A link to the rest of the tickets for completeness.

The other VO nagios.
Looks okay, some (known I think) errors at QM, and Sheffield seems to have a few SRM errors going on since Saturday.


Monday 5th October 2015, 14.15 BST

22 Open UK Tickets this month, all of them, Site by Site:

SUSSEX
116136 (9/9)
Sussex got a snoplus ticket for a high number of job failures, although simple test jobs ran okay. Matt asks if the problem persists, the reply was a resounding "not sure". In progress (think about closing) (21/9)

RALPP
116652 (1/10)
A ticket from CMS, about some important Phedex ritual that must occur on the 3rd of November, when the stars are right. The ticket needs some confirmation and feedback, plus the nomination of one site acolyte to receive the DBParam secrets from CMS - but the ticket only got assigned to sites this morning. Assigned (5/10)

BRISTOL
116651 (1/10)
Same as the RALPP ticket, Winnie has volunteered Dr Kreczko for the task. In progress (5/10)

ECDF
95303 (Long, long ago)
glexec ticket. On hold (18/5)

DURHAM
116576 (1/10)
Atlas ticket asking Durham to delete all files outside of the datadisk path. Oliver asks what this means for the other tokens (I think they can be sacrificed to feed datadisk, but Brian et al can confirm that). Waiting for reply (5/10)

SHEFFIELD
116560 (30/9)
Sno+ jobs having trouble at Sheffield. Looks like a proxy going stale problem as only 10 Sno+ jobs at a time can run at Sheffield. Matt M asks if/how the WMS can be notified to stop sending jobs in such a case. In progress (30/9)

114460 (18/6)
Gridpp Pilot roles. No news on this for a while, after the last attempt seemed to not quite work. In progress (30/7)

MANCHESTER
116585 (1/10)
Biomed ticketed Manchester with problems from their VO nagios box - which Alessandra points out is due to there being no spare cycles for biomed to run on. Assigned (can be put on hold or closed?) (1/10)

LIVERPOOL
116082 (7/9)
A classic ROD Availability ticket. On Hold (7/9)

LANCASTER (a little embarrassing that my own site has the most tickets)
116478 (28/9)
Another availability ticket, this time for Lancaster (which has been through the wars in September). Still trying to dig our way out, but even the Admin's broke. On hold (5/10)

116676 (5/10)
Another ROD ticket, Lancaster's not quite out of the woods. We think WMS access is somewhat broken. We have no idea about the sha2 error. In progress (5/10)

116366 (22/9)
Sno+ spotted malloc errors at Lancaster. The problems seemed to survive one batch of fixes, but I asked again if they still see problems after running a good number of jobs over the weekend. Waiting for reply (5/10)

95299 (In a galaxy far, far away)
glexec ticket. This was supposed to be done last week, after I had figured out "the formula" - but then last week happened. On hold (5/10)

QMUL
115959 (31/8)
LHCB job errors at QM, with a 70% pilot failure rate on ce05. Dan couldn't see where things are breaking (only that the CE wasn't publishing to APEL - and he asks if this is the cause of the problem). Waiting for reply (5/10)

116662 (5/10)
LHCB job failures on ce05 - almost certainly a duplicate of 115959, but it might have some useful information in it. Assigned (probably can be closed as a duplicate) (5/10)

IMPERIAL
116650 (1/10)
Imperial's invitation to the CMS Phedex DBParam ritual. Daniela's on it, as well as the other CMS sites. On hold (5/10)

BRUNEL
116649 (1/10)
Brunel's ticket for the great DBParam alignment of 2015. On hold (5/10)

116455 (28/9)
A CMS request to change the xrootd monitoring configs. Did you get round to doing this last week Raul? In progress (29/9)

EFDA-JET
115448 (3/8)
Biomed having trouble tagging the jet CE. The Jet admins think this is the same underlying issue as their other ticket, 115496. In progress (25/9)

115496 (5/8)
Biomed unable to remove files from the jet SE. There are clues that suggest that some dns oddness is the cause, but it's not clear. In progress (18/9)

100IT
116358 (22/9)
Ticket complaining about a missing image at the site. Some to and fro, the ball is back in the site's court. In progress (2/10)

TIER 1
116618 (1/10)
The Tier 1's CMS DBParam ritual ticket. In progress (5/10)

Let me know if I missed ought.

T'OTHER VO NAGIOS
At time of writing things look a bit rough at QM, Liverpool (just getting over their downtime) and for Sno+ at Sheffield (likely related to their ticket).


Matt's on holiday until the 29th, so he's being replaced with links or any update Jeremy is kind enough to provide.

CAST YOUR GAZE UPON THE UK'S GGUS TICKETS HERE.

FEAST YOUR EYES ON THE T'OTHER VO NAGIOS STATUS HERE

"Normal" service will resume in October. I'll leave y'all anticipating that lovely review of all the tickets.

Monday 14th September 2015, 15.00 BST

Yet another brief ticket review - it's been a tad busy! Hopefully normal service will resume, err, next month. My apologies.

Only 20 Open UK tickets this week.

115805 - This RAL ticket from Sno+ looks like it can be closed, with there not actually being a problem with the site, or a feature that RAL would really want to implement.

Keeping on the Tier 1, does this ticket concerning the FTS3 certificates (115290) have anyone looking into it yet?

Are these two QMUL LHCB tickets duplicates, or related? 115959 & the more recent 116153

This Sussex ticket looks like it could do with someone taking a look - still just "assigned" since the 9th - 116136

Finally, an interesting VAC ticket for Oxford - "no space left on device" for atlas jobs - 116123. Bit of an odd one!

Monday 7th September 2015, 15.00 BST
20 Open UK Tickets this week. Due to Interesting Downtimes (apologies for reusing that pun!) yet another fairly light review. But not much is going on in ticketland.

THE GGUS TICKETS IN ALL THEIR GLORY
The Sno+ ticket 115805 is interesting - Sno+ are looking at monitoring jobs submitted via WMS through the port 9000 URLs, but the RAL WMS behaves differently from the Glasgow one and doesn't let others with the same roles look at the links. Sno+ are still "developing" their grid infrastructure with the WMS in mind by the looks of it.

The pilot role at Sheffield ticket 114460 could still do with some attention, or an update.

Bristol had a ticket from CMS that looks interesting - 115883. The CMS SAM3 tests are confused by Bristol having an SRM-less SE endpoint. Waiting for reply after things were clarified.

T'Other VO Nagios Page
A few "The CREAM service cannot accept jobs at the moment" style errors at QM, but they're only a few hours old. Otherwise looking alright beyond the usual noise.

Of course with these light reviews I could well be missing something, so feel free to let me know - sites or VO representatives.

Monday 24th August 2015, 15.45 BST

21 Open UK tickets this week, most of which are being looked at or are understood. There will be no ticket update from Matt next week (1st September) either, as he will be flapping about during a local downtime.

YE OLDE GGUS TICKET LINK
114460 (gridpp pilots at Sheffield) could do with an update. The Liverpool ticket discussed last week (114248) has received an update from the user saying that the ticket can be closed. As Steve mentioned last week the underlying issue is still very much there, but I don't think this ticket is a suitable banner for us to fight that battle under.

T'Other VO Nagios Page
As usual nothing to see here at time of writing, sites are doing a grand job of working for the monitored VOs.

And that's all folks! See you in Liverpool.


Monday 17th August 2015, 15.00 BST
29 Open UK Tickets this week.

TIER 1
114992 (10/7)
It looks like this CMS transfer problem ticket can be closed after a user update last week, which reported enabling multiple streams solved the initial failures. In progress (13/8)

EFDA-JET
115613 (assigned) Update- looks like this was a temporary problem, and the ticket can be closed.
115448 (in progress, but empty)
115496 (in progress, but might not be a site problem).
Jet have 3 biomed tickets, 2 of which are looking a little neglected.

CAMBRIDGE
115655 (12/8)
John rightfully asks why a lengthy but very much scheduled downtime sets off a ROD alarm. Other than that, it'll be worth "on-holding" this ticket whilst the unrighteous red flag fades. In progress (17/8) Update - On holded

LIVERPOOL
114248 (10/6)
This Sno+ ticket looks like it needs some chasing up, no news for nearly 2 months. In progress (21/7) Update - Steve commented on this on TB-SUPPORT

Monday 10th August 2015, 15.00 BST

T'Other VO Nagios looks alright at time of writing.

24 Open UK tickets this week.

Lots of activity on GGUS since yesterday. Most positive, but I see the number of tickets at EFDA-JET increasing - all three from biomed.

Tier 1
115512 (5/8)
An interesting ticket - where a banned user is still banned after moving to LHCB from Biomed (no, he's not called Heinz). In Progress (6/8) Update - Andrew has asked that the ticket be reassigned to the argus devs as the pepd logs are showing oddness. Waiting for reply now.

Bristol
115565 (7/8)
Bristol's phedex agents are down, and have been for a few days. I might have dreamt this, but thought that the Bristol Phedex service might not be hosted at Bristol, especially after RALPP had a similar ticket (115566) at the same time. Assigned (7/8) Update - solved

Manchester
115504 (ROD Ticket) Solved
115399 (Wiki Ticket) Still Open
Both these tickets look like they can be closed.

115525 (5/8)
Atlas deletion errors after a disk server fell over - nothing wrong with the ticket handling, but Alessandra brings up a point that always niggles me - the emphasis on the total number of transaction errors and not the number of affected unique files. In progress (8/8)
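
For what it's worth, if you have the raw error dump to hand the distinction takes a few lines to make - a toy sketch (the log file name and field layout are invented for illustration):

<pre>
# Toy illustration of Alessandra's point: many transfer/deletion errors can
# map onto far fewer distinct files. The input format and filename are made up.
errors = 0
files = set()

with open("deletion_errors.log") as log:       # hypothetical dump of error lines
    for line in log:
        parts = line.split()
        if not parts:
            continue
        errors += 1
        files.add(parts[-1])                   # assume the SURL is the last field

print("%d error entries, %d unique files affected" % (errors, len(files)))
</pre>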

QMUL
114573 (23/6)
LHCB ticket sparked by those IPv6 problems. Still no word from Vladimir; Raja - could you comment? I suspect there's been plenty of room for LHCB jobs in QM's (and everyone else who mainlines atlas jobs) queues this weekend. Waiting for reply (21/7)

Monday 3rd August 2015, 14.30 BST

23 Open tickets this month, full review time.

Newish this morning
As discussed on TB-SUPPORT, a few sites have been getting "I can't lcg-tag your CE" tickets from Biomed. The Liverpool ticket 115449, solved and verified, was the flagship of these issues. Brunel (115445) and EFDA-JET (115448) also have tickets about this.


Sno+ "glite-wms-job-status warning" (3/8)
Glasgow: 115435
Tier 1: 115434
Matt M submitted these tickets to Glasgow and the Tier 1 after having trouble with a proportion of Sno+ jobs. Both are being looked at- definitely worth collaborating on this one.

Wiki-leak...
115399 (31/7)
Jeremy noticed that the wiki didn't work for him on Friday - but it seems to work for Jeremy, Alessandra and myself now. As Jeremy notes the ticket can be closed, but out of interest did anyone else spot any problems? In progress (3/8)

Spare the ROD...
LIVERPOOL
115433 (3/8)
Some CE problems noticed on the dashboard for the Liver-lads - who might be in mourning. Assigned (3/8) Update - aaannd Solved by upping gridftp max connections

RALPP & UCL
114764
114851
Both of these are "availability" alarm tickets, on-holding until they clear. I hope RALPP managed to get a re-computation for their unfair failures (IGTF-1.65 problems on ARC).

Sno+'d Under
115387 (30/7)
I'm uncertain if this Sno+ ticket, probably somewhat related to Matt M's recent thread on TB-SUPPORT and concerning xrootd access, is meant for the Tier 1 or RALPP. Assigned (3/8)

First Tier Problems.
115417 (2/8)
LHCB spotted a number of nodes with cvmfs problems at the Tier 1, which the RAL team had already jumped on and repaired this morning. They wonder if the problem persists. Waiting for reply (3/8)

115290 (28/7)
An FTS problem requiring some special CA magic to solve, but the current CA-wizard isn't about. On hold (29/7)

113836 (20/5)
Glue 1 vs Glue 2 queue mismatches. Work is ongoing on perfecting cluster publishing for ARC CEs, but the ticket could do with either an update or on-holding. In progress (24/6)

114992 (10/7)
CMS transfers failing between RAL and, err, TAMU in the US. Assigned to RAL, where Brian has been investigating and Andrew has posed a good question, asking if the user has considered managing the transfers with FTS. Quiet on the user side. In progress (21/7)

108944 (1/10/2014)
One of the tickets from the before times, about CMS AAA access tests. It has become a long and confusing saga, but Gareth rescued it with a handy summary of the issue in his last update. How goes the battle? In progress (17/7)

Oxford Squid is red.
115230 (24/7)
Which might be the colour Ewan's seeing right now! The ticket is reopened, with a comment from Alessandra that the current recommendation is to allow all CERN addresses, and asks if this is something Oxford could do. Reopened (3/8) Update - solved after a squid restart. It's not the end of this ordeal, but Ewan would like to tackle it in a different arena than an Oxford ticket.

Mavericks and Gooses - GridPP Pilot roles.
114485 Bristol
114460 Sheffield
114442 RALPP
114441 RHUL
Daniela hopped right back into the pilot seat after getting back from her holidays. Bristol and RALPP are looking good, Sheffield and RHUL are still in the Danger Zone - RHUL in particular were having troubles with argus and could do with some working configs from elsewhere to compare and contrast with their own.
Update - RHUL are looking better, just a few queue permission tweaks to go by the looks of it.


My shame - tarball glexec
95303 ECDF
95299 Lancaster
The tarball glexec tickets. Actually this is likely to become a defunct (or at least different) problem at Edinburgh with their SL7 move. Lancaster has a plan - we plan to deploy *something* (amazing plan there Matt) during our next big reinstall in September. Between now and then I have a test CE, cluster and most importantly some time.

Pot-luck tickets (or those I couldn't group).

Durham
114381 (16/6)
A tiny fraction of jobs publishing 0 cores used. Looks to be a slurm oddity. Oliver upgraded their CEs to ARC5 last week and hopes this has fixed things. Fingers crossed! In progress (29/7)

Lancaster
100566 (27/1/2014)
My other shame - Lancaster's poor perfsonar performance. It's being worked on. On Hold (should be back in progress soon) (30/7)

Liverpool
114248 (10/6)
Sno+ production problems at Liverpool, probably due to a lack of space in the shared area. Things are back in Sno+'s court, with the submitter consulting the Sno+ gurus (I think). In progress (21/7)

QMUL
114573 (23/6)
LHCB job submission problems due to the known-about dual-stacking issues. Waiting for input from LHCB for a while now, as things look okay at QM now but at last check LHCB jobs still weren't running for some reason. Waiting for reply (21/7)

That's all folks!

Monday 27th July 2015, 16.10 BST

Only 20 UK Tickets this week, and many are on hold for summer holidays. I pruned a few tickets, but none are striking me as needing urgent action, so this will be brief.

Mandatory UK GGUS Link
Nothing to see here really, but maybe I'm missing something? Nit-picking I see:

114381 (publishing problems at Durham) could still do with an update - not sure if work is progressing offline on the issue.

The Snoplus ticket 115165 looks like it might be of interest for others - in it Matt M asks about tape-functionality in gfal2 tools. Brian has updated the ticket clueing us in about gfal-xattr.
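
For anyone else wanting to poke at this, the same attribute is reachable from the gfal2 python bindings as well as the gfal-xattr CLI - a minimal sketch, assuming the gfal2-python bindings are installed (the SURL is a placeholder, not a real Sno+ file):

<pre>
# Ask the SE whether a file is staged, via the gfal2 python bindings
# (the same "user.status" attribute that gfal-xattr exposes).
# The SURL below is a placeholder.
import gfal2

surl = "srm://srm-snoplus.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/snoplus/some/file"

ctx = gfal2.creat_context()
print(surl, "->", ctx.getxattr(surl, "user.status"))   # e.g. ONLINE, NEARLINE or both
</pre>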

UK T'other VO Nagios
A few failures here at time of writing - although only one at Brunel seems to be more than a few hours old (dc2-grid-66.brunel.ac.uk is failing pheno CE tests).

Let me know if I missed ought!

Monday 20th July 2015, 14.30 BST
27 Open UK Tickets this week.

Brunel
115113 (17/7)
This Brunel Ops ticket (otherwise okay) is a good reminder that when removing CEs from the GOCDB for your site make sure to get all the services, or else you just might end up still monitored (and thus end up ticketed!). Reopened (can probably be closed soon) (20/7) Update - ticket solved

114006 (31/5)
This ticket about problems with Brunel's accounting figures was looking promisingly close to being solved (at least to my layman's eyes) 3 weeks ago. Any word offline? In progress (30/6)

Lancaster
114845 (6/7)
LHCB jobs were failing at Lancaster a fortnight ago, but things should have been fixed quite promptly. Are they still broken? Waiting for reply (14/7)

QMUL
114573 (23/6)
A similar case for this LHCB ticket for QM (part of their ongoing dual-stack saga). Waiting for reply (13/7)

Note: The related issue 115017 has been "solved" pending further testing.

Sheffield
114649 (26/6)
This Sno+ ticket has been in "Waiting for Reply" for a little while, no word from the user (who isn't Matt M). Could we poke Sno+ through another channel about this? Waiting for reply (6/7)

Liverpool
114248 (10/6)
We could do with finding out from Sno+ the state of play at Liverpool too, although we might have to wait until Steve is back from his hols later this week to field any replies. In progress (17/6)

Durham
114381 (16/6)
Ticket concerning the small percentage of Durham jobs that aren't publishing their core count (probably a slurm oddity). Now that Oliver's back from holiday has he had time to look at this? On Hold (19/6)

Tier 1
https://ggus.eu/?mode=ticket_info&ticket_id=113836 (20/5)
I suspect that whilst the work described chugs along in the background we should consider on-holding this ticket. In progress (24/6)

UCL 114746 (30/6)
Ben has been battling getting his DPM working, and spotted an interesting problem where SELinux was blocking the httpd from accessing mysql. It's nice to see someone not just switching SELinux off (like I have a habit of doing...). In progress (20/7)

ECDF
115003 (12/7)
Andy having some problems on a test SE at ECDF - he seems to be suffering a series of unfortunate errors. Maybe the storage group could help? In progress (17/7)

GridPP Pilot Roles.
Bristol are ready for testing, Govind discovered a possible bug in argus that needs a bit more testing at RHUL. Things seem quiet at RALPP and Sheffield, and Brunel too.

Supplemental - In the region's multicore publishing ticket (114233) only Oxford have a CE still appearing to publish 0 cores - but I thought this CE was Old-Yeller'ed?

Tuesday 14th July 2015, 9.30 BST

Lazy update today, due to some fun and games at Lancaster yesterday.

UK GGUS Tickets

Other VO Nagios

Tickets that pop:

114952 and 114951 are both atlas frontier tickets at RALPP and Oxford, both have been reopened - although the underlying issues seem different. The Oxford ticket is similar to one at RAL (114957), which looks to be caused by an unannounced change in IP for some important atlas squids (AIUI - speed reading the tickets this morning).

QM IPv6 Woes
Followers of the atlas uk lists will have noticed some heroic attempts to diagnose and repair problems at QM which appear to be someone else's fault. LHCB's ticket to QM on the matter: 114573
Dan's ticket concerning the "rogue routes": 115017

GridPP Pilot Roles
Durham, Bristol, Sheffield, Brunel and RHUL still have open tickets about this. Bristol are working on it, as are Durham - Oliver's ready for their setup to be tested again (Puppet overwrote his last changes!). Not much recent news from the other three.

That's all from me folks, let me know if I missed ought!


Monday 6th July 2015, 14.00 BST
30 Open UK Tickets this month. Looking at them all!

NGI
114233 (10/6)
The UK not publishing core counts at all sites. Some progress, but at last check John G couldn't see a change for Oxford or Glasgow. In progress (30/6) Update - Glasgow seems to be okay after de-creaming; checking the July list we have t2ce06 at Oxford, ce3 and ce4 at Durham (see their ticket) and cetest02 at IC (but that node has test in its hostname!).

RALPP
114442 (18/6)
Gridpp Pilot role ticket. Accounts need to be created, but no word for a few weeks. In progress (19/6)

114764 (1/7)
Ticket tracking (false) availability issues, created to appease COD - the problem was caused by a broken CA rpm release for Arc CEs. Kashif has created a counter-ticket, 114742. Gordon's sagely advice is to submit a recalculation request once the issue is fixed. Assigned (1/7)

BRISTOL
114485 (19/6)
Bristol's gridpp pilot role ticket. No news, could do with an update really. In progress (22/6)

114426 (18/6)
CMS AAA reading test problems. The Bristol admins have transferred data to their new shiny SE and have asked CMS to test again. No word since. Waiting for reply (30/6)

EDINBURGH
95303 (1/7/13...)
Tarball glexec ticket, now 2 years old. After a really promising burst the last 6 weeks haven't seen any progress, due to a lot of other "normal" tarball work taking up the time. Sorry! On hold (18/5)

DURHAM
114536 (22/6)
Durham's gridpp pilot role ticket. Not acknowledged yet, is Oliver back yet? Assigned (22/6)

114765 (1/7)
See RALPP ticket 114764. Assigned (1/7)

114727 (30/6)
Catalin ticketed that a number of SW_DIR variables at Durham are still pointing to the old school .gridpp.ac.uk cvmfs space. Assigned (30/6)

114381 (16/6)
John G ticketed Durham over a small percentage of jobs being published as "zero core". Looks like a SLURM timeout problem, although a fix isn't obvious. Put on the back burner whilst Oliver is on holiday. On Hold (19/6)

SHEFFIELD
114649 (26/6)
A ticket from a Sno+ user about not being able to access software using the Sheffield CEs. Acknowledged but no news. In progress (26/6) Update - Elena can't find anything wrong, cvmfs seems to be working fine. Perhaps a problem with the environment?

114460 (18/6)
Sheffield's gridpp pilot role ticket. Did you get round to rolling them out? In progress (19/6)

MANCHESTER
114444 (18/6)
LHCB ticket concerning the DPM's SRM not returning checksum information. On hold whilst a related ticket is being looked at (111403). On Hold (22/6)

LIVERPOOL
114248 (10/6)
Another Sno+ ticket, about grid production jobs failing at Liverpool. AIUI caused by Sno+ running out of space on the shared pool. At last check Steve posted the usage information for Sno+ but no word since (and Steve's off on his hols). In progress (17/6)

LANCASTER
114845 (6/7)
LHCB pilots failing at Lancaster. Looks like a simple node misconfiguration, hopefully fixed, waiting to see if it is. On hold (6/7)

95299 (1/7/2013)
glexec ticket - see Edinburgh description. On hold (15/5)

100566 (27/1)
Bad bandwidth performance at Lancaster. Hoping that IPv6 will shake things up a bit so pushing that. On hold (18/5)

UCL
114746 (30/6)
SRM-put failures ROD ticket. No news at all. Assigned (30/6)

114851 (6/7)
Low availability ROD ticket, related to above. Assigned (6/7)

RHUL
114441 (18/6)
Another GridPP pilot role ticket. Pilots rolled out, but something isn't quite right and they're not working - Govind is looking again. In progress (6/7)

QMUL
114573 (23/6)
LHCB ticket about two out of three QM CEs not responding for them. Dan spotted the broken CEs were dual-stacked, the working one wasn't. The ticket seemed to have trailed off into some confusion over who needs to do some testing where. I agree with Dan that that "who" needs to be someone with LHCB credentials! The waters still seem muddied. In progress (1/7)
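
As an aside, checking which CEs actually are dual-stacked is quick to do from anywhere with python - a small sketch (the hostnames are examples, not a definitive list of QM's CEs):

<pre>
# Quick check of which hosts resolve to IPv4, IPv6 or both - handy when
# chasing dual-stack-related failures. Hostnames are illustrative examples.
import socket

hosts = ["ce05.esc.qmul.ac.uk", "ce06.esc.qmul.ac.uk"]   # example CEs

for host in hosts:
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror as err:
        print(host, "lookup failed:", err)
        continue
    families = {info[0] for info in infos}
    v4 = socket.AF_INET in families
    v6 = socket.AF_INET6 in families
    print("%s: IPv4=%s IPv6=%s %s" % (host, v4, v6, "(dual-stacked)" if v4 and v6 else ""))
</pre>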

IC
114737 (30/6)
The IC voms wasn't updating properly, due to what I infer from the ticket as "SSL/mysql madness". Simon and Robert have been heroically battling this one - it's a good read. On hold (3/7)

114379 (16/6)
Sam's ticket about SE support in Dirac. Sam will shortly try testing things out on the new Dirac to see how it fares. In progress (6/7)

BRUNEL
114447 (18/6)
Brunel's gridpp pilot ticket. Being worked on, with one CE with the pilots enabled. In progress (26/6)

114006 (31/5)
A ticket from APEL, about Brunel under-reporting the number of jobs they are doing. Turned out to be a problem with Arc, which Raul upgraded to the fixed 5.0 version. The APEL team deleted the sync records, but no word since. In progress (30/6)

114850 (6/7)
Another APEL ticket, likely the fallout of the previous one - it looks like GAP publishing has been left on for the Brunel CREAM CEs. Assigned (6/7) Update - solved

TIER 1
114786 (2/7)
Low availability ticket - see RALPP ticket 114764 - probably could do with On holding. In progress (2/7) Update - Onholded

113910 (26/5)
Sno+ data staging problems. Brian gave some advice on how the large VOs do data staging from tape, and has asked if Sno+ still has problems. Matt M might still be on leave though. Waiting for reply (23/6)

108944 (1/10/14)
CMS AAA problems, which eventually brought to light a problem with super-hot datasets, which was alleviated (I think). Despite an update to castor that improved performance, the last batch of tests didn't show improved results. No news since. In progress (17/6)

113836 (20/5)
Glue mismatch problems at RAL. Working on getting "many-Arcs" to correctly publish. In progress (24/6)

Monday 29th June 2015, 14.30 BST

Looking at the "Other VO" Nagios.
Things look generally alright, but Durham look like they need to update their CA rpms - though that might have to wait until Oliver is back from leave.

Tarballs:
I don't think this affects many, but there was a ticket to produce a new version of the WN tarball (which is done): 114574
Although AFAICS there is no urgent need to upgrade tarball WNs.

26 UK Tickets, although not many stand out.

Gridpp VO Pilot tickets:
Largely doing alright. With Oliver away the Durham ticket hasn't been looked at yet. Sheffield and Bristol's tickets could do with an update (or on-holding if there's going to be a delay). The RHUL ticket has been reopened as their deployment of the pilot roles hasn't quite worked out.

QMUL
114573 (23/6)
LHCB having trouble with two out of three QM CEs. Dan notes that the two "broken" CEs have been recently dual-stacked, and asks if this could be the problem. The answer is a resounding "maybe", and Raja asks if problems could be duplicated by others using lxplus. Waiting for reply (24/6)

IMPERIAL/DIRAC
114379 (16/6)
Sam's ticket trying to get SE support with Dirac...er, spruced up. Daniela has asked if the tests can be redone with the "new" dirac. Waiting for reply (22/6)

Let me know if I missed any tickets.

Monday 22nd June 2015, 14.00 BST

35 Open UK Tickets this week (!!!)

GridPP Pilot Role
A dozen of them are from Daniela (who painstakingly submitted them all) concerning getting the gridpp (and other) pilot role enabled on the site in question's CEs.

An example of one of these tickets is:
114440 (Lancaster's smug solved ticket).

Ticketed sites are: Durham, Bristol, Cambridge, Glasgow, ECDF (who are also having general gridpp VO support problems), EFDA-JET (looking solved), Oxford, Liverpool, Sheffield, Brunel, RALPP and RHUL. Most tickets are being worked on fine, but the Bristol and Liverpool ones were still just in an "assigned" state at time of writing.
Update - good progress on this, just one ticket left "assigned". Cambridge are done, as are JET (ticket needs to be closed). Oxford and Manchester are ready to have their new setups tried out, with Oxford kindly road-testing glexec for the pilot roles. Good stuff.

Core Count Publishing
114233 (10/6)
Of the sites mentioned in this ticket (Durham[1], IC, Liverpool, Glasgow, Oxford) who *hasn't* had a go at changing their core count publishing? I know Oxford have. Daniela had a pertinent question about publishing for VMs, which John answered. In progress (17/6)

[1] Durham have another ticket on this which may explain their lack of core count publishing: 114381 (16/6)

DIRAC
114379 (16/6)
Sam S formed this ticket over having trouble accessing the majority of SEs over Dirac, after some discussion around this last week. Sam acknowledges that this could be a site problem, not a DIRAC problem, but you gotta start somewhere (he worded that point more eloquently). Daniela has posted her latest and greatest DIRAC setup gubbins for Sam to try out. Another, unrelated, point is the names missing from Sam's list - for example, I'm pretty sure Lancaster should support gridpp VO storage but I've forgotten to roll it out! Waiting for reply (22/6)

LIVERPOOL
114248 (10/6)
Final ticket today, and another one discussed last week in the storage meeting. Steve's explanation of why (and how) Sno+ would need to start using space tokens was fantastically well worded, in a way that won't spook easily startled users. David is digesting the information, but it will likely need to wait for Matt M's return before we'll see progress. In progress (16/6)
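
For reference, writing into a space token from the command line is just a case of naming the token on the copy - a rough sketch with gfal2-util below. The token name and SURL are made-up placeholders, and it's worth checking gfal-copy --help on your UI for the exact option spelling:

 # copy a local file into a (hypothetical) SNOPLUS space token on a DPM SE
 gfal-copy --dst-spacetoken SNOPLUS \
   file:///home/user/run001.zdab \
   srm://se.example.ac.uk/dpm/example.ac.uk/home/snoplus.snolab.ca/test/run001.zdab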

MANCHESTER
114444(18/6)
I told a pork pie when I said that was the last ticket - this one caught my eye. A ticket from lhcb over files not having their checksums stored on Manchester's DPM. A link was given to another ticket at CBPF for a similar issue which got the DPM devs involved (111403) - although Andrew McNab was already subscribed to the ticket. In progress (19/6)
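
If anyone wants to spot-check what checksum (if any) a DPM reports for a replica, gfal-sum will ask the SRM for it - a sketch only, with a placeholder SURL rather than a real Manchester path:

 # query the ADLER32 checksum the storage reports for a replica
 gfal-sum srm://se.example.ac.uk/dpm/example.ac.uk/home/lhcb/some/file.dst ADLER32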

Monday 15th June 2015, 14.15 BST

Other VO nagios:
ce05.esc.qmul.ac.uk and hepgrid11.ph.liv.ac.uk seem to be having a spot of bother for multiple VOs, and hepgrid2.ph.liv.ac.uk seems to be starting to have trouble too.

19 Open UK Tickets this week.

NGI
114233 (10/6)
John Gordon ticketed the NGI (as well as others) about some sites in the UK not publishing core counts with their APEL numbers (or more precisely have submission hosts at that site not publishing). Following the link it looks like Imperial, Liverpool, Durham, Glasgow and Oxford are on this list, I've listed the submission hosts reporting "0" core jobs below to help people clear up their rogues! If you've only fixed things in the last fortnight you'd still show up on this list. In progress (15/6)

"0" core job submission hosts:
ce3.dur.scotgrid.ac.uk
ce4.dur.scotgrid.ac.uk
cetest02.grid.hep.ph.ic.ac.uk
hepgrid5.ph.liv.ac.uk
hepgrid6.ph.liv.ac.uk
hepgrid97.ph.liv.ac.uk
svr009.gla.scotgrid.ac.uk
t2ce06.physics.ox.ac.uk

Tier 1
113914 (26/5)
Sno+ file copying ticket. Matt M is away on his hols, but Dave Auty has taken over his duties and reports that this problem seems to have gone away - it can probably be closed. In progress (9/6)

Liverpool
114248 (10/6)
Another Sno+ ticket here from David concerning job failures. Nothing wrong with the ticket handling, but I thought that David's errors in submitting test jobs are worth documenting, as they were very understandable. David has since asked if the Sno+ job failures are linked to Sno+ nagios test failures at Liverpool. In progress (12/6)

Imperial
114157 (8/6)
After Simon and Daniela cleared up the atlas dark data and expanded their Space Tokens using the space freed up, there still seems to be some confusion in Rucio, which disagrees with the SRM numbers. In progress (10/6)

Oxford
114208 (9/6)
Oxford being ticketed for UKI-SOUTHGRID-OX-HEP_IPV6TEST failing connection tests, tests for which Oxford should not be getting ticketed afaics. I remember this being mentioned in the Thursday cloud meeting, but I'm ashamed to say I wasn't paying attention. Were any conclusions drawn/decisions made? In progress (10/6)

Manchester
114153 (7/6)
Atlas transfer failures from Manchester. Errors are still occurring as of yesterday, any news? In progress (14/6)


Monday 8th June 2015, 15.00 BST
21 Open Tickets this week.

TIER 1
113914 (26/5)
Sno+ had problems at the Tier 1 where jobs failed whilst uploading data, believed to be due to an incorrect VOInfoPath. Attempts to replicate the issue have failed, and the VOInfoPath advertised is correct. Very confusing, as I assume it all worked at some point before! In progress (2/6)
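
For anyone wanting to double-check what VOInfoPath a site is advertising, the BDII can be queried directly - a rough sketch (the top-level BDII host here is from memory and the VO filter is just an example, so adjust to taste):

 # list the GlueVOInfoPath values published for snoplus (grep the output for the SE you care about)
 ldapsearch -x -LLL -H ldap://lcgbdii.gridpp.rl.ac.uk:2170 -b o=grid \
   '(&(objectClass=GlueVOInfo)(GlueVOInfoAccessControlBaseRule=VO:snoplus.snolab.ca))' \
   GlueVOInfoPath GlueChunkKey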

113910 (26/5)
Another Sno+ ticket, concerning lcg-cp timeouts whilst data-staging from tape. Matt M has asked for advice on the best practice for doing this, or if Sno+ would be better off just upping their timeouts. Brian has given some advice on using the "bringonline" command, but is himself unsure of the best way of seeing what files are currently in a VO's cache. Not much news since. In progress (28/5)
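
In the meantime, the "just up the timeouts" option looks something like the sketch below (flag names are from memory of the lcg_util man pages, so check lcg-cp --help), and gfal-xattr can give a rough idea of whether a file is already staged to disk. The SURL is a made-up example, not a real Sno+ path:

 # give the SRM plenty of time to stage the file from tape before copying
 lcg-cp --connect-timeout 300 --srm-timeout 3600 --sendreceive-timeout 3600 \
   srm://srm-example.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/snoplus/file.dat \
   file:///tmp/file.dat

 # check whether a file is ONLINE (on disk) or NEARLINE (tape only)
 gfal-xattr srm://srm-example.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/snoplus/file.dat user.status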

114004 (31/5)
Atlas transfers fail due to the "bring-online" timeout being exceeded. Brian spotted a problem with file timestamps mismatching, but no news on this ticket since. In progress (1/6)

BRUNEL
114006 (31/5)
APEL accounting oddness at Brunel, noticed by the APEL team. After much to-and-fro-ing John noticed that multiple CEs were using the same SubmitHost, and thus overwriting each other's sync records. Something to watch out for. In progress (7/6)

IMPERIAL
114157 (8/6)
There's been some debate on the atlas lists about this ticket, a classic "not enough space at the site" ticket. Rising above the indignation over being ticketed for this, Daniela has offered a couple of TB to give some space, and pointed out that IC have some atlas data outside space tokens, and that this could be used to expand the tokens if cleaned up. Waiting for reply (8/6)


Tuesday 26th May 2015
Matt's on leave until the 8th of June. But he's replaceable with handy links... 23 tickets today:

Other VO Nagios

UK NGI GGUS tickets

Monday 18th May 2015, 14.30 BST
Full review this week.

Other VO Nagios
At time of writing I see problems with test jobs at Brunel for pheno and Liverpool for a number of VOs (see Sno+ ticket for probable cause and fix at Liverpool).

22 Open UK Tickets this week. Going site-by-site:

APEL/NGI
113473 (4/5)
Missing accounting data for April for some sites. Raul is discussing things for Brunel in the ticket, although they have republished. I think it's only ECDF left to republish their April data. In progress (16/5)

OXFORD
113482 (26/4)
Loss of accounting data for Oxford needing an APEL republish. The Oxford guys republished, but there is some confusion with the resulting numbers. Discussion is ongoing, John G is currently looking at the records. In progress (14/5)

113650 (11/5)
CMS glideins failing at Oxford. The original problem was with a config tweak being left out of the cvmfs setup, but the ticket has been reopened citing problems persisting on the ARC CE (the CREAM appears to be fixed). Reopened (16/5)

GLASGOW 113095 (17/4)
ROD ticket about batch system BDII failures, left open to avoid unnecessary ticket filing. Gareth noted that the full migration to ARC and HTCondor, which should see the end of these issues, will hopefully be completed by the end of June. On Hold (12/5)

ECDF 95303 (31/7/13)
Somehow left this one out of the e-mail update. Edinburgh's glexec ticket, dependent on the tarball. I put in my tuppence worth today with my tarball hat on. On hold (18/5)

SHEFFIELD
113769 (18/5)
LHCB see a cvmfs problem at Sheffield. Elena has probably fixed the problem (restarted the sssd), just waiting to see if it all pans out. In progress (18/5)
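
For anyone wanting to sanity-check a worker node after a fix like this, a quick probe of the repository usually tells you enough - just a sketch, run on a WN:

 # check that the lhcb repository mounts and responds
 cvmfs_config probe lhcb.cern.ch

 # show cache/revision details for the repository
 cvmfs_config stat -v lhcb.cern.ch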

MANCHESTER
113744 (15/5)
For the VOMS rather than the site, Jens' request for the creation of the dirac VO, vo.dirac.ac.uk. In progress (18/5)

113692 (13/5)
A request from pheno to add support for their new cvmfs area at Manchester, and as I understand it, to support them in a new "form" (pheno.egi.eu). In progress (13/5)

LIVERPOOL
113742 (15/5)
Sno+ noticed their nagios failures at Liverpool. Rob reckons this was a problem with the DPM BDII service certificate not being updated (that's bitten me too), and fixed things this morning. Let's see how that goes. In progress (18/5)
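
A quick way to catch this sort of thing before the tests do is to check the expiry date on the copy of the host cert that the DPM daemons actually use - a sketch, assuming the usual dpmmgr layout (adjust the path for your install):

 # check when the cert the DPM services are using runs out
 openssl x509 -in /etc/grid-security/dpmmgr/dpmcert.pem -noout -subject -enddate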

LANCASTER
95299 (1/7/13!)
Lancaster's vintage glexec ticket. An update on this - after having a roundtuit session last week I was building glexec for different paths. It still needs some testing to make sure it works properly. There however definitely won't be a one-size-fits-all tarball solution. On hold (15/5)

100566 (27/1/14)
Only the crustiest old tickets for us at Lancaster! Poor perfsonar performance. Sadly I didn't get a roundtuit on this one - we're pushing to get these nodes dual stacked, as Ewan had pointed out that it would be interesting to see if IPv6 tests also saw this issue. On hold (18/5)

UCL
113721 (14/5)
The only UCL ticket, this is an EGI "low availability" ticket. However Daniela notes that the plots are on the rise, so things are looking alright. Probably want to "On Hold" it but otherwise not much to be done. In progress (14/5)

IMPERIAL
113743 (15/5)
A ticket from Durham concerning the Dirac instance at Imperial's settings for their site. Daniela hopes to get it fixed soon. In progress (15/5)

100IT
112948 (10/4)
CA certificate update at 100IT leading to a discussion of other authentication based failures. David has asked for voms information after posting his configs. In progress (13/5)

TIER 1
113035 (14/4)
Ticket tracking the decommissioning of the Tier 1 CREAM CEs. I think things are just about done now, this ticket can soon be closed. In progress (11/5)

109694 (28/10/14)
Sno+ gfal-copy ticket. Brian reports that the Tier 1 is upgrading gfal2 on their WNs, and notes that there's a lot of active debugging work going on in the area. As he eloquently puts it "situation is quite fluid". In progress (13/5)

108944 (1/10/14)
CMS AAA tests failing at the Tier 1. There's been a lot of work on this, deploying then trying to get the new xrootd director configured. New problems have cropped up, and are under investigation. In progress (11/5)

112721 (28/3)
Atlas transfer failures ("failed to get source file size"). Tracked to a odd double transfer error, possibly introduced in one of the recent "upgrades". Brian has been declaring these files as bad, and a workaround or solution is being thought about. In progress (14/5)

113705 (13/5)
Atlas transfer failures from RAL tape. Checksum failures, which Brian tracked down to the checksum type not being one Castor supports. Brian has asked if this can be changed at the CERN FTS or in rucio. Waiting for reply (14/5)

113748 (16/5)
Another atlas transfer ticket, but as the error indicates no space left at the Brunel space token being transferred to, Elena has noted that this isn't a site problem, telling the submitter to put in a JIRA ticket instead. Waiting for reply, but probably can be just closed (16/5)

112866 (7/4)
Lots of cms job failures at RAL. This has been traced to some super-hot files, and mitigation is being looked into. A candidate for perhaps On Holding, depending on the time frame of a workaround. In progress (13/5)

113320 (27/4)
CMS data transfer issues. I'm not actually too sure what's going on. There are files that need invalidating, which seems to be the root of the evil befalling transfers. The issue is being actively worked on though. In progress (18/5)

Monday 11th May 2015, 14.10 BST
22 Open UK Tickets this week.

TIER 1
There are a few tickets at the Tier 1 that are set "In Progress" but haven't received an update yet this month:
108944 (CMS AAA Tests, 30/4)
112721 (Atlas Transfer problems, 16/4)
109694 (SNO+ gfal copy trouble, 15/4)
112866 (CMS job failures, 7/4)
112819 (SNO+ arcsync troubles, 20/4)

Other Tier 1 Tickets (sorry to be picking on you guys!)
111699 (10/2)
Atlas glexec hammercloud test jobs at the Tier 1. It appears to be working, but a batch of test jobs failed because they couldn't find the "mkgltempdir" utility on some nodes ("slot1_5@lcg1742.gridpp.rl.ac.uk" and "slot1_4@lcg1739.gridpp.rl.ac.uk"). In progress (4/5)

113320 (27/4)
Maybe repeating what Daniela is going to say in the CMS update - trouble with CMS data transfers within RAL. It's under investigation, but it looks like the files in question will need to be invalidated - even if it's just to paint a clearer picture. In progress (10/5)

APEL REPUBLISHING
113473
At last update Brunel, Liverpool, Edinburgh, Birmingham and Oxford need to republish still. Oxford have their own ticket about it due to complications (113482).

UCL Tickets - Ben is starting to move to close these, some are going to be "unsolved".

GLASGOW
113095 (17/4)
Andrew asks if the timeframe for the move to Condor can be added to this ticket, for the ROD team's information. On Hold (7/4)

100IT
112948 (10/4)
No news on this 100IT ticket for a while. In progress (27/4)

Friday 1st May
The Bank Holiday weekend might muck up plans for a Ticket review this week. Just in case, some links!

Other VO Nagios page.

UK GGUS Tickets

Hope you all have a nice weekend!

A quick check of the Other VO Nagios page.

26 Open UK tickets this week.

ITWO Decommissioning
Three of the tickets are to the VOMS sites (Manchester, Oxford, IC), concerning the decommissioning of the ITWO VO. Just an FYI to y'all.

ECDF
113293 (26/4)
There was an APEL problem last month where a lot of sites needed to republish their data for the month. I think Edinburgh are the only UK site that suffered this problem, but another FYI ticket. Assigned (26/4) And solved

113181 (21/4)
Atlas production jobs not running at ECDF. Andy noticed that analysis jobs were running fine, and believes that this might be a problem scheduling pilots in time. Perhaps a multicore issue if this is only affecting production jobs. In progress (22/4) Update - solved

ATLAS GLEXEC HAMMERCLOUD PROBLEMS
111699 (TIER 1)
111703 (RALPP)
It was discovered that there was a problem in the test code, so the ball is very much in atlas' court for this one. The problem has been fixed and the tests are being rebuilt and resubmitted.

TIER 1
112721 (28/3)
ATLAS FTS failures to RAL. A rucio issue causing double-transfers has been discovered (here), which would explain the behaviour seen. No news since this revelation. In progress (16/4)

There are a number of other Tier 1 tickets that could do with either an update or On Holding

Monday 20th April 2015, 14.30 BST
24 Open UK tickets this week, only a light review. Update - down to 20 open tickets as of this morning

NGI/TIER 1
113150 (20/4)
Fresh in - the NGI has been ticketed to change the regional VO from emi.argus to ngi.argus in the gocdb. Seems a bit pedantic, but hey! I assigned it to the NGI ops, and notified RAL as keepers of the regional argus. Assigned (20/4) Update - solved

TIER 1
113035 (14/4)
Just for people's interest, the ticket tracking the decommissioning of the last of the RAL CREAM CEs. In progress (14/4)

112819 (2/4)
A SNO+ ticket I must have somehow missed last week, concerning SNO+'s manual renewing of proxies on ARC machines. Matt M has noticed that ArcSync occasionally hangs rather than timing out smoothly (although he later notes that he doesn't see the initial problems working from a different network). I'm thinking that this should be redirected at the arc devs, but I don't think they have a GGUS support group (I could be wrong, I'm well behind on the ARC curve). In progress (7/4)

EDINBURGH
113110 (17/4)
Looks like this atlas low transfer efficiency ticket can be closed. Waiting for reply (20/4) Update - solved

GLASGOW
113095 (17/4)
ROD ticket for some BDII misreporting at Glasgow. The botheration seems to be ephemeral in nature, the blunders passing with the abating of their batch system's burden. This ticket can probably be solved. In progress (17/4)


Monday 13th April 2015, 14.00 BST
24 Open tickets this week - going over all of them this week, site by site.

Fresh in this morning - 113010 and 113011 - Sno+ tickets concerning the RAL and Glasgow WMSes not updating job statuses.

RALPP
111703 (11/2)
Atlas glexec hammercloud tests failing. There's been a lot of waiting on atlas to build new HC jobs. The most recent exchange (delayed due to Easter) was asking about SELinux - but no news since the 1st. In progress (1/4)

BIRMINGHAM
112875(7/4)
Low availability ROD ticket. Availability is crawling back up, just need it to go green. On hold (13/4)

GLASGOW
112967(10/4)
Another ROD ticket for bdii errors at Glasgow. Gareth has been doing everything right investigating this. Kashif recommended ticketing the midmon unit, but Gareth has spotted that the errors correspond to high load on their ARC CE - so it might be a site problem after all - Gareth asks for clarification. Waiting for reply (13/4)

EDINBURGH
95303 (1/7/13)
Tarball glexec ticket. No news (sorry). End of April I believe was the "deadline" I set for having this made. On Hold (9/3)

LANCASTER
100566 (27/1/14)
Lancaster's poor perfsonar performance. I don't quite believe what I was seeing with the tests I performed, so I'm aiming to rerun them. On hold (13/4)

95299 (1/7/13)
Lancaster's tarball glexec ticket. Same as ECDF. On hold (9/3)

BRUNEL
112966 (13/3)
A ROD cream job submit ticket, freshly assigned this afternoon. It's a bit mean of me to bring notice to it. Assigned (13/4) And POW, Raul closed this after kicking torque into shape - solved

100IT
112948 (10/4)
100IT needed to upgrade to the latest CA release. They've done this, but there are still authentication problems. In progress (13/4)

108356 (10/9/14)
Deploying vmcatcher at 100IT. After David's questions fell on deaf ears for a while, it has been advised that the ticket be closed as this issue will be dealt with elsewhere. Whether or not it is to be "solved" or "unsolved" is open to debate! In progress (can possibly be closed) (13/4)

TIER 1
108944 (1/10/14)
CMS AAA tests failing at RAL. After a lot of work and new xrootd redirectors problems persist. It's looking to be a problem that needs the CASTOR and/or xrootd devs to look at. In progress (30/3)

112713 (27/3)
CMS asking to clean up the "unmerged area". Andrew conjured up a list of files and asked if they could be deleted - CMS responded with a "yes please then close the ticket". Has the deed been done? In progress (31/3)

109694 (28/10/14)
The Sno+ gfal copy ticket. Matt M still sees gfal-copy hang for files at RAL when he uses the GUID (SURL works). A Castor oddity perhaps? Matt asks a question about what problems like this (coupled with the move away from lcg tools) will mean for VOs that rely on the LFC. In progress (31/3)
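
For anyone wanting to poke at this themselves, the comparison Matt is doing boils down to the two copies sketched below. The LFC host, GUID and SURL are all placeholders, and it assumes a snoplus voms proxy and that the gfal2 LFC plugin picks up LFC_HOST the way the old lcg tools did:

 # resolving via the LFC/GUID - the case that hangs
 export LFC_HOST=lfc.example.ac.uk
 gfal-copy -v guid:a1b2c3d4-0000-0000-0000-000000000000 file:///tmp/test_guid.dat

 # going straight to the SURL - the case that works
 gfal-copy -v srm://srm-example.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/snoplus/test.dat file:///tmp/test_surl.dat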

112977 (10/3)
CMS high job failure rate at RAL. Related to 112896 (below) - the jobs all want that file! In progress (13/3)

https://ggus.eu/index.php?mode=ticket_info&ticket_id=112896 (9/4)
CMS Dataset access problems - caused by over a million access attempts on a single file over an 18 hour period. Andrew L comments that CMS needs to have a think about how they access pileup datasets. In progress (9/4)

111699 (10/2)
Tier 1 counterpart to 111703. A new HC stress test was submitted near the end of March, but no news on how it did. In progress (23/3)

112866 (7/4)
A different "lots of CMS job failures" ticket. Again a "hot file" seems to be the root cause. In progress (7/4)

112721 (28/3)
An atlas file access ticket, seemingly caused by some odd FTS behaviour. No answers to Shaun's question about this odd occurrence or much noise at all till today. Waiting for reply (13/4)

UCL
UCL has 6 tickets - 4 just "assigned". I'll just list them in the interests of brevity.
112371 (ROD low availability, On Hold)
112841 (atlas 0% transfer efficiency, assigned)
112873 (ROD srm put failures, assigned)
95298 (glexec ticket, on hold)
112722 (atlas checksum timeouts, in progress)
112966 (ROD job submit failures, assigned)

Tuesday 7th April

Monday 23rd March 2015, 15.30 GMT
19 Open tickets this week.

BIRMINGHAM
https://ggus.eu/?mode=ticket_info&ticket_id=112550 (23/3)
A ticket fresh off the ROD dashboard - the Birmingham CREAMs aren't being matched ("BrokerHelper: no compatible resources"). Matt W has double checked their setup and can't spot anything wrong - they've been running "normal" atlas/lhcb etc jobs fine over the last few weeks. Any advice appreciated. In progress (23/3)

TIER 1
https://ggus.eu/?mode=ticket_info&ticket_id=112495 (21/3)
MICE problems running jobs at RAL, which Andrew L discovered coincided with WMS problems that he fixed. Probably should be in "Waiting for Reply/Seeing if the problem's evaporated". In progress (23/3)

https://ggus.eu/?mode=ticket_info&ticket_id=112350 (14/3)
The cause of this Sno+ ticket, about a recent user not being able to access files due to not being in the gridmap, has been discovered. As Robert F sagely pointed out, the latest version of the mkgridmap rpm is required to talk to the voms server. Just waiting on the time for it to be updated now. In progress (17/3)
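
For anyone comparing notes, the relevant bit of an edg-mkgridmap.conf is just the group line pointing at the VOMS server - a sketch only, with an illustrative VOMS host and mapped account rather than RAL's actual config:

 # map members of the snoplus VO to pool accounts via the GridPP VOMS server
 group vomss://voms.gridpp.ac.uk:8443/voms/snoplus.snolab.ca .snoplus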

100IT
https://ggus.eu/?mode=ticket_info&ticket_id=108356 (10/9/14)
This 100IT ticket is still waiting for a reply (since mid-January). The question needs to be answered by someone familiar with the technologies and terminologies involved. Is anyone up on vmcatcher? Anyone know what other channel to pass David's query onto? Waiting for reply (15/1)

Monday 16th March 2015, 15.30 GMT

16 Open UK tickets this week. Half Red, Half Green.

RALPP and TIER 1 glexec HC tickets
111703 (RALPP)
111699 (TIER 1)
No news on these tickets since it was expected that a new stable HC job would be released last Tuesday. All very quiet.

OXFORD
112011(25/2)
Similar for this CMS glexec ticket - no news after Kashif asked for some more information way back. Waiting for reply (27/2)

TIER 1
109694(28/10/14)
The Sno+ gfal copying ticket. A lot of people are working on this, and attempts to recreate the problems seem to occasionally be devolving into "did we get this complicated command right?". At some point it might be necessary to get the gfal devs involved (are there gfal devs?). Waiting for reply (11/3)

Also there's the poor 100IT ticket, still waiting for a reply. The JET ticket also needs wrapping up - I put a reminder in at the end of it.


Monday 9th March 2015, 15.00 GMT
From last week's crusty ticket round up:

Tier 1
109694- 28/10/14
Matt M has managed to reclaim his tickets after a certificate change orphaned his old ones. Progress has resumed. Duncan has asked Matt to retry his failing tests with a simple copy to local disk example.

108944- 1/10/14
CMS AAA access at RAL. Andrew posed a question to the CMS xroot experts last week - if you have their details it might be a good idea to involve them in the ticket.

100IT
108356- 10/9/14
This VMCatcher ticket is still stuck waiting for a reply. Deafening silence for our 100IT colleagues.

EFDA-JET
97485 - 21/9/13
The Jet LHCB ticket is in the state of being wrapped up. LHCB have been removed from the local configs, and the site has been removed from LHCB's. I believe that this ticket can be terminated.

QMUL
110353 - 25/11/14
Dan's managed to get webdav working on one SE, but not t'other. Very strange, but Dan is investigating (see also 111942).

No movement on the three glexec tickets (none expected on the two tarball ones in the last week though), the Lancaster perfsonar ticket is still waiting on another batch of local tweaks (and I still need to make sense of what I'm seeing). Matt RB closed Sussex's perfsonar ticket though - nice one.

The "Normal" tickets:

Atlas gLexec Hammercloud failures (RALPP and Tier 1)
111699 (Tier 1)
111703 (RALPP)
These tests were waiting on a new stable job release being made - this has been delayed (hopefully out tomorrow).

TIER 1
111856(19/2)
This LHCB ticket about stalled jobs looks like it can be closed (LHCB no longer see a problem). Update - set to solved, the jobs were being killed for using too much memory.

OXFORD
112011(25/2)
A CMS user saw glexec failures on some nodes - Kashif asked for some more information but there has been no reply. I'd consider giving the user till the end of the week then closing the ticket if there's still no word. Waiting for reply (27/2)

Tuesday 3rd March

Concentrating on pre-2015 tickets this week in an attempt to Spring Clean the UK ggus presence. I will review these again next week - can people please take a look at these tickets if they're owned by them (or if they think they can help!).

SUSSEX - 26/11/14
110389
This is a perfsonar ticket - the initial request (reinstalling the perfsonar node) has been done a while ago but things weren't quite right. Matt RB did some soothing to this last week and asks if he's missed anything - I put it to waiting for reply this morning. Waiting for reply (24/2)

QMUL - 25/11/14
110353
Atlas wanting https access on QM's SE. Dan's been working on this nicely, carefully testing each stage of his rollout. The end is in sight here. In progress (17/2)

TIER 1 - 28/10/14
109694
This is a SNO+ ticket about getting gfal tools working for the Tier 1 - with the new version out Brian has tested it correctly (and I saw a related thread on lcg-rollout) - but no word from Sno+. Who is wrangling the other VOs in these post-Walker times? Waiting for reply (24/2)

TIER 1 - 1/10/14
108944
A CMS ticket about AAA tests at the Tier 1. This ticket is being actively worked on, with a new xrootd redirector at RAL and problems with the EU redirectors mucking things up. No problems that I can see. Waiting for reply (2/3)

100IT - 10/9/14
108356
Getting VMcatcher stuff to work at 100IT. This ticket seems to keep stalling due to lack of documentation or replies from the submitters. Waiting for reply (19/1)

LANCASTER - 27/1/14
100566
Lancaster's poor perfsonar performance. Being poked and prodded on and off over the last year, but the problems remain a mystery - Ewan's lending of an iperf endpoint has helped out greatly though; waiting on yet another network tweak. On Hold (23/2)

EFDA-JET - 21/9/13
97485
LHCB job failures at EFDA-JET. The cause of this remains a mystery, and it's the first ticket on my "to be set to unsolved" list.

UCL - 1/7/13
95298
UCL's glexec ticket. Ben's been working hard at this recently, but keeps hitting show stoppers - the latest being a performance problem possibly due to the VM he's running argus on. On hold (19/2)

ECDF and LANCASTER - 1/7/13
95303 (ECDF)
95299 (Lancaster)
glexec for the tarball. This is *still* waiting on the tarball glexec, which is again waiting on me, which is waiting on me magicking some extra tarball development time. Will be reviewed by the end of March.


Monday 23rd February 2015, 15.00 GMT
Only 15 Open UK tickets this week. Feel free to bring up any ticket-based issues of your own on this quiet week.

RALPP
111703(11/2)
This CMS glexec hammercloud ticket is looking a little quiet - no update for a while. If it's continuing offline or waiting on input could it at least be put On Hold? In progress (11/2) (The Tier-1 version of this ticket, 111699, seems to be chugging along fine - there might be useful snippets in there).

100IT
111333(22/1)
The Cloud accounting probe ticket was reopened, asking if 100IT have ticketed apel support (I assume contacting them via other means would work too) - otherwise the new cloud accounting won't be properly republished. Reopened (20/2)


IMPERIAL (but not really their issue)
111872(20/2)
Tom opened a ticket after another cern@school user had troubles using the IC SE - there have been some problems with the newer versions of the dirac UI. Sometimes it's better to go Vintage! Although after trying, Simon and Daniela haven't been able to reproduce the failure - perhaps something's up with the user's UI? Waiting for reply (23/2)

SUSSEX
110389(26/11/14)
Sussex's Perfsonar ticket. I know Matt RB has put the ticket On Hold and is very busy - is there any news/anything we can do to help? On Hold (21/1)

TIER 1
109694(28/10/14)
Sno+ gfal copy problems at RAL. Brian informs us that the latest version of the gfal tools works for him and has asked if they work for Matt M. and Co. Did you get these packages out of epel/epel-testing or somewhere else Brian? Waiting for reply (18/2)

Monday 16th February 2015, 14.30 GMT
Only 19 open UK tickets today.

TIER 1
111120(12/1)
An atlas ticket concerning transfer failures between RAL and BNL. Brian mentioned last week that the lack of recent failures is due to atlas not attempting to transfer any older data recently. Perhaps this could do with being put into the ticket (and the ticket being put On Hold, or prodded some more)? Waiting for reply (29/1)

Also 111800 (17/2)
ARC CE issues at RAL detected.

RALPP
111703(11/2)
Atlas running glexec hammerclouds - having trouble at RALPP (and RAL, see 111699). The glexec experts have gotten involved on this one, and asked to take a peek at a proxy - not sure about anyone else, but I'd feel a tad uncomfortable sharing proxies, even with known and trusted experts as in this case. Am I being overly paranoid? Either way the ticket has gone a bit quiet. In progress (11/2)

100IT
108356 & 111333
Both the 100IT tickets are in Waiting for reply - the oldest one for quite a while - David asked a question a while back and no answer. The newest one asks if the 100IT logs made it to apel safely - I think what David has to do is submit a ticket with this question to the apel support team - have I got the right end of the stick?

Biomed Tickets at Manchester and Imperial
111356 & 111357
FYI There's a note at the bottom of both of these tickets that the version of CREAM that should fix this has been delayed until the end of February(ish).
Update - I read those updates wrong - the cream update has been released and these tickets have been (perhaps erroneously) closed.


Monday 9th February 2015, 15.00 GMT

Other VO Nagios Results
At the time of writing the only site showing red that isn't suffering from an understood problem was RALPP, with org.nordugrid.ARC-CE-submit and SRM-submit test failures for gridpp, pheno, t2k and southgrid for both its CEs and its SE. The failures are between 1 and 12 hours old, so it doesn't seem to be a persistent failure, but it seems to be quite consistent. They all seem to be failing with "Job submission failed... arcsub exited with code 256: ...ERROR: Failed to connect to XXXX(IPv4):443 .... Job submission failed, no more possible targets". Anyone seen something like this before?
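
If anyone at RALPP wants to reproduce this outside of Nagios, a hand-rolled arcsub against one CE should show the same connection error - a sketch (the CE hostname is a placeholder, and it assumes you already have a valid voms proxy):

 # minimal xRSL test job
 echo '&(executable="/bin/hostname")(jobname="arc-connect-test")(stdout="out.txt")' > test.xrsl

 # submit directly to the CE in question and watch for the "Failed to connect" error
 arcsub -c arc-ce.example.ac.uk test.xrsl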

Only 20 Open UK tickets this week.

Biomed tickets:
111356 (Manchester)
111357 (Imperial)
Biomed have linked both these tickets as children of 110636, being worked on by the cream blah team. AFAIKS no sign of Cream 1.16.5 just yet.

TIER 1
111347 (22/1)
CMS consistency checks for January 2015. It looks like everything that was asked of RAL has been done by RAL, so hopefully this can be successfully closed. In progress (3/2)

111120 (12/1)
Another ticket, this time concerning a period of Atlas transfer failures between RAL and BNL, that looks like it can be closed as the failures seem to have stopped (and might well have been at the BNL end). Waiting for reply (22/1)

108944 (1/10/14)
CMS AAA test failures at RAL. Federica can't connect to the new xrootd service according to the error messages. No news for a while. In progress (29/1)

100IT
108356 & 111333
Both of these 100IT tickets are looking a bit crusty - the first is waiting for advice, the second was just put "In progress".

QMUL
110353 (25/11/14)
Dan has set up se02.esc.qmul.ac.uk to test out the latest https-accessible version of storm for dteam and atlas. As a cherry on top this node is also IPv6 enabled. I'm not sure if Dan wants others in the UK to "give it a go"? In progress (6/2)
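
If Dan does want volunteers, a "give it a go" from another UK UI could be as simple as an https listing with gfal2-util - a sketch only, since the port and path below are guesses at a typical StoRM webdav layout rather than QM's actual setup:

 # list the dteam area over https/webdav using a grid proxy
 gfal-ls -l davs://se02.esc.qmul.ac.uk:8443/dteam/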

LANCASTER
100566 (27/1/14)
(Blatantly scrounging for advice) Trying to figure out why Lancaster's perfsonar is under-performing. Ewan kindly gave us access to an iperf endpoint and it's been very useful in characterising some of the weirdness - although I'm still confused. Ewan also gave us a bunch of suggestions for testing that have been useful - next stop, window sizes. If anyone else wants to throw advice to me all wisdom donations are thankfully accepted. My advice for others is to be careful trying to connect to the default iperf port on a working DPM pool node.... In Progress (9/2)
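
For the curious, the window-size testing is nothing fancier than running iperf against the borrowed endpoint with different -w values - a rough sketch (the hostname is made up, and the far end obviously needs an iperf server listening on a non-default port, for the reason above):

 # sweep TCP window sizes against the test endpoint and compare achieved bandwidth
 for win in 256K 1M 4M 8M; do
   iperf -c iperf-test.example.ac.uk -p 5002 -t 30 -w $win
 done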

Monday 2nd February 2015, 14.00 GMT
22 Open UK tickets this month.

SUSSEX
110389 (26/11/14)
A perfsonar ticket for Sussex. Their perfsonar has been reinstalled, but needs soothing. Matt has informed us that this might have to wait a few weeks due to other issues. On Hold (21/1)

RALPP
110536 (2/12/14)
MICE job failures at RALPP - it looked like they were dying due to running out of memory. The queues have been tweaked to give MICE more, but no word from the MICE on whether this has solved the problem. Waiting for reply (12/1)

BRISTOL
110365 (25/11/14)
Another perfsonar ticket. Again the node is reinstalled, just not quite working right. Winnie is waiting for news from the other sites in a similar boat. In progress (maybe On Hold it?) (20/1)

EDINBURGH
111118 (12/1)
ECDF "low availability" ticket - just waiting for the silly alarm to clear. Daniela submitted a ticket about this foolish alarm a while ago - 107689. On Hold (19/1)

95303 (1/7/13)
glexec tarball ticket. With my tarball hat on - still no positive news on this front - it's beginning to look like this can't be done but we're having one last go. Sorry! On Hold (19/12)

MANCHESTER
110225 (18/11/14)
Change of VO Manager for helios-vo.eu. It looks like this ticket is being held up at the user end a lot. I'm not sure there's anything we can do as it involves outside CAs. On Hold (20/1)

111356 (23/1)
One of Manchester's CEs not working for biomed, due to problems with the new CREAM/old WMS communication. Alessandra gave biomed some sage advice, but I suspect this ticket will need to be prodded soon to get a response from biomed (who I agree should use a newer WMS and close it). On Hold (26/1)

LANCASTER
111547 (2/2)
I'm reporting on a ticket that I submitted to myself today. I'm not sure what that says about the world. Anyway - a ticket to track the decommissioning of one of Lancaster's CEs, as we try to do it all proper like. On Hold (2/2)

100566 (21/1/14)
Lancaster's perfsonar ticket, which I sadly let reach its first birthday. I've been prodding this offline, does anyone have the address for a regular, open iperf endpoint I could borrow? On Hold (9/1)

95299 (1/7/13)
Lancaster's tarball glexec ticket, as the ECDF one. On hold (26/1)

UCL
95298 (1/7/13)
UCL's glexec ticket. They've been having trouble getting it to behave, and at last check Ben was off ill - probably due to dealing with glexec :-) On Hold (20/1)

QMUL
110353 (25/11/14)
Atlas asking for QM's storage to be made available via https. Waiting on a production ready STORM that can provide this - Dan is trying it out on his testbed se02.esc.qmul.ac.uk, which still needs tweaking. In progress (28/1)

IMPERIAL
111357(23/1)
One of the IC CEs not working for biomed. Similar to the Manchester ticket, Daniela points to ticket 110635 and is waiting on an EMI release to fix it (due out imminently AIUI). On Hold (28/1)

EFDA-JET
97485 (21/9/13)
Jet's LHCB job failure ticket. I'm afraid I haven't been able to chase this up (partly due to only ever remembering on the first Monday of the month) - there's been no news for a while. On Hold (1/10/14)

100IT
111333 (22/1)
A ticket to 100IT and the NGI to get the cloud accounting probe upgraded. I notified 100IT, but forgot to reassign the ticket - thanks to Jeremy for doing it. Assigned (2/2)

108356(10/9/14)
Getting VMcatcher working at 100IT. David from 100IT has asked for some answers on which "glancepush" to use, but no reply for a while. Waiting for reply (19/1)

TIER 1
111477(29/1)
CMS would like to run some staging tests to warm up for Run2. The Tier 1 warned CMS of today's outage and they're happy to proceed tomorrow (the 3rd) - I think they'd like a response. In progress (30/1)

107935(27/8/14)
A ticket regarding inconsistent BDII and SRM storage numbers. Waiting on a fix from the developers regarding read-only disk accounting (I think), Brian is still on the case. Stephen B let us know that Maria the ticket submitter is on maternity leave, and asks in her stead if the numbers are expected to align now. On hold (28/1)

111120(12/1)
An atlas ticket about a large number of data transfer errors seen between RAL and BNL. Brian reckoned that this was due to shallow checksums on the old data being transferred, but had trouble looking at the BNL FTS. Regardless, the ADCoS shifter hadn't seen any errors for a week and suggests the ticket can be closed. Waiting for reply (29/1)

108944(1/10/14)
CMS AAA test problems at RAL. After setting up a new xrootd box the test failures have changed in nature, but sadly they're still failures. In progress (29/1)

111347(22/1)
CMS Consistency Check for RAL, January 2015 edition. Filelists were generated, orphan files were identified, then purged. Just need to know what CMS want to do next. Waiting for reply (26/1)

109694(28/10/14)
Sno+ ticket concerning gfal tool problems, waiting on the new release to come out (middle of this month I believe). If you don't want to wait that long then I believe the 2.8 gfal2 tools can be found in the fts3 repo at last check. On hold (20/1)


Monday 26th January 2015, 14.15 GMT
Back after being forgotten about by me:
Other VO Nagios Status:

At the time of writing I see:
Imperial: gridpp VO job submission errors (but only 34 minutes old so probably naught to worry about).
Brunel: gridpp VO jobs aborted (one of these is 94 days old, so might be something to worry about).
Lancaster: pheno failures (I can't see what's wrong, but this CE only has 10 days left to live).
Sussex: snoplus failures (but I think Sussex is in downtime).
RALPP: A number of failures across a number of CEs, all a few hours old. An SE problem?
Sheffield: gridpp VO job submission failure, but only 6 hours old.

And of course the srm-$VONAME failures at the Tier 1, which are caused by incompatibility between the tests and Castor AIUI. Things are generally looking good.

22 Open UK Tickets this week.
NGI/100IT
111333(22/1)
The NGI has been asked to upgrade the cloud accounting probe, and then notify our (only at the moment) cloud site to republish their accounting. Not entirely sure what this entails or who this falls on, I assigned it to NGI-OPERATIONS (and also noticed that 100IT isn't on the "notify site" list - odd). Assigned (22/1)

TIER 1
108944(1/10/14)
CMS AAA test failures. Andrew Lahiff reported last week that the Tier 1 is building a replacement xrootd box which is currently being prepared. If that will take a while can the ticket be put on hold? In progress (19/1)

QMUL
110353(25/11/14)
An atlas ticket, asking for httpd access at QMUL. The QM chaps were waiting on a production ready Storm that could handle this, and are preparing to test one out. This is another ticket that looks like it might need to be put On Hold (will leave that up to you chaps - there's a big difference between "slow and steady" progress and "no progress for a while"). In progress (21/1)

RHUL
111355(23/1)
A dteam ticket - concerning http access to RHUL's SE. Although the initial observation about the SE certificate being expired was incorrect (the expiry date was reported as 5/1/15, which to be fair I would read as the 5th of January and not the 1st of May!) there still is some underlying problem here with intermittent test failures. Also this ticket raises the question of what context these tests are being conducted under - anyone know, or shall we ask the submitter? In progress (26/1)
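
As an aside, checking an SE's certificate expiry remotely (and sidestepping the date-format ambiguity) is a one-liner with openssl - a sketch, with a placeholder host and the https port assumed:

 # print the notBefore/notAfter dates of the cert presented on the SE's https port
 echo | openssl s_client -connect se.example.ac.uk:443 2>/dev/null | openssl x509 -noout -dates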

BIOMED PROBLEMS:
Manchester: 111356(23/1)
Imperial: 111357(23/1)
Biomed are having job problems, looking to be caused by using crusty old WMSes to communicate with these sites' shiny up-to-date CEs. According to ticket 110635 a cream side fix should be out by the end of January (CREAM 1.16.5), although Alessandra suggests that Biomed should try to use newer, working WMSes - or Dirac instead!


Monday 19th January 2015, 14.30 GMT
23 Open UK Tickets this week.

TIER 1
108944(1/10/14)
CMS seeing AAA test failures at RAL. The tests have been restarted recently and now seem to be having some suspicious looking authentication failures. In Progress (13/1)

SHEFFIELD
111162(14/1)
Atlas complaining about httpd doors not working on Sheffield's SE. After schooling the submitter in how to submit more useful information Elena is working on it. I bring this up as in the last few days I've had quite a few of my pool nodes have their httpd daemons crash on them (they're up to date, but still SL5), which may or may not be related. In Progress (19/1)

ECDF
111118(12/1)
ECDF "low availability" ticket after a few days of argus trouble, which Wahid fixed. Now the ticket will languish for a few weeks as the alarm clears. Daniela has reminded us of her ticket against these fairly silly alarms: https://ggus.eu/?mode=ticket_info&ticket_id=107689 . In the mean time this ticket could do with being put On Hold whilst the alarm clears. In progress (19/1)

100IT
108356(10/9/2014)
Setting up VMCatcher at 100IT. After some troubles things seem to be looking up, although there are still some questions from the 100IT chaps about the configurations and what they should be using that aren't getting answers. I set the ticket to "Waiting for Reply" hoping that this will help get the attention of those in the know. Waiting for reply (15/1)

Perfsonar Tickets
110389(Sussex)
110382(TIER 1)
108273(Durham)
100566(Lancaster)
110365(Bristol)
Everyone seems to have updated their perfsonar hosts, so we're all good on that front, but a number of sites are either having trouble with their reinstalled hosts, or are still haunted by problems they had pre-reinstall. I'm afraid I have no suggestions of what to do about the growing number of these tickets though!