Monday 6th October 2014, 14.30 BST
On top of the lovely tickets, there was a discussion in the Ops team last week where it was mentioned that it would be handy to look at how sites were doing on the VO nagios, so I thought I'd go over that here.
VO Nagios
Sites that seem to be having trouble on one or more of their nodes at the time of writing are:
Durham: pheno and gridpp
Lancaster: pheno and gridpp
Sussex: snoplus
EFDA-JET: gridpp, pheno, southgrid
Liverpool: gridpp, snoplus
Sheffield: gridpp, snoplus
QMUL: t2k.org
TIER 1: snoplus and t2k
Only Lancaster, Sheffield and the Tier 1 seem to be having really long-term problems though.
(I'm still trying to think how best to parse this information, so my apologies that it's poorly presented).
On to the tickets.
Only 24 open UK tickets this month (organised by site).
SUSSEX
108765(24/9)
Sussex have a ROD ticket, originating from a glue validation error (although it's just picked up some SHA-2 failures). Matt RB was away though, so not much progress - Matt, can you get to it this week? In progress (3/10)
RALPP
109115(6/10)
A fresh ticket from CMS, complaining that RALPP don't have any backup squids listed in their site xml file. Assigned (6/10) and closed (7/10), as the site name was an old one (the old name being too long!).
BRISTOL
106325(18/6)
CMS pilots losing network connectivity. CMS have confirmed that it is only a subset of the Bristol clusters seeing pilots dropping connections. Winnie has continued to poke and prod this, and between her and CMS they've (more or less) ruled out NATing as the cause of the problem. Bristol are still quite stuck, and kind of hoping some unrelated network tweaks might sweep this issue away. On Hold (2/10)
ECDF
95303(1/7/13)
tarball glexec deployment - see Lancaster entry on the same issue. On hold (29/8)
DURHAM
108273(5/9)
Durham experienced a sudden, odd change in their perfsonar results (outbound bandwidth went up, inbound dropped). The Durham chaps were looking into this but were interrupted by this shellshock business. Oliver has included some long-term plans in the ticket and will update it again when they have their perfsonar back. On hold (6/10)
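If a quick manual sanity check is wanted while the perfsonar box is out of action, a throughput test by hand is one option. A rough sketch below, assuming the bwctl client is installed - the target hostname is a placeholder:

    # 20-second iperf throughput test to a remote test point
    # (hostname is made up - substitute a real perfsonar host)
    bwctl -T iperf -t 20 -c ps-bandwidth.example.ac.uk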
SHEFFIELD
108716(23/9)
Snoplus jobs not running at Sheffield. Elena had to bash one of her CEs into shape; it should be fixed now, and she has asked Matt M if he still sees a problem. Waiting for reply (6/10)
MANCHESTER
109001(2/10)
Not quite a site problem, but David M was having trouble committing to the SVN hosted at Manchester (and a reminder that I believe the "official" way of reporting problems with these services is to ticket the site). It looks like this has been solved and the ticket can probably be closed. In progress (3/10)
109049(4/10)
Atlas transfer problems - the underlying issue being a downed (and dead) disk server. Alessandra is doing the lost file declaration stuff and offered to provide lists of these files to the users directly. Not much more that Manchester can do. In progress (6/10)
LANCASTER
100566(27/1)
Poor, unexplained perfsonar performance. Although some ideas have been floated on how to tackle this, holidays and then shellshock have got in the way of implementing them. On hold (1/10)
108715(23/9)
Sno+ jobs not running at Lancaster. Hopefully a tweak to the information system on our CEs has fixed this - as Duncan pointed out, things are looking okay on the VO nagios. I've asked Matt M how things are looking for "real" Sno+ work. Waiting for reply (1/10)
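As an aside, a quick way of eyeballing what your CEs publish for a given VO is to query the site BDII directly. A rough sketch - the hostname here is made up, and I'm assuming the usual Glue 1.3 schema:

    # What do the CEs publish for the snoplus VO view?
    # (point this at your own site BDII)
    ldapsearch -x -LLL -H ldap://site-bdii.example.ac.uk:2170 -b o=grid \
      '(&(objectClass=GlueVOView)(GlueVOViewLocalID=snoplus*))' \
      GlueCEStateStatus GlueCEStateRunningJobs GlueCEStateWaitingJobs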
95299(1/7/13)
tarball glexec ticket. As mentioned in last week's Ops meeting, due to holidays there has been no progress over the last month but things look hopeful. On hold (9/9)
UCL
95298(1/7/13)
Non-tarball glexec ticket. Ben's been trying to install this, but is having dependency troubles - did anyone who uses rpms notice this when they last tried to install the glexec WN? In progress (29/9)
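If anyone fancies comparing notes, something like the below should show whether the dependencies resolve on a stock SL6 box. I'm assuming the EMI-3 repos and the emi-glexec_wn metapackage name here (from memory, so double check):

    # List what the glexec WN metapackage wants to pull in
    yum deplist emi-glexec_wn

    # Or just attempt the install and note any dependency errors it throws
    yum install emi-glexec_wn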
109039(3/10)
Another Glue2 validation ROD ticket. In progress (3/10)
IMPERIAL
108723(23/9)
Chris W has ticketed Imperial with a few dirac file catalogue queries. Duncan responded with some documentation that others might also find useful and some other information. I believe the ticket is now waiting for feedback from Chris (who may in turn be waiting for feedback from the other VO user groups). Waiting for reply (1/10)
EFDA-JET
108735(23/9)
biomed have asked that JET activate the biomed cvmfs repo at their site. Ticket seen but no news or action. In progress (23/9)
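For reference, turning on an extra cvmfs repo on the workers is normally only a couple of lines of config. A sketch, assuming the repo lives under the egi.eu domain (the exact repository name is the thing to confirm with biomed):

    # /etc/cvmfs/default.local - add the repo to the comma-separated list
    CVMFS_REPOSITORIES="atlas.cern.ch,lhcb.cern.ch,biomed.egi.eu"

    # Then check the client can actually mount it
    cvmfs_config probe biomed.egi.eu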
97485(21/9/13)
One of the ancient tickets. LHCb seeing authentication errors at JET. No change. On hold (1/10)
109080(6/10)
A fresh ROD ticket about a number of alarms - at first glance I would say a certificate has expired. In progress (6/10)
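Checking for an expired host certificate is a quick one - something along these lines, with the host and port as placeholders:

    # Pull the host certificate off a service and print who it is and when it expires
    echo | openssl s_client -connect ce01.example.ac.uk:8443 2>/dev/null \
      | openssl x509 -noout -subject -enddate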
100IT
108356(10/9)
VM images from fedcloud.egi.eu not available at 100IT. This ticket showed up an issue with creating an AppDB profile, but that has since been solved. No news on the state of this ticket other than that the issue persists. In progress (1/10)
THE TIER 1
107935(27/8)
"BDII vs SRM inconsistent storage capacity numbers". No news on this for a long time. This ticket really could do with some love (or at least on holding!). In progress (3/9)
106324(18/6)
CMS pilots losing connection, similar to the Bristol ticket. The issue has been tracked to being *something* in the Tier 1's internal network after comparing firewall rules to RALPP. CMS have updated the ticket with some more information and some nice plots, but the long and the short of it is the problem persists. In progress (1/10)
108546(16/9)
atlas seeing failures on the RAL-LCG2_HIMEM_SL6 queue. The ticket is in an odd state - the atlas shifters seem to think the problem was transient, but Gareth and co are seeing a lot of load on the disk servers despite nothing showing on BigPanda. The RAL team are keeping an eye on it, but this ticket could do with some updates/on holding in the meantime. In progress (22/9)
107880(26/8)
Sno+ asking RAL for help with (or alternatives to) srmcp for a small group of users on seemingly awkward SUSE machines. Some input from others, but not much word from Sno+ or the Tier 1 - Chris, could you please take a peek with your small-VO hat on? In progress (30/9)
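If srmcp itself is the sticking point, the newer gfal2 tools might be an alternative worth a look. A sketch - the local file, endpoint and SURL path are all placeholders:

    # Copy a local file to an SRM endpoint with gfal2 rather than srmcp
    gfal-copy file:///tmp/testfile \
      srm://srm-endpoint.example.ac.uk/some/snoplus/path/testfile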
108944(1/10)
CMS running into a lot of "file not found" errors when running an AAA check at RAL, and asking if things are alright. Looking over the whole Castor namespace, it appears that all files are present and correct, which doesn't explain why CMS had trouble finding them. In progress (1/10)
108845(27/9)
Atlas seeing gridftp timeouts. This looks to be a hotspot problem (at this point in the review I'm just skim-reading tickets). Atlas also report seeing deletion errors, and have included links. I'm not sure whether this ticket will be impacted by this afternoon's Castor intervention. Still very much In progress (5/10)