Big computing in the Big Apple

Fri 1st June 2012

Last week the Computing in High Energy Physics meeting (CHEP), one of the foremost events in the GridPP calendar, was held in New York. CHEP brings together experts from across the world to discuss the technology and services supporting the global high energy physics community. Over 500 people attended the meeting, and GridPP was strongly represented, with members giving talks and presenting posters.

CHEP is a global gathering held every 18 months that showcases cutting-edge developments in the technology used by the physics community. As a leading contributor to the Worldwide LHC Computing Grid (wLCG), GridPP sent many of its members to New York to present and discuss the project’s successes since the last CHEP in 2010. As in previous years, the meeting was preceded by the wLCG workshop, which covered recent operational experience and future plans, experiment-site discussions and the reports of the Technical Evolution Groups (TEGs).

Prof Roger Jones, from Lancaster University, was at both meetings: “New York is a great venue for this meeting, and our hosts made us very welcome. The wLCG workshop was especially invaluable as we plan for the future and we look to find common solutions between the experiments and thereby allow more of the development work to be carried by fewer people, largely within those experiments. It also provided a great social space for the creative, informal discussions that really drive a project. CHEP was particularly good at updating us on the evolving technologies in computing, which will present some interesting challenges in the rest of this decade. Finally, it allowed us to pay tribute to one of the bigger figures in our field, Rene Brun, who retires this year. While many have sat late at night cursing PAW or ROOT, there is no denying the huge impact he has had on our work in particle physics”.

The week was busy, with a mix of plenaries and parallel sessions, alongside the poster sessions and the social events. Mark Mitchell, who works on GridPP at Glasgow, was very impressed: “For me the main highlights of CHEP were that all of the presentations, talks and posters from GridPP were very well received. The work conducted by Wahid, Sam, Alessandra and Brian especially sparked discussion. Chris Walker’s poster on LUSTRE and the new network infrastructure at QMUL was a major talking point during the first poster session. CHEP, as always, is extremely helpful for the grid community: it brings together the US and European perspectives, which can be very different. Having a forum for all the experiments and computing support groups to come together promotes further collaboration and utilisation of new ideas discussed during the conference”.

The full list of GridPP contributions is below:

Presentations

  • Analysing I/O bottlenecks in LHC data analysis on grid storage resources – Abstract
  • The ATLAS ROOT-based data formats: recent improvements and performance measurements – Abstract
  • Model of shared ATLAS Tier2 and Tier3 facilities in EGI/gLite Grid flavour – Abstract
  • Methods and the computing challenges of the realistic simulation of physics events in the presence of pile-up in the ATLAS experiment – Abstract
  • Fast simulation for ATLAS: Atlfast-II and ISF – Abstract
  • Multi-core job submission and grid resource scheduling for ATLAS AthenaMP – Abstract
  • From IPv4 to eternity – the HEPiX IPv6 working group – Abstract
  • The CMS workload management system – Abstract
  • Acceleration of multivariate analysis techniques in TMVA using GPUs – Abstract

Posters

  • Major changes to the LHCb Grid computing model in year 2 of LHC data – Abstract
  • Consistency between Grid Storage Elements and File Catalogs for the LHCb experiment’s data – Abstract
  • Investigating the performance of CMSSW on the AMD Bulldozer micro-architecture – Abstract
  • Optimising the read-write performance of mass storage systems through the introduction of a fast write cache – Abstract
  • The Memory of MICE, the Configuration Database – Abstract
  • Preparing for long-term data preservation and access in CMS – Abstract
  • Systematic analysis of job failures at a Tier-2, and mitigation of the causes – Abstract
  • Bolting the Door – Abstract
  • Engaging with IPv6: addresses for all – Abstract
  • TAG Base Skimming In ATLAS – Abstract
  • Taking the C out of CVMFS: providing repositories for country-local VOs – Abstract
  • Performance of Standards-based transfers in WLCG SEs – Abstract
  • Tier2 procurements experiences in the UK – Abstract
  • UK efforts to improve networking rates on WAN transfers – Abstract
  • Integrated cluster management at the Manchester Tier-2 – Abstract
  • MARDI-Gross – Data Management Design for Large Experiments – Abstract
  • AutoPyFactory: A Scalable Flexible Pilot Factory Implementation – Abstract
  • EGI Security Monitoring integration into the Operations Portal – Abstract
  • ATLAS Distributed Computing Shift Operation in the first 2 full years of LHC data taking – Abstract
  • BESIII and SuperB: Distributed job management with Ganga – Abstract
  • Key developments of the Ganga task-management framework – Abstract
  • The WorkQueue project – a task queue for the CMS workload management system – Abstract
  • A scalable low-cost Petabyte scale storage for HEP using Lustre – Abstract
  • An Active CAD Geometry Handling System for MAUS Software – Abstract

© Copyright GridPP
If you wish to reproduce this piece, please credit GridPP and contact us to say you are using it.