Protected Site networking

Page for tracking site status and plans

This page has been created to provide a central GridPP reference page that allows the project to understand the status and plans of each site with regard to their internal networks. It can be updated by the site administrators concerned or the Tier-2 (deputy) coordinators.

LondonGrid

UKI-LT2-BRUNEL

UKI-LT2-IC-HEP

The Imperial subnet is 146.179.246.0/23.

7th June ops meeting: reasonably defined subnet, reasonably defined monitoring. Some firewall trickery means the Cacti plots aren't 100% accurate.
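
As a quick illustration (not part of the site configuration), the /23 quoted above can be inspected with Python's standard ipaddress module; the host address used below is purely a made-up example.

  import ipaddress

  # The IC-HEP subnet quoted above.
  site = ipaddress.ip_network("146.179.246.0/23")

  print(site.network_address, "-", site.broadcast_address)  # 146.179.246.0 - 146.179.247.255
  print(site.num_addresses)                                 # 512 addresses in a /23

  # Membership test for an arbitrary (illustrative) host address.
  host = ipaddress.ip_address("146.179.247.42")
  print(host in site)                                       # True: a /23 spans two consecutive /24s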

UKI-LT2-QMUL

IP address range/subnet: 194.36.10.0/24

  • Monitoring: Cacti, Janet Netsight
  • The university now has two 10Gbit links (for failover). The grid site is connected to the campus backbone at 10Gbit and can achieve close to this rate. The plan is to put our traffic down one link and college traffic down the other, with mutual failover (at which point bandwidth limits may apply).
  • Jumbo frames (MTU=9000) can now reach campus from Janet, but we are only using this in tests at present (see the sketch after this list).
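
A minimal way to confirm that 9000-byte frames really traverse the path is to send unfragmentable pings just under the MTU. The sketch below assumes a Linux host with iputils ping; the target hostname is a placeholder, not a real QMUL endpoint.

  import subprocess

  # Placeholder far-end host; substitute a machine on the other side of the Janet path.
  TARGET = "ping-target.example.ac.uk"

  # 9000-byte MTU minus 20 (IP header) and 8 (ICMP header) leaves 8972 bytes of payload.
  # "-M do" sets the Don't Fragment bit, so oversized frames fail rather than fragment.
  result = subprocess.run(
      ["ping", "-c", "3", "-M", "do", "-s", "8972", TARGET],
      capture_output=True, text=True,
  )
  print(result.stdout)
  if result.returncode == 0:
      print("9000-byte frames appear to traverse the path.")
  else:
      print("Path does not carry 9000-byte frames end to end.")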

UKI-LT2-RHUL

  • Separate subnet - 134.219.225.0/24
  • Plan to set up dedicated monitoring such as Cacti.
  • Dedicated 1Gb/s link to Janet at IC - until April the link was shared with the college and capped at approximately 300 Mb/s.
  • Network hardware upgrade - 8x Dell PowerConnect 6248, double-stacked (2x48Gb/s + redundant loop); SN: 4x1Gb/s bonded & VLANed, WN: 2x1Gb/s bonded (only on the private network). A bond-status check is sketched below.
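
For bonded interfaces like those above, the Linux bonding driver exposes per-slave state under /proc/net/bonding/. This is a generic sketch assuming the standard bonding driver and a bond device named bond0 (an assumption), not RHUL's actual monitoring setup.

  from pathlib import Path

  # Assumed bond device name; adjust to whatever the node actually uses.
  BOND = "bond0"

  # The bonding driver reports the aggregation mode and each slave's link state here.
  for line in Path(f"/proc/net/bonding/{BOND}").read_text().splitlines():
      if line.startswith(("Bonding Mode:", "Slave Interface:", "MII Status:", "Speed:")):
          print(line.strip())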

UKI-LT2-UCL-HEP

7th June ops meeting: subnet defined, some monitoring.

NorthGrid

UKI-NORTHGRID-LANCS-HEP

  • Both the local user systems and the grid site are on separate, routable subnets
  • The Tier-2 is actually split across two subnets, the “Grid” and the “HEC”.
  • The subnet ranges are 194.80.35.0/25 & 148.88.23.0/26
  • The Tier-2 is further split between private and public VLANs.
  • There is a 10G backbone to the Tier 2 network.
  • There is a 1-Gb dedicated light path from the Tier-2 to RAL, and we also share the University's link to Janet (although I believe we are capped at 1Gb/s).
  • All switches are managed by ISS. We have access to the University Cacti pages.

UKI-NORTHGRID-LIV-HEP

  • Grid cluster is on a sort-of separate subnet (138.253.178.0/24, soon to be 138.253.202.0/23)
  • Shares some of this with local HEP systems
  • Most of these addresses may be freed up with local LAN reassignments
  • Monitored by Cacti/Weathermap, Ganglia, sFlow/ntop (when it works), Snort (sort of)
  • Grid site behind local bridge/firewall, 2G to CSD, 1G to Janet
  • Shared with other University traffic
  • Possible upgrades to 10G for WAN soon
  • Grid LAN under our control, everything outside our firewall CSD controlled

UKI-NORTHGRID-MAN-HEP

7th June ops meeting: own network, which skips the university network. Has a few subnets. Uses a number of tools - Weathermap, RRD graphs on each switch.

UKI-NORTHGRID-SHEF-HEP

  • Separate subnet - 143.167.3.0/24
  • Monitor with Ganglia.

ScotGrid

UKI-SCOTGRID-DURHAM

UKI-SCOTGRID-ECDF

  • (shared) cluster on own subnet
  • WAN: 10Gb uplink from Eddie to SRIF
  • 20Gb from SRIF@Bush to SRIF@KB
  • 4Gb from SRIF@KB to SRIF@AT
  • 1Gb from SRIF@AT to SPOP2 - weakest link, but dedicated; not saturating and could be upgraded
  • 10Gb from SPOP2 to SJ5

File:ECDF-network.jpg

UKI-SCOTGRID-GLASGOW

  • Upgraded to 4 x core switches with 48x10Gb/s ports + 16x40Gb/s interfaces (Extreme Networks X670)
  • Upgraded to 12 x core switches with 48x1Gb/s ports + 16x10Gb/s interfaces + 24x40Gb/s interfaces (Extreme Networks X460)
  • Upgraded internal backbone now capable of 320 Gb/s.
  • Cluster network now passes through the two main Physics core switches directly to ClydeNET - no interaction with University firewalls
  • Primary WAN link: 10 Gb/s, effective upper limit 8-9 Gb/s, serving the 130.209.239.0/25 range
  • Secondary WAN link: 10 Gb/s, effective upper limit 8-9 Gb/s, same 130.209.239.0/25 range; to be installed during summer 2012
  • Monitoring: Nagios/Cacti/Ganglia/Ridgeline
  • In the process of installing NagVis

File:Glasgow-network-new.png

SouthGrid

UKI-SOUTHGRID-BHAM-HEP

  • Whole cluster is on a well defined subnet (193.62.56.0/24).
  • We have a direct 10Gbit link to the shared cluster machines, which are also on the above subnet
  • Connection to JANET via a direct 10Gbit link
  • Use Ganglia for the majority of online network monitoring.

UKI-SOUTHGRID-BRIS-HEP

  • Service nodes on the public 137.222.79 subnet
  • Cluster WNs and some behind-the-scenes service nodes on the private 10.129.5 subnet (see the address sketch after this list)
  • 10Gb to the newer WNs
  • 2 x 10Gb uplinks from the main service nodes (some are VMs on a VM host) - IIRC this bypasses the site firewall, going direct to JANET
  • Monitoring: Ganglia, Pakiti
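
To illustrate the public/private split above, the standard ipaddress module can classify addresses; the two host addresses below are invented examples within the listed subnets.

  import ipaddress

  service_node = ipaddress.ip_address("137.222.79.10")   # example address on the public subnet
  worker_node  = ipaddress.ip_address("10.129.5.20")     # example address on the private subnet

  print(service_node.is_private)  # False: publicly routable
  print(worker_node.is_private)   # True: RFC 1918 space, only reachable inside the site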

UKI-SOUTHGRID-CAM-HEP

  • 10Gbps backbone with 10Gbps connection onto the University network (CUDN).
  • Connection onto CUDN shared with Group Systems, but GRID on separate 10Gbps network below core switch.
  • Core switch and two 10Gbps GRID switches are DELL PowerConnect 8024F
  • All the WNs are on 1Gbps PowerConnect 6248 switches, with 10Gbps uplink to backbone.
  • Dual 10Gbps University connections on to JANET (second acts as failover).
  • GRID systems on same public subnet (131.111.66.0/24) as Group Systems - though GRID occupies 131.111.66.128/25 and Group Systems occupy 131.111.66.0/25 (see the sketch after this list).
  • Monitoring – use MRTG for traffic history.
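
As a quick check of the /24 split described above, Python's ipaddress module reproduces the two /25 halves; the host address at the end is just an illustrative value.

  import ipaddress

  campus = ipaddress.ip_network("131.111.66.0/24")
  group_systems, grid = campus.subnets(prefixlen_diff=1)

  print(group_systems)  # 131.111.66.0/25   (Group Systems)
  print(grid)           # 131.111.66.128/25 (GRID)

  # Illustrative host address: anything at .128 or above falls in the GRID half.
  print(ipaddress.ip_address("131.111.66.200") in grid)  # True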

EFDA-JET

UKI-SOUTHGRID-OX-HEP

  • It's all on one subnet (163.1.5.0/24)
  • It has a dedicated 1Gbit connection to the university backbone, and the backbone and offsite link are both 10Gbit.
  • Monitoring is patchy, but bits and pieces come from Ganglia, and some from OUCS monitoring

File:Oxford-network.jpg

UKI-SOUTHGRID-RALPP

  • Whole site is on subnet 130.246.44.0/22
  • Grid storage is on 130.246.47.128/25
  • 10Gb internal networking between switch stacks and to some storage (10Gb to all storage is planned).
  • 10Gb connection to RAL site network.

Tier1

RAL-LCG2-Tier-1

  • Tier1 is a subnet of the RAL /16 network
  • Two overlaid subnets: 130.246.176.0/21 and 130.246.216.0/21
  • Third overlaid /22 subnet for Facilities Data Service
  • To be physically split later as traffic increases
  • Monitoring: Cacti with weathermaps
  • Site SJ5 link: 20Gb/s + 20Gb/s failover direct to the SJ5 core via two routes (Reading, London)
  • T1 <-> OPN link: 10Gb/s + 10Gb/s failover, two routes
  • T1 <-> Core: 10GbE
  • T1 <-> SJ5 bypass: 10Gb/s
  • T1 <-> PPD-T2: 10GbE
  • Limited by line speeds and who else needs the bandwidth

File:Tier1-network-2012.png