Protected Site networking


Page for tracking site status and plans

This page has been created to provide a central GridPP reference page that allows the project to understand the status and plans of each site with regard to their internal networks. It can be updated by the site administrators concerned or the Tier-2 (deputy) coordinators.




The Imperial subnet is .

7th June ops meeting: Reasonably defined subnet, reasonably defined monitoring. Some firewall trickery means the Cacti plots aren't 100% accurate.


IP address range/ subnet

  • Monitoring: Cacti, Janet Netsight
  • The university now has two 10Gbit links (for failover). The grid site is connected to the campus backbone at 10Gbit, and can achieve most of this. The plan is to put our traffic down one link and college traffic down the other - with mutual failover (at which point bandwidth limits may apply).
  • Jumbo frames (MTU=9000) can now reach campus from Janet - but we are only using this in tests at present.
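
A common way to test that jumbo frames survive the whole path is a do-not-fragment ping sized exactly to the MTU. The payload arithmetic behind that test (a worked example, not taken from this page) is:

```python
# Worked arithmetic: the ICMP echo payload that just fits a 9000-byte MTU,
# as used in a "ping -M do -s <size>" path-MTU test.
MTU = 9000
IPV4_HEADER = 20   # IPv4 header without options
ICMP_HEADER = 8    # ICMP echo request header
payload = MTU - IPV4_HEADER - ICMP_HEADER
print(payload)     # 8972
```

If a ping with this payload and the don't-fragment bit set gets through, the 9000-byte MTU holds end to end.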


  • Separate subnet -
  • Plan to set up dedicated monitoring (e.g. Cacti).
  • Dedicated 1Gb/s link to Janet at IC - until April the link was shared with the college and capped at ~300 Mb/s
  • Network hardware upgrade - 8x Dell PowerConnect 6248, double-stacked (2x48Gb/s + redundant loop) - SN: 4x1Gb/s bonded & VLANed, WN: 2x1Gb/s bonded (private network only)


7th June Ops meeting: subnet, some monitoring



  • Both the local user systems and the grid site are on separate, routable subnets
  • The Tier-2 is actually split across two subnets, the “Grid” and the “HEC”.
  • The subnet ranges are &
  • The Tier-2 is further split between private and public VLANs.
  • There is a 10G backbone to the Tier 2 network.
  • There is a 1-Gb dedicated light path from the Tier-2 to RAL, and we also share the University's link to Janet (although I believe we are capped at 1Gb/s).
  • All switches are managed by ISS. We have access to the University Cacti pages.


  • Grid cluster is on a sort-of separate subnet (138.253.178/24, soon to be 138.253.202/23)
  • Shares some of this with local HEP systems
  • Most of these addresses may be freed up with local LAN reassignments
  • Monitored by Cacti/weathermap, Ganglia, Sflow/ntop (when it works), snort (sort of)
  • Grid site behind local bridge/firewall, 2G to CSD, 1G to Janet
  • Shared with other University traffic
  • Possible upgrades to 10G for WAN soon
  • Grid LAN under our control, everything outside our firewall CSD controlled
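
The planned move from a /24 to a /23 doubles the available address space, which can be checked with the standard library's `ipaddress` module (the `.0` host parts below are assumed; the page quotes only the prefixes):

```python
import ipaddress

# Subnets quoted on this page; the ".0" network addresses are assumed.
old = ipaddress.ip_network("138.253.178.0/24")
new = ipaddress.ip_network("138.253.202.0/23")
print(old.num_addresses, new.num_addresses)  # 256 512
```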


7th June ops meeting: Own network, skips the university. Have a few subnets. Uses a number of tools - weathermap, RRD graphs on each switch.


  • Separate subnet -
  • Monitor with Ganglia.




WAN: 40Gbps (4x 10Gbps) on Science DMZ

Internal Networking:

  • Nodes 1Gbps
  • Node Switches 4x 10Gbps (HP)
  • Disks 2x 10Gbps
  • Services 10Gbps
  • Disk/Service Switches 2x 40Gbps (Cisco)


  • (shared) cluster on own subnet
  • WAN: 10Gb uplink from Eddie to SRIF
  • 20Gb SRIF@Bush to SRIF@KB; 4Gb SRIF@KB to SRIF@AT; 1Gb SRIF@AT to SPOP2 - the weakest link, but dedicated; not saturating and could be upgraded

  • 10Gb from SPOP2 to SJ5



  • Upgraded to 4x 48-port 10Gb/s core switches + 16x 40Gb/s interfaces: Extreme Networks X670
  • Upgraded to 12x 48-port 1Gb/s core switches + 16x 10Gb/s interfaces + 24x 40Gb/s interfaces: Extreme Networks X460
  • Upgraded internal backbone now capable of 320 Gbps.
  • Cluster network now passes through the two main physics core switches directly to ClydeNET - no interaction with University firewalls
  • Primary WAN link 10 Gb/s; effective upper limit around 8-9 Gb/s
  • Secondary WAN link 10 Gb/s; effective upper limit around 8-9 Gb/s. To be installed during summer of 2012.
  • Monitoring: Nagios/Cacti/Ganglia/Ridgeline
  • In process of installing NagVis




  • Whole cluster is on a well defined subnet (
  • We have a direct 10Gbit link to the shared cluster machines, which are also on the above subnet
  • Connection to JANET via a direct 10Gbit link
  • Use Ganglia for the majority of online network monitoring.


  • Service nodes on public 137.222.79 subnet
  • cluster WN & some behind-scenes service nodes on private 10.129.5 subnet
  • 10Gb to newer WN
  • 2x 10Gb uplink from main service nodes (some are VMs on a VM host) - IIRC this bypasses the site firewall, going direct to JANET
  • Monitoring: Ganglia, Pakiti
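
The public/private split above can be verified programmatically: 10.129.5.x falls in RFC 1918 private space, while 137.222.79.x does not (the /24 prefix lengths below are assumed; the page quotes only the first three octets):

```python
import ipaddress

# /24 prefix lengths are assumed; the page quotes only the first three octets.
public = ipaddress.ip_network("137.222.79.0/24")
private = ipaddress.ip_network("10.129.5.0/24")
print(private.is_private, public.is_private)  # True False
```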


  • 10Gbps backbone with 10Gbps connection onto the University network (CUDN).
  • Connection onto CUDN shared with Group Systems, but GRID on separate 10Gbps network below core switch.
  • Core switch and two 10Gbps GRID switches are DELL PowerConnect 8024F
  • All the WNs are on 1Gbps PowerConnect 6248 switches, with 10Gbps uplink to backbone.
  • Dual 10Gbps University connections on to JANET (second acts as failover).
  • GRID systems on same public subnet ( as Group Systems - though GRID occupies and Group Systems occupy
  • Monitoring – use MRTG for traffic history.



  • It's all on one subnet (
  • It has a dedicated 10Gbit connection to the university backbone, and the backbone and offsite link are both 20Gbit.
  • University JANET link scheduled to go to 2*100Gbit in April/May 2022
  • Oxford Tier-2 connection scheduled to go to 2* 10Gbit summer 2022
  • Monitoring is patchy, but bits and pieces come from Ganglia, and some from OUCS monitoring



  • Whole site is on subnet
  • Grid storage is on
  • 10Gb internal networking between switch stacks and to some storage (10Gb to all storage is planned).
  • 10Gb connection to RAL site network.



  • Tier1 is a subnet of the RAL /16 network
  • Two overlaid subnets: and 130.246.216/21
  • Third overlaid /22 subnet for Facilities Data Service
  • To be physically split later as traffic increases
  • Monitoring: Cacti with weathermaps
  • Site SJ5 link: 20Gb/s + 20Gb/s failover direct to SJ5 core via two routes (Reading, London)
  • T1 <-> OPN link: 10Gb/s + 10Gb/s failover, two routes
  • T1 <-> Core: 10GbE
  • T1 <-> SJ5 bypass: 10Gb/s
  • T1 <-> PPD-T2: 10GbE
  • Limited by line speeds and who else needs the bandwidth
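
The claim that the Tier1 /21 sits inside the RAL /16 can be checked with `ipaddress` (the `.0` host parts below are assumed; the page quotes "130.246.216/21" and a /16):

```python
import ipaddress

# The ".0" network addresses are assumed; the page quotes "130.246.216/21".
ral_site = ipaddress.ip_network("130.246.0.0/16")
tier1 = ipaddress.ip_network("130.246.216.0/21")
print(tier1.subnet_of(ral_site))  # True
```

The same `subnet_of` check would show whether the overlaid /22 for the Facilities Data Service nests inside the site range, once its prefix is filled in above.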