Latest revision as of 14:25, 25 July 2012
UKI-SOUTHGRID-BRIS-HEP
Topic: HEPSPEC06
Machine | OS | Kernel | 32/64-bit | Memory | gcc | HS06 Total | HS06 Per Core |
Dual Xeon E5405 2.0GHz | SL5.3 | 2.6.18-128.4.1.el5 | 64 | 16GB | 4.1.2 | 59.705 | 7.46 |
Dual AMD Opteron 2378 2.4GHz | SL5.3 | 2.6.18-164.9.1.el5 | 64 | 16GB | 4.1.2 | 74.34 | 9.2925 |
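A quick sketch of how the "Per Core" figures follow from the totals. The core counts here are assumptions (both boxes taken as dual quad-core, i.e. 8 cores each), chosen because they are consistent with the per-core ratios shown in the table above:

```python
# Derive per-core HEPSPEC06 from the table's totals.
# Core counts are assumptions: both machines treated as dual quad-core (8 cores),
# which matches the Total / Per Core ratios in the table.
machines = {
    "Dual Xeon E5405 2.0GHz":       {"hs06_total": 59.705, "cores": 8},
    "Dual AMD Opteron 2378 2.4GHz": {"hs06_total": 74.34,  "cores": 8},
}

for name, m in machines.items():
    per_core = m["hs06_total"] / m["cores"]
    print(f"{name}: {per_core:.4f} HS06/core")
```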
Topic: Middleware_transition
- StoRM v1.3 SE (lcgse02.phy.bris.ac.uk) is still on SL4, as is its slave GridFTP server gridftp01.phy.bris.ac.uk. StoRM v1.4 and 1.5 are not supported on SL5, and StoRM v1.6 and 1.7 (which are supported on SL5) are very unstable and not yet production-ready (just ask Chris Walker, who has had a painful experience with them). Waiting on a stable StoRM release for SL5 - hopefully soon.
- lcgnetmon (owned and operated by RAL) is still on SL4, AFAIK
gLite3.2/EMI
ARGUS : Not yet
BDII_site : gLite 3.2
CE (CREAM/LCG) : 2 x gLite 3.2 CREAM-CE
glexec : Not yet
SE : see above
UI : gLite 3.2
WMS : None at this site (content to use others')
WN : gLite 3.2
Comments
Topic: Protected_Site_networking
- This section intentionally left blank
Topic: Resiliency_and_Disaster_Planning
- This section intentionally left blank
Topic: SL4_Survey_August_2011
- lcg-CE (lcgce04.phy.bris.ac.uk): planning to upgrade to a CREAM-CE very soon.
- StoRM v1.3 SE (lcgse02.phy.bris.ac.uk) is still on SL4, as is its slave GridFTP server gridftp01.phy.bris.ac.uk. StoRM v1.4 and 1.5 are not supported on SL5, and StoRM v1.6 and 1.7 (which are supported on SL5) are very unstable and not yet production-ready (just ask Chris Walker, who has had a painful experience with them). Waiting on a stable StoRM release for SL5 - hopefully soon.
- lcgnetmon (owned and operated by RAL) is still on SL4, AFAIK
Topic: Site_information
Memory
1. Real physical memory per job slot:
- PP-managed cluster: Old WN (being phased out ASAP): 512MB/core; New WN: 2GB/core
- HPC-managed cluster: 2GB/core
2. Real memory limit beyond which a job is killed:
- None known (Unix default = unlimited) (both clusters)
3. Virtual memory limit beyond which a job is killed:
- None known (Unix default = unlimited) (both clusters)
4. Number of cores per WN:
- PP-managed cluster: Old WN (being phased out ASAP) 2 cores; New WN: 8 cores
- HPC-managed cluster: 4 cores
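Since neither cluster configures a kill limit (the Unix default is unlimited), a job can verify the limits it actually sees on a WN. A minimal sketch using Python's standard `resource` module, which on these Linux WNs should report "unlimited" for both limits:

```python
# Sketch: report the memory limits visible to a job on a worker node.
# On both Bristol clusters no limit is configured, so soft and hard limits
# should both come back as "unlimited" (the Unix default noted above).
import resource

def limit_str(rlimit):
    """Format the (soft, hard) pair for a given resource limit."""
    soft, hard = resource.getrlimit(rlimit)
    fmt = lambda v: "unlimited" if v == resource.RLIM_INFINITY else str(v)
    return f"soft={fmt(soft)} hard={fmt(hard)}"

print("virtual memory (RLIMIT_AS): ", limit_str(resource.RLIMIT_AS))
print("resident set   (RLIMIT_RSS):", limit_str(resource.RLIMIT_RSS))
```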
Comments:
Network
1. WAN problems experienced in the last year:
- None
2. Problems/issues seen with site networking:
- None
3. Forward look:
- The university link to SWERN either has already been, or soon will be, upgraded to 2.5 Gbps, AFAIK
Comments:
Topic: Site_status_and_plans
SL5 WNs
Current status (date): (Dec 2009) VM CE in production with SL5 WN passing all OPS SAM tests. More WN soon.
SRM
Current status (date): 1.6.11-3sec
Planned upgrade: No plans to upgrade, plan to retire DPM in Dec 2009.
StoRM SE must be upgra^H^H^H^H^H rebuilt (there's no upgrade path!) to v1.4, with support for other VOs enabled on it.
ARGUS/glexec
Current status (date): Not yet installed (22.3.11)
Planned deployment: Waiting to hear how it goes elsewhere first.
Comments:
CREAM CE
Current status (date): Installed and working (22.3.11)
Planned deployment:
Comments: