DCache SRM v2.2 Configuration

This page collects useful information on configuring a dCache SRM 2.2 server.

  • The main documentation can be found in the dCache Book and on the trac wiki.
  • Additional material can be found in the SRM2.2 deployment workshop agenda page.

Examples of working configuration files are below.

PNFS tags

You may find it necessary (particularly if you are a disk-only site) to specify the default access latency and retention policy in the PNFS directories. Remember that these tags are inherited by sub-directories at the time the sub-directory is created; if a sub-directory already exists, you will need to create the tags there explicitly as well.

cd /pnfs/epcc.ed.ac.uk/data/dteam
echo ONLINE > ".(tag)(AccessLatency)" 
echo REPLICA > ".(tag)(RetentionPolicy)" 
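
To confirm the tags took effect, they can be read back through the same pnfs "magic" files; a quick sanity check, using the dteam directory from the example above:

cd /pnfs/epcc.ed.ac.uk/data/dteam
cat ".(tags)()"
cat ".(tag)(AccessLatency)"
cat ".(tag)(RetentionPolicy)"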

Reserving space

Once you have set up the link groups, you can make space reservations in them by running the following command in the SrmSpaceManager cell.

reserve -vog=/lhcb -vor=lhcbprd -acclat=ONLINE -retpol=REPLICA -desc=LHCb_DST -lg=lhcb-linkGroup 24993207653 "-1" 

You can look at the status of reservations and link groups by running:

ls -l
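
The reserve and ls commands above are typed at the SrmSpaceManager cell prompt of the dCache admin interface. A rough sketch of getting there and back (assuming the standard ssh admin login on adminPort, 22223 for this instance; the exact ssh options depend on your client and dCache version):

ssh -c blowfish -p 22223 -l admin srm.epcc.ed.ac.uk
cd SrmSpaceManager
ls -l
..
logoff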

Publishing space token descriptions

Follow the instructions here. For disk-only sites, it is necessary to modify the provider so that it does not look for any tape-related information. Place the code in /opt/d-cache/gip or wherever you think is appropriate.
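
Once the provider is in place, it is worth checking what the SE actually publishes. The ldapsearch below is only a sketch: the endpoint, port and base DN (a resource-level BDII on port 2170) are assumptions, so adjust them for your site or query the site BDII instead.

ldapsearch -x -H ldap://srm.epcc.ed.ac.uk:2170 -b "mds-vo-name=resource,o=grid" \
    '(objectClass=GlueVOInfo)' GlueVOInfoTag GlueChunkKey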

dCacheSetup

#
# based on dCacheSetup.template $Revision: 1.33 $
#

#  -----------------------------------------------------------------------
#          config/dCacheSetup
#  -----------------------------------------------------------------------
#   This is the central configuration file for a dCache instance. In most
#   cases it should be possible to keep it identical across the nodes of
#   one dCache instance.
#
#   This template contains all options that can possibly be used. Most
#   may be left at the default value. If the option is commented out below
#   it indicates the default value. If it is not commented out, it is set
#   to a reasonable value.
#
#   To get a dCache instance running it suffices to change the options:
#    - java                     The java binary
#    - serviceLocatorHost       The hostname of the admin node
#   The other values should only be changed when advised to do so by the
#   documentation.
#

#  -----------------------------------------------------------------------
#          Service Location
#  -----------------------------------------------------------------------

#  ---- Service Locator Host and Port
#   Adjust this to point to one unique server for one and only one
#   dCache instance (usually the admin node)
#
serviceLocatorHost=srm.epcc.ed.ac.uk
serviceLocatorPort=11111

#  -----------------------------------------------------------------------
#          Components
#  -----------------------------------------------------------------------

#  To activate the Replica Manager you need to make changes in all 3 places:
#   1) etc/node_config on ALL ADMIN NODES in this dCache instance.
#   2) replicaSetup file on the node where the replica manager is running
#   3) define Resilient pool group(s) in PoolManager.conf

#  ---- Will Replica Manager be started?
#   Values:  no, yes
#   Default: no
#
#   This has to be set to 'yes' on every node, if there is a replica
#   manager in this dCache instance. Where the replica manager is started
#   is controlled in 'etc/node_config'. If it is not started and this is
#   set to 'yes' there will be error messages in log/dCacheDomain.log. If
#   this is set to 'no' and a replica manager is started somewhere, it will
#   not work properly.
#
#
#replicaManager=no

#  ---- Which pool-group will be the group of resilient pools?
#   Values:  <pool-Group-Name>, a pool-group name existing in the PoolManager.conf
#   Default: ResilientPools
#
#   Only pools defined in pool group ResilientPools in config/PoolManager.conf
#   will be managed by the ReplicaManager. You will need to edit config/PoolManager.conf
#   to make the replica manager work. To use another pool group defined
#   in PoolManager.conf for replication, specify the group name by changing the setting:
#       #resilientGroupName=ResilientPools
#   Please scroll down to "replica manager tuning" to make this and other changes.

#  -----------------------------------------------------------------------
#          Java Configuration
#  -----------------------------------------------------------------------

#  ---- The binary of the Java VM
#   Adjust to the correct location.
#
# should point to <JDK>/bin/java
#java="/usr/bin/java"
java=/usr/java/jdk1.5.0_10/bin/java

#
#  ---- Options for the Java VM
#   Do not change unless you know what you are doing.
#   If the globus.tcp.port.range is changed, the
#   variable 'clientDataPortRange' below has to be changed accordingly.
#
java_options="-server -Xmx512m -XX:MaxDirectMemorySize=512m \
              -Dsun.net.inetaddr.ttl=1800 \
              -Dorg.globus.tcp.port.range=50000,51000 \
              -Djava.net.preferIPv4Stack=true \
              -Dorg.dcache.dcap.port=0 \
              -Dorg.dcache.net.tcp.portrange=51001:52000 \
              -Dlog4j.configuration=file:${ourHomeDir}/config/log4j.properties \
             "
#   Option for Kerberos5 authentication:
#              -Djava.security.krb5.realm=FNAL.GOV \
#              -Djava.security.krb5.kdc=krb-fnal-1.fnal.gov \
#   Other options that might be useful:
#              -Dlog4j.configuration=/opt/d-cache/config/log4j.properties \
#              -Djavax.security.auth.useSubjectCredsOnly=false \
#              -Djava.security.auth.login.config=/opt/d-cache/config/jgss.conf \
#              -Xms400m \

#  ---- Classpath
#   Do not change unless you know what you are doing.
#
classesDir=${ourHomeDir}/classes
classpath=

#  ---- Librarypath
#   Do not change unless you know what you are doing.
#   Currently not used. Might contain .so libraries for JNI
#
librarypath=${ourHomeDir}/lib

#  -----------------------------------------------------------------------
#          Filesystem Locations
#  -----------------------------------------------------------------------

#  ---- Location of the configuration files
#   Do not change unless you know what you are doing.
#
config=${ourHomeDir}/config

#  ---- Location of the ssh
#   Do not change unless you know what you are doing.
#
keyBase=${ourHomeDir}/config

#  ---- SRM/GridFTP authentication file
#   Do not change unless you know what you are doing.
#
kpwdFile=${ourHomeDir}/etc/dcache.kpwd


#  -----------------------------------------------------------------------
#         pool tuning
#  -----------------------------------------------------------------------
#   Do not change unless you know what you are doing.
#
# poolIoQueue=
# checkRepository=true
# waitForRepositoryReady=false
#
#  ---- Which meta data repository implementation to use.
#    Values: org.dcache.pool.repository.meta.file.FileMetaDataRepository
#            org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
#    Default: org.dcache.pool.repository.meta.file.FileMetaDataRepository
#
#   Selects which meta data repository implementation to use. This is
#   essentially a choice between storing meta data in a large number
#   of small files in the control/ directory, or to use the embedded
#   Berkeley database stored in the meta/ directory (both directories
#   placed in the pool directory).
#
# metaDataRepository=org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
#
#  ---- Which meta data repository to import from.
#    Values: org.dcache.pool.repository.meta.file.FileMetaDataRepository
#            org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
#            ""
#    Default: ""
#
#   Selects which meta data repository to import data from if the
#   information is missing from the main repository. This is useful
#   for converting from one repository implementation to another,
#   without having to fetch all the information from the central PNFS
#   manager.
#
# metaDataRepositoryImport=org.dcache.pool.repository.meta.file.FileMetaDataRepository
#
#  -----------------------------------------------------------------------
#         gPlazma tuning
#  -----------------------------------------------------------------------
#   Do not change unless you know what you are doing.
#
gplazmaPolicy=${ourHomeDir}/etc/dcachesrm-gplazma.policy
#
#gPlazmaNumberOfSimutaneousRequests  30
#gPlazmaRequestTimeout               30
#
useGPlazmaAuthorizationModule=false
useGPlazmaAuthorizationCell=true
#delegateToGPlazma=false
#
#
#  -----------------------------------------------------------------------
#         dcap tuning
#  -----------------------------------------------------------------------
#
# gsidcapIoQueue=
# gsidcapIoQueueOverwrite=denied
# gsidcapMaxLogin=1500
# dcapIoQueue=
# dcapIoQueueOverwrite=denied
# dcapMaxLogin=1500
#
#  -----------------------------------------------------------------------
#         gsiftp tuning
#  -----------------------------------------------------------------------
#
#  ---- Seconds between GridFTP performance markers
#   Set  performanceMarkerPeriod to 180 to get performanceMarkers
#   every 3 minutes.
#   Set to 0 to disable performance markers.
#   Default: 180
#
performanceMarkerPeriod=10
#
# gsiftpPoolManagerTimeout=5400
# gsiftpPoolTimeout=600
# gsiftpPnfsTimeout=300
# gsiftpMaxRetries=80
# gsiftpMaxStreamsPerClient=10
# gsiftpDeleteOnConnectionClosed=true
# gsiftpMaxLogin=100
# clientDataPortRange=20000:25000
# gsiftpIoQueue=
# gsiftpAdapterInternalInterface=
# remoteGsiftpIoQueue=
# FtpTLogDir=
#
#  ---- May pools accept incoming connections for GridFTP transfers?
#    Values: 'true', 'false'
#    Default: 'false' for FTP doors, 'true' for pools
#
#    If set to true, pools are allowed to accept incoming connections
#    for FTP transfers. This only affects passive transfers. Only passive
#    transfers using GFD.47 GETPUT (aka GridFTP 2) can be redirected to
#    the pool. Other passive transfers will be channelled through a
#    proxy component at the FTP door. If set to false, all passive
#    transfers go through a proxy.
#
#    This setting is interpreted by both FTP doors and pools, with
#    different defaults. If set to true at the door, then the setting
#    at the individual pool will be used.
#
# gsiftpAllowPassivePool=false
#
#
#  -----------------------------------------------------------------------
#         common to gsiftp and srm
#  -----------------------------------------------------------------------
#
srmSpaceManagerEnabled=yes
#
# will have no effect if srmSpaceManagerEnabled is "no"
srmImplicitSpaceManagerEnabled=yes
# overwriteEnabled=no
#
#  ---- Image and style directories for the dCache-internal web server
#   Do not change unless you know what you are doing.
#
images=${ourHomeDir}/docs/images
styles=${ourHomeDir}/docs/styles

#  -----------------------------------------------------------------------
#          Network Configuration
#  -----------------------------------------------------------------------

#  ---- Port Numbers for the various services
#   Do not change unless you know what you are doing.
#
portBase=22
dCapPort=${portBase}125
ftpPort=${portBase}126
kerberosFtpPort=${portBase}127
dCapGsiPort=${portBase}128
#gsiFtpPortNumber=2811
srmPort=8443
xrootdPort=1094

#  ---- GridFTP port range
#   Do not change unless you know what you are doing.
#
clientDataPortRange=50000:52000

#  ---- Port Numbers for the monitoring and administration
#   Do not change unless you know what you are doing.
#
adminPort=${portBase}223
httpdPort=${portBase}88
sshPort=${portBase}124
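#   (Worked example: with portBase=22 these concatenate to adminPort=22223,
#    httpdPort=2288 and sshPort=22124, just as dCapPort above becomes 22125.)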
#   Telnet is only started if the telnetPort line is uncommented.
#   Debug only.
#telnetPort=${portBase}123
#
#  -----------------------------------------------------------------------
#        Maintenance Module Setup
#  -----------------------------------------------------------------------
#
# maintenanceLibPath=${ourHomeDir}/var/lib/dCache/maintenance
# maintenanceLibAutogeneratePaths=true
# maintenanceLogoutTime=18000
#

#  -----------------------------------------------------------------------
#          Database Configuration
#  -----------------------------------------------------------------------
#   The variable 'srmDbHost' is obsolete. For compatibility reasons,
#   it is still used if it is set and if the following variables are
#   not set

#   The current setup assumes that one or more PostgreSQL servers are
#   used by the various dCache components. Currently the database user
#   'srmdcache' with password 'srmdcache' is used by all components.
#   They use the databases 'dcache', 'replicas', 'companion',
#   'billing'.  However, these might be located on separate hosts.

#   The best idea is to have the database server running on the same
#   host as the dCache component which accesses it. Therefore, the
#   default value for the following variables is 'localhost'.
#   Uncomment and change these variables only if you have a reason to
#   deviate from this scheme.

#   (One possibility would be to put the 'billing' DB on a different host from
#   the pnfs server DB and companion, but keep the httpDomain on the admin
#   host.)

#  ---- pnfs Companion Database Host
#   Do not change unless you know what you are doing.
#   - Database name: companion
#
#companionDatabaseHost=localhost

#  ---- SRM Database Host
#   Do not change unless you know what you are doing.
#   - Database name: dcache
#   - If srmDbHost is set and this is not set, srmDbHost is used.
#
#srmDatabaseHost=localhost

#  ---- Space Manager Database Host
#   Do not change unless you know what you are doing.
#   - Database name: dcache
#   - If srmDbHost is set and this is not set, srmDbHost is used.
#
#spaceManagerDatabaseHost=localhost

#  ---- Pin Manager Database Host
#   Do not change unless you know what you are doing.
#   - Database name: dcache
#   - If srmDbHost is set and this is not set, srmDbHost is used.
#
#pinManagerDatabaseHost=localhost

#  ---- Replica Manager Database Host
#   Do not change unless you know what you are doing.
#   - Database name: replicas
#
# ----------------------------------------------------------------
#   replica manager tuning
# ----------------------------------------------------------------
#
# replicaManagerDatabaseHost=localhost
# replicaDbName=replicas
# replicaDbUser=srmdcache
# replicaDbPassword=srmdcache
# replicaPasswordFile=""
# resilientGroupName=ResilientPools
# replicaPoolWatchDogPeriod=600
# replicaWaitDBUpdateTimeout=600
# replicaExcludedFilesExpirationTimeout=43200
# replicaDelayDBStartTimeout=1200
# replicaAdjustStartTimeout=1200
# replicaWaitReplicateTimeout=43200
# replicaWaitReduceTimeout=43200
# replicaDebug=false
# replicaMaxWorkers=6
# replicaMin=2
# replicaMax=3
#


#  ---- Transfer / TCP Buffer Size
#   Do not change unless you know what you are doing.
#
bufferSize=1048576
tcpBufferSize=1048576

#  ---- Allow overwrite of existing files via GSIdCap
#   allow=true, disallow=false
#
truncate=false

#  ---- pnfs Mount Point for (Grid-)FTP
#   The current FTP door needs pnfs to be mounted for some file-existence
#   checks and for the directory listing. Therefore it needs to know
#   where pnfs is mounted. In future the FTP and dCap daemons will
#   ask the pnfsManager cell for help and the directory listing will be
#   done by a DirListPool.
ftpBase=/pnfs/ftpBase

#  -----------------------------------------------------------------------
#          pnfs Manager Configuration
#  -----------------------------------------------------------------------
#
#  ---- pnfs Mount Point
#   The mount point of pnfs on the admin node. Default: /pnfs/fs
#
pnfs=/pnfs/fs

#   An older version of the pnfsManager actually autodetects the
#   possible pnfs filesystems. The ${defaultPnfsServer} is chosen
#   from the list and used as the primary pnfs filesystem (currently the
#   others are ignored). The ${pnfs} variable can be used to override
#   this mechanism.
#
# defaultPnfsServer=localhost
#
#   -- leave this unless you are running an enstore HSM backend.
#
# pnfsInfoExtractor=diskCacheV111.util.OsmInfoExtractor
#
#   -- depending on the power of your pnfs server host you may
#      set this to up to 50.
#
# pnfsNumberOfThreads=4
#
#   -- don't change this
#
# namespaceProvider=diskCacheV111.namespace.provider.BasicNameSpaceProviderFactory
#
#   --- change this if you configured your postgres instance
#       differently than described in the Book.
#
# pnfsDbUser=srmdcache
# pnfsDbPassword=srmdcache
# pnfsPasswordFile=
#
#  ---- Storage Method for cacheinfo: companion or pnfs
#   Values:  'companion' -- cacheinfo will be stored in a separate DB
#            other or missing -- cacheinfo will be stored in pnfs
#   Default: 'pnfs' -- for backward compatibility of old dCacheSetup files
#
#   'companion' is the default for new installs. Old installations have
#   to use 'pnfs register' in every pool after switching from 'pnfs' to
#   'companion'. See the documentation.
#
cacheInfo=companion
#
#
#


#  ---- Location of the trash directory
#   The cleaner (which can only run on the pnfs server machine itself)
#   autodetects the 'trash' directory.  A non-empty 'trash' setting overrides
#   the autodetection.
#
#trash=

#   The cleaner stores persistency information in subdirectories of
#   the following directory.
#
# cleanerDB=/opt/pnfsdb/pnfs/trash/2
# cleanerRefresh=120
# cleanerRecover=240
# cleanerPoolTimeout=100
# cleanerProcessFilesPerRun=500
# cleanerArchive=none
#

#  ---- Whether to enable the HSM cleaner
#   Values:  'disabled', 'enabled'
#   Default: 'disabled'
#
#   The HSM cleaner scans the PNFS trash directory for deleted
#   files stored on an HSM and sends a request to an attached
#   pool to delete that file from the HSM.
#
#   The HSM cleaner by default runs in the PNFS domain. To
#   enable the cleaner, this setting needs to be set to enabled
#   at the PNFS domain *and* at all pools that are supposed
#   to delete files from an HSM.
#
# hsmCleaner=disabled
#
#
#  ---- Location of trash directory for files on tape
#   The HSM cleaner periodically scans this directory to
#   detect deleted files.
#
# hsmCleanerTrash=/opt/pnfsdb/pnfs/1
#
#  ---- Location of repository directory of the HSM cleaner
#   The HSM cleaner uses this directory to store information
#   about files it could not clean right away. The cleaner
#   will reattempt to clean the files later.
#
# hsmCleanerRepository=/opt/pnfsdb/pnfs/1/repository
#
#  ---- Interval between scans of the trash directory
#   Specifies the time in seconds between two scans of the
#   trash directory.
#
# hsmCleanerScan=90
#
#  ---- Interval between retries
#   Specifies the time in seconds between two attempts to
#   clean files stored in the cleaner repository.
#
# hsmCleanerRecover=3600
#
#  ---- Interval between flushing failures to the repository
#   When the cleaner fails to clean a file, information about this
#   file is added to the repository. This setting specifies the time
#   in seconds between flushes to the repository. Until then, the
#   information is kept in memory and in the trash directory.
#
#   Each flush will create a new file. A lower value will cause the
#   repository to be split into more files. A higher value will cause
#   a higher memory usage and a larger number of files in the trash
#   directory.
#
# hsmCleanerFlush=60
#
#  ---- Max. length of in memory queue of files to clean
#   When the trash directory is scanned, information about deleted
#   files is queued in memory. This setting specifies the maximum
#   length of this queue. When the queue length is reached, scanning
#   is suspended until files have been cleaned or flushed to the
#   repository.
#
# hsmCleanerCleanerQueue=10000
#
#  ---- Timeout for pool communication
#   Files are cleaned from an HSM by sending a message to a pool to
#   do so. This specifies the timeout in seconds after which the
#   operation is considered failed.
#
# hsmCleanerTimeout=120
#
#  ---- Maximum concurrent requests to a single HSM
#   Files are cleaned in batches. This specifies the largest number
#   of files to include in a batch per HSM.
#
# hsmCleanerRequest=100
#
#  -----------------------------------------------------------------------
#         Directory Pools
#  -----------------------------------------------------------------------
#
#directoryPoolPnfsBase=/pnfs/fs
#

#  -----------------------------------------------------------------------
#          Srm Settings for experts
#  -----------------------------------------------------------------------
#
# srmVersion=version1
# pnfsSrmPath=/
parallelStreams=1

#srmAuthzCacheLifetime=60

# srmGetLifeTime=14400000
# srmPutLifeTime=14400000
# srmCopyLifeTime=14400000


# srmTimeout=3600
# srmVacuum=true
# srmVacuumPeriod=21600
# srmProxiesDirectory=/tmp
# srmBufferSize=1048576
# srmTcpBufferSize=1048576
# srmDebug=true

# srmGetReqThreadQueueSize=10000
# srmGetReqThreadPoolSize=250
# srmGetReqMaxWaitingRequests=1000
# srmGetReqReadyQueueSize=10000
# srmGetReqMaxReadyRequests=2000
# srmGetReqMaxNumberOfRetries=10
# srmGetReqRetryTimeout=60000
# srmGetReqMaxNumOfRunningBySameOwner=100

# srmPutReqThreadQueueSize=10000
# srmPutReqThreadPoolSize=250
# srmPutReqMaxWaitingRequests=1000
# srmPutReqReadyQueueSize=10000
# srmPutReqMaxReadyRequests=1000
# srmPutReqMaxNumberOfRetries=10
# srmPutReqRetryTimeout=60000
# srmPutReqMaxNumOfRunningBySameOwner=100

# srmCopyReqThreadQueueSize=10000
# srmCopyReqThreadPoolSize=250
# srmCopyReqMaxWaitingRequests=1000
# srmCopyReqMaxNumberOfRetries=10
# srmCopyReqRetryTimeout=60000
# srmCopyReqMaxNumOfRunningBySameOwner=100

# srmPoolManagerTimeout=300
# srmPoolTimeout=300
# srmPnfsTimeout=300
# srmMoverTimeout=7200
# remoteCopyMaxTransfers=150
# remoteHttpMaxTransfers=30
# remoteGsiftpMaxTransfers=${srmCopyReqThreadPoolSize}

#
# srmDbName=dcache
# srmDbUser=srmdcache
# srmDbPassword=srmdcache
# srmDbLogEnabled=false
#
# This variable enables logging of the history
# of the srm request transitions in the database
# so that it can be examined through the srmWatch
# monitoring tool
# srmJdbcMonitoringLogEnabled=false
#
# turning this on turns off the latest changes that made the service
# honor the srm client's protocol list order for
# get/put commands;
# this is needed temporarily to support old srmcp clients
# srmIgnoreClientProtocolOrder=false

#
#  -- Set this to /root/.pgpass in case
#     you need better security.
#
# srmPasswordFile=
#
#  -- Set this to true if you want overwrite to be enabled for
#     the srm v1.1 interface as well as for the srm v2.2 interface when
#     the client does not specify the desired overwrite mode.
#     This option will be considered only if overwriteEnabled is
#     set to yes (or true).
#
# srmOverwriteByDefault=false

# ---- srmCustomGetHostByAddr enables using the BNL-developed
#  procedure for host-by-IP resolution if the standard
# InetAddress method fails
# srmCustomGetHostByAddr=false

#  ---- Allow automatic creation of directories via SRM
#   allow=true, disallow=false
#
RecursiveDirectoryCreation=true

#  ---- Allow delete via SRM
#   allow=true, disallow=false
#
AdvisoryDelete=true
#
# pinManagerDatabaseHost=${srmDbHost}
# spaceManagerDatabaseHost=${srmDbHost}
#
# ----if a space reservation request does not specify a retention policy,
#     this retention policy will be assigned by default
SpaceManagerDefaultRetentionPolicy=REPLICA
#
# ----if a space reservation request does not specify an access latency,
#     this access latency will be assigned by default
SpaceManagerDefaultAccessLatency=ONLINE
#
# ----if the transfer request comes from a door and no prior
#     space reservation was made for this file, should we try to reserve
#     space before satisfying the request
SpaceManagerReserveSpaceForNonSRMTransfers=true

# LinkGroupAuthorizationFile contains the list of FQANs that are allowed to
# make space reservations in a given link group
SpaceManagerLinkGroupAuthorizationFileName=/opt/d-cache/etc/LinkGroupAuthorization.conf

#

#  -----------------------------------------------------------------------
#          Logging Configuration
#  -----------------------------------------------------------------------

#  ---- Directory for the Log Files
#   Default: ${ourHomeDir}/log/  (if unset or empty)
#
logArea=/var/log

#  ---- Restart Behaviour
#   Values:  'new' -- logfiles will be moved to LOG.old at restart.
#            other or missing -- logfiles will be appended at restart.
#   Default: 'keep'
#
#logMode=keep

#  -----------------------------------------------------------------------
#       Billing / Accounting
#  -----------------------------------------------------------------------

#   The directory the billing logs are written to
billingDb=${ourHomeDir}/billing

#   If billing information should be written to a
#   PostgreSQL database set to 'yes'.
#   A database called 'billing' has to be created there.
billingToDb=yes

#   The PostgreSQL database host:
#billingDatabaseHost=localhost

#   EXPERT: First is default if billingToDb=no, second for billingToDb=yes
#   Do NOT put passwords in setup file! They can be read by anyone logging into
#   the dCache admin interface!
#billingDbParams=
#billingDbParams="\
#                 -useSQL \
#                 -jdbcUrl=jdbc:postgresql://${billingDatabaseHost}/billing \
#                 -jdbcDriver=org.postgresql.Driver \
#                 -dbUser=srmdcache \
#                 -dbPass=srmdcache \
#                "

#  -----------------------------------------------------------------------
#       Info Provider
#  -----------------------------------------------------------------------
#
#   The following variables are used by the dynamic info provider, which
#   is used for integration of dCache as a storage element in the LCG
#   information system. All variables are used by the client side of the
#   dynamic info provider which is called regularly by the LCG GIP (generic
#   info provider). It consists of the two scripts
#     jobs/infoDynamicSE-plugin-dcache
#     jobs/infoDynamicSE-provider-dcache
#

#  ---- Seconds between information retrievals
#   Default: 180
#infoCollectorInterval=180

#  ---- The static file used by the GIP
#   This is also used by the plugin to determine the info it should
#   output.
#   Default: /opt/lcg/var/gip/ldif/lcg-info-static-se.ldif
infoProviderStaticFile=/opt/lcg/var/gip/ldif/static-file-SE.ldif

#  ---- The host where the InfoCollector cell runs
#   Default: localhost
#infoCollectorHost=localhost

#  ---- The port where the InfoCollector cell will listen
#   This will be used by the InfoCollector cell as well as the dynamic
#   info provider scripts
#   Default: 22111
#infoCollectorPort=22111



# ------------------------------------------------------------------------
#    Statistics module
# ------------------------------------------------------------------------

#  - points to the place where statistics will be stored
statisticsLocation=${ourHomeDir}/statistics

# ------------------------------------------------------------------------
# xrootd section
# ------------------------------------------------------------------------
#
#   forbids write access in general (to avoid unauthenticated writes). Overrides all other authorization settings.
# xrootdIsReadOnly=true
#
#   allow write access only to selected paths (and their subdirectories). Overrides any remote authorization settings (e.g. from the file catalogue)
# xrootdAllowedPaths=/path1:/path2:/path3
#
#       This allows authorization to be enabled in the xrootd door by specifying a valid
#       authorization plugin. There is only one plugin at the moment, implementing token-based
#       authorization controlled by a remote file catalogue. This requires an additional parameter,
#   'keystore', holding the keypairs needed by the authorization plugin. A template keystore
#   can be found in ${ourHomeDir}/etc/keystore.temp.

# xrootdAuthzPlugin=org.dcache.xrootd.security.plugins.tokenauthz.TokenAuthorizationFactory
# xrootdAuthzKeystore=${ourHomeDir}/etc/keystore

#       the mover queue on the pool where this request gets scheduled to
# xrootdIoQueue=

PoolManager.conf

#
# Setup of PoolManager (diskCacheV111.poolManager.PoolManagerV5) at Fri Feb 01 12:23:15 GMT 2008
#
set timeout pool 120
#
#
# Printed by diskCacheV111.poolManager.PoolSelectionUnitV2 at Fri Feb 01 12:23:15 GMT 2008
#
#
#
# The units ...
#
psu create unit -store  atlas:GENERATED@osm
psu create unit -store  babar:GENERATED@osm
psu create unit -store  pheno:GENERATED@osm
psu create unit -store  hone:GENERATED@osm
psu create unit -net    0.0.0.0/0.0.0.0
psu create unit -store  dteam:GENERATED@osm
psu create unit -store  ops:GENERATED@osm
psu create unit -store  ngs:GENERATED@osm
psu create unit -store  pheno:STATIC@osm
psu create unit -store  geant4:STATIC@osm
psu create unit -store  minos:STATIC@osm
psu create unit -store  ilc:STATIC@osm
psu create unit -store  esr:GENERATED@osm
psu create unit -store  magic:STATIC@osm
psu create unit -store  alice:STATIC@osm
psu create unit -store  hone:STATIC@osm
psu create unit -store  zeus:STATIC@osm
psu create unit -store  alice:GENERATED@osm
psu create unit -store  cms:GENERATED@osm
psu create unit -store  magic:GENERATED@osm
psu create unit -store  dteam:STATIC@osm
psu create unit -store  lhcb:GENERATED@osm
psu create unit -store  t2k:GENERATED@osm
psu create unit -store  geant4:GENERATED@osm
psu create unit -store  cdf:GENERATED@osm
psu create unit -store  biomed:GENERATED@osm
psu create unit -store  cms:STATIC@osm
psu create unit -store  ngs:STATIC@osm
psu create unit -store  planck:GENERATED@osm
psu create unit -store  biomed:STATIC@osm
psu create unit -store  sixt:GENERATED@osm
psu create unit -store  na48:GENERATED@osm
psu create unit -store  fusion:STATIC@osm
psu create unit -store  atlas:STATIC@osm
psu create unit -store  ops:STATIC@osm
psu create unit -store  fusion:GENERATED@osm
psu create unit -store  ilc:GENERATED@osm
psu create unit -store  zeus:GENERATED@osm
psu create unit -store  babar:STATIC@osm
psu create unit -store  na48:STATIC@osm
psu create unit -store  planck:STATIC@osm
psu create unit -store  minos:GENERATED@osm
psu create unit -protocol */*
psu create unit -store  dzero:GENERATED@osm
psu create unit -store  cdf:STATIC@osm
psu create unit -store  *@*
psu create unit -store  t2k:STATIC@osm
psu create unit -net    0.0.0.0/255.255.255.255
psu create unit -store  dzero:STATIC@osm
psu create unit -store  esr:STATIC@osm
psu create unit -store  lhcb:STATIC@osm
psu create unit -store  sixt:STATIC@osm
#
# The unit Groups ...
#
psu create ugroup ngs-groups
psu addto ugroup ngs-groups ngs:STATIC@osm
psu addto ugroup ngs-groups ngs:GENERATED@osm
psu create ugroup na48-groups
psu addto ugroup na48-groups na48:STATIC@osm
psu addto ugroup na48-groups na48:GENERATED@osm
psu create ugroup fusion-groups
psu addto ugroup fusion-groups fusion:STATIC@osm
psu addto ugroup fusion-groups fusion:GENERATED@osm
psu create ugroup zeus-groups
psu addto ugroup zeus-groups zeus:STATIC@osm
psu addto ugroup zeus-groups zeus:GENERATED@osm
psu create ugroup esr-groups
psu addto ugroup esr-groups esr:GENERATED@osm
psu addto ugroup esr-groups esr:STATIC@osm
psu create ugroup geant4-groups
psu addto ugroup geant4-groups geant4:GENERATED@osm
psu addto ugroup geant4-groups geant4:STATIC@osm
psu create ugroup alice-groups
psu addto ugroup alice-groups alice:STATIC@osm
psu addto ugroup alice-groups alice:GENERATED@osm
psu create ugroup sixt-groups
psu addto ugroup sixt-groups sixt:GENERATED@osm
psu addto ugroup sixt-groups sixt:STATIC@osm
psu create ugroup ops
psu addto ugroup ops ops:GENERATED@osm
psu addto ugroup ops ops:STATIC@osm
psu create ugroup dzero-groups
psu addto ugroup dzero-groups dzero:STATIC@osm
psu addto ugroup dzero-groups dzero:GENERATED@osm
psu create ugroup atlas-groups
psu addto ugroup atlas-groups atlas:GENERATED@osm
psu addto ugroup atlas-groups atlas:STATIC@osm
psu create ugroup lhcb-groups
psu addto ugroup lhcb-groups lhcb:GENERATED@osm
psu addto ugroup lhcb-groups lhcb:STATIC@osm
psu create ugroup cms-groups
psu addto ugroup cms-groups cms:STATIC@osm
psu addto ugroup cms-groups cms:GENERATED@osm
psu create ugroup minos-groups
psu addto ugroup minos-groups minos:STATIC@osm
psu addto ugroup minos-groups minos:GENERATED@osm
psu create ugroup hone-groups
psu addto ugroup hone-groups hone:STATIC@osm
psu addto ugroup hone-groups hone:GENERATED@osm
psu create ugroup pheno-groups
psu addto ugroup pheno-groups pheno:GENERATED@osm
psu addto ugroup pheno-groups pheno:STATIC@osm
psu create ugroup planck-groups
psu addto ugroup planck-groups planck:GENERATED@osm
psu addto ugroup planck-groups planck:STATIC@osm
psu create ugroup babar-groups
psu addto ugroup babar-groups babar:GENERATED@osm
psu addto ugroup babar-groups babar:STATIC@osm
psu create ugroup dteam-groups
psu addto ugroup dteam-groups dteam:STATIC@osm
psu addto ugroup dteam-groups dteam:GENERATED@osm
psu create ugroup world-net
psu addto ugroup world-net 0.0.0.0/0.0.0.0
psu create ugroup magic-groups
psu addto ugroup magic-groups magic:GENERATED@osm
psu addto ugroup magic-groups magic:STATIC@osm
psu create ugroup ilc-groups
psu addto ugroup ilc-groups ilc:STATIC@osm
psu addto ugroup ilc-groups ilc:GENERATED@osm
psu create ugroup t2k-groups
psu addto ugroup t2k-groups t2k:GENERATED@osm
psu addto ugroup t2k-groups t2k:STATIC@osm
psu create ugroup cdf-groups
psu addto ugroup cdf-groups cdf:STATIC@osm
psu addto ugroup cdf-groups cdf:GENERATED@osm
psu create ugroup ops-groups
psu addto ugroup ops-groups ops:GENERATED@osm
psu addto ugroup ops-groups ops:STATIC@osm
psu create ugroup dteam
psu create ugroup any-store
psu addto ugroup any-store atlas:GENERATED@osm
psu addto ugroup any-store babar:GENERATED@osm
psu addto ugroup any-store pheno:GENERATED@osm
psu addto ugroup any-store hone:GENERATED@osm
psu addto ugroup any-store dteam:GENERATED@osm
psu addto ugroup any-store ngs:GENERATED@osm
psu addto ugroup any-store ops:GENERATED@osm
psu addto ugroup any-store pheno:STATIC@osm
psu addto ugroup any-store geant4:STATIC@osm
psu addto ugroup any-store minos:STATIC@osm
psu addto ugroup any-store ilc:STATIC@osm
psu addto ugroup any-store esr:GENERATED@osm
psu addto ugroup any-store magic:STATIC@osm
psu addto ugroup any-store alice:STATIC@osm
psu addto ugroup any-store hone:STATIC@osm
psu addto ugroup any-store zeus:STATIC@osm
psu addto ugroup any-store alice:GENERATED@osm
psu addto ugroup any-store cms:GENERATED@osm
psu addto ugroup any-store magic:GENERATED@osm
psu addto ugroup any-store dteam:STATIC@osm
psu addto ugroup any-store t2k:GENERATED@osm
psu addto ugroup any-store lhcb:GENERATED@osm
psu addto ugroup any-store geant4:GENERATED@osm
psu addto ugroup any-store cdf:GENERATED@osm
psu addto ugroup any-store biomed:GENERATED@osm
psu addto ugroup any-store cms:STATIC@osm
psu addto ugroup any-store ngs:STATIC@osm
psu addto ugroup any-store planck:GENERATED@osm
psu addto ugroup any-store biomed:STATIC@osm
psu addto ugroup any-store sixt:GENERATED@osm
psu addto ugroup any-store na48:GENERATED@osm
psu addto ugroup any-store fusion:STATIC@osm
psu addto ugroup any-store atlas:STATIC@osm
psu addto ugroup any-store fusion:GENERATED@osm
psu addto ugroup any-store ops:STATIC@osm
psu addto ugroup any-store ilc:GENERATED@osm
psu addto ugroup any-store zeus:GENERATED@osm
psu addto ugroup any-store babar:STATIC@osm
psu addto ugroup any-store na48:STATIC@osm
psu addto ugroup any-store planck:STATIC@osm
psu addto ugroup any-store minos:GENERATED@osm
psu addto ugroup any-store dzero:GENERATED@osm
psu addto ugroup any-store cdf:STATIC@osm
psu addto ugroup any-store *@*
psu addto ugroup any-store t2k:STATIC@osm
psu addto ugroup any-store dzero:STATIC@osm
psu addto ugroup any-store esr:STATIC@osm
psu addto ugroup any-store lhcb:STATIC@osm
psu addto ugroup any-store sixt:STATIC@osm
psu create ugroup biomed-groups
psu addto ugroup biomed-groups biomed:STATIC@osm
psu addto ugroup biomed-groups biomed:GENERATED@osm
#
# The pools ...
#
psu create pool pool1_23
psu create pool pool1_01
psu create pool pool1_24
psu create pool pool2_4
psu create pool pool1_04
psu create pool pool1_02
psu create pool pool2_00
psu create pool pool1_26
psu create pool pool1_05
psu create pool pool1_03
psu create pool pool1_27
psu create pool pool1_14
psu create pool pool2_01
psu create pool pool2_2
psu create pool pool1_19
psu create pool pool1_16
psu create pool pool1_25
psu create pool pool2_06
psu create pool pool1_20
psu create pool pool1_06
psu create pool pool2_7
psu create pool pool1_28
psu create pool pool1_12
psu create pool pool2_1
psu create pool pool2_03
psu create pool pool2_04
psu create pool pool1_09
psu create pool pool2_5
psu create pool pool2_05
psu create pool pool1_10
psu create pool pool2_6
psu create pool pool1_08
psu create pool pool1_07
psu create pool pool1_21
psu create pool pool1_17
psu create pool pool1_18
psu create pool pool2_02
psu create pool pool2_3
#
# The pool groups ...
#
psu create pgroup na48
psu create pgroup lhcb2
psu addto pgroup lhcb2 pool1_09
psu addto pgroup lhcb2 pool1_10
psu addto pgroup lhcb2 pool1_19
psu addto pgroup lhcb2 pool1_16
psu addto pgroup lhcb2 pool2_00
psu addto pgroup lhcb2 pool1_17
psu addto pgroup lhcb2 pool1_18
psu create pgroup ResilientPools
psu create pgroup hone
psu create pgroup ops
psu addto pgroup ops pool2_01
psu addto pgroup ops pool1_28
psu create pgroup dzero
psu create pgroup esr
psu create pgroup minos
psu create pgroup geant4
psu create pgroup lhcb
psu addto pgroup lhcb pool1_12
psu addto pgroup lhcb pool2_03
psu addto pgroup lhcb pool2_04
psu addto pgroup lhcb pool1_09
psu addto pgroup lhcb pool2_05
psu addto pgroup lhcb pool2_01
psu addto pgroup lhcb pool1_14
psu addto pgroup lhcb pool1_19
psu addto pgroup lhcb pool1_16
psu addto pgroup lhcb pool2_06
psu addto pgroup lhcb pool1_20
psu addto pgroup lhcb pool1_18
psu addto pgroup lhcb pool2_02
psu addto pgroup lhcb pool1_17
psu create pgroup zeus
psu addto pgroup zeus pool1_12
psu create pgroup planck
psu create pgroup sixt
psu create pgroup babar
psu create pgroup cms
psu addto pgroup cms pool1_12
psu create pgroup pheno
psu create pgroup cdf
psu create pgroup magic
psu create pgroup default
psu create pgroup atlas
psu addto pgroup atlas pool1_05
psu addto pgroup atlas pool1_09
psu addto pgroup atlas pool1_03
psu addto pgroup atlas pool1_08
psu addto pgroup atlas pool1_01
psu addto pgroup atlas pool1_06
psu addto pgroup atlas pool1_04
psu addto pgroup atlas pool1_02
psu addto pgroup atlas pool1_07
psu create pgroup ngs
psu create pgroup alice
psu create pgroup ilc
psu create pgroup t2k
psu create pgroup biomed
psu addto pgroup biomed pool1_12
psu create pgroup dteam
psu addto pgroup dteam pool1_23
psu addto pgroup dteam pool1_27
psu addto pgroup dteam pool1_25
psu addto pgroup dteam pool1_24
psu addto pgroup dteam pool1_28
psu addto pgroup dteam pool1_26
psu create pgroup fusion
#
# The links ...
#
psu create link ilc-link ilc-groups world-net
psu set link ilc-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link ilc-link ilc
psu create link geant4-link geant4-groups world-net
psu set link geant4-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link geant4-link geant4
psu create link lhcb-link world-net lhcb-groups
psu set link lhcb-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link lhcb-link lhcb
psu create link ngs-link ngs-groups world-net
psu set link ngs-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link ngs-link ngs
psu create link t2k-link t2k-groups world-net
psu set link t2k-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link t2k-link t2k
psu create link magic-link magic-groups world-net
psu set link magic-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link magic-link magic
psu create link babar-link babar-groups world-net
psu set link babar-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link babar-link babar
psu create link lhcb-link2 world-net lhcb-groups
psu set link lhcb-link2 -readpref=20 -writepref=30 -cachepref=20 -p2ppref=-1
psu add link lhcb-link2 lhcb2
psu create link esr-link esr-groups world-net
psu set link esr-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link esr-link esr
psu create link planck-link planck-groups world-net
psu set link planck-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link planck-link planck
psu create link fusion-link fusion-groups world-net
psu set link fusion-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link fusion-link fusion
psu create link dteam-link dteam-groups world-net
psu set link dteam-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link dteam-link dteam
psu create link zeus-link zeus-groups world-net
psu set link zeus-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link zeus-link zeus
psu create link hone-link hone-groups world-net
psu set link hone-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link hone-link hone
psu create link sixt-link world-net sixt-groups
psu set link sixt-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link sixt-link sixt
psu create link biomed-link biomed-groups world-net
psu set link biomed-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link biomed-link biomed
psu create link ops-link ops world-net
psu set link ops-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link ops-link ops
psu create link cdf-link cdf-groups world-net
psu set link cdf-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link cdf-link cdf
psu create link na48-link na48-groups world-net
psu set link na48-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link na48-link na48
psu create link dzero-link dzero-groups world-net
psu set link dzero-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link dzero-link dzero
psu create link default-link any-store world-net
psu set link default-link -readpref=10 -writepref=0 -cachepref=10 -p2ppref=-1
psu add link default-link default
psu create link cms-link cms-groups world-net
psu set link cms-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link cms-link cms
psu create link minos-link minos-groups world-net
psu set link minos-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link minos-link minos
psu create link alice-link alice-groups world-net
psu set link alice-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link alice-link alice
psu create link pheno-link pheno-groups world-net
psu set link pheno-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
psu add link pheno-link pheno
psu create link atlas-link atlas-groups world-net
psu set link atlas-link -readpref=20 -writepref=10 -cachepref=20 -p2ppref=-1
psu add link atlas-link atlas
#
# The link Groups ...
#
# dteam, ops
psu create linkGroup dteam-linkGroup
psu set linkGroup custodialAllowed dteam-linkGroup false
psu set linkGroup replicaAllowed dteam-linkGroup true
psu set linkGroup nearlineAllowed dteam-linkGroup false
psu set linkGroup outputAllowed dteam-linkGroup false
psu set linkGroup onlineAllowed dteam-linkGroup true
psu addto linkGroup dteam-linkGroup dteam-link
psu addto linkGroup dteam-linkGroup ops-link
# lhcb
psu create linkGroup lhcb-linkGroup
psu set linkGroup custodialAllowed lhcb-linkGroup false
psu set linkGroup replicaAllowed lhcb-linkGroup true
psu set linkGroup nearlineAllowed lhcb-linkGroup false
psu set linkGroup outputAllowed lhcb-linkGroup false
psu set linkGroup onlineAllowed lhcb-linkGroup true
psu addto linkGroup lhcb-linkGroup lhcb-link2
#
# Submodule [rc] : class diskCacheV111.poolManager.RequestContainerV5
#
rc onerror suspend
rc set max retries 3
rc set retry 900
rc set warning path billing
rc set poolpingtimer 600
rc set max restore unlimited
rc set sameHostCopy besteffort
rc set sameHostRetry notchecked
rc set max threads 2147483647
set pool decision -cpucostfactor=1.0 -spacecostfactor=2.0
set costcuts -idle=0.0 -p2p=2.0 -alert=0.0 -halt=0.0 -fallback=0.0
rc set p2p on
rc set p2p oncost
rc set stage oncost off
rc set stage off
rc set slope 0.0
rc set max copies 500
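
After loading PoolManager.conf (or issuing the equivalent psu commands by hand), it is worth confirming in the PoolManager cell that the pools, links and link groups came out as intended. A minimal sketch via the admin interface, assuming the psu listing commands and the save command behave as in this dCache release:

cd PoolManager
psu ls pgroup
psu ls link
psu ls linkGroup
save
..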

LinkGroupAuthorization.conf

LinkGroup dteam-linkGroup
dteam001/Role=*
/dteam/Role=*
ops001/Role=*
/ops/Role=*

LinkGroup lhcb-linkGroup
lhcb001/Role=*
/lhcb/Role=*

LinkGroup atlas-linkGroup
atlas001/Role=*
/atlas/Role=*

srm.batch

You shouldn't have to change this, but just in case.

#
# $Id: srm.batch,v 1.35 2007-10-27 02:45:18 timur Exp $
#
set printout default 2
set printout CellGlue none
onerror shutdown
#
check -strong setupFile
#
copy file:${setupFile} context:setupContext
#
#  import the variables into our $context.
#  don't overwrite already existing variables.
#
import context -c setupContext
#
#   Make sure we got what we need.
#
check -strong serviceLocatorPort serviceLocatorHost
check -strong srmPort
#
#
create dmg.cells.services.RoutingManager  RoutingMgr
#
#   The LocationManager Part
#
create dmg.cells.services.LocationManager lm \
       "${serviceLocatorHost} ${serviceLocatorPort}"
#
#
#   srm     c e l l
#
#
#   Default values (if not specified in dCacheSetup)
#
onerror continue

set context -c srmVersion         version1
set context -c srmDbHost          localhost
set context -c srmDatabaseHost    ${srmDbHost}
set context -c srmDbName          dcache
set context -c srmDbUser          srmdcache
set context -c srmDbPassword      srmdcache
set context -c srmPasswordFile    ""

set context -c useGPlazmaAuthorizationCell      true
set context -c delegateToGPlazma    false
set context -c useGPlazmaAuthorizationModule    false
set context -c gplazmaPolicy      ${ourHomeDir}/etc/dcachesrm-gplazma.policy
set context -c srmAuthzCacheLifetime   180

set context -c parallelStreams      10

set context -c srmTimeout           3600
set context -c srmVacuum            true
set context -c srmVacuumPeriod      21600
set context -c srmBufferSize        1048576
set context -c srmTcpBufferSize     1048576
set context -c srmDebug             true

set context -c srmGetReqThreadQueueSize             10000
set context -c srmGetReqThreadPoolSize                250
set context -c srmGetReqMaxWaitingRequests           1000
set context -c srmGetReqReadyQueueSize              10000
set context -c srmGetReqMaxReadyRequests             2000
set context -c srmGetReqMaxNumberOfRetries             10
set context -c srmGetReqRetryTimeout                60000
set context -c srmGetReqMaxNumOfRunningBySameOwner    100

set context -c srmPutReqThreadQueueSize             10000
set context -c srmPutReqThreadPoolSize                250
set context -c srmPutReqMaxWaitingRequests           1000
set context -c srmPutReqReadyQueueSize              10000
set context -c srmPutReqMaxReadyRequests             1000
set context -c srmPutReqMaxNumberOfRetries             10
set context -c srmPutReqRetryTimeout                60000
set context -c srmPutReqMaxNumOfRunningBySameOwner    100

set context -c srmCopyReqThreadQueueSize             10000
set context -c srmCopyReqThreadPoolSize                250
set context -c srmCopyReqMaxWaitingRequests           1000
set context -c srmCopyReqMaxNumberOfRetries             10
set context -c srmCopyReqRetryTimeout                60000
set context -c srmCopyReqMaxNumOfRunningBySameOwner    100

set context -c srmGetLifeTime      14400000
set context -c srmPutLifeTime      14400000
set context -c srmCopyLifeTime     14400000

set context -c srmVacuum          true
set context -c srmVacuumPeriod   21600

set context -c pnfsSrmPath                      /

set context -c srmPoolManagerTimeout   300
set context -c srmPoolTimeout          300
set context -c srmPnfsTimeout          300
set context -c srmMoverTimeout        7200
set context -c remoteCopyMaxTransfers  150
set context -c remoteHttpMaxTransfers   30
set context -c remoteGsiftpMaxTransfers  ${srmCopyReqThreadPoolSize}
set context -c remoteGsiftpIoQueue       ""

set context -c srmDbLogEnabled   false

set context -c RecursiveDirectoryCreation  true
set context -c AdvisoryDelete              true

set context -c kpwdFile          ${ourHomeDir}/etc/dcache.kpwd

set context -c useLambdaStation  false
set context -c lsMapFile          ${ourHomeDir}/lambdastation/config/l_station_map.xml
set context -c lsScript          ${ourHomeDir}/lambdastation/scripts/open_ls_ticket

set context -c overwriteEnabled false
set context -c srmOverwriteByDefault false

# this is the directory in which the delegated user credentials will be stored
# as files. We recommend setting permissions to 700 on this directory
set context -c srmUserCredentialsDirectory  ${ourHomeDir}/credentials

set context -c srmPnfsManager PnfsManager
set context -c srmPoolManager PoolManager

#login broker timeout in millis
set context -c srmLoginBrokerUpdatePeriod 3000

#pool manager timeout in seconds
set context -c srmPoolManagerTimeout 60

#number of doors in the random selection
#srm will order doors according to their load
#and select a certain number of the least loaded
#and then randomly choose which one to use
set context -c srmNumberOfDoorsInRandomSelection 5

#srm will hold srm requests and their history in the database
# for srmNumberOfDaysInDatabaseHistory days;
#after that they will be removed
set context -c srmNumberOfDaysInDatabaseHistory 10

# how frequently to remove old requests from the database
set context -c srmOldRequestRemovalPeriodSeconds 60

# if srmJdbcMonitoringLogEnabled is set to true, srm will store sufficient
# information about srm requests and their execution history in the database
# for the monitoring interface to work;
# if it is set to false, only the absolutely necessary information will be stored
set context -c srmJdbcMonitoringLogEnabled false

#jdbc updates are now queued and their execution is
#decoupled from the execution of the srm requests
# the srmJdbcExecutionThreadNum controls the number of threads
#that will be dedicated to the execution of these updates
# and the srmMaxNumberOfJdbcTasksInQueue controls the maximum
# length of the queue
set context -c srmJdbcExecutionThreadNum 5
set context -c srmMaxNumberOfJdbcTasksInQueue 1000

# if a space reservation request does not specify a retention policy,
# this retention policy will be assigned by default
set context -c SpaceManagerDefaultRetentionPolicy CUSTODIAL

# if a space reservation request does not specify an access latency,
# this access latency will be assigned by default
set context -c SpaceManagerDefaultAccessLatency NEARLINE

#if the transfer request comes from a door and no prior
# space reservation was made for this file, should we try to reserve
# space before satisfying the request
set context -c SpaceManagerReserveSpaceForNonSRMTransfers false

#
#  ----  Usage of Srm Space Manager
#
#   If srmSpaceManagerEnabled is on we need to use SrmSpaceManager
#   as both poolManager and poolProxy
#
onerror continue
set context -c srmSpaceManagerEnabled no


define env srmSpaceManagerOn.exe endExe
  set env -c remoteTransferManagerPoolProxy        "SrmSpaceManager"
  set env -c remoteTransferManagerPoolManager      "SrmSpaceManager"
  set context -c srmImplicitSpaceManagerEnabled true
  set context -c srmSpaceReservationStrict true
endExe

define env srmSpaceManagerOff.exe endExe
  srmSpaceReservation=false
  srmSpaceReservationStrict=false
endExe

eval ${srmSpaceManagerEnabled} yes ==
set env srmSpaceManagerIsOn ${rc}
exec env srmSpaceManagerOn.exe -run -ifok=srmSpaceManagerIsOn

eval ${srmSpaceManagerEnabled} yes !=
set env srmSpaceManagerIsOff ${rc}
exec env srmSpaceManagerOff.exe -run -ifok=srmSpaceManagerIsOff

set context -c remoteTransferManagerPoolProxy                "PoolManager"
set context -c remoteTransferManagerPoolManager              "PoolManager"


# srmCustomGetHostByAddr enables using the BNL-developed procedure
# for host-by-IP resolution if the standard InetAddress method fails
#
set context -c srmCustomGetHostByAddr false

# LinkGroupAuthorizationFile contains the list of FQANs that are allowed to
# make space reservations in a given link group
set context -c SpaceManagerLinkGroupAuthorizationFileName ""

#
# turning this on turns off the latest changes that made the service
# honor the srm client's protocol list order for
# get/put commands;
# this is needed temporarily to support old srmcp clients
set context -c srmIgnoreClientProtocolOrder false

#
#
onerror shutdown
#
### This would do the same and leave ${srmDbHost} unset
#onerror continue
#set context localhost.exe "set context -c srmDatabaseHost localhost"
#set context srmdbhost.exe "set context -c srmDatabaseHost ${srmDbHost}"
#check srmDbHost
#set context srmDbHostIsSet ${rc}
#exec context srmdbhost.exe -run -ifok=srmDbHostIsSet
#exec context localhost.exe -run -ifnotok=srmDbHostIsSet
#onerror shutdown
#

create diskCacheV111.util.ThreadManager ThreadManager \
       "default \
       -num-threads=200 \
       -thread-timeout=15 \
"

#
# RemoteHttpTransferManager
#
#
create diskCacheV111.doors.RemoteHttpTransferManager RemoteHttpTransferManager \
        "default \
        -export \
        -pool_manager_timeout=${srmPoolManagerTimeout} \
        -pool_timeout=${srmPoolTimeout} \
        -pnfs_timeout=${srmPnfsTimeout} \
        -mover_timeout=${srmMoverTimeout} \
        -max_transfers=${remoteHttpMaxTransfers} \
"
#
# RemoteGsiftpTransferManager
#
create diskCacheV111.services.GsiftpTransferManager RemoteGsiftpTransferManager \
        "default -export \
        -pool_manager_timeout=${srmPoolManagerTimeout} \
        -pool_timeout=${srmPoolTimeout} \
        -pnfs_timeout=${srmPnfsTimeout} \
        -mover_timeout=${srmMoverTimeout} \
        -max_transfers=${remoteGsiftpMaxTransfers} \
        -io-queue=${remoteGsiftpIoQueue} \
        -jdbcUrl=jdbc:postgresql://${srmDatabaseHost}/${srmDbName} \
        -jdbcDriver=org.postgresql.Driver  \
        -dbUser=${srmDbUser} \
        -dbPass=${srmDbPassword} \
        -pgPass=${srmPasswordFile}   \
        -doDbLog=${srmDbLogEnabled} \
        -poolManager=${remoteTransferManagerPoolManager} \
        -poolProxy=${remoteTransferManagerPoolProxy} \
"
#
# Copy Manager Cell
#
create diskCacheV111.doors.CopyManager CopyManager \
       "default -export \
        -pool_manager_timeout=${srmPoolManagerTimeout} \
        -pool_timeout=${srmPoolTimeout} \
        -pnfs_timeout=${srmPnfsTimeout} \
        -mover_timeout=${srmMoverTimeout} \
        -max_transfers=${remoteCopyMaxTransfers} \
        -poolManager=${remoteTransferManagerPoolManager} \
        -poolProxy=${remoteTransferManagerPoolProxy} \
"
#
# SRM Space Manager
#
create diskCacheV111.services.space.ManagerV2 SrmSpaceManager \
       "default \
        -export \
        -jdbcUrl=jdbc:postgresql://${srmDatabaseHost}/${srmDbName} \
        -jdbcDriver=org.postgresql.Driver \
        -dbUser=${srmDbUser} \
        -dbPass=${srmDbPassword} \
        -poolManager=PoolManager \
        -pnfsManager=PnfsManager \
        -defaultRetentionPolicy=${SpaceManagerDefaultRetentionPolicy} \
        -defaultAccessLatency=${SpaceManagerDefaultAccessLatency} \
        -reserveSpaceForNonSRMTransfers=${SpaceManagerReserveSpaceForNonSRMTransfers} \
        -deleteStoredFileRecord=false \
        -returnFlushedSpaceToReservation=true \
        -returnRemovedSpaceToReservation=true \
        -linkGroupAuthorizationFileName=${SpaceManagerLinkGroupAuthorizationFileName} \
        -spaceManagerEnabled=${srmSpaceManagerEnabled} \
"

create diskCacheV111.srm.dcache.Storage  SRM-${thisHostname} \
       "-srmport=${srmPort} \
        -export \
        -srmversion=${srmVersion}  \
        -timout=${srmTimeout} \
        -pnfsManager=${srmPnfsManager} \
        -pnfs-timeout=${srmPnfsTimeout} \
        -poolManager=${srmPoolManager} \
        -pool-manager-timeout=${srmPoolManagerTimeout} \
        -vacuum=${srmVacuum} \
        -vacuum-period=${srmVacuumPeriod} \
        -pnfs-srm-path=${pnfsSrmPath} \
        -gsissl=true \
        -reserve-space-implicitly=${srmImplicitSpaceManagerEnabled} \
        -space-reservation-strict=${srmSpaceReservationStrict} \
        -credentials-dir=${srmUserCredentialsDirectory} \
        -buffer_size=${srmBufferSize} \
        -tcp_buffer_size=${srmTcpBufferSize} \
        -parallel_streams=${parallelStreams} \
        -debug=${srmDebug} \
        -usekftp=false \
        -get-lifetime=${srmGetLifeTime} \
        -put-lifetime=${srmPutLifeTime} \
        -copy-lifetime=${srmCopyLifeTime} \
        -get-req-thread-queue-size=${srmGetReqThreadQueueSize} \
        -get-req-thread-pool-size=${srmGetReqThreadPoolSize} \
        -get-req-max-waiting-requests=${srmGetReqMaxWaitingRequests} \
        -get-req-ready-queue-size=${srmGetReqReadyQueueSize} \
        -get-req-max-ready-requests=${srmGetReqMaxReadyRequests} \
        -get-req-max-number-of-retries=${srmGetReqMaxNumberOfRetries} \
        -get-req-retry-timeout=${srmGetReqRetryTimeout} \
        -get-req-max-num-of-running-by-same-owner=${srmGetReqMaxNumOfRunningBySameOwner} \
        -put-req-thread-queue-size=${srmPutReqThreadQueueSize} \
        -put-req-thread-pool-size=${srmPutReqThreadPoolSize} \
        -put-req-max-waiting-requests=${srmPutReqMaxWaitingRequests} \
        -put-req-ready-queue-size=${srmPutReqReadyQueueSize} \
        -put-req-max-ready-requests=${srmPutReqMaxReadyRequests} \
        -put-req-max-number-of-retries=${srmPutReqMaxNumberOfRetries} \
        -put-req-retry-timeout=${srmPutReqRetryTimeout} \
        -put-req-max-num-of-running-by-same-owner=${srmPutReqMaxNumOfRunningBySameOwner} \
        -copy-req-thread-queue-size=${srmCopyReqThreadQueueSize} \
        -copy-req-thread-pool-size=${srmCopyReqThreadPoolSize} \
        -copy-req-max-waiting-requests=${srmCopyReqMaxWaitingRequests} \
        -copy-req-max-number-of-retries=${srmCopyReqMaxNumberOfRetries} \
        -copy-req-retry-timeout=${srmCopyReqRetryTimeout} \
        -copy-req-max-num-of-running-by-same-owner=${srmCopyReqMaxNumOfRunningBySameOwner} \
        -recursive-dirs-creation=${RecursiveDirectoryCreation} \
        -advisory-delete=${AdvisoryDelete} \
        -jdbcUrl=jdbc:postgresql://${srmDatabaseHost}/${srmDbName} \
        -jdbcDriver=org.postgresql.Driver \
        -dbUser=${srmDbUser} \
        -dbPass=${srmDbPassword} \
        -pgPass=${srmPasswordFile}   \
        -jdbc-monitoring-log=${srmJdbcMonitoringLogEnabled} \
        -num-days-history=${srmNumberOfDaysInDatabaseHistory} \
        -old-request-remove-period-secs=${srmOldRequestRemovalPeriodSeconds} \
        -jdbc-execution-thread-num=${srmJdbcExecutionThreadNum} \
        -max-queued-jdbc-tasks-num=${srmMaxNumberOfJdbcTasksInQueue} \
        -use-gplazma-authorization-cell=${useGPlazmaAuthorizationCell} \
        -delegate-to-gplazma=${delegateToGPlazma} \
        -use-gplazma-authorization-module=${useGPlazmaAuthorizationModule} \
        -gplazma-authorization-module-policy=${gplazmaPolicy} \
        -srm-authz-cache-lifetime=${srmAuthzCacheLifetime} \
        -srmLoginBroker=srm-LoginBroker \
        -protocolFamily=SRM \
        -protocolVersion=1.1.1 \
        -kpwd-file=${kpwdFile} \
#        -loginBroker=LoginBroker \
#        -brokerUpdateTime=300 \
        -start_server=false \
        -use_lambdastation=${useLambdaStation} \
        -lambdastation_map_file=${lsMapFile} \
        -lambdastation_script=${lsScript} \
        -login-broker-update-period=${srmLoginBrokerUpdatePeriod} \
        -num-doors-in-rand-selection=${srmNumberOfDoorsInRandomSelection} \
        -overwrite=${overwriteEnabled} \
        -overwrite_by_default=${srmOverwriteByDefault} \
        -custom-get-host-by-addr=${srmCustomGetHostByAddr} \
        -ignore-client-protocol-order=${srmIgnoreClientProtocolOrder}\
       "