The SRM ("Storage Resource Manager") is a protocol for storage resource management. The SRM protocol does not itself transfer any data; it is used to ask a Mass Storage System (MSS) to make a file ready for transfer, or to create space in a disk cache to which a file can be uploaded. The file is then transferred to or from a Transfer URL (TURL).
By abuse of notation, a Storage Element (SE) that provides an SRM interface is often called "an SRM".
Very simplified description of the protocol
There are two different versions of the SRM protocol (actually there are more, but for the purposes of this discussion we can pretend there are two). Be careful relying on the full API because (1) much of the API is optional, and (2) some clients don't even implement all mandatory functions.
Version 1.1 is relatively simple, not least because implementations rarely cover the full API. To read a file, the client issues a "get"; the SRM returns an acknowledgement containing a request id. The client then polls the status of that id, and when the file is ready the status contains a TURL. When the client has finished reading the file, it sets the status of the request to "Done". Upload works similarly ("put"), and again the client must set the status to "Done" when the upload has finished.
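The get/poll/Done cycle above can be sketched as a small state machine. The MockSRM class below is a toy in-memory stand-in for a real SRM endpoint (it is not any real client library), used only to make the sequence of calls concrete:

```python
# Schematic sketch of the SRM v1.1 "get" cycle, against a toy in-memory
# SRM stand-in. Names and behaviour are illustrative, not a real API.

class MockSRM:
    def __init__(self):
        self._requests = {}
        self._next_id = 0

    def get(self, surl):
        # The server acknowledges immediately with a request id;
        # staging the file happens asynchronously.
        self._next_id += 1
        self._requests[self._next_id] = {"surl": surl, "polls": 0,
                                         "status": "Pending", "turl": None}
        return self._next_id

    def get_request_status(self, request_id):
        req = self._requests[request_id]
        req["polls"] += 1
        if req["status"] == "Pending" and req["polls"] >= 2:
            # Pretend the file became ready after a couple of polls.
            req["status"] = "Ready"
            req["turl"] = "gsiftp://pool.example.org/data/file1"
        return req["status"], req["turl"]

    def set_request_status(self, request_id, status):
        self._requests[request_id]["status"] = status


def fetch(srm, surl):
    request_id = srm.get(surl)                 # 1. ask the SRM to stage the file
    while True:
        status, turl = srm.get_request_status(request_id)  # 2. poll the request id
        if status == "Ready":
            break
    # 3. transfer the file using the TURL (e.g. via GridFTP) -- omitted here
    srm.set_request_status(request_id, "Done")  # 4. tell the SRM we are finished
    return turl


turl = fetch(MockSRM(), "srm://se.example.org/mydata/file1")
print(turl)
```

The important point is step 4: the SRM cannot reclaim the cached copy (or the reserved upload space) until the client marks the request "Done", which is why badly behaved clients that forget this step are a recurring operational problem.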
Version 1.1 also contains a delete function ("srmAdvisoryDelete").
Notably absent from 1.1 are directory functions (mkdir, ls). Pinning is usually not supported although it's supposed to be part of the API.
Version 2.1 is more complex. It adds SRM file types (sometimes misleadingly called "lifetime"), the related space types, guaranteed reservations, and pinning. It also has some directory functions, although there are concerns about implementing these: listing a large directory is practically a denial-of-service attack on the server.
The DPM from CERN/EGEE was the only storage element that implemented v2.1 of the SRM specification, and no clients were ever written to support it.
Version 2.2 is the most up-to-date version of the specification. Its main difference from previous versions is the introduction of new concepts for space management. In particular:
- Access Latency (ONLINE, NEARLINE, OFFLINE)
- Retention Policy (REPLICA, OUTPUT, CUSTODIAL)
These express the properties of disk and tape storage media. Users can ask for a particular combination, and this request can be refused or changed (after negotiation with the client) to match what the storage element can provide. Clients can also request that files stored under one combination of access latency and retention policy be moved to another. This is useful, for example, when a user wants all data on disk for quick reprocessing; it can be moved back to tape once the work is complete.
Multiple software providers support SRM 2.2, as it is regarded as an essential service for the LHC experiments. dCache, DPM, CASTOR, StoRM and BeStMan are all servers that implement it, and the FTS and lcg_utils client tools can interact with these servers. Much work has gone into understanding how the spec is implemented in practice and ensuring that the different providers can all interoperate. This has led to a WLCG SRM 2.2 subset of the original spec: the set of methods that providers must support in order to meet the needs of the LHC experiments.
As far as the SRM is concerned, files are referenced by Site URLs (also called Storage URLs), or SURLs. The URL scheme is "srm" (e.g., srm://hostname/path). The path need not bear any relation to where the file is actually stored; it is meaningful only to the SRM. Higher-level services resolve Logical File Names or Globally Unique Identifiers into SURLs. The SRM resolves the SURL into a physical location and returns a TURL pointing to that location, or to another copy of the file, depending on how the SRM optimises access.
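The naming layers above can be shown with a short sketch: an SURL is just a URL with the "srm" scheme, and the SRM maps it to a TURL whose scheme names a real transfer protocol. The lookup table standing in for the SRM's internal name-space mapping, and all hostnames and paths in it, are invented for illustration:

```python
# Sketch of SURL -> TURL resolution. The "catalogue" dict is a toy
# stand-in for the SRM's internal mapping; real SRMs decide the TURL
# dynamically (e.g. picking a pool or a replica).

from urllib.parse import urlparse

def surl_to_turl(surl, catalogue):
    """Resolve an SURL to a TURL via a toy lookup table."""
    parsed = urlparse(surl)
    assert parsed.scheme == "srm", "SURLs use the srm:// scheme"
    # The SURL path is meaningful only to the SRM, not to the filesystem.
    return catalogue[(parsed.hostname, parsed.path)]

catalogue = {
    ("se.example.org", "/dteam/file1"):
        "gsiftp://pool3.example.org:2811/raid/dteam/0004/file1",
}
print(surl_to_turl("srm://se.example.org/dteam/file1", catalogue))
```

Note that the returned TURL's scheme (here gsiftp) is what actually determines the transfer protocol; the SRM itself never moves the data.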
This page is used to track the status of SRM v2.2 deployment in WLCG.