Argus Server
Introduction
Argus is a centralized authorization service for distributed services. Details are available here:
https://twiki.cern.ch/twiki/bin/view/EMI/ArgusEMIDocumentation
https://twiki.cern.ch/twiki/bin/view/EMI/Argus
Installation
The Argus server needs a host certificate. Enable the EMI-2 and EPEL repositories, then install the EGI core CA certificates:
yum install ca-policy-egi-core
Install fetch-crl and enable it:
yum install fetch-crl
/sbin/chkconfig fetch-crl-cron on
/sbin/service fetch-crl-cron start
Install the Argus server:
yum install emi-argus
Configuration
The Argus server needs only a few variables in site-info.def. The VO_* variables can also be defined in the vo.d directory:
ARGUS_HOST=<HOST_NAME>
PAP_ADMIN_DN=<DN of admin>
# Users and groups definition for the grid and group mapfiles
USERS_CONF=
GROUPS_CONF=
# Supported VOs
VOS="<list of VOs>"
VO_<VO>_VOMSES
VO_<VO>_VOMS_CA_DN
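For illustration only, a filled-in fragment might look like the sketch below; every value (host, DN, file paths, VOMS server and CA DNs) is hypothetical and must be replaced with your site's own data:

ARGUS_HOST=argus.example.ac.uk
PAP_ADMIN_DN="/C=UK/O=eScience/OU=Example/CN=argus admin"
# users.conf and groups.conf as used by YAIM (paths are illustrative)
USERS_CONF=/etc/yaim/users.conf
GROUPS_CONF=/etc/yaim/groups.conf
VOS="ops dteam"
VO_DTEAM_VOMSES="'dteam voms.example.org 15004 /C=XX/O=Example/CN=voms.example.org dteam'"
VO_DTEAM_VOMS_CA_DN="'/C=XX/O=Example/CN=Example CA'"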
Configure with YAIM:
/opt/glite/yaim/bin/yaim -c -s site-info.def -n ARGUS_server
YAIM does not configure or load any VO policies on the Argus server, so at this stage the server is configured and running but does nothing, as no policies have been loaded.
PAP Admin CLI
The Argus package provides a Policy Administration Point (PAP) command-line interface (CLI) for interacting with the Argus server; it is installed with the emi-argus package.
pap-admin --help   # gives the complete list of options
pap-admin lp       # list the policies that have been loaded
pap-admin apf      # add policies from a file
pap-admin rap      # remove all policies
Defining policies
Policies have to be defined separately for every CE and for glexec on the WNs.
Policy for CE
For example, the policy for a CE is defined in this way:
resource "http://physics.ox.ac.uk/creamce" { obligation "http://glite.org/xacml/obligation/local-environment-map" {} action ".*" { rule permit { vo = "ops" } rule permit { vo = "dteam" } rule permit { vo = "atlas" } rule permit { vo = "alice" } ....other VO's .. } }
Policy for Glexec
resource "http://authz-interop.org/xacml/resource/resource-type/wn" { obligation "http://glite.org/xacml/obligation/local-environment-map" {} action "http://glite.org/xacml/action/execute" { rule permit {pfqan = "/atlas/Role=pilot" } rule permit {pfqan = "/atlas/Role=lcgadmin" } rule permit {pfqan = "/atlas/Role=production" } rule permit {pfqan = "/atlas" } rule permit {pfqan = "/ops/Role=pilot" } rule permit {pfqan = "/ops/Role=lcgadmin" } rule permit {pfqan = "/ops" } ....other VO's.... } }
Loading policies
Policies for all CEs and glexec can be defined in a single file and then loaded into the Argus server through the PAP CLI. For example, create a text file argus_policy and copy all the policies into it:
pap-admin apf argus_policy
service argus-pdp restart
service argus-pepd restart
Now you can list the policies that have been loaded:
pap-admin lp
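The output echoes each policy back in the same simplified policy language, prefixed with the name of the PAP it came from. Illustratively, assuming the CE policy above was loaded into the default local PAP:

default (local):

resource "http://physics.ox.ac.uk/creamce" {
    obligation "http://glite.org/xacml/obligation/local-environment-map" {}
    action ".*" {
        rule permit { vo = "ops" }
        ...
    }
}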
Argus client configuration
Configuring CE to use Argus authorization
No extra installation steps are required to enable authorization through the Argus server. Just add these configuration variables to site-info.def and run YAIM for the CREAM CE:
USE_ARGUS=yes
ARGUS_PEPD_ENDPOINTS="https://<ARGUS_SERVER>:8154/authz"
CREAM_PEPC_RESOURCEID="<string to match resource attribute in "Policy for CE" section, above>"
# e.g. CREAM_PEPC_RESOURCEID="http://physics.ox.ac.uk/creamce"
Configuring ARC to use Argus authorization
See also the Imperial notes, the Liverpool notes and the T1 notes.
Add (for example) lines like the following to arc.conf:
... [gridftpd] ... unixmap="* lcmaps liblcmaps.so /usr/lib64 /etc/lcmaps/lcmaps.db voms" unixmap="999:999 all" ...
Choose a UID/GID pair that does not exist on your system. The last unixmap line is required because ARC falls back to it if the mapping fails; mapping to a non-existent account ensures that unmapped users are rejected rather than mapped to a real one.
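To confirm that the chosen pair really is unused (999:999 as in the example above), a quick check; no output from either command means the IDs are free:

getent passwd 999
getent group 999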
where /etc/lcmaps/lcmaps.db contains:
path = /usr/lib64/lcmaps

verify_proxy = "lcmaps_verify_proxy.mod"
     "-certdir /etc/grid-security/certificates"
     "--discard_private_key_absence"
     "--allow-limited-proxy"

pepc = "lcmaps_c_pep.mod"
     "--pep-daemon-endpoint-url <pependpoint>"
     "--resourceid <resourceid>"
     "--actionid http://glite.org/xacml/action/execute"
     "--capath /etc/grid-security/certificates/"
     "--certificate /etc/grid-security/hostcert.pem"
     "--key /etc/grid-security/hostkey.pem"

get_account:
verify_proxy -> pepc
where <pependpoint> is the PEP daemon endpoint URL on your Argus host (of the form https://<ARGUS_SERVER>:8154/authz, as above) and <resourceid> is your local CE ID.
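For illustration, a filled-in pepc stanza, assuming a hypothetical Argus host argus.example.ac.uk (on the standard 8154/authz endpoint used elsewhere on this page) and the CREAM CE resource ID from the earlier example:

pepc = "lcmaps_c_pep.mod"
     "--pep-daemon-endpoint-url https://argus.example.ac.uk:8154/authz"
     "--resourceid http://physics.ox.ac.uk/creamce"
     "--actionid http://glite.org/xacml/action/execute"
     "--capath /etc/grid-security/certificates/"
     "--certificate /etc/grid-security/hostcert.pem"
     "--key /etc/grid-security/hostkey.pem"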
Configuring WN to use Argus for glexec authorization
glexec has to be installed on the WN separately:
yum install emi-glexec_wn

or, depending on the middleware version:

yum install glexec-wn
The configuration variables required for glexec on the WN are:
GLEXEC_WN_ARGUS_ENABLED="yes"
ARGUS_PEPD_ENDPOINTS="https://<ARGUS_SERVER>:8154/authz"
GLEXEC_WN_OPMODE="setuid"
GLEXEC_WN_LOG_DESTINATION=file
GLEXEC_WN_LOG_FILE=/var/log/glexec/glexec_log
GLEXEC_WN_INPUT_LOCK=flock
GLEXEC_WN_TARGET_LOCK=flock
Run YAIM:
/opt/glite/yaim/bin/yaim -c -s /etc/yaim/site-info-emi.def -n WN -n TORQUE_client -n GLEXEC_wn
The regional Nagios tests the glexec service using the ops pilot role.
Corner Case: Additional mount option for WNs using Lustre for pool account Home Dirs
If Lustre is being used for the home directories of pool accounts on the worker nodes, there can be an issue with glexec being unable to lock the input proxy when reading it, because Lustre hasn't been mounted with support for flock (the default locking mechanism used by glexec). You can see this from the following messages in the glexec logs (set log_level to 5 in /etc/glexec.conf on the WNs):
glexec[51695] 20140205T145808Z: Found key 'glexec:input_lock_mechanism' with value 'flock'.
glexec[51695] 20140205T145808Z: Using "flock()" file locking mechanism to read the proxy files at the (default) $GLEXEC_SOURCE_PROXY and $GLEXEC_CLIENT_CERT locations.
glexec[51695] 20140205T145808Z: Reading in GLEXEC_CLIENT_CERT='/mnt/lustre/grid/users/pilatl01/home_cream_445503617/cream_445503617.proxy'.
glexec[51695] 20140205T145808Z: Could not lock file during reading of proxy /mnt/lustre/grid/users/pilatl01/home_cream_445503617/cream_445503617.proxy.
glexec[51695] 20140205T145808Z: Reading proxy failed.
glexec[51695] 20140205T145808Z: Failed to lock $GLEXEC_CLIENT_CERT=/mnt/lustre/grid/users/pilatl01/home_cream_445503617/cream_445503617.proxy, $GLEXEC_SOURCE_PROXY=(NULL) or destination proxy.
To fix this, remount Lustre on the WNs with the 'flock' option (see man mount.lustre for details).
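For example, an /etc/fstab entry with flock enabled might look like this (the MGS NID and filesystem name are illustrative; the mount point matches the one in the logs above):

mgs.example.org@tcp:/lustre  /mnt/lustre  lustre  defaults,flock  0 0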
Configuring dCache Argus Integration
Please see Introduction to gPlazma2 talk.
Additional notes from Chris Brew:

For our configuration I’ve got:

account requisite argus

in /etc/dcache/gplazma.conf and:

gplazma.argus.endpoint = https://argus.pp.rl.ac.uk:8154/authz

in /etc/dcache/dcache.conf.

That’s all the additional config needed, ... it works in 2.13. It may have worked in 2.10 but I never got it to.
Configuring DPM Argus Integration
Please see the DPM Argus Integration page.
Configuring StoRM Argus Integration
Please see StoRM page.
Testing the ARGUS Server and Worker Node
Make a proxy on some UI server.
voms-proxy-init --voms dteam
voms-proxy-info
Log in to a test worker node as root. Copy in the proxy with scp, from the location shown by voms-proxy-init, to /tmp/x509up_u460.
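For example (the UI hostname and source proxy path are illustrative; use the path reported on the UI):

scp ui.example.ac.uk:/tmp/x509up_u12345 /tmp/x509up_u460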
Change ownership of proxy to a pilot account.
chown pilSOMEVO01:SOMEVO /tmp/x509up_u460
where pilSOMEVO01 is a pilot account for the VO in question.
Change permissions.
chmod 600 /tmp/x509up_u460
Switch to the pilot user:
su - pilSOMEVO01
Run these commands to set up the test:
export GLEXEC_CLIENT_CERT=/tmp/x509up_u460
export GLEXEC_SOURCE_PROXY=/tmp/x509up_u460
export X509_USER_PROXY=/tmp/x509up_u460
Do the test
/usr/sbin/glexec /usr/bin/id
If all is well, you will see something like this:
uid=24683(dteam184) gid=2028(dteam) groups=2028(dteam)
If you don't see that, something is wrong. Check the ARGUS policies if it says "Not Applicable".
With glexec going away, this probably will not work on CentOS 7. ARC CE uses LCAS and LCMAPS via "external" scripts, which can be used to test your Argus server.
Configuring Argus for Central Banning
Introduction
A requirement has arisen to implement central banning. Most of this material came from Ewan MacMahon’s TB_SUPPORT email (title: NGI Argus requests for NGI_UK) and from the central banning documentation available here: Argus_Global_Banning_Setup_Overview. The central banning architecture consists of a hierarchy of ARGUS servers, with the central WLCG server at the top, the NGI servers below, and the sites at the bottom. Ban policies flow from the central WLCG server through the NGI one and down to the sites, using Argus's remote PAP feature.
This section describes an implementation taken from Liverpool. The original blog post is available here: Central Argus Banning at Liverpool
Implementation Examples
Admins at Liverpool use an ARGUS server for user authentication from the CEs and WNs. When they build or reconfigure the ARGUS server, they use a script (argus.pol.sh) to load Argus policies from a file (argus.pol). The middle section of the argus.pol.sh script, shown below, configures central banning (the first and last sections relate to standard policies and buffer flushing, respectively).
#!/bin/bash

/usr/bin/pap-admin rap
/usr/bin/pap-admin apf /root/scripts/argus.pol

pap-admin add-pap ngi argusngi.gridpp.rl.ac.uk "/C=UK/O=eScience/OU=CLRC/L=RAL/CN=argusngi.gridpp.rl.ac.uk"
pap-admin enable-pap ngi
pap-admin set-paps-order ngi default
pap-admin set-polling-interval 3600

/etc/init.d/argus-pdp reloadpolicy
/etc/init.d/argus-pepd clearcache

touch /root/scripts/done_argus.pol.sh
These commands add links to policies from the NGI ARGUS server. The script also reduces the polling interval to make the system more responsive. When the script is run, it connects the local ARGUS server to the NGI one and tells ARGUS to periodically download the remote (central) banning policies. Even with the improved polling interval, other delays exist that slow things down. These can be eliminated by changing /etc/argus/pdp/pdp.ini, setting "retentionInterval = 21", i.e. 21 minutes. After running the script, it's best to restart the Java daemons.
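A minimal sketch of that tweak, assuming the stock pdp.ini layout; the retentionInterval line is the only edit, and the restart commands use the same service names as earlier on this page:

# in /etc/argus/pdp/pdp.ini: keep cached policies for 21 minutes
retentionInterval = 21

service argus-pdp restart
service argus-pepd restart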
Alternative (Equivalent)
An alternative (and equivalent) implementation is suggested by Ewan at Oxford, using static files instead of PAP commands. Ewan set it up directly in the /etc/argus/pap/pap_configuration.ini config file. It has the advantage that it persists and so does not need reloading. The relevant bits of the file look like this:
[paps]
## Trusted PAPs will be listed here
centralbanning.type = remote
centralbanning.enabled = true
centralbanning.dn = /C=UK/O=eScience/OU=CLRC/L=RAL/CN=argusngi.gridpp.rl.ac.uk
centralbanning.hostname = argusngi.gridpp.rl.ac.uk
centralbanning.port = 8150
centralbanning.path = /pap/services/
centralbanning.protocol = https
centralbanning.public = true

[paps:properties]
poll_interval = 1800
ordering = centralbanning
ordering = default
Testing
It's best to tell David Crooks about this so he can send some tests over. To check if your site "looks" OK, try this:
pap-admin lp --all
And you should see the "remote" policies, e.g.
ngi (argusngi.gridpp.rl.ac.uk:8150): resource ".*" BLAH BLAH BLAH