GT3 Installation and Setup on RedHat 9.0 or Fedora

Jun-Yi Shen, Fu-Zong Wang, Song-Yi Chen, Po-Chi Shih

Note (*1): GT3 recommends that your machine have a fully qualified domain name, so that it can be recognized anywhere on the Internet, and that your DNS server support both forward and reverse lookups. If you cannot meet these requirements, simply add the IP address/domain name mapping to /etc/hosts, but remember that every machine in the grid must have the same settings so that they can recognize each other. (This means that whenever you add a machine to the grid, you have to edit the /etc/hosts file on every machine and add the IP address and domain name of the new machine.)

1. Introduction

GT3 is an implementation of the Open Grid Services Infrastructure (OGSI) version 1.0. Globus uses OGSI as the infrastructure for the GT3 base services and adds some management services on top of it. The OGSI component is the reference open source implementation of the OGSI standard. The following figure shows the key areas identified as a basis for Grid computing; each pillar would be included in most Grid implementations. These key areas are: resource management, information services, and data management.

2. Account Requirements

- globus or root account
  - Toolkit environment.
  - For installation and execution of the Toolkit.
- Any other user account
  - End-user environment.
  - For job execution on the Grid.

3. Software

- Java SDK
  http://java.sun.com
  http://java.sun.com/j2se/1.4.2/download.html (j2sdk-1_4_2_03-linux-i586-rpm.bin)
- Apache Ant
  http://ant.apache.org
  http://ant.apache.org/bindownload.cgi (apache-ant-1.6.0-bin.tar.gz)
- JUnit
  http://www.junit.org
  http://prdownloads.sourceforge.net/junit/junit3.8.1.zip?download (junit3.8.1.zip)
- GT3 Source Installation Package (GT 3.0.2)
  http://www-unix.globus.org/toolkit/download.html (gt3.0.2-source-installer.tar.gz)
- Globus Simple CA Package by NCHC
  http://www.globus.org/security/simple-ca.html
  http://www-unix.globus.org/ftppub/gsi/simple_ca/globus_simple_ca-latest-src_bundle.tar.gz (globus_simple_ca-latest-src_bundle.tar.gz)
- MPICH-G2 (1.2.5.2)
  http://www-unix.mcs.anl.gov/mpi/mpich/
  http://www-unix.mcs.anl.gov/mpi/mpich/downloads/mpich.tar.gz (mpich.tar.gz)
- SCMS
  http://www.opensce.org/moin/Downloads
- Condor 6.6.5
  Required packages:
  - Perl 5 (Fedora Core default version is 5)
  - glibc 2.3 (Fedora Core default version is 2.3.2)
  http://www.cs.wisc.edu/condor/downloads/

4. Installation of GT3

4.1. Set up your domain name and IP address
4.1.1. Use the root account.
4.1.2. Edit the /etc/hosts file and add your IP address with a domain name (any you want), as in the example below.
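A minimal sketch of such an entry (the IP address and host name below are placeholders, not values from this setup):

    192.168.1.10    grid01.example.org    grid01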

4.2. Install the Java SDK
4.2.1. Use the root account.
4.2.2. Add the executable attribute to the SDK installation package.
    $ chmod +x j2sdk-1_4_2_04-linux-i586-rpm.bin
4.2.3. Uncompress and install.
    $ ./j2sdk-1_4_2_04-linux-i586-rpm.bin
    $ rpm -ivh j2sdk-1_4_2_04-linux-i586.rpm

4.3. Install Apache Ant and JUnit
4.3.1. Use the root account.
4.3.2. Uncompress Apache Ant.
    $ tar -xzvf apache-ant-1.6.1-bin.tar.gz
    $ mv apache-ant-1.6.1 /usr/local
4.3.3. Unzip JUnit and copy junit.jar to the lib directory of your Apache Ant installation.
    $ unzip junit3.8.1.zip
    $ cp junit3.8.1/junit.jar /usr/local/apache-ant-1.6.1/lib

4.4. Set up environment variables
4.4.1. Use the root account.
4.4.2. Add the Java and Ant install directories to the system path.
    $ vi /etc/profile

    Add the JAVA PATH and ANT PATH lines shown below; the surrounding lines are already in /etc/profile:

    ...(omitted)...
    if [ -z "$INPUTRC" -a ! -f "$HOME/.inputrc" ]; then
        INPUTRC=/etc/inputrc
    fi

    # JAVA PATH
    JAVA_HOME=/usr/java/j2sdk1.4.2_04
    PATH=$PATH:$JAVA_HOME/bin
    export JAVA_HOME

    # ANT PATH
    ANT_HOME=/usr/local/apache-ant-1.6.1
    PATH=$PATH:$ANT_HOME/bin
    export ANT_HOME

    export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE INPUTRC

4.4.3. Re-run the profile.
    $ source /etc/profile    (or: . /etc/profile)
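As a quick sanity check (an addition to the original steps), confirm that both tools are now on the PATH; each command should print a version string:

    $ java -version
    $ ant -version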

4.5. Install GT3
4.5.1. Use the root account.
4.5.2. Uncompress GT3.
    $ tar -xzvf gt3.0.2-source-installer.tar.gz
4.5.3. Install GT3.
    $ cd gt3.0.2-source-installer
    $ ./install-gt3 /usr/local/globus/ | tee installgt3.log

4.6. Set up environment variables
4.6.1. Use the root account.
4.6.2. Add the Globus install directory and set the Globus environment variables.
    $ vi /etc/profile

    Add the GLOBUS PATH block and the globus-user-env.sh line shown below; the JAVA PATH and ANT PATH lines are the ones already added in step 4.4:

    fi
    # JAVA PATH
    JAVA_HOME="/usr/java/j2sdk1.4.2_04"
    PATH="$PATH:$JAVA_HOME/bin"
    export JAVA_HOME

    # ANT PATH
    ANT_HOME="/usr/local/apache-ant-1.6.1"
    PATH="$PATH:$ANT_HOME/bin"
    export ANT_HOME

    # GLOBUS PATH
    GLOBUS_LOCATION="/usr/local/globus"
    PATH="$PATH:$GLOBUS_LOCATION/bin"
    export GLOBUS_LOCATION

    export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE INPUTRC
    . $GLOBUS_LOCATION/etc/globus-user-env.sh

4.6.3. Re-run the profile.
    $ source /etc/profile    (or: . /etc/profile)
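Again as an added sanity check, confirm that a new shell picks up the Globus environment:

    $ echo $GLOBUS_LOCATION       # should print /usr/local/globus
    $ which grid-proxy-init       # should resolve to /usr/local/globus/bin/grid-proxy-init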

4.7. Install and set up Globus Simple CA (CA server, installed by NCHC)
4.7.1. Use the root account.
4.7.2. Install the Simple CA server package globus_simple_ca-latest-src_bundle.tar.gz.
    $ gpt-build globus_simple_ca-latest-src_bundle.tar.gz gcc32
    $ gpt-postinstall
    Enter the CA subject when gpt-postinstall asks for it, and then enter a password for the CA.
4.7.3. Set up the CA using the default settings.
    $ /usr/local/globus/setup/globus_simple_ca_<hash>_setup/setup-gsi -default
    (<hash> is the hash that Simple CA assigned to your CA during gpt-postinstall.)

4.8. Install and set up Globus Simple CA (CA client)
4.8.1. Use the root account.
4.8.2. Get the CA install file from your CA server; in our case it is called globus_simple_ca_ffa40f5d_setup-0.13.tar.gz.
4.8.3. Install the CA package, in the /usr/src directory, as root.
4.8.3.1. Let me remind you of something before this step: check your host name (you can see it in the command-line prompt in Linux). If it is "localhost", you need to register your machine with DNS and get a domain name, or you can edit /etc/sysconfig/network, change the HOSTNAME= entry, and reboot. In any case, you cannot use "localhost" to install Simple CA.

    $ gpt-build globus_simple_ca_ffa40f5d_setup-0.13.tar.gz gcc32
    $ gpt-postinstall
    If you cannot find the gpt-build command, it is located in /usr/local/globus/sbin; you can add this directory to your PATH.
4.8.4. Set up the CA using the default settings.
    $ /usr/local/globus/setup/globus_simple_ca_ffa40f5d_setup/setup-gsi -default

4.8.5. Set up the host certificate.
    $ grid-cert-request -host
    This will generate three files:
    - /etc/grid-security/hostkey.pem
    - /etc/grid-security/hostcert_request.pem
    - /etc/grid-security/hostcert.pem (zero size)

    Note: On the CA server there is a file in /root/.globus/simpleCA/ named globus_simple_ca_<hash>_setup-0.13.tar.gz (the hash depends on your CA). Copy this file to every other client node and run gpt-build with it.
    Note: If you are the CA server, run 'grid-ca-sign -in hostcert_request.pem -out hostcert.pem', then send the resulting file to the requesting client node, save it as /etc/grid-security/hostcert.pem, and make it read-only for other users. If the CA server machine itself also wants to join the grid, don't forget to sign your own host certificate.
    You have to send the hostcert_request.pem file to the CA server to request a certificate for your machine; the server will then send back the hostcert.pem file. Finally, replace your original empty hostcert.pem with this file.

4.8.6. Set up the user certificate.
4.8.6.1. Use your own user account.
    $ grid-cert-request
    This will generate three files:
    - ~/.globus/userkey.pem
    - ~/.globus/usercert_request.pem
    - ~/.globus/usercert.pem (zero size)

    This user certification step is almost the same as for the host: first send usercert_request.pem to the CA server, then take usercert.pem back from the CA server, and finally replace your original empty file with it.

    Note: If your machine is the CA server, type 'grid-ca-sign -in usercert_request.pem -out usercert.pem', and don't forget to change the owner of the resulting file to the user who requested it.
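To make the round trip concrete, here is a minimal sketch of the host-certificate exchange. It assumes scp is used to move the files and uses placeholder host names (ca.example.org for the CA server); the notes above do not prescribe a transfer method:

    # On the client node: send the certificate request to the CA server (scp is an assumption)
    scp /etc/grid-security/hostcert_request.pem root@ca.example.org:/tmp/

    # On the CA server: sign the request (command from the note above; prompts for the CA password)
    grid-ca-sign -in /tmp/hostcert_request.pem -out /tmp/hostcert.pem

    # Back on the client node: fetch the signed certificate, replacing the empty hostcert.pem
    scp root@ca.example.org:/tmp/hostcert.pem /etc/grid-security/hostcert.pem
    chmod 644 /etc/grid-security/hostcert.pem   # readable by others, writable only by root

The user-certificate exchange follows the same pattern, using usercert_request.pem and usercert.pem in the user's ~/.globus directory.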

4.8.7. Test the CA setup
4.8.7.1. We can test whether the CA is set up correctly by using the grid-proxy-init command; this command creates a proxy credential that enables access to other machines. If everything is configured correctly, the command completes without errors.
    $ grid-proxy-init
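Roughly, a successful run looks like the following (details vary by version; the subject line shows whatever your Simple CA put in your certificate), and grid-proxy-info can be used afterwards to inspect the proxy:

    $ grid-proxy-init
    Your identity: /O=Grid/OU=GlobusTest/OU=simpleCA-.../CN=...
    Enter GRID pass phrase for this identity:
    Creating proxy .......................... Done
    Your proxy is valid until: ...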

4.9. Set up the grid-mapfile
4.9.1. This step adds all the certificate users allowed to access our machine, so anyone you want to allow to run jobs on your computer must be added. You can use the 'grid-cert-info -subject' command to obtain a user's certificate subject, then copy the output and paste it into the grid-mapfile (see the example below).
    $ vi /etc/grid-security/grid-mapfile
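Each grid-mapfile line maps a certificate subject to a local account: the quoted subject (as printed by grid-cert-info -subject) followed by the local user name. The subject and account below are placeholders:

    "/O=Grid/OU=GlobusTest/OU=simpleCA-ca.example.org/CN=Test User" testuser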

4.10. Set up the gatekeeper
4.10.1. In order to let jobs run and communicate with each other, we have to set up the port that allows jobs to pass. Add this line to /etc/services:
    $ vi /etc/services

    gsigatekeeper    2119/tcp    # Globus Gatekeeper

4.10.2. Next we need to add the connection settings file "gsigatekeeper".
    $ vi /etc/xinetd.d/gsigatekeeper
    Add the following lines to this file, but remember to replace GLOBUS_LOCATION with your absolute GT3 installation path. In our case it is replaced by /usr/local/globus.

    service gsigatekeeper
    {
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = root
        env         = LD_LIBRARY_PATH=/usr/local/globus/lib
        server      = /usr/local/globus/sbin/globus-gatekeeper
        server_args = -conf /usr/local/globus/etc/globus-gatekeeper.conf
        disable     = no
    }

    Note: be careful with server_args; it is a single line.
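After xinetd has been restarted in the next step, a quick check (an addition to the original steps) is to confirm that something is listening on the gatekeeper port:

    $ netstat -an | grep 2119     # should show a line in LISTEN state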

4.10.3. Finally, restart the services changed above.
    $ /etc/rc.d/init.d/xinetd restart

4.11. Set up the hosts file (optional)
4.11.1. Edit /etc/hosts and add the domain name and IP address of every computer in our grid system.
    $ vi /etc/hosts

4.12. Test running a simple job
4.12.1. This is the most exciting step of everything you have done above. If it works, then I have to say "Congratulations!!". If not, you had better check all the steps above and make sure you have not forgotten any setup or made any mistake. One last thing I must remind you of: you must run "grid-proxy-init" before this step.
4.12.2. Test the local site.
    $ globus-job-run <local hostname> /bin/date
4.12.3. Test the remote site.
    $ globus-job-run <remote hostname> /bin/date
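As an aside, the first argument to globus-job-run is a GRAM resource contact; besides a bare host name it can also name a specific job manager if one is configured, for example (the host name is a placeholder):

    $ globus-job-run grid02.example.org/jobmanager-fork /bin/date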

5. Install and set up MPICH-G2
5.1 Use the root account.
5.2 Uncompress MPICH-G2 (do not put it in the /usr/local directory).
    $ tar -zxvf mpich.tar.gz
5.3 Configuration
5.3.1 You have to specify the directory where you want MPICH-G2 installed; in our case it is /usr/local/mpich-1.2.5.
    $ ./configure --prefix=/usr/local/mpich-1.2.5 --with-device=globus2:-flavor=gcc32dbg

5.4 Make and install
5.4.1 Build everything according to the configuration.
    $ make
5.4.2 Install.
    $ make install

6. Run an MPI program
6.1 Compile an MPI program
6.1.1 First you have to choose an MPI program; you can pick one from <mpi installation directory>/examples/basic/.
6.1.2 Compile your program by typing "mpicc -o <program name> <source code name>".
    $ mpicc -o cpi cpi.c
6.2 Create the machine file
6.2.1 Before you run an MPI program you have to create a machine file that holds the machine names and the number of CPUs; an example is sketched below.
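A minimal sketch of a machines file for the globus2 device, assuming two hosts with two CPUs each (host names are placeholders; each line is a quoted service contact followed by an optional process count, as described in the MPICH-G2 documentation):

    "grid01.example.org" 2
    "grid02.example.org" 2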

6.3 Run the MPI program
6.3.1 Copy your program to every other machine on which you want the MPI program to execute; it must be located in the same directory on each machine.
6.3.2 Run the MPI program by typing "mpirun -machinefile <machinefile name> -np <number of processes> <program name>".
    $ mpirun -machinefile machinefile -np 2 cpi

Note: In the machinefile, the host names must be the same as the names in the host certificates. If a host name is not correct, mpirun will fail with an error; if the host names are correct, the program runs and prints its output as expected.

GridFTP

As root, edit /etc/services and add one line such as:

    gsiftp    2811/tcp

Next we need to add the connection settings file "gsiftp".

    $ vi /etc/xinetd.d/gsiftp

    service gsiftp
    {
        instances      = 1000
        socket_type    = stream
        wait           = no
        user           = root
        env            = LD_LIBRARY_PATH=GLOBUS_LOCATION/lib
        server         = GLOBUS_LOCATION/sbin/in.ftpd
        server_args    = -l -a -G GLOBUS_LOCATION
        log_on_success += DURATION
        nice           = 10
        disable        = no
    }

Note: Be sure to replace GLOBUS_LOCATION with the actual value of $GLOBUS_LOCATION in your environment (our value is /usr/local/globus).
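As with the gatekeeper service in step 4.10.3, restart xinetd so that the new gsiftp service becomes active:

    $ /etc/rc.d/init.d/xinetd restart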

Then check whether the gsiftp service works correctly:

    $ globus-url-copy gsiftp://biogrid01.hpc.csie.thu.edu.tw/home/test/test.txt file:///home/test/test1.txt

Note: biogrid01.hpc.csie.thu.edu.tw is my host (domain) name; the name must be the same as the name in the host certificate (see the 'grid-cert-request -host' command). Do not use the 'localhost' domain if your host in the grid is not actually named localhost.

Install SCMS

Required packages:
- scms_rpms.tgz
  This is a package file we made. Simply uncompress the package and execute the shell scripts named "install.sh" and "postinstall.sh": first run install.sh, then run postinstall.sh. That is all.

Install Condor

Required packages:
- Perl 5 (Fedora Core default version is 5)
- glibc 2.3 (Fedora Core default version is 2.3.2)
- Condor 6.6.5, http://www.cs.wisc.edu/condor/downloads/

Install on the machine that will be the pool's central manager:
1. Add a user named condor.
2. Uncompress your Condor tarball.
3. Enter your Condor directory and type './condor_install'.
4. Please answer its questions carefully.

There are 11 steps; the installer asks these 11 questions, and you just need to answer them correctly.

STEP 1: What type of Condor installation do you want?
This asks whether this machine will use Condor to execute, submit, or manage jobs; you must decide what this machine is going to use Condor for. It also seems to ask for the host name, and the full domain name is required.

STEP 2: How many machines are you setting up this way?
This asks how many machines will join Condor. If there is only one, then just one; if there are many, fill in the host name of every machine.

STEP 3: Install the Condor release directory.
If you have nothing to change, just install all of its directories in the default locations.

STEP 4: How and where should Condor send e-mail if things go wrong?
Who should receive e-mail when Condor breaks? The default seems to be the developers' mailbox.

STEP 5: File system and UID domains.
There are many possible cases here: several machines may share the same users through one password file (for example an NIS server), or every machine may have users with the same names but separate password files, or the user names may differ entirely. Read the prompts carefully and decide what your system requires. The file-system question is about whether every user should share a common directory, and so on; set it according to your needs.

STEP 6: Java Universe support in Condor.
Type in the path of your Java VM, for example /usr/local/j2sdk1.4.2_04/bin/java.

STEP 7: Where should public programs be installed?
Again, the default is fine; everyone can access these programs in the default directory.

STEP 8: What machine will be your central manager?
Which machine manages the jobs? Generally this is the main server and there should be only one; fill in its host name.

STEP 9: Where will the local directory go?
The default is fine.

STEP 10: Where will the local (machine-specific) configuration files go?
The default is fine.

STEP 11: How shall Condor find its configuration file?
The default is fine.

Now try whether it works: type '/usr/local/condor/sbin/condor_master' and then type 'ps -aux | egrep condor_'. If you see condor_master and the daemons it spawns in the output, your Condor is working well.
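Once the daemons have been up for a minute or two, an additional check (condor_status is a standard Condor command, though not part of the original notes) is:

    $ /usr/local/condor/bin/condor_status     # lists the machines currently in the pool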

References:
- http://www.globus.org/
- http://www-unix.globus.org/toolkit/
- http://crystal.uta.edu/~levine/class/spring2003/grid/globus_commands/
- http://www-900.ibm.com/developerWorks/cn/grid/gr-redbook/index.shtml
- http://www-unix.globus.org/toolkit/3.0/ogsa/docs/java_programmers_guide.html
- http://www.cs.wisc.edu/condor/
