Using the Computer System of the Information Technology Center, Nagoya University


Home page of the Information Technology Center, Nagoya University
Period of use of the Nagoya University Information Technology Center computer system

The periods during which lecturers and participants can use the Information
Media Education Center system for the summer school are scheduled as follows.
Advance registration is required.

 Lecturers:    August 20 - September 20, 2002

 Participants: September 9 - September 20, 2002


Using the Information Technology Center computer system

You can use two sets of 2PE on the Fujitsu VPP5000/64 (vpp), the
vector-parallel supercomputer of the Information Technology Center, Nagoya
University (the former Computation Center), together with its SUN front end
(gpcs).

These are used for parallel programming with MPI (Message Passing Interface),
VPP Fortran, and HPF (High Performance Fortran), and for running parallel jobs.

The following explanation of how to use the Information Technology Center
computer system is a more general one, written for collaborative researchers
of the Solar-Terrestrial Environment Laboratory, Nagoya University, and has
not yet been rewritten for the summer school. Since the summer school uses
only 2PE for parallelization, please keep this in mind when consulting it.


(I) How to Use the Computer System of the Nagoya University Computer Center

(0) You can use gpcs.cc.nagoya-u.ac.jp (133.6.90.3), the front-end processor
    (SUN workstation) of the Fujitsu vector-parallel supercomputer
    VPP5000/64, or vpp.cc.nagoya-u.ac.jp (133.6.90.2), the VPP5000/64 itself.
 
  How to change your password:
  gpcs% yppasswd
  Old yp password:     (enter your present password)
  New password:        (enter your new password)
  Retype new password: (enter your new password again)

(1) How to connect initially
    telnet gpcs.cc.nagoya-u.ac.jp (or 133.6.90.3)
      : This connects you to gpcs, the front-end processor of the VPP5000/64.
        You can use the usual UNIX commands there.
    cdvpp:  move to the disk area for the VPP5000

(2) How to use
    The following is the directory you first land in after login (not the
    VPP disk area):
gpcs% pwd
/home/usr7/l46637a  : disk area of the front-end processor (gpcs)

    Change to the VPP disk area with "cdvpp":
gpcs% cdvpp
gpcs% pwd
/vpp/home/usr7/l46637a : disk area of the supercomputer (vpp)


(II) How to use the Fujitsu VPP5000/64 in the Nagoya University Computation Center from UNIX

(0) Make a directory "wave" in your account on the VPP disk area.
    All the programs are put in the directory "wave".

    As an example, the execution procedure for "pwave3.f", a program for
    3-dimensional wave propagation, is explained in (a), (b), and (c) below.
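
    For orientation, the following is a minimal hypothetical sketch (NOT the
    actual pwave3.f) of the kind of finite-difference update loop such a
    3-D wave-propagation code contains; the innermost i loop is the one the
    compiler vectorizes. Array names, sizes, and the coefficient c2 are
    illustrative assumptions only.

c     Hypothetical sketch of a 3-D wave-equation update (illustrative
c     only; names and sizes are assumptions, not taken from pwave3.f)
      program wsketch
      parameter (nx=64, ny=64, nz=64, nstep=10)
      real u(nx,ny,nz), uo(nx,ny,nz), un(nx,ny,nz)
      c2 = 0.1
c     initial condition: zero field with a point disturbance
      do k = 1, nz
        do j = 1, ny
          do i = 1, nx
            u(i,j,k)  = 0.0
            uo(i,j,k) = 0.0
            un(i,j,k) = 0.0
          enddo
        enddo
      enddo
      u(nx/2,ny/2,nz/2) = 1.0
      do n = 1, nstep
c       second-order update; the innermost i loop vectorizes
        do k = 2, nz-1
          do j = 2, ny-1
            do i = 2, nx-1
              un(i,j,k) = 2.0*u(i,j,k) - uo(i,j,k)
     &          + c2*( u(i+1,j,k) + u(i-1,j,k)
     &               + u(i,j+1,k) + u(i,j-1,k)
     &               + u(i,j,k+1) + u(i,j,k-1) - 6.0*u(i,j,k) )
            enddo
          enddo
        enddo
c       rotate the time levels
        do k = 2, nz-1
          do j = 2, ny-1
            do i = 2, nx-1
              uo(i,j,k) = u(i,j,k)
              u(i,j,k)  = un(i,j,k)
            enddo
          enddo
        enddo
      enddo
      write(*,*) 'center value =', u(nx/2,ny/2,nz/2)
      end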

(a) Execution by a single processor element (1PE)
    Only vectorization is performed; all the control lines for
    parallelization (!XOCL) are ignored.
    OPEN statements are used for file input and output (a minimal example
    is shown after item 3 below).
    1. cp pwave3.f prog.f
    2. qsub -q c -eo -o comp.out comp.sh
       compile "prog.f" in vectorization mode to obtain the executable
       "prog"; the result of the compilation is written to "comp.out".
    3. qsub -q x -eo -o exec.out exec.sh
       execute the executable "prog"; the execution time limit is set by
       "#  @$ -lt 6:00:00" (6 hours) in "exec.sh".
       Output and errors are written to "exec.out".
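
    A minimal illustration of such OPEN statements (the unit numbers and
    file names below are hypothetical, not those of pwave3.f):

c     Hypothetical example of file I/O with OPEN statements
c     (unit numbers and file names are illustrative assumptions)
      program iodemo
      open(10, file='wave.in',  status='old')
      open(20, file='wave.out', status='unknown')
      read(10,*) nstep
      write(20,*) 'nstep =', nstep
      close(10)
      close(20)
      end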

(b) Execution by Fortran77 with 16PE (vectorization and parallelization)
    In this case you need to declare the use of 16PE by (npe=16) in the
    source program; a minimal declaration sketch is shown after item 3 below.
    1. cp pwave3.f prog.f
    2. qsub -q c -eo -o pcomp.out pcomp.sh
       compile "prog.f" in parallelization and vectorization mode.
    3. qsub -q z -eo -lPv 16 -o pexec.out pexec.sh
       execute the executable "prog".
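
    As a rough illustration, a minimal VPP Fortran sketch of such a
    declaration is shown below. The exact XOCL directive spellings are
    recalled from the Fujitsu VPP Fortran manual and should be verified
    against it; the array and loop are purely illustrative.

c     Minimal VPP Fortran sketch (illustrative assumptions; check the
c     exact directive forms against the Fujitsu VPP Fortran manual)
      program vsketch
      parameter (npe=16, n=1024)
      real a(n)
!XOCL PROCESSOR P(npe)
!XOCL INDEX PARTITION D=(P,INDEX=1:n)
!XOCL GLOBAL a(/D)
!XOCL PARALLEL REGION
!XOCL SPREAD DO /D
      do i = 1, n
        a(i) = real(i)
      enddo
!XOCL END SPREAD
!XOCL END PARALLEL
      end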

(c) Execution in SCALAR MODE, for example to check the gain from vectorization
    1. cp pwave3.f prog.f
    2. qsub -q c -eo -o scomp.out scomp.sh
       compile "prog.f" with the scalar option.
    3. qsub -q x -eo -o exec.out exec.sh
       execute the executable "prog".

(d) Execution by Fortran90 with 16PE (vectorization and parallelization)
    In this case you need to declare the use of 16PE by (npe=16) in the
    source program.
    1. cp pwave3.f prog90.f
    2. qsub -q x -eo -o pcomp90.out pcomp90.sh
       compile "prog90.f" in parallelization and vectorization mode.
    3. qsub -q z -eo -lPv 16 -o pexec90.out pexec90.sh
       execute the executable "prog90".

(e) Execution by Fortran90 with 2PE
    The VPP Fortran program prog90.f is located in the directory "test",
    and the compile information is written to the file "prog90list".
       qsub -q x -eo -o pcomp902.out pcomp902.sh
       qsub -q z -eo -lP 2 -o pexec90.out pexec90.sh

       <pcomp902.sh>
         cd test
         frt -Wx,-Lt prog90.f -Pdt -o prog90 -Z prog90list

       <pexec90.sh>
         #  @$ -lt 1:30:00
         #  @$-q z  -eo
         cd test
         timex prog90

(f) Execution by HPF (High Performance Fortran) with 2PE
    The HPF Fortran program proghpf.f is located in the directory "test",
    and the compile information is written to the file "hpflist".

      proghpf.f:     Fortran program written in HPF
      proghpf:       executable file
      pconphpf2.out: output file of the compilation
      pexechpf.out:  output file of the execution

       qsub -q c -eo -o pconphpf2.out pcomphpf2.sh
       qsub -q z -eo -lPv 2 -o pexechpf.out pexechpf.sh

       <pcomphpf2.sh>
         cd test
         frt -Wh,-Lt proghpf.f -Pdt -o proghpf -Z hpflist

       <pcomphpf.sh>
         cd test
         frt -Wh -o proghpf proghpf.f

       <pexechpf.sh>
         #  @$ -lt 1:30:00
         #  @$-q z  -eo
         cd test
         timex proghpf
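
    For reference, a minimal self-contained HPF program of the kind
    proghpf.f represents might look like the following (a hypothetical
    sketch, not the actual proghpf.f):

c     Minimal HPF sketch: distribute an array over 2 processors and
c     mark the loop independent (hypothetical, not proghpf.f itself)
      program hsketch
      parameter (n=1024)
      real a(n)
!HPF$ PROCESSORS P(2)
!HPF$ DISTRIBUTE a(BLOCK) ONTO P
!HPF$ INDEPENDENT
      do i = 1, n
        a(i) = real(i)
      enddo
      write(*,*) 'a(1), a(n) =', a(1), a(n)
      end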

(g) Execution by MPI (Message Passing Interface) with 2PE (Batch job)
    The MPI Fortran program progmpi.f is located in the directory "test",
    and the compile information is written to the file "mpilist".

      progmpi.f:      Fortran program written with MPI
      progmpi:        executable file
      pconpmpi2.out:  output file of the compilation
      pexecmpi02.out: output file of the execution
      setenv VPP_MBX_SIZE 10485760: environment variable setting for MPI scatter

       qsub -q c -eo -o pconpmpi2.out pcompmpi2.sh  : compile progmpi.f
       qsub mpi_lim02e.sh                           : execute progmpi by 2PE

       <pcompmpi2.sh>
         cd test
         mpifrt -Lt progmpi.f -Pdt -o progmpi -Z mpilist

       (alternative simple compile, without the optimization listing)
         cd test
         mpifrt -o progmpi progmpi.f

       <mpi_lim02e.sh>
         #  @$-q z  -eo -o pexecmpi02.out
         #  @$-lP 2
         setenv  VPP_MBX_SIZE  10485760
         ./test/progmpi -np 2
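
    For reference, a minimal MPI Fortran program of the kind progmpi.f
    represents is shown below (a hypothetical hello-world, not the actual
    progmpi.f); it can be compiled with mpifrt as above.

c     Minimal MPI Fortran sketch (hypothetical; not progmpi.f itself)
      program msketch
      include 'mpif.h'
      integer ierr, myrank, nprocs
      call mpi_init(ierr)
      call mpi_comm_rank(MPI_COMM_WORLD, myrank, ierr)
      call mpi_comm_size(MPI_COMM_WORLD, nprocs, ierr)
      write(*,*) 'PE', myrank, ' of', nprocs
      call mpi_finalize(ierr)
      end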

(h) Execution by MPI (Message Passing Interface) with 2PE (TSS job)
    The MPI Fortran program progmpi.f is located in the directory ~/test.

      progmpi.f:      Fortran program written with MPI
      progmpi, a.out: executable files

       mpifrt -o progmpi progmpi.f
       jobexec -vp 2 ~/test/progmpi

    or
       mpifrt progmpi.f
       jobexec -vp 2 ~/test/a.out


Example programs and a "readme" file are located in "/vpp/home/usr4/w49304a/sub0".

Content of readme (/vpp/home/usr4/w49304a/sub0/readme):

gpcs% more readme

alias stc  'qstat c@vpp-g'        : status of the compile queue
alias stx  'qstat x@vpp-g'        : status of class x jobs (single)
alias stz  'qstat z@vpp-g'        : status of class z jobs (multi, 2-16PE)
alias stze 'qstat ze@vpp-g'       : status of class ze jobs (multi, 17-32PE)
alias qde  'qdel -k -r vpp-g canceljob'     : cancel a job

qsub -q c -eo -o scomp.out scomp.sh         : compile for scalar (1PE)
qsub -q c -eo -o comp.out comp.sh           : compile for single (1PE)
qsub -q x -eo -o exec.out exec.sh           : execution for single (1PE)

qsub -q c -eo -o pcomp.out pcomp.sh         : compile for multi PE 
qsub -q z -eo -lPv 2 -o pexec.out pexec.sh  : execution by 2PE
qsub -q z -eo -lPv 4 -o pexec.out pexec.sh  : execution by 4PE
qsub -q z -eo -lPv 8 -o pexec.out pexec.sh  : execution by 8PE
qsub -q z -eo -lPv 16 -o pexec.out pexec.sh : execution by 16PE
qsub -q x -eo -o pcomp90.out pcomp90.sh     : compile for multi PE
qsub -q z -eo -lPv 16 -o pexec90.out pexec90.sh : execution by 16PE
qsub -q ze -eo -lPv 32 -o pexec90.out pexec90.sh : execution by 17-32PE

frt -Wh,-Lt -Pdt -Z list -o proghpf proghpf.f
qsub -q c -eo -o pconphpf2.out pcomphpf2.sh : compile for multi PE
qsub -q c -eo -o pconphpf.out pcomphpf.sh   : compile for multi PE
qsub -q z -eo -lPv 2 -o pexechpf.out pexechpf.sh
qsub -q z -eo -lPv 4 -o pexechpf.out pexechpf.sh
qsub -q z -eo -lPv 8 -o pexechpf.out pexechpf.sh
qsub -q z -eo -lPv 16 -o pexechpf.out pexechpf.sh
qsub -q ze -eo -lPv 32 -o pexechpf.out pexechpf.sh


(PE: Processor Element of VPP5000)


####  Contents of Shell Scripts  ####
<comp.sh>
gpcs% more comp.sh
cd sub0
frt -o prog prog.f

<exec.sh>
gpcs% more exec.sh
#  @$ -lt 10:00
#  @$-q x  -eo
cd sub0

timex prog
<pcomp.sh>
gpcs% more pcomp.sh
cd sub0
frt -Wx -o prog prog.f

<pexec.sh>
gpcs% more pexec.sh
#  @$ -lt 9:00:00
#  @$-q z  -eo
cd sub0
timex prog

<pcomp90.sh>
gpcs% more pcomp90.sh
cd sub0
frt -Wx -o prog90 prog90.f

<pexec90.sh>
gpcs% more pexec90.sh
#  @$ -lt 9:30:00
#  @$-q z  -eo
cd sub0
timex prog90
gpcs%