How to Use Fujitsu VPP5000/64 in the Nagoya University Computer Center

2002. 3. 5
by Tatsuki Ogino

(0) How to use gpcs.cc.nagoya-u.ac.jp (133.6.90.3), the front-end processor 
    (SUN workstation) of the Fujitsu supercomputer VPP5000/64 (a vector-parallel 
    machine), or vpp.cc.nagoya-u.ac.jp (133.6.90.2), the Fujitsu 
    supercomputer VPP5000/64 itself.
 
  How to change your password:
  gpcs% yppasswd
  Old yp password:     (type current password)
  New password:        (type new password)
  Retype new password: (type new password)

(1) How to connect initially
    telnet gpcs.cc.nagoya-u.ac.jp (or 133.6.90.3)
      : This connects you to gpcs, the front-end processor of the VPP5000/64.
        You can use the usual UNIX commands.
    cdvpp:  move to the disk area for the VPP5000
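
    For example, a first session might look like the following hypothetical
    transcript (the VPP-side directory that "cdvpp" moves to is assumed
    here to mirror your home directory; account names will differ):

local% telnet gpcs.cc.nagoya-u.ac.jp
        ... log in as usual ...
gpcs% cdvpp
gpcs% pwd
/vpp/home/usr7/l46637a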

(2) How to use
    I have prepared examples to run on the VPP5000.
    The following is the home directory you first log into (not the VPP 
    disk area). ".cshrc.old" and ".login.old" are the VPP defaults for a 
    "vt100" terminal, and ".cshrc" and ".login" are for the Sun workstation. 

gpcs% pwd
/home/usr7/l46637a
gpcs% ls -al
total 14
drwxr-xr-x  2 l46637a       512 Apr  9 14:33 .
drwxr-xr-x320 root         6144 Apr  9 10:00 ..
-rw-------  1 l46637a       789 Apr  9 14:35 .cshrc
-r--r--r--  1 l46637a       379 Apr  9 14:33 .cshrc.old
-r--r--r--  1 l46637a       649 Apr  5 09:50 .emacs
-rw-------  1 l46637a      1113 Apr  9 14:35 .login
-r--r--r--  1 l46637a      1263 Apr  9 14:33 .login.old
gpcs%
    
    Change to the VPP disk area with "cdvpp".
    I put the example files on the VPP5000 in the following directory
    (here named "sub0"):

gpcs% pwd
/vpp/home/usr4/w49304a/sub0
gpcs% ls -l
total 10365
-rwxr-x---   1 w49304a      1611 Dec 28 15:51 comp.out
-rw-r-----   1 w49304a        27 Dec 28 15:39 comp.sh
-rwxr-x---   1 w49304a       532 Dec 28 15:52 exec.out
-rw-r-----   1 w49304a        50 Dec 28 15:40 exec.sh
-rw-r--r--   1 w49304a        71 Dec 28 16:05 pcomp.out
-rw-r--r--   1 w49304a        31 Dec 28 15:39 pcomp.sh
-rwxr-x---   1 w49304a        71 Dec 28 15:54 pcomp90.out
-rw-r-----   1 w49304a        35 Dec 28 15:40 pcomp90.sh
-rwxr-x---   1 w49304a       532 Dec 28 16:07 pexec.out
-rw-r--r--   1 w49304a        52 Dec 28 16:07 pexec.sh
-rwxr-x---   1 w49304a       496 Dec 28 16:01 pexec90.out
-rw-r-----   1 w49304a        54 Dec 28 15:58 pexec90.sh
-rwxr-x---   1 w49304a   5274096 Dec 28 16:05 prog
-rw-r--r--   1 w49304a     10281 May 12  1997 prog.f
-rwxr-x---   1 w49304a   5267388 Dec 28 15:54 prog90
-rw-r-----   1 w49304a      6343 Dec 28 15:53 prog90.f
-rw-r--r--   1 w49304a      6343 Apr  2  1997 pwave2.f
-rw-r--r--   1 w49304a     10281 Apr  2  1997 pwave3.f
-rw-r--r--   1 w49304a       526 Dec 28 15:57 readme
-rwxr-x---   1 w49304a      1611 Dec 28 15:42 scomp.out
-rw-r-----   1 w49304a        35 Dec 28 15:41 scomp.sh
gpcs%

How to Use the Fujitsu VPP in the Nagoya University Computer Center via UNIX

2002. 3. 5
Tatsuki Ogino
Solar-Terrestrial Environment Laboratory, Nagoya University
e-mail: ogino@stelab.nagoya-u.ac.jp
TEL: +81-533-89-5207
FAX: +81-533-89-5090

(0) Make a directory named "wave" in your account on the VPP disk area
    and put all the programs in the "wave" directory.

    As an example, the execution procedure for a 3-dimensional wave 
    propagation program, "pwave3.f", is explained in (a), (b), and (c).
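
    For example (a hypothetical transcript; the example files can be 
    copied from the public directory given at the end of this document):

gpcs% cdvpp
gpcs% mkdir wave
gpcs% cd wave
gpcs% cp /vpp/home/usr4/w49304a/sub0/pwave3.f .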

(a) Execution by single processor element (1PE)
    Only vectorization is applied; all the control lines for 
    parallelization (!XOCL) are ignored.
    OPEN statements are used for file input and output.
    1. cp pwave3.f prog.f
    2. qsub -q c -eo -o comp.out comp.sh
       Compiles "prog.f" in vectorization mode to obtain the executable 
       "prog"; the compiler output is written to "comp.out".
    3. qsub -q x -eo -o exec.out exec.sh
       Executes the executable "prog"; the execution time limit is set by 
       "#  @$ -lt 6:00:00" (6 hours) in "exec.sh".
       Output and errors are written to "exec.out".
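
    A typical session for (a) might then look like this (the queue names 
    and qstat commands come from the "readme" aliases listed later; output 
    is abbreviated):

gpcs% qsub -q c -eo -o comp.out comp.sh     (submit the compile to queue c)
gpcs% qstat c@vpp-g                         (watch the compile queue; alias "stc")
gpcs% qsub -q x -eo -o exec.out exec.sh    (submit the run to queue x)
gpcs% qstat x@vpp-g                         (watch the single-PE queue; alias "stx")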

(b) Execution by Fortran77 with 16PE (Vectorization and Parallelization)
    In this case you need to declare the use of 16 PEs with (npe=16) in 
    the source program (see the sketch after this list).
    1. cp pwave3.f prog.f
    2. qsub -q c -eo -o pcomp.out pcomp.sh
       Compiles "prog.f" in parallelization and vectorization mode.
    3. qsub -q z -eo -lPv 16 -o pexec.out pexec.sh
       Executes the executable "prog".
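
    Schematically, the declaration looks like the following fragment 
    (a hypothetical sketch, not the actual contents of "pwave3.f"; the 
    exact !XOCL directives of the real program may differ):

      parameter (npe=16)
!XOCL PROCESSOR P(npe)
c     ... distributed arrays and parallel loop regions are then marked
c     by further !XOCL control lines ...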

(c) Execution in SCALAR MODE, for example to check the benefit of 
    vectorization
    1. cp pwave3.f prog.f
    2. qsub -q c -eo -o scomp.out scomp.sh
       Compiles "prog.f" with the scalar option (a guessed sketch of 
       "scomp.sh" follows this list).
    3. qsub -q x -eo -o exec.out exec.sh
       Executes the executable "prog".
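
    The contents of "scomp.sh" are not reproduced in the "Contents of the 
    Shell Scripts" section below; by analogy with "comp.sh" it presumably 
    differs only in the compile option. A guessed sketch (the scalar-mode 
    flag is an assumption, not confirmed; check the frt manual of your 
    installation):

       <<scomp.sh>>  (guessed contents)
         cd sub0
         # "-O0" below is only a placeholder for "compile without
         # vectorization"; the real scomp.sh may use a different frt option
         frt -O0 -o prog prog.f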

(d) Execution by Fortran90 with 16PE (Vectorization and Parallelization)
    In this case, as in (b), you need to declare the use of 16 PEs with 
    (npe=16) in the source program.
    1. cp pwave3.f prog90.f
    2. qsub -q x -eo -o pcomp90.out pcomp90.sh
       Compiles "prog90.f" in parallelization and vectorization mode.
    3. qsub -q z -eo -lPv 16 -o pexec90.out pexec90.sh
       Executes the executable "prog90".

(e) Execution by Fortran90 with 2PE
    The VPP Fortran program "prog90.f" is located in the directory "test", 
    and the compile information is written (by the -Z option) to the 
    file "prog90list".
       qsub -q x -eo -o pcomp902.out pcomp902.sh
       qsub -q z -eo -lP 2 -o pexec90.out pexec90.sh

       <pcomp902.sh>
         cd test
         frt -Wx,-Lt prog90.f -Pdt -o prog90 -Z prog90list
       <pexec90.sh>
         #  @$ -lt 1:30:00
         #  @$-q z  -eo
         cd test
         timex prog90

(f) Execution by HPF (High Performance Fortran) with 2PE
    The HPF Fortran program "proghpf.f" is located in the directory "test", 
    and the compile information is written to the file "hpflist".

      proghpf.f:      Fortran program written in HPF
      proghpf:        executable file
      pconphpf2.out:  output file for the compile
      pexechpf02.out: output file for the execution

       qsub -q c -eo -o pconphpf2.out pcomphpf2.sh
       qsub -q z -eo -lPv 2 -o pexechpf.out pexechpf.sh

       <pcomphpf2.sh>
         cd test
         frt -Wh,-Lt proghpf.f -Pdt -o proghpf -Z hpflist
       <pcomphpf.sh>
         cd test
         frt -Wh -o proghpf proghpf.f
       <pexechpf.sh>
         #  @$ -lt 1:30:00
         #  @$-q z  -eo
         cd test
         timex proghpf

(g) Execution by MPI (Message Passing Interface) with 2PE (Batch job)
    The MPI Fortran program "progmpi.f" is located in the directory "test",
    and the compile information is found in the file "mpilist".

      progmpi.f:      Fortran program written with MPI
      progmpi:        executable file
      pconpmpi2.out:  output file for the compile
      pexecmpi02.out: output file for the execution
      setenv  VPP_MBX_SIZE  1256000000: environment setting needed for 
                                        MPI scatter operations
 
       qsub -q c -eo -o pconpmpi2.out pcompmpi2.sh
       qsub mpi_lim02e.sh

       <pcompmpi2.sh>
         cd test
         mpifrt -Lt progmpi.f -Pdt -o progmpi -Z mpilist
       <pcompmpi.sh>
         cd test
         mpifrt -o progmpi progmpi.f
       <mpi_lim02e.sh>
         #  @$-q z  -eo -o pexecmpi02.out
         #  @$-lP 2
         setenv  VPP_MBX_SIZE  1256000000
         ./test/progmpi -np 2
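
       The program "progmpi.f" itself is not listed here; any standard MPI 
       Fortran program can be compiled and run this way. A minimal 
       hypothetical sketch (not the actual progmpi.f):

      program hello
      include 'mpif.h'
      integer ierr, rank, nproc
c     initialize MPI, then find this PE's rank and the total PE count
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)
      print *, 'rank', rank, 'of', nproc
      call MPI_FINALIZE(ierr)
      end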

(h) Execution by MPI (Message Passing Interface) with 2PE (TSS job)
    The MPI Fortran program "progmpi.f" is located in the directory "~/test".

      progmpi.f:        Fortran program written with MPI
      progmpi, a.out:   executable files

       mpifrt -o progmpi progmpi.f
       jobexec -vp 2 ~/test/progmpi
    
    or
       mpifrt progmpi.f
       jobexec -vp 2 ~/test/a.out


Example programs and the "readme" file are located in "/vpp/home/usr4/w49304a/sub0".

Content of readme (/vpp/home/usr4/w49304a/sub0/readme)

gpcs% more readme

alias stc  'qstat c@vpp-g'        : status of the compile queue
alias stx  'qstat x@vpp-g'        : status of class x jobs (single)
alias stz  'qstat z@vpp-g'        : status of class z jobs (multi, 2-16PE)
alias stze  'qstat ze@vpp-g'      : status of class ze jobs (multi, 17-32PE)
alias qde  'qdel -k -r vpp-g canceljob'     : job cancel

qsub -q c -eo -o scomp.out scomp.sh         : compile for scalar (1PE)
qsub -q c -eo -o comp.out comp.sh           : compile for single (1PE)
qsub -q x -eo -o exec.out exec.sh           : execution for single (1PE)

qsub -q c -eo -o pcomp.out pcomp.sh         : compile for multi PE 
qsub -q z -eo -lPv 2 -o pexec.out pexec.sh  : execution by 2PE
qsub -q z -eo -lPv 4 -o pexec.out pexec.sh  : execution by 4PE
qsub -q z -eo -lPv 8 -o pexec.out pexec.sh  : execution by 8PE
qsub -q z -eo -lPv 16 -o pexec.out pexec.sh : execution by 16PE
qsub -q x -eo -o pcomp90.out pcomp90.sh     : compile for multi PE
qsub -q z -eo -lPv 16 -o pexec90.out pexec90.sh : execution by 16PE
qsub -q ze -eo -lPv 32 -o pexec90.out pexec90.sh : execution by 17-32PE

frt -Wh,-Lt -Pdt -Z list -o proghpf proghpf.f
qsub -q c -eo -o pconphpf2.out pcomphpf2.sh : compile for multi PE
qsub -q c -eo -o pconphpf.out pcomphpf.sh   : compile for multi PE
qsub -q z -eo -lPv 2 -o pexechpf.out pexechpf.sh
qsub -q z -eo -lPv 4 -o pexechpf.out pexechpf.sh
qsub -q z -eo -lPv 8 -o pexechpf.out pexechpf.sh
qsub -q z -eo -lPv 16 -o pexechpf.out pexechpf.sh
qsub -q ze -eo -lPv 32 -o pexechpf.out pexechpf.sh


(PE: Processor Element of VPP5000)


####  Contents of the Shell Scripts  ####
<<comp.sh>>
gpcs% more comp.sh
cd sub0
frt -o prog prog.f

<<exec.sh>>
gpcs% more exec.sh
#  @$ -lt 10:00
#  @$-q x  -eo
cd sub0
timex prog

<<pcomp.sh>>
gpcs% more pcomp.sh
cd sub0
frt -Wx -o prog prog.f

<<pexec.sh>>
gpcs% more pexec.sh
#  @$ -lt 9:00:00
#  @$-q z  -eo
cd sub0
timex prog

<<pcomp90.sh>>
gpcs% more pcomp90.sh
cd sub0
frt -Wx -o prog90 prog90.f

<<pexec90.sh>>
gpcs% more pexec90.sh
#  @$ -lt 9:30:00
#  @$-q z  -eo
cd sub0
timex prog90
gpcs%
