How to Execute MPI 3D MHD Codes on the Fujitsu PRIMEPOWER HPC2500
The new supercomputer system of the Information Technology Center,
Nagoya University began operation on March 1, 2005. It is a
scalar-parallel supercomputer, the Fujitsu PRIMEPOWER HPC2500.
We demonstrate how to execute a 3-dimensional magnetohydrodynamic
(MHD) simulation of the Earth's magnetosphere on the Fujitsu PRIMEPOWER HPC2500.
In the MHD model, the MHD and Maxwell's equations are solved in the
solar-magnetospheric coordinate system by the modified leap-frog method,
with the upstream solar wind and interplanetary magnetic field (IMF)
given as boundary conditions. Neither north-south symmetry nor dawn-dusk
symmetry is assumed, so the whole volume of the simulation box must be solved.
The main simulation Fortran program is fully vectorized and fully parallelized.
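As a minimal illustration of the leap-frog idea, the hypothetical 1-D
advection sketch below shows the basic centered-in-time update. This is
only a sketch: the production modified leap-frog scheme combines a
two-step Lax-Wendroff stage with leap-frog stages over the full set of
3-D MHD variables, and all names here are illustrative.

      program lfdemo
      implicit none
      integer nx, nstep
      parameter (nx = 101, nstep = 400)
      real uold(nx), u(nx), unew(nx)
      real vel, dt, dx, cfl
      integer i, n
c     hypothetical 1-D advection test: du/dt = -vel*du/dx
      vel = 1.0
      dx  = 1.0
      dt  = 0.5
      cfl = vel*dt/dx
c     initial condition: a single pulse in the middle of the grid
      do i = 1, nx
         u(i) = 0.0
      end do
      u(nx/2) = 1.0
c     bootstrap: the first step degenerates to forward-in-time
      do i = 1, nx
         uold(i) = u(i)
      end do
      do n = 1, nstep
c        leap-frog: centered in time and space
         do i = 2, nx-1
            unew(i) = uold(i) - cfl*(u(i+1) - u(i-1))
         end do
c        simple periodic boundaries
         unew(1)  = uold(1) - cfl*(u(2) - u(nx-1))
         unew(nx) = unew(1)
         do i = 1, nx
            uold(i) = u(i)
            u(i)    = unew(i)
         end do
      end do
      write(*,*) 'finished', nstep, ' steps'
      end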
You can connect to the supercomputer by telnet:
telnet hpc.cc.nagoya-u.ac.jp
or, by IP address, telnet 133.6.1.153
Fujitsu PRIMEPOWER HPC2500: 23 nodes, 1536 CPUs
Host name: hpc.cc.nagoya-u.ac.jp
IP address: 133.6.1.153
23 nodes:
  64 CPUs / 512 GB memory x 22 nodes
  128 CPUs / 512 GB memory x 1 node
Total performance: 12 Tflops
Total memory: 11.5 TB
Disk capacity: 50 TB
Operating system: Solaris 9
Job classes
Queue   No. of CPUs   CPU time (hours) *)   Upper limit of CPU time   Elapsed time (hours)
a8      8             10                    10                        2
p8      8             10                    unlimited                 20
p16     16            200                   unlimited                 100
p64     64            200                   unlimited                 200
p128    128           200                   unlimited                 336
p256    256           200                   unlimited                 336
p1024   1024          200                   unlimited                 336
TSS     128           2                     unlimited                 -
*) CPU time is the sum of the CPU time used by all processors.
Connection of server and file systems
hpc.cc.nagoya-u.ac.jp ----- /home
                            /large0  /large1
                            /large_tmp0  /large_tmp1
gpcs:/home/usrN/user-id --> /home/usrN/user-id
vpp:/home/usrN/user-id  --> /home/vpp/usrN/user-id
vpp:/home/dpfs/usrN     --> /large0/usrN/user-id
                            /large1/usrN/user-id
Compile commands
Fortran:     frt
XPFortran:   xpfrt (successor to VPP Fortran)
C:           fcc
C++:         FCC
MPI Fortran: mpifrt
MPI C:       mpicc
Compiler options
-Kfast_GP2=3  : high-level optimization for this machine (GP2)
-Klargepage=2 : use large pages
-KV9          : use the SPARC V9 instruction set
-X9           : accept the Fortran 95 language level
An example of compilation:
frt -Knolargepage -o prog_nlg prog.f
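For reference, the compile line above would turn a source file like the
following minimal stand-in into the executable prog_nlg (the real
prog.f is of course the MHD simulation code):

      program prog
      implicit none
c     minimal stand-in so the compile line above can be tried as-is
      write(*,*) 'hello from PRIMEPOWER HPC2500'
      end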
Commands of NQS (Network Queuing System)
qsub  : submit a batch job
qstat : show the status of submitted jobs
qdel  : delete a submitted job
qcat  : show the output of a running job
--------------------------------------------------
(A1) Compile and execution of an MPI Fortran program
using 8 processors
8-process parallel
mpifrt -Lt progmpi.f -o progmpi08 -Kfast_GP2=3,V9,largepage=2 -Z mpilist
qsub mpi_lim08y20.sh
hpc% more mpi_lim08y20.sh
# @$-q p8 -lP 8 -eo -o pexecmpi08.out
# @$-me -lT 20:00:00 -lM 7gb
cd ./vpp05a/mearthb3
mpiexec -n 8 -mode limited ./progmpi08
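In the script above, -q selects the job class, -lP the number of
processors, -eo merges standard error into standard output, -o names
the output file, -me requests mail when the job ends, and -lT and -lM
set the time and memory limits.
The following is a minimal, hypothetical sketch of the MPI start-up
one would expect at the top of progmpi.f; the calls are standard MPI,
but the actual decomposition of the 3-D simulation box is not shown.

      program progmpi
      implicit none
      include 'mpif.h'
      integer ierr, rank, nproc
      call mpi_init(ierr)
      call mpi_comm_rank(mpi_comm_world, rank, ierr)
      call mpi_comm_size(mpi_comm_world, nproc, ierr)
c     each rank would work on its own slice of the 3-D grid here
      write(*,*) 'rank', rank, ' of', nproc
      call mpi_finalize(ierr)
      end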
(A2) Compile and execution of an MPI Fortran program
using 128 processors
128-process parallel
mpifrt -Lt progmpi.f -o progmpi128 -Kfast_GP2=3,V9,largepage=2 -Z mpilist
qsub mpiex_0128th01.sh
hpc% more mpiex_0128th01.sh
# @$-q p128 -lP 128 -eo -o progmpi128.out
# @$-lM 8.0gb -lT 600:00:00
setenv VPP_MBX_SIZE 1128000000
cd ./mearthd4/
mpiexec -n 128 -mode limited ./progmpi128
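VPP_MBX_SIZE enlarges the MPI message buffer. A large buffer matters
because a 3-D MHD run exchanges whole boundary planes between
neighbouring ranks at every step. The subroutine below is a hedged
sketch of such an exchange for a 1-D domain decomposition; the array
shape, plane indices, and neighbour arguments are all hypothetical.

      subroutine exchg(f, nx, ny, nz, left, right)
      implicit none
      include 'mpif.h'
      integer nx, ny, nz, left, right
      real f(nx, ny, nz)
      integer ierr, nplane, stat(mpi_status_size)
      nplane = nx*ny
c     send top interior plane to the right neighbour, receive the
c     bottom ghost plane from the left (ends pass mpi_proc_null)
      call mpi_sendrecv(f(1,1,nz-1), nplane, mpi_real, right, 1,
     &                  f(1,1,1),    nplane, mpi_real, left,  1,
     &                  mpi_comm_world, stat, ierr)
c     send bottom interior plane to the left neighbour, receive the
c     top ghost plane from the right
      call mpi_sendrecv(f(1,1,2),    nplane, mpi_real, left,  2,
     &                  f(1,1,nz),   nplane, mpi_real, right, 2,
     &                  mpi_comm_world, stat, ierr)
      return
      end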
(A3) Compile and execution of an MPI Fortran program
using 128 processors
32-process parallel
4-thread parallel (shared memory)
mpifrt -Lt progmpi.f -o progmpi128th04 -Kfast_GP2=3,V9,largepage=2 -Kparallel -Z mpilist
qsub mpiex_0128th04.sh
hpc% more mpiex_0128th04.sh
# @$-q p128 -lp 4 -lP 32 -eo -o progmpi128th04.out
# @$-lM 8.0gb -lT 600:00:00
setenv VPP_MBX_SIZE 1128000000
cd ./mearthd4/
mpiexec -n 32 -mode limited ./progmpi128th04
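Here -lp 4 gives the number of threads per process and -lP 32 the
number of processes (32 x 4 = 128 CPUs). With -Kparallel the compiler
thread-parallelizes suitable loop nests automatically; the subroutine
below is a hypothetical sketch of the kind of regular loop nest (a
6-point average) that can be split across the 4 threads of a process.

      subroutine smooth(f, g, nx, ny, nz)
      implicit none
      integer nx, ny, nz, i, j, k
      real f(nx,ny,nz), g(nx,ny,nz)
c     a regular nest like this is a candidate for automatic thread
c     parallelization by -Kparallel (the outer k loop is split)
      do k = 2, nz-1
         do j = 2, ny-1
            do i = 2, nx-1
               g(i,j,k) = (f(i+1,j,k) + f(i-1,j,k)
     &                   + f(i,j+1,k) + f(i,j-1,k)
     &                   + f(i,j,k+1) + f(i,j,k-1))/6.0
            end do
         end do
      end do
      return
      end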
(1) Compile and execution of an MPI Fortran program
using 16 processors
16-process parallel
mpifrt -Lt progmpi.f -o progmpi -Kfast_GP2=3,V9,largepage=2 -Z mpilist
qsub mpiexec_pp0016k1.sh
hpc% more mpiexec_pp0016k1.sh
# @$-q p16 -lP 16 -eo -o pexecmpi16.out
# @$-lM 10.0gb -lT 16:00:00
setenv VPP_MBX_SIZE 1256000000
cd /home/usr6/vpp/usr6/a41456a/mearthd3/
mpiexec -n 16 -mode limited ./progmpi
(2) Compile and execution of an MPI Fortran program with automatic
parallelization
using 128 processors
32-process parallel (distributed memory)
4-way automatic (thread) parallel (shared memory)
mpifrt -Lt progmpi.f -o progmpi128s4 -Kfast_GP2=3,V9,largepage=2 -Kparallel -Z mpilist
qsub mpiex_0128k2s4.sh
hpc% more mpiex_0128k2s4.sh
# @$-q p128 -lp 4 -lP 32 -eo -o pexecmpi128s4.out
# @$-lM 10.0gb -lT 2000:00:00
setenv VPP_MBX_SIZE 1256000000
cd /home/usr6/vpp/usr6/a41456a/mearthd3/
mpiexec -n 32 -mode limited ./progmpi128s4
(3) Compile and execution of a Fortran program (single processor)
qsub -q p8 -eo -o comp.out comp.sh
qsub -q p8 -eo -o exec.out exec.sh
hpc% more comp.sh
cd /home/usr6/vpp/usr6/a41456a/mearthd3
frt -o prog prog.f
hpc% more exec.sh
# @$ -lt 10:00
# @$-q x -eo
cd /home/usr6/vpp/usr6/a41456a/mearthd3
timex prog
--------------------------------------------------
(4) How to Use XPFortran
XPFortran is the successor to VPP Fortran.
Compile
xpfrt -o xprog xprog.f
Execution
xpfexec -vp NP -m MODE prog
NP: number of processors
MODE: full or limited
prog: name of the executable file (path + file name)
Execution by full mode (xp_prog)
qsub xpf_full.sh
In full mode, the number of CPUs specified with -lP must be one more than the number of CPUs actually used.
more xpf_full.sh
# @$-q p8 -eo -o xpf_full.out
# @$-lP 5
xpfexec -vp 4 ./xp_prog
Execution by limited mode (xp_prog)
qsub xpf_limited.sh
more xpf_limited.sh
# @$-q p8 -eo -o xpf_lim.out
# @$-lP 4
xpfexec -vp 4 -mode limited ./xp_prog
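For completeness, here is a minimal program sketch that xprog.f might
follow. Assumption: the !XOCL directive spellings below are recalled
from VPP Fortran conventions and may differ in XPFortran; to a plain
Fortran compiler they are only comments, so consult the xpfrt manual
for the exact forms.

!XOCL PROCESSOR P(4)
      program xprog
      implicit none
      integer i
      real a(1000)
c     assumed directive: distribute the loop iterations over P
!XOCL SPREAD DO
      do i = 1, 1000
         a(i) = real(i)
      end do
!XOCL END SPREAD
      write(*,*) 'a(1000) =', a(1000)
      end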
--------------------------------------------------