What do I need to install to turn this into a server?
$ sudo apt-cache search openmpi
gromacs-openmpi - Molecular dynamics sim, binaries for OpenMPI parallelization
libblacs-openmpi1 - Basic Linear Algebra Comm. Subprograms - Shared libs. for OpenMPI
libhdf5-openmpi-8 - Hierarchical Data Format 5 (HDF5) - runtime files - OpenMPI version
libhdf5-openmpi-8-dbg - Hierarchical Data Format 5 (HDF5) - OpenMPI Debug package
libhdf5-openmpi-dev - Hierarchical Data Format 5 (HDF5) - development files - OpenMPI version
libmeep-lam4-7 - library for using parallel (OpenMPI) version of meep
libmeep-lam4-dev - development library for using parallel (OpenMPI) version of meep
libmeep-mpi-default-dev - development library for using parallel (OpenMPI) version of meep
libmeep-mpi-default7 - library for using parallel (OpenMPI) version of meep
libmeep-mpich2-7 - library for using parallel (OpenMPI) version of meep
libmeep-mpich2-dev - development library for using parallel (OpenMPI) version of meep
libmeep-openmpi-dev - development library for using parallel (OpenMPI) version of meep
libmeep-openmpi7 - library for using parallel (OpenMPI) version of meep
libopenmpi-dev - high performance message passing library -- header files
libopenmpi1.6 - high performance message passing library -- shared library
libopenmpi1.6-dbg - high performance message passing library -- debug library
libscalapack-openmpi1 - Scalable Linear Algebra Package - Shared libs. for OpenMPI
meep-lam4 - software package for FDTD simulation, parallel (OpenMPI) version
meep-mpi-default - software package for FDTD simulation, parallel (OpenMPI) version
meep-mpich2 - software package for FDTD simulation, parallel (OpenMPI) version
meep-openmpi - software package for FDTD simulation, parallel (OpenMPI) version
mpqc-openmpi - Massively Parallel Quantum Chemistry Program (OpenMPI transitional package)
netpipe-openmpi - Network performance tool using OpenMPI
octave-openmpi-ext - Transitional package for parallel computing in Octave using MPI
openmpi-bin - high performance message passing library -- binaries
openmpi-checkpoint - high performance message passing library -- checkpoint support
openmpi-common - high performance message passing library -- common files
openmpi-doc - high performance message passing library -- man pages
openmpi1.6-common - high performance message passing library -- common files
openmpi1.6-doc - high performance message passing library -- man pages
openmpipython - MPI-enhanced Python interpreter (OpenMPI based version)
yorick-full - full installation of the Yorick interpreter and add-ons
yorick-mpy-openmpi - Message Passing Yorick (OpenMPI build)
$ sudo apt-cache search mpich
gromacs-mpich - Molecular dynamics sim, binaries for MPICH parallelization
libhdf5-mpich-8 - Hierarchical Data Format 5 (HDF5) - runtime files - MPICH2 version
libhdf5-mpich-8-dbg - Hierarchical Data Format 5 (HDF5) - Mpich Debug package
libhdf5-mpich-dev - Hierarchical Data Format 5 (HDF5) - development files - MPICH version
libhdf5-mpich2-dev - Hierarchical Data Format 5 (HDF5) - development files - MPICH version
libmeep-mpi-default-dev - development library for using parallel (OpenMPI) version of meep
libmeep-mpi-default7 - library for using parallel (OpenMPI) version of meep
libmeep-mpich2-7 - library for using parallel (OpenMPI) version of meep
libmeep-mpich2-dev - development library for using parallel (OpenMPI) version of meep
libmpich-dev - Development files for MPICH
libmpich12 - Shared libraries for MPICH
libmpich2-3 - Shared libraries for MPICH2
libmpich2-dev - Transitional dummy package for MPICH development files
libmpl-dev - Development files for mpl part of MPICH
libmpl1 - Shared libraries for mpl part of MPICH
libopa-dev - Development files for opa part of MPICH
libopa1 - Shared libraries for opa part of MPICH
libscalapack-mpi-dev - Scalable Linear Algebra Package - Dev. files for MPICH
meep-mpi-default - software package for FDTD simulation, parallel (OpenMPI) version
meep-mpich2 - software package for FDTD simulation, parallel (OpenMPI) version
mpb-mpi - MIT Photonic-Bands, parallel (mpich) version
mpi-default-bin - Standard MPI runtime programs (metapackage)
mpi-default-dev - Standard MPI development files (metapackage)
mpich - Implementation of the MPI Message Passing Interface standard
mpich-doc - Documentation for MPICH
mpich2 - Transitional dummy package
mpich2-doc - Transitional dummy package for MPICH documentation
mpich2python - MPI-enhanced Python interpreter (MPICH2 based version)
netpipe-mpich2 - Network performance tool using MPICH2 MPI
scalapack-mpi-test - Scalable Linear Algebra Package - Test files for MPICH
scalapack-test-common - Test data for ScaLAPACK testers
yorick-full - full installation of the Yorick interpreter and add-ons
yorick-mpy-mpich2 - Message Passing Yorick (MPICH2 build)
$ sudo apt-cache search mpirun
lam-runtime - LAM runtime environment for executing parallel programs
mpi-default-bin - Standard MPI runtime programs (metapackage)
[링크 : https://likymice.wordpress.com/2015/03/13/install-open-mpi-in-ubuntu-14-04-13-10/]
$ sudo apt-get install libcr-dev mpich2 mpich2-doc
[링크 : https://jetcracker.wordpress.com/2012/03/01/how-to-install-mpi-in-ubuntu/]
Not much difference between mpich and mpich2? Still, installing 2 seems to pull in 1 as well.
$ sudo apt-get install mpich
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins
  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1
  ocl-icd-libopencl1
Suggested packages:
  gfortran-doc gfortran-4.9-doc libgfortran3-dbg blcr-dkms
  libhwloc-contrib-plugins blcr-util mpich-doc opencl-icd
The following NEW packages will be installed:
  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins
  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1
  mpich ocl-icd-libopencl1
0 upgraded, 15 newly installed, 0 to remove and 3 not upgraded.
Need to get 6,879 kB of archives.
After this operation, 25.5 MB of additional disk space will be used.
Do you want to continue? [Y/n]
$ sudo apt-get install mpich2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins
  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1
  mpich ocl-icd-libopencl1
Suggested packages:
  gfortran-doc gfortran-4.9-doc libgfortran3-dbg blcr-dkms
  libhwloc-contrib-plugins blcr-util mpich-doc opencl-icd
The following NEW packages will be installed:
  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins
  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1
  mpich mpich2 ocl-icd-libopencl1
0 upgraded, 16 newly installed, 0 to remove and 3 not upgraded.
Need to get 6,905 kB of archives.
After this operation, 25.6 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Running mpirun with nothing to execute just errors out, of course:
$ mpirun
[mpiexec@raspberrypi] set_default_values (ui/mpich/utils.c:1528): no executable provided
[mpiexec@raspberrypi] HYD_uii_mpx_get_parameters (ui/mpich/utils.c:1739): setting default values failed
[mpiexec@raspberrypi] main (ui/mpich/mpiexec.c:153): error parsing parameters
I grabbed some sample source off the web and ran it, and it does work (roughly the sketch below)... Building a second server and running across machines is something to try later.
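The source was just a plain MPI hello world. A minimal sketch of what mpi.c could look like — this is my reconstruction, not the exact file I downloaded; it only assumes the standard MPI C calls (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Finalize):

/* mpi.c - minimal MPI hello world (sketch) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello world from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut the runtime down */
    return 0;
}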
$ mpicc mpi.c -o hello
$ mpirun -np 2 ./hello
Hello world from process 0 of 2
Hello world from process 1 of 2
-np stands for number of processes.
$ mpirun --help
Usage: ./mpiexec [global opts] [local opts for exec1] [exec1] [exec1 args] : [local opts for exec2] [exec2] [exec2 args] : ...

Global options (passed to all executables):

  Global environment options:
    -genv {name} {value}             environment variable name and value
    -genvlist {env1,env2,...}        environment variable list to pass
    -genvnone                        do not pass any environment variables
    -genvall                         pass all environment variables not managed by the launcher (default)

  Other global options:
    -f {name}                        file containing the host names
    -hosts {host list}               comma separated host list
    -wdir {dirname}                  working directory to use
    -configfile {name}               config file containing MPMD launch options

Local options (passed to individual executables):

  Local environment options:
    -env {name} {value}              environment variable name and value
    -envlist {env1,env2,...}         environment variable list to pass
    -envnone                         do not pass any environment variables
    -envall                          pass all environment variables (default)

  Other local options:
    -n/-np {value}                   number of processes
    {exec_name} {args}               executable name and arguments

Hydra specific options (treated as global):

  Launch options:
    -launcher                        launcher to use (ssh rsh fork slurm ll lsf sge manual persist)
    -launcher-exec                   executable to use to launch processes
    -enable-x/-disable-x             enable or disable X forwarding

  Resource management kernel options:
    -rmk                             resource management kernel to use (user slurm ll lsf sge pbs cobalt)

  Processor topology options:
    -topolib                         processor topology library (hwloc)
    -bind-to                         process binding
    -map-by                          process mapping
    -membind                         memory binding policy

  Checkpoint/Restart options:
    -ckpoint-interval                checkpoint interval
    -ckpoint-prefix                  checkpoint file prefix
    -ckpoint-num                     checkpoint number to restart
    -ckpointlib                      checkpointing library (blcr)

  Demux engine options:
    -demux                           demux engine (poll select)

  Other Hydra options:
    -verbose                         verbose mode
    -info                            build information
    -print-all-exitcodes             print exit codes of all processes
    -iface                           network interface to use
    -ppn                             processes per node
    -profile                         turn on internal profiling
    -prepend-rank                    prepend rank to output
    -prepend-pattern                 prepend pattern to output
    -outfile-pattern                 direct stdout to file
    -errfile-pattern                 direct stderr to file
    -nameserver                      name server information (host:port format)
    -disable-auto-cleanup            don't cleanup processes on error
    -disable-hostname-propagation    let MPICH auto-detect the hostname
    -order-nodes                     order nodes as ascending/descending cores
    -localhost                       local hostname for the launching node
    -usize                           universe size (SYSTEM, INFINITE, <value>)

Please see the intructions provided at http://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager for further details
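For the multi-machine run I'm postponing, the help above shows hosts can be passed with -f (a hostfile) or -hosts. A rough sketch with hypothetical hostnames — it assumes MPICH and the hello binary exist at the same path on every node and that passwordless ssh between the nodes is already set up:

$ cat hosts
raspberrypi
raspberrypi2
$ mpirun -f hosts -np 4 ./hello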