Other posts in the 'Programming > ffmpeg' category
ffplay.c (0) | 2017.03.06 |
---|---|
ffmpeg example source analysis (0) | 2017.02.10 |
ffmpeg - VLC cache settings (0) | 2017.02.10 |
ffmpeg + opengl (0) | 2017.02.09 |
ffmpeg / ffplay delay analysis (0) | 2017.02.09 |
So where on earth does VLC's :network-caching setting actually apply?!
Structurally there are at least three buffers:
the RTSP-side network buffer
the ffmpeg avcodec decoder-side buffer
the video buffer in SDL and the like
Three buffers already — which one do you have to touch to cut latency?
[Link : https://wiki.videolan.org/..HowTo/Advanced_Streaming_Using_the_Command_Line/]
[Link : http://stackoverflow.com/.../which-functions-of-live555-is-used-in-vlc-for-network-caching-option]
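For reference, when driving VLC from code rather than from the GUI, libVLC exposes the same knob as a per-media option. A minimal sketch, assuming libVLC is installed and using a placeholder camera URL:

```c
/* Sketch: lowering VLC's network cache from code via libVLC.
 * Assumes libvlc dev files are installed; the RTSP URL is a placeholder. */
#include <vlc/vlc.h>

int main(void)
{
    libvlc_instance_t *vlc = libvlc_new(0, NULL);
    libvlc_media_t *m =
        libvlc_media_new_location(vlc, "rtsp://user:pass@CamIP:Port/URL");

    /* :network-caching is a per-media option in milliseconds
     * (default 1000) — the knob the pages above are discussing. */
    libvlc_media_add_option(m, ":network-caching=300");

    libvlc_media_player_t *mp = libvlc_media_player_new_from_media(m);
    libvlc_media_release(m);
    libvlc_media_player_play(mp);

    /* ... run an event loop / sleep here ... */

    libvlc_media_player_release(mp);
    libvlc_release(vlc);
    return 0;
}
```

Build with something like `gcc demo.c $(pkg-config --cflags --libs libvlc)`; which internal buffer the option ends up sizing is exactly the question the links above try to answer.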
[Link : http://aslike.egloos.com/category/└%20FFMPEG] // series discontinued, so no OpenGL material
[Link : http://sijoo.tistory.com/82]
Looking at VLC's extra RTSP options, a cache of 1000 ms is set by default; searching for where the equivalent is configured..
ffplay's max_delay?
ffplay -max_delay 500000 -rtsp_transport udp -v trace "rtsp://user:pass@CamIP:Port/URL"
[Link : https://github.com/ZoneMinder/ZoneMinder/issues/811]
-max_delay <int> ED... maximum muxing or demuxing delay in microseconds
Reading it again.. it does affect latency, but it has no effect over TCP —
it is the packet-reordering window for UDP, so skipping it for now~
When receiving data over UDP, the demuxer tries to reorder received packets (since they may arrive out of order, or packets may get lost totally). This can be disabled by setting the maximum demuxing delay to zero (via the max_delay field of AVFormatContext).
[Link : https://ffmpeg.org/ffmpeg-protocols.html#rtsp]
[Link : https://ffmpeg.org/doxygen/2.7/structAVFormatContext.html#a58422ed3d461b3440a15cf057ac5f5b7]
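The same max_delay can also be passed from code through the options dictionary of avformat_open_input. A minimal sketch, assuming the libavformat dev headers; the URL is a placeholder:

```c
/* Sketch: disabling the UDP reorder window (max_delay) when opening
 * an RTSP stream with libavformat. The URL is a placeholder. */
#include <libavformat/avformat.h>

int main(void)
{
    AVFormatContext *fmt = NULL;
    AVDictionary *opts = NULL;

    avformat_network_init();

    /* Value is in microseconds; 0 disables reordering entirely.
     * The ffplay example above used 500000 (0.5 s). */
    av_dict_set(&opts, "max_delay", "0", 0);
    av_dict_set(&opts, "rtsp_transport", "udp", 0);

    if (avformat_open_input(&fmt, "rtsp://CamIP:Port/URL", NULL, &opts) < 0)
        return 1;

    avformat_find_stream_info(fmt, NULL);
    /* ... read packets ... */
    avformat_close_input(&fmt);
    av_dict_free(&opts);
    return 0;
}
```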
buffer_size, passed through the AVFormatContext options dictionary
the cache buffer size
av_dict_set(&options, "buffer_size", "655360", 0);
[Link : http://stackoverflow.com/questions/29075467/set-rtsp-udp-buffer-size-in-ffmpeg-libav]
[Link : https://git.libav.org/?p=libav.git;a=commit;h=e3ec6fe7bb2a622a863e3912181717a659eb1bad] commit log
[Link : https://git.libav.org/?p=libav.git;a=blob;f=libavformat/rtsp.c;h...d] full rtsp source
[Link : https://git.libav.org/?p=libav.git;a=blobdiff;f=libavformat/rtsp.c;h...f] diff
1547 AVDictionary *metadata;
[Link : https://ffmpeg.org/doxygen/trunk/avformat_8h_source.html#l01331] AVFormatContext structure
int av_dict_set (AVDictionary ** pm, const char * key, const char * value, int flags)
[Link : https://www.ffmpeg.org/doxygen/2.7/group__lavu__dict.html#ga8d9c2de72b310cef8e6a28c9cd3acbbe]
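Putting the av_dict_set line above into context — a sketch, assuming libavformat and a placeholder URL, that passes buffer_size and then checks whether it was actually consumed (avformat_open_input leaves unrecognized options behind in the dictionary):

```c
/* Sketch: passing the UDP buffer_size option and verifying it was
 * consumed by the demuxer/protocol. The URL is a placeholder. */
#include <stdio.h>
#include <libavformat/avformat.h>

int main(void)
{
    AVFormatContext *fmt = NULL;
    AVDictionary *opts = NULL;
    AVDictionaryEntry *e = NULL;

    avformat_network_init();
    av_dict_set(&opts, "buffer_size", "655360", 0);  /* bytes */

    if (avformat_open_input(&fmt, "rtsp://CamIP:Port/URL", NULL, &opts) < 0)
        return 1;

    /* Options nobody recognized stay in the dict, so this prints
     * nothing when buffer_size was accepted. */
    while ((e = av_dict_get(opts, "", e, AV_DICT_IGNORE_SUFFIX)))
        fprintf(stderr, "unconsumed option: %s=%s\n", e->key, e->value);

    avformat_close_input(&fmt);
    av_dict_free(&opts);
    return 0;
}
```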
rc_buffer_size on AVCodecContext
int AVCodecContext::rc_buffer_size — decoder bitstream buffer size. encoding: set by user. decoding: unused. Definition at line 2291 of file avcodec.h.
[Link : https://www.ffmpeg.org/doxygen/2.5/structAVCodecContext.html#a15000607a7e2371162348bb35b0184c1]
For now... it looks like setting these two options lets you check in which stage the delays build up?
Also setting -probesize and -analyzeduration to low values may help your stream start up more quickly (it uses these to scan for "streams" in certain muxers, like ts, where some can appears "later", and also to estimate the duration, which, for live streams, the latter you don't need anyway). This should be unneeded by dshow input.
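The -probesize / -analyzeduration advice in the quote maps onto the same options-dictionary mechanism; a sketch with illustrative values and a placeholder URL:

```c
/* Sketch: shrinking startup probing for a live stream, per the quote
 * above. URL is a placeholder; the values are illustrative only. */
#include <libavformat/avformat.h>

int main(void)
{
    AVFormatContext *fmt = NULL;
    AVDictionary *opts = NULL;

    avformat_network_init();
    av_dict_set(&opts, "probesize",       "32768",  0); /* bytes to sniff   */
    av_dict_set(&opts, "analyzeduration", "500000", 0); /* microseconds     */

    if (avformat_open_input(&fmt, "rtsp://CamIP:Port/URL", NULL, &opts) < 0)
        return 1;

    avformat_find_stream_info(fmt, NULL);  /* returns faster with low values */
    /* ... */
    avformat_close_input(&fmt);
    av_dict_free(&opts);
    return 0;
}
```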
Looks like I will need to build a Windows live555 + ffmpeg program, so researching in advance..
This one uses SDL for performance.. RTSP through live555 seems to have been left out.
sdl / ffmpeg / vs2010
[Link : https://sourceforge.net/projects/simplestffmpegplayer/]
[Link : https://sourceforge.net/u/leixiaohua1020/wiki/Home/]
[Link : https://www.libsdl.org/]
113 MB ... whew
[Link : https://www.imc-store.com.au/Articles.asp?ID=278]
Looks a bit thin somehow..
sdl / ffmpeg / live555 / visual studio
Maybe it differs from server to server... with lighttpd, sending just the GET line returns a 400 error,
and it only answers once you send the Host header after GET.
GET /bin/login?User=Peter+Lee&pw=123456&action=login HTTP/1.1
Host: 127.0.0.1:8000

POST /bin/login HTTP/1.1
Host: 127.0.0.1:8000
Accept: image/gif, image/jpeg, */*
Referer: http://127.0.0.1:8000/login.html
Accept-Language: en-us
Content-Type: application/x-www-form-urlencoded
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)
Content-Length: 37
Connection: Keep-Alive
Cache-Control: no-cache

User=Peter+Lee&pw=123456&action=login
[링크 : https://www.ntu.edu.sg/home/ehchua/programming/webprogramming/HTTP_Basics.html]
GET
/test/demo_form.asp?name1=value1&name2=value2
POST
POST /test/demo_form.asp HTTP/1.1
Host: w3schools.com

name1=value1&name2=value2
[링크 : http://www.w3schools.com/Tags/ref_httpmethods.asp]
[링크 : https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html]
S.N. | Method and Description |
---|---|
1 | GET The GET method is used to retrieve information from the given server using a given URI. Requests using GET should only retrieve data and should have no other effect on the data. |
2 | HEAD Same as GET, but transfers the status line and header section only. |
3 | POST A POST request is used to send data to the server, for example, customer information, file upload, etc. using HTML forms. |
4 | PUT Replaces all current representations of the target resource with the uploaded content. |
5 | DELETE Removes all current representations of the target resource given by a URI. |
6 | CONNECT Establishes a tunnel to the server identified by a given URI. |
7 | OPTIONS Describes the communication options for the target resource. |
8 | TRACE Performs a message loop-back test along the path to the target resource. |
[링크 : https://www.tutorialspoint.com/http/http_methods.htm]
[링크 : https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods]
http digest (0) | 2017.03.03 |
---|---|
http header (0) | 2017.03.02 |
hashing salt (0) | 2017.01.27 |
NPAPI / PPAPI - VLC ... (0) | 2016.01.14 |
HTML5 video player 720p/1080p playback support (0) | 2016.01.13 |
Hmm.. I should get a picture of the structure first..
There is no mpich2 package on the odroid side.. maybe because the release version differs?
Anyway, mpiexec and mpirun both end up linked to the same executable.
lrwxrwxrwx 1 root root 24 Feb  5 12:43 /usr/bin/mpirun -> /etc/alternatives/mpirun*
lrwxrwxrwx 1 root root 25 Feb  5 12:43 /usr/bin/mpiexec -> /etc/alternatives/mpiexec*
lrwxrwxrwx 1 root root 21 Feb  5 12:43 /etc/alternatives/mpirun -> /usr/bin/mpirun.mpich*
lrwxrwxrwx 1 root root 22 Feb  5 12:43 /etc/alternatives/mpiexec -> /usr/bin/mpiexec.mpich*
lrwxrwxrwx 1 root root 13 Feb  7  2016 /usr/bin/mpirun.mpich -> mpiexec.hydra*
lrwxrwxrwx 1 root root 13 Feb  7  2016 /usr/bin/mpiexec.mpich -> mpiexec.hydra*
$ mpiexec --help
Usage: ./mpiexec [global opts] [local opts for exec1] [exec1] [exec1 args] : [local opts for exec2] [exec2] [exec2 args] : ...

Global options (passed to all executables):

  Global environment options:
    -genv {name} {value}        environment variable name and value
    -genvlist {env1,env2,...}   environment variable list to pass
    -genvnone                   do not pass any environment variables
    -genvall                    pass all environment variables not managed by the launcher (default)

  Other global options:
    -f {name}                   file containing the host names
The host file is written like below; I wonder if winbind names would work here too?
ub3:4 # this will spawn 4 processes on ub3
ub2:2 # this will spawn 2 processes on ub2
ub1   # this will spawn 1 process on ub1
ub0   # this will spawn 1 process on ub0
[Link : https://help.ubuntu.com/community/MpichCluster]
[Link : http://www.brianjp93.com/blog/building-a-beowulf-cluster-ubuntu-1404-server-lts/]
[Link : https://www.digitalocean.com/.../how-to-create-a-beowulf-cluster-using-ubuntu-12-04-vps-instances]
[Link : http://techtinkering.com/2009/12/02/setting-up-a-beowulf-cluster-using-open-mpi-on-linux/]
Skimming the setup guides...
the machine that runs the build becomes the host node; it creates a directory shared over NFS, and the work nodes mount it.
For launching you apparently need SSH set up with keys so it can run without a password..
Since this stuff dates way back.. would rlogin or telnet work too?
$ mpiexec -info
HYDRA build details:
    Version:                                 3.2
    Release Date:                            Wed Nov 11 22:06:48 CST 2015
    CC:                                      gcc -Wl,-Bsymbolic-functions -Wl,-z,relro
    CXX:                                     g++ -Wl,-Bsymbolic-functions -Wl,-z,relro
    F77:                                     gfortran -Wl,-Bsymbolic-functions -Wl,-z,relro
    F90:                                     gfortran -Wl,-Bsymbolic-functions -Wl,-z,relro
    Configure options:                       '--disable-option-checking' '--prefix=/usr' '--build=arm-linux-gnueabihf' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-silent-rules' '--libdir=${prefix}/lib/arm-linux-gnueabihf' '--libexecdir=${prefix}/lib/arm-linux-gnueabihf' '--disable-maintainer-mode' '--disable-dependency-tracking' '--enable-shared' '--enable-fortran=all' '--disable-rpath' '--disable-wrapper-rpath' '--sysconfdir=/etc/mpich' '--libdir=/usr/lib/arm-linux-gnueabihf' '--includedir=/usr/include/mpich' '--docdir=/usr/share/doc/mpich' '--with-hwloc-prefix=system' 'CPPFLAGS= -Wdate-time -D_FORTIFY_SOURCE=2 -I/build/mpich-52KREu/mpich-3.2/src/mpl/include -I/build/mpich-52KREu/mpich-3.2/src/mpl/include -I/build/mpich-52KREu/mpich-3.2/src/openpa/src -I/build/mpich-52KREu/mpich-3.2/src/openpa/src -D_REENTRANT -I/build/mpich-52KREu/mpich-3.2/src/mpi/romio/include' 'CFLAGS= -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -O2' 'CXXFLAGS= -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -O2' 'FFLAGS= -g -O2 -fstack-protector-strong -O2' 'FCFLAGS= -g -O2 -fstack-protector-strong -O2' 'build_alias=arm-linux-gnueabihf' 'MPICHLIB_CFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security' 'MPICHLIB_CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' 'MPICHLIB_CXXFLAGS=-g -O2 -fstack-protector-strong -Wformat -Werror=format-security' 'MPICHLIB_FFLAGS=-g -O2 -fstack-protector-strong' 'MPICHLIB_FCFLAGS=-g -O2 -fstack-protector-strong' 'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro' 'FC=gfortran' 'F77=gfortran' 'MPILIBNAME=mpich' '--cache-file=/dev/null' '--srcdir=.' 'CC=gcc' 'LIBS=-lpthread '
    Process Manager:                         pmi
    Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist
    Topology libraries available:            hwloc
    Resource management kernels available:   user slurm ll lsf sge pbs cobalt
    Checkpointing libraries available:       blcr
    Demux engines available:                 poll select
Installing openMPI as a service...? (0) | 2019.04.02 |
---|---|
opempi packages (0) | 2017.02.04 |
openmpi with heterogenerous (0) | 2017.02.03 |
openmpi with openmp (0) | 2017.02.03 |
OpenMP and OpenMPI (2) | 2011.09.19 |
So what do I need to install to turn this into a server?
$ sudo apt-cache search openmpi
gromacs-openmpi - Molecular dynamics sim, binaries for OpenMPI parallelization
libblacs-openmpi1 - Basic Linear Algebra Comm. Subprograms - Shared libs. for OpenMPI
libhdf5-openmpi-8 - Hierarchical Data Format 5 (HDF5) - runtime files - OpenMPI version
libhdf5-openmpi-8-dbg - Hierarchical Data Format 5 (HDF5) - OpenMPI Debug package
libhdf5-openmpi-dev - Hierarchical Data Format 5 (HDF5) - development files - OpenMPI version
libmeep-lam4-7 - library for using parallel (OpenMPI) version of meep
libmeep-lam4-dev - development library for using parallel (OpenMPI) version of meep
libmeep-mpi-default-dev - development library for using parallel (OpenMPI) version of meep
libmeep-mpi-default7 - library for using parallel (OpenMPI) version of meep
libmeep-mpich2-7 - library for using parallel (OpenMPI) version of meep
libmeep-mpich2-dev - development library for using parallel (OpenMPI) version of meep
libmeep-openmpi-dev - development library for using parallel (OpenMPI) version of meep
libmeep-openmpi7 - library for using parallel (OpenMPI) version of meep
libopenmpi-dev - high performance message passing library -- header files
libopenmpi1.6 - high performance message passing library -- shared library
libopenmpi1.6-dbg - high performance message passing library -- debug library
libscalapack-openmpi1 - Scalable Linear Algebra Package - Shared libs. for OpenMPI
meep-lam4 - software package for FDTD simulation, parallel (OpenMPI) version
meep-mpi-default - software package for FDTD simulation, parallel (OpenMPI) version
meep-mpich2 - software package for FDTD simulation, parallel (OpenMPI) version
meep-openmpi - software package for FDTD simulation, parallel (OpenMPI) version
mpqc-openmpi - Massively Parallel Quantum Chemistry Program (OpenMPI transitional package)
netpipe-openmpi - Network performance tool using OpenMPI
octave-openmpi-ext - Transitional package for parallel computing in Octave using MPI
openmpi-bin - high performance message passing library -- binaries
openmpi-checkpoint - high performance message passing library -- checkpoint support
openmpi-common - high performance message passing library -- common files
openmpi-doc - high performance message passing library -- man pages
openmpi1.6-common - high performance message passing library -- common files
openmpi1.6-doc - high performance message passing library -- man pages
openmpipython - MPI-enhanced Python interpreter (OpenMPI based version)
yorick-full - full installation of the Yorick interpreter and add-ons
yorick-mpy-openmpi - Message Passing Yorick (OpenMPI build)
$ sudo apt-cache search mpich
gromacs-mpich - Molecular dynamics sim, binaries for MPICH parallelization
libhdf5-mpich-8 - Hierarchical Data Format 5 (HDF5) - runtime files - MPICH2 version
libhdf5-mpich-8-dbg - Hierarchical Data Format 5 (HDF5) - Mpich Debug package
libhdf5-mpich-dev - Hierarchical Data Format 5 (HDF5) - development files - MPICH version
libhdf5-mpich2-dev - Hierarchical Data Format 5 (HDF5) - development files - MPICH version
libmeep-mpi-default-dev - development library for using parallel (OpenMPI) version of meep
libmeep-mpi-default7 - library for using parallel (OpenMPI) version of meep
libmeep-mpich2-7 - library for using parallel (OpenMPI) version of meep
libmeep-mpich2-dev - development library for using parallel (OpenMPI) version of meep
libmpich-dev - Development files for MPICH
libmpich12 - Shared libraries for MPICH
libmpich2-3 - Shared libraries for MPICH2
libmpich2-dev - Transitional dummy package for MPICH development files
libmpl-dev - Development files for mpl part of MPICH
libmpl1 - Shared libraries for mpl part of MPICH
libopa-dev - Development files for opa part of MPICH
libopa1 - Shared libraries for opa part of MPICH
libscalapack-mpi-dev - Scalable Linear Algebra Package - Dev. files for MPICH
meep-mpi-default - software package for FDTD simulation, parallel (OpenMPI) version
meep-mpich2 - software package for FDTD simulation, parallel (OpenMPI) version
mpb-mpi - MIT Photonic-Bands, parallel (mpich) version
mpi-default-bin - Standard MPI runtime programs (metapackage)
mpi-default-dev - Standard MPI development files (metapackage)
mpich - Implementation of the MPI Message Passing Interface standard
mpich-doc - Documentation for MPICH
mpich2 - Transitional dummy package
mpich2-doc - Transitional dummy package for MPICH documentation
mpich2python - MPI-enhanced Python interpreter (MPICH2 based version)
netpipe-mpich2 - Network performance tool using MPICH2 MPI
scalapack-mpi-test - Scalable Linear Algebra Package - Test files for MPICH
scalapack-test-common - Test data for ScaLAPACK testers
yorick-full - full installation of the Yorick interpreter and add-ons
yorick-mpy-mpich2 - Message Passing Yorick (MPICH2 build)
$ sudo apt-cache search mpirun
lam-runtime - LAM runtime environment for executing parallel programs
mpi-default-bin - Standard MPI runtime programs (metapackage)
[Link : https://likymice.wordpress.com/2015/03/13/install-open-mpi-in-ubuntu-14-04-13-10/]
$ sudo apt-get install libcr-dev mpich2 mpich2-doc
[Link : https://jetcracker.wordpress.com/2012/03/01/how-to-install-mpi-in-ubuntu/]
So mpich and mpich2 are barely different? Judging by the output below, mpich2 is a transitional package that just pulls in mpich.
$ sudo apt-get install mpich
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins
  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1
  ocl-icd-libopencl1
Suggested packages:
  gfortran-doc gfortran-4.9-doc libgfortran3-dbg blcr-dkms
  libhwloc-contrib-plugins blcr-util mpich-doc opencl-icd
The following NEW packages will be installed:
  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins
  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1
  mpich ocl-icd-libopencl1
0 upgraded, 15 newly installed, 0 to remove and 3 not upgraded.
Need to get 6,879 kB of archives.
After this operation, 25.5 MB of additional disk space will be used.
Do you want to continue? [Y/n]
$ sudo apt-get install mpich2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins
  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1
  mpich ocl-icd-libopencl1
Suggested packages:
  gfortran-doc gfortran-4.9-doc libgfortran3-dbg blcr-dkms
  libhwloc-contrib-plugins blcr-util mpich-doc opencl-icd
The following NEW packages will be installed:
  gfortran gfortran-4.9 hwloc-nox libcr0 libgfortran-4.9-dev libhwloc-plugins
  libhwloc5 libmpich-dev libmpich12 libmpl-dev libmpl1 libopa-dev libopa1
  mpich mpich2 ocl-icd-libopencl1
0 upgraded, 16 newly installed, 0 to remove and 3 not upgraded.
Need to get 6,905 kB of archives.
After this operation, 25.6 MB of additional disk space will be used.
Do you want to continue? [Y/n]
With nothing given to execute, it just errors out:
$ mpirun
[mpiexec@raspberrypi] set_default_values (ui/mpich/utils.c:1528): no executable provided
[mpiexec@raspberrypi] HYD_uii_mpx_get_parameters (ui/mpich/utils.c:1739): setting default values failed
[mpiexec@raspberrypi] main (ui/mpich/mpiexec.c:153): error parsing parameters
Grabbed some sample source and ran it — works.. setting up other servers to run against is something to try later..
$ mpicc mpi.c -o hello
$ mpirun -np 2 ./hello
Hello world from process 0 of 2
Hello world from process 1 of 2
-np stands for number of processes
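The mpi.c compiled above is not shown; a minimal version that matches the output would be the classic MPI hello world (assuming mpich's mpicc, as installed above):

```c
/* mpi.c — minimal MPI hello world matching the session above.
 * Build and run:
 *   mpicc mpi.c -o hello
 *   mpirun -np 2 ./hello */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count    */

    printf("Hello world from process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

With a host file passed via -f, the same binary spreads across the nodes listed there.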
$ mpirun --help
Usage: ./mpiexec [global opts] [local opts for exec1] [exec1] [exec1 args] : [local opts for exec2] [exec2] [exec2 args] : ...

Global options (passed to all executables):

  Global environment options:
    -genv {name} {value}             environment variable name and value
    -genvlist {env1,env2,...}        environment variable list to pass
    -genvnone                        do not pass any environment variables
    -genvall                         pass all environment variables not managed by the launcher (default)

  Other global options:
    -f {name}                        file containing the host names
    -hosts {host list}               comma separated host list
    -wdir {dirname}                  working directory to use
    -configfile {name}               config file containing MPMD launch options

Local options (passed to individual executables):

  Local environment options:
    -env {name} {value}              environment variable name and value
    -envlist {env1,env2,...}         environment variable list to pass
    -envnone                         do not pass any environment variables
    -envall                          pass all environment variables (default)

  Other local options:
    -n/-np {value}                   number of processes
    {exec_name} {args}               executable name and arguments

Hydra specific options (treated as global):

  Launch options:
    -launcher                        launcher to use (ssh rsh fork slurm ll lsf sge manual persist)
    -launcher-exec                   executable to use to launch processes
    -enable-x/-disable-x             enable or disable X forwarding

  Resource management kernel options:
    -rmk                             resource management kernel to use (user slurm ll lsf sge pbs cobalt)

  Processor topology options:
    -topolib                         processor topology library (hwloc)
    -bind-to                         process binding
    -map-by                          process mapping
    -membind                         memory binding policy

  Checkpoint/Restart options:
    -ckpoint-interval                checkpoint interval
    -ckpoint-prefix                  checkpoint file prefix
    -ckpoint-num                     checkpoint number to restart
    -ckpointlib                      checkpointing library (blcr)

  Demux engine options:
    -demux                           demux engine (poll select)

  Other Hydra options:
    -verbose                         verbose mode
    -info                            build information
    -print-all-exitcodes             print exit codes of all processes
    -iface                           network interface to use
    -ppn                             processes per node
    -profile                         turn on internal profiling
    -prepend-rank                    prepend rank to output
    -prepend-pattern                 prepend pattern to output
    -outfile-pattern                 direct stdout to file
    -errfile-pattern                 direct stderr to file
    -nameserver                      name server information (host:port format)
    -disable-auto-cleanup            don't cleanup processes on error
    -disable-hostname-propagation    let MPICH auto-detect the hostname
    -order-nodes                     order nodes as ascending/descending cores
    -localhost                       local hostname for the launching node
    -usize                           universe size (SYSTEM, INFINITE, <value>)

Please see the intructions provided at http://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager for further details
So it apparently even works across architectures (arm + x86).. how does that even work?!
[Link : https://rafaelaroca.wordpress.com/2011/08/31/mpi-on-arm/]
An openMP + openMPI example
Anyway, I need to look into setting up the server first
[Link : http://www.slac.stanford.edu/comp/unix/farm/mpi_and_openmp.html]