- With gst-launch-1.0, for Jetson:

$ gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 ! \
    decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=config_infer_primary_ssd.txt ! \
    nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

- With deepstream-app:

$ deepstream-app -c deepstream_app_config_ssd.txt
$ cat deepstream_app_config_ssd.txt
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file://../../samples/streams/sample_1080p_h264.mp4
gpu-id=0
cudadec-memtype=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=-1
## Set muxer output width and height
width=1920
height=1080
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
batch-size=1
gie-unique-id=1
interval=0
labelfile-path=/home/nvidia/tmp_onnx/labels.txt
#labelfile-path=ssd_coco_labels.txt
model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
config-file=config_infer_primary_ssd.txt
nvbuf-memory-type=0
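The deepstream-app config above is a plain INI file, so it can be inspected or tweaked programmatically before launching. A minimal sketch using Python's standard configparser (the embedded string is an abbreviated copy of the file above, shortened for illustration):

```python
import configparser

# Abbreviated excerpt of deepstream_app_config_ssd.txt (illustration only)
CONFIG = """
[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=-1
width=1920
height=1080

[primary-gie]
enable=1
batch-size=1
config-file=config_infer_primary_ssd.txt
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG)

# Muxer output resolution: every source frame is scaled/batched to this size
mux = parser["streammux"]
print(mux["width"], mux["height"])           # 1920 1080

# The primary GIE points at the nvinfer config file shown below
print(parser["primary-gie"]["config-file"])  # config_infer_primary_ssd.txt
```

The same approach works on the real file with `parser.read("deepstream_app_config_ssd.txt")`, which is handy when scripting batch-size or resolution sweeps.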
$ cat config_infer_primary_ssd.txt
[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
# yw
onnx-file=/home/nvidia/tmp_onnx/model.onnx
labelfile=/home/nvidia/tmp_onnx/labels.txt
model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
labelfile-path=ssd_coco_labels.txt
uff-file=sample_ssd_relu6.uff
infer-dims=3;300;300
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=91
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=MarkOutput_0
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

## Per class configuration
#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800
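The `net-scale-factor` and `offsets` properties define the per-channel input normalization nvinfer applies before inference: y = net-scale-factor * (x - offset). With a factor of 0.0078431372 (approximately 1/127.5) and offsets of 127.5, pixel values in [0, 255] map to roughly [-1, 1], which is what this SSD Inception v2 model expects. A quick check in plain Python (the function name and constants here are mine, taken from the config values above):

```python
# Values from config_infer_primary_ssd.txt
NET_SCALE_FACTOR = 0.0078431372  # ~ 1/127.5
OFFSET = 127.5                   # same offset for all three channels (127.5;127.5;127.5)

def normalize(pixel: float) -> float:
    """Per-channel normalization nvinfer applies: y = scale * (x - offset)."""
    return NET_SCALE_FACTOR * (pixel - OFFSET)

print(normalize(0.0))    # ~ -1.0 (darkest pixel)
print(normalize(127.5))  # 0.0 (mid-gray)
print(normalize(255.0))  # ~ +1.0 (brightest pixel)
```

If you swap in a model trained on [0, 1] inputs instead, you would set net-scale-factor to 1/255 (0.00392156862) and offsets to 0.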