Looking at the source, the entries pushed into objectList take x, y, w, h expressed at the inference (network input) resolution.
In other words, the normalized network output has to be scaled, e.g. x = output * width, and likewise for the other fields.
$ cat config_infer_primary_ssd.txt
num-detected-classes=91
The setting above is passed into the DeepStream plugin as the following value:
detectionParams.numClassesConfigured
$ gst-inspect-1.0 nvinfer
Factory Details:
  Rank                     primary (256)
  Long-name                NvInfer plugin
  Klass                    NvInfer Plugin
  Description              Nvidia DeepStreamSDK TensorRT plugin
  Author                   NVIDIA Corporation. Deepstream for Tesla forum: https://devtalk.nvidia.com/default/board/209

Plugin Details:
  Name                     nvdsgst_infer
  Description              NVIDIA DeepStreamSDK TensorRT plugin
  Filename                 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
  Version                  6.0.1
  License                  Proprietary
  Source module            nvinfer
  Binary package           NVIDIA DeepStreamSDK TensorRT plugin
  Origin URL               http://nvidia.com/

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstNvInfer

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { (string)NV12, (string)RGBA }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

  SRC template: 'src'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { (string)NV12, (string)RGBA }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "nvinfer0"
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  unique-id           : Unique ID for the element. Can be used to identify output of the element
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 15
  process-mode        : Infer processing mode
                        flags: readable, writable, changeable only in NULL or READY state
                        Enum "GstNvInferProcessModeType" Default: 1, "primary"
                           (1): primary          - Primary (Full Frame)
                           (2): secondary        - Secondary (Objects)
  config-file-path    : Path to the configuration file for this instance of nvinfer
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        String. Default: ""
  infer-on-gie-id     : Infer on metadata generated by GIE with this unique ID. Set to -1 to infer on all metadata.
                        flags: readable, writable, changeable only in NULL or READY state
                        Integer. Range: -1 - 2147483647 Default: -1
  infer-on-class-ids  : Operate on objects with specified class ids. Use string with values of class ids in ClassID (int) to set the property. e.g. 0:2:3
                        flags: readable, writable, changeable only in NULL or READY state
                        String. Default: ""
  filter-out-class-ids: Ignore metadata for objects of specified class ids. Use string with values of class ids in ClassID (int) to set the property. e.g. 0;2;3
                        flags: readable, writable, changeable only in NULL or READY state
                        String. Default: ""
  model-engine-file   : Absolute path to the pre-generated serialized engine file for the model
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        String. Default: ""
  batch-size          : Maximum batch size for inference
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 1 - 1024 Default: 1
  interval            : Specifies number of consecutive batches to be skipped for inference
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 2147483647 Default: 0
  gpu-id              : Set GPU Device ID
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0
  raw-output-file-write: Write raw inference output to file
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  raw-output-generated-callback: Pointer to the raw output generated callback funtion (type: gst_nvinfer_raw_output_generated_callback in 'gstnvdsinfer.h')
                        flags: readable, writable, changeable only in NULL or READY state
                        Pointer.
  raw-output-generated-userdata: Pointer to the userdata to be supplied with raw output generated callback
                        flags: readable, writable, changeable only in NULL or READY state
                        Pointer.
  output-tensor-meta  : Attach inference tensor outputs as buffer metadata
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  output-instance-mask: Instance mask expected in network output and attach it to metadata
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  input-tensor-meta   : Use preprocessed input tensors attached as metadata instead of preprocessing inside the plugin
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false

Element Signals:
  "model-updated" :  void user_function (GstElement* object,
                                         gint arg0,
                                         gchararray arg1,
                                         gpointer user_data);
[링크 : https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html]
model-file (Caffe model)
proto-file (Caffe model)
uff-file (UFF models)
onnx-file (ONNX models)
model-engine-file, if already generated
int8-calib-file for INT8 mode
mean-file, if required
offsets, if required
maintain-aspect-ratio, if required
parse-bbox-func-name (detectors only)
parse-classifier-func-name (classifiers only)
custom-lib-path
output-blob-names (Caffe and UFF models)
network-type
model-color-format
process-mode
engine-create-func-name
infer-dims (UFF models)
uff-input-order (UFF models)
[링크 : https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_using_custom_model.html]
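Tying the keys above together, a minimal config_infer fragment for a UFF SSD might look like the sketch below. The file names, blob names, and scale values here are illustrative assumptions modeled on the DeepStream objectDetector_SSD sample, not values taken from this post:

```
[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
# hypothetical paths for illustration
uff-file=sample_ssd.uff
model-engine-file=sample_ssd.uff_b1_gpu0_fp16.engine
infer-dims=3;300;300
uff-input-order=0
output-blob-names=MarkOutput_0
network-type=0
num-detected-classes=91
batch-size=1
process-mode=1
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=libnvdsinfer_custom_impl_ssd.so
```

With this in place, the file is handed to the element via the config-file-path property listed in the gst-inspect output above.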