$ gst-inspect-1.0 nvinfer
Factory Details:
  Rank                     primary (256)
  Long-name                NvInfer plugin
  Klass                    NvInfer Plugin
  Description              Nvidia DeepStreamSDK TensorRT plugin
  Author                   NVIDIA Corporation. Deepstream for Tesla forum: https://devtalk.nvidia.com/default/board/209

Plugin Details:
  Name                     nvdsgst_infer
  Description              NVIDIA DeepStreamSDK TensorRT plugin
  Filename                 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
  Version                  6.0.1
  License                  Proprietary
  Source module            nvinfer
  Binary package           NVIDIA DeepStreamSDK TensorRT plugin
  Origin URL               http://nvidia.com/
GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstNvInfer
Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { (string)NV12, (string)RGBA }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

  SRC template: 'src'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { (string)NV12, (string)RGBA }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]
Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'
Element Properties:
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "nvinfer0"
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  unique-id           : Unique ID for the element. Can be used to identify output of the element
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 15
  process-mode        : Infer processing mode
                        flags: readable, writable, changeable only in NULL or READY state
                        Enum "GstNvInferProcessModeType" Default: 1, "primary"
                           (1): primary          - Primary (Full Frame)
                           (2): secondary        - Secondary (Objects)
  config-file-path    : Path to the configuration file for this instance of nvinfer
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        String. Default: ""
  infer-on-gie-id     : Infer on metadata generated by GIE with this unique ID. Set to -1 to infer on all metadata.
                        flags: readable, writable, changeable only in NULL or READY state
                        Integer. Range: -1 - 2147483647 Default: -1
  infer-on-class-ids  : Operate on objects with specified class ids.
                        Use string with values of class ids in ClassID (int) to set the property. e.g. 0:2:3
                        flags: readable, writable, changeable only in NULL or READY state
                        String. Default: ""
  filter-out-class-ids: Ignore metadata for objects of specified class ids.
                        Use string with values of class ids in ClassID (int) to set the property. e.g. 0;2;3
                        flags: readable, writable, changeable only in NULL or READY state
                        String. Default: ""
  model-engine-file   : Absolute path to the pre-generated serialized engine file for the model
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        String. Default: ""
  batch-size          : Maximum batch size for inference
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 1 - 1024 Default: 1
  interval            : Specifies number of consecutive batches to be skipped for inference
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 2147483647 Default: 0
  gpu-id              : Set GPU Device ID
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0
  raw-output-file-write: Write raw inference output to file
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  raw-output-generated-callback: Pointer to the raw output generated callback funtion (type: gst_nvinfer_raw_output_generated_callback in 'gstnvdsinfer.h')
                        flags: readable, writable, changeable only in NULL or READY state
                        Pointer.
  raw-output-generated-userdata: Pointer to the userdata to be supplied with raw output generated callback
                        flags: readable, writable, changeable only in NULL or READY state
                        Pointer.
  output-tensor-meta  : Attach inference tensor outputs as buffer metadata
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  output-instance-mask: Instance mask expected in network output and attach it to metadata
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  input-tensor-meta   : Use preprocessed input tensors attached as metadata instead of preprocessing inside the plugin
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
Element Signals:
  "model-updated" :  void user_function (GstElement* object,
                                         gint arg0,
                                         gchararray arg1,
                                         gpointer user_data);
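Both pad templates only accept video/x-raw(memory:NVMM) caps, so nvinfer has to sit downstream of an element that delivers batched NVMM buffers, normally nvstreammux. The following is a minimal sketch in Python (PyGObject) of wiring a primary-inference pipeline and setting the properties listed above; the config file path and the resolution values are placeholder assumptions, not part of the inspect output.

#!/usr/bin/env python3
# Sketch: build a pipeline around nvinfer and set its properties.
# Paths and values below are placeholders for illustration only.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.Pipeline.new("infer-demo")

# nvstreammux batches frames into NVMM memory, which matches nvinfer's
# sink pad caps: video/x-raw(memory:NVMM), format NV12 or RGBA.
streammux = Gst.ElementFactory.make("nvstreammux", "mux")
pgie      = Gst.ElementFactory.make("nvinfer", "primary-gie")
sink      = Gst.ElementFactory.make("fakesink", "sink")
if not all([pipeline, streammux, pgie, sink]):
    sys.exit("Failed to create one or more elements")

streammux.set_property("batch-size", 1)
streammux.set_property("width", 1920)    # placeholder resolution
streammux.set_property("height", 1080)

# Properties from the listing above; values are illustrative.
pgie.set_property("config-file-path", "config_infer_primary.txt")  # placeholder path
pgie.set_property("batch-size", 1)   # should match the engine's maximum batch size
pgie.set_property("unique-id", 1)    # lets downstream elements identify this GIE's metadata
pgie.set_property("process-mode", 1) # 1 = primary (full frame), 2 = secondary (objects)

for e in (streammux, pgie, sink):
    pipeline.add(e)
streammux.link(pgie)
pgie.link(sink)
# A real application would also create a source (e.g. uridecodebin) and
# request a sink_%u pad from nvstreammux to feed it frames.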
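Continuing the sketch above, the "model-updated" signal can be used to track on-the-fly model updates. This assumes, per the signature shown in the signal listing, that the gint argument is an error code (0 on success) and the gchararray names the config or engine file involved; check the DeepStream documentation for your release before relying on these semantics.

# Sketch: react to nvinfer's "model-updated" signal.
def on_model_updated(gie, err, config_file):
    # Assumption: err == 0 means the update succeeded.
    if err == 0:
        print(f"Model updated successfully from {config_file}")
    else:
        print(f"Model update failed (error {err}) for {config_file}")

pgie.connect("model-updated", on_model_updated)

# model-engine-file is changeable in NULL, READY, PAUSED or PLAYING state,
# so assigning a new engine file at runtime requests an on-the-fly swap.
# The path below is a placeholder.
pgie.set_property("model-engine-file", "/opt/models/resnet18_new.engine")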