vainfo (embeded/i.mx 8m plus, 2025. 9. 3. 11:06)

eIQ threw a libva error, but after switching the session from Wayland to X.org the error no longer appears. It doesn't seem to have been an important(?) error anyway.
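
To check which display server the session is on (and to make GDM default to X.org), something like the following should do; the custom.conf path assumes an Ubuntu/GDM setup:

$ echo $XDG_SESSION_TYPE        # prints "wayland" or "x11"
$ sudo sed -i 's/^#WaylandEnable=false/WaylandEnable=false/' /etc/gdm3/custom.conf
$ sudo systemctl restart gdm3   # or just log out and pick "Ubuntu on Xorg" at the login screen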

 

$ cat /proc/cpuinfo 
model name : Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz

$ vainfo
libva info: VA-API version 1.14.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_14
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.14 (libva 2.12.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 22.3.1 ()
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSliceLP
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSliceLP
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointEncPicture
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
      VAProfileVP8Version0_3          : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointVLD

 

G3460 + nvidia 1060

$ cat /proc/cpuinfo 
model name : Intel(R) Pentium(R) CPU G3460 @ 3.50GHz

$ cat log 
$ vainfo
libva info: VA-API version 1.14.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/nvidia_drv_video.so
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit

 

$ LIBVA_DRIVER_NAME=iHD vainfo 
libva info: VA-API version 1.14.0
libva info: User environment variable requested driver 'iHD'
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_14
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [22]
param: 4, val: 0
libva error: /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so init failed
libva info: va_openDriver() returns 18
vaInitialize failed with error code 18 (invalid parameter),exit

 

G3460 

$ vainfo
libva info: VA-API version 1.14.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_14
libva error: /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so init failed
libva info: va_openDriver() returns 1
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_10
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.14 (libva 2.12.0)
vainfo: Driver version: Intel i965 driver for Intel(R) Haswell Desktop - 2.4.1
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264MultiviewHigh      : VAEntrypointVLD
      VAProfileH264MultiviewHigh      : VAEntrypointEncSlice
      VAProfileH264StereoHigh         : VAEntrypointVLD
      VAProfileH264StereoHigh         : VAEntrypointEncSlice
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileJPEGBaseline           : VAEntrypointVLD

[link: https://github.com/elFarto/nvidia-vaapi-driver/issues/272]
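
Instead of relying on libva's probe order, the driver can also be pinned explicitly. LIBVA_DRIVER_NAME and LIBVA_DRIVERS_PATH are standard libva environment variables; writing the value into /etc/environment is just one way to make it persistent:

$ LIBVA_DRIVER_NAME=i965 vainfo                                   # force the i965 driver on the Haswell box
$ echo 'LIBVA_DRIVER_NAME=i965' | sudo tee -a /etc/environment    # persist system-wide (optional)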

ubuntu nvidia driver installation (embeded/i.mx 8m plus, 2025. 9. 3. 10:41)

 

sudo add-apt-repository ppa:graphics-drivers/ppa
ubuntu-drivers devices
sudo ubuntu-drivers autoinstall

 

$ ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==
modalias : pci:v000010DEd00001C03sv000010DEsd00000000bc03sc00i00
vendor   : NVIDIA Corporation
model    : GP106 [GeForce GTX 1060 6GB]
driver   : nvidia-driver-550 - distro non-free
driver   : nvidia-driver-470 - distro non-free
driver   : nvidia-driver-450-server - distro non-free
driver   : nvidia-driver-565 - third-party non-free
driver   : nvidia-driver-575 - distro non-free recommended
driver   : nvidia-driver-580 - third-party non-free
driver   : nvidia-driver-470-server - distro non-free
driver   : nvidia-driver-575-server - distro non-free
driver   : nvidia-driver-570 - third-party non-free
driver   : nvidia-driver-545 - distro non-free
driver   : nvidia-driver-535 - distro non-free
driver   : nvidia-driver-418-server - distro non-free
driver   : nvidia-driver-390 - distro non-free
driver   : nvidia-driver-570-server - distro non-free
driver   : nvidia-driver-535-server - distro non-free
driver   : xserver-xorg-video-nouveau - distro free builtin

 

sudo apt-get install nvidia-driver-580

 

After rebooting:

$ nvidia-smi
Wed Sep  3 10:53:56 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.76.05              Driver Version: 580.76.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1060 6GB    Off |   00000000:01:00.0  On |                  N/A |
| 42%   38C    P8              6W /  120W |     172MiB /   6144MiB |      1%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A             976      G   /usr/lib/xorg/Xorg                       73MiB |
|    0   N/A  N/A            1271      G   /usr/bin/gnome-shell                     86MiB |
+-----------------------------------------------------------------------------------------+

[link: https://2dudwns.tistory.com/20]

 

CUDA can be installed from the package repo as well:

sudo apt-get install nvidia-cuda-toolkit
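
Quick sanity check after the install; note that the packaged nvidia-cuda-toolkit is usually older than the CUDA version printed by nvidia-smi, which is only the maximum the driver supports:

$ nvcc --version
$ nvidia-smi --query-gpu=driver_version --format=csv,noheader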

SVE (Scalable Vector Extension) (embeded/ARM, 2025. 8. 28. 14:18)

Up through ARMv7 the SIMD extension is NEON; from ARMv8-A onward a scalable extension under the new name SVE seems to be added (NEON/Advanced SIMD still exists alongside it).

 

Scalable Vector Extension (SVE) is a vector extension to the A64 instruction set of the Armv8-A architecture. Armv9-A builds on SVE with the SVE2 extension. Unlike other SIMD architectures, SVE and SVE2 do not define the size of the vector registers, but constrain it to a range of possible values, from a minimum of 128 bits up to a maximum of 2048 bits, in 128-bit wide units. Therefore, any CPU vendor can implement the extension by choosing the vector register size that best suits the workloads the CPU is targeting. The design of SVE and SVE2 guarantees that the same program can run on different implementations of the instruction set architecture without the need to recompile the code.

[link: https://developer.arm.com/Architectures/Scalable%20Vector%20Extensions]

[link: https://developer.arm.com/Architectures/SVE]
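
A rough sketch for checking SVE on a Linux/ARMv8 target and getting GCC to emit it (vecadd.c is a hypothetical file with a simple array-add loop; -march=armv8-a+sve is the real GCC flag):

$ grep -om1 'sve[0-9a-z]*' /proc/cpuinfo                                   # "sve"/"sve2" appears in the Features line if supported
$ aarch64-linux-gnu-gcc -O3 -ftree-vectorize -march=armv8-a+sve -S vecadd.c -o vecadd.s
$ grep -c 'z[0-9]' vecadd.s                                                # SVE code uses the scalable z0-z31 registers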

eiq errors (embeded/i.mx 8m plus, 2025. 8. 26. 15:08)

 

FATAL:gpu_data_manager_impl_private.cc(415)] GPU process isn't usable. Goodbye.

 

It is an Electron issue; when the Electron libs don't match your system, the --in-process-gpu option helps as a workaround.
The --disable-gpu-sandbox or --no-sandbox options help as well.

[link: https://www.reddit.com/r/archlinux/comments/xf5pkt/how_to_fix_gpu_data_manager_impl_privatecc415_gpu/?tl=ko]

[link: https://github.com/Automattic/simplenote-electron/issues/3096]

[link: https://stackoverflow.com/questions/65679630/fatalgpu-data-manager-impl-private-cc439-gpu-process-isnt-usable-goodbye]

 

$ sudo ./eiq-portal 
[37727:0826/154023.382498:FATAL:electron_main_delegate.cc(252)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.
Trace/breakpoint trap
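
A sketch of retrying with the flags mentioned above; --in-process-gpu, --disable-gpu-sandbox and --no-sandbox are standard Chromium/Electron switches, and whether eiq-portal forwards them to Electron is an assumption:

$ ./eiq-portal --in-process-gpu
$ sudo ./eiq-portal --no-sandbox --disable-gpu-sandbox   # root requires --no-sandbox anyway, per the log above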

eiq training attempt (embeded/i.mx 8m plus, 2025. 8. 26. 11:53)

Running it on the CPU for now (hang in there, G4400T!!).

Training on the CPU is slow, and after roughly 1000 iterations it only gets to about 0.1,

so I'm spinning it in an endless loop until it climbs back above 0.9.

 

So far it only works on Windows:

ubuntu 22.04 + eiq 1.16: failed.

 

Failing on every attempt so far!

Arduino SD card (embeded/arduino(genuino), 2025. 8. 24. 23:29)

I'll need to dig into the API a bit.

Later on, if I put together an ESP32 + SD card + power bank + LCD + GPS + accelerometer + gyro + compass,

maybe something fun will come out of it?

 

[link: https://blog.naver.com/roboholic84/221789023195]

nvidia tao toolkit (embeded/i.mx 8m plus, 2025. 8. 22. 16:36)

I clicked on TAO in eIQ wondering what it was,

 

and it turns out to be a solution for transfer learning.

What is the NVIDIA TAO Toolkit?

Building an AI/machine-learning model from scratch takes enormous amounts of data and a lot of data scientists. Now, however, model development can be accelerated with transfer learning, a key technique that extracts features learned by an existing neural network model and applies them to a new, custom model.

Built on TensorFlow and PyTorch, the NVIDIA TAO Toolkit is a low-code version of the NVIDIA TAO framework that speeds up model training by removing the complexity of AI/deep-learning frameworks. With the TAO Toolkit, transfer learning lets you fine-tune NVIDIA's pre-trained models on your own data and optimize them for inference, without AI expertise or a large training dataset.

[link: https://developer.nvidia.com/ko-kr/tao-toolkit]

eqi - model tool (embeded/i.mx 8m plus, 2025. 8. 22. 16:33)

I ran it to open a FaceNet model.

160x160x3: so it takes a 160x160 RGB image as input and outputs a vector of 128 float32 values.

 

It looked very familiar, and sure enough, it turned out to be Netron.
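
The same kind of inspection works with the standalone Netron package; the model path below is a placeholder:

$ pip3 install netron
$ netron facenet.tflite   # opens the model graph in a browser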

iperf3 speed by configuration (embeded/odroid, 2025. 8. 21. 11:13)

Performance test between an ODROID-C2 and a notebook (GbE).
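
For reference, -l sets the application read/write buffer length per call and -M caps the TCP maximum segment size; the other end (192.168.220.50, presumably the notebook) only needs the iperf3 server running:

$ iperf3 -s   # listens on the default port 5201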

 

length 1514

$ iperf3 -l 45 -c 192.168.220.50
Connecting to host 192.168.220.50, port 5201
[  5] local 192.168.220.108 port 50248 connected to 192.168.220.50 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  7.75 MBytes  65.0 Mbits/sec    0   62.2 KBytes       
[  5]   1.00-2.00   sec  8.03 MBytes  67.4 Mbits/sec    0   62.2 KBytes       
[  5]   2.00-3.00   sec  7.91 MBytes  66.3 Mbits/sec    0   62.2 KBytes       

 

length 144

$ iperf3 -l 45 -M 90 -c 192.168.220.50
Connecting to host 192.168.220.50, port 5201
[  5] local 192.168.220.108 port 48086 connected to 192.168.220.50 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  11.1 MBytes  92.9 Mbits/sec  270   31.8 KBytes       
[  5]   1.00-2.00   sec  2.52 MBytes  21.1 Mbits/sec    5   31.8 KBytes       
[  5]   2.00-3.00   sec  2.33 MBytes  19.5 Mbits/sec    5   17.4 KBytes   

 

length 144

$ iperf3 -M 90 -c 192.168.220.50
Connecting to host 192.168.220.50, port 5201
[  5] local 192.168.220.108 port 40712 connected to 192.168.220.50 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  18.4 MBytes   155 Mbits/sec   44   30.2 KBytes       
[  5]   1.00-2.00   sec  19.4 MBytes   163 Mbits/sec    4   39.8 KBytes       
[  5]   2.00-3.00   sec  18.9 MBytes   159 Mbits/sec   10   37.9 KBytes    

 

length 1514

$ iperf3 -c 192.168.220.50
Connecting to host 192.168.220.50, port 5201
[  5] local 192.168.220.108 port 35404 connected to 192.168.220.50 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   112 MBytes   940 Mbits/sec   21    554 KBytes       
[  5]   1.00-2.00   sec   112 MBytes   944 Mbits/sec    0    691 KBytes       
[  5]   2.00-2.94   sec   105 MBytes   937 Mbits/sec    0    701 KBytes      
NNstreamer - tensor* (embeded/i.mx 8m plus, 2025. 8. 18. 15:01)

Excerpted from selfie_segmentrer.py on the i.MX8:

        # Set backend and delegates
        if self.backend == "CPU":
            if self.platform == "i.MX8MP":
                backend = "true:CPU custom=NumThreads:4"
            else:
                backend = "true:CPU custom=NumThreads:2"
        else:
            if self.platform == "i.MX8MP":
                os.environ["USE_GPU_INFERENCE"] = "0"
                backend = (
                    "true:npu custom=Delegate:External,ExtDelegateLib:libvx_delegate.so"
                )
            else:
                backend = "true:npu custom=Delegate:External,ExtDelegateLib:libethosu_delegate.so"



                + " ! videoconvert ! video/x-raw,format=RGB ! tensor_converter ! "
                + "tensor_transform mode=arithmetic option=typecast:float32,div:255.0 ! "
                + "tensor_filter framework=tensorflow-lite model="
                + self.tflite_model
                + " accelerator="
                + backend
                + " name=tensor_filter latency=1 ! tensor_sink name=tensor_sink "

 

It looks like they took standard GStreamer elements as a base and added a handful of tensor_* elements for neural-network use.

[link: https://nnstreamer.github.io/gst/nnstreamer/tensor_converter/README.html]

[link: https://nnstreamer.github.io/gst/nnstreamer/tensor_transform/README.html]

[link: https://nnstreamer.github.io/gst/nnstreamer/tensor_decoder/README.html]

[link: https://nnstreamer.github.io/gst/nnstreamer/tensor_filter/README.html]

[link: https://nnstreamer.github.io/gst/nnstreamer/tensor_sink/README.html]

   [link: https://nnstreamer.github.io/gst/nnstreamer/elements/gsttensor_sink.html]

 

Judging from the usage example, the elements are chained in the order converter / transform / filter / sink.

[CAM] - [videoconvert] - [videoscale] - [tee] -+- [queue] - [videoconvert] - [cairooverlay] - [ximagesink]
                                               +- [queue] - [videoscale] - [tensor_converter] - [tensor_transform] - [tensor_filter] - [tensor_sink]

[link: https://nnstreamer.github.io/how-to-run-examples.html]

 

 

+

gst-inspect-1.0 tensor_converter

root@imx8mpevk:~# gst-inspect-1.0 tensor_converter
Factory Details:
  Rank                     none (0)
  Long-name                TensorConverter
  Klass                    Converter/Tensor
  Description              Converts an audio, video, text, or arbitrary stream to a tensor stream of C-Array for neural network framework filters
  Author                   MyungJoo Ham <myungjoo.ham@samsung.com>

Plugin Details:
  Name                     nnstreamer
  Description              nnstreamer plugin library
  Filename                 /usr/lib/gstreamer-1.0/libnnstreamer.so
  Version                  2.4.0
  License                  LGPL
  Source module            nnstreamer
  Binary package           nnstreamer
  Origin URL               https://github.com/nnstreamer/nnstreamer

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstTensorConverter

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-raw
                 format: { (string)RGB, (string)BGR, (string)RGBx, (string)BGRx, (string)xRGB, (string)xBGR, (string)RGBA, (string)BGRA, (string)ARGB, (string)ABGR, (string)GRAY8, (string)GRAY16_BE, (string)GRAY16_LE }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]
         interlace-mode: progressive
      audio/x-raw
                 format: { (string)S8, (string)U8, (string)S16LE, (string)S16BE, (string)U16LE, (string)U16BE, (string)S32LE, (string)S32BE, (string)U32LE, (string)U32BE, (string)F32LE, (string)F32BE, (string)F64LE, (string)F64BE }
                   rate: [ 1, 2147483647 ]
               channels: [ 1, 2147483647 ]
                 layout: interleaved
      text/x-raw
                 format: utf8
      application/octet-stream
      other/tensors
                 format: flexible
              framerate: [ 0/1, 2147483647/1 ]
      application/octet-stream
      other/protobuf-tensor
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible }
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:

  frames-per-tensor   : The number of frames in output tensor
                        flags: readable, writable
                        Unsigned Integer. Range: 1 - 4294967295 Default: 1 
  
  input-dim           : Input tensor dimension from inner array
                        flags: readable, writable
                        String. Default: ""
  
  input-type          : Type of each element of the input tensor
                        flags: readable, writable
                        String. Default: ""
  
  mode                : Converter mode. e.g., mode=custom-code:<registered callback name>. For detail, refer to https://github.com/nnstreamer/nnstreamer/blob/main/gst/nnstreamer/elements/gsttensor_converter.md#custom-converter
                        flags: readable, writable
                        String. Default: ""
  
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "tensorconverter0"
  
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  
  set-timestamp       : The flag to set timestamp when received a buffer with invalid timestamp
                        flags: readable, writable
                        Boolean. Default: true
  
  silent              : Produce verbose output
                        flags: readable, writable
                        Boolean. Default: true
  
  sub-plugins         : Registrable sub-plugins list
                        flags: readable
                        String. Default: "python3,protobuf"

 

gst-inspect-1.0 tensor_transform

root@imx8mpevk:~# gst-inspect-1.0 tensor_transform
Factory Details:
  Rank                     none (0)
  Long-name                TensorTransform
  Klass                    Filter/Tensor
  Description              Transforms other/tensor dimensions for different models or frameworks
  Author                   MyungJoo Ham <myungjoo.ham@samsung.com>

Plugin Details:
  Name                     nnstreamer
  Description              nnstreamer plugin library
  Filename                 /usr/lib/gstreamer-1.0/libnnstreamer.so
  Version                  2.4.0
  License                  LGPL
  Source module            nnstreamer
  Binary package           nnstreamer
  Origin URL               https://github.com/nnstreamer/nnstreamer

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstTensorTransform

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible }
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible }
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:

  acceleration        : Orc acceleration
                        flags: readable, writable
                        Boolean. Default: true
  
  apply               : Select tensors to apply, separated with ',' in case of multiple tensors. Default to apply all tensors.
                        flags: readable, writable
                        String. Default: ""
  
  mode                : Mode used for transforming tensor
                        flags: readable, writable
                        Enum "gtt_mode_type" Default: -1, "unknown"
                           (0): dimchg           - Mode for changing tensor dimensions, option=FROM_DIM:TO_DIM (with a regex, ^([0-9]|1[0-5]):([0-9]|1[0-5])$, where NNS_TENSOR_RANK_LIMIT is 16)
                           (1): typecast         - Mode for casting type of tensor, option=(^[u]?int(8|16|32|64)$|^float(16|32|64)$)
                           (2): arithmetic       - Mode for arithmetic operations with tensor, option=[typecast:TYPE,][per-channel:(false|true@DIM),]add|mul|div:NUMBER[@CH_IDX], ...
                           (3): transpose        - Mode for transposing shape of tensor, option=D1':D2':D3':D4 (fixed to 3)
                           (4): stand            - Mode for statistical standardization of tensor, option=(default|dc-average)[:TYPE][,per-channel:(false|true)]
                           (5): clamp            - Mode for clamping all elements of tensor into the range, option=CLAMP_MIN:CLAMP_MAX
                           (-1): unknown          - Unknown or not-implemented-yet mode
  
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "tensortransform0"
  
  option              : Option for the tensor transform mode ?
                        flags: readable, writable
                        String. Default: null
  
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  
  silent              : Produce verbose output ?
                        flags: readable, writable
                        Boolean. Default: true
  
  transpose-rank-limit: The rank limit of transpose, which varies per version of nnstreamer and may be lower than the global rank limit if it is over 4.
                        flags: readable
                        Unsigned Integer. Range: 0 - 16 Default: 4 
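
Two of the modes listed above, exercised with a synthetic source just to show the option syntax (should run anywhere the nnstreamer plugins are installed):

$ gst-launch-1.0 videotestsrc num-buffers=10 ! video/x-raw,format=RGB,width=64,height=64 ! \
    tensor_converter ! \
    tensor_transform mode=typecast option=float32 ! \
    tensor_transform mode=arithmetic option=div:255.0 ! \
    fakesink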

 

 

gst-inspect-1.0 tensor_decoder

root@imx8mpevk:~# gst-inspect-1.0 tensor_decoder  
Factory Details:
  Rank                     none (0)
  Long-name                TensorDecoder
  Klass                    Converter/Tensor
  Description              Converts tensor stream of C-Array for neural network framework filters to audio or video stream
  Author                   Jijoong Moon <jijoong.moon@samsung.com>

Plugin Details:
  Name                     nnstreamer
  Description              nnstreamer plugin library
  Filename                 /usr/lib/gstreamer-1.0/libnnstreamer.so
  Version                  2.4.0
  License                  LGPL
  Source module            nnstreamer
  Binary package           nnstreamer
  Origin URL               https://github.com/nnstreamer/nnstreamer

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstTensorDecoder

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: static
            num_tensors: [ 1, 16 ]
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: flexible
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      ANY

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:

  config-file         : sets config file path which contains plugins properties
                        flags: 
** (gst-inspect-1.0:1706): WARNING **: 05:54:08.872: /usr/src/debug/nnstreamer/2.4.0/gst/nnstreamer/elements/gsttensor_decoder.c:592: invalid property id 13 for "config-file" of type 'GParamString' in 'GstTensorDecoder'
readable, writable
                        String. Default: null
  
  mode                : Decoder mode
                        flags: readable, writable
                        String. Default: ""
  
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "tensordecoder0"
  
  option1             : option for specific decoder modes, 1st one.
                        flags: readable, writable
                        String. Default: null
  
  option2             : option for specific decoder modes, 2nd one.
                        flags: readable, writable
                        String. Default: null
  
  option3             : option for specific decoder modes, 3rd one.
                        flags: readable, writable
                        String. Default: null
  
  option4             : option for specific decoder modes, 4th one.
                        flags: readable, writable
                        String. Default: null
  
  option5             : option for specific decoder modes, 5th one.
                        flags: readable, writable
                        String. Default: null
  
  option6             : option for specific decoder modes, 6th one.
                        flags: readable, writable
                        String. Default: null
  
  option7             : option for specific decoder modes, 7th one.
                        flags: readable, writable
                        String. Default: null
  
  option8             : option for specific decoder modes, 8th one.
                        flags: readable, writable
                        String. Default: null
  
  option9             : option for specific decoder modes, 9th one.
                        flags: readable, writable
                        String. Default: null
  
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  
  silent              : Produce verbose output
                        flags: readable, writable
                        Boolean. Default: true
  
  sub-plugins         : Registrable sub-plugins list
                        flags: readable
                        String. Default: "protobuf,direct_video,bounding_boxes,image_segment,python3,octet_stream,pose_estimation,tensor_region,image_labeling"

 

 

gst-inspect-1.0 tensor_filter

root@imx8mpevk:~# gst-inspect-1.0 tensor_filter 
Factory Details:
  Rank                     none (0)
  Long-name                TensorFilter
  Klass                    Filter/Tensor
  Description              Handles NN Frameworks (e.g., tensorflow) as Media Filters with other/tensor type stream
  Author                   MyungJoo Ham <myungjoo.ham@samsung.com>

Plugin Details:
  Name                     nnstreamer
  Description              nnstreamer plugin library
  Filename                 /usr/lib/gstreamer-1.0/libnnstreamer.so
  Version                  2.4.0
  License                  LGPL
  Source module            nnstreamer
  Binary package           nnstreamer
  Origin URL               https://github.com/nnstreamer/nnstreamer

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstTensorFilter

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible }
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible }
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:

  accelerator         : Set accelerator for the subplugin with format (true/false):(comma separated ACCELERATOR(s)). true/false determines if accelerator is to be used. list of accelerators determines the backend (ignored with false). Example, if GPU, NPU can be used but not CPU - true:npu,gpu,!cpu. The full list of accelerators can be found in nnstreamer_plugin_api_filter.h. Note that only a few subplugins support this property.
                        flags: readable, writable
                        String. Default: ""
  
  custom              : Custom properties for subplugins ?
                        flags: readable, writable
                        String. Default: ""
  
  framework           : Neural network framework
                        flags: readable, writable
                        String. Default: "auto"
  
  input               : Input tensor dimension from inner array, up to 4 dimensions ?
                        flags: readable, writable
                        String. Default: ""
  
  input-combination   : Select the input tensor(s) to invoke the models
                        flags: readable, writable
                        String. Default: ""
  
  inputlayout         : Set channel first (NCHW) or channel last layout (NHWC) or None for input data. Layout of the data can be any or NHWC or NCHW or none for now. 
                        flags: readable, writable
                        String. Default: ""
  
  inputname           : The Name of Input Tensor
                        flags: readable, writable
                        String. Default: ""
  
  inputranks          : The Rank of the Input Tensor, which is separated with ',' in case of multiple Tensors
                        flags: readable
                        String. Default: ""
  
  inputtype           : Type of each element of the input tensor ?
                        flags: readable, writable
                        String. Default: ""
  
  invoke-dynamic      : Flexible tensors whose memory size changes can be used asinput and output of the tensor filter. With this option, the output caps is always in the format of flexible tensors.
                        flags: readable, writable
                        Boolean. Default: false
  
  is-updatable        : Indicate whether a given model to this tensor filter is updatable in runtime. (e.g., with on-device training)
                        flags: readable, writable
                        Boolean. Default: false
  
  latency             : Turn on performance profiling for the average latency over the recent 10 inferences in microseconds. Currently, this accepts either 0 (OFF) or 1 (ON).
                        flags: readable, writable
                        Integer. Range: 0 - 1 Default: -1 
  
  latency-report      : Report to the pipeline the estimated tensor-filter element latency.
                        flags: readable, writable
                        Boolean. Default: false
  
  model               : File path to the model file. Separated with ',' in case of multiple model files(like caffe2)
                        flags: readable, writable
                        String. Default: ""
  
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "tensorfilter0"
  
  output              : Output tensor dimension from inner array, up to 4 dimensions ?
                        flags: readable, writable
                        String. Default: ""
  
  output-combination  : Select the output tensor(s) from the input tensor(s) and/or model output
                        flags: readable, writable
                        String. Default: ""
  
  outputlayout        : Set channel first (NCHW) or channel last layout (NHWC) or None for output data. Layout of the data can be any or NHWC or NCHW or none for now. 
                        flags: readable, writable
                        String. Default: ""
  
  outputname          : The Name of Output Tensor
                        flags: readable, writable
                        String. Default: ""
  
  outputranks         : The Rank of the Out Tensor, which is separated with ',' in case of multiple Tensors
                        flags: readable
                        String. Default: ""
  
  outputtype          : Type of each element of the output tensor ?
                        flags: readable, writable
                        String. Default: ""
  
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  
  shared-tensor-filter-key: Multiple element instances of tensor-filter in a pipeline may share a single resource instance if they share the same framework (subplugin) and neural network model. Designate "shared-tensor-filter-key" to declare and share such instances. If it is NULL, it means the model representations is not shared.
                        flags: readable, writable
                        String. Default: ""
  
  silent              : Produce verbose output
                        flags: readable, writable
                        Boolean. Default: true
  
  sub-plugins         : Registrable sub-plugins list
                        flags: readable
                        String. Default: "custom,custom-easy,cpp,python3,tvm,tensorflow2-lite"
  
  throughput          : Turn on performance profiling for the average throughput in the number of outputs per seconds (i.e., FPS), multiplied by 1000 to represent a floating point using an integer. Currently, this accepts either 0 (OFF) or 1 (ON).
                        flags: readable, writable
                        Integer. Range: 0 - 1 Default: -1 

 

 

gst-inspect-1.0 tensor_sink

root@imx8mpevk:~# gst-inspect-1.0 tensor_sink  
Factory Details:
  Rank                     none (0)
  Long-name                TensorSink
  Klass                    Sink/Tensor
  Description              Sink element to handle tensor stream
  Author                   Samsung Electronics Co., Ltd.

Plugin Details:
  Name                     nnstreamer
  Description              nnstreamer plugin library
  Filename                 /usr/lib/gstreamer-1.0/libnnstreamer.so
  Version                  2.4.0
  License                  LGPL
  Source module            nnstreamer
  Binary package           nnstreamer
  Origin URL               https://github.com/nnstreamer/nnstreamer

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseSink
                         +----GstTensorSink

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible, (string)sparse }
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'

Element Properties:

  async               : Go asynchronously to PAUSED
                        flags: readable, writable
                        Boolean. Default: true
  
  blocksize           : Size in bytes to pull per buffer (0 = default)
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 4096 
  
  emit-signal         : Emit signal for new data, stream start, eos
                        flags: readable, writable
                        Boolean. Default: true
  
  enable-last-sample  : Enable the last-sample property
                        flags: readable, writable
                        Boolean. Default: true
  
  last-sample         : The last sample received in the sink
                        flags: readable
                        Boxed pointer of type "GstSample"
  
  max-bitrate         : The maximum bits per second to render (0 = disabled)
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0 
  
  max-lateness        : Maximum number of nanoseconds that a buffer can be late before it is dropped (-1 unlimited)
                        flags: readable, writable
                        Integer64. Range: -1 - 9223372036854775807 Default: -1 
  
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "tensorsink0"
  
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  
  processing-deadline : Maximum processing time for a buffer in nanoseconds
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 20000000 
  
  qos                 : Generate Quality-of-Service events upstream
                        flags: readable, writable
                        Boolean. Default: true
  
  render-delay        : Additional render delay of the sink in nanoseconds
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0 
  
  signal-rate         : New data signals per second (0 for unlimited, max 500)
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 500 Default: 0 
  
  silent              : Produce verbose output
                        flags: readable, writable
                        Boolean. Default: true
  
  stats               : Sink Statistics
                        flags: readable
                        Boxed pointer of type "GstStructure"
                                                        average-rate: 0
                                                             dropped: 0
                                                            rendered: 0

  
  sync                : Sync on the clock
                        flags: readable, writable
                        Boolean. Default: false
  
  throttle-time       : The time to keep between rendered buffers (0 = disabled)
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0 
  
  ts-offset           : Timestamp offset in nanoseconds
                        flags: readable, writable
                        Integer64. Range: -9223372036854775808 - 9223372036854775807 Default: 0 
  

Element Signals:

  "eos" :  void user_function (GstElement * object,
                               gpointer user_data);

  "stream-start" :  void user_function (GstElement * object,
                                        gpointer user_data);

  "new-data" :  void user_function (GstElement * object,
                                    GstBuffer * arg0,
                                    gpointer user_data);

 
