embeded/i.mx 8m plus | 2025. 8. 18. 15:01

Excerpted from selfie_segmentrer.py on the i.MX8:

        # Set backend and delegates
        if self.backend == "CPU":
            if self.platform == "i.MX8MP":
                backend = "true:CPU custom=NumThreads:4"
            else:
                backend = "true:CPU custom=NumThreads:2"
        else:
            if self.platform == "i.MX8MP":
                os.environ["USE_GPU_INFERENCE"] = "0"
                backend = (
                    "true:npu custom=Delegate:External,ExtDelegateLib:libvx_delegate.so"
                )
            else:
                backend = "true:npu custom=Delegate:External,ExtDelegateLib:libethosu_delegate.so"

                # ... further down in the same file, the GStreamer pipeline
                # description string is assembled (the beginning of the
                # expression is elided in the excerpt):
                + " ! videoconvert ! video/x-raw,format=RGB ! tensor_converter ! "
                + "tensor_transform mode=arithmetic option=typecast:float32,div:255.0 ! "
                + "tensor_filter framework=tensorflow-lite model="
                + self.tflite_model
                + " accelerator="
                + backend
                + " name=tensor_filter latency=1 ! tensor_sink name=tensor_sink "
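Stitched together, the excerpt's string concatenation can be sketched as a single helper. This is a hypothetical reconstruction: the function name and the `v4l2src` source stage are my assumptions, since the excerpt starts mid-expression and elides the leading part of the pipeline string.

```python
def build_nn_pipeline(device, width, height, model, accelerator):
    # Sketch of the excerpt's pipeline string; the source stage (v4l2src)
    # is assumed, since the excerpt omits everything before videoconvert.
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,width={width},height={height} ! "
        "videoconvert ! video/x-raw,format=RGB ! tensor_converter ! "
        "tensor_transform mode=arithmetic option=typecast:float32,div:255.0 ! "
        f"tensor_filter framework=tensorflow-lite model={model} "
        f"accelerator={accelerator} name=tensor_filter latency=1 ! "
        "tensor_sink name=tensor_sink"
    )
```

Passing the `backend` string selected above would then yield, e.g., `accelerator=true:CPU custom=NumThreads:4` for the CPU path on the i.MX8MP.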

 

It looks like a handful of NN-specific elements were added on top of the standard GStreamer ones.

[Link : https://nnstreamer.github.io/gst/nnstreamer/tensor_converter/README.html]

[Link : https://nnstreamer.github.io/gst/nnstreamer/tensor_transform/README.html]

[Link : https://nnstreamer.github.io/gst/nnstreamer/tensor_decoder/README.html]

[Link : https://nnstreamer.github.io/gst/nnstreamer/tensor_filter/README.html]

[Link : https://nnstreamer.github.io/gst/nnstreamer/tensor_sink/README.html]

[Link : https://nnstreamer.github.io/gst/nnstreamer/elements/gsttensor_sink.html]

 

Judging from the usage examples, the elements are used in the order converter / transform / filter / sink.

[CAM] - [videoconvert] - [videoscale] - [tee] -+- [queue] - [videoconvert] - [cairooverlay] - [ximagesink]
                                               +- [queue] - [videoscale] - [tensor_converter] - [tensor_transform] - [tensor_filter] - [tensor_sink]

[Link : https://nnstreamer.github.io/how-to-run-examples.html]

 

 

+

gst-inspect-1.0 tensor_converter

root@imx8mpevk:~# gst-inspect-1.0 tensor_converter
Factory Details:
  Rank                     none (0)
  Long-name                TensorConverter
  Klass                    Converter/Tensor
  Description              Converts an audio, video, text, or arbitrary stream to a tensor stream of C-Array for neural network framework filters
  Author                   MyungJoo Ham <myungjoo.ham@samsung.com>

Plugin Details:
  Name                     nnstreamer
  Description              nnstreamer plugin library
  Filename                 /usr/lib/gstreamer-1.0/libnnstreamer.so
  Version                  2.4.0
  License                  LGPL
  Source module            nnstreamer
  Binary package           nnstreamer
  Origin URL               https://github.com/nnstreamer/nnstreamer

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstTensorConverter

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-raw
                 format: { (string)RGB, (string)BGR, (string)RGBx, (string)BGRx, (string)xRGB, (string)xBGR, (string)RGBA, (string)BGRA, (string)ARGB, (string)ABGR, (string)GRAY8, (string)GRAY16_BE, (string)GRAY16_LE }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]
         interlace-mode: progressive
      audio/x-raw
                 format: { (string)S8, (string)U8, (string)S16LE, (string)S16BE, (string)U16LE, (string)U16BE, (string)S32LE, (string)S32BE, (string)U32LE, (string)U32BE, (string)F32LE, (string)F32BE, (string)F64LE, (string)F64BE }
                   rate: [ 1, 2147483647 ]
               channels: [ 1, 2147483647 ]
                 layout: interleaved
      text/x-raw
                 format: utf8
      application/octet-stream
      other/tensors
                 format: flexible
              framerate: [ 0/1, 2147483647/1 ]
      application/octet-stream
      other/protobuf-tensor
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible }
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:

  frames-per-tensor   : The number of frames in output tensor
                        flags: readable, writable
                        Unsigned Integer. Range: 1 - 4294967295 Default: 1 
  
  input-dim           : Input tensor dimension from inner array
                        flags: readable, writable
                        String. Default: ""
  
  input-type          : Type of each element of the input tensor
                        flags: readable, writable
                        String. Default: ""
  
  mode                : Converter mode. e.g., mode=custom-code:<registered callback name>. For detail, refer to https://github.com/nnstreamer/nnstreamer/blob/main/gst/nnstreamer/elements/gsttensor_converter.md#custom-converter
                        flags: readable, writable
                        String. Default: ""
  
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "tensorconverter0"
  
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  
  set-timestamp       : The flag to set timestamp when received a buffer with invalid timestamp
                        flags: readable, writable
                        Boolean. Default: true
  
  silent              : Produce verbose output
                        flags: readable, writable
                        Boolean. Default: true
  
  sub-plugins         : Registrable sub-plugins list
                        flags: readable
                        String. Default: "python3,protobuf"

 

gst-inspect-1.0 tensor_transform

root@imx8mpevk:~# gst-inspect-1.0 tensor_transform
Factory Details:
  Rank                     none (0)
  Long-name                TensorTransform
  Klass                    Filter/Tensor
  Description              Transforms other/tensor dimensions for different models or frameworks
  Author                   MyungJoo Ham <myungjoo.ham@samsung.com>

Plugin Details:
  Name                     nnstreamer
  Description              nnstreamer plugin library
  Filename                 /usr/lib/gstreamer-1.0/libnnstreamer.so
  Version                  2.4.0
  License                  LGPL
  Source module            nnstreamer
  Binary package           nnstreamer
  Origin URL               https://github.com/nnstreamer/nnstreamer

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstTensorTransform

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible }
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible }
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:

  acceleration        : Orc acceleration
                        flags: readable, writable
                        Boolean. Default: true
  
  apply               : Select tensors to apply, separated with ',' in case of multiple tensors. Default to apply all tensors.
                        flags: readable, writable
                        String. Default: ""
  
  mode                : Mode used for transforming tensor
                        flags: readable, writable
                        Enum "gtt_mode_type" Default: -1, "unknown"
                           (0): dimchg           - Mode for changing tensor dimensions, option=FROM_DIM:TO_DIM (with a regex, ^([0-9]|1[0-5]):([0-9]|1[0-5])$, where NNS_TENSOR_RANK_LIMIT is 16)
                           (1): typecast         - Mode for casting type of tensor, option=(^[u]?int(8|16|32|64)$|^float(16|32|64)$)
                           (2): arithmetic       - Mode for arithmetic operations with tensor, option=[typecast:TYPE,][per-channel:(false|true@DIM),]add|mul|div:NUMBER[@CH_IDX], ...
                           (3): transpose        - Mode for transposing shape of tensor, option=D1':D2':D3':D4 (fixed to 3)
                           (4): stand            - Mode for statistical standardization of tensor, option=(default|dc-average)[:TYPE][,per-channel:(false|true)]
                           (5): clamp            - Mode for clamping all elements of tensor into the range, option=CLAMP_MIN:CLAMP_MAX
                           (-1): unknown          - Unknown or not-implemented-yet mode
  
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "tensortransform0"
  
  option              : Option for the tensor transform mode ?
                        flags: readable, writable
                        String. Default: null
  
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  
  silent              : Produce verbose output ?
                        flags: readable, writable
                        Boolean. Default: true
  
  transpose-rank-limit: The rank limit of transpose, which varies per version of nnstreamer and may be lower than the global rank limit if it is over 4.
                        flags: readable
                        Unsigned Integer. Range: 0 - 16 Default: 4 

 

 

gst-inspect-1.0 tensor_decoder

root@imx8mpevk:~# gst-inspect-1.0 tensor_decoder  
Factory Details:
  Rank                     none (0)
  Long-name                TensorDecoder
  Klass                    Converter/Tensor
  Description              Converts tensor stream of C-Array for neural network framework filters to audio or video stream
  Author                   Jijoong Moon <jijoong.moon@samsung.com>

Plugin Details:
  Name                     nnstreamer
  Description              nnstreamer plugin library
  Filename                 /usr/lib/gstreamer-1.0/libnnstreamer.so
  Version                  2.4.0
  License                  LGPL
  Source module            nnstreamer
  Binary package           nnstreamer
  Origin URL               https://github.com/nnstreamer/nnstreamer

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstTensorDecoder

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: static
            num_tensors: [ 1, 16 ]
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: flexible
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      ANY

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:

  config-file         : sets config file path which contains plugins properties
                        flags: 
** (gst-inspect-1.0:1706): WARNING **: 05:54:08.872: /usr/src/debug/nnstreamer/2.4.0/gst/nnstreamer/elements/gsttensor_decoder.c:592: invalid property id 13 for "config-file" of type 'GParamString' in 'GstTensorDecoder'
readable, writable
                        String. Default: null
  
  mode                : Decoder mode
                        flags: readable, writable
                        String. Default: ""
  
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "tensordecoder0"
  
  option1             : option for specific decoder modes, 1st one.
                        flags: readable, writable
                        String. Default: null
  
  option2             : option for specific decoder modes, 2nd one.
                        flags: readable, writable
                        String. Default: null
  
  option3             : option for specific decoder modes, 3rd one.
                        flags: readable, writable
                        String. Default: null
  
  option4             : option for specific decoder modes, 4th one.
                        flags: readable, writable
                        String. Default: null
  
  option5             : option for specific decoder modes, 5th one.
                        flags: readable, writable
                        String. Default: null
  
  option6             : option for specific decoder modes, 6th one.
                        flags: readable, writable
                        String. Default: null
  
  option7             : option for specific decoder modes, 7th one.
                        flags: readable, writable
                        String. Default: null
  
  option8             : option for specific decoder modes, 8th one.
                        flags: readable, writable
                        String. Default: null
  
  option9             : option for specific decoder modes, 9th one.
                        flags: readable, writable
                        String. Default: null
  
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  
  silent              : Produce verbose output
                        flags: readable, writable
                        Boolean. Default: true
  
  sub-plugins         : Registrable sub-plugins list
                        flags: readable
                        String. Default: "protobuf,direct_video,bounding_boxes,image_segment,python3,octet_stream,pose_estimation,tensor_region,image_labeling"

 

 

gst-inspect-1.0 tensor_filter

root@imx8mpevk:~# gst-inspect-1.0 tensor_filter 
Factory Details:
  Rank                     none (0)
  Long-name                TensorFilter
  Klass                    Filter/Tensor
  Description              Handles NN Frameworks (e.g., tensorflow) as Media Filters with other/tensor type stream
  Author                   MyungJoo Ham <myungjoo.ham@samsung.com>

Plugin Details:
  Name                     nnstreamer
  Description              nnstreamer plugin library
  Filename                 /usr/lib/gstreamer-1.0/libnnstreamer.so
  Version                  2.4.0
  License                  LGPL
  Source module            nnstreamer
  Binary package           nnstreamer
  Origin URL               https://github.com/nnstreamer/nnstreamer

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstTensorFilter

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible }
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible }
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:

  accelerator         : Set accelerator for the subplugin with format (true/false):(comma separated ACCELERATOR(s)). true/false determines if accelerator is to be used. list of accelerators determines the backend (ignored with false). Example, if GPU, NPU can be used but not CPU - true:npu,gpu,!cpu. The full list of accelerators can be found in nnstreamer_plugin_api_filter.h. Note that only a few subplugins support this property.
                        flags: readable, writable
                        String. Default: ""
  
  custom              : Custom properties for subplugins ?
                        flags: readable, writable
                        String. Default: ""
  
  framework           : Neural network framework
                        flags: readable, writable
                        String. Default: "auto"
  
  input               : Input tensor dimension from inner array, up to 4 dimensions ?
                        flags: readable, writable
                        String. Default: ""
  
  input-combination   : Select the input tensor(s) to invoke the models
                        flags: readable, writable
                        String. Default: ""
  
  inputlayout         : Set channel first (NCHW) or channel last layout (NHWC) or None for input data. Layout of the data can be any or NHWC or NCHW or none for now. 
                        flags: readable, writable
                        String. Default: ""
  
  inputname           : The Name of Input Tensor
                        flags: readable, writable
                        String. Default: ""
  
  inputranks          : The Rank of the Input Tensor, which is separated with ',' in case of multiple Tensors
                        flags: readable
                        String. Default: ""
  
  inputtype           : Type of each element of the input tensor ?
                        flags: readable, writable
                        String. Default: ""
  
  invoke-dynamic      : Flexible tensors whose memory size changes can be used asinput and output of the tensor filter. With this option, the output caps is always in the format of flexible tensors.
                        flags: readable, writable
                        Boolean. Default: false
  
  is-updatable        : Indicate whether a given model to this tensor filter is updatable in runtime. (e.g., with on-device training)
                        flags: readable, writable
                        Boolean. Default: false
  
  latency             : Turn on performance profiling for the average latency over the recent 10 inferences in microseconds. Currently, this accepts either 0 (OFF) or 1 (ON).
                        flags: readable, writable
                        Integer. Range: 0 - 1 Default: -1 
  
  latency-report      : Report to the pipeline the estimated tensor-filter element latency.
                        flags: readable, writable
                        Boolean. Default: false
  
  model               : File path to the model file. Separated with ',' in case of multiple model files(like caffe2)
                        flags: readable, writable
                        String. Default: ""
  
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "tensorfilter0"
  
  output              : Output tensor dimension from inner array, up to 4 dimensions ?
                        flags: readable, writable
                        String. Default: ""
  
  output-combination  : Select the output tensor(s) from the input tensor(s) and/or model output
                        flags: readable, writable
                        String. Default: ""
  
  outputlayout        : Set channel first (NCHW) or channel last layout (NHWC) or None for output data. Layout of the data can be any or NHWC or NCHW or none for now. 
                        flags: readable, writable
                        String. Default: ""
  
  outputname          : The Name of Output Tensor
                        flags: readable, writable
                        String. Default: ""
  
  outputranks         : The Rank of the Out Tensor, which is separated with ',' in case of multiple Tensors
                        flags: readable
                        String. Default: ""
  
  outputtype          : Type of each element of the output tensor ?
                        flags: readable, writable
                        String. Default: ""
  
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  
  shared-tensor-filter-key: Multiple element instances of tensor-filter in a pipeline may share a single resource instance if they share the same framework (subplugin) and neural network model. Designate "shared-tensor-filter-key" to declare and share such instances. If it is NULL, it means the model representations is not shared.
                        flags: readable, writable
                        String. Default: ""
  
  silent              : Produce verbose output
                        flags: readable, writable
                        Boolean. Default: true
  
  sub-plugins         : Registrable sub-plugins list
                        flags: readable
                        String. Default: "custom,custom-easy,cpp,python3,tvm,tensorflow2-lite"
  
  throughput          : Turn on performance profiling for the average throughput in the number of outputs per seconds (i.e., FPS), multiplied by 1000 to represent a floating point using an integer. Currently, this accepts either 0 (OFF) or 1 (ON).
                        flags: readable, writable
                        Integer. Range: 0 - 1 Default: -1 
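The accelerator value documented in the dump above has the shape `(true/false):(comma separated ACCELERATOR(s))`. A small parser sketch for that shape (hypothetical helper; note the `custom=...` seen in the selfie-segmenter excerpt is a separate tensor_filter property set alongside accelerator, not part of this format):

```python
def parse_accelerator(value):
    # "true:npu,gpu,!cpu" -> (True, ["npu", "gpu", "!cpu"])
    # "false"             -> (False, [])
    head, _, rest = value.partition(":")
    enabled = head.strip().lower() == "true"
    accels = [a.strip() for a in rest.split(",") if a.strip()]
    return enabled, accels
```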

 

 

gst-inspect-1.0 tensor_sink

root@imx8mpevk:~# gst-inspect-1.0 tensor_sink  
Factory Details:
  Rank                     none (0)
  Long-name                TensorSink
  Klass                    Sink/Tensor
  Description              Sink element to handle tensor stream
  Author                   Samsung Electronics Co., Ltd.

Plugin Details:
  Name                     nnstreamer
  Description              nnstreamer plugin library
  Filename                 /usr/lib/gstreamer-1.0/libnnstreamer.so
  Version                  2.4.0
  License                  LGPL
  Source module            nnstreamer
  Binary package           nnstreamer
  Origin URL               https://github.com/nnstreamer/nnstreamer

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseSink
                         +----GstTensorSink

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      other/tensor
              framerate: [ 0/1, 2147483647/1 ]
      other/tensors
                 format: { (string)static, (string)flexible, (string)sparse }
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'

Element Properties:

  async               : Go asynchronously to PAUSED
                        flags: readable, writable
                        Boolean. Default: true
  
  blocksize           : Size in bytes to pull per buffer (0 = default)
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 4096 
  
  emit-signal         : Emit signal for new data, stream start, eos
                        flags: readable, writable
                        Boolean. Default: true
  
  enable-last-sample  : Enable the last-sample property
                        flags: readable, writable
                        Boolean. Default: true
  
  last-sample         : The last sample received in the sink
                        flags: readable
                        Boxed pointer of type "GstSample"
  
  max-bitrate         : The maximum bits per second to render (0 = disabled)
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0 
  
  max-lateness        : Maximum number of nanoseconds that a buffer can be late before it is dropped (-1 unlimited)
                        flags: readable, writable
                        Integer64. Range: -1 - 9223372036854775807 Default: -1 
  
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "tensorsink0"
  
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  
  processing-deadline : Maximum processing time for a buffer in nanoseconds
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 20000000 
  
  qos                 : Generate Quality-of-Service events upstream
                        flags: readable, writable
                        Boolean. Default: true
  
  render-delay        : Additional render delay of the sink in nanoseconds
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0 
  
  signal-rate         : New data signals per second (0 for unlimited, max 500)
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 500 Default: 0 
  
  silent              : Produce verbose output
                        flags: readable, writable
                        Boolean. Default: true
  
  stats               : Sink Statistics
                        flags: readable
                        Boxed pointer of type "GstStructure"
                                                        average-rate: 0
                                                             dropped: 0
                                                            rendered: 0

  
  sync                : Sync on the clock
                        flags: readable, writable
                        Boolean. Default: false
  
  throttle-time       : The time to keep between rendered buffers (0 = disabled)
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0 
  
  ts-offset           : Timestamp offset in nanoseconds
                        flags: readable, writable
                        Integer64. Range: -9223372036854775808 - 9223372036854775807 Default: 0 
  

Element Signals:

  "eos" :  void user_function (GstElement * object,
                               gpointer user_data);

  "stream-start" :  void user_function (GstElement * object,
                                        gpointer user_data);

  "new-data" :  void user_function (GstElement * object,
                                    GstBuffer * arg0,
                                    gpointer user_data);

 

Posted by 구차니
embeded/i.mx 8m plus | 2025. 8. 14. 16:42

A cross-platform standard for accelerating vision processing.

Portable, Power-efficient Vision Processing
OpenVX™ is an open, royalty-free standard for cross platform acceleration of computer vision applications. OpenVX enables performance and power-optimized computer vision processing, especially important in embedded and real-time use cases such as face, body and gesture tracking, smart video surveillance, advanced driver assistance systems (ADAS), object and scene reconstruction, augmented reality, visual inspection, robotics and more.

[Link : https://www.khronos.org/openvx/]

[Link : https://cho001.tistory.com/224]

 

Digging through the source, two .so files turn up:

        if os.path.exists("/usr/lib/libtim-vx.so"):
            backends_available = ["NPU", "CPU"]
            
            ext_delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")
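The probe above can be folded into a hedged helper. The function name is hypothetical; the paths are the ones from the excerpt, and the returned path is what would be handed to `tflite.load_delegate()` as in the excerpt.

```python
import os

def probe_backends(tim_vx="/usr/lib/libtim-vx.so",
                   vx_delegate="/usr/lib/libvx_delegate.so"):
    # When the VeriSilicon TIM-VX stack is installed, offer the NPU first
    # and report the delegate .so to load; otherwise fall back to CPU only.
    if os.path.exists(tim_vx) and os.path.exists(vx_delegate):
        return ["NPU", "CPU"], vx_delegate
    return ["CPU"], None
```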

 

libtim-vx.so is the library for VeriSilicon's Tensor Interface Module (TIM-VX), which gets its acceleration through OpenVX,

[Link : https://github.com/VeriSilicon/TIM-VX]

 

while libvx_delegate.so seems to be the TFLite delegate that targets those OpenVX functions..

[Link : https://github.com/nxp-imx/tflite-vx-delegate-imx]

 

Searching for VeriSilicon, the company appears to own the Vivante NPU IP.

It seems NXP licensed a Vivante GPU and put the GC7000UL into the i.MX8MP.

Which would explain sourcing both the GPU and the NPU from the same vendor.

[Link : https://www.verisilicon.com/en/IPPortfolio/VivanteNPUIP]

embeded/i.mx 8m plus | 2025. 8. 14. 15:07

While taking apart an NXP i.MX8MP example, I found something interesting. Was this how it got decent performance?

            cam_pipeline = cv2.VideoCapture(
                "v4l2src device=" + cam + " ! imxvideoconvert_g2d ! "
                "video/x-raw,format=RGBA,width="
                + str(self.width)
                + ",height="
                + str(self.height)
                + " ! "
                + "videoconvert ! appsink"
            )

        status, org_img = cam_pipeline.read()

[Link : https://gstreamer.freedesktop.org/documentation/applib/gstappsink.html?gi-language=c]
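The capture setup can be restated as a reusable sketch. Helper names here are hypothetical; OpenCV must be built with GStreamer support, and passing `cv2.CAP_GSTREAMER` pins the backend explicitly instead of relying on auto-detection.

```python
def g2d_pipeline(device="/dev/video0", width=640, height=480):
    # imxvideoconvert_g2d does the colorspace conversion on the i.MX 2D GPU
    # before videoconvert hands RGBA frames to appsink.
    return (
        f"v4l2src device={device} ! imxvideoconvert_g2d ! "
        f"video/x-raw,format=RGBA,width={width},height={height} ! "
        "videoconvert ! appsink"
    )

def open_g2d_capture(device="/dev/video0", width=640, height=480):
    import cv2  # imported here so the string helper works without OpenCV installed
    return cv2.VideoCapture(g2d_pipeline(device, width, height), cv2.CAP_GSTREAMER)
```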

 


+

Tried it on the i.MX8MP: fetching video wasn't the slow part, so it is still slow even with the pipeline above.

+

 

cv::VideoCapture::VideoCapture ( const String &  filename, int  apiPreference)

Open video file or a capturing device or a IP video stream for video capturing with API Preference.
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
filename it can be:
name of video file (eg. video.avi)
or image sequence (eg. img_%02d.jpg, which will read samples like img_00.jpg, img_01.jpg, img_02.jpg, ...)
or URL of video stream (eg. protocol://host:port/script_name?script_params|auth)
or GStreamer pipeline string in gst-launch tool format in case if GStreamer is used as backend Note that each video stream or IP camera feed has its own URL scheme. Please refer to the documentation of source stream to know the right URL.
apiPreference preferred Capture API backends to use. Can be used to enforce a specific reader implementation if multiple are available: e.g. cv::CAP_FFMPEG or cv::CAP_IMAGES or cv::CAP_DSHOW.

See also
cv::VideoCaptureAPIs

[Link : https://docs.opencv.org/3.4/d8/dfe/classcv_1_1VideoCapture.html#a949d90b766ba42a6a93fe23a67785951]

 

camSet='v4l2src device=/dev/video0 ! video/x-raw,width=640,height=360 ! nvvidconv flip-method='+str(flip)+' \
        ! video/x-raw(memory:NVMM), format=I420, width=640, height=360 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert \
        ! video/x-raw, format=BGR enable-max-performance=1 ! appsink '
cam=cv2.VideoCapture(camSet,cv2.CAP_GSTREAMER)

[Link : https://stackoverflow.com/questions/71816725/streaming-opencv-videocapture-frames-using-gstreamer-in-python-for-webcam]

 

CAP_GSTREAMER 
Python: cv.CAP_GSTREAMER
GStreamer.

[링크 : https://docs.opencv.org/3.4/d4/d15/group__videoio__flags__base.html#gga023786be1ee68a9105bf2e48c700294da38dcac6866f7608675dd35ba0b9c3c07]

 

appsink examples

[링크 : https://makepluscode.tistory.com/entry/Gstreamer-Python-Appsink-구현하기] python

[링크 : https://ralpioxxcs.github.io/post/gstreamer/1_gst/] cpp

Posted by 구차니
embeded/i.mx 8m plus - 2025. 7. 31. 17:51

Installed it, but running it throws an error, apparently from missing packages.

The catch is that the error is identical every time.. -_-


Running it from the console:

/opt/nxp/eIQ_Toolkit_v1.8.0/bin/eiqenv.sh
/opt/nxp/eIQ_Toolkit_v1.8.0/eiq-portal

[링크 : https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-Toolkit-Ubuntu-Installation/td-p/1727046]

 

An error like the one below occurred; installing two packages resolved it.

On Ubuntu 22.04.5 LTS:

 

$ sudo apt-cache search libffi
libffi7 - Foreign Function Interface library (runtime)

 

The backward-compatibility problem appears to come from libssl moving up to the 3.0 series:

$ wget https://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.24_amd64.deb
$ sudo dpkg -i libssl1.1_1.1.1f-1ubuntu2.24_amd64.deb

[링크 : https://stackoverflow.com/questions/72133316/libssl-so-1-1-cannot-open-shared-object-file-no-such-file-or-directory]
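Whether the legacy runtime is actually visible to the dynamic linker can be checked from Python (a sketch; the sonames below are the standard ones, not anything specific to eIQ):

```python
import ctypes

def can_load(soname):
    """Return True if the dynamic linker can load the given shared object."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

# On stock Ubuntu 22.04 only libssl.so.3 loads; libssl.so.1.1 loads
# only after installing the compatibility .deb above.
print("libssl.so.3  :", can_load("libssl.so.3"))
print("libssl.so.1.1:", can_load("libssl.so.1.1"))
```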

 

It launches now!

 

Now to figure out how to actually use it.

[링크 : https://docs.nxp.com/bundle/EIQTUG/page/topics/introduction.html]

embeded/i.mx 8m plus - 2025. 4. 1. 18:28

Hah.. went looking for a blit function because of performance, and

it turns out NXP ships a library usable on both i.MX6 and i.MX8.

Incidentally, the i.MX9 is a curious lineup: below the i.MX8QuadMax but above the rest of the i.MX8 series.

[링크 : https://www.nxp.com/docs/en/user-guide/IMX_GRAPHICS_USERS_GUIDE.pdf]

 

The function I wanted is exactly this one. But being a user guide rather than an application note, it only lists the function and gives no detailed description of the arguments..

[링크 : https://www.nxp.com/docs/en/user-guide/IMX_GRAPHICS_USERS_GUIDE.pdf]

 

Digging around does turn things up, though.. ugh..

[링크 : https://github.com/nxp-imx/g2d-samples]

    [링크 : https://github.com/nxp-imx/g2d-samples/blob/imx_2.3/multiblit_test/g2d_multiblit.c]

[링크 : https://community.nxp.com/t5/i-MX-Processors/g2d-alloc-alloc-memory-fail-with-size-6220800/m-p/451245]

 

embeded/i.mx 8m plus - 2024. 5. 14. 15:45

fail!!

portaudio seems to be a library, though.

Conversely, I should look into how to check which audio systems the Linux image actually supports.

 

root@imx8mpevk:~# python3 tone.py 
Traceback (most recent call last):
  File "/home/root/tone.py", line 2, in <module>
    import sounddevice as sd
  File "/usr/lib/python3.11/site-packages/sounddevice.py", line 71, in <module>
    raise OSError('PortAudio library not found')
OSError: PortAudio library not found

 

The Raspberry Pi apparently supports portaudio, but on the i.MX8 board it cannot even be installed via apt.

Tedious as it is(?), do I have to build the library myself and copy it over?

sudo apt-get install libportaudio2
sudo apt-get install libasound2-dev

[링크 : https://park-duck.tistory.com/entry/portAudio-library-not-found-에러-해결]
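sounddevice raises `OSError('PortAudio library not found')` when it cannot locate the PortAudio shared library, so probing for the library directly shows whether the runtime (libportaudio2, or a self-built libportaudio.so) is actually on the target (a sketch):

```python
import ctypes.util

def portaudio_available():
    """True if the PortAudio shared library can be located on this system."""
    return ctypes.util.find_library("portaudio") is not None

print("PortAudio present:", portaudio_available())
```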

embeded/i.mx 8m plus - 2023. 12. 21. 15:51

Tested what is in the i.MX Machine Learning User's Guide.

The behavior is as follows:
• If USE_GPU_INFERENCE=1, the graph is executed on the GPU
• Otherwise, the graph is executed on the NPU (if available)
By default, the NPU is used for OpenVX graph execution.

 

By total run time NEON was fastest, and by per-inference time the NPU was fastest.

Surprisingly, GPU acceleration turned out even worse than plain NEON. A shocking(!) result..

root@imx8mpevk:/usr/bin/tensorflow-lite-2.12.1/examples# time ./label_image 
INFO: Loaded model ./mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
INFO: invoked
INFO: average time: 41.16 ms
INFO: 0.764706: 653 military uniform
INFO: 0.121569: 907 Windsor tie
INFO: 0.0156863: 458 bow tie
INFO: 0.0117647: 466 bulletproof vest
INFO: 0.00784314: 835 suit

real    0m0.173s
user    0m0.520s
sys     0m0.024s
root@imx8mpevk:/usr/bin/tensorflow-lite-2.12.1/examples# time USE_GPU_INFERENCE=0 ./label_image --external_delegate_path=/usr/lib/libvx_delegate.so
INFO: Loaded model ./mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
Vx delegate: allowed_cache_mode set to 0.
Vx delegate: device num set to 0.
Vx delegate: allowed_builtin_code set to 0.
Vx delegate: error_during_init set to 0.
Vx delegate: error_during_prepare set to 0.
Vx delegate: error_during_invoke set to 0.
EXTERNAL delegate created.
INFO: Applied EXTERNAL delegate.
W [HandleLayoutInfer:291]Op 162: default layout inference pass.
INFO: invoked
INFO: average time: 2.861 ms
INFO: 0.768627: 653 military uniform
INFO: 0.105882: 907 Windsor tie
INFO: 0.0196078: 458 bow tie
INFO: 0.0117647: 466 bulletproof vest
INFO: 0.00784314: 835 suit

real    0m3.116s
user    0m2.916s
sys     0m0.195s
root@imx8mpevk:/usr/bin/tensorflow-lite-2.12.1/examples# time USE_GPU_INFERENCE=1 ./label_image --external_delegate_path=/usr/lib/libvx_delegate.so
INFO: Loaded model ./mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
Vx delegate: allowed_cache_mode set to 0.
Vx delegate: device num set to 0.
Vx delegate: allowed_builtin_code set to 0.
Vx delegate: error_during_init set to 0.
Vx delegate: error_during_prepare set to 0.
Vx delegate: error_during_invoke set to 0.
EXTERNAL delegate created.
INFO: Applied EXTERNAL delegate.
W [query_hardware_caps:89]Unsupported evis version
W [HandleLayoutInfer:291]Op 162: default layout inference pass.
INFO: invoked
INFO: average time: 171.261 ms
INFO: 0.784314: 653 military uniform
INFO: 0.105882: 907 Windsor tie
INFO: 0.0156863: 458 bow tie
INFO: 0.00784314: 466 bulletproof vest
INFO: 0.00392157: 835 suit

real    0m1.992s
user    0m1.377s
sys     0m0.103s
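Summarizing the averages from the runs above (numbers copied from the logs): the NPU wins per inference by a wide margin, while the delegate runs' large real times appear to be dominated by one-time delegate/graph initialization at startup, which would explain why the plain NEON run finishes fastest overall.

```python
# Average per-inference times reported by label_image in the runs above (ms)
avg_ms = {
    "CPU (NEON)": 41.16,
    "NPU (vx_delegate)": 2.861,
    "GPU (vx_delegate)": 171.261,
}

baseline = avg_ms["CPU (NEON)"]
for name, ms in avg_ms.items():
    # speedup > 1 means faster than the CPU run
    print(f"{name:18s} {ms:8.3f} ms  ({baseline / ms:5.2f}x vs CPU)")
```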

embeded/i.mx 8m plus - 2023. 8. 31. 10:33

Doing floating-point division with NEON on a Cortex-A53,

but strangely the vectorization does not apply, so I am investigating.

 

The source is the single line below; maybe it is because stdev_val is a variable computed in an earlier step,

though it does not look like it would fail due to a scope problem.

int tempval = (int)((value - abs_avg) / (int)stdev_val);

 

Anyway, the errors are as below. Line 332 is the for(), and 337 is the source line above.

modules/m7_adc.c:332:27: missed: couldn't vectorize loop
modules/m7_adc.c:337:11: missed: not vectorized: relevant stmt not supported: tempval_168 = _73 / stdev_val_162;

 

This works:

int tempval = (int)((double)(value - abs_avg) / (double)stdev_val);

 

And this also works:

int tempval = (int)((value - abs_avg) / (double)stdev_val);

 

So in the division, an int operand fails but a double operand works..

The document below says nothing in particular about the divide operator either,

and on a Cortex-A53 (unlike, say, a Cortex-A9) a scalar hardware divider certainly exists. The catch is that vectorization uses NEON/ASIMD, which has a vector floating-point divide (FDIV) but no vector integer divide instruction, so a loop body containing an integer division cannot be vectorized.

Anyway.. tellingly, the example code has multiplication but no division. (surely not..)

[링크 : https://gcc.gnu.org/projects/tree-ssa/vectorization.html]

embeded/i.mx 8m plus - 2023. 5. 26. 12:33

Looking at the document again, it seems I previously skipped step 10.1 and only did 10.2.

 

Without thinking, I mindlessly tried to pull the file out of the Win 10 x64 image, only to find it is 2 GB.

Figuring the architecture would not match anyway,

downloading Win 10 IoT Ent took 4 hours.. (32-bit / 64-bit + English / Korean...)

10.1 Flashing Windows 10 IoT Installer to the SD card
Currently, the only way to deploy Windows IoT Enterprise on the onboard eMMC is to use the WinPE
OS (Windows Preinstallation Environment) to write the Windows IoT image to eMMC. WinPE is a
Windows manufacturing OS that can be fully loaded and run from memory without using persistent
storage. The following steps create an SD card with WinPE and a Windows IoT image that contains the BSP
drivers. The boot firmware checks the SD card and boots WinPE, which then installs the Windows IoT image to
the eMMC.
1. Decompress the W21H2-1-x-x-imx-windows-bsp-binaries.zip file. The package contains release-signed prebuilt binaries and image files.
2. Open the elevated command prompt and navigate to the IoTEntOnNXP directory.
3. Mount the previously downloaded Windows IoT Enterprise ISO image file (see chapter Software
requirements) and copy the install.wim file from the <DVD mount drive:>\sources\install.wim to the
IoTEntOnNXP directory.
4. Execute the command:
make-winpe-enterprise.cmd /disable_updates /disable_transparency /test_signing
This command creates a copy of the selected install.wim image with injected i.MX drivers and applied
updates from the kbpatch/ directory. These patches are for Windows 21H2, build 19044.1288 and update
the image to build 19044.2566.
Note: Be sure to copy the whole command line.
5. Execute the command:
make-winpe-enterprise.cmd /apply <disk_number>
where <disk_number> is the physical number of the SD card disk on your PC. It can be obtained using the
Disk Management tool (right-click the start menu icon and select Disk Management).
This command deploys the WinPE image to the SD card.
CAUTION: Make sure to select the correct disk number, as this step formats the selected disk! The WinPE-based Windows installer is now deployed on the SD card.
6. Continue with the firmware installation to the SD card.

10.2 Flashing firmware to the SD card
During active development of the boot firmware, it can be time-consuming and error-prone to repeatedly change
the dip switches between UUU download mode and eMMC boot mode. To simplify this process, the i.MX EVK
boards support SD card boot mode that allows you to keep the boot firmware on an SD card.
To deploy boot firmware to an SD card from Windows, we recommend using the Cfimager tool from https://www.
nxp.com/webapp/Download?colCode=CF_IMAGER.
Perform the following steps to flash the firmware to the SD card:
1. Download the NXP cfimager tool and copy it into the firmware directory or a directory listed in the system
environment variable %PATH%.
2. Navigate to the firmware directory.
3. Plug the SD card into the host PC and execute the following board-specific command:
For i.MX 8M Mini EVK board:
flash_bootloader.cmd /device MX8M_MINI_EVK /target_drive <SD card driver letter, for example, f:>
For i.MX 8M Quad EVK board:
flash_bootloader.cmd /device MX8M_EVK /target_drive <SD card driver letter, for example, f:>
For i.MX 8M Nano EVK board:
flash_bootloader.cmd /device MX8M_NANO_EVK /target_drive <SD card driver letter, for example, f:>
For i.MX 8M Plus EVK board:
flash_bootloader.cmd /device MX8M_PLUS_EVK /target_drive <SD card driver letter, for example, f:>
For i.MX 8QuadXPlus MEK board:
flash_bootloader.cmd /device MX8QXP_MEK /target_drive <SD card driver letter, for example, f:>
For i.MX 93 EVK board:
flash_bootloader.cmd /device MX93_11X11_EVK /target_drive <SD card driver letter, for example, f:>
4. Power off the board.
5. Insert the SD card to the board.
6. Change the boot device to the SD card.
7. Power on the board.

[링크 : https://www.nxp.com/docs/en/quick-reference-guide/IMXWQSG.pdf]

 

 

+

Retrying, and yet again a mountain of things getting in the way -_-

C:\nxp\W21H2-1-3-0-imx-windows-bsp-binaries\IoTEntOnNXP>make-winpe-enterprise.cmd /disable_updates /disable_transparency /test_signing

You must install the Windows PE Add-on for the ADK
https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/download-winpe--windows-pe

Script failed Cleaning up

Deployment Image Servicing and Management tool
Version: 10.0.22621.1


Error: 50

The request is not supported.

The DISM log file can be found at C:\Windows\Logs\DISM\dism.log

Deployment Image Servicing and Management tool
Version: 10.0.22621.1


Error: 50

The request is not supported.

The DISM log file can be found at C:\Windows\Logs\DISM\dism.log
The system cannot find C:\nxp\W21H2-1-3-0-imx-windows-bsp-binaries\IoTEntOnNXP\diskpart.txt.

 

So, contrarian that I am, I installed only the PE Add-on!

 

Reading the document again, it says both the ADK for Win10 and the WinPE add-on for ADK 2004 must be installed, but installing only the WinPE add-on for ADK gets past this step for now.

3 Software requirements
• Binary drivers and firmware (either downloaded from nxp.com or built locally)
• Windows IoT operating system. There are two options:
– Visual Studio Subscription portal my.visualstudio.com
– At the portal, click Downloads -> Windows 10 -> Search for “Windows 10 IoT Enterprise LTSC 2021” or
“Windows 10 IoT Enterprise 2021”
– The default architecture is set to x64, click the dropdown menu to change it to Arm64 and download the
DVD
– Through microsoftoem.com facilitated by a Windows IoT OS distributor
– To find a distributor, visit aka.ms/iotdistributorlist
• Windows ADK for Windows 10 and Windows PE add-on for ADK, version 2004.

 

Retrying, it seems to work now,

but since I mindlessly(!) downloaded the x86/x64 Win IoT Ent, it complains that the image is not ARM64.

C:\nxp\W21H2-1-3-0-imx-windows-bsp-binaries\IoTEntOnNXP>make-winpe-enterprise.cmd /disable_updates /disable_transparency /test_signing

Selected: "install.wim"


Source image:                          install.wim
Test signing:                          yes
Patch Sdport:                          no
Windows debug over ethernet:           no, IP:
Windows PE debug over ethernet:        no, IP:
KD_NET file (for debug over net):      no
Windows debug over serial:             no
Windows PE debug over serial:          no
Unattended install answer file:        no
Disable updates:                       yes
Disable transparency:                  yes
Split wim:                             no
Cummulative update:                    no, path:

*********************************************************************************************************************************************************
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
*** Step 1 Creating i.MX Windows IoT Enterprise image: out\imx_win_iot_install.wim
*********************************************************************************************************************************************************
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Cleaning up from previous run

---------------------------------------------------------------------------------------------------------------------------------------------------------
*** Step 1.1 Copying Windows Enterprise image install.wim to out\imx_win_iot_install.wim
---------------------------------------------------------------------------------------------------------------------------------------------------------
A subdirectory or file out already exists.
copy "install.wim" "out\imx_win_iot_install.wim"
        1 file(s) copied.

---------------------------------------------------------------------------------------------------------------------------------------------------------
*** Step 1.2 Mounting i.MX Windows IoT Enterprise image at out\mount_enterprise
---------------------------------------------------------------------------------------------------------------------------------------------------------
dism /mount-wim /wimfile:"out\imx_win_iot_install.wim" /mountdir:"out\mount_enterprise" /index:2

Deployment Image Servicing and Management tool
Version: 10.0.22621.1


Error: 0xc1510113

The specified image does not exist in the WIM.
Check the WIM for existing images first.

The DISM log file can be found at C:\Windows\Logs\DISM\dism.log

embeded/i.mx 8m plus - 2023. 5. 26. 10:56

Look at that baud rate.. what a perverse choice -_-

3.1 Serial logging setup
To help troubleshoot issues during boot, use the USB micro-B port on i.MX EVK boards to output U-Boot and
UEFI firmware serial debug logs to a host PC. The USB micro-B port on the EVK presents a virtual serial port to
the host PC, which can be read by common Windows serial terminal applications such as HyperTerminal, Tera
Term, or PuTTY.
1. Connect the target and the PC using the cable mentioned above.
2. Open Device Manager on the PC and locate the Enhanced Virtual serial device and note the COM port
number.
3. Open the terminal on the PC. Configure the Enhanced Virtual serial/COM port to 921600 baud/s, 8-bit, one stop bit.

[링크 : https://www.nxp.com/docs/en/quick-reference-guide/IMXWQSG.pdf]

 

Anyway, after matching the baud rate, output shows up, if oddly (it does not recognize the keyboard, so I have to drive it over UART -_ㅠ)

 

Nothing shows up under Device Manager.

 

In the Boot Manager, entries do appear per UEFI option.

 

This is booting from the i.MX8MP board's eMMC. Not sure which version is flashed there,

but it kernel-panics during boot. Possibly the bootloader configures things for Windows,

leaving something unset that the u-boot + kernel path expects.

 

Maybe because the Windows IoT image is only 2.5 MB, trying to boot from the SD card makes no progress.

 

The Boot Maintenance Manager shows a bit more, but entering it does not accomplish anything either..

 

At best, the console option only lets you choose which device to use for input / output / stderr.

How on earth do I get a GUI screen like regular Windows..
