Programming/golang | 2022. 4. 18. 19:15

 

$ cat hello.go 
package main

import "fmt"

func main() {
fmt.Println("Hello world")
hello()
}

$ cat func.go 
package main

import "fmt"

func hello() {
fmt.Println("Hello world 2")
}

$ go run .
go: go.mod file not found in current directory or any parent directory; see 'go help modules'

$ go mod init
go: creating new go.mod: module go2
go: to add module requirements and sums:
	go mod tidy

$ go run .
Hello world
Hello world 2
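
A note on the transcript above: with no argument, go mod init derives the module path from the directory name (hence module go2 here). For anything beyond a scratch directory you would normally pass the module path explicitly, e.g. (example.com/hello is a placeholder):

$ go mod init example.com/hello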

Posted by 구차니
embeded/jetson | 2022. 4. 18. 15:12

Analyzing the source,

objectList takes results as x, y, w, h relative to the inference input size.

That is, they have to be computed as x = output * width.
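
As a rough sketch, that scaling step inside a custom bbox parser could look like this (assuming the model emits normalized [0,1] coordinates in x1, y1, x2, y2 order; out[] and the ordering are illustrative, not taken from the actual source):

/* Convert one normalized detection into pixel coordinates relative to
 * the inference resolution. networkInfo is the NvDsInferNetworkInfo
 * argument passed to the custom parse function. */
NvDsInferObjectDetectionInfo object;
object.left   = out[0] * networkInfo.width;
object.top    = out[1] * networkInfo.height;
object.width  = (out[2] - out[0]) * networkInfo.width;
object.height = (out[3] - out[1]) * networkInfo.height;
objectList.push_back(object);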

 

$ cat config_infer_primary_ssd.txt
num-detected-classes=91

The setting above is passed into the DeepStream plugin as the value below:
detectionParams.numClassesConfigured

 

 

$ gst-inspect-1.0 nvinfer
Factory Details:
  Rank                     primary (256)
  Long-name                NvInfer plugin
  Klass                    NvInfer Plugin
  Description              Nvidia DeepStreamSDK TensorRT plugin
  Author                   NVIDIA Corporation. Deepstream for Tesla forum: https://devtalk.nvidia.com/default/board/209

Plugin Details:
  Name                     nvdsgst_infer
  Description              NVIDIA DeepStreamSDK TensorRT plugin
  Filename                 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
  Version                  6.0.1
  License                  Proprietary
  Source module            nvinfer
  Binary package           NVIDIA DeepStreamSDK TensorRT plugin
  Origin URL               http://nvidia.com/

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstNvInfer

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { (string)NV12, (string)RGBA }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { (string)NV12, (string)RGBA }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "nvinfer0"
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  unique-id           : Unique ID for the element. Can be used to identify output of the element
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 15 
  process-mode        : Infer processing mode
                        flags: readable, writable, changeable only in NULL or READY state
                        Enum "GstNvInferProcessModeType" Default: 1, "primary"
                           (1): primary          - Primary (Full Frame)
                           (2): secondary        - Secondary (Objects)
  config-file-path    : Path to the configuration file for this instance of nvinfer
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        String. Default: ""
  infer-on-gie-id     : Infer on metadata generated by GIE with this unique ID.
Set to -1 to infer on all metadata.
                        flags: readable, writable, changeable only in NULL or READY state
                        Integer. Range: -1 - 2147483647 Default: -1 
  infer-on-class-ids  : Operate on objects with specified class ids
Use string with values of class ids in ClassID (int) to set the property.
 e.g. 0:2:3
                        flags: readable, writable, changeable only in NULL or READY state
                        String. Default: ""
  filter-out-class-ids: Ignore metadata for objects of specified class ids
Use string with values of class ids in ClassID (int) to set the property.
 e.g. 0;2;3
                        flags: readable, writable, changeable only in NULL or READY state
                        String. Default: ""
  model-engine-file   : Absolute path to the pre-generated serialized engine file for the model
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        String. Default: ""
  batch-size          : Maximum batch size for inference
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 1 - 1024 Default: 1 
  interval            : Specifies number of consecutive batches to be skipped for inference
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 2147483647 Default: 0 
  gpu-id              : Set GPU Device ID
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0 
  raw-output-file-write: Write raw inference output to file
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  raw-output-generated-callback: Pointer to the raw output generated callback function
(type: gst_nvinfer_raw_output_generated_callback in 'gstnvdsinfer.h')
                        flags: readable, writable, changeable only in NULL or READY state
                        Pointer.
  raw-output-generated-userdata: Pointer to the userdata to be supplied with raw output generated callback
                        flags: readable, writable, changeable only in NULL or READY state
                        Pointer.
  output-tensor-meta  : Attach inference tensor outputs as buffer metadata
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  output-instance-mask: Instance mask expected in network output and attach it to metadata
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  input-tensor-meta   : Use preprocessed input tensors attached as metadata instead of preprocessing inside the plugin
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false

Element Signals:
  "model-updated" :  void user_function (GstElement* object,
                                         gint arg0,
                                         gchararray arg1,
                                         gpointer user_data);

 

 

[링크 : https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html]
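
For reference, a minimal pipeline exercising nvinfer might look something like this (untested sketch; the stream file and config path are placeholders):

$ gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! \
    mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=config_infer_primary_ssd.txt ! \
    nvvideoconvert ! nvdsosd ! nveglglessink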

 

model-file (Caffe model)
proto-file (Caffe model)
uff-file (UFF models)
onnx-file (ONNX models)
model-engine-file, if already generated
int8-calib-file for INT8 mode
mean-file, if required
offsets, if required
maintain-aspect-ratio, if required
parse-bbox-func-name (detectors only)
parse-classifier-func-name (classifiers only)
custom-lib-path
output-blob-names (Caffe and UFF models)
network-type
model-color-format
process-mode
engine-create-func-name
infer-dims (UFF models)
uff-input-order (UFF models)

[링크 : https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_using_custom_model.html]
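
As a hedged example, here is roughly how those keys appear in the objectDetector_SSD sample's config (values reproduced from memory, so verify against your own copy):

[property]
uff-file=sample_ssd_relu6.uff
model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
infer-dims=3;300;300
uff-input-order=0
output-blob-names=MarkOutput_0
network-type=0
num-detected-classes=91
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so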

Posted by 구차니

My parents and my sister's family caught COVID, so I suppose that makes us close contacts?

Anyway, for that reason the kids got a COVID test before being sent back to school.

Come to think of it, it seems public health centers no longer do rapid antigen tests either,

so do I have to pay out of pocket at a hospital every time the school asks for one?

Posted by 구차니

Seeing how I pass out every day but still sleep well at night,

I must have a serious amount of fatigue built up..

Posted by 구차니
embeded/jetson | 2022. 4. 15. 18:04

 

 

Pre-requisites:
- Copy the model's label file "ssd_coco_labels.txt" from the data/ssd directory
  in TensorRT samples to this directory.
- Steps to generate the UFF model from ssd_inception_v2_coco TensorFlow frozen
  graph. These steps have been referred from TensorRT sampleUffSSD README:
  1. Make sure TensorRT's uff-converter-tf package is installed.
  2. Install tensorflow-gpu package for python:
     For dGPU:
       $ pip install tensorflow-gpu
     For Jetson, refer to https://elinux.org/Jetson_Zoo#TensorFlow
  3. Download and untar the ssd_inception_v2_coco TensorFlow trained model from
     http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
  4. Navigate to the extracted directory and convert the frozen graph to uff:
     $ cd ssd_inception_v2_coco_2017_11_17
     $ python /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py \
         frozen_inference_graph.pb -O NMS \
         -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
         -o sample_ssd_relu6.uff
  5. Copy sample_ssd_relu6.uff to this directory.
$ python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py

[링크 : https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_FAQ.html]
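
Presumably the python3 variant takes the same arguments as the python2.7 invocation quoted above (paths assumed, not verified):

$ python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
    frozen_inference_graph.pb -O NMS \
    -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
    -o sample_ssd_relu6.uff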

 

After converting it, I viewed it on netron.app and... huh, the output name is NMS,

and the type and tensor dimensions don't show up?
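
(Most likely this is because NMS is the custom NMS_TRT plugin node that config.py grafts in during conversion; netron has no schema for TensorRT plugin ops, so it cannot display their output type or tensor dimensions.)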

Posted by 구차니
embeded/jetson | 2022. 4. 15. 17:14

 

 

[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/nvdsinfer__custom__impl_8h.html]

 

NvDsInferLayerInfo Struct

[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/structNvDsInferLayerInfo.html]

 

 

 

typedef struct
{
  unsigned int numDims;
  unsigned int d[NVDSINFER_MAX_DIMS];
  unsigned int numElements;
} NvDsInferDims;

typedef enum
{
  FLOAT = 0,
  HALF = 1,
  INT8 = 2,
  INT32 = 3
} NvDsInferDataType;

typedef struct
{
  NvDsInferDataType dataType;
  union {
      NvDsInferDims inferDims;
      NvDsInferDims dims _DS_DEPRECATED_("dims is deprecated. Use inferDims instead");
  };
  int bindingIndex;
  const char* layerName;
  void *buffer;
  int isInput;
} NvDsInferLayerInfo;

[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/nvdsinfer_8h_source.html]

[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/group__ee__nvinf.html#ga6a35747b3bb45d13db9be3a2aa981e49]
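
As a sketch of how this struct is typically consumed in a custom parse function, scan the output layers for one by name and read its buffer (the layer name "NMS" is an assumption borrowed from the SSD sample; needs <cstring> for strcmp):

/* Find the "NMS" output layer in outputLayersInfo
 * (a std::vector<NvDsInferLayerInfo>) and view its buffer as floats. */
const NvDsInferLayerInfo *nms = nullptr;
for (const auto &layer : outputLayersInfo) {
    if (!layer.isInput && strcmp(layer.layerName, "NMS") == 0) {
        nms = &layer;
        break;
    }
}
if (nms && nms->dataType == FLOAT) {
    const float *data = static_cast<const float *>(nms->buffer);
    unsigned int count = nms->inferDims.numElements;  /* elements across all dims */
    /* ... walk data[0 .. count-1] to extract detections ... */
}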

Posted by 구차니

 

ffmpeg -framerate 30 -i Image%08d.jpg -crf 23 Output.mp4

[링크 : https://stackoverflow.com/questions/3002601/converting-avi-frames-to-jpgs-on-linux]

 

ffmpeg -i in.avi -vsync cfr -r 1 -f image2 'img-%03d.jpeg'

[링크 : https://ffmpeg.org/ffmpeg-formats.html#Examples-8]
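
(The first command stitches numbered JPEGs (Image00000001.jpg, ...) into a 30 fps H.264 mp4; -crf 23 is the quality target, and lower CRF means higher quality. The second goes the other way: -vsync cfr with -r 1 samples one frame per second from in.avi into numbered JPEGs.)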

Posted by 구차니

It looks impressive from the outside, but when you actually go and see it,

it feels like the same old stuff with some new terminology layered on top.

The simulation content likewise just gained terms like XR, AR, and digital twin,

and the 40-minute slots reduced everything to less than a surface skim, which was a shame..

Posted by 구차니
embeded/jetson | 2022. 4. 13. 15:39

Viewing it with the web version of netron.

The file in question is at the link below..

[링크 : http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz]

Trying to find out which layers produce the outputs.

There seem to be four main ones, named as follows:

detection_boxes, detection_scores, detection_classes, num_detections

I expected detection_boxes to be shown as four coordinates, but it is listed simply as float32, which threw me..
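
(For TensorFlow Object Detection API models, detection_boxes is normally a [batch, max_detections, 4] float32 tensor, each box being normalized [ymin, xmin, ymax, xmax] in [0, 1]; the four coordinates sit in the last dimension rather than appearing as separate outputs.)
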
Posted by 구차니
embeded/jetson | 2022. 4. 13. 11:48

Only the basic code structure is kept; the detailed code was removed for analysis.

$ cat nvdsiplugin_ssd.cpp
#include "NvInferPlugin.h"
#include <vector>
#include "cuda_runtime_api.h"
#include <cassert>
#include <cublas_v2.h>
#include <functional>
#include <numeric>
#include <algorithm>
#include <iostream>

using namespace nvinfer1;

class FlattenConcat : public IPluginV2
{
public:
    FlattenConcat(int concatAxis, bool ignoreBatch)
        : mIgnoreBatch(ignoreBatch)
        , mConcatAxisID(concatAxis)
    {
        assert(mConcatAxisID == 1 || mConcatAxisID == 2 || mConcatAxisID == 3);
    }
    //clone constructor
    FlattenConcat(int concatAxis, bool ignoreBatch, int numInputs, int outputConcatAxis, int* inputConcatAxis)
        : mIgnoreBatch(ignoreBatch)
        , mConcatAxisID(concatAxis)
        , mOutputConcatAxis(outputConcatAxis)
        , mNumInputs(numInputs)
    {
        CHECK(cudaMallocHost((void**) &mInputConcatAxis, mNumInputs * sizeof(int)));
        for (int i = 0; i < mNumInputs; ++i)
            mInputConcatAxis[i] = inputConcatAxis[i];
    }

    FlattenConcat(const void* data, size_t length)     {    }
    ~FlattenConcat()    {    }
    int getNbOutputs() const noexcept override { return 1; }
    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) noexcept override    {    }
    int initialize() noexcept override    {    }
    void terminate() noexcept override    {    }
    size_t getWorkspaceSize(int) const noexcept override { return 0; }
    int enqueue(int batchSize, void const* const* inputs, void* const* outputs, void*, cudaStream_t stream) noexcept override    {    }
    size_t getSerializationSize() const noexcept override   {    }
    void serialize(void* buffer) const noexcept override    {   }
    void configureWithFormat(const Dims* inputs, int nbInputs, const Dims* outputDims, int nbOutputs, nvinfer1::DataType type, nvinfer1::PluginFormat format, int maxBatchSize) noexcept override   {    }
    bool supportsFormat(DataType type, PluginFormat format) const noexcept override    {    }
    const char* getPluginType() const noexcept override { return "FlattenConcat_TRT"; }
    const char* getPluginVersion() const noexcept override { return "1"; }
    void destroy() noexcept override { delete this; }
    IPluginV2* clone() const noexcept override    {    }
    void setPluginNamespace(const char* libNamespace) noexcept override { mNamespace = libNamespace; }
    const char* getPluginNamespace() const noexcept override { return mNamespace.c_str(); }

private:
    template <typename T>    void write(char*& buffer, const T& val) const    {    }
    template <typename T>    T read(const char*& buffer)    {    }
    size_t* mCopySize = nullptr;
    bool mIgnoreBatch{false};
    int mConcatAxisID{0}, mOutputConcatAxis{0}, mNumInputs{0};
    int* mInputConcatAxis = nullptr;
    nvinfer1::Dims mCHW;
    cublasHandle_t mCublas;
    std::string mNamespace;
};

namespace
{
const char* FLATTENCONCAT_PLUGIN_VERSION{"1"};
const char* FLATTENCONCAT_PLUGIN_NAME{"FlattenConcat_TRT"};
} // namespace

class FlattenConcatPluginCreator : public IPluginCreator
{
public:
    FlattenConcatPluginCreator()
    {
        mPluginAttributes.emplace_back(PluginField("axis", nullptr, PluginFieldType::kINT32, 1));
        mPluginAttributes.emplace_back(PluginField("ignoreBatch", nullptr, PluginFieldType::kINT32, 1));
        mFC.nbFields = mPluginAttributes.size();
        mFC.fields = mPluginAttributes.data();
    }

    ~FlattenConcatPluginCreator() {}
    const char* getPluginName() const noexcept override { return FLATTENCONCAT_PLUGIN_NAME; }
    const char* getPluginVersion() const noexcept override { return FLATTENCONCAT_PLUGIN_VERSION; }
    const PluginFieldCollection* getFieldNames() noexcept override { return &mFC; }
    IPluginV2* createPlugin(const char* name, const PluginFieldCollection* fc) noexcept override    {    }
    IPluginV2* deserializePlugin(const char* name, const void* serialData, size_t serialLength) noexcept override    {        return new FlattenConcat(serialData, serialLength);    }
    void setPluginNamespace(const char* libNamespace) noexcept override { mNamespace = libNamespace; }
    const char* getPluginNamespace() const noexcept override { return mNamespace.c_str(); }

private:
    static PluginFieldCollection mFC;
    bool mIgnoreBatch{false};
    int mConcatAxisID;
    static std::vector<PluginField> mPluginAttributes;
    std::string mNamespace = "";
};

PluginFieldCollection FlattenConcatPluginCreator::mFC{};
std::vector<PluginField> FlattenConcatPluginCreator::mPluginAttributes;

REGISTER_TENSORRT_PLUGIN(FlattenConcatPluginCreator);
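
(REGISTER_TENSORRT_PLUGIN registers FlattenConcatPluginCreator with TensorRT's global plugin registry at library load time; that is what lets the engine builder and deserializer resolve the FlattenConcat_TRT nodes referenced by the UFF model.)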

 

$ cat nvdsparsebbox_ssd.cpp
#include <cstring>
#include <iostream>
#include "nvdsinfer_custom_impl.h"

#define MIN(a,b) ((a) < (b) ? (a) : (b))
#define MAX(a,b) ((a) > (b) ? (a) : (b))
#define CLIP(a,min,max) (MAX(MIN(a, max), min))

/* This is a sample bounding box parsing function for the sample SSD UFF
 * detector model provided with the TensorRT samples. */

extern "C"
bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList);

/* C-linkage to prevent name-mangling */
extern "C"
bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
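  /* NOTE: the variables used below (keepCount, det, classId,
   * rectx1/recty1/rectx2/recty2) come from parsing the NMS output
   * layer; that code was stripped as part of this structural analysis. */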
  for (int i = 0; i < keepCount; ++i)
  {
    NvDsInferObjectDetectionInfo object;
        object.classId = classId;
        object.detectionConfidence = det[2];
        object.left = CLIP(rectx1, 0, networkInfo.width - 1);
        object.top = CLIP(recty1, 0, networkInfo.height - 1);
        object.width = CLIP(rectx2, 0, networkInfo.width - 1) - object.left + 1;
        object.height = CLIP(recty2, 0, networkInfo.height - 1) - object.top + 1;
        objectList.push_back(object);
  }

  return true;
}

/* Check that the custom function has been defined correctly */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomSSD);

 

 

+ ?

[링크 : https://github.com/AastaNV/eLinux_data/blob/main/deepstream/ssd-jetson_inference/ssd-jetson_inference.patch]

 

+

NvDsInferDataType dataType;
union {
    NvDsInferDims inferDims;
};
int bindingIndex;
const char *layerName;
void *buffer;
int isInput;

[링크 : https://docs.nvidia.com/metropolis/deepstream/5.0DP/dev-guide/DeepStream_Development_Guide/baggage/structNvDsInferLayerInfo.html]

Posted by 구차니