Pre-requisites:
- Copy the model's label file "ssd_coco_labels.txt" from the data/ssd directory in TensorRT samples to this directory.
- Steps to generate the UFF model from the ssd_inception_v2_coco TensorFlow frozen graph (taken from the TensorRT sampleUffSSD README):
  1. Make sure TensorRT's uff-converter-tf package is installed.
  2. Install the tensorflow-gpu package for python:
     For dGPU: $ pip install tensorflow-gpu
     For Jetson, refer to https://elinux.org/Jetson_Zoo#TensorFlow
  3. Download and untar the ssd_inception_v2_coco TensorFlow trained model from
     http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
  4. Navigate to the extracted directory and convert the frozen graph to uff:
     $ cd ssd_inception_v2_coco_2017_11_17
     $ python /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py \
         frozen_inference_graph.pb -O NMS \
         -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
         -o sample_ssd_relu6.uff
  5. Copy sample_ssd_relu6.uff to this directory.
$ python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py
[링크 : https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_FAQ.html]
After converting it and opening the result in netron.app... huh, the output's name is NMS,
and the type and tensor dimensions are not shown?
[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/nvdsinfer__custom__impl_8h.html]
NvDsInferLayerInfo Struct
[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/structNvDsInferLayerInfo.html]
typedef struct
{
  unsigned int numDims;
  unsigned int d[NVDSINFER_MAX_DIMS];
  unsigned int numElements;
} NvDsInferDims;

typedef enum
{
  FLOAT = 0,
  HALF = 1,
  INT8 = 2,
  INT32 = 3
} NvDsInferDataType;

typedef struct
{
  NvDsInferDataType dataType;
  union {
    NvDsInferDims inferDims;
    NvDsInferDims dims _DS_DEPRECATED_("dims is deprecated. Use inferDims instead");
  };
  int bindingIndex;
  const char* layerName;
  void *buffer;
  int isInput;
} NvDsInferLayerInfo;
[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/nvdsinfer_8h_source.html]
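As a rough illustration of how the NvDsInferDims fields relate to each other (this is a hypothetical Python analogue for reading the struct, not the DeepStream API): numElements appears to be the product of the first numDims entries of d.

```python
from math import prod

# Hypothetical analogue of NvDsInferDims: numElements as the product of
# the first num_dims entries of d. This is an assumption based on the
# field names; nvdsinfer.h is the authoritative reference.
def num_elements(d, num_dims):
    return prod(d[:num_dims])

# e.g. an SSD input bound as 3x300x300 (CHW) would hold 270000 values
print(num_elements([3, 300, 300], 3))  # → 270000
```

This also shows why only the first numDims slots of the fixed-size d[NVDSINFER_MAX_DIMS] array are meaningful.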
ffmpeg -framerate 30 -i Image%08d.jpg -crf 23 Output.mp4
[링크 : https://stackoverflow.com/questions/3002601/converting-avi-frames-to-jpgs-on-linux]
ffmpeg -i in.avi -vsync cfr -r 1 -f image2 'img-%03d.jpeg'
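The `%03d` in the image2 pattern is ordinary printf-style numbering, so the frames come out as a zero-padded sequence. A quick Python check of the same pattern:

```python
# Same printf-style pattern handed to ffmpeg's image2 muxer above;
# Python's %-formatting expands it identically.
pattern = 'img-%03d.jpeg'
names = [pattern % i for i in range(1, 4)]
print(names)  # → ['img-001.jpeg', 'img-002.jpeg', 'img-003.jpeg']
```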
Looking at it in the netron web version.
The file to download is at the link below..
[링크 : http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz]
Trying to find out which layer names produce the outputs.
There appear to be four main ones, with the following names:
detection_boxes, detection_scores, detection_classes, num_detections
I expected detection_boxes to be listed as 4 coordinates, but it is only marked as float32, which threw me..
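The plain float32 display is less surprising if the tensor is just a flat buffer that the consumer reshapes to [N, 4]. A hedged sketch with made-up values (the real detection_boxes come from the model; the [ymin, xmin, ymax, xmax] ordering is the usual TF object-detection convention, worth verifying against the actual graph):

```python
# Hypothetical flat float32 output for 2 detections. Each detection
# contributes 4 consecutive floats; reshaping recovers the boxes.
flat = [0.1, 0.2, 0.5, 0.6,
        0.3, 0.3, 0.9, 0.8]

boxes = [flat[i:i + 4] for i in range(0, len(flat), 4)]
print(boxes)  # → [[0.1, 0.2, 0.5, 0.6], [0.3, 0.3, 0.9, 0.8]]
```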
Only the basic code structure is kept here; the detailed code has been removed for analysis.
$ cat nvdsiplugin_ssd.cpp
#include "NvInferPlugin.h"
#include <vector>
#include "cuda_runtime_api.h"
#include <cassert>
#include <cublas_v2.h>
#include <functional>
#include <numeric>
#include <algorithm>
#include <iostream>

using namespace nvinfer1;

class FlattenConcat : public IPluginV2
{
public:
    FlattenConcat(int concatAxis, bool ignoreBatch)
        : mIgnoreBatch(ignoreBatch)
        , mConcatAxisID(concatAxis)
    {
        assert(mConcatAxisID == 1 || mConcatAxisID == 2 || mConcatAxisID == 3);
    }

    //clone constructor
    FlattenConcat(int concatAxis, bool ignoreBatch, int numInputs, int outputConcatAxis, int* inputConcatAxis)
        : mIgnoreBatch(ignoreBatch)
        , mConcatAxisID(concatAxis)
        , mOutputConcatAxis(outputConcatAxis)
        , mNumInputs(numInputs)
    {
        CHECK(cudaMallocHost((void**) &mInputConcatAxis, mNumInputs * sizeof(int)));
        for (int i = 0; i < mNumInputs; ++i)
            mInputConcatAxis[i] = inputConcatAxis[i];
    }

    FlattenConcat(const void* data, size_t length) { }

    ~FlattenConcat() { }

    int getNbOutputs() const noexcept override { return 1; }
    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) noexcept override { }
    int initialize() noexcept override { }
    void terminate() noexcept override { }
    size_t getWorkspaceSize(int) const noexcept override { return 0; }
    int enqueue(int batchSize, void const* const* inputs, void* const* outputs, void*, cudaStream_t stream) noexcept override { }
    size_t getSerializationSize() const noexcept override { }
    void serialize(void* buffer) const noexcept override { }
    void configureWithFormat(const Dims* inputs, int nbInputs, const Dims* outputDims, int nbOutputs, nvinfer1::DataType type, nvinfer1::PluginFormat format, int maxBatchSize) noexcept override { }
    bool supportsFormat(DataType type, PluginFormat format) const noexcept override { }
    const char* getPluginType() const noexcept override { return "FlattenConcat_TRT"; }
    const char* getPluginVersion() const noexcept override { return "1"; }
    void destroy() noexcept override { delete this; }
    IPluginV2* clone() const noexcept override { }
    void setPluginNamespace(const char* libNamespace) noexcept override { mNamespace = libNamespace; }
    const char* getPluginNamespace() const noexcept override { return mNamespace.c_str(); }

private:
    template <typename T>
    void write(char*& buffer, const T& val) const { }

    template <typename T>
    T read(const char*& buffer) { }

    size_t* mCopySize = nullptr;
    bool mIgnoreBatch{false};
    int mConcatAxisID{0}, mOutputConcatAxis{0}, mNumInputs{0};
    int* mInputConcatAxis = nullptr;
    nvinfer1::Dims mCHW;
    cublasHandle_t mCublas;
    std::string mNamespace;
};

namespace
{
const char* FLATTENCONCAT_PLUGIN_VERSION{"1"};
const char* FLATTENCONCAT_PLUGIN_NAME{"FlattenConcat_TRT"};
} // namespace

class FlattenConcatPluginCreator : public IPluginCreator
{
public:
    FlattenConcatPluginCreator()
    {
        mPluginAttributes.emplace_back(PluginField("axis", nullptr, PluginFieldType::kINT32, 1));
        mPluginAttributes.emplace_back(PluginField("ignoreBatch", nullptr, PluginFieldType::kINT32, 1));
        mFC.nbFields = mPluginAttributes.size();
        mFC.fields = mPluginAttributes.data();
    }

    ~FlattenConcatPluginCreator() {}

    const char* getPluginName() const noexcept override { return FLATTENCONCAT_PLUGIN_NAME; }
    const char* getPluginVersion() const noexcept override { return FLATTENCONCAT_PLUGIN_VERSION; }
    const PluginFieldCollection* getFieldNames() noexcept override { return &mFC; }
    IPluginV2* createPlugin(const char* name, const PluginFieldCollection* fc) noexcept override { }
    IPluginV2* deserializePlugin(const char* name, const void* serialData, size_t serialLength) noexcept override
    {
        return new FlattenConcat(serialData, serialLength);
    }
    void setPluginNamespace(const char* libNamespace) noexcept override { mNamespace = libNamespace; }
    const char* getPluginNamespace() const noexcept override { return mNamespace.c_str(); }

private:
    static PluginFieldCollection mFC;
    bool mIgnoreBatch{false};
    int mConcatAxisID;
    static std::vector<PluginField> mPluginAttributes;
    std::string mNamespace = "";
};

PluginFieldCollection FlattenConcatPluginCreator::mFC{};
std::vector<PluginField> FlattenConcatPluginCreator::mPluginAttributes;

REGISTER_TENSORRT_PLUGIN(FlattenConcatPluginCreator);
$ cat nvdsparsebbox_ssd.cpp
#include <cstring>
#include <iostream>
#include "nvdsinfer_custom_impl.h"

#define MIN(a,b) ((a) < (b) ? (a) : (b))
#define MAX(a,b) ((a) > (b) ? (a) : (b))
#define CLIP(a,min,max) (MAX(MIN(a, max), min))

/* This is a sample bounding box parsing function for the sample SSD UFF
 * detector model provided with the TensorRT samples. */

extern "C"
bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList);

/* C-linkage to prevent name-mangling */
extern "C"
bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    for (int i = 0; i < keepCount; ++i)
    {
        NvDsInferObjectDetectionInfo object;
        object.classId = classId;
        object.detectionConfidence = det[2];
        object.left = CLIP(rectx1, 0, networkInfo.width - 1);
        object.top = CLIP(recty1, 0, networkInfo.height - 1);
        object.width = CLIP(rectx2, 0, networkInfo.width - 1) - object.left + 1;
        object.height = CLIP(recty2, 0, networkInfo.height - 1) - object.top + 1;
        objectList.push_back(object);
    }
    return true;
}

/* Check that the custom function has been defined correctly */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomSSD);
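The CLIP logic in the parser keeps each detection inside the network input. A small Python sketch of the same computation (clip_box is a hypothetical helper; the coordinates are made up, only the clip/width/height arithmetic mirrors the parser):

```python
def clip(a, lo, hi):
    # same as the CLIP(a, min, max) macro in nvdsparsebbox_ssd.cpp
    return max(min(a, hi), lo)

# Hypothetical helper: turn corner coordinates into a clipped
# left/top/width/height rect the way the parser's loop body does.
def clip_box(x1, y1, x2, y2, net_w, net_h):
    left = clip(x1, 0, net_w - 1)
    top = clip(y1, 0, net_h - 1)
    width = clip(x2, 0, net_w - 1) - left + 1
    height = clip(y2, 0, net_h - 1) - top + 1
    return left, top, width, height

print(clip_box(-10, 5, 320, 100, 300, 300))  # → (0, 5, 300, 96)
```

A box spilling past the 300x300 input is pulled back to the last valid pixel, which is why the width here caps at 300.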
NvDsInferLayerInfo members:
NvDsInferDataType dataType
union { NvDsInferDims inferDims };
int bindingIndex
const char * layerName
void * buffer
int isInput
- With gst-launch-1.0
  For Jetson:
  $ gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 ! \
      decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
      nvinfer config-file-path= config_infer_primary_ssd.txt ! \
      nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
- With deepstream-app
  $ deepstream-app -c deepstream_app_config_ssd.txt
$ cat deepstream_app_config_ssd.txt
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file://../../samples/streams/sample_1080p_h264.mp4
gpu-id=0
cudadec-memtype=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=-1
## Set muxer output width and height
width=1920
height=1080
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
batch-size=1
gie-unique-id=1
interval=0
labelfile-path=/home/nvidia/tmp_onnx/labels.txt
#labelfile-path=ssd_coco_labels.txt
model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
config-file=config_infer_primary_ssd.txt
nvbuf-memory-type=0
$ cat config_infer_primary_ssd.txt
[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
# yw
onnx-file=/home/nvidia/tmp_onnx/model.onnx
labelfile=/home/nvidia/tmp_onnx/labels.txt
model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
labelfile-path=ssd_coco_labels.txt
uff-file=sample_ssd_relu6.uff
infer-dims=3;300;300
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=91
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=MarkOutput_0
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

## Per class configuration
#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800
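Worth noting in that config: net-scale-factor=0.0078431372 is roughly 1/127.5, and combined with offsets=127.5 it maps 8-bit pixel values into approximately [-1, 1]. Per the DeepStream nvinfer documentation the per-pixel transform is y = net-scale-factor * (x - offset); a quick Python check:

```python
NET_SCALE_FACTOR = 0.0078431372  # ~= 1/127.5, from config_infer_primary_ssd.txt
OFFSET = 127.5                   # per-channel mean from offsets=127.5;127.5;127.5

def preprocess(x):
    # nvinfer's documented per-pixel transform: y = net-scale-factor * (x - offset)
    return NET_SCALE_FACTOR * (x - OFFSET)

print(preprocess(0), preprocess(255))  # roughly -1.0 and 1.0
```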
Maybe a digital twin is easiest to understand as the concept behind Tesla's autonomous driving?
You scan reality and bring it into a virtual world,
then use that to run various analyses and simulations in order to operate things optimally.
The target can be a building, a piece of land, or a specific object.
[링크 : https://matterport.com/ko/what-digital-twin]
[링크 : https://redshift.autodesk.co.kr/what-is-a-digital-twin/]
A digital twin is a virtual representation of the real world, including physical objects, processes, relationships, and behaviors. GIS creates digital twins of natural and built environments and uniquely integrates many types of digital models.
[링크 : https://www.esri.com/ko-kr/digital-twin/overview]
What is a 'Digital Twin'? It is a technology that recreates real equipment or spaces identically in a virtual world, like a twin.
[링크 : https://www.korea.kr/news/visualNewsView.do?newsId=148876722]