
+ 2021.02.16

Looking at it again on the way home, these are things implemented in tensorflow model garden / research / object detection,

so strictly speaking they are not TensorFlow's own implementation.

Maybe the right way to put it is that they are implemented on top of TensorFlow?

-

 

In pipeline.config, under model { ssd { feature_extractor { ... } } }, I changed

type: 'ssd_mobilenet_v2_keras'

to type: 'ssd_mobilenet_v2', and the error below occurred.

 

INFO:tensorflow:Maybe overwriting train_steps: 1
I0212 20:50:37.009305 140651210348352 config_util.py:552] Maybe overwriting train_steps: 1
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0212 20:50:37.009468 140651210348352 config_util.py:552] Maybe overwriting use_bfloat16: False
Traceback (most recent call last):
  File "model_main_tf2.py", line 113, in <module>
    tf.compat.v1.app.run()
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/minimonk/.local/lib/python3.8/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/home/minimonk/.local/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "model_main_tf2.py", line 104, in main
    model_lib_v2.train_loop(
  File "/home/minimonk/src/SSD-MobileNet-TF/object_detection/model_lib_v2.py", line 507, in train_loop
    detection_model = MODEL_BUILD_UTIL_MAP['detection_model_fn_base'](
  File "/home/minimonk/src/SSD-MobileNet-TF/object_detection/builders/model_builder.py", line 1106, in build
    return build_func(getattr(model_config, meta_architecture), is_training,
  File "/home/minimonk/src/SSD-MobileNet-TF/object_detection/builders/model_builder.py", line 377, in _build_ssd_model
    _check_feature_extractor_exists(ssd_config.feature_extractor.type)
  File "/home/minimonk/src/SSD-MobileNet-TF/object_detection/builders/model_builder.py", line 249, in _check_feature_extractor_exists
    raise ValueError('{} is not supported. See `model_builder.py` for features '
ValueError: ssd_mobilenet_v2 is not supported. See `model_builder.py` for features extractors compatible with different versions of Tensorflow

 

It says to see model_builder.py, but searching for it turns up several files(?)

$ sudo find / -name model_builder.py
/home/minimonk/src/SSD-MobileNet-TF/object_detection/builders/model_builder.py
/home/minimonk/src/SSD-MobileNet-TF/models/research/object_detection/builders/model_builder.py
/home/minimonk/src/SSD-MobileNet-TF/models/research/lstm_object_detection/model_builder.py
/home/minimonk/src/SSD-MobileNet-TF/build/lib/object_detection/builders/model_builder.py

 

Excluding the lstm one they are all the same size, so treating them as the same file I opened one, and it reads as below..

Going by the if tf_version.is_tf2() block, the only ones usable on TF2 are..

ssd_mobilenet_v2_fpn_keras and

ssd_mobilenet_v2_keras -_-

The ssd_mobilenet_v2 I was hoping for is TF1 only ㅠㅠ

$ vi /home/minimonk/src/SSD-MobileNet-TF/object_detection/builders/model_builder.py
if tf_version.is_tf2():
  from object_detection.models import center_net_hourglass_feature_extractor
  from object_detection.models import center_net_mobilenet_v2_feature_extractor
  from object_detection.models import center_net_mobilenet_v2_fpn_feature_extractor
  from object_detection.models import center_net_resnet_feature_extractor
  from object_detection.models import center_net_resnet_v1_fpn_feature_extractor
  from object_detection.models import faster_rcnn_inception_resnet_v2_keras_feature_extractor as frcnn_inc_res_keras
  from object_detection.models import faster_rcnn_resnet_keras_feature_extractor as frcnn_resnet_keras
  from object_detection.models import ssd_resnet_v1_fpn_keras_feature_extractor as ssd_resnet_v1_fpn_keras
  from object_detection.models import faster_rcnn_resnet_v1_fpn_keras_feature_extractor as frcnn_resnet_fpn_keras
  from object_detection.models.ssd_mobilenet_v1_fpn_keras_feature_extractor import SSDMobileNetV1FpnKerasFeatureExtractor
  from object_detection.models.ssd_mobilenet_v1_keras_feature_extractor import SSDMobileNetV1KerasFeatureExtractor
  from object_detection.models.ssd_mobilenet_v2_fpn_keras_feature_extractor import SSDMobileNetV2FpnKerasFeatureExtractor
  from object_detection.models.ssd_mobilenet_v2_keras_feature_extractor import SSDMobileNetV2KerasFeatureExtractor
  from object_detection.predictors import rfcn_keras_box_predictor
  if sys.version_info[0] >= 3:
    from object_detection.models import ssd_efficientnet_bifpn_feature_extractor as ssd_efficientnet_bifpn

if tf_version.is_tf1():
  from object_detection.models import faster_rcnn_inception_resnet_v2_feature_extractor as frcnn_inc_res
  from object_detection.models import faster_rcnn_inception_v2_feature_extractor as frcnn_inc_v2
  from object_detection.models import faster_rcnn_nas_feature_extractor as frcnn_nas
  from object_detection.models import faster_rcnn_pnas_feature_extractor as frcnn_pnas
  from object_detection.models import faster_rcnn_resnet_v1_feature_extractor as frcnn_resnet_v1
  from object_detection.models import ssd_resnet_v1_fpn_feature_extractor as ssd_resnet_v1_fpn
  from object_detection.models import ssd_resnet_v1_ppn_feature_extractor as ssd_resnet_v1_ppn
  from object_detection.models.embedded_ssd_mobilenet_v1_feature_extractor import EmbeddedSSDMobileNetV1FeatureExtractor
  from object_detection.models.ssd_inception_v2_feature_extractor import SSDInceptionV2FeatureExtractor
  from object_detection.models.ssd_mobilenet_v2_fpn_feature_extractor import SSDMobileNetV2FpnFeatureExtractor
  from object_detection.models.ssd_mobilenet_v2_mnasfpn_feature_extractor import SSDMobileNetV2MnasFPNFeatureExtractor
  from object_detection.models.ssd_inception_v3_feature_extractor import SSDInceptionV3FeatureExtractor
  from object_detection.models.ssd_mobilenet_edgetpu_feature_extractor import SSDMobileNetEdgeTPUFeatureExtractor
  from object_detection.models.ssd_mobilenet_v1_feature_extractor import SSDMobileNetV1FeatureExtractor
  from object_detection.models.ssd_mobilenet_v1_fpn_feature_extractor import SSDMobileNetV1FpnFeatureExtractor
  from object_detection.models.ssd_mobilenet_v1_ppn_feature_extractor import SSDMobileNetV1PpnFeatureExtractor
  from object_detection.models.ssd_mobilenet_v2_feature_extractor import SSDMobileNetV2FeatureExtractor
  from object_detection.models.ssd_mobilenet_v3_feature_extractor import SSDMobileNetV3LargeFeatureExtractor
  from object_detection.models.ssd_mobilenet_v3_feature_extractor import SSDMobileNetV3SmallFeatureExtractor
  from object_detection.models.ssd_mobiledet_feature_extractor import SSDMobileDetCPUFeatureExtractor
  from object_detection.models.ssd_mobiledet_feature_extractor import SSDMobileDetDSPFeatureExtractor
  from object_detection.models.ssd_mobiledet_feature_extractor import SSDMobileDetEdgeTPUFeatureExtractor
  from object_detection.models.ssd_mobiledet_feature_extractor import SSDMobileDetGPUFeatureExtractor
  from object_detection.models.ssd_pnasnet_feature_extractor import SSDPNASNetFeatureExtractor
  from object_detection.predictors import rfcn_box_predictor
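 

Rather than reading the import block, the registered names can also be printed at runtime; a minimal sketch, assuming the module-level class maps in model_builder.py still carry these names (their keys are the strings accepted as feature_extractor.type):

from object_detection.builders import model_builder
from object_detection.utils import tf_version

# the map keys are exactly the strings allowed in feature_extractor { type: '...' }
if tf_version.is_tf2():
    print(sorted(model_builder.SSD_KERAS_FEATURE_EXTRACTOR_CLASS_MAP))
else:
    print(sorted(model_builder.SSD_FEATURE_EXTRACTOR_CLASS_MAP))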

 

[링크 : https://stackoverflow.com/questions/65938445/]

 

+

Whoa.. running ssd_mobilenet_v2_fpn_keras just gets killed for running out of memory ㄷㄷ

The obvious knob seems to be.. additional_layer_depth.. I'll reduce that and try again..

    feature_extractor {
      type: 'ssd_mobilenet_v2_fpn_keras'
      use_depthwise: true
      fpn {
        min_level: 3
        max_level: 7
        additional_layer_depth: 128
      }
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          random_normal_initializer {
            stddev: 0.01
            mean: 0.0
          }
        }
        batch_norm {
          scale: true,
          decay: 0.997,
          epsilon: 0.001,
        }
      }
      override_base_feature_extractor_hyperparams: true
    }

 

/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/python/keras/backend.py:434: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
  warnings.warn('`tf.keras.backend.set_learning_phase` is deprecated and '
2021-02-12 21:23:47.163320: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 1258291200 exceeds 10% of free system memory.
It got killed.
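
Before retrying, here is a sketch of making that change programmatically with the OD API's config_util instead of editing the file by hand; the value 64 and the smaller batch size are just my guesses for getting under the memory limit, not recommendations.

from object_detection.utils import config_util

configs = config_util.get_configs_from_pipeline_file('pipeline.config')
# shrink the FPN channel depth and the batch size to lower peak memory
configs['model'].ssd.feature_extractor.fpn.additional_layer_depth = 64
configs['train_config'].batch_size = 4
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, 'reduced_config')  # writes reduced_config/pipeline.config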

 

+

After reducing the depth it acted like it was working for a bit, then another error popped up ㅋㅋ (presumably because FPN with min_level 3 / max_level 7 produces five feature maps while the anchor generator carried over from the plain ssd_mobilenet_v2 config still expects six):

    ValueError: Number of feature maps is expected to equal the length of `num_anchors_per_location`.

 

So it pretended to work and then fell over again ㅠㅠ this one looks like the ssd_mobilenet_v2_320x320 fine_tune_checkpoint not matching the variables of the FPN model:

AssertionError: Some Python objects were not bound to checkpointed values, likely due to changes in the Python program: [MirroredVariable:{
  0: <tf.Variable 'block_8_depthwise/depthwise_kernel:0' shape=(3, 3, 384, 1) dtype=float32, numpy=

...

WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
W0212 21:37:27.939997 140162079770432 util.py:168] A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.

Posted by 구차니

Generating the model and looking at it, it is big and beautiful(?) enough to deserve the name pyramid.

 

The original image didn't seem to upload properly (maybe something went wrong converting it), so I just shrank it in Chrome and uploaded that, and you can hardly tell the difference.

Is this really something that can be run on a mobile device... ㄷㄷ

 

It stands for Feature Pyramid Network. Its a subnetwork which outputs feature maps of different resolutions. An explanation of FPN using detectron2 as an example is here: https://medium.com/@hirotoschwert/digging-into-detectron-2-part-2-dd6e8b0526e

[링크 : https://stackoverflow.com/questions/63653903]
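
To make the "feature maps of different resolutions" part concrete, a toy top-down FPN in Keras; the input shapes, the three levels and the depth of 128 are made up for illustration and are not taken from the model above.

import tensorflow as tf

d = 128  # common channel depth for every pyramid level
c3 = tf.keras.Input((40, 40, 256))   # backbone map at stride 8
c4 = tf.keras.Input((20, 20, 512))   # stride 16
c5 = tf.keras.Input((10, 10, 1024))  # stride 32
# 1x1 lateral convs bring each level to the same depth, then the coarser level
# is upsampled and added on the top-down path
p5 = tf.keras.layers.Conv2D(d, 1)(c5)
p4 = tf.keras.layers.Add()([tf.keras.layers.Conv2D(d, 1)(c4), tf.keras.layers.UpSampling2D()(p5)])
p3 = tf.keras.layers.Add()([tf.keras.layers.Conv2D(d, 1)(c3), tf.keras.layers.UpSampling2D()(p4)])
# 3x3 convs smooth the merged maps; P3/P4/P5 are the multi-resolution outputs
outputs = [tf.keras.layers.Conv2D(d, 3, padding='same')(p) for p in (p3, p4, p5)]
tf.keras.Model([c3, c4, c5], outputs).summary()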

Posted by 구차니

Isn't there a page documenting the settings of the pipeline.config file?

 

Tearing open an existing one, the top-level elements(?) break down into the five below.

Going by the names alone it is fairly intuitive: model / training / evaluation, plus the input reader settings for training and evaluation (a small parsing sketch follows the list).

 

model {}

train_config {}

train_input_reader {}

eval_config {}

eval_input_reader {}
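
The same structure can be read from code, since pipeline.config is just a TrainEvalPipelineConfig protobuf in text format; a minimal parsing sketch, assuming the object_detection package and its compiled protos are installed:

from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

pipeline = pipeline_pb2.TrainEvalPipelineConfig()
with open('pipeline.config') as f:
    text_format.Merge(f.read(), pipeline)

print(pipeline.model.WhichOneof('model'))            # meta-architecture: 'ssd' or 'faster_rcnn'
print(pipeline.train_config.batch_size)
print(pipeline.eval_input_reader[0].label_map_path)  # eval_input_reader is a repeated field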

 

Grabbing it from the google model garden and grepping through the files, the following kinds of type strings turn up (note this apparently also picks up learning-rate schedules like 'cosine' and 'stepwise', optimizers like 'sgd', and bare backbone names, not just detection feature extractors).

'cosine'
'darknet'
'dilated_resnet'
'embedded_ssd_mobilenet_v1'
'exponential'
'faster_rcnn_inception_resnet_v2'
'faster_rcnn_inception_resnet_v2_keras'
'faster_rcnn_inception_v2'
'faster_rcnn_nas'
'faster_rcnn_resnet101'
'faster_rcnn_resnet101_keras'
'faster_rcnn_resnet152'
'faster_rcnn_resnet152_keras'
'faster_rcnn_resnet50'
'faster_rcnn_resnet50_fpn_keras'
'faster_rcnn_resnet50_keras'
'identity'
'linear'
'lstm_mobilenet_v1'
'lstm_mobilenet_v1_fpn'
'lstm_ssd_interleaved_mobilenet_v2'
'lstm_ssd_mobilenet_v1'
'mobilenet'
'polynomial'
'resnet'
'sgd'
'spinenet'
'ssd_efficientnet-b0_bifpn_keras'
'ssd_efficientnet-b1_bifpn_keras'
'ssd_efficientnet-b2_bifpn_keras'
'ssd_efficientnet-b3_bifpn_keras'
'ssd_efficientnet-b4_bifpn_keras'
'ssd_efficientnet-b5_bifpn_keras'
'ssd_efficientnet-b6_bifpn_keras'
'ssd_inception_v2'
'ssd_inception_v3'
'ssd_mobiledet_cpu'
'ssd_mobiledet_dsp'
'ssd_mobiledet_edgetpu'
'ssd_mobiledet_gpu'
'ssd_mobilenet_edgetpu'
'ssd_mobilenet_v1'
'ssd_mobilenet_v1_fpn'
'ssd_mobilenet_v1_fpn_keras'
'ssd_mobilenet_v1_ppn'
'ssd_mobilenet_v2'
'ssd_mobilenet_v2_fpn'
'ssd_mobilenet_v2_fpn_keras'
'ssd_mobilenet_v2_keras'
'ssd_mobilenet_v2_mnasfpn'
'ssd_mobilenet_v3_large'
'ssd_mobilenet_v3_small'
'ssd_resnet101_v1_fpn'
'ssd_resnet101_v1_fpn_keras'
'ssd_resnet152_v1_fpn_keras'
'ssd_resnet50_v1_fpn'
'ssd_resnet50_v1_fpn_keras'
'stepwise'

 

Below is the result of searching only under research / object_detection:

'embedded_ssd_mobilenet_v1'
'faster_rcnn_inception_resnet_v2'
'faster_rcnn_inception_resnet_v2_keras'
'faster_rcnn_inception_v2'
'faster_rcnn_nas'
'faster_rcnn_resnet101'
'faster_rcnn_resnet101_keras'
'faster_rcnn_resnet152'
'faster_rcnn_resnet152_keras'
'faster_rcnn_resnet50'
'faster_rcnn_resnet50_fpn_keras'
'faster_rcnn_resnet50_keras'
'ssd_efficientnet-b0_bifpn_keras'
'ssd_efficientnet-b1_bifpn_keras'
'ssd_efficientnet-b2_bifpn_keras'
'ssd_efficientnet-b3_bifpn_keras'
'ssd_efficientnet-b4_bifpn_keras'
'ssd_efficientnet-b5_bifpn_keras'
'ssd_efficientnet-b6_bifpn_keras'
'ssd_inception_v2'
'ssd_inception_v3'
'ssd_mobiledet_cpu'
'ssd_mobiledet_dsp'
'ssd_mobiledet_edgetpu'
'ssd_mobiledet_gpu'
'ssd_mobilenet_edgetpu'
'ssd_mobilenet_v1'
'ssd_mobilenet_v1_fpn'
'ssd_mobilenet_v1_fpn_keras'
'ssd_mobilenet_v1_ppn'
'ssd_mobilenet_v2'
'ssd_mobilenet_v2_fpn'
'ssd_mobilenet_v2_fpn_keras'
'ssd_mobilenet_v2_keras'
'ssd_mobilenet_v2_mnasfpn'
'ssd_mobilenet_v3_large'
'ssd_mobilenet_v3_small'
'ssd_resnet101_v1_fpn'
'ssd_resnet101_v1_fpn_keras'
'ssd_resnet152_v1_fpn_keras'
'ssd_resnet50_v1_fpn'
'ssd_resnet50_v1_fpn_keras'

 

Posted by 구차니

 

 

ffmpeg can write straight to the Linux framebuffer through its fbdev output device, as long as the pixel format matches what the framebuffer expects.

[링크 : https://unix.stackexchange.com/questions/342815/how-to-send-ffmpeg-output-to-framebuffer]

 

Pixel formats:
I.... = Supported Input  format for conversion
.O... = Supported Output format for conversion
..H.. = Hardware accelerated format
...P. = Paletted format
....B = Bitstream format
FLAGS NAME            NB_COMPONENTS BITS_PER_PIXEL
-----
IO... yuv420p                3            12
IO... yuyv422                3            16
IO... rgb24                  3            24
IO... bgr24                  3            24
IO... yuv422p                3            16
IO... yuv444p                3            24
IO... yuv410p                3             9
IO... yuv411p                3            12
IO... gray                   1             8

[링크 : https://ffmpeg.org/ffmpeg-devices.html]

Posted by 구차니

 

The train_config block of the ssd_mobilenet_v2_320x320_coco17_tpu-8 pipeline.config (full file at the model garden link below):

train_config {
  batch_size: 10
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
  sync_replicas: true
  optimizer {
    momentum_optimizer {
      learning_rate {
        cosine_decay_learning_rate {
          learning_rate_base: 0.800000011920929
          total_steps: 50000
          warmup_learning_rate: 0.13333000242710114
          warmup_steps: 2000
        }
      }
      momentum_optimizer_value: 0.8999999761581421
    }
    use_moving_average: false
  }
  fine_tune_checkpoint: "ssd_mobilenet_v2_320x320_coco17_tpu-8/checkpoint/ckpt-0"
  num_steps: 50000
  startup_delay_steps: 0.0
  replicas_to_aggregate: 8
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
  fine_tune_checkpoint_type: "detection"
  fine_tune_checkpoint_version: V2
}


And just the optimizer block again:

optimizer {
  momentum_optimizer {
    learning_rate {
      cosine_decay_learning_rate {
        learning_rate_base: 0.800000011920929
        total_steps: 50000
        warmup_learning_rate: 0.13333000242710114
        warmup_steps: 2000
      }
    }
    momentum_optimizer_value: 0.8999999761581421
  }
  use_moving_average: false
}

[링크 : https://github.com/tensorflow/models/blob/master/research/object_detection/configs/tf2/ssd_mobilenet_v2_320x320_coco17_tpu-8.config]

[링크 : https://blog.naver.com/bdh0727/221537759295]
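
For reference, the warmup plus cosine decay schedule above works out to roughly the following; a plain-Python sketch of the formula, not the OD API's actual code (it assumes hold_base_rate_steps is 0):

import math

def cosine_decay_with_warmup(step, base_lr=0.8, total_steps=50000,
                             warmup_lr=0.13333, warmup_steps=2000):
    # linear warmup from warmup_lr up to base_lr, then cosine decay down to 0
    if step < warmup_steps:
        return warmup_lr + (base_lr - warmup_lr) * step / warmup_steps
    if step >= total_steps:
        return 0.0
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))

print(cosine_decay_with_warmup(0), cosine_decay_with_warmup(2000), cosine_decay_with_warmup(26000))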

 

+

ModuleNotFoundError: No module named 'tf_slim'
ModuleNotFoundError: No module named 'pycocotools'
ModuleNotFoundError: No module named 'lvis'

 

With num_train_steps=1, even training on CPU only it finishes quickly since there is just the one step.

$ python3 model_main_tf2.py --pipeline_config_path=ssd_mobilenet_v2_320x320_coco17_tpu-8/pipeline.config --model_dir=trained-checkpoint --alsologtostderr --num_train_steps=1 --sample_1_of_n_eval_examples=1 --num_eval_steps=1
$ find ./ -type f -mmin -10
/trained-checkpoint/train/events.out.tfevents.1612780803.mini2760p.5335.2928.v2

 

$ python3 exporter_main_v2.py --input_type image_tensor --pipeline_config_path ./ssd_mobilenet_v2_320x320_coco17_tpu-8/pipeline.config --trained_checkpoint_dir ./trained-checkpoint --output_directory exported-model/mobile-model
$ find ./ -type f -mmin -10
./exported-model/mobile-model/saved_model/variables/variables.index
./exported-model/mobile-model/saved_model/variables/variables.data-00000-of-00001
./exported-model/mobile-model/saved_model/saved_model.pb
./exported-model/mobile-model/checkpoint/checkpoint
./exported-model/mobile-model/checkpoint/ckpt-0.data-00000-of-00001
./exported-model/mobile-model/checkpoint/ckpt-0.index
./exported-model/mobile-model/pipeline.config
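
A quick smoke test of the exported model; a sketch that assumes the paths from the exporter command above and the usual uint8 input_tensor signature of exported TF2 detection models:

import numpy as np
import tensorflow as tf

model = tf.saved_model.load('exported-model/mobile-model/saved_model')
detect = model.signatures['serving_default']
dummy = tf.constant(np.zeros((1, 320, 320, 3), dtype=np.uint8))  # dummy image batch
outputs = detect(input_tensor=dummy)
print(int(outputs['num_detections'][0]), 'detections')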

 

[링크 : https://github.com/abhimanyu1990/SSD-Mobilenet-Custom-Object-Detector-Model-using-Tensorflow-2] <<

[링크 : https://stackoverflow.com/questions/64510791/tf2-object-detection-api-model-main-tf2-py-validation-loss]

[링크 : https://ichi.pro/ko/tensorflow-gaegche-gamji-gaideu-tensorflow-2-252181752953859]

 

+

[링크 : https://neptune.ai/blog/how-to-train-your-own-object-detector-using-tensorflow-object-detection-api]

 

+

[링크 : https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/auto_examples/plot_object_detection_saved_model.html]

[링크 : https://github.com/tensorflow/models/tree/master/research/object_detection/configs/tf2]

[링크 : https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md]

Posted by 구차니

Setting things up like this seems to be what's called a tee.

gst-launch-1.0 filesrc location=song.ogg ! decodebin ! tee name=t ! queue ! audioconvert ! audioresample ! autoaudiosink t. ! queue ! audioconvert ! goom ! videoconvert ! autovideosink

[링크 : https://gstreamer.freedesktop.org/documentation/coreelements/tee.html]

 

 

+

2025.08.22

You can declare the tee and then refer back to it by name,

! tee name=t t. ! queue

 

or link straight on without referring back to it.

! tee name=t ! queue

 

tee itself isn't an element without a sink pad either, so this working is... normal, I guess(?)

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      ANY
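
For reference, the same two-branch tee works when the pipeline is built from code as well; a minimal sketch, assuming PyGObject and the GStreamer 1.0 Python bindings are installed:

import time
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# one tee, two branches: one plays the test tone, the other just discards the data
pipeline = Gst.parse_launch(
    'audiotestsrc ! tee name=t '
    't. ! queue ! audioconvert ! audioresample ! autoaudiosink '
    't. ! queue ! fakesink')
pipeline.set_state(Gst.State.PLAYING)
time.sleep(3)
pipeline.set_state(Gst.State.NULL)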

Posted by 구차니

Searching around for SELECT_TF_OPS, there it is in a cmake file?!

But FATAL_ERROR.. suspicious...

TODO: Add support.... suspicious.....

 68 # This must be enabled when converting from TF models with SELECT_TF_OPS
 69 # enabled.
 70 # https://www.tensorflow.org/lite/guide/ops_select#converting_the_model
 71 # This is currently not supported.
 72 option(TFLITE_ENABLE_FLEX "Enable SELECT_TF_OPS" OFF) # TODO: Add support
 
197 if(TFLITE_ENABLE_FLEX)
198   message(FATAL_ERROR "TF Lite Flex delegate is currently not supported.")
199   populate_tflite_source_vars("delegates/flex" TFLITE_DELEGATES_FLEX_SRCS)
200   list(APPEND TFLITE_TARGET_DEPENDENCIES
201     absl::inlined_vector
202     absl::optional
203     absl::type_traits
204   )
205 endif()

 

Supposedly you just do the following,

sudo apt-get install cmake
git clone https://github.com/tensorflow/tensorflow.git tensorflow_src
mkdir tflite_build
cd tflite_build
cmake ../tensorflow_src/tensorflow/lite

[링크 : https://www.tensorflow.org/lite/guide/build_cmake]

 

but that didn't work for me; giving the -S option did....

$ cmake -S ../tensorflow/tensorflow/lite

 

Argh, you cheapskates!!!! ㅠㅠㅠ

CMake Error at CMakeLists.txt:198 (message):
  TF Lite Flex delegate is currently not supported.

 

Posted by 구차니

How far do I have to drift before I find an answer ㅠㅠ

 

The site below provides both the pb and the tflite, so I gave it a try.

[링크 : https://tfhub.dev/google/aiy/vision/classifier/birds_V1/1]

 

Checking the pb from the link above gives the output below; there is no 'serve' tag-set, so trying to convert it doesn't work.

$ saved_model_cli show --dir ./ --all
2021-02-02 16:42:12.474893: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:42:12.474941: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

MetaGraphDef with tag-set: '' contains the following SignatureDefs:

signature_def['default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['images'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 224, 224, 3)
        name: hub_input/images:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['default'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 965)
        name: prediction:0
  Method name is:

signature_def['image_classifier']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['images'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 224, 224, 3)
        name: hub_input/images:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['logits'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 965)
        name: prediction:0
  Method name is:
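
For what it's worth, the TF2 converter does let you pick the MetaGraph by tags and signature explicitly; a sketch I have not verified on this model, and whether it accepts the empty tag-set at all is exactly the open question:

import tensorflow as tf

# signature name taken from the saved_model_cli output above;
# tags=set() is an attempt to select the MetaGraph with the empty tag-set
converter = tf.lite.TFLiteConverter.from_saved_model(
    './', signature_keys=['image_classifier'], tags=set())
tflite_model = converter.convert()
open('birds.tflite', 'wb').write(tflite_model)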

 

This one is a different model.. how on earth am I supposed to read this ㅠㅠ

$ saved_model_cli show --dir ./ --all
2021-02-02 16:42:44.443703: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:42:44.443752: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_tensor'] tensor_info:
        dtype: DT_UINT8
        shape: (1, -1, -1, 3)
        name: serving_default_input_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['detection_anchor_indices'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:0
    outputs['detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100, 4)
        name: StatefulPartitionedCall:1
    outputs['detection_classes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:2
    outputs['detection_multiclass_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100, 91)
        name: StatefulPartitionedCall:3
    outputs['detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:4
    outputs['num_detections'] tensor_info:
        dtype: DT_FLOAT
        shape: (1)
        name: StatefulPartitionedCall:5
    outputs['raw_detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 1917, 4)
        name: StatefulPartitionedCall:6
    outputs['raw_detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 1917, 91)
        name: StatefulPartitionedCall:7
  Method name is: tensorflow/serving/predict

Defined Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          input_tensor: TensorSpec(shape=(1, None, None, 3), dtype=tf.uint8, name='input_tensor')

 

Just in case, I tried once more with the select_tf_ops option removed, but as expected it still fails.

The error that stands out is the one below, but I don't know what the -emit-select-tf-ops option is supposed to be passed to.

And "neither a custom op nor a flex op".. what is a flex op, anyway...

tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
        tf.Size {device = ""}

 

 

$ cat c.py
import tensorflow as tf

saved_model_dir="./"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
#converter.target_spec.supported_ops = [
#  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
#  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
#]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

$ python3 c.py
2021-02-02 16:45:08.234068: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:45:08.234112: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-02-02 16:45:10.382490: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-02 16:45:10.382710: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-02-02 16:45:10.382742: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-02-02 16:45:10.382775: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (mini2760p): /proc/driver/nvidia/version does not exist
2021-02-02 16:45:10.383265: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-02 16:45:26.331917: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-02-02 16:45:26.331970: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-02-02 16:45:26.331981: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
2021-02-02 16:45:26.333113: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: ./
2021-02-02 16:45:26.442652: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2021-02-02 16:45:26.442721: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: ./
2021-02-02 16:45:26.442798: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-02 16:45:26.752919: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2021-02-02 16:45:26.824027: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2021-02-02 16:45:26.900734: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2494085000 Hz
2021-02-02 16:45:27.788741: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: ./
2021-02-02 16:45:28.227404: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1894293 microseconds.
2021-02-02 16:45:34.080047: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2021-02-02 16:45:35.369335: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): error: 'tf.Size' op is neither a custom op nor a flex op
error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
        tf.Size {device = ""}
Traceback (most recent call last):
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
    model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
    return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
        tf.Size {device = ""}


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c.py", line 9, in <module>
    tflite_model = converter.convert()
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
    result = _convert_saved_model(**converter_kwargs)
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
    data = toco_convert_protos(
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
    raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
        tf.Size {device = ""}

 

I tried converting with the saved_model_cli command, but without the tensorrt subcommand it complains that arguments are missing,

and with it, it dies on libnvinfer... sigh...

$ saved_model_cli convert --dir=. --output_dir=output --tag_set serving_default tensorrt
2021-02-02 16:52:12.317957: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:52:12.318003: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-02-02 16:52:13.640651: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2021-02-02 16:52:13.640699: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
Aborted (core dumped)

[링크 : https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/saved_model_cli.py]

 

Is -emit-select-tf-ops=true an option that gets passed to flatbuffer_translate?

// RUN: flatbuffer_translate -mlir-to-tflite-flatbuffer %s -emit-select-tf-ops=true -emit-builtin-tflite-ops=false -o - | flatbuffer_to_string - | FileCheck %s

[링크 : http://110.249.209.116/tiansongzhao/QT-Platform/-/blob/ffe4404132bbba3c690232c9f846ac160aa38e65/Software/resource/samples/tensorflow/tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer/flex_exclusively.mlir]

 

MLIR (Multi-Level Intermediate Representation)

[링크 : https://mlir.llvm.org/]

 

So after going round and round, back to square one..

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

[링크 : https://www.tensorflow.org/lite/guide/ops_select#convert_a_model]

 

It says to check whether the TFLite interpreter has the select ops..

Does that mean tensorflow.a then has to be built with that option?

Try using TF select ops. However, you may needs to ensure your TFLite interpreter has these select ops for inference.

  [링크 : https://stackoverflow.com/questions/65661737/]
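
As far as I can tell the answer is yes for a self-built library: a model converted with SELECT_TF_OPS needs an interpreter that links the Flex delegate, so a self-built libtensorflow-lite (or tensorflow.a) would have to include it, while the tf.lite.Interpreter bundled in the full tensorflow pip package already does. A desktop-side sanity check, assuming the conversion above produced converted_model.tflite:

import tensorflow as tf

# the full tensorflow pip package bundles the Flex delegate, so select-TF-ops
# models load here; a standalone tflite_runtime or a self-built static library
# would additionally need the flex library built and linked in
interpreter = tf.lite.Interpreter(model_path='converted_model.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['shape'])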

 

Posted by 구차니