
Poking around for SELECT_TF_OPS, and bam — there it is in a cmake file?!

But a FATAL_ERROR.. suspicious...

And "TODO: Add support".... also suspicious.....

 68 # This must be enabled when converting from TF models with SELECT_TF_OPS
 69 # enabled.
 70 # https://www.tensorflow.org/lite/guide/ops_select#converting_the_model
 71 # This is currently not supported.
 72 option(TFLITE_ENABLE_FLEX "Enable SELECT_TF_OPS" OFF) # TODO: Add support
 
197 if(TFLITE_ENABLE_FLEX)
198   message(FATAL_ERROR "TF Lite Flex delegate is currently not supported.")
199   populate_tflite_source_vars("delegates/flex" TFLITE_DELEGATES_FLEX_SRCS)
200   list(APPEND TFLITE_TARGET_DEPENDENCIES
201     absl::inlined_vector
202     absl::optional
203     absl::type_traits
204   )
205 endif()

 

Supposedly you just do the following:

sudo apt-get install cmake
git clone https://github.com/tensorflow/tensorflow.git tensorflow_src
mkdir tflite_build
cd tflite_build
cmake ../tensorflow_src/tensorflow/lite

[Link : https://www.tensorflow.org/lite/guide/build_cmake]

 

That didn't work, so I added the -S option, and then it did...

$ cmake -S ../tensorflow/tensorflow/lite

 

Oh, you cheapskates!!!! ㅠㅠㅠ

CMake Error at CMakeLists.txt:198 (message):
  TF Lite Flex delegate is currently not supported.

 

'프로그램 사용 > yolo_tensorflow' 카테고리의 다른 글

tensorflow pipeline.conf  (0) 2021.02.08
tf object detection COCO  (0) 2021.02.05
saved_model_cli  (0) 2021.02.02
tensorflow bazel build 옵션  (0) 2021.02.02
tensorflow bazel build  (0) 2021.02.01
Posted by 구차니

How far do I have to drift before I find an answer ㅠㅠ

 

The site below provides both a pb and a tflite, so let's give it a try.

[Link : https://tfhub.dev/google/aiy/vision/classifier/birds_V1/1]

 

Inspecting the pb from the link above gives the output below; since there is no tag-set called 'serve', trying to convert it fails.

$ saved_model_cli show --dir ./ --all
2021-02-02 16:42:12.474893: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:42:12.474941: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

MetaGraphDef with tag-set: '' contains the following SignatureDefs:

signature_def['default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['images'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 224, 224, 3)
        name: hub_input/images:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['default'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 965)
        name: prediction:0
  Method name is:

signature_def['image_classifier']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['images'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 224, 224, 3)
        name: hub_input/images:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['logits'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 965)
        name: prediction:0
  Method name is:
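
Just a guess on my part, not verified: tf.lite.TFLiteConverter.from_saved_model() also takes signature_keys and tags arguments, so in theory the empty tag-set and the 'image_classifier' signature shown above could be passed explicitly instead of relying on the 'serve' / 'serving_default' defaults.

import tensorflow as tf

# Hypothetical sketch: pass the tag-set and signature seen in the
# saved_model_cli output explicitly, since this hub model was not
# exported with the usual 'serve' tag.
converter = tf.lite.TFLiteConverter.from_saved_model(
    "./",                                  # directory containing saved_model.pb
    signature_keys=["image_classifier"],   # signature listed above
    tags=[])                               # this model's tag-set is '' (empty)
tflite_model = converter.convert()
open("birds_v1.tflite", "wb").write(tflite_model)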

 

This is a different model.. how on earth am I supposed to read this ㅠㅠ

$ saved_model_cli show --dir ./ --all
2021-02-02 16:42:44.443703: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:42:44.443752: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_tensor'] tensor_info:
        dtype: DT_UINT8
        shape: (1, -1, -1, 3)
        name: serving_default_input_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['detection_anchor_indices'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:0
    outputs['detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100, 4)
        name: StatefulPartitionedCall:1
    outputs['detection_classes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:2
    outputs['detection_multiclass_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100, 91)
        name: StatefulPartitionedCall:3
    outputs['detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:4
    outputs['num_detections'] tensor_info:
        dtype: DT_FLOAT
        shape: (1)
        name: StatefulPartitionedCall:5
    outputs['raw_detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 1917, 4)
        name: StatefulPartitionedCall:6
    outputs['raw_detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 1917, 91)
        name: StatefulPartitionedCall:7
  Method name is: tensorflow/serving/predict

Defined Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          input_tensor: TensorSpec(shape=(1, None, None, 3), dtype=tf.uint8, name='input_tensor')
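
For what it's worth, a sketch of mine (not from the original post) for calling that serving_default signature directly in TF, just to confirm the SavedModel itself works; the 320x320 input size is a placeholder, since the signature accepts (1, -1, -1, 3) uint8:

import tensorflow as tf

# Load the SavedModel and call it through the signature listed above.
model = tf.saved_model.load("./")
detect = model.signatures["serving_default"]
outputs = detect(input_tensor=tf.zeros([1, 320, 320, 3], dtype=tf.uint8))
print({name: tensor.shape for name, tensor in outputs.items()})
# Expect keys like detection_boxes (1, 100, 4) and detection_scores (1, 100).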

 

Just in case, I tried once more with the select_tf_ops option removed, but as expected it fails.

The only error that stands out is the one below, but I have no idea what the -emit-select-tf-ops option is supposed to be passed to.

And "neither a custom op nor a flex op".. what on earth is a flex op...

tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
        tf.Size {device = ""}

 

 

$ cat c.py
import tensorflow as tf

saved_model_dir="./"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
#converter.target_spec.supported_ops = [
#  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
#  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
#]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

$ python3 c.py
2021-02-02 16:45:08.234068: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:45:08.234112: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-02-02 16:45:10.382490: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-02 16:45:10.382710: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-02-02 16:45:10.382742: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-02-02 16:45:10.382775: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (mini2760p): /proc/driver/nvidia/version does not exist
2021-02-02 16:45:10.383265: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-02 16:45:26.331917: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-02-02 16:45:26.331970: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-02-02 16:45:26.331981: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
2021-02-02 16:45:26.333113: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: ./
2021-02-02 16:45:26.442652: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2021-02-02 16:45:26.442721: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: ./
2021-02-02 16:45:26.442798: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-02 16:45:26.752919: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2021-02-02 16:45:26.824027: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2021-02-02 16:45:26.900734: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2494085000 Hz
2021-02-02 16:45:27.788741: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: ./
2021-02-02 16:45:28.227404: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1894293 microseconds.
2021-02-02 16:45:34.080047: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2021-02-02 16:45:35.369335: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): error: 'tf.Size' op is neither a custom op nor a flex op
error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
        tf.Size {device = ""}
Traceback (most recent call last):
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
    model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
    return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
        tf.Size {device = ""}


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c.py", line 9, in <module>
    tflite_model = converter.convert()
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
    result = _convert_saved_model(**converter_kwargs)
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
    data = toco_convert_protos(
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
    raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
        tf.Size {device = ""}

 

Trying to convert with the saved_model_cli command: leave 'tensorrt' out and it complains about missing arguments,

put it in and I get a libnvinfer error... sigh...

$ saved_model_cli convert --dir=. --output_dir=output --tag_set serving_default tensorrt
2021-02-02 16:52:12.317957: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:52:12.318003: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-02-02 16:52:13.640651: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2021-02-02 16:52:13.640699: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
중지됨 (core dumped)

[Link : https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/saved_model_cli.py]

 

Is -emit-select-tf-ops=true an option that gets passed to flatbuffer_translate?

// RUN: flatbuffer_translate -mlir-to-tflite-flatbuffer %s -emit-select-tf-ops=true -emit-builtin-tflite-ops=false -o - | flatbuffer_to_string - | FileCheck %s

[Link : http://110.249.209.116/tiansongzhao/QT-Platform/-/blob/ffe4404132bbba3c690232c9f846ac160aa38e65/Software/resource/samples/tensorflow/tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer/flex_exclusively.mlir]

 

MLIR (Multi-Level Intermediate Representation)

[Link : https://mlir.llvm.org/]

 

So after going round and round, I'm back at square one..

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

[Link : https://www.tensorflow.org/lite/guide/ops_select#convert_a_model]

 

"Make sure the TFLite interpreter has the select ops", it says..

So does that mean tensorflow.a has to be built with that option?

Try using TF select ops. However, you may needs to ensure your TFLite interpreter has these select ops for inference.

[Link : https://stackoverflow.com/questions/65661737/]
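
As a sanity check (my own sketch, assuming the converted_model.tflite produced above): the tf.lite.Interpreter bundled in the full tensorflow pip package should already have the select (flex) ops linked in, so if the model loads and allocates here but not in a self-built libtensorflowlite, it's the runtime that is missing them.

import tensorflow as tf

# Load the flex-dependent model with the Python interpreter from the full
# TF package; a runtime without the select ops would instead fail around
# allocate_tensors() with a "Regular TensorFlow ops are not supported" style error.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])
print([d["name"] for d in interpreter.get_output_details()])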

 

Posted by 구차니

The tensorflow lite build is split up in an oddly confusing way..

For starters, the static library is built with a script and make, without any help from bazel:

./tensorflow/lite/tools/make/download_dependencies.sh
./tensorflow/lite/tools/make/build_aarch64_lib.sh

 

The .so can't be built with those scripts, though; for that you need bazel:

Step 3. Build the ARM64 binaries
C library
bazel build --config=elinux_aarch64 -c opt //tensorflow/lite/c:libtensorflowlite_c.so

C++ library
bazel build --config=elinux_aarch64 -c opt //tensorflow/lite:libtensorflowlite.so

[Link : https://www.tensorflow.org/lite/guide/build_arm64?hl=ko]

 

Cross compilation can be selected via --config=, but I'll have to dig into how to make it use a specific compiler.

The Google page only shows elinux_aarch64, but opening .bazelrc reveals that elinux_armhf also exists (32-bit?).

And I thought monolithic would be the fix.. but it's just an option that builds a single .so.. apparently it has nothing to do with select_tf_ops..

# Embedded Linux options (experimental and only tested with TFLite build yet)
#     elinux:          General Embedded Linux options shared by all flavors.
#     elinux_aarch64:  Embedded Linux options for aarch64 (ARM64) CPU support.
#     elinux_armhf:    Embedded Linux options for armhf (ARMv7) CPU support.

# Other build options:
#     short_logs:       Only log errors during build, skip warnings.
#     verbose_logs:     Show all compiler warnings during build.
#     monolithic:       Build all TF C++ code into a single shared object.
#     dynamic_kernels:  Try to link all kernels dynamically (experimental).
#     libc++:           Link against libc++ instead of stdlibc++

 

 

Posted by 구차니

Isn't there a way to see which options are available when building with bazel...

What I've discovered(?) so far:

 

--config=monolithic

[Link : https://www.tensorflow.org/lite/guide/ops_select]

 

Building the C and C++ libraries:

bazel build --config=elinux_aarch64 -c opt //tensorflow/lite/c:libtensorflowlite_c.so
bazel build --config=elinux_aarch64 -c opt //tensorflow/lite:libtensorflowlite.so

[Link : https://www.tensorflow.org/lite/guide/build_arm64?hl=ko]

 

$ bazel build tensorflow/examples/label_image

[Link : https://chromium.googlesource.com/external/github.com/tensorflow/tensorflow/+/refs/heads/master/tensorflow/examples/label_image/]

 

[Link : github.com/tensorflow/tensorflow/issues/38077]

[Link : github.com/tensorflow/tensorflow/issues/35590]

Posted by 구차니

Is this something different from toco, yet again..

Browsing around, tf.lite.Optimize.DEFAULT is another thing that's new to me,

and set_shape() seems to pin the input shape so it can't change, but I don't know what difference that makes ㅠㅠ

 

!pip install tf-nightly
import tensorflow as tf

## TFLite Conversion
model = tf.saved_model.load("saved_model")
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([1, 300, 300, 3])
tf.saved_model.save(model, "saved_model_updated", signatures={"serving_default":concrete_func})
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir='saved_model_updated', signature_keys=['serving_default'])

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

## TFLite Interpreter to check input shape
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the model on random input data.
input_shape = input_details[0]['shape']
print(input_shape)

[Link : https://stackoverflow.com/questions/63325216/]

[Link : https://github.com/tensorflow/tensorflow/issues/42114#issuecomment-671593386]

 

+

tf.lite.Optimize.DEFAULT, which I had pinned my hopes on, turns out to be a quantization-related option ㅠㅠ

[Link : https://www.tensorflow.org/lite/performance/post_training_quantization]

[Link : https://www.tensorflow.org/lite/performance/model_optimization]

[Link : https://www.tensorflow.org/lite/api_docs/python/tf/lite/Optimize]
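
For reference, a minimal post-training quantization sketch (mine, with a placeholder model path and input shape) just to show what Optimize.DEFAULT is actually about:

import numpy as np
import tensorflow as tf

# Hypothetical calibration data; the shape and sample count are placeholders.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # weight quantization / size optimization
converter.representative_dataset = representative_dataset   # only needed for full-integer quantization
quantized_model = converter.convert()
open("quantized.tflite", "wb").write(quantized_model)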

Posted by 구차니

Running the converted model throws an error.

To be exact, it looks like it dies while loading the model, so I opened up the source file in question.

$ ./label_image -i 2012060407491899196_l.jpg -l label -m go.tflite 
Loaded model go.tflite
resolved reporter
ERROR: tensorflow/lite/core/subgraph.cc BytesRequired number of elements overflowed.

세그멘테이션 오류 (core dumped)

 

Hmm.. it says the error is at line 593:

 

 563 namespace {
 564 // Multiply two sizes and return true if overflow occurred;
 565 // This is based off tensorflow/overflow.h but is simpler as we already
 566 // have unsigned numbers. It is also generalized to work where sizeof(size_t)
 567 // is not 8. 
 568 TfLiteStatus MultiplyAndCheckOverflow(size_t a, size_t b, size_t* product) {
 569   // Multiplying a * b where a and b are size_t cannot result in overflow in a
 570   // size_t accumulator if both numbers have no non-zero bits in their upper
 571   // half.
 572   constexpr size_t size_t_bits = 8 * sizeof(size_t);
 573   constexpr size_t overflow_upper_half_bit_position = size_t_bits / 2;
 574   *product = a * b;
 575   // If neither integers have non-zero bits past 32 bits can't overflow.
 576   // Otherwise check using slow devision.
 577   if (TFLITE_EXPECT_FALSE((a | b) >> overflow_upper_half_bit_position != 0)) {
 578     if (a != 0 && *product / a != b) return kTfLiteError;
 579   }
 580   return kTfLiteOk;
 581 }  
 582 }  // namespace
 583 
 584 TfLiteStatus Subgraph::BytesRequired(TfLiteType type, const int* dims,
 585                                      size_t dims_size, size_t* bytes) {
 586   TF_LITE_ENSURE(&context_, bytes != nullptr);
 587   size_t count = 1;
 588   for (int k = 0; k < dims_size; k++) {
 589     size_t old_count = count;
 590     TF_LITE_ENSURE_MSG(
 591         &context_,
 592         MultiplyAndCheckOverflow(old_count, dims[k], &count) == kTfLiteOk,
 593         "BytesRequired number of elements overflowed.\n");
 594   }
 595   size_t type_size = 0;
 596   TF_LITE_ENSURE_OK(&context_, GetSizeOfType(&context_, type, &type_size));
 597   TF_LITE_ENSURE_MSG(
 598       &context_, MultiplyAndCheckOverflow(type_size, count, bytes) == kTfLiteOk,
 599       "BytesRequired number of bytes overflowed.\n");
 600   return kTfLiteOk;
 601 }
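
Pure guesswork on my part, not verified against this model: the converted graph still has a (1, -1, -1, 3) input, and a -1 dimension fed into that size_t multiplication in BytesRequired() would overflow in exactly this way. In Python the dynamic dimension can be pinned before allocation; label_image (C++) would need the equivalent Interpreter::ResizeInputTensor call, or the shape fixed at conversion time as in the set_shape() snippet further up.

import tensorflow as tf

# Sketch: pin the dynamic (1, -1, -1, 3) input to a concrete size before
# allocating tensors; 300x300 is just a placeholder resolution.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
input_index = interpreter.get_input_details()[0]["index"]
interpreter.resize_tensor_input(input_index, [1, 300, 300, 3])
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])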

Posted by 구차니

Run minicom normally and the width doesn't expand even when the window does (in putty),

but with the following it resizes along with the window.

 

$ TERM=linux minicom

[Link : https://unix.stackexchange.com/questions/106644/how-to-change-the-width-of-remote-serial-console]

 

+ Of course, over a serial connection, even though the screen grows, the output doesn't actually get any taller — not sure the console ever picks up the new size ㅠㅠ

Posted by 구차니

An option called with_select_tf_ops shows up,

but grepping the *.cc files for that define turns up nothing; it only appears on the py side..

Did I bark up the wrong tree? ㅠㅠ

bazel build --config=monolithic --define=with_select_tf_ops=true -c opt //tensorflow/lite:libtensorflowlite.so

[Link : https://stackoverflow.com/questions/58623937/]

Posted by 구차니

tf.size() isn't on the tflite compatible-operators list, granted,

[Link : https://www.tensorflow.org/lite/guide/ops_compatibility]

 

but the tf.size() docs... say nothing about whether it's tflite-supported or not?

[Link : https://www.tensorflow.org/api_docs/python/tf/size?hl=ko]

 

Following the instructions, the conversion actually went through?

Presumably because it sets something like SELECT_TF_OPS, which tflite_convert doesn't have?

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

[Link : https://www.tensorflow.org/lite/guide/ops_select]

[Link : https://stackoverflow.com/questions/53824223/what-does-flex-op-mean-in-tensorflow]

 

For now, not sure exactly what they are, but links where you can just grab a pb:

[Link : https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2]

[Link : https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_1024x1024/1]

 

Digging through tflite-converter.py, do I have to pass options like the ones below for it to work..?

  if flags.experimental_select_user_tf_ops:
    if lite.OpsSet.SELECT_TF_OPS not in converter.target_spec.supported_ops:
      raise ValueError("--experimental_select_user_tf_ops can only be set if "
                       "--target_ops contains SELECT_TF_OPS.")

 

+

Running the converted file, it crashes -_ㅠ "overflowed".. that's a scary-looking error, isn't it?

$ ./label_image -i 2012060407491899196_l.jpg -m test.tflite 
INFO: Loaded model go.tflite
INFO: resolved reporter
ERROR: tensorflow/lite/core/subgraph.cc BytesRequired number of elements overflowed.

[Link : https://stackoverflow.com/questions/63500096/]

Posted by 구차니