Running the converted model throws an error.

To be precise, it looks like it dies while loading the model, so I opened up the source file in question.

$ ./label_image -i 2012060407491899196_l.jpg -l label -m go.tflite 
Loaded model go.tflite
resolved reporter
ERROR: tensorflow/lite/core/subgraph.cc BytesRequired number of elements overflowed.

Segmentation fault (core dumped)

 

Hmm.. it says the error is at line 593.

 

 563 namespace {
 564 // Multiply two sizes and return true if overflow occurred;
 565 // This is based off tensorflow/overflow.h but is simpler as we already
 566 // have unsigned numbers. It is also generalized to work where sizeof(size_t)
 567 // is not 8. 
 568 TfLiteStatus MultiplyAndCheckOverflow(size_t a, size_t b, size_t* product) {
 569   // Multiplying a * b where a and b are size_t cannot result in overflow in a
 570   // size_t accumulator if both numbers have no non-zero bits in their upper
 571   // half.
 572   constexpr size_t size_t_bits = 8 * sizeof(size_t);
 573   constexpr size_t overflow_upper_half_bit_position = size_t_bits / 2;
 574   *product = a * b;
 575   // If neither integers have non-zero bits past 32 bits can't overflow.
 576   // Otherwise check using slow devision.
 577   if (TFLITE_EXPECT_FALSE((a | b) >> overflow_upper_half_bit_position != 0)) {
 578     if (a != 0 && *product / a != b) return kTfLiteError;
 579   }
 580   return kTfLiteOk;
 581 }  
 582 }  // namespace
 583 
 584 TfLiteStatus Subgraph::BytesRequired(TfLiteType type, const int* dims,
 585                                      size_t dims_size, size_t* bytes) {
 586   TF_LITE_ENSURE(&context_, bytes != nullptr);
 587   size_t count = 1;
 588   for (int k = 0; k < dims_size; k++) {
 589     size_t old_count = count;
 590     TF_LITE_ENSURE_MSG(
 591         &context_,
 592         MultiplyAndCheckOverflow(old_count, dims[k], &count) == kTfLiteOk,
 593         "BytesRequired number of elements overflowed.\n");
 594   }
 595   size_t type_size = 0;
 596   TF_LITE_ENSURE_OK(&context_, GetSizeOfType(&context_, type, &type_size));
 597   TF_LITE_ENSURE_MSG(
 598       &context_, MultiplyAndCheckOverflow(type_size, count, bytes) == kTfLiteOk,
 599       "BytesRequired number of bytes overflowed.\n");
 600   return kTfLiteOk;
 601 }
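In BytesRequired() above, dims[k] is an int that gets passed into MultiplyAndCheckOverflow() as a size_t, so a negative dimension (for example -1 for a dynamic/unknown shape) wraps around to an enormous unsigned value and the multiplication check fires, which would fit a model whose tensor shapes still contain a dynamic dimension. A minimal sketch to check for that, assuming the converted file is go.tflite and the full tensorflow pip package is installed:

import tensorflow as tf

# Dump the model's input shapes; shape_signature keeps -1 for dynamic
# dimensions, which is what makes BytesRequired() overflow as a size_t.
interpreter = tf.lite.Interpreter(model_path="go.tflite")
for detail in interpreter.get_input_details():
    print(detail["name"], detail["shape"], detail.get("shape_signature"))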

Posted by 구차니
Theory/Electrical & Electronics - 2021. 1. 29. 11:13

Even after reading the explanation below, I still don't get how dBFS is used and what the value actually means.

dBFS is a concept that matters especially for digital audio signals. In digital audio the signal is converted into numbers, and those numbers are then turned back into an actual analog signal. Put simply, a digital audio system that can only represent values from 1 to 10 can neither record nor play back an analog signal larger than that range; the moment the value runs past the available digits, clipping occurs that cannot be recovered.

[링크 : http://audio-probe.com/documentation/db란-무엇인가/]
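For reference, dBFS is measured relative to the largest sample value the format can represent: full scale is 0 dBFS and every real signal sits at or below it, so the values come out as zero or negative. A quick sketch for 16-bit PCM (the 32768 full-scale figure is the usual convention, not something from the quoted article):

import math

def dbfs(sample, full_scale=32768):
    # level of a PCM sample relative to full scale, in dBFS
    return 20 * math.log10(abs(sample) / full_scale)

print(dbfs(32768))  #   0.0 dBFS (full scale, the clipping point)
print(dbfs(16384))  #  -6.0 dBFS (half of full scale)
print(dbfs(1024))   # -30.1 dBFS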

Posted by 구차니
Hardware/Network equipment - 2021. 1. 28. 17:14

I remembered that counterfeit chips used to be blocked outright,

but now the device does get recognized, only with a strange device name,

and no serial port ends up being attached.

 

Linux, of course, has no such problem and just works, heh.

Posted by 구차니
Program use/minicom - 2021. 1. 28. 14:56

When minicom is launched normally, the display width doesn't grow and doesn't follow the (PuTTY) window size,

but if you launch it as below, it grows along with the window when you resize it.

 

$ TERM=linux minicom

[링크 : https://unix.stackexchange.com/questions/106644/how-to-change-the-width-of-remote-serial-console]

 

+ Of course, over the serial connection itself the screen doesn't actually get any taller even when the window grows; the console doesn't seem to pick up the new size ㅠㅠ
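Presumably this is because the shell on the far end of the serial line never learns the new window size (there is no SIGWINCH over a raw serial console), so its idea of rows/columns stays where it was; running something like stty rows 50 columns 132, or the resize utility, in the remote shell is the usual way to tell it about the new geometry. Just a guess at the cause, not something verified here.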

Posted by 구차니
Linux API/linux - 2021. 1. 28. 13:59

I'll be needing this soon enough.... (nervous glance)

 

[링크 : https://github.com/torvalds/linux/blob/master/tools/spi/spidev_test.c]

+ 2021.02.08

 

It's easy to be confused here, and the vendor documentation you'll
find isn't necessarily helpful.  The four modes combine two mode bits:

 - CPOL indicates the initial clock polarity.  CPOL=0 means the
   clock starts low, so the first (leading) edge is rising, and
   the second (trailing) edge is falling.  CPOL=1 means the clock
   starts high, so the first (leading) edge is falling.

 - CPHA indicates the clock phase used to sample data; CPHA=0 says
   sample on the leading edge, CPHA=1 means the trailing edge.

   Since the signal needs to stablize before it's sampled, CPHA=0
   implies that its data is written half a clock before the first
   clock edge.  The chipselect may have made it become available.

[링크 : https://www.kernel.org/doc/Documentation/spi/spi-summary]
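Combining the two bits gives the usual four SPI modes, with mode = (CPOL << 1) | CPHA: mode 0 is CPOL=0/CPHA=0, mode 1 is CPOL=0/CPHA=1, mode 2 is CPOL=1/CPHA=0, and mode 3 is CPOL=1/CPHA=1. A minimal userspace sketch with the py-spidev package (the bus/device number, clock rate and the flash-ID command are placeholders, not tied to any particular board):

import spidev  # py-spidev, Python wrapper around /dev/spidevB.C

CPOL = 0  # clock idles low, so the leading edge is rising
CPHA = 0  # sample on the leading edge

spi = spidev.SpiDev()
spi.open(0, 0)                 # /dev/spidev0.0 (placeholder bus/device)
spi.mode = (CPOL << 1) | CPHA  # same encoding as the kernel's SPI_MODE_0..3
spi.max_speed_hz = 1000000     # placeholder clock rate

rx = spi.xfer2([0x9F, 0x00, 0x00, 0x00])  # e.g. read a SPI NOR flash JEDEC ID
print(rx)
spi.close()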

Posted by 구차니

An option called with_select_tf_ops shows up,

but grepping the *.cc files for that define turns up nothing; it only appears on the Python side..

Did I get this wrong? ㅠㅠ

bazel build --config=monolithic --define=with_select_tf_ops=true -c opt //tensorflow/lite:libtensorflowlite.so

[링크 : https://stackoverflow.com/questions/58623937/]
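A guess as to why the grep finds nothing: bazel --define values are normally consumed by config_setting/select() rules in BUILD and .bzl files rather than being handed straight to the C++ preprocessor, so the name would only show up in the build files and the Python build logic, not in the *.cc sources themselves.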

Posted by 구차니

tf.size() isn't in the list of TFLite-compatible operators, though,

[링크 : https://www.tensorflow.org/lite/guide/ops_compatibility]

 

and the tf.size() docs... don't say one way or the other whether TFLite supports it?

[링크 : https://www.tensorflow.org/api_docs/python/tf/size?hl=ko]

 

Doing as instructed, the conversion actually went through?

Probably thanks to something like SELECT_TF_OPS, which tflite_convert doesn't seem to offer?

import tensorflow as tf

saved_model_dir = "./saved_model"  # directory containing saved_model.pb

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

[링크 : https://www.tensorflow.org/lite/guide/ops_select]

  [링크 : https://stackoverflow.com/questions/53824223/what-does-flex-op-mean-in-tensorflow]

 

Anyway, links for just grabbing something that comes with a pb file, without knowing much about what it is:

[링크 : https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2]

[링크 : https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_1024x1024/1]

 

Digging through tflite_convert.py, it looks like an option like the one below has to be given for this to work..?

  if flags.experimental_select_user_tf_ops:
    if lite.OpsSet.SELECT_TF_OPS not in converter.target_spec.supported_ops:
      raise ValueError("--experimental_select_user_tf_ops can only be set if "
                       "--target_ops contains SELECT_TF_OPS.")

 

+

Running the converted file, it dies -_ㅠ overflowed.. that's a scary-looking error, isn't it?

$ ./label_image -i 2012060407491899196_l.jpg -m test.tflite 
INFO: Loaded model go.tflite
INFO: resolved reporter
ERROR: tensorflow/lite/core/subgraph.cc BytesRequired number of elements overflowed.

[링크 : https://stackoverflow.com/questions/63500096/]
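Two things seem to be at play here. A model converted with SELECT_TF_OPS needs the Flex delegate in the runtime, which a plain libtensorflowlite.so / label_image build doesn't normally include; and, judging by the BytesRequired() code quoted earlier, the overflow itself points at a negative (dynamic) dimension in some tensor shape. As a sanity check that sidesteps both, the interpreter in the full tensorflow pip package (which bundles the Flex ops) should be able to run it after pinning the input to a concrete shape. A rough sketch, with the file name and the 320x320 input size as placeholders:

import numpy as np
import tensorflow as tf

# tf.lite.Interpreter from the full tensorflow pip package can run
# SELECT_TF_OPS (Flex) models; a plain TFLite-only build cannot.
interpreter = tf.lite.Interpreter(model_path="test.tflite")

inp = interpreter.get_input_details()[0]
print("input shape:", inp["shape"], inp.get("shape_signature"))

# Pin any dynamic (-1) dimension to a concrete size before allocating.
interpreter.resize_tensor_input(inp["index"], [1, 320, 320, 3])
interpreter.allocate_tensors()

dummy = np.zeros([1, 320, 320, 3], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

for out in interpreter.get_output_details():
    print(out["name"], interpreter.get_tensor(out["index"]).shape)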

Posted by 구차니

Converting is such a pain ㅠㅠ

Running it, it reads a pb or tflite file and visualizes it;

not sure whether to call it similar to TensorBoard or different.. the visible difference seems to be mostly the layout, horizontal (TensorBoard) versus vertical (netron)..

It is handy that the version info is shown right up front, though..

And it feels faster than TensorBoard.

 

[링크 : https://devinlife.com/tensorflow%20lite/tflite-simple-regression/]

[링크 : https://github.com/lutzroeder/netron]
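Besides the desktop app, netron is also on pip and can be launched from Python (or via the netron command); a quick sketch, assuming it was installed with pip install netron:

import netron

# Serves the viewer on a local port and opens it in the browser;
# it accepts .pb, .tflite, .onnx and the other formats netron understands.
netron.start("output.tflite")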

Posted by 구차니

tensorboard

Something complicated shows up, but I have no idea how to read it? ㅠㅠ

[링크 : https://urbangy.tistory.com/38]

[링크 : https://eehoeskrap.tistory.com/322]

 

pb to tflite

Still... failing.. ㅠㅠ

[링크 : https://github.com/tensorflow/tensorflow/issues/46285]

 

$ python3 /home/minimonk/src/tensorflow/tensorflow/lite/python/tflite_convert.py --saved_model_dir=./saved_model --output_file=output.tflite
2021-01-26 19:01:39.223104: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-01-26 19:01:39.223142: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-01-26 19:01:41.278842: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-01-26 19:01:41.279042: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-01-26 19:01:41.279063: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-01-26 19:01:41.279101: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (mini2760p): /proc/driver/nvidia/version does not exist
2021-01-26 19:01:41.279527: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-01-26 19:01:55.229040: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-01-26 19:01:55.229092: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-01-26 19:01:55.229117: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
2021-01-26 19:01:55.230250: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: ./saved_model
2021-01-26 19:01:55.349428: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2021-01-26 19:01:55.349498: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: ./saved_model
2021-01-26 19:01:55.349576: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-01-26 19:01:55.676408: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2021-01-26 19:01:55.748285: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2021-01-26 19:01:55.826459: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2494460000 Hz
2021-01-26 19:01:56.738523: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: ./saved_model
2021-01-26 19:01:57.100034: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1869785 microseconds.
2021-01-26 19:01:58.857435: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2021-01-26 19:01:59.851936: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference_call_func_10155" at "StatefulPartitionedCall@__inference_signature_wrapper_11818") at "StatefulPartitionedCall")): error: 'tf.Size' op is neither a custom op nor a flex op
error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
	tf.Size {device = ""}
Traceback (most recent call last):
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
    model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
    return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference_call_func_10155" at "StatefulPartitionedCall@__inference_signature_wrapper_11818") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
	tf.Size {device = ""}


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/minimonk/src/tensorflow/tensorflow/lite/python/tflite_convert.py", line 698, in <module>
    main()
  File "/home/minimonk/src/tensorflow/tensorflow/lite/python/tflite_convert.py", line 694, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/minimonk/.local/lib/python3.8/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/home/minimonk/.local/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/home/minimonk/src/tensorflow/tensorflow/lite/python/tflite_convert.py", line 677, in run_main
    _convert_tf2_model(tflite_flags)
  File "/home/minimonk/src/tensorflow/tensorflow/lite/python/tflite_convert.py", line 265, in _convert_tf2_model
    tflite_model = converter.convert()
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
    result = _convert_saved_model(**converter_kwargs)
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
    data = toco_convert_protos(
  File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
    raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference_call_func_10155" at "StatefulPartitionedCall@__inference_signature_wrapper_11818") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
	tf.Size {device = ""}

 

[링크 : https://bekusib.tistory.com/210]

[링크 : https://bugloss-chestnut.tistory.com/entry/Tensorflow-keras-h5-pb-tflite-변환-오류python]

[링크 : https://gmground.tistory.com/entry/학습된-모델을-TensorFlow-Lite-모델tflite로-변환하여-Android에서-Object-Classification-해보기]

Posted by 구차니

It suddenly occurred to me to try this, and since people say you need expensive earphones,

I borrowed an expensive(!) pair courtesy of my dad and gave it a go,

but to my ears it just feels like the volume gets louder, so quiet details become easier to hear?

 

So for me the practical effect of turning on the Quad DAC is finer-grained volume steps and a higher maximum volume;

honestly I can't really tell anything about dynamic range and the like.

 

Apparently a standard called EN 50332 caps phone volume at a level that won't damage hearing.

[링크 : http://www.0db.co.kr/REVIEW_0DB/904118]

[링크 : https://www.clien.net/service/amp/board/park/11139522]

 

300 ohms

[링크 : http://danawa.com/product/product.html?code=74145&cateCode=12237350]

 

 

 

Posted by 구차니