tensorboard
It spits out something complicated and I can't figure out how to read it? ㅠㅠ
[link : https://urbangy.tistory.com/38]
[link : https://eehoeskrap.tistory.com/322]
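For the record, a minimal way to bring the dashboard up (the `./logs` path is an assumption; point `--logdir` at wherever the training run wrote its event files):

```shell
# launch TensorBoard against a log directory (path is hypothetical)
tensorboard --logdir=./logs --port=6006
# then open http://localhost:6006 in a browser;
# the Scalars tab shows loss/metric curves, Graphs shows the model graph
```

If the page looks empty, the usual cause is pointing `--logdir` at the wrong directory level, since TensorBoard scans subdirectories for event files.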
pb to tflite
Yeah... still failing.. ㅠㅠ
[link : https://github.com/tensorflow/tensorflow/issues/46285]
$ python3 /home/minimonk/src/tensorflow/tensorflow/lite/python/tflite_convert.py --saved_model_dir=./saved_model --output_file=output.tflite
2021-01-26 19:01:39.223104: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-01-26 19:01:39.223142: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-01-26 19:01:41.278842: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-01-26 19:01:41.279042: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-01-26 19:01:41.279063: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-01-26 19:01:41.279101: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (mini2760p): /proc/driver/nvidia/version does not exist
2021-01-26 19:01:41.279527: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-01-26 19:01:55.229040: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-01-26 19:01:55.229092: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-01-26 19:01:55.229117: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
2021-01-26 19:01:55.230250: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: ./saved_model
2021-01-26 19:01:55.349428: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2021-01-26 19:01:55.349498: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: ./saved_model
2021-01-26 19:01:55.349576: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-01-26 19:01:55.676408: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2021-01-26 19:01:55.748285: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2021-01-26 19:01:55.826459: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2494460000 Hz
2021-01-26 19:01:56.738523: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: ./saved_model
2021-01-26 19:01:57.100034: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1869785 microseconds.
2021-01-26 19:01:58.857435: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2021-01-26 19:01:59.851936: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference_call_func_10155" at "StatefulPartitionedCall@__inference_signature_wrapper_11818") at "StatefulPartitionedCall")): error: 'tf.Size' op is neither a custom op nor a flex op
error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Size {device = ""}
Traceback (most recent call last):
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference_call_func_10155" at "StatefulPartitionedCall@__inference_signature_wrapper_11818") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Size {device = ""}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/minimonk/src/tensorflow/tensorflow/lite/python/tflite_convert.py", line 698, in <module>
main()
File "/home/minimonk/src/tensorflow/tensorflow/lite/python/tflite_convert.py", line 694, in main
app.run(main=run_main, argv=sys.argv[:1])
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/minimonk/.local/lib/python3.8/site-packages/absl/app.py", line 303, in run
_run_main(main, args)
File "/home/minimonk/.local/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "/home/minimonk/src/tensorflow/tensorflow/lite/python/tflite_convert.py", line 677, in run_main
_convert_tf2_model(tflite_flags)
File "/home/minimonk/src/tensorflow/tensorflow/lite/python/tflite_convert.py", line 265, in _convert_tf2_model
tflite_model = converter.convert()
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
result = _convert_saved_model(**converter_kwargs)
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
data = toco_convert_protos(
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference_call_func_10155" at "StatefulPartitionedCall@__inference_signature_wrapper_11818") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Size {device = ""}
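The error itself points at the fix: `tf.Size` has no TFLite builtin kernel, so the converter has to be allowed to fall back to the flex (Select TF ops) runtime. In the Python API that means adding `SELECT_TF_OPS` to `target_spec.supported_ops`. A minimal sketch, using a tiny Keras model as a stand-in for the actual detection SavedModel (for the real case you'd use `tf.lite.TFLiteConverter.from_saved_model("./saved_model")` instead):

```python
import tensorflow as tf

# tiny stand-in model; the real input is the exported detection SavedModel
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# allow TF ops (such as tf.Size) that have no TFLite builtin equivalent
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # normal TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to the flex runtime
]
tflite_model = converter.convert()

with open("output.tflite", "wb") as f:
    f.write(tflite_model)
```

Note that a model converted this way needs the Select TF ops delegate linked into the interpreter on the device side, which makes the runtime noticeably larger.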
[link : https://bekusib.tistory.com/210]
[link : https://bugloss-chestnut.tistory.com/entry/Tensorflow-keras-h5-pb-tflite-변환-오류python]
[link : https://gmground.tistory.com/entry/학습된-모델을-TensorFlow-Lite-모델tflite로-변환하여-Android에서-Object-Classification-해보기]