Buy apples, not flowers~
Is it thanks(?) to COVID cancelling everything that the Lunar New Year rush has disappeared..
Not a single inquiry has come in all day ㅠㅠ
Giving COCO training a try..
Surprisingly little material out there ㅠㅠ
[Link : https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_320x320/1]
[Link : http://github.com/tensorflow/models/tree/master/research/object_detection]
[Link : https://cocodataset.org/#home]
[Link : https://github.com/tensorflow/models/tree/master/official]
[Link : https://github.com/abhimanyu1990/SSD-Mobilenet-Custom-Object-Detector-Model-using-Tensorflow-2]
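As a starting point (my own minimal sketch, not from any of the links above): the fpnlite model in the first link can be pulled straight from TF Hub and poked at, assuming tensorflow and tensorflow_hub are installed; the output keys match the saved_model_cli dump in a later post.

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# load SSD MobileNet V2 FPNLite 320x320 directly from TF Hub
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_320x320/1")

# the model's __call__ takes a uint8 tensor of shape (1, H, W, 3)
image = np.random.randint(0, 256, size=(1, 320, 320, 3), dtype=np.uint8)
results = detector(tf.constant(image))

# detection_boxes, detection_classes, detection_scores, num_detections, ...
print(sorted(results.keys()))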
If you tweak this example a little,
you could bridge UART1 and UART3 to each other, right?
[Link : https://riptutorial.com/stm32/example/29940/echo-application---hal-library]
UART1 runs at 115200 and UART3 at 9600, yet it seems to work without any real problem?
/* uint8_t to match the HAL_UART_Transmit/Receive signatures */
uint8_t byte;   /* last byte received on UART1 */
uint8_t byte3;  /* last byte received on UART3 */

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART1)
    {
        /* Forward the byte to UART3 with a 100 ms timeout
           (blocking transmit inside the ISR; fine for a demo) */
        HAL_UART_Transmit(&huart3, &byte, 1, 100);
        /* Re-arm one-byte reception in interrupt mode */
        HAL_UART_Receive_IT(&huart1, &byte, 1);
    }
    if (huart->Instance == USART3)
    {
        /* Forward the byte to UART1 with a 100 ms timeout */
        HAL_UART_Transmit(&huart1, &byte3, 1, 100);
        /* Re-arm one-byte reception in interrupt mode */
        HAL_UART_Receive_IT(&huart3, &byte3, 1);
    }
}

int main(void)
{
    /* HAL_Init(), clock setup and MX_USARTx_UART_Init() calls omitted */
    HAL_UART_Receive_IT(&huart1, &byte, 1);
    HAL_UART_Receive_IT(&huart3, &byte3, 1);
    while (1)
    {
    }
}
In STM32CubeIDE, modifying the ioc file regenerates the code,
and of course(?) the headers in the source tree get regenerated too, so my edits kept getting reverted. Rage explosion(!)
After a lot of digging, it turns out that for the UART callbacks
the relevant knob is this part of stm32f1xx_hal_conf.h:
/* ########################### System Configuration ######################### */
/**
* @brief This is the HAL system configuration section
*/
#define VDD_VALUE 3300U /*!< Value of VDD in mv */
#define TICK_INT_PRIORITY 0U /*!< tick interrupt priority (lowest by default) */
#define USE_RTOS 0U
#define PREFETCH_ENABLE 1U
#define USE_HAL_ADC_REGISTER_CALLBACKS 0U /* ADC register callback disabled */
#define USE_HAL_CAN_REGISTER_CALLBACKS 0U /* CAN register callback disabled */
#define USE_HAL_CEC_REGISTER_CALLBACKS 0U /* CEC register callback disabled */
#define USE_HAL_DAC_REGISTER_CALLBACKS 0U /* DAC register callback disabled */
#define USE_HAL_ETH_REGISTER_CALLBACKS 0U /* ETH register callback disabled */
#define USE_HAL_HCD_REGISTER_CALLBACKS 0U /* HCD register callback disabled */
#define USE_HAL_I2C_REGISTER_CALLBACKS 0U /* I2C register callback disabled */
#define USE_HAL_I2S_REGISTER_CALLBACKS 0U /* I2S register callback disabled */
#define USE_HAL_MMC_REGISTER_CALLBACKS 0U /* MMC register callback disabled */
#define USE_HAL_NAND_REGISTER_CALLBACKS 0U /* NAND register callback disabled */
#define USE_HAL_NOR_REGISTER_CALLBACKS 0U /* NOR register callback disabled */
#define USE_HAL_PCCARD_REGISTER_CALLBACKS 0U /* PCCARD register callback disabled */
#define USE_HAL_PCD_REGISTER_CALLBACKS 0U /* PCD register callback disabled */
#define USE_HAL_RTC_REGISTER_CALLBACKS 0U /* RTC register callback disabled */
#define USE_HAL_SD_REGISTER_CALLBACKS 0U /* SD register callback disabled */
#define USE_HAL_SMARTCARD_REGISTER_CALLBACKS 0U /* SMARTCARD register callback disabled */
#define USE_HAL_IRDA_REGISTER_CALLBACKS 0U /* IRDA register callback disabled */
#define USE_HAL_SRAM_REGISTER_CALLBACKS 0U /* SRAM register callback disabled */
#define USE_HAL_SPI_REGISTER_CALLBACKS 0U /* SPI register callback disabled */
#define USE_HAL_TIM_REGISTER_CALLBACKS 0U /* TIM register callback disabled */
#define USE_HAL_UART_REGISTER_CALLBACKS 1U /* UART register callback enabled */
#define USE_HAL_USART_REGISTER_CALLBACKS 0U /* USART register callback disabled */
#define USE_HAL_WWDG_REGISTER_CALLBACKS 0U /* WWDG register callback disabled */
That define is controlled from the STM32CubeIDE ioc: switch Project Manager - Advanced Settings - Register Callbacks
(tucked away in the right-hand corner...) to Enable and it stays put.
No idea whether this link is actually related lol
+
I don't even remember which one I was reading when I spotted it -_-
Maybe I found it while rummaging through the source..
[Link : https://mul-ku.tistory.com/entry/STM32-UART-수신-인터럽트-사용법-및-간단한-예제HAL-DRIVER]
[Link : https://community.st.com/s/question/0D53W000000bRmkSAE/stm32-uart-call-back-function]
Chrome must have been updated recently;
the tabs are rendering strangely -_-
If anything, it feels like less fits on one screen..
Scrambled to look up how to go back to the previous behavior:
chrome://flags
On that page, search for "tab", disable the likely flags, and restart.
[Link : http://www.wetrend.co.kr/board/view?board_name=wit_board&wr_id=1308630]
Code is generated from the ioc file,
and anything inside the sections below survives regeneration,
but everything outside those sections gets wiped, so beware!
/* USER CODE BEGIN 4 */
/* USER CODE END 4 */
Searching around for SELECT_TF_OPS, there it is in the cmake file?!
But FATAL_ERROR.. suspicious...
TODO: Add support.... very suspicious.....
68 # This must be enabled when converting from TF models with SELECT_TF_OPS
69 # enabled.
70 # https://www.tensorflow.org/lite/guide/ops_select#converting_the_model
71 # This is currently not supported.
72 option(TFLITE_ENABLE_FLEX "Enable SELECT_TF_OPS" OFF) # TODO: Add support
197 if(TFLITE_ENABLE_FLEX)
198 message(FATAL_ERROR "TF Lite Flex delegate is currently not supported.")
199 populate_tflite_source_vars("delegates/flex" TFLITE_DELEGATES_FLEX_SRCS)
200 list(APPEND TFLITE_TARGET_DEPENDENCIES
201 absl::inlined_vector
202 absl::optional
203 absl::type_traits
204 )
205 endif()
The guide says the following should do it:
sudo apt-get install cmake
git clone https://github.com/tensorflow/tensorflow.git tensorflow_src
mkdir tflite_build
cd tflite_build
cmake ../tensorflow_src/tensorflow/lite
[Link : https://www.tensorflow.org/lite/guide/build_cmake]
That didn't work; giving it the -S option did....
$ cmake -S ../tensorflow/tensorflow/lite
Hey, you penny-pinching bums!!!! ㅠㅠㅠ
CMake Error at CMakeLists.txt:198 (message):
TF Lite Flex delegate is currently not supported.
How far do I have to drift before I find the answer ㅠㅠ
The site below provides both a pb and a tflite, so let's give it a try
[Link : https://tfhub.dev/google/aiy/vision/classifier/birds_V1/1]
Inspecting the pb from the link above prints the dump below; there is no 'serve' tag-set, so trying to convert it fails (see the sketch after the dump):
$ saved_model_cli show --dir ./ --all
2021-02-02 16:42:12.474893: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:42:12.474941: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
MetaGraphDef with tag-set: '' contains the following SignatureDefs:
signature_def['default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['images'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 224, 224, 3)
name: hub_input/images:0
The given SavedModel SignatureDef contains the following output(s):
outputs['default'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 965)
name: prediction:0
Method name is:
signature_def['image_classifier']:
The given SavedModel SignatureDef contains the following input(s):
inputs['images'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 224, 224, 3)
name: hub_input/images:0
The given SavedModel SignatureDef contains the following output(s):
outputs['logits'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 965)
name: prediction:0
Method name is:
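So the MetaGraphDef tag-set here is '' rather than 'serve', which is why the converter's default (tags={'serve'}) can't find it. A hedged sketch of a workaround: tf.lite.TFLiteConverter.from_saved_model() accepts explicit tags and signature_keys, so in principle the empty tag-set can be matched; whether this TF1-hub-style export then converts cleanly is the part I have not verified.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(
    "./",                                 # directory holding the downloaded pb
    signature_keys=["image_classifier"],  # one of the two SignatureDefs above
    tags=[],                              # match the empty tag-set ''
)
tflite_model = converter.convert()
open("birds_v1.tflite", "wb").write(tflite_model)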
This one is a different model.. how on earth am I supposed to read all this ㅠㅠ
$ saved_model_cli show --dir ./ --all
2021-02-02 16:42:44.443703: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:42:44.443752: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
The given SavedModel SignatureDef contains the following input(s):
The given SavedModel SignatureDef contains the following output(s):
outputs['__saved_model_init_op'] tensor_info:
dtype: DT_INVALID
shape: unknown_rank
name: NoOp
Method name is:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['input_tensor'] tensor_info:
dtype: DT_UINT8
shape: (1, -1, -1, 3)
name: serving_default_input_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['detection_anchor_indices'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100)
name: StatefulPartitionedCall:0
outputs['detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100, 4)
name: StatefulPartitionedCall:1
outputs['detection_classes'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100)
name: StatefulPartitionedCall:2
outputs['detection_multiclass_scores'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100, 91)
name: StatefulPartitionedCall:3
outputs['detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100)
name: StatefulPartitionedCall:4
outputs['num_detections'] tensor_info:
dtype: DT_FLOAT
shape: (1)
name: StatefulPartitionedCall:5
outputs['raw_detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (1, 1917, 4)
name: StatefulPartitionedCall:6
outputs['raw_detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (1, 1917, 91)
name: StatefulPartitionedCall:7
Method name is: tensorflow/serving/predict
Defined Functions:
Function Name: '__call__'
Option #1
Callable with:
Argument #1
input_tensor: TensorSpec(shape=(1, None, None, 3), dtype=tf.uint8, name='input_tensor')
Just in case, I tried once more with the select_tf_ops option removed, but as expected it still fails.
The only error that stands out is the one below, and I can't tell who is supposed to receive the -emit-select-tf-ops option.
And "neither a custom op nor a flex op".. what is a flex op anyway... (apparently a flex op is a regular TensorFlow op executed at runtime through the Select TF ops / flex delegate)
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Size {device = ""}
$ cat c.py
import tensorflow as tf
saved_model_dir="./"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
#converter.target_spec.supported_ops = [
# tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
# tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
#]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
$ python3 c.py
2021-02-02 16:45:08.234068: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:45:08.234112: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-02-02 16:45:10.382490: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-02 16:45:10.382710: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-02-02 16:45:10.382742: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-02-02 16:45:10.382775: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (mini2760p): /proc/driver/nvidia/version does not exist
2021-02-02 16:45:10.383265: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-02 16:45:26.331917: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-02-02 16:45:26.331970: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-02-02 16:45:26.331981: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
2021-02-02 16:45:26.333113: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: ./
2021-02-02 16:45:26.442652: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2021-02-02 16:45:26.442721: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: ./
2021-02-02 16:45:26.442798: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-02 16:45:26.752919: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2021-02-02 16:45:26.824027: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2021-02-02 16:45:26.900734: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2494085000 Hz
2021-02-02 16:45:27.788741: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: ./
2021-02-02 16:45:28.227404: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1894293 microseconds.
2021-02-02 16:45:34.080047: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2021-02-02 16:45:35.369335: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): error: 'tf.Size' op is neither a custom op nor a flex op
error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Size {device = ""}
Traceback (most recent call last):
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Size {device = ""}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c.py", line 9, in <module>
tflite_model = converter.convert()
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
result = _convert_saved_model(**converter_kwargs)
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
data = toco_convert_protos(
File "/home/minimonk/.local/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField_1/Size@__inference___call___21591" at "StatefulPartitionedCall@__inference_signature_wrapper_23250") at "StatefulPartitionedCall")): 'tf.Size' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Size {device = ""}
Trying to convert with the saved_model_cli command: leave out the tensorrt argument and it complains about missing arguments,
put it in and it dies with a libnvinfer error... sigh...
$ saved_model_cli convert --dir=. --output_dir=output --tag_set serving_default tensorrt
2021-02-02 16:52:12.317957: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-02-02 16:52:12.318003: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-02-02 16:52:13.640651: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2021-02-02 16:52:13.640699: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
Aborted (core dumped)
[Link : https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/saved_model_cli.py]
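As far as I can tell from that source, convert only has the tensorrt subcommand, i.e. it is just the TF-TRT converter, so libnvinfer is unavoidable. The same converter is reachable from Python (a sketch; it should fail the same way unless TensorRT is actually installed):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# TF-TRT conversion of the SavedModel in the current directory
converter = trt.TrtGraphConverterV2(input_saved_model_dir="./")
converter.convert()        # needs libnvinfer, same as the CLI
converter.save("output")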
So is -emit-select-tf-ops=true an option that gets passed to flatbuffer_translate?
// RUN: flatbuffer_translate -mlir-to-tflite-flatbuffer %s -emit-select-tf-ops=true -emit-builtin-tflite-ops=false -o - | flatbuffer_to_string - | FileCheck %s
MLIR (Multi-Level Intermediate Representation)
[링크 : https://mlir.llvm.org/]
So after all this wandering, back to square one?..
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
[Link : https://www.tensorflow.org/lite/guide/ops_select#convert_a_model]
"Check whether your TFLite interpreter has the select ops"..
So does that mean tensorflow.a has to be built with that option too?
"Try using TF select ops. However, you may need to ensure your TFLite interpreter has these select ops for inference."
[Link : https://stackoverflow.com/questions/65661737/]
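One way to check, I think: the full tensorflow pip package links the flex delegate into its Python tf.lite.Interpreter, so a model converted with SELECT_TF_OPS should at least load and run there, even though a bare libtensorflowlite build rejects it. A minimal smoke test:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
inp = interpreter.get_input_details()[0]
# the SSD signature above is (1, -1, -1, 3) uint8, so pin a concrete size
interpreter.resize_tensor_input(inp["index"], [1, 320, 320, 3])
interpreter.allocate_tensors()
interpreter.set_tensor(inp["index"], np.zeros((1, 320, 320, 3), dtype=np.uint8))
interpreter.invoke()
print([o["name"] for o in interpreter.get_output_details()])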
The tensorflow lite build is oddly split..
The static library is built with scripts and make, no bazel required:
./tensorflow/lite/tools/make/download_dependencies.sh
./tensorflow/lite/tools/make/build_aarch64_lib.sh
The .so, on the other hand, cannot be built with those scripts; it needs bazel:
Step 3. Build the ARM64 binaries
C library
bazel build --config=elinux_aarch64 -c opt //tensorflow/lite/c:libtensorflowlite_c.so
C++ library
bazel build --config=elinux_aarch64 -c opt //tensorflow/lite:libtensorflowlite.so
[Link : https://www.tensorflow.org/lite/guide/build_arm64?hl=ko]
Cross compiling can be selected via --config=, but I still need to find out how to point it at a specific compiler.
Google's page only mentions elinux_aarch64, but opening .bazelrc there is also an elinux_armhf (32-bit?).
And I thought monolithic would be the answer, but.. it is just the option that produces a single .so; it has nothing to do with select_tf_ops, apparently..
# Embedded Linux options (experimental and only tested with TFLite build yet)
# elinux: General Embedded Linux options shared by all flavors.
# elinux_aarch64: Embedded Linux options for aarch64 (ARM64) CPU support.
# elinux_armhf: Embedded Linux options for armhf (ARMv7) CPU support.
# Other build options:
# short_logs: Only log errors during build, skip warnings.
# verbose_logs: Show all compiler warnings during build.
# monolithic: Build all TF C++ code into a single shared object.
# dynamic_kernels: Try to link all kernels dynamically (experimental).
# libc++: Link against libc++ instead of stdlibc++
IWDG - Independent Watchdog
Note: The RTC, the IWDG, and the corresponding clock sources are not stopped by entering Stop or Standby mode.
Clock-related
High/Low speed External/Internal
HSE = high-speed external clock signal
HSI = high-speed internal clock signal
LSI = low-speed internal clock signal
LSE = low-speed external clock signal
APB1 is capped at a 36 MHz clock, APB2 at 72 MHz;
the timers, though, can take 72 MHz on either bus.
The manual doesn't say which USART is which here, but they can be configured up to 4.5 Mbit/s or 2.25 Mbit/s.
Wait, does anyone actually run a USART at Mbps rates? ㄷㄷ
Oh right.. there was one.. -_-
2021/01/08 - [embeded] - orange pi r1+
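Those two ceilings line up with the F1 USART baud equation (back-of-the-envelope, assuming 16x oversampling and the minimum USARTDIV of 1):

baud_max = fPCLK / 16
USART1 (APB2, 72 MHz): 72 MHz / 16 = 4.5 Mbit/s
USART2/3 (APB1, 36 MHz): 36 MHz / 16 = 2.25 Mbit/s

So the 4.5 Mbit/s figure would be USART1, and the 2.25 Mbit/s one the USARTs hanging off APB1.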