13,035 posts filed under 'Miscellaneous'
- 2021.10.16 Knocked out
- 2021.10.15 Is it because the AC wasn't on!? (Flex Alpha)
- 2021.10.14 imx 8m plus NPU error tracing 5
- 2021.10.14 License plate recognition (tesseract)
- 2021.10.13 i.MX 8M PLUS tensorflow NPU
- 2021.10.13 i.MX 8M PLUS
- 2021.10.13 2.7.0-rc with opencl
- 2021.10.12 tf release 2.7.0-rc
- 2021.10.11 tflite delegate
- 2021.10.10 MPPT controller
Builds seemed to be dying more often than usual today, so I sat watching syslog for a while, and it's spewing messages like this.
Oct 15 13:50:43 flex kernel: [ 7509.905180] mce: CPU0: Core temperature above threshold, cpu clock throttled (total events = 23935)
Oct 15 13:50:43 flex kernel: [ 7509.905182] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 26108)
Oct 15 13:50:43 flex kernel: [ 7509.905183] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 26108)
Oct 15 13:50:43 flex kernel: [ 7509.905183] mce: CPU4: Core temperature above threshold, cpu clock throttled (total events = 23935)
Oct 15 13:50:43 flex kernel: [ 7509.905184] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 26108)
Oct 15 13:50:43 flex kernel: [ 7509.905185] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 26108)
Oct 15 13:50:43 flex kernel: [ 7509.905185] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 26108)
Oct 15 13:50:43 flex kernel: [ 7509.905186] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 26108)
Oct 15 13:50:43 flex kernel: [ 7509.905212] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 26108)
Oct 15 13:50:43 flex kernel: [ 7509.905213] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 26108)
Oct 15 13:50:43 flex kernel: [ 7509.906081] mce: CPU4: Core temperature/speed normal
Oct 15 13:50:43 flex kernel: [ 7509.906081] mce: CPU0: Core temperature/speed normal
Oct 15 13:50:43 flex kernel: [ 7509.906082] mce: CPU6: Package temperature/speed normal
Oct 15 13:50:43 flex kernel: [ 7509.906083] mce: CPU3: Package temperature/speed normal
Oct 15 13:50:43 flex kernel: [ 7509.906083] mce: CPU2: Package temperature/speed normal
Oct 15 13:50:43 flex kernel: [ 7509.906084] mce: CPU7: Package temperature/speed normal
Oct 15 13:50:43 flex kernel: [ 7509.906084] mce: CPU0: Package temperature/speed normal
Oct 15 13:50:43 flex kernel: [ 7509.906085] mce: CPU4: Package temperature/speed normal
Oct 15 13:50:43 flex kernel: [ 7509.906114] mce: CPU5: Package temperature/speed normal
Oct 15 13:50:43 flex kernel: [ 7509.906115] mce: CPU1: Package temperature/speed normal
Watching how high it climbs.. surely 90 isn't the sensor's measurement ceiling, right?
$ sensors
coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +90.0°C  (high = +100.0°C, crit = +100.0°C)
Core 0:        +90.0°C  (high = +100.0°C, crit = +100.0°C)
Core 1:        +87.0°C  (high = +100.0°C, crit = +100.0°C)
Core 2:        +87.0°C  (high = +100.0°C, crit = +100.0°C)
Core 3:        +86.0°C  (high = +100.0°C, crit = +100.0°C)
+
For reference, idle:
$ sensors
coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +43.0°C  (high = +100.0°C, crit = +100.0°C)
Core 0:        +42.0°C  (high = +100.0°C, crit = +100.0°C)
Core 1:        +42.0°C  (high = +100.0°C, crit = +100.0°C)
Core 2:        +41.0°C  (high = +100.0°C, crit = +100.0°C)
Core 3:        +42.0°C  (high = +100.0°C, crit = +100.0°C)
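To keep an eye on it while a build runs, something like this does the job (a trivial sketch; `watch` and lm-sensors assumed installed):

$ watch -n 1 sensors coretemp-isa-0000    # refresh the coretemp readings every second
$ dmesg | grep -c 'clock throttled'       # count throttle events straight from the kernel log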
Out of boredom(?) I searched around, and people say don't buy the Samsung Flex i7.
The Flex and the Flex Alpha are basically the same machine, so.. is this where that advice comes from?
Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz
Digging for which library the message comes from: it's not in the tensorflow source,
and searching the file system turns it up in /usr/lib/libovxlib.so.1.1.0.
Whew.. giving up on the trace for now.
lrwxrwxrwx 1 root root      18 Mar  9  2018 /usr/lib/libovxlib.so.1 -> libovxlib.so.1.1.0
lrwxrwxrwx 1 root root      18 Mar  9  2018 /usr/lib/libovxlib.so.1.1 -> libovxlib.so.1.1.0
-rwxr-xr-x 1 root root 3705768 Mar  9  2018 /usr/lib/libovxlib.so.1.1.0
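For the record, the search itself is just grepping the installed binaries for the string from the log (a sketch; I'm assuming the function name is embedded in the .so, which the runtime log below suggests):

$ grep -rl "vsi_nn_op_eltwise_setup" /usr/lib/    # list every shared object containing the string
/usr/lib/libovxlib.so.1.1.0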
INFO: Loaded model my_model.tflite
INFO: resolved reporter
INFO: Created TensorFlow Lite delegate for NNAPI.
INFO: Use NNAPI acceleration.
WARNING: Operator RESIZE_BILINEAR (v3) refused by NNAPI delegate: Operator refused due performance reasons.
INFO: Applied NNAPI delegate.
W [vsi_nn_op_eltwise_setup:178]Output size mismatch, expect 917504, but got 50176
E [setup_node:448]Setup node[52] PRELU fail
W [vsi_nn_op_eltwise_setup:178]Output size mismatch, expect 917504, but got 50176
E [setup_node:448]Setup node[52] PRELU fail
ERROR: NN API returned error ANEURALNETWORKS_BAD_DATA at line 4151 while running computation.
ERROR: Node number 56 (TfLiteNnapiDelegate) failed to invoke.
ERROR: Failed to invoke tflite!
+
coco ssd mobilenet v1 - object detection, on the other hand, works fine.
# time ./label_image -m 1.tflite -a 1
INFO: Loaded model 1.tflite
INFO: resolved reporter
INFO: Created TensorFlow Lite delegate for NNAPI.
INFO: Use NNAPI acceleration.
WARNING: Operator CUSTOM (v1) refused by NNAPI delegate: Unsupported operation type.
INFO: Applied NNAPI delegate.
INFO: invoked
INFO: average time: 13.178 ms
INFO: 0.00389769: 3 great white shark
INFO: 0.0038741: 2 goldfish

real    0m5.722s
user    0m5.573s
sys     0m0.136s
[link : https://www.tensorflow.org/lite/examples/object_detection/overview]
The first package is the engine itself(?), and the ones after it are the Korean language recognition data packages.
$ sudo apt install tesseract-ocr tesseract-ocr-kor tesseract-ocr-script-hang tesseract-ocr-script-hang-vert
Looking at the help, which doesn't actually help(huh?).
On Linux, if you pass stdout as the outputbase, the recognized text is printed to the console.
$ tesseract --help
Usage:
  tesseract --help | --help-extra | --version
  tesseract --list-langs
  tesseract imagename outputbase [options...] [configfile...]

OCR options:
  -l LANG[+LANG]        Specify language(s) used for OCR.
NOTE: These options must occur before any configfile.

Single options:
  --help                Show this help message.
  --help-extra          Show extra help for advanced users.
  --version             Show version information.
  --list-langs          List available languages for tesseract engine.

$ tesseract --list-langs
List of available languages (5):
Hangul
Hangul_vert
eng
kor
osd
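So a plate image goes through like this (a minimal sketch; plate.png is a made-up input file):

$ tesseract plate.png stdout -l kor    # OCR with the Korean traineddata, printing to the console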
LSTM training
[link : https://hongjong.tistory.com/19]
[link : https://diyworld.tistory.com/114]
[link : https://davelogs.tistory.com/70]
I grabbed LF_v5.10.52-2.1.0_images_IMX8MPEVK.zip, burned the image to an SD card,
and after booting into it, the paths are a bit different from the docs.
tensorflow is at version 2.5.0 then.. does that mean it's usable?
# cd /usr/bin/tensorflow-lite-2.5.0/examples
# ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite
STARTING!
Log parameter values verbosely: [0]
Graph: [mobilenet_v1_1.0_224_quant.tflite]
Use VXdelegate : [0]
Loaded model mobilenet_v1_1.0_224_quant.tflite
The input model file size (MB): 4.27635
Initialized session in 1.807ms.
Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
count=4 first=167959 curr=162606 min=162606 max=167959 avg=164253 std=2159
Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=50 first=162727 curr=163003 min=162308 max=163308 avg=162758 std=190
Inference timings in us: Init: 1807, First inference: 167959, Warmup (avg): 164253, Inference (avg): 162758
Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
Peak memory footprint (MB): init=2.51562 overall=8.64062

# ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --use_nnapi=true
STARTING!
Log parameter values verbosely: [0]
Graph: [mobilenet_v1_1.0_224_quant.tflite]
Use NNAPI: [1]
NNAPI accelerators available: [vsi-npu]
Use VXdelegate : [0]
Loaded model mobilenet_v1_1.0_224_quant.tflite
INFO: Created TensorFlow Lite delegate for NNAPI.
Explicitly applied NNAPI delegate, and the model graph will be completely executed by the delegate.
The input model file size (MB): 4.27635
Initialized session in 4.183ms.
Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
count=1 curr=4649626
Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=360 first=2665 curr=2733 min=2632 max=2783 avg=2715.67 std=16
Inference timings in us: Init: 4183, First inference: 4649626, Warmup (avg): 4.64963e+06, Inference (avg): 2715.67
Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
Peak memory footprint (MB): init=2.59766 overall=30.1836
Trying it with label_image.. I don't know what the warm-up actually is, but while the invoke() call itself is short,
whatever runs before it takes so long that the whole run is 4+ seconds slower than CPU-only.
# time ./label_image -w 1
INFO: Loaded model ./mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
INFO: invoked
INFO: average time: 43.865 ms
INFO: 0.764706: 653 military uniform
INFO: 0.121569: 907 Windsor tie
INFO: 0.0156863: 458 bow tie
INFO: 0.0117647: 466 bulletproof vest
INFO: 0.00784314: 835 suit

real    0m0.142s
user    0m0.385s
sys     0m0.020s

# time ./label_image -w 1 -a 1
INFO: Loaded model ./mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
INFO: Created TensorFlow Lite delegate for NNAPI.
INFO: Use NNAPI acceleration.
INFO: Applied NNAPI delegate.
INFO: invoked
INFO: average time: 2.797 ms
INFO: 0.768627: 653 military uniform
INFO: 0.105882: 907 Windsor tie
INFO: 0.0196078: 458 bow tie
INFO: 0.0117647: 466 bulletproof vest
INFO: 0.00784314: 835 suit

real    0m4.748s
user    0m4.648s
sys     0m0.092s
The commands below seem to come from an older revision of the document, written against version 2.1.0.
$ cd /usr/bin/tensorflow-lite-2.1.0/examples
$ ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite
$ ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --use_nnapi=true
$ ./lbl_img -i grace_hopper.bmp -l labels.txt -w 1
$ ./lbl_img -i grace_hopper.bmp -l labels.txt -w 1 -a 1
[link : https://www.mouser.com/pdfDocs/AN12964.pdf]
+
Damn it(?), this doesn't match the help at all?!
# ./label_image --help
ERROR: usage: ./label_image <flags>
Flags:
  --num_threads=1                int32   optional  number of threads used for inference on CPU.
  --max_delegated_partitions=0   int32   optional  Max number of partitions to be delegated.
  --min_nodes_per_partition=0    int32   optional  The minimal number of TFLite graph nodes of a partition that has to be reached for it to be delegated. A negative value or 0 means to use the default choice of each delegate.
  --num_threads=1                int32   optional  number of threads used for inference on CPU.
  --max_delegated_partitions=0   int32   optional  Max number of partitions to be delegated.
  --min_nodes_per_partition=0    int32   optional  The minimal number of TFLite graph nodes of a partition that has to be reached for it to be delegated. A negative value or 0 means to use the default choice of each delegate.
  --use_xnnpack=false            bool    optional  use XNNPack
  --use_nnapi=false              bool    optional  use nnapi delegate api
  --nnapi_execution_preference=  string  optional  execution preference for nnapi delegate. Should be one of the following: fast_single_answer, sustained_speed, low_power, undefined
  --nnapi_execution_priority=    string  optional  The model execution priority in nnapi, and it should be one of the following: default, low, medium and high. This requires Android 11+.
  --nnapi_accelerator_name=      string  optional  the name of the nnapi accelerator to use (requires Android Q+)
  --disable_nnapi_cpu=true       bool    optional  Disable the NNAPI CPU device
  --nnapi_allow_fp16=false       bool    optional  Allow fp32 computation to be run in fp16
static struct option long_options[] = {
    {"accelerated", required_argument, nullptr, 'a'},
    {"allow_fp16", required_argument, nullptr, 'f'},
    {"count", required_argument, nullptr, 'c'},
    {"verbose", required_argument, nullptr, 'v'},
    {"image", required_argument, nullptr, 'i'},
    {"labels", required_argument, nullptr, 'l'},
    {"tflite_model", required_argument, nullptr, 'm'},
    {"profiling", required_argument, nullptr, 'p'},
    {"threads", required_argument, nullptr, 't'},
    {"input_mean", required_argument, nullptr, 'b'},
    {"input_std", required_argument, nullptr, 's'},
    {"num_results", required_argument, nullptr, 'r'},
    {"max_profiling_buffer_entries", required_argument, nullptr, 'e'},
    {"warmup_runs", required_argument, nullptr, 'w'},
    {"gl_backend", required_argument, nullptr, 'g'},
    {"hexagon_delegate", required_argument, nullptr, 'j'},
    {"xnnpack_delegate", required_argument, nullptr, 'x'},
    {nullptr, 0, nullptr, 0}};
+
Then.. how was the library built so that this becomes possible?
# ldd label_image
        linux-vdso.so.1 (0x0000ffffa0989000)
        libtensorflow-lite.so.2.5.0 => /usr/lib/libtensorflow-lite.so.2.5.0 (0x0000ffffa05ab000)
        libm.so.6 => /lib/libm.so.6 (0x0000ffffa0501000)
        libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x0000ffffa032a000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x0000ffffa0305000)
        libc.so.6 => /lib/libc.so.6 (0x0000ffffa0190000)
        /lib/ld-linux-aarch64.so.1 (0x0000ffffa0957000)
        libtim-vx.so => /usr/lib/libtim-vx.so (0x0000ffffa00c7000)
        libdl.so.2 => /lib/libdl.so.2 (0x0000ffffa00b1000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x0000ffffa0082000)
        librt.so.1 => /lib/librt.so.1 (0x0000ffffa006a000)
        libovxlib.so.1.1.0 => /usr/lib/libovxlib.so.1.1.0 (0x0000ffff9fcd1000)
        libOpenVX.so.1 => /usr/lib/libOpenVX.so.1 (0x0000ffff9fa7e000)
        libVSC.so => /usr/lib/libVSC.so (0x0000ffff9eae2000)
        libGAL.so => /usr/lib/libGAL.so (0x0000ffff9e91b000)
        libArchModelSw.so => /usr/lib/libArchModelSw.so (0x0000ffff9e8f3000)
        libNNArchPerf.so => /usr/lib/libNNArchPerf.so (0x0000ffff9e8d0000)
+
The PRELU operator itself seems to be supported, so is the output size mismatch the real cause?
INFO: Use NNAPI acceleration.
WARNING: Operator RESIZE_BILINEAR (v3) refused by NNAPI delegate: Operator refused due performance reasons.
INFO: Applied NNAPI delegate.
W [vsi_nn_op_eltwise_setup:178]Output size mismatch, expect 917504, but got 50176
E [setup_node:448]Setup node[52] PRELU fail
W [vsi_nn_op_eltwise_setup:178]Output size mismatch, expect 917504, but got 50176
E [setup_node:448]Setup node[52] PRELU fail
ERROR: NN API returned error ANEURALNETWORKS_BAD_DATA at line 4151 while running computation.
ERROR: Node number 56 (TfLiteNnapiDelegate) failed to invoke.
ERROR: Failed to invoke tflite!
[link : https://www.nxp.com/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf]
+
In the code, warm-up is just a single invoke() call, and that call takes about 4649 ms;
run once without any warm-up and it takes roughly the same amount of time.
root@imx8mpevk:/usr/bin/tensorflow-lite-2.5.0/examples# time ./label_image -a 1 -w 0 -p 1 -c 1
INFO: Loaded model ./mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
INFO: Created TensorFlow Lite delegate for NNAPI.
INFO: Use NNAPI acceleration.
INFO: Applied NNAPI delegate.
INFO: invoked
INFO: average time: 4649.78 ms
INFO: 0.768627: 653 military uniform
INFO: 0.105882: 907 Windsor tie
INFO: 0.0196078: 458 bow tie
INFO: 0.0117647: 466 bulletproof vest
INFO: 0.00784314: 835 suit

real    0m4.757s
user    0m4.655s
sys     0m0.096s

root@imx8mpevk:/usr/bin/tensorflow-lite-2.5.0/examples# time ./label_image -a 1 -w 0 -p 1 -c 4
INFO: Loaded model ./mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
INFO: Created TensorFlow Lite delegate for NNAPI.
INFO: Use NNAPI acceleration.
INFO: Applied NNAPI delegate.
INFO: invoked
INFO: average time: 1164.36 ms
INFO: 0.768627: 653 military uniform
INFO: 0.105882: 907 Windsor tie
INFO: 0.0196078: 458 bow tie
INFO: 0.0117647: 466 bulletproof vest
INFO: 0.00784314: 835 suit

real    0m4.768s
user    0m4.663s
sys     0m0.092s

root@imx8mpevk:/usr/bin/tensorflow-lite-2.5.0/examples# time ./label_image -a 1 -w 0 -p 1 -c 10000
INFO: Loaded model ./mobilenet_v1_1.0_224_quant.tflite
INFO: resolved reporter
INFO: Created TensorFlow Lite delegate for NNAPI.
INFO: Use NNAPI acceleration.
INFO: Applied NNAPI delegate.
INFO: invoked
INFO: average time: 3.30189 ms
INFO: 0.768627: 653 military uniform
INFO: 0.105882: 907 Windsor tie
INFO: 0.0196078: 458 bow tie
INFO: 0.0117647: 466 bulletproof vest
INFO: 0.00784314: 835 suit

real    0m33.128s
user    0m7.516s
sys     0m1.590s
It seems to be processed through OpenVX, and apparently the result of the initial graph processing can be stored to disk.
11.3 Hardware accelerators warmup time
For both Arm NN and TensorFlow Lite, the initial execution of model inference takes longer time, because of the model graph initialization needed by the GPU/NPU hardware accelerator. The initialization phase is known as warmup. This time duration can be decreased for subsequent application that runs by storing on disk the information resulted from the initial OpenVX graph processing. The following environment variables should be used for this purpose:
VIV_VX_ENABLE_CACHE_GRAPH_BINARY: flag to enable/disable OpenVX graph caching
VIV_VX_CACHE_BINARY_GRAPH_DIR: set location of the cached information on disk
For example, set these variables on the console in this way:
export VIV_VX_ENABLE_CACHE_GRAPH_BINARY="1"
export VIV_VX_CACHE_BINARY_GRAPH_DIR=`pwd`
[link : https://www.nxp.com/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf]
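Wiring the two variables from the guide into the label_image test looks like this (a sketch; caching into the current directory is my own choice):

$ export VIV_VX_ENABLE_CACHE_GRAPH_BINARY="1"
$ export VIV_VX_CACHE_BINARY_GRAPH_DIR=`pwd`
$ time ./label_image -a 1 -w 1    # first run still pays the ~4.6 s graph build
$ time ./label_image -a 1 -w 1    # second run should reload the cached graph binary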
Huh? Last time I looked, I could have sworn the 8M PLUS had no Cortex-M core?!?!
Hmm.. let's just say my eyes were broken -_ㅠ
Anyway, trying to use this board that's been rolling around(?) the office..
Whoa.. how can one board enumerate this many debug ports? In my case, the Linux console landed on COM27.
That's what the slip of paper in the box was about -_-
So whose are the first and second ports!?
Four UART connections will appear on the PC, the third port for the Cortex-A53 core and the fourth for Cortex-M7 core system debugging.
[link : https://www.nxp.com/docs/en/quick-reference-guide/8MPLUSEVKQSG.pdf]
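On a Linux host the four ports usually show up as a consecutive block of ttyUSB devices, so the A53 console would be reached roughly like this (a sketch assuming the EVK is the only USB-serial device attached, making the third port ttyUSB2):

$ picocom -b 115200 /dev/ttyUSB2    # third port = Cortex-A53 Linux console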
Under Project - Tutorial there's a Machine Learning guide.
[링크 : https://www.nxp.com/document/guide/getting-started-with-the-i-mx-8m-plus-evk:GS-iMX-8M-Plus-EVK]
The i.MX 8M PLUS supports the full feature set,
but to try out the NPU it looks like you have to jump through some hoops with eIQ.
There's also a Cortex-M7 (which reportedly runs standalone or collaboratively), so maybe the idea is to turn it into a kind of accelerator?
TFLite
TFLite for MCU
Clicking the download link above sends you somewhere odd(?).
[link : https://mcuxpresso.nxp.com/en/welcome] Seems to be what you need to use the Cortex-M7. Eclipse-based?
[link : https://source.codeaurora.org/external/imx/imx-manifest]
Oooh, i.MX 8M Plus!!
Cortex-A / GPU / NPU, ooooh...
[link : https://www.nxp.com/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf]
+
Downloading the image, it's organized as below.
If you can't be bothered, burning fsl-image-validation-imx-imx8mmevk.sdcard to an SD card and booting it should do.
imx_m4_demos contains .bin files; how are those supposed to be loaded and run?
Without MCUXpresso, is the only option to put the file on the SD card and launch it directly from U-Boot? (A command sketch follows the quote below.)
4.2 Run applications using U-Boot
This section describes how to run applications using an SD card and pre-built U-Boot image for i.MX processor.
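The usual U-Boot sequence for firing up the M core runs along these lines (a hedged sketch: the demo name is hypothetical, and the 0x7e0000 TCM address follows NXP's 8M-family examples, so check the SDK readme for the exact board):

u-boot=> fatload mmc 1:1 0x48000000 hello_world.bin   # copy the demo from SD into DRAM
u-boot=> cp.b 0x48000000 0x7e0000 0x20000             # stage it into the Cortex-M TCM
u-boot=> bootaux 0x7e0000                             # release the M core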
Can't the M core be reached from Linux via /sys or the like?
And would burning the wic file with win32diskimager work?
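On the Linux side, the equivalent of win32diskimager is plain dd (a sketch; the .wic name here is made up to match the BSP naming, and /dev/sdX must be the actual card device):

$ sudo dd if=imx-image-full-imx8mpevk.wic of=/dev/sdX bs=1M status=progress conv=fsync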
[link : https://www.nxp.com/docs/en/user-guide/IMX_LINUX_USERS_GUIDE.pdf]
[link : https://www.nxp.com/part/8MPLUSLPD4-EVK#/]
With MCUXpresso, you select imx8m quad and build?
[link : https://www.embeddedartists.com/wp-content/uploads/2019/03/iMX8M_Working_with_Cortex-M.pdf]
Tried building it, and on the Raspberry Pi 4 it finished somewhere ambiguously between failure and success.
$ git clone https://github.com/tensorflow/tensorflow.git
$ cd tensorflow/
$ git checkout v2.7.0-rc0
$ mkdir ../tflite_build
$ cd ../tflite_build
$ cmake ../tensorflow/tensorflow/lite/ -DTFLITE_ENABLE_GPU=ON
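The configure step above still needs the actual compile, which per the same guide is just (a sketch; -j2 is my choice, to keep the Pi from swapping):

$ cmake --build . -j2    # build the tflite library and tools configured above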
[link : https://www.tensorflow.org/lite/guide/build_cmake]
The problem is that while the RPi 3's VideoCore IV has a community-made OpenCL implementation,
one for the RPi 4's VideoCore VI hasn't come out yet, so it probably can't be used ㅠㅠ
/home/pi/work/tflite_build/opencl_headers/CL/cl_version.h:34:104: note: #pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 220 (OpenCL 2.2)
 #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 220 (OpenCL 2.2)")
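The version warning itself is harmless, and if it bothers you it can probably be silenced by pinning the target version at configure time (a guess on my part, assuming the headers honor the standard CL_TARGET_OPENCL_VERSION macro):

$ cmake ../tensorflow/tensorflow/lite/ -DTFLITE_ENABLE_GPU=ON \
      -DCMAKE_CXX_FLAGS="-DCL_TARGET_OPENCL_VERSION=220"    # stop cl_version.h from defaulting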
[link : https://github.com/doe300/VC4CL/issues/86]
[link : https://github.com/Idein/py-videocore6]
[link : https://forums.raspberrypi.com/viewtopic.php?t=312646]
Not the official release yet, just a candidate, but the tensorflow lite build was changed to drop make:
it now builds only with cmake or bazel.
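For reference, the bazel side is a one-liner (a sketch; this is the stock shared-library target in the TFLite tree):

$ bazel build -c opt //tensorflow/lite:libtensorflowlite.so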
Delegates existed before too, but.. I still have no idea through what channel a delegate actually becomes usable from tensorflow lite.
[link : https://www.tensorflow.org/lite/performance/implementing_delegate]
[link : https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/gpu/README.md]
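One channel that doesn't require rebuilding the app is the external delegate path: the delegate is compiled as a shared object and the stock tools load it at runtime (a sketch; the .so path is hypothetical):

$ ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite \
      --external_delegate_path=/usr/lib/libvx_delegate.so    # dlopen the delegate at runtime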
Looking into solar panels, this kind of thing turns up..
There's also the PWM type, but MPPT seems to squeeze out the maximum power more efficiently.
By its nature a solar panel has a maximum output voltage,
and the maximum current seems to come out when you pull it down to about 90% of that voltage.
But.. how do you pull the voltage down to whatever value you want?
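From the links below, the trick seems to be a DC-DC converter between panel and battery: the controller adjusts the converter's duty cycle, which sets the panel's operating voltage, and it hunts for the point where power peaks. In my own notation, the condition being tracked is:

P = V \cdot I(V), \qquad \frac{dP}{dV} = I + V\,\frac{dI}{dV} = 0 \;\Rightarrow\; \frac{dI}{dV} = -\frac{I}{V}

Perturb-and-observe controllers approximate this by nudging the duty cycle and keeping the nudge whenever the measured V·I goes up.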
[link : https://en.m.wikipedia.org/wiki/Maximum_power_point_tracking]
[link : https://blog.naver.com/sipeng/220035371228]
[link : https://electronics.stackexchange.com/.../what-is-the-best-way-to-limit-voltage-of-a-solar-panel]
[link : https://electronics.stackexchange.com/questions/326271/solar-charging-with-a-super-capacitor-buffer]