Tried running it with a cfg and weights obtained from elsewhere, and it kept spitting out bicycle, person, and car out of nowhere, which sent me into a sudden panic -_-

./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg

 

 

So I dug into the code, and it turns out that when you run it with detect,

it calls test_detector and automatically uses something called cfg/coco.data.

    if (0 == strcmp(argv[1], "average")){
        average(argc, argv);
    } else if (0 == strcmp(argv[1], "yolo")){
        run_yolo(argc, argv);
    } else if (0 == strcmp(argv[1], "voxel")){
        run_voxel(argc, argv);
    } else if (0 == strcmp(argv[1], "super")){
        run_super(argc, argv);
    } else if (0 == strcmp(argv[1], "detector")){
        run_detector(argc, argv);
    } else if (0 == strcmp(argv[1], "detect")){
        float thresh = find_float_arg(argc, argv, "-thresh", .24);
		int ext_output = find_arg(argc, argv, "-ext_output");
        char *filename = (argc > 4) ? argv[4]: 0;
        test_detector("cfg/coco.data", argv[2], argv[3], filename, thresh, 0.5, 0, ext_output, 0, NULL, 0, 0);
    }
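
In other words, detect is apparently just shorthand for the detector test path with cfg/coco.data baked in. If I'm reading the dispatch right, the explicit form (and the way to point it at a different .data file) should look something like this:

$ ./darknet detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights data/dog.jpg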

 

There doesn't seem to be much in coco.data; the most important(?) entry looks like names.

$ cat cfg/coco.data
classes= 80
train  = /home/pjreddie/data/coco/trainvalno5k.txt
valid  = coco_testdev
#valid = data/coco_val_5k.list
names = data/coco.names
backup = /home/pjreddie/backup/
eval=coco

 

And in coco.names you can confirm there's a name for every class.

$ cat data/coco.names
person
bicycle
car
motorbike
aeroplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
sofa
pottedplant
bed
diningtable
toilet
tvmonitor
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush

 

So..

does that mean I end up needing at least five files?

cfg, weights, data, names, and a test image
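
So for weights trained on your own classes, the fix would presumably be to write your own .data and .names and pass them in through detector test instead of detect. A rough sketch (my.data, my.names, my.cfg and my.weights are made-up names for illustration):

$ cat my.data
classes = 2
names = my.names
backup = backup/
$ cat my.names
cat
dog
$ ./darknet detector test my.data my.cfg my.weights data/dog.jpg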


With Trump sanctioning China,

Huawei and the like swept up several months' worth of parts,

then on top of that COVID spread,

and with everyone stuck at home, electronics kept selling,

and even with a handful of foundries cranking out chips like mad they can't keep up with IT companies' demand,

so the auto industry ends up short on parts and unable to build cars.

Everything really is spinning around and tangled together lol

 

 

[Link : http://cmobile.g-enews.com/view.php?ud=2020122010081579306336258971_1&md=20201223135615_R]


pooling - prevents overfitting

[Link : https://hobinjeong.medium.com/cnn에서-pooling이란-c4e01aa83c83]

[Link : http://hobinjeong.medium.com/cnn-convolutional-neural-network-9f600dd3b66395]

 

Found a well-organized video, so here's the link.

 

Convolution responds to specific signals and locates where they are in the image,

while pooling is an operation(?) that improves the recognition rate by making it less sensitive to position or angle.

[Link : https://www.youtube.com/watch?v=u0eT7VZAgRw]


Games don't work with it at all, though?

 

 

Going only by what's being said, it sounds like Samsung and LG both work through the same API..

I haven't gotten my hands on a Galaxy Fold or Flip, so I can't really tell?

[Link : http://mobile.developer.lge.com/develop/lgdual/lgdual_sdk/]

[Link : http://mobile.developer.lge.com/develop/sdks/lg-dual-screen-sdk/]

[Link : http://mobile.developer.lge.com/develop/dev-guides/lg-dual-screen-guide/]

[Link : http://developer.android.com/guide/topics/ui/foldables]

RNN(Recurrent Neural Network)

 

CNN (Convolutional Neural Network)

Convolutional neural network: convolution plus pooling

 

[Link : http://ebbnflow.tistory.com/119]

[Link : http://dbrang.tistory.com/1537]

 


The original darknet has only three options that really affect performance,

GPU=0
CUDNN=0
OPENCV=0
OPENMP=0

 

whereas AlexeyAB's darknet has four: gpu, cudnn, avx, and openmp.

GPU=0
CUDNN=0
CUDNN_HALF=0
OPENCV=0
AVX=0
OPENMP=0
LIBSO=0
ZED_CAMERA=0
ZED_CAMERA_v2_8=0

[Link : https://github.com/AlexeyAB/darknet]
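
To actually turn these on, you just flip the flags at the top of the Makefile and rebuild; something like this should do it (a rough sketch, the sed lines simply edit the flags in place):

$ sed -i 's/^AVX=0/AVX=1/' Makefile
$ sed -i 's/^OPENMP=0/OPENMP=1/' Makefile
$ make clean && make -j4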

 

Out of boredom(?) I went looking to see whether 2nd-gen i5s are even listed, and huh..? there they are?

[Link : https://ark.intel.com/.../intel-core-i5-2500-processor-6m-cache-up-to-3-70-ghz.html]

[Link : https://ark.intel.com/../intel-core-i5-2520m-processor-3m-cache-up-to-3-20-ghz.html]

 

But when I built it and ran it, mine only has plain AVX, so it won't run ㅠㅠ

AVX2 is apparently supported from Haswell onward.. so my experiment machine at home isn't going to cut it..

$ ./darknet detect cfg/yolov3.cfg ../yolov3.weights data/dog.jpg
 GPU isn't used 
 Used AVX 
 Not used FMA & AVX2 
 OpenCV isn't used - data augmentation will be slow 
Illegal instruction (core dumped)
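
If you want to check ahead of time whether the CPU actually has AVX2, grepping the flags in /proc/cpuinfo should be enough:

$ grep -o -w -e avx -e avx2 /proc/cpuinfo | sort -u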

 

Running the AlexeyAB build single-core and then with openmp cut the time roughly in half.

data/dog.jpg: Predicted in 11175.292000 milli-seconds.
data/dog.jpg: Predicted in 5974.575000 milli-seconds.
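
For reference, with an OpenMP build you can normally pin the thread count through OMP_NUM_THREADS (assuming darknet doesn't override it internally), which makes this sort of comparison easy to reproduce:

$ OMP_NUM_THREADS=1 ./darknet detect cfg/yolov3.cfg ../yolov3.weights data/dog.jpg
$ OMP_NUM_THREADS=4 ./darknet detect cfg/yolov3.cfg ../yolov3.weights data/dog.jpg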


Woke up this morning to -22°C!!!

By around lunchtime it was -11°C.

After hovering around -20°C, -10°C almost feels warm lol


I've easily put in a good 100 hours.

Anyway, in the middle of all that I'm eyeing Xenoblade Chronicles 2,

and I still need to do "이어지는 미래" (Future Connected), or would you call that DLC?

Started it up and it begins at level 60, heh


This is the original darknet built on the Raspberry Pi with no options enabled:

$ ldd darknet
        linux-vdso.so.1 (0x7efe1000)
        /usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so => /usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so (0x76f54000)
        libm.so.6 => /lib/arm-linux-gnueabihf/libm.so.6 (0x76eb6000)
        libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0x76e8c000)
        libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0x76d3e000)
        /lib/ld-linux-armhf.so.3 (0x76f69000)

 

And below is the AlexeyAB darknet built with neon and openmp enabled:

$ ldd darknet
        linux-vdso.so.1 (0x7eefc000)
        /usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so => /usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so (0x76f79000)
        libgomp.so.1 => /lib/arm-linux-gnueabihf/libgomp.so.1 (0x76f25000)
        libstdc++.so.6 => /lib/arm-linux-gnueabihf/libstdc++.so.6 (0x76dde000)
        libm.so.6 => /lib/arm-linux-gnueabihf/libm.so.6 (0x76d5c000)
        libgcc_s.so.1 => /lib/arm-linux-gnueabihf/libgcc_s.so.1 (0x76d2f000)
        libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0x76d05000)
        libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0x76bb7000)
        libdl.so.2 => /lib/arm-linux-gnueabihf/libdl.so.2 (0x76ba4000)
        /lib/ld-linux-armhf.so.3 (0x76f8e000)

 

 

+
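
The tiny.sh used below is just a small wrapper around the detect command, roughly along these lines (paths inferred from the log output, so treat it as a sketch):

$ cat tiny.sh
#!/bin/bash
# run yolov3-tiny on the sample image
./darknet detect cfg/yolov3-tiny.cfg ../yolov3-tiny.weights data/dog.jpg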

cpu only = 30.13sec

$ ./tiny.sh
 GPU isn't used
 OpenCV isn't used - data augmentation will be slow
mini_batch = 1, batch = 1, time_steps = 1, train = 0
   layer   filters  size/strd(dil)      input                output
   0 conv     16       3 x 3/ 1    416 x 416 x   3 ->  416 x 416 x  16 0.150 BF
   1 max                2x 2/ 2    416 x 416 x  16 ->  208 x 208 x  16 0.003 BF
   2 conv     32       3 x 3/ 1    208 x 208 x  16 ->  208 x 208 x  32 0.399 BF
   3 max                2x 2/ 2    208 x 208 x  32 ->  104 x 104 x  32 0.001 BF
   4 conv     64       3 x 3/ 1    104 x 104 x  32 ->  104 x 104 x  64 0.399 BF
   5 max                2x 2/ 2    104 x 104 x  64 ->   52 x  52 x  64 0.001 BF
   6 conv    128       3 x 3/ 1     52 x  52 x  64 ->   52 x  52 x 128 0.399 BF
   7 max                2x 2/ 2     52 x  52 x 128 ->   26 x  26 x 128 0.000 BF
   8 conv    256       3 x 3/ 1     26 x  26 x 128 ->   26 x  26 x 256 0.399 BF
   9 max                2x 2/ 2     26 x  26 x 256 ->   13 x  13 x 256 0.000 BF
  10 conv    512       3 x 3/ 1     13 x  13 x 256 ->   13 x  13 x 512 0.399 BF
  11 max                2x 2/ 1     13 x  13 x 512 ->   13 x  13 x 512 0.000 BF
  12 conv   1024       3 x 3/ 1     13 x  13 x 512 ->   13 x  13 x1024 1.595 BF
  13 conv    256       1 x 1/ 1     13 x  13 x1024 ->   13 x  13 x 256 0.089 BF
  14 conv    512       3 x 3/ 1     13 x  13 x 256 ->   13 x  13 x 512 0.399 BF
  15 conv    255       1 x 1/ 1     13 x  13 x 512 ->   13 x  13 x 255 0.044 BF
  16 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
  17 route  13                                     ->   13 x  13 x 256
  18 conv    128       1 x 1/ 1     13 x  13 x 256 ->   13 x  13 x 128 0.011 BF
  19 upsample                 2x    13 x  13 x 128 ->   26 x  26 x 128
  20 route  19 8                                   ->   26 x  26 x 384
  21 conv    256       3 x 3/ 1     26 x  26 x 384 ->   26 x  26 x 256 1.196 BF
  22 conv    255       1 x 1/ 1     26 x  26 x 256 ->   26 x  26 x 255 0.088 BF
  23 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
Total BFLOPS 5.571
avg_outputs = 341534
Loading weights from ../yolov3-tiny.weights...
 seen 64, trained: 32013 K-images (500 Kilo-batches_64)
Done! Loaded 24 layers from weights-file
 Detection layer: 16 - type = 28
 Detection layer: 23 - type = 28
data/dog.jpg: Predicted in 30133.750000 milli-seconds.
dog: 81%
bicycle: 38%
car: 71%
truck: 42%
truck: 62%
car: 40%
Not compiled with OpenCV, saving to predictions.png instead

 

neon = 10.718 sec

$ ./tiny.sh
 GPU isn't used
 OpenCV isn't used - data augmentation will be slow
mini_batch = 1, batch = 1, time_steps = 1, train = 0
   layer   filters  size/strd(dil)      input                output
   0 conv     16       3 x 3/ 1    416 x 416 x   3 ->  416 x 416 x  16 0.150 BF
   1 max                2x 2/ 2    416 x 416 x  16 ->  208 x 208 x  16 0.003 BF
   2 conv     32       3 x 3/ 1    208 x 208 x  16 ->  208 x 208 x  32 0.399 BF
   3 max                2x 2/ 2    208 x 208 x  32 ->  104 x 104 x  32 0.001 BF
   4 conv     64       3 x 3/ 1    104 x 104 x  32 ->  104 x 104 x  64 0.399 BF
   5 max                2x 2/ 2    104 x 104 x  64 ->   52 x  52 x  64 0.001 BF
   6 conv    128       3 x 3/ 1     52 x  52 x  64 ->   52 x  52 x 128 0.399 BF
   7 max                2x 2/ 2     52 x  52 x 128 ->   26 x  26 x 128 0.000 BF
   8 conv    256       3 x 3/ 1     26 x  26 x 128 ->   26 x  26 x 256 0.399 BF
   9 max                2x 2/ 2     26 x  26 x 256 ->   13 x  13 x 256 0.000 BF
  10 conv    512       3 x 3/ 1     13 x  13 x 256 ->   13 x  13 x 512 0.399 BF
  11 max                2x 2/ 1     13 x  13 x 512 ->   13 x  13 x 512 0.000 BF
  12 conv   1024       3 x 3/ 1     13 x  13 x 512 ->   13 x  13 x1024 1.595 BF
  13 conv    256       1 x 1/ 1     13 x  13 x1024 ->   13 x  13 x 256 0.089 BF
  14 conv    512       3 x 3/ 1     13 x  13 x 256 ->   13 x  13 x 512 0.399 BF
  15 conv    255       1 x 1/ 1     13 x  13 x 512 ->   13 x  13 x 255 0.044 BF
  16 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
  17 route  13                                     ->   13 x  13 x 256
  18 conv    128       1 x 1/ 1     13 x  13 x 256 ->   13 x  13 x 128 0.011 BF
  19 upsample                 2x    13 x  13 x 128 ->   26 x  26 x 128
  20 route  19 8                                   ->   26 x  26 x 384
  21 conv    256       3 x 3/ 1     26 x  26 x 384 ->   26 x  26 x 256 1.196 BF
  22 conv    255       1 x 1/ 1     26 x  26 x 256 ->   26 x  26 x 255 0.088 BF
  23 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
Total BFLOPS 5.571
avg_outputs = 341534
Loading weights from ../yolov3-tiny.weights...
 seen 64, trained: 32013 K-images (500 Kilo-batches_64)
Done! Loaded 24 layers from weights-file
 Detection layer: 16 - type = 28
 Detection layer: 23 - type = 28
data/dog.jpg: Predicted in 10718.416000 milli-seconds.
dog: 81%
bicycle: 38%
car: 71%
truck: 42%
truck: 62%
car: 40%
Not compiled with OpenCV, saving to predictions.png instead

 

openmp = 8.686 sec

$ ./tiny.sh
 GPU isn't used
 OpenCV isn't used - data augmentation will be slow
mini_batch = 1, batch = 1, time_steps = 1, train = 0
   layer   filters  size/strd(dil)      input                output
   0 conv     16       3 x 3/ 1    416 x 416 x   3 ->  416 x 416 x  16 0.150 BF
   1 max                2x 2/ 2    416 x 416 x  16 ->  208 x 208 x  16 0.003 BF
   2 conv     32       3 x 3/ 1    208 x 208 x  16 ->  208 x 208 x  32 0.399 BF
   3 max                2x 2/ 2    208 x 208 x  32 ->  104 x 104 x  32 0.001 BF
   4 conv     64       3 x 3/ 1    104 x 104 x  32 ->  104 x 104 x  64 0.399 BF
   5 max                2x 2/ 2    104 x 104 x  64 ->   52 x  52 x  64 0.001 BF
   6 conv    128       3 x 3/ 1     52 x  52 x  64 ->   52 x  52 x 128 0.399 BF
   7 max                2x 2/ 2     52 x  52 x 128 ->   26 x  26 x 128 0.000 BF
   8 conv    256       3 x 3/ 1     26 x  26 x 128 ->   26 x  26 x 256 0.399 BF
   9 max                2x 2/ 2     26 x  26 x 256 ->   13 x  13 x 256 0.000 BF
  10 conv    512       3 x 3/ 1     13 x  13 x 256 ->   13 x  13 x 512 0.399 BF
  11 max                2x 2/ 1     13 x  13 x 512 ->   13 x  13 x 512 0.000 BF
  12 conv   1024       3 x 3/ 1     13 x  13 x 512 ->   13 x  13 x1024 1.595 BF
  13 conv    256       1 x 1/ 1     13 x  13 x1024 ->   13 x  13 x 256 0.089 BF
  14 conv    512       3 x 3/ 1     13 x  13 x 256 ->   13 x  13 x 512 0.399 BF
  15 conv    255       1 x 1/ 1     13 x  13 x 512 ->   13 x  13 x 255 0.044 BF
  16 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
  17 route  13                                     ->   13 x  13 x 256
  18 conv    128       1 x 1/ 1     13 x  13 x 256 ->   13 x  13 x 128 0.011 BF
  19 upsample                 2x    13 x  13 x 128 ->   26 x  26 x 128
  20 route  19 8                                   ->   26 x  26 x 384
  21 conv    256       3 x 3/ 1     26 x  26 x 384 ->   26 x  26 x 256 1.196 BF
  22 conv    255       1 x 1/ 1     26 x  26 x 256 ->   26 x  26 x 255 0.088 BF
  23 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
Total BFLOPS 5.571
avg_outputs = 341534
Loading weights from ../yolov3-tiny.weights...
 seen 64, trained: 32013 K-images (500 Kilo-batches_64)
Done! Loaded 24 layers from weights-file
 Detection layer: 16 - type = 28
 Detection layer: 23 - type = 28
data/dog.jpg: Predicted in 8686.237000 milli-seconds.
dog: 81%
bicycle: 38%
car: 71%
truck: 42%
truck: 62%
car: 40%
Not compiled with OpenCV, saving to predictions.png instead

 

 

openmp + neon = 4.449 sec

$ ./tiny.sh
 GPU isn't used
 OpenCV isn't used - data augmentation will be slow
mini_batch = 1, batch = 1, time_steps = 1, train = 0
   layer   filters  size/strd(dil)      input                output
   0 conv     16       3 x 3/ 1    416 x 416 x   3 ->  416 x 416 x  16 0.150 BF
   1 max                2x 2/ 2    416 x 416 x  16 ->  208 x 208 x  16 0.003 BF
   2 conv     32       3 x 3/ 1    208 x 208 x  16 ->  208 x 208 x  32 0.399 BF
   3 max                2x 2/ 2    208 x 208 x  32 ->  104 x 104 x  32 0.001 BF
   4 conv     64       3 x 3/ 1    104 x 104 x  32 ->  104 x 104 x  64 0.399 BF
   5 max                2x 2/ 2    104 x 104 x  64 ->   52 x  52 x  64 0.001 BF
   6 conv    128       3 x 3/ 1     52 x  52 x  64 ->   52 x  52 x 128 0.399 BF
   7 max                2x 2/ 2     52 x  52 x 128 ->   26 x  26 x 128 0.000 BF
   8 conv    256       3 x 3/ 1     26 x  26 x 128 ->   26 x  26 x 256 0.399 BF
   9 max                2x 2/ 2     26 x  26 x 256 ->   13 x  13 x 256 0.000 BF
  10 conv    512       3 x 3/ 1     13 x  13 x 256 ->   13 x  13 x 512 0.399 BF
  11 max                2x 2/ 1     13 x  13 x 512 ->   13 x  13 x 512 0.000 BF
  12 conv   1024       3 x 3/ 1     13 x  13 x 512 ->   13 x  13 x1024 1.595 BF
  13 conv    256       1 x 1/ 1     13 x  13 x1024 ->   13 x  13 x 256 0.089 BF
  14 conv    512       3 x 3/ 1     13 x  13 x 256 ->   13 x  13 x 512 0.399 BF
  15 conv    255       1 x 1/ 1     13 x  13 x 512 ->   13 x  13 x 255 0.044 BF
  16 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
  17 route  13                                     ->   13 x  13 x 256
  18 conv    128       1 x 1/ 1     13 x  13 x 256 ->   13 x  13 x 128 0.011 BF
  19 upsample                 2x    13 x  13 x 128 ->   26 x  26 x 128
  20 route  19 8                                   ->   26 x  26 x 384
  21 conv    256       3 x 3/ 1     26 x  26 x 384 ->   26 x  26 x 256 1.196 BF
  22 conv    255       1 x 1/ 1     26 x  26 x 256 ->   26 x  26 x 255 0.088 BF
  23 yolo
[yolo] params: iou loss: mse (2), iou_norm: 0.75, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.00
Total BFLOPS 5.571
avg_outputs = 341534
Loading weights from ../yolov3-tiny.weights...
 seen 64, trained: 32013 K-images (500 Kilo-batches_64)
Done! Loaded 24 layers from weights-file
 Detection layer: 16 - type = 28
 Detection layer: 23 - type = 28
data/dog.jpg: Predicted in 4449.888000 milli-seconds.
dog: 81%
bicycle: 38%
car: 71%
truck: 42%
truck: 62%
car: 40%
Not compiled with OpenCV, saving to predictions.png instead

 


The options below were recommended somewhere, so I tried applying them:

-mthumb -O3 -march=armv7-a -mcpu=cortex-a9 -mtune=cortex-a9 -mfpu=neon -mvectorize-with-neon-quad -mfloat-abi=softfp

[Link : https://stackoverflow.com/questions/14962447/gcc-options-for-a-freescale-imx6q-arm-processor]

[Link : https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html]

 

I got an error, so I changed -mfloat-abi from softfp to hard.

In file included from /usr/include/features.h:448,
                 from /usr/include/arm-linux-gnueabihf/bits/libc-header-start.h:33,
                 from /usr/include/stdlib.h:25,
                 from include/darknet.h:12,
                 from ./src/activations.h:3,
                 from ./src/gemm.h:3,
                 from ./src/gemm.c:1:
/usr/include/arm-linux-gnueabihf/gnu/stubs.h:7:11: fatal error: gnu/stubs-soft.h: No such file or directory
 # include <gnu/stubs-soft.h>
           ^~~~~~~~~~~~~~~~~~
compilation terminated.

[Link : https://stackoverflow.com/questions/49139125/fatal-error-gnu-stubs-soft-h-no-such-file-or-directory]

 

Anyway it builds, but the warning below shows up. So -march and -mcpu conflict.. which one should I keep?

cc1: warning: switch -mcpu=cortex-a9 conflicts with -march=armv7-a switch
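
Since -mcpu=cortex-a9 already implies the armv7-a architecture, the simplest fix is probably to drop -march (or, the other way around, keep -march and drop -mcpu). Combined with the hard-float change, the flags would end up roughly like this (a sketch I haven't verified on the board):

-mthumb -O3 -mcpu=cortex-a9 -mtune=cortex-a9 -mfpu=neon -mvectorize-with-neon-quad -mfloat-abi=hard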

 

 

+

2021.01.12

[Link : https://developer.arm.com/documentation/dui0472/i/using-the-neon-vectorizing-compiler/generating-neon-instructions-from-c-or-c---code]

 
