Hardware/Network equipment - 2025. 9. 17. 16:51

Huh.. I could swear I'd seen this before, but after digging around, maybe this model just doesn't support it..

The one at work is a T24000M and it's unsupported.. only port trunking shows up.

[링크 : https://intotw.tistory.com/172]

Posted by 구차니

If I pull just the weights out of a tflite file and train starting from them, does that count as fine-tuning / transfer learning?

The conversion from a TensorFlow SavedModel or tf.keras H5 model to .tflite is an irreversible process. Specifically, the original model topology is optimized during the compilation by the TFLite converter, which leads to some loss of information. Also, the original tf.keras model's loss and optimizer configurations are discarded, because those aren't required for inference.

However, the .tflite file still contains some information that can help you restore the original trained model. Most importantly, the weight values are available, although they might be quantized, which could lead to some loss in precision.

The code example below shows you how to read weight values from a .tflite file after it's created from a simple trained tf.keras.Model.



import numpy as np
import tensorflow as tf

# First, create and train a dummy model for demonstration purposes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=[5], activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(loss="binary_crossentropy", optimizer="sgd")

xs = np.ones([8, 5])
ys = np.zeros([8, 1])
model.fit(xs, ys, epochs=1)

# Convert it to a TFLite model file.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted.tflite", "wb").write(tflite_model)

# Use `tf.lite.Interpreter` to load the written .tflite back from the file system.
interpreter = tf.lite.Interpreter(model_path="converted.tflite")
all_tensor_details = interpreter.get_tensor_details()
interpreter.allocate_tensors()

for tensor_item in all_tensor_details:
  print("Weight %s:" % tensor_item["name"])
  print(interpreter.tensor(tensor_item["index"])())

[링크 : https://stackoverflow.com/questions/59559289/is-there-any-way-to-convert-a-tensorflow-lite-tflite-file-back-to-a-keras-fil]
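
To actually continue training from those weights (the question at the top), one approach is to rebuild the same Keras architecture and push the recovered arrays back in with set_weights(). A minimal sketch, assuming the dummy model above; the tensor names are hypothetical (print weights.keys() to find the real ones), and note that TFLite stores Dense kernels transposed relative to Keras:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted.tflite")
interpreter.allocate_tensors()

# gather every tensor by name; the weight/bias constants are among them
weights = {d["name"]: interpreter.tensor(d["index"])()
           for d in interpreter.get_tensor_details()}

# rebuild the original architecture, then restore the recovered values
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=[5], activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid")])
model.layers[0].set_weights([
    weights["sequential/dense/MatMul"].T,                  # hypothetical name
    weights["sequential/dense/BiasAdd/ReadVariableOp"]])   # hypothetical name

model.compile(loss="binary_crossentropy", optimizer="sgd")
# model.fit(...) now resumes training from the recovered weights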

 

Whether you read one in from disk or build one with tf.keras.applications.MobileNetV2, loading seems to work the same way apart from whether the weights come with it.

# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
#or load your own
#base_model = tf.saved_model.load("./pretrained_models/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model")

[링크 : https://www.aranacorp.com/en/training-a-tensorflow2-model-with-keras/]

 

Looking it up again, in the Keras guide's terms:

transfer learning seems to mean training newly added layers while leaving the existing weights untouched,

The typical transfer-learning workflow
This leads us to how a typical transfer learning workflow can be implemented in Keras:

Instantiate a base model and load pre-trained weights into it.
Freeze all layers in the base model by setting trainable = False.
Create a new model on top of the output of one (or several) layers from the base model.
Train your new model on your new dataset.
Note that an alternative, more lightweight workflow could also be:

Instantiate a base model and load pre-trained weights into it.
Run your new dataset through it and record the output of one (or several) layers from the base model. This is called feature extraction.
Use that output as input data for a new, smaller model.


First, instantiate a base model with pre-trained weights.

base_model = keras.applications.Xception(
    weights='imagenet',  # Load weights pre-trained on ImageNet.
    input_shape=(150, 150, 3),
    include_top=False)  # Do not include the ImageNet classifier at the top.
Then, freeze the base model.

base_model.trainable = False


Create a new model on top.

inputs = keras.Input(shape=(150, 150, 3))
# We make sure that the base_model is running in inference mode here,
# by passing `training=False`. This is important for fine-tuning, as you will
# learn in a few paragraphs.
x = base_model(inputs, training=False)
# Convert features of shape `base_model.output_shape[1:]` to vectors
x = keras.layers.GlobalAveragePooling2D()(x)
# A Dense classifier with a single unit (binary classification)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
Train the model on new data.

model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[keras.metrics.BinaryAccuracy()])
model.fit(new_dataset, epochs=20, callbacks=..., validation_data=...)

 

Fine-tuning, by contrast, seems to mean unlocking the model's own weights so they can change, and training slowly.

Fine-tuning
Once your model has converged on the new data, you can try to unfreeze all or part of the base model and retrain the whole model end-to-end with a very low learning rate.

This is an optional last step that can potentially give you incremental improvements. It could also potentially lead to quick overfitting – keep that in mind.

It is critical to only do this step after the model with frozen layers has been trained to convergence. If you mix randomly-initialized trainable layers with trainable layers that hold pre-trained features, the randomly-initialized layers will cause very large gradient updates during training, which will destroy your pre-trained features.

It's also critical to use a very low learning rate at this stage, because you are training a much larger model than in the first round of training, on a dataset that is typically very small. As a result, you are at risk of overfitting very quickly if you apply large weight updates. Here, you only want to readapt the pretrained weights in an incremental way.


This is how to implement fine-tuning of the whole base model:

# Unfreeze the base model
base_model.trainable = True

# It's important to recompile your model after you make any changes
# to the `trainable` attribute of any inner layer, so that your changes
# are taken into account
model.compile(optimizer=keras.optimizers.Adam(1e-5),  # Very low learning rate
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[keras.metrics.BinaryAccuracy()])

# Train end-to-end. Be careful to stop before you overfit!
model.fit(new_dataset, epochs=10, callbacks=..., validation_data=...)

[링크 : https://keras.io/guides/transfer_learning/]

 

 

+

[링크 : https://89douner.tistory.com/270] further training based on VGG16

 

There are two things that H5 files do not include, compared with the SavedModel format:

External losses and metrics added via model.add_loss() and model.add_metric() are not saved (unlike SavedModel). If your model has such losses and metrics and you want to resume training, you need to add these losses back yourself after loading the model. Note that this does not apply to losses/metrics created inside layers via self.add_loss() and self.add_metric(); as long as the layer gets loaded, these losses and metrics are kept, since they are part of the layer's call method.
The computation graph of custom objects, such as custom layers, is not included in the saved file. At loading time, Keras needs access to the Python classes/functions of these objects in order to reconstruct the model. See Custom objects.

[링크 : https://www.tensorflow.org/guide/keras/save_and_serialize?hl=ko]
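
A minimal sketch of that first point, assuming a hypothetical model.h5 that was saved after calling model.add_loss(); the regularization term is made up for illustration:

import tensorflow as tf
from tensorflow import keras

model = keras.models.load_model("model.h5")  # hypothetical file

# the external loss added via model.add_loss() was not stored in the H5 file,
# so it has to be added again before resuming training
model.add_loss(lambda: 1e-4 * tf.reduce_sum(tf.square(model.weights[0])))
model.compile(optimizer="sgd", loss="binary_crossentropy")
# model.fit(...) then optimizes the main loss plus the re-added term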

Posted by 구차니

Compound interest is hard to come by these days,

and the rates, ugh..

 

Anyway, with rates falling like this, it seems hard to use 3-month deposits as a compounding trick.

 

Assume 2,000,000 KRW at 2.8%: one year gives 56,000 KRW interest pre-tax.

Auto-redeposit the 2,000,000 KRW with interest every 3 months at 2.8% for a year and you get 56,590 KRW pre-tax; a measly 590 KRW extra, but extra.
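
A quick sketch to check that arithmetic (pre-tax, interest redeposited each quarter at the same annual rate):

principal = 2_000_000
simple = principal * 0.028                           # plain 1-year deposit: 56,000
compounded = principal * ((1 + 0.028 / 4) ** 4 - 1)  # quarterly auto-redeposit
print(round(simple), round(compounded))              # 56000 vs ~56590: about 590 more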

But rates keep falling these days,

 

so assuming they drop further in 3 months,

having just locked it in while the rate was still relatively high comes out ahead.

 

In times like these(?) there's almost no chance rates go up,

so with a lump sum it's probably better to lock it into 1-year deposits,

though watching it inch up quarter by quarter is its own kind of fun lol

Posted by 구차니
Linux - 2025. 9. 17. 11:14

Huh....?? The output format differs between a regular user and root?

One dd run and I briefly thought the system had crashed -_-

$ cat nvme.log 
$ sudo time dd if=/dev/zero of=test bs=1G count=1 conv=fsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.92836 s, 181 MB/s
0.00user 1.06system 0:06.00elapsed 17%CPU (0avgtext+0avgdata 1051008maxresident)k
0inputs+2097152outputs (0major+262242minor)pagefaults 0swaps

 

It looks like a single `time` behaving differently by privilege, but what actually happens is: without sudo, `time` is the bash shell keyword (`type time` reports "time is a shell keyword"), whereas `sudo time` cannot reach the shell builtin and so executes GNU time from /usr/bin/time, which has its own options and default format.

$ whereis time
time: /usr/bin/time /usr/share/man/man7/time.7.gz /usr/share/man/man2/time.2.gz /usr/share/man/man3/time.3am.gz /usr/share/man/man3/time.3avr.gz /usr/share/man/man1/time.1.gz /usr/share/info/time.info.gz

 

$ sudo whereis time
time: /usr/bin/time /usr/share/man/man7/time.7.gz /usr/share/man/man2/time.2.gz /usr/share/man/man3/time.3am.gz /usr/share/man/man3/time.3avr.gz /usr/share/man/man1/time.1.gz /usr/share/info/time.info.gz

 

$ ls -al /usr/bin/time
-rwxr-xr-x 1 root root 27160  3월 25  2022 /usr/bin/time

 

A regular user's plain `time -v` fails only because the shell keyword intercepts it; invoking `/usr/bin/time -v` or `command time -v` directly works fine.

$ time ls

real 0m0.004s
user 0m0.001s
sys 0m0.003s

 

Under root, giving -p yields the format I'm used to seeing, but with lower time precision. Why? (GNU time's POSIX -p output prints seconds to two decimal places, while the bash keyword prints three.)

$ sudo time -p ls
real 0.00
user 0.00
sys 0.00

 

$ sudo time ls
0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2432maxresident)k
0inputs+0outputs (0major+106minor)pagefaults 0swaps

 

$ sudo time -v ls
Command being timed: "ls"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 90%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2432
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 107
Voluntary context switches: 1
Involuntary context switches: 0
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0

 

The manual does show something relevant..

When did it start behaving like this? I never time things as root, so.. it caught me off guard.

$ man time
FORMATTING THE OUTPUT
       The format string FORMAT controls the contents of the time output.  The
       format string can be set using the `-f' or `--format', `-v' or
       `--verbose', or `-p' or `--portability' options.  If they are not
       given, but the TIME environment variable is set, its value is used as
       the format string.  Otherwise, a built-in default format is used.  The
       default format is:
         %Uuser %Ssystem %Eelapsed %PCPU (%Xtext+%Ddata %Mmax)k
         %Iinputs+%Ooutputs (%Fmajor+%Rminor)pagefaults %Wswaps

Posted by 구차니

On March 17:

3 months / 2,000,000 KRW locked at 2.8%

 

On June 17:

3 months / 2,000,000 KRW locked at 2.5%

 

On September 17:

3 months / 2,000,000 KRW locked at 2.4%

 

Would locking long-term be the better move? (Rough comparison below.)
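
A rough sketch of that comparison: rolling a 3-month deposit at the observed falling rates versus locking one year at the March rate. The December rate is my assumption; everything is pre-tax:

p = 2_000_000
for annual_rate in (0.028, 0.025, 0.024, 0.024):  # Dec assumed to hold at 2.4%
    p *= 1 + annual_rate / 4                      # each 3-month term pays rate/4
print(round(p - 2_000_000))      # ~50,980 KRW from rolling over
print(round(2_000_000 * 0.028))  # 56,000 KRW from the 1-year lock

So under falling rates the one-year lock comes out ahead, matching the earlier hunch.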

Posted by 구차니
Rambling/Computers - 2025. 9. 16. 23:48

As expected, it only arrives once you've forgotten about it lol

 

+

2025.09.17

Hooked up the NVMe and tested it: it's a single lane, so it only manages about 181 MB/s.

Expected, of course, but it's oddly neat that it works at all. (181 MB/s is in the ballpark of a PCIe Gen1 x1 link's ~250 MB/s theoretical ceiling, assuming that's what this slot provides.)

 

$ cat nvme.log 
$ sudo time dd if=/dev/zero of=test bs=1G count=1 conv=fsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.92836 s, 181 MB/s
0.00user 1.06system 0:06.00elapsed 17%CPU (0avgtext+0avgdata 1051008maxresident)k
0inputs+2097152outputs (0major+262242minor)pagefaults 0swaps

Posted by 구차니
embeded/i.mx 8m plus - 2025. 9. 16. 12:11

It divides broadly into audio and vision;

on the vision side, classification / object detection / segmentation are my current interests.

What is the difference between semantic segmentation and instance segmentation? (As far as I can tell, semantic segmentation assigns a class label to every pixel, while instance segmentation additionally separates individual objects of the same class.)

List of domains
Audio
anomaly detection
command recognition
speech recognition
Vision
classification
face recognition
object detection
pose estimation
semantic segmentation
super resolution
instance-segmentation
low-light enhancement

[링크 : https://github.com/NXP/eiq-model-zoo]

 

Pasting it in as text got mangled by the CSS, so I copied it as an image -_-

So selfie-segmenter was built on a proprietary dataset..

[링크 : https://github.com/NXP/eiq-model-zoo/tree/main/products]

Posted by 구차니
Games/Controllers - 2025. 9. 15. 23:32

Took my Joytron EX M AIR out for the first time in ages,

and the analog stick area had turned sticky, so I peeled it off and tried to swap in an analog stick part I'd bought earlier,

but the hole size is different, so that failed -_-

 

It's decaying: rub it on the desk and it leaves black streaks like a pencil -_-

 

Undid the screws, lifted it, and parts rained out; panic -_-

The battery is soldered straight in with no connector, and the motor side rattles if clamped down tight,

so they just float in mid-air.. whoa.. impressive.

 

Definitely feels like a better-grade part than the ones I have.

 

+

2025.09.16

The head seems to be the right one, but the price..

[링크 : http://itempage3.auction.co.kr/DetailView.aspx?itemno=E854087313] 13,820 KRW / free shipping

[링크 : http://itempage3.auction.co.kr/DetailView.aspx?itemno=E854088395] 12,600 KRW / free shipping

 

Compared with mine, the hole is clearly smaller.

 

Struggled and just barely got it reassembled ㅠㅠ phew

 

 

+

Apparently it costs more because it uses a Hall sensor instead of a potentiometer.

Of course, more expensive doesn't automatically mean better;

the upside is a smaller dead zone,

and the downside, I suppose, is that some products respond a bit slower because they need preprocessing such as noise filtering?

[링크 : https://gall.dcinside.com/mgallery/board/view/?id=gamepad&no=27383]

[링크 : https://blog.naver.com/sjejfdlskek/223173724317]

Posted by 구차니

The versions must line up well enough, because it does run.


Epoch 1/25
 52/755 [=>............................] - ETA: 1:04:39 - loss: 0.3203 - accuracy: 0.8534    

 

The main installed package versions are as follows,

keras                        2.14.0
mobilenet-v3                 0.1.2
numpy                        1.24.4
tensorflow                   2.14.0

 

and the modified source is below.

I fed it the whole VOC directory and it somehow runs.. but which files is it actually training on.. -_- (flow_from_directory treats each immediate subdirectory as one class and loads the images under it, so whatever subfolders sit at that path became the classes.)

from keras.applications import MobileNet
from keras.models import Sequential,Model 
from keras.layers import Dense,Dropout,Activation,Flatten,GlobalAveragePooling2D
from keras.layers import Conv2D,MaxPooling2D,ZeroPadding2D
from keras.layers import BatchNormalization  # keras.layers.normalization was removed; import from keras.layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# MobileNet is designed to work with images of dim 224,224
img_rows,img_cols = 224,224

MobileNet = MobileNet(weights='imagenet',include_top=False,input_shape=(img_rows,img_cols,3))

# Make every layer of the base model trainable (layers default to trainable anyway);
# despite the original tutorial's comment about freezing, nothing is frozen here

for layer in MobileNet.layers:
    layer.trainable = True

# Let's print our layers
for (i,layer) in enumerate(MobileNet.layers):
    print(str(i),layer.__class__.__name__,layer.trainable)

def addTopModelMobileNet(bottom_model, num_classes):
    """creates the top or head of the model that will be 
    placed ontop of the bottom layers"""
    top_model = bottom_model.output
    top_model = GlobalAveragePooling2D()(top_model)
    top_model = Dense(1024,activation='relu')(top_model)
    top_model = Dense(1024,activation='relu')(top_model)
    top_model = Dense(512,activation='relu')(top_model)
    top_model = Dense(num_classes,activation='softmax')(top_model)
    return top_model

num_classes = 5  # ['Angry','Happy','Neutral','Sad','Surprise'] - left over from the emotion tutorial; must match the number of class folders in the dataset

FC_Head = addTopModelMobileNet(MobileNet, num_classes)

model = Model(inputs = MobileNet.input, outputs = FC_Head)

print(model.summary())

train_data_dir = 'VOC2012_train_val/VOC2012_train_val'
validation_data_dir = 'VOC2012_test/VOC2012_test'

train_datagen = ImageDataGenerator(
                    rescale=1./255,
                    rotation_range=30,
                    width_shift_range=0.3,
                    height_shift_range=0.3,
                    horizontal_flip=True,
                    fill_mode='nearest'
                                   )

validation_datagen = ImageDataGenerator(rescale=1./255)

batch_size = 32

train_generator = train_datagen.flow_from_directory(
                        train_data_dir,
                        target_size = (img_rows,img_cols),
                        batch_size = batch_size,
                        class_mode = 'categorical'
                        )

validation_generator = validation_datagen.flow_from_directory(
                            validation_data_dir,
                            target_size=(img_rows,img_cols),
                            batch_size=batch_size,
                            class_mode='categorical')

from keras.optimizers import RMSprop,Adam
from keras.callbacks import ModelCheckpoint,EarlyStopping,ReduceLROnPlateau

checkpoint = ModelCheckpoint(
                             'emotion_face_mobilNet.h5',
                             monitor='val_loss',
                             mode='min',
                             save_best_only=True,
                             verbose=1)

earlystop = EarlyStopping(
                          monitor='val_loss',
                          min_delta=0,
                          patience=10,
                          verbose=1,restore_best_weights=True)

learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy',  # 'val_acc' is not a logged metric name in Keras 2.x
                                            patience=5, 
                                            verbose=1, 
                                            factor=0.2, 
                                            min_lr=0.0001)

callbacks = [earlystop,checkpoint,learning_rate_reduction]

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(learning_rate=0.001),
              metrics=['accuracy']
              )

nb_train_samples = 24176
nb_validation_samples = 3006

epochs = 25

history = model.fit(
            train_generator,
            steps_per_epoch=nb_train_samples//batch_size,     
            epochs=epochs,
            callbacks=callbacks,
            validation_data=validation_generator,
            validation_steps=nb_validation_samples//batch_size)


 

It errored out mid-run, mental crash.. and suddenly I can't be bothered..

Is some file count off? (The last traceback line gives it away: logits_size=[32,5] vs labels_size=[32,3]. flow_from_directory counted subfolders as classes, and a raw VOC train_val tree probably has 5 of them (Annotations, ImageSets, JPEGImages, SegmentationClass, SegmentationObject) while the test tree has only 3, so the model head's 5 outputs don't match the validation labels. The 'input ran out of data' warning just means the generator produced fewer batches than steps_per_epoch * epochs requested.)

Epoch 1/25
718/755 [===========================>..] - ETA: 3:28 - loss: 0.1569 - accuracy: 0.9301WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 18875 batches). You may need to use the repeat() function when building your dataset.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:

Detected at node categorical_crossentropy/softmax_cross_entropy_with_logits defined at (most recent call last):
  File "<stdin>", line 1, in <module>

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/engine/training.py", line 1832, in fit

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/engine/training.py", line 2272, in evaluate

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/engine/training.py", line 4079, in run_step

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/engine/training.py", line 2042, in test_function

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/engine/training.py", line 2025, in step_function

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/engine/training.py", line 2013, in run_step

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/engine/training.py", line 1895, in test_step

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/engine/training.py", line 1185, in compute_loss

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/engine/compile_utils.py", line 277, in __call__

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/losses.py", line 143, in __call__

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/losses.py", line 270, in call

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/losses.py", line 2221, in categorical_crossentropy

  File "/home/minimonk/.local/lib/python3.10/site-packages/keras/src/backend.py", line 5581, in categorical_crossentropy

logits and labels must be broadcastable: logits_size=[32,5] labels_size=[32,3]
 [[{{node categorical_crossentropy/softmax_cross_entropy_with_logits}}]] [Op:__inference_test_function_15346]
>>> 
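
A quick way to confirm that suspicion, using the generators defined above: flow_from_directory reports how many class subdirectories it actually found.

print(train_generator.num_classes, train_generator.class_indices)
print(validation_generator.num_classes, validation_generator.class_indices)

# safer than hard-coding 5: derive the head size from the data
num_classes = train_generator.num_classes

If the two counts differ (apparently 5 vs 3 here), the directory layout needs fixing before anything else.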

 

The full pip package version list is below.

$ pip list
Package                      Version
---------------------------- ----------------
absl-py                      2.3.1
appdirs                      1.4.4
apturl                       0.5.2
astunparse                   1.6.3
attrs                        21.2.0
bcrypt                       3.2.0
beautifulsoup4               4.10.0
beniget                      0.4.1
blinker                      1.4
Brlapi                       0.8.3
Brotli                       1.0.9
cachetools                   5.5.2
certifi                      2020.6.20
chardet                      4.0.0
click                        8.0.3
colorama                     0.4.4
command-not-found            0.3
cryptography                 3.4.8
cupshelpers                  1.0
cycler                       0.11.0
dbus-python                  1.2.18
decorator                    4.4.2
defer                        1.0.6
distro                       1.7.0
distro-info                  1.1+ubuntu0.2
duplicity                    0.8.21
fasteners                    0.14.1
flatbuffers                  25.2.10
fonttools                    4.29.1
fs                           2.4.12
future                       0.18.2
gast                         0.6.0
google-auth                  2.40.3
google-auth-oauthlib         1.0.0
google-pasta                 0.2.0
grpcio                       1.74.0
h5py                         3.14.0
html5lib                     1.1
httplib2                     0.20.2
idna                         3.3
importlib-metadata           4.6.4
jeepney                      0.7.1
keras                        2.14.0
keyring                      23.5.0
kiwisolver                   1.3.2
language-selector            0.1
launchpadlib                 1.10.16
lazr.restfulclient           0.14.4
lazr.uri                     1.0.6
libclang                     18.1.1
lockfile                     0.12.2
louis                        3.20.0
lxml                         4.8.0
lz4                          3.1.3+dfsg
macaroonbakery               1.3.1
Mako                         1.1.3
Markdown                     3.9
markdown-it-py               4.0.0
MarkupSafe                   3.0.2
matplotlib                   3.5.1
mdurl                        0.1.2
meld                         3.20.4
ml-dtypes                    0.2.0
mobilenet-v3                 0.1.2
monotonic                    1.6
more-itertools               8.10.0
mpmath                       0.0.0
namex                        0.1.0
netifaces                    0.11.0
numpy                        1.24.4
oauthlib                     3.2.0
olefile                      0.46
opt_einsum                   3.4.0
optree                       0.17.0
packaging                    21.3
paramiko                     2.9.3
pexpect                      4.8.0
Pillow                       9.0.1
pip                          22.0.2
Pivy                         0.6.5
ply                          3.11
protobuf                     4.25.8
ptyprocess                   0.7.0
pyasn1                       0.6.1
pyasn1_modules               0.4.2
pycairo                      1.20.1
pycups                       2.0.1
Pygments                     2.19.2
PyGObject                    3.42.1
PyJWT                        2.3.0
pymacaroons                  0.13.0
PyNaCl                       1.5.0
pyparsing                    2.4.7
pyRFC3339                    1.1
python-apt                   2.4.0+ubuntu4
python-dateutil              2.8.1
python-debian                0.1.43+ubuntu1.1
pythran                      0.10.0
pytz                         2022.1
pyxdg                        0.27
PyYAML                       5.4.1
reportlab                    3.6.8
requests                     2.25.1
requests-oauthlib            2.0.0
rich                         14.1.0
rsa                          4.9.1
scipy                        1.15.3
scour                        0.38.2
SecretStorage                3.3.1
setuptools                   59.6.0
six                          1.16.0
soupsieve                    2.3.1
ssh-import-id                5.11
sympy                        1.9
systemd-python               234
tensorboard                  2.14.1
tensorboard-data-server      0.7.2
tensorflow                   2.14.0
tensorflow-estimator         2.14.0
tensorflow-io-gcs-filesystem 0.37.1
termcolor                    3.1.0
typing_extensions            4.15.0
ubuntu-drivers-common        0.0.0
ubuntu-pro-client            8001
ufoLib2                      0.13.1
ufw                          0.36.1
unattended-upgrades          0.1
unicodedata2                 14.0.0
urllib3                      1.26.5
usb-creator                  0.3.7
wadllib                      1.3.6
webencodings                 0.5.1
Werkzeug                     3.1.3
wheel                        0.37.1
wrapt                        1.14.2
xdg                          5
xkit                         0.0.0
zipp                         1.0.0

 

-------- It may be less stressful not to consult what's below...?

This retries a March 2020 write-up in 2025 with keras and tensorflow.

 

For a start, installing as below seems to more or less work:

$ pip install mobilenet-v3
$ pip install tensorflow
$ pip install numpy==1.26.4

 

Detailed log:

$ pip install mobilenet-v3
Defaulting to user installation because normal site-packages is not writeable
Collecting mobilenet-v3
  Downloading mobilenet_v3-0.1.4-py3-none-any.whl (18 kB)
Installing collected packages: mobilenet-v3
Successfully installed mobilenet-v3-0.1.4

$ pip install tensorflow
Defaulting to user installation because normal site-packages is not writeable
Collecting tensorflow
  Downloading tensorflow-2.20.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (620.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 620.4/620.4 MB 2.5 MB/s eta 0:00:00
Collecting google_pasta>=0.1.1
  Downloading google_pasta-0.2.0-py3-none-any.whl (57 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.5/57.5 KB 9.6 MB/s eta 0:00:00
Collecting absl-py>=1.0.0
  Downloading absl_py-2.3.1-py3-none-any.whl (135 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 135.8/135.8 KB 12.8 MB/s eta 0:00:00
Collecting libclang>=13.0.0
  Downloading libclang-18.1.1-py2.py3-none-manylinux2010_x86_64.whl (24.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.5/24.5 MB 9.6 MB/s eta 0:00:00
Collecting gast!=0.5.0,!=0.5.1,!=0.5.2,>=0.2.1
  Downloading gast-0.6.0-py3-none-any.whl (21 kB)
Collecting protobuf>=5.28.0
  Downloading protobuf-6.32.1-cp39-abi3-manylinux2014_x86_64.whl (322 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 322.0/322.0 KB 11.5 MB/s eta 0:00:00
Collecting ml_dtypes<1.0.0,>=0.5.1
  Downloading ml_dtypes-0.5.3-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (4.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 9.9 MB/s eta 0:00:00
Collecting numpy>=1.26.0
  Downloading numpy-2.2.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.8/16.8 MB 9.9 MB/s eta 0:00:00
Requirement already satisfied: packaging in /usr/lib/python3/dist-packages (from tensorflow) (21.3)
Requirement already satisfied: six>=1.12.0 in /usr/lib/python3/dist-packages (from tensorflow) (1.16.0)
Collecting termcolor>=1.1.0
  Downloading termcolor-3.1.0-py3-none-any.whl (7.7 kB)
Collecting typing_extensions>=3.6.6
  Downloading typing_extensions-4.15.0-py3-none-any.whl (44 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 44.6/44.6 KB 8.2 MB/s eta 0:00:00
Collecting tensorboard~=2.20.0
  Downloading tensorboard-2.20.0-py3-none-any.whl (5.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.5/5.5 MB 10.9 MB/s eta 0:00:00
Collecting astunparse>=1.6.0
  Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting opt_einsum>=2.3.2
  Downloading opt_einsum-3.4.0-py3-none-any.whl (71 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.9/71.9 KB 8.4 MB/s eta 0:00:00
Collecting h5py>=3.11.0
  Downloading h5py-3.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.6/4.6 MB 9.4 MB/s eta 0:00:00
Collecting flatbuffers>=24.3.25
  Downloading flatbuffers-25.2.10-py2.py3-none-any.whl (30 kB)
Collecting keras>=3.10.0
  Downloading keras-3.11.3-py3-none-any.whl (1.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.4/1.4 MB 8.0 MB/s eta 0:00:00
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from tensorflow) (59.6.0)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/lib/python3/dist-packages (from tensorflow) (2.25.1)
Collecting wrapt>=1.11.0
  Downloading wrapt-1.17.3-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl (81 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 82.0/82.0 KB 7.3 MB/s eta 0:00:00
Collecting grpcio<2.0,>=1.24.3
  Downloading grpcio-1.74.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 12.3 MB/s eta 0:00:00
Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/lib/python3/dist-packages (from astunparse>=1.6.0->tensorflow) (0.37.1)
Collecting optree
  Downloading optree-0.17.0-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (387 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 388.0/388.0 KB 11.8 MB/s eta 0:00:00
Collecting rich
  Downloading rich-14.1.0-py3-none-any.whl (243 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 243.4/243.4 KB 10.4 MB/s eta 0:00:00
Collecting namex
  Downloading namex-0.1.0-py3-none-any.whl (5.9 kB)
Requirement already satisfied: pillow in /usr/lib/python3/dist-packages (from tensorboard~=2.20.0->tensorflow) (9.0.1)
Collecting werkzeug>=1.0.1
  Downloading werkzeug-3.1.3-py3-none-any.whl (224 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 224.5/224.5 KB 7.7 MB/s eta 0:00:00
Collecting markdown>=2.6.8
  Downloading markdown-3.9-py3-none-any.whl (107 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 107.4/107.4 KB 7.1 MB/s eta 0:00:00
Collecting tensorboard-data-server<0.8.0,>=0.7.0
  Downloading tensorboard_data_server-0.7.2-py3-none-manylinux_2_31_x86_64.whl (6.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.6/6.6 MB 9.9 MB/s eta 0:00:00
Collecting MarkupSafe>=2.1.1
  Downloading MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (20 kB)
Collecting markdown-it-py>=2.2.0
  Downloading markdown_it_py-4.0.0-py3-none-any.whl (87 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 87.3/87.3 KB 8.2 MB/s eta 0:00:00
Collecting pygments<3.0.0,>=2.13.0
  Downloading pygments-2.19.2-py3-none-any.whl (1.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 13.2 MB/s eta 0:00:00
Collecting mdurl~=0.1
  Downloading mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Installing collected packages: namex, libclang, flatbuffers, wrapt, typing_extensions, termcolor, tensorboard-data-server, pygments, protobuf, opt_einsum, numpy, mdurl, MarkupSafe, markdown, grpcio, google_pasta, gast, astunparse, absl-py, werkzeug, optree, ml_dtypes, markdown-it-py, h5py, tensorboard, rich, keras, tensorflow
  WARNING: The script pygmentize is installed in '/home/minimonk/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The scripts f2py and numpy-config are installed in '/home/minimonk/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The script markdown_py is installed in '/home/minimonk/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The script markdown-it is installed in '/home/minimonk/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The script tensorboard is installed in '/home/minimonk/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The scripts import_pb_to_tensorboard, saved_model_cli, tensorboard, tf_upgrade_v2, tflite_convert and toco are installed in '/home/minimonk/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed MarkupSafe-3.0.2 absl-py-2.3.1 astunparse-1.6.3 flatbuffers-25.2.10 gast-0.6.0 google_pasta-0.2.0 grpcio-1.74.0 h5py-3.14.0 keras-3.11.3 libclang-18.1.1 markdown-3.9 markdown-it-py-4.0.0 mdurl-0.1.2 ml_dtypes-0.5.3 namex-0.1.0 numpy-2.2.6 opt_einsum-3.4.0 optree-0.17.0 protobuf-6.32.1 pygments-2.19.2 rich-14.1.0 tensorboard-2.20.0 tensorboard-data-server-0.7.2 tensorflow-2.20.0 termcolor-3.1.0 typing_extensions-4.15.0 werkzeug-3.1.3 wrapt-1.17.3
minimonk@minimonk-HP-EliteBook-2760p:~$ pip install keras
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: keras in ./.local/lib/python3.10/site-packages (3.11.3)
Requirement already satisfied: absl-py in ./.local/lib/python3.10/site-packages (from keras) (2.3.1)
Requirement already satisfied: numpy in ./.local/lib/python3.10/site-packages (from keras) (2.2.6)
Requirement already satisfied: rich in ./.local/lib/python3.10/site-packages (from keras) (14.1.0)
Requirement already satisfied: ml-dtypes in ./.local/lib/python3.10/site-packages (from keras) (0.5.3)
Requirement already satisfied: namex in ./.local/lib/python3.10/site-packages (from keras) (0.1.0)
Requirement already satisfied: optree in ./.local/lib/python3.10/site-packages (from keras) (0.17.0)
Requirement already satisfied: h5py in ./.local/lib/python3.10/site-packages (from keras) (3.14.0)
Requirement already satisfied: packaging in /usr/lib/python3/dist-packages (from keras) (21.3)
Requirement already satisfied: typing-extensions>=4.6.0 in ./.local/lib/python3.10/site-packages (from optree->keras) (4.15.0)
Requirement already satisfied: markdown-it-py>=2.2.0 in ./.local/lib/python3.10/site-packages (from rich->keras) (4.0.0)
Requirement already satisfied: pygments<3.0.0,>=2.13.0 in ./.local/lib/python3.10/site-packages (from rich->keras) (2.19.2)
Requirement already satisfied: mdurl~=0.1 in ./.local/lib/python3.10/site-packages (from markdown-it-py>=2.2.0->rich->keras) (0.1.2)

$ pip install numpy==1.26.4

 

The error message that led to installing numpy 1.26.4:

$ python3
Python 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2025-09-15 15:28:06.544207: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used.
/usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 2.2.6
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.2.6 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.

Traceback (most recent call last):  File "<stdin>", line 1, in <module>
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/__init__.py", line 49, in <module>
    from tensorflow._api.v2 import __internal__
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/_api/v2/__internal__/__init__.py", line 13, in <module>
    from tensorflow._api.v2.__internal__ import feature_column
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/_api/v2/__internal__/feature_column/__init__.py", line 8, in <module>
    from tensorflow.python.feature_column.feature_column_v2 import DenseColumn # line: 1777
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 38, in <module>
    from tensorflow.python.feature_column import feature_column as fc_old
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/python/feature_column/feature_column.py", line 41, in <module>
    from tensorflow.python.layers import base
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/python/layers/base.py", line 16, in <module>
    from tensorflow.python.keras.legacy_tf_layers import base
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/python/keras/__init__.py", line 25, in <module>
    from tensorflow.python.keras import models
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/python/keras/models.py", line 25, in <module>
    from tensorflow.python.keras.engine import training_v1
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/python/keras/engine/training_v1.py", line 46, in <module>
    from tensorflow.python.keras.engine import training_arrays_v1
  File "/home/minimonk/.local/lib/python3.10/site-packages/tensorflow/python/keras/engine/training_arrays_v1.py", line 37, in <module>
    from scipy.sparse import issparse  # pylint: disable=g-import-not-at-top
  File "/usr/lib/python3/dist-packages/scipy/sparse/__init__.py", line 267, in <module>
    from ._csr import *
  File "/usr/lib/python3/dist-packages/scipy/sparse/_csr.py", line 10, in <module>
    from ._sparsetools import (csr_tocsc, csr_tobsr, csr_count_blocks,
AttributeError: _ARRAY_API not found

[링크 : https://mhui123.tistory.com/143]

 

Come to think of it, was MobileNet something that only does classification without SSD? (Apparently so: keras.applications' MobileNet is just the classification backbone; object detection needs a detection head such as SSD attached on top.)

from keras.applications import MobileNet
from keras.layers import Dense,Dropout,Activation, Flatten, GlobalAveragePooling2D
from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import RMSprop, Adam
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau

img_rows,img_cols = 224,224
MobileNet = MobileNet(weights='imagenet', include_top=False, input_shape=(img_rows, img_cols, 3))
2025-09-15 16:00:19.852870: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/mobilenet/mobilenet_1_0_224_tf_no_top.h5
17225924/17225924 ━━━━━━━━━━━━━━━━━━━━ 1s 0us/step 

>>> for layer in MobileNet.layers:
...   layer.trainable = True
... 
>>> for (i, layer) in enumerate(MobileNet.layers):
...   print(str(i), layer.__class__.__name__, layer.trainable)
... 
0 InputLayer True
1 Conv2D True
2 BatchNormalization True
3 ReLU True
4 DepthwiseConv2D True
5 BatchNormalization True
6 ReLU True
7 Conv2D True
8 BatchNormalization True
9 ReLU True
10 ZeroPadding2D True
11 DepthwiseConv2D True
12 BatchNormalization True
13 ReLU True
14 Conv2D True
15 BatchNormalization True
16 ReLU True
17 DepthwiseConv2D True
18 BatchNormalization True
19 ReLU True
20 Conv2D True
21 BatchNormalization True
22 ReLU True
23 ZeroPadding2D True
24 DepthwiseConv2D True
25 BatchNormalization True
26 ReLU True
27 Conv2D True
28 BatchNormalization True
29 ReLU True
30 DepthwiseConv2D True
31 BatchNormalization True
32 ReLU True
33 Conv2D True
34 BatchNormalization True
35 ReLU True
36 ZeroPadding2D True
37 DepthwiseConv2D True
38 BatchNormalization True
39 ReLU True
40 Conv2D True
41 BatchNormalization True
42 ReLU True
43 DepthwiseConv2D True
44 BatchNormalization True
45 ReLU True
46 Conv2D True
47 BatchNormalization True
48 ReLU True
49 DepthwiseConv2D True
50 BatchNormalization True
51 ReLU True
52 Conv2D True
53 BatchNormalization True
54 ReLU True
55 DepthwiseConv2D True
56 BatchNormalization True
57 ReLU True
58 Conv2D True
59 BatchNormalization True
60 ReLU True
61 DepthwiseConv2D True
62 BatchNormalization True
63 ReLU True
64 Conv2D True
65 BatchNormalization True
66 ReLU True
67 DepthwiseConv2D True
68 BatchNormalization True
69 ReLU True
70 Conv2D True
71 BatchNormalization True
72 ReLU True
73 ZeroPadding2D True
74 DepthwiseConv2D True
75 BatchNormalization True
76 ReLU True
77 Conv2D True
78 BatchNormalization True
79 ReLU True
80 DepthwiseConv2D True
81 BatchNormalization True
82 ReLU True
83 Conv2D True
84 BatchNormalization True
85 ReLU True

>>> MobileNet.output
<KerasTensor shape=(None, 7, 7, 1024), dtype=float32, sparse=False, ragged=False, name=keras_tensor_85>
>>> MobileNet.input
<KerasTensor shape=(None, 224, 224, 3), dtype=float32, sparse=False, ragged=False, name=keras_tensor>
>>> MobileNet.summary()
Model: "mobilenet_1.00_224"
┏--------------------------------------┳-----------------------------┳-----------------┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡--------------------------------------╇-----------------------------╇-----------------┩
│ input_layer (InputLayer)             │ (None, 224, 224, 3)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv1 (Conv2D)                       │ (None, 112, 112, 32)        │             864 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv1_bn (BatchNormalization)        │ (None, 112, 112, 32)        │             128 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv1_relu (ReLU)                    │ (None, 112, 112, 32)        │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_1 (DepthwiseConv2D)          │ (None, 112, 112, 32)        │             288 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_1_bn (BatchNormalization)    │ (None, 112, 112, 32)        │             128 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_1_relu (ReLU)                │ (None, 112, 112, 32)        │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_1 (Conv2D)                   │ (None, 112, 112, 64)        │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_1_bn (BatchNormalization)    │ (None, 112, 112, 64)        │             256 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_1_relu (ReLU)                │ (None, 112, 112, 64)        │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pad_2 (ZeroPadding2D)           │ (None, 113, 113, 64)        │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_2 (DepthwiseConv2D)          │ (None, 56, 56, 64)          │             576 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_2_bn (BatchNormalization)    │ (None, 56, 56, 64)          │             256 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_2_relu (ReLU)                │ (None, 56, 56, 64)          │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_2 (Conv2D)                   │ (None, 56, 56, 128)         │           8,192 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_2_bn (BatchNormalization)    │ (None, 56, 56, 128)         │             512 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_2_relu (ReLU)                │ (None, 56, 56, 128)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_3 (DepthwiseConv2D)          │ (None, 56, 56, 128)         │           1,152 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_3_bn (BatchNormalization)    │ (None, 56, 56, 128)         │             512 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_3_relu (ReLU)                │ (None, 56, 56, 128)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_3 (Conv2D)                   │ (None, 56, 56, 128)         │          16,384 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_3_bn (BatchNormalization)    │ (None, 56, 56, 128)         │             512 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_3_relu (ReLU)                │ (None, 56, 56, 128)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pad_4 (ZeroPadding2D)           │ (None, 57, 57, 128)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_4 (DepthwiseConv2D)          │ (None, 28, 28, 128)         │           1,152 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_4_bn (BatchNormalization)    │ (None, 28, 28, 128)         │             512 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_4_relu (ReLU)                │ (None, 28, 28, 128)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_4 (Conv2D)                   │ (None, 28, 28, 256)         │          32,768 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_4_bn (BatchNormalization)    │ (None, 28, 28, 256)         │           1,024 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_4_relu (ReLU)                │ (None, 28, 28, 256)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_5 (DepthwiseConv2D)          │ (None, 28, 28, 256)         │           2,304 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_5_bn (BatchNormalization)    │ (None, 28, 28, 256)         │           1,024 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_5_relu (ReLU)                │ (None, 28, 28, 256)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_5 (Conv2D)                   │ (None, 28, 28, 256)         │          65,536 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_5_bn (BatchNormalization)    │ (None, 28, 28, 256)         │           1,024 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_5_relu (ReLU)                │ (None, 28, 28, 256)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pad_6 (ZeroPadding2D)           │ (None, 29, 29, 256)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_6 (DepthwiseConv2D)          │ (None, 14, 14, 256)         │           2,304 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_6_bn (BatchNormalization)    │ (None, 14, 14, 256)         │           1,024 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_6_relu (ReLU)                │ (None, 14, 14, 256)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_6 (Conv2D)                   │ (None, 14, 14, 512)         │         131,072 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_6_bn (BatchNormalization)    │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_6_relu (ReLU)                │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_7 (DepthwiseConv2D)          │ (None, 14, 14, 512)         │           4,608 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_7_bn (BatchNormalization)    │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_7_relu (ReLU)                │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_7 (Conv2D)                   │ (None, 14, 14, 512)         │         262,144 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_7_bn (BatchNormalization)    │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_7_relu (ReLU)                │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_8 (DepthwiseConv2D)          │ (None, 14, 14, 512)         │           4,608 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_8_bn (BatchNormalization)    │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_8_relu (ReLU)                │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_8 (Conv2D)                   │ (None, 14, 14, 512)         │         262,144 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_8_bn (BatchNormalization)    │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_8_relu (ReLU)                │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_9 (DepthwiseConv2D)          │ (None, 14, 14, 512)         │           4,608 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_9_bn (BatchNormalization)    │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_9_relu (ReLU)                │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_9 (Conv2D)                   │ (None, 14, 14, 512)         │         262,144 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_9_bn (BatchNormalization)    │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_9_relu (ReLU)                │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_10 (DepthwiseConv2D)         │ (None, 14, 14, 512)         │           4,608 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_10_bn (BatchNormalization)   │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_10_relu (ReLU)               │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_10 (Conv2D)                  │ (None, 14, 14, 512)         │         262,144 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_10_bn (BatchNormalization)   │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_10_relu (ReLU)               │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_11 (DepthwiseConv2D)         │ (None, 14, 14, 512)         │           4,608 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_11_bn (BatchNormalization)   │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_11_relu (ReLU)               │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_11 (Conv2D)                  │ (None, 14, 14, 512)         │         262,144 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_11_bn (BatchNormalization)   │ (None, 14, 14, 512)         │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_11_relu (ReLU)               │ (None, 14, 14, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pad_12 (ZeroPadding2D)          │ (None, 15, 15, 512)         │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_12 (DepthwiseConv2D)         │ (None, 7, 7, 512)           │           4,608 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_12_bn (BatchNormalization)   │ (None, 7, 7, 512)           │           2,048 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_12_relu (ReLU)               │ (None, 7, 7, 512)           │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_12 (Conv2D)                  │ (None, 7, 7, 1024)          │         524,288 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_12_bn (BatchNormalization)   │ (None, 7, 7, 1024)          │           4,096 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_12_relu (ReLU)               │ (None, 7, 7, 1024)          │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_13 (DepthwiseConv2D)         │ (None, 7, 7, 1024)          │           9,216 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_13_bn (BatchNormalization)   │ (None, 7, 7, 1024)          │           4,096 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_dw_13_relu (ReLU)               │ (None, 7, 7, 1024)          │               0 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_13 (Conv2D)                  │ (None, 7, 7, 1024)          │       1,048,576 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_13_bn (BatchNormalization)   │ (None, 7, 7, 1024)          │           4,096 │
├--------------------------------------┼-----------------------------┼-----------------┤
│ conv_pw_13_relu (ReLU)               │ (None, 7, 7, 1024)          │               0 │
└--------------------------------------┴-----------------------------┴-----------------┘
 Total params: 3,228,864 (12.32 MB)
 Trainable params: 3,206,976 (12.23 MB)
 Non-trainable params: 21,888 (85.50 KB)

[링크 : https://kau-deeperent.tistory.com/m/59]

 

# from keras.preprocessing.image import ImageDataGenerator  # this import errored out
from tensorflow.keras.preprocessing.image import ImageDataGenerator

[링크 : https://sugyeong0425.tistory.com/151]

 

Explanation of the VOC2012 dataset:

[링크 : https://bo-10000.tistory.com/38]

[링크 : https://velog.io/@kyungmin1029/CV-OpenCV]

 

It's from August 2024, so probably worth a try?

[링크 : https://velog.io/@choonsik_mom/MobileNet-SSD-object-detector-커스텀-데이터-학습하기-m3j5d0xh]

Posted by 구차니
Theory/Electronics - 2025. 9. 15. 14:06

Supposedly there are two approaches: blasting ultrasound to jam the whole band, and jamming the whole band with white noise.

With ultrasound, it seems to work only within line of sight.

 

[링크 : https://www.isecus.com/audio-recording-jammer-comparison/]

[링크 : https://github.com/mcore1976/antispy-jammer]

Posted by 구차니