After getting back from grocery shopping I was so exhausted that I passed out for more than three hours.
I had planned to tune up the bicycle a bit and go for a ride, but ugh..
Visual Geometry Group
The number after VGG is the number of weight layers (convolution + fully connected) in the network.
CNN (Convolutional Neural Network) - does that translate to something like a 'spiral/convoluted' neural network?
[링크 : https://wikidocs.net/164796]
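A quick sanity check of the naming (a sketch assuming the stock keras.applications models): counting only the convolution and fully connected layers, which is what the 16/19 refers to.

from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.layers import Conv2D, Dense

for model in (VGG16(weights=None), VGG19(weights=None)):    # weights=None skips the imagenet download
    weight_layers = [l for l in model.layers if isinstance(l, (Conv2D, Dense))]
    print(model.name, len(weight_layers))                   # expected: vgg16 16, vgg19 19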
It looks like it can also be used for detection...
[링크 : https://github.com/zubairsamo/Object-Detection-With-Tensorflow-Using-VGG16]
It turns out to be simple: just use the VGG16 that ships with Keras directly..
And save/load works there too, so why doesn't it work for me.. ㅠㅠ
# let's import the pre-trained VGG16 that already ships with tensorflow.keras for computer vision
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input

# ImageNet is a competition held every year, and VGG16 is its 2013-14 winner (per the original comment);
# here we only want the convolutional layers, which is why include_top=False
vgg = VGG16(weights='imagenet', include_top=False, input_tensor=Input(shape=(224, 224, 3)))

# let's save the model (`model` is built further down in the linked notebook; only the save/load calls are shown here)
model.save('detect_Planes.h5')

from tensorflow.keras.models import load_model
model = load_model('/content/detect_Planes.h5')
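For reference, a minimal sketch of the usual pattern around that include_top=False backbone: freeze it and attach a small head, which keeps the whole thing a Functional model so save()/load_model() work. The head sizes and the single-box output below are made-up placeholders, not what the linked repository actually uses.

import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input, Flatten, Dense

# frozen VGG16 backbone without its fully connected classifier
base = VGG16(weights='imagenet', include_top=False, input_tensor=Input(shape=(224, 224, 3)))
base.trainable = False

# hypothetical head: 4 outputs for a single bounding box (x, y, w, h)
x = Flatten()(base.output)
x = Dense(128, activation='relu')(x)
box = Dense(4, activation='sigmoid', name='bbox')(x)

model = tf.keras.Model(inputs=base.input, outputs=box)
model.compile(optimizer='adam', loss='mse')

model.save('vgg16_bbox.h5')                              # Functional models round-trip through HDF5 fine
restored = tf.keras.models.load_model('vgg16_bbox.h5')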
Browsing the model zoo, the model names come with a whole string of suffixes attached,
and honestly I didn't know what MobileNet v2 is or what SSD is, so I got confused and looked it up.
I do remember searching about FPN before, but the memory is hazy..
base network (MobileNetV2)
- Consists of a neural network and can be used for classification or detection.
- With a softmax layer at the end of the network, it works as a classifier.
detection network (Single Shot Detector or SSD)
- Uses SSD (Single Shot Detection) or an RPN (Region Proposal Network / R-CNN)
- to detect multiple objects in an image and propose regions (ROIs).
- R-CNN : Regions with Convolutional Neural Networks
[링크 : https://kr.mathworks.com/help/vision/ug/getting-started-with-r-cnn-fast-r-cnn-and-faster-r-cnn.html]
feature extractor (FPN-Lite)
SSD can fail to detect objects that are too close (too large, or only partially visible) or too small (far away, or just small),
so an FPN (Feature Pyramid Network), stacked in a pyramid shape, is used to extract features and detect objects at various scales.
SSD itself uses a scheme called Pyramidal Feature Hierarchy, extracting multi-scale features from feature maps of different scales.
The FPN model is based on a Region Proposal Network (RPN) and Fast R-CNN.
[링크 : https://eehoeskrap.tistory.com/300]
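To make the pyramid idea concrete, here is a toy sketch of the top-down FPN fusion (not the actual FPN-Lite head from the model zoo; the layer widths and strides below are made up): lateral 1x1 convolutions project the backbone feature maps to a common depth, each coarser map is upsampled and added to the next finer one, and every merged level can then feed its own detection head.

import tensorflow as tf
from tensorflow.keras import layers

def toy_fpn(feature_maps, channels=64):
    """feature_maps: backbone outputs ordered from fine (large HxW) to coarse (small HxW)."""
    # lateral 1x1 projections to a common channel depth
    laterals = [layers.Conv2D(channels, 1, padding='same')(f) for f in feature_maps]
    merged = [laterals[-1]]                      # start from the coarsest level
    for lateral in reversed(laterals[:-1]):      # walk back toward the finer levels
        upsampled = layers.UpSampling2D(2)(merged[-1])
        merged.append(layers.Add()([lateral, upsampled]))
    return list(reversed(merged))                # back to fine-to-coarse order

# dummy backbone outputs at strides 8/16/32 for a 256x256 input (made-up channel counts)
inputs = [tf.keras.Input(shape=(32, 32, 96)),
          tf.keras.Input(shape=(16, 16, 160)),
          tf.keras.Input(shape=(8, 8, 320))]
model = tf.keras.Model(inputs, toy_fpn(inputs))
model.summary()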
But just from reading this, I can't tell whether SSD + MobileNet v2 + FPN is actually a valid combination?
In the MobileNetV2 SSD FPN-Lite, we have a base network (MobileNetV2), a detection network (Single Shot Detector or SSD) and a feature extractor (FPN-Lite).
Base network:
MobileNet, like VGG-Net, LeNet, AlexNet, and all others, are based on neural networks. The base network provides high-level features for classification or detection. If you use a fully connected layer and a softmax layer at the end of these networks, you have a classification.
[Figure] Example of a network composed of many convolutional layers. Filters are applied to each training image at different resolutions, and the output of each convolved image is used as input to the next layer (source: MathWorks)
But you can remove the fully connected and the softmax layers, and replace it with detection networks, like SSD, Faster R-CNN, and others to perform object detection.
Detection network:
The most common detection networks are SSD (Single Shot Detection) and RPN (Regional Proposal Network).
When using SSD, we only need to take one single shot to detect multiple objects within the image. On the other hand, regional proposal networks (RPN) based approaches, such as R-CNN series, need two shots, one for generating region proposals, one for detecting the object of each proposal.
As a consequence, SSD is much faster compared with RPN-based approaches but often trades accuracy with real-time processing speed. They also tend to have issues in detecting objects that are too close or too small.
Feature Pyramid Network:
Detecting objects in different scales is challenging in particular for small objects. Feature Pyramid Network (FPN) is a feature extractor designed with feature pyramid concept to improve accuracy and speed.
I couldn't get the previous one to save (you're supposed to convert the checkpoint to a pb or something, but that wouldn't work either...),
so I'm looking for something else.
[링크 : https://www.kaggle.com/code/suraj520/mobilenet-v2-ssd-scratch-without-tfod]
The one above doesn't have anything like the Sequential setup shown below, so I'm worried it won't save either..
def create_model():
    model = tf.keras.Sequential([
        keras.layers.Dense(512, activation='relu', input_shape=(784,)),
        keras.layers.Dropout(0.2),
        keras.layers.Dense(10)
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
    return model

# Create a basic model instance
model = create_model()
[링크 : https://www.tensorflow.org/tutorials/keras/save_and_load?hl=ko]
[링크 : https://www.tensorflow.org/hub/exporting_tf2_saved_model?hl=ko]
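To check that worry quickly: a Sequential model in the style of the snippet above does survive a full save()/load_model() round trip (a self-contained sketch with random data; whether the Kaggle notebook's model does is a separate question).

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

model.save('mlp.h5')                                   # architecture + weights + optimizer state
restored = tf.keras.models.load_model('mlp.h5')

x = np.random.rand(2, 784).astype('float32')
print(np.allclose(model.predict(x), restored.predict(x)))   # True: the weights came back intact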
Looking at the nnstreamer Python example, it builds the pipeline from a pipeline string as shown below,
then looks up the sink by name and attaches a callback, so it seems there is no need to create and link every element by hand; searching for how to do the same.
gst_launch_cmdline += "tensor_sink name=tensor_sink t. ! "
self.pipeline = Gst.parse_launch(gst_launch_cmdline)

# bus and message callback
bus = self.pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", self.on_bus_message)

self.tensor_filter = self.pipeline.get_by_name("tensor_filter")
self.wayland_sink = self.pipeline.get_by_name("img_tensor")

# tensor sink signal : new data callback
tensor_sink = self.pipeline.get_by_name("tensor_sink")
tensor_sink.connect("new-data", self.new_data_cb)

# @brief Callback for tensor sink signal.
def new_data_cb(self, sink, buffer):
    """Callback for tensor sink signal.

    :param sink: tensor sink element
    :param buffer: buffer from element
    :return: None
    """
For the parse part, it looks like the C function below would do the job,
gst_parse_launch

GstElement *
gst_parse_launch (const gchar * pipeline_description, GError ** error)

Create a new pipeline based on command line syntax. Please note that you might get a return value that is not NULL even though the error is set. In this case there was a recoverable parsing error and you can try to play the pipeline. To create a sub-pipeline (bin) for embedding into an existing pipeline use gst_parse_bin_from_description.

Parameters:
  pipeline_description – the command line describing the pipeline
  error – the error message in case of an erroneous pipeline.

Returns ( [transfer: floating]) – a new element on success, NULL on failure. If more than one toplevel element is specified by the pipeline_description, all elements are put into a GstPipeline, which than is returned.
[링크 : https://gstreamer.freedesktop.org/documentation/gstreamer/gstparse.html?gi-language=c]
and the function below can probably(?) be used to find the element by name.
GObject
 ╰── GInitiallyUnowned
      ╰── GstObject
           ╰── GstElement
                ╰── GstBin
                     ╰── GstPipeline

gst_bin_get_by_name

GstElement *
gst_bin_get_by_name (GstBin * bin, const gchar * name)

Gets the element with the given name from a bin. This function recurses into child bins.

Parameters:
  bin – a GstBin
  name – the element name to search for

Returns ( [transfer: full][nullable]) – the GstElement with the given name
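Putting the two together in Python first (a minimal sketch with a made-up videotestsrc/appsink pipeline, assuming GStreamer and PyGObject are installed); the C version would call gst_parse_launch() and gst_bin_get_by_name() in exactly the same pattern.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# build the whole pipeline from a launch string instead of creating and linking elements one by one
pipeline = Gst.parse_launch(
    'videotestsrc num-buffers=30 ! videoconvert ! appsink name=mysink emit-signals=true')

# look the sink up by the name given inside the string
appsink = pipeline.get_by_name('mysink')

def on_new_sample(sink):
    sample = sink.emit('pull-sample')               # fetch the buffer that just arrived
    print('got', sample.get_buffer().get_size(), 'bytes')
    return Gst.FlowReturn.OK

appsink.connect('new-sample', on_new_sample)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)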
Is it called IPython because it's "interactive Python"?
The notebook used to be part of IPython, but it has since become part of Jupyter.
Installing IPython

There are multiple ways of installing IPython. This page contains simplified installation instructions that should work for most users. Our official documentation contains more detailed instructions for manual installation targeted at advanced users and developers.

If you are looking for installation documentation for the notebook and/or qtconsole, those are now part of Jupyter.
[링크 : https://ipython.org/install.html]
The M710 was much slower than the G4600, and on a second look it turned out to have a G4400T with HD 510 graphics... -_-
The G4600T at least has HD 630, so the gap seemed way too big,
and Intel ARK says the G4400T is Skylake while the G4600T is Kaby Lake. No wonder YouTube was so strangely slow!
Shopping around for a CPU, the i3-7100T actually seems easier to find than the G4600T..
Tracking down the version info
python 3.7
keras 2.8.0
numpy 1.21.6
[링크 : https://github.com/saunack/MobileNetv2-SSD/blob/master/model.ipynb]
Versions installed:
$ pip install tensorflow==2.8.0
$ pip install numpy==1.21.6
$ pip install keras==2.8.0
$ pip install protobuf==3.19.0
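A quick way to confirm what actually got picked up after the installs (sketch):

import tensorflow as tf, keras, numpy as np
import google.protobuf

print(tf.__version__)                 # expect 2.8.0
print(keras.__version__)              # expect 2.8.0
print(np.__version__)                 # expect 1.21.6
print(google.protobuf.__version__)    # expect 3.19.0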
------
Since it says keras 2.8 is required, track it down by the Keras release dates.
v2.8.0 - Jan 7, 2022 (commit d8fcb9d)
[링크 : https://github.com/keras-team/keras/tags?after=v2.9.0-rc1]
Tracking the TensorFlow version:
v2.8.0 - Feb 1, 2022 (commit 3f878cf)
[링크 : https://github.com/tensorflow/tensorflow/tags?after=v2.7.2]
protobuf 3.19.0
numpy 1.24.4 (below 1.25)
$ pip install numpy==1.34
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement numpy==1.34 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.21.2, 1.21.3, 1.21.4, 1.21.5, 1.21.6, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0rc1, 1.23.0rc2, 1.23.0rc3, 1.23.0, 1.23.1, 1.23.2, 1.23.3, 1.23.4, 1.23.5, 1.24.0rc1, 1.24.0rc2, 1.24.0, 1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.25.0rc1, 1.25.0, 1.25.1, 1.25.2, 1.26.0b1, 1.26.0rc1, 1.26.0, 1.26.1, 1.26.2, 1.26.3)
ERROR: No matching distribution found for numpy==1.34
------
Argh, is TensorFlow the problem or is Keras the problem ㅠㅠ
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
With TF v2 behavior disabled, it fails like this,
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[27], line 1
----> 1 history = model.fit(train_dataset,
      2                     epochs=25,
      3                     validation_data = test_dataset,
      4                     validation_steps=1)

File ~/.local/lib/python3.10/site-packages/keras/engine/training_v1.py:773, in Model.fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    771 if kwargs:
    772     raise TypeError('Unrecognized keyword arguments: ' + str(kwargs))
--> 773 self._assert_compile_was_called()
    774 self._check_call_args('fit')
    776 func = self._select_training_loop(x)

File ~/.local/lib/python3.10/site-packages/keras/engine/training_v1.py:2788, in Model._assert_compile_was_called(self)
   2782 def _assert_compile_was_called(self):
   2783     # Checks whether `compile` has been called. If it has been called,
   2784     # then the optimizer is set. This is different from whether the
   2785     # model is compiled
   2786     # (i.e. whether the model is built and its inputs/outputs are set).
   2787     if not self._compile_was_called:
-> 2788         raise RuntimeError('You must compile your model before '
   2789                            'training/testing. '
   2790                            'Use `model.compile(optimizer, loss)`.')

RuntimeError: You must compile your model before training/testing. Use `model.compile(optimizer, loss)`.
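For reference, that RuntimeError is the generic Keras guard that fires whenever fit() runs on a model whose compile() state it cannot see; a self-contained toy reproduction of the check and its fix (toy model and random data, nothing from the notebook):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# calling model.fit(...) right here would raise:
#   RuntimeError: You must compile your model before training/testing.
model.compile(optimizer='adam', loss='mse')     # compile registers the optimizer and loss
model.fit(np.random.rand(8, 4), np.random.rand(8, 1), epochs=1, verbose=0)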
and with TF v2 behavior enabled,
import tensorflow.compat.v1 as tf
#tf.disable_v2_behavior()
the problem moves into the user-side code instead; what could be going wrong..
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[28], line 1
----> 1 history = model.fit(train_dataset,
      2                     epochs=25,
      3                     validation_data = test_dataset,
      4                     validation_steps=1)

File ~/.local/lib/python3.10/site-packages/keras/engine/training_v1.py:777, in Model.fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    774 self._check_call_args('fit')
    776 func = self._select_training_loop(x)
--> 777 return func.fit(
    778     self,
    779     x=x,
    780     y=y,
    781     batch_size=batch_size,
    782     epochs=epochs,
    783     verbose=verbose,
    784     callbacks=callbacks,
    785     validation_split=validation_split,
    786     validation_data=validation_data,
    787     shuffle=shuffle,
    788     class_weight=class_weight,
    789     sample_weight=sample_weight,
    790     initial_epoch=initial_epoch,
    791     steps_per_epoch=steps_per_epoch,
    792     validation_steps=validation_steps,
    793     validation_freq=validation_freq,
    794     max_queue_size=max_queue_size,
    795     workers=workers,
    796     use_multiprocessing=use_multiprocessing)

File ~/.local/lib/python3.10/site-packages/keras/engine/training_arrays_v1.py:616, in ArrayLikeTrainingLoop.fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    595 def fit(self,
    596         model,
    597         x=None,
    (...)
    611         validation_freq=1,
    612         **kwargs):
    613     batch_size = model._validate_or_infer_batch_size(batch_size,
    614                                                      steps_per_epoch, x)
--> 616     x, y, sample_weights = model._standardize_user_data(
    617         x,
    618         y,
    619         sample_weight=sample_weight,
    620         class_weight=class_weight,
    621         batch_size=batch_size,
    622         check_steps=True,
    623         steps_name='steps_per_epoch',
    624         steps=steps_per_epoch,
    625         validation_split=validation_split,
    626         shuffle=shuffle)
    628     if validation_data:
    629         val_x, val_y, val_sample_weights = model._prepare_validation_data(
    630             validation_data, batch_size, validation_steps)

File ~/.local/lib/python3.10/site-packages/keras/engine/training_v1.py:2318, in Model._standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
   2316 is_compile_called = False
   2317 if not self._is_compiled and self.optimizer:
-> 2318     self._compile_from_inputs(all_inputs, y_input, x, y)
   2319     is_compile_called = True
   2321 # In graph mode, if we had just set inputs and targets as symbolic tensors
   2322 # by invoking build and compile on the model respectively, we do not have to
   2323 # feed anything to the model. Model already has input and target data as
   (...)
   2327
   2328 # self.run_eagerly is not free to compute, so we want to reuse the value.

File ~/.local/lib/python3.10/site-packages/keras/engine/training_v1.py:2568, in Model._compile_from_inputs(self, all_inputs, target, orig_inputs, orig_target)
   2565 else:
   2566     target_tensors = None
-> 2568 self.compile(
   2569     optimizer=self.optimizer,
   2570     loss=self.loss,
   2571     metrics=self._compile_metrics,
   2572     weighted_metrics=self._compile_weighted_metrics,
   2573     loss_weights=self.loss_weights,
   2574     target_tensors=target_tensors,
   2575     sample_weight_mode=self.sample_weight_mode,
   2576     run_eagerly=self.run_eagerly,
   2577     experimental_run_tf_function=self._experimental_run_tf_function)

File ~/.local/lib/python3.10/site-packages/tensorflow/python/training/tracking/base.py:629, in no_automatic_dependency_tracking.<locals>._method_wrapper(self, *args, **kwargs)
    627 self._self_setattr_tracking = False  # pylint: disable=protected-access
    628 try:
--> 629     result = method(self, *args, **kwargs)
    630 finally:
    631     self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

File ~/.local/lib/python3.10/site-packages/keras/engine/training_v1.py:443, in Model.compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs)
    439 training_utils_v1.prepare_sample_weight_modes(
    440     self._training_endpoints, sample_weight_mode)
    442 # Creates the model loss and weighted metrics sub-graphs.
--> 443 self._compile_weights_loss_and_weighted_metrics()
    445 # Functions for train, test and predict will
    446 # be compiled lazily when required.
    447 # This saves time when the user is not using all functions.
    448 self.train_function = None

File ~/.local/lib/python3.10/site-packages/tensorflow/python/training/tracking/base.py:629, in no_automatic_dependency_tracking.<locals>._method_wrapper(self, *args, **kwargs)
    627 self._self_setattr_tracking = False  # pylint: disable=protected-access
    628 try:
--> 629     result = method(self, *args, **kwargs)
    630 finally:
    631     self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

File ~/.local/lib/python3.10/site-packages/keras/engine/training_v1.py:1537, in Model._compile_weights_loss_and_weighted_metrics(self, sample_weights)
   1524 self._handle_metrics(
   1525     self.outputs,
   1526     targets=self._targets,
   (...)
   1529     masks=masks,
   1530     return_weighted_metrics=True)
   1532 # Compute total loss.
   1533 # Used to keep track of the total loss value (stateless).
   1534 # eg., total_loss = loss_weight_1 * output_1_loss_fn(...) +
   1535 #      loss_weight_2 * output_2_loss_fn(...) +
   1536 #      layer losses.
--> 1537 self.total_loss = self._prepare_total_loss(masks)

File ~/.local/lib/python3.10/site-packages/keras/engine/training_v1.py:1597, in Model._prepare_total_loss(self, masks)
   1594     sample_weight *= mask
   1596 if hasattr(loss_fn, 'reduction'):
-> 1597     per_sample_losses = loss_fn.call(y_true, y_pred)
   1598     weighted_losses = losses_utils.compute_weighted_loss(
   1599         per_sample_losses,
   1600         sample_weight=sample_weight,
   1601         reduction=losses_utils.ReductionV2.NONE)
   1602     loss_reduction = loss_fn.reduction

File ~/.local/lib/python3.10/site-packages/keras/losses.py:245, in LossFunctionWrapper.call(self, y_true, y_pred)
    242     y_pred, y_true = losses_utils.squeeze_or_expand_dimensions(y_pred, y_true)
    244 ag_fn = tf.__internal__.autograph.tf_convert(self.fn, tf.__internal__.autograph.control_status_ctx())
--> 245 return ag_fn(y_true, y_pred, **self._fn_kwargs)

File ~/.local/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py:692, in convert.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
    690 except Exception as e:  # pylint:disable=broad-except
    691     if hasattr(e, 'ag_error_metadata'):
--> 692         raise e.ag_error_metadata.to_exception(e)
    693     else:
    694         raise

ValueError: in user code:

    File "/tmp/ipykernel_49162/810674056.py", line 8, in Loss  *
        loss += confidenceLoss(y[:,:,:-4],tf.cast(gt[:,:,0],tf.int32))
    File "/tmp/ipykernel_49162/2037607510.py", line 2, in confidenceLoss  *
        unweighted_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(label, y)

    ValueError: Only call sparse_softmax_cross_entropy_with_logits with named arguments (labels=..., logits=..., ...). Received unnamed argument: Tensor("loss/output_1_loss/Cast:0", shape=(None, None), dtype=int32)
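The last frames point at the notebook's confidenceLoss, which passes label and y positionally; the compat.v1 flavor of this op rejects positional arguments, which is exactly what the message says. Whatever version ends up installed, calling it with named arguments is safe (self-contained toy shapes below, not the notebook's real tensors):

import tensorflow as tf

labels = tf.constant([[1, 0]])             # sparse class indices, shape (batch, anchors)
logits = tf.random.uniform((1, 2, 3))      # per-anchor class scores, shape (batch, anchors, classes)

# passing these positionally is what triggers "Only call ... with named arguments"
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
print(loss.shape)                          # (1, 2): one loss value per anchor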
So it was a version problem after all, and TensorFlow could simply be used as plain v2 -_-
import tensorflow as tf
#import tensorflow.compat.v1 as tf
#tf.disable_v2_behavior()
The first commit is from Jul 22, 2020 and the latest from Jul 21, 2022 (two years apart!)
saunack committed on Jul 21, 2022
saunack committed on Jul 22, 2020
[링크 : https://github.com/saunack/MobileNetv2-SSD/commits/master/model.ipynb]
[링크 : https://github.com/saunack/MobileNetv2-SSD/blob/master/model.ipynb]
Saving the model (failed)
from keras.models import load_model
model.save('mnist_mlp_model.h5')
The error comes out like this:
NotImplementedError: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using `save_weights`.
TensorFlow 2.0
TL;DR: do not use model.save() for custom subclass keras model; use save_weights() and load_weights() instead.
[링크 : https://stackoverflow.com/questions/51806852/cant-save-custom-subclassed-model]
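Following that answer, the pattern for a subclassed model is to persist only the weights and rebuild the architecture from code before restoring them; a self-contained sketch with a made-up TinyDetector standing in for the notebook's SSD class:

import tensorflow as tf

class TinyDetector(tf.keras.Model):          # hypothetical stand-in for the subclassed SSD
    def __init__(self):
        super().__init__()
        self.backbone = tf.keras.layers.Dense(32, activation='relu')
        self.head = tf.keras.layers.Dense(4)
    def call(self, x):
        return self.head(self.backbone(x))

model = TinyDetector()
model(tf.zeros((1, 16)))                     # one forward pass creates the variables
model.save_weights('tiny_ckpt')              # TF checkpoint format, weights only

restored = TinyDetector()                    # the architecture is rebuilt from code
restored(tf.zeros((1, 16)))                  # build before loading
restored.load_weights('tiny_ckpt')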
sequential_model.save_weights("ckpt")
[링크 : https://www.tensorflow.org/guide/keras/save_and_serialize?hl=ko]
model.save_weights('model_weights', save_format='tf')
AttributeError: in user code:

    File "/home/user/.local/lib/python3.10/site-packages/keras/saving/saving_utils.py", line 138, in _wrapped_model  *
        outputs = model(*args, **kwargs)
    File "/tmp/ipykernel_53483/1508227539.py", line 46, in call  *
        x = self.MobileNet(x)
    File "/tmp/ipykernel_53483/3997091176.py", line 70, in call  *
        x = self.B2_2(x)
    File "/tmp/ipykernel_53483/1796771022.py", line 69, in call  *
        x = self.residual([inputs,x])
    File "/home/user/.local/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler  **
        raise e.with_traceback(filtered_tb) from None
    File "/home/user/.local/lib/python3.10/site-packages/keras/engine/base_layer.py", line 1102, in __call__
        if self._saved_model_inputs_spec is None:

    AttributeError: 'Add' object has no attribute '_saved_model_inputs_spec'
[링크 : https://github.com/tensorflow/tensorflow/issues/29545]
Dammit, I just can't save this thing!
[링크 : https://www.tensorflow.org/lite/convert?hl=ko]
It feels like it's blowing up in an odd(?) place;
do I need to bump TensorFlow to 2.14.0 instead of 2.8.0?
[링크 : https://www.tensorflow.org/api_docs/python/tf/keras/layers/Add]
+
24.01.11
Going to 2.14.0 changes nothing.. what on earth is this Add object?
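For the record, Add is just the Keras merge layer used for residual shortcuts; it shows up in the traceback above as self.residual([inputs, x]) inside the MobileNetV2 blocks. A tiny stand-alone example of the same pattern (shapes made up):

import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(32, 32, 64))
x = layers.Conv2D(64, 3, padding='same', activation='relu')(inputs)
x = layers.Conv2D(64, 3, padding='same')(x)
outputs = layers.Add()([inputs, x])        # residual shortcut: element-wise sum
model = tf.keras.Model(inputs, outputs)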
+
Wrapping the model created by def SSD() in keras.Sequential lets training proceed.. so why does saving still fail? ㅠㅠ
model = SSD(numBoxes=numBoxes, layerWidth=layerWidths, k = outputChannels)
model = tf.keras.Sequential(model)
# model.model().summary()
[링크 : https://www.tensorflow.org/tutorials/keras/save_and_load?hl=ko]
[링크 : https://www.tensorflow.org/hub/exporting_tf2_saved_model?hl=ko]
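Since model.save() keeps dying inside the Keras serialization path, one workaround worth trying (an untested sketch; TinySSD below is a made-up stand-in, and whether the real subclassed SSD traces cleanly is exactly what would need testing) is exporting a traced serving signature with tf.saved_model.save, which skips that path and also feeds straight into the TFLite converter:

import tensorflow as tf

class TinySSD(tf.keras.Model):                      # hypothetical stand-in for the notebook's SSD
    def __init__(self):
        super().__init__()
        self.body = tf.keras.layers.Conv2D(8, 3, padding='same')
    def call(self, x):
        return self.body(x)

model = TinySSD()
model(tf.zeros((1, 224, 224, 3)))                   # build the variables

# trace one concrete serving function and export a SavedModel, bypassing the H5 path
@tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], tf.float32)])
def serve(images):
    return model(images)

tf.saved_model.save(model, 'export_dir', signatures={'serving_default': serve})

converter = tf.lite.TFLiteConverter.from_saved_model('export_dir')
tflite_model = converter.convert()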
Supposedly the script below can do the conversion, but after converting, running it still fails (it doesn't touch the import line, after all),
$ tf_upgrade_v2 --infile tensorfoo.py --outfile tensorfoo-upgraded.py
[링크 : https://www.tensorflow.org/guide/upgrade?hl=ko]
Instead, change import tensorflow as tf to compat.v1 as below and disable v2 behavior, and the old-version code runs.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
[링크 : https://www.tensorflow.org/guide/migrate?hl=ko]
--------------------------------
Running it, something does come out,
INFO line 7:16: Renamed 'tf.random_uniform' to 'tf.random.uniform'
INFO line 8:16: Renamed 'tf.random_uniform' to 'tf.random.uniform'
INFO line 11:4: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
INFO line 12:4: Renamed 'tf.placeholder' to 'tf.compat.v1.placeholder'
INFO line 25:12: Renamed 'tf.train.GradientDescentOptimizer' to 'tf.compat.v1.train.GradientDescentOptimizer'
INFO line 30:5: Renamed 'tf.Session' to 'tf.compat.v1.Session'
INFO line 31:13: Renamed 'tf.global_variables_initializer' to 'tf.compat.v1.global_variables_initializer'

TensorFlow 2.0 Upgrade Script
-----------------------------
Converted 1 files
but there's no guarantee the result actually runs -_-
If an error like this pops up,
RuntimeError: tf.placeholder() is not compatible with eager execution.
adding the line below takes care of it,
tf.compat.v1.disable_eager_execution()
[링크 : https://luvstudy.tistory.com/122]
but then an error occurs when actually multiplying tensors.
RuntimeError: resource: Attempting to capture an EagerTensor without building a function.
I'm stuck on this one -_-
If the error happened because it tried to capture an EagerTensor without building a function..
does that mean building a function would fix it?
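That reading is roughly right: the message means an EagerTensor (one created while eager execution was still on) is being captured into v1-style graph code. In my understanding the usual cure for these converted scripts is to disable v2 behavior before any tensor or variable is created and to feed plain NumPy arrays through the placeholders; a self-contained sketch in the style of the converted example:

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()            # must run before any tensor or variable is created

x = tf.placeholder(tf.float32, [None, 3])
w = tf.Variable(tf.random.uniform([3, 1]))
y = tf.matmul(x, w)                 # graph-mode multiply, no EagerTensor involved

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: np.ones((2, 3), np.float32)}))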