When a flowgraph generated with version 3.7 is opened in GRC version 3.8, much of the conversion is done automatically. However, certain things must still be updated by hand.
WX GUI blocks: Since the WX GUI blocks are no longer available in version 3.8, they must be replaced with the corresponding QT GUI blocks (a rough example of the mapping follows below).
If blocks have different names between versions 3.7 and 3.8, they must be replaced by hand.
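For instance, the old WX GUI FFT Sink is typically replaced with the QT GUI Frequency Sink. In the Python code that GRC generates this ends up looking roughly like the sketch below; the constructor argument order and the window-type constant are what GRC 3.8 generally emits, but they may differ between point releases, so treat this as an illustration rather than the exact API.

from gnuradio import qtgui
from gnuradio.filter import firdes

samp_rate = 2e6  # example sample rate

# Rough QT GUI replacement for the old WX GUI FFT Sink (wxgui fftsink2)
freq_sink = qtgui.freq_sink_c(
    1024,                        # FFT size
    firdes.WIN_BLACKMAN_hARRIS,  # window type
    0,                           # center frequency
    samp_rate,                   # bandwidth
    "",                          # plot title
    1)                           # number of inputs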
$ apt-cache search osmosdr
gr-osmosdr - Gnuradio blocks from the OsmoSDR project
libgnuradio-osmosdr0.2.0t64 - Gnuradio blocks from the OsmoSDR project - library
libosmosdr-dev - Software defined radio support for OsmoSDR hardware (development files)
libosmosdr0 - Software defined radio support for OsmoSDR hardware (library)
osmo-sdr - Software defined radio support for OsmoSDR hardware (tools)
soapyosmo-common0.8 - Use gr-osmosdr drivers with SoapySDR (common files)
soapysdr-module-osmosdr - OsmoSDR device support for SoapySDR (default version)
soapysdr0.8-module-osmosdr - OsmoSDR device support for SoapySDR
gr-osmosdr - generic gnuradio SDR I/O block
While originally being developed for the OsmoSDR hardware, this block has become a generic SDR I/O block for a variety of SDR hardware, including:
- FUNcube Dongle / Pro+ through gr-funcube
- RTL2832U based DVB-T dongles through librtlsdr
- RTL-TCP spectrum server (see librtlsdr project)
- MSi2500 based DVB-T dongles through libmirisdr
- SDRplay RSP through SDRplay API library
- gnuradio .cfile input through libgnuradio-blocks
- RFSPACE SDR-IQ, SDR-IP, NetSDR (incl. X2 option), Cloud-IQ, and CloudSDR
- AirSpy Wideband Receiver through libairspy
- CCCamp 2015 rad1o Badge through libhackrf
- Great Scott Gadgets HackRF through libhackrf
- Nuand LLC bladeRF through libbladeRF library
- Ettus USRP Devices through Ettus UHD library
- Fairwaves UmTRX through Fairwaves' module for UHD
- Fairwaves XTRX through libxtrx
- Red Pitaya SDR transceiver http://bazaar.redpitaya.com
- FreeSRP through libfreesrp

By using the gr-osmosdr block you can take advantage of a common software API in your application(s) independent of the underlying radio hardware.
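As a quick illustration of that common API, the sketch below opens an RTL-SDR dongle through osmosdr.source; the "rtl=0" device string and the tuning values are only example values I chose, not taken from the package description above.

from gnuradio import blocks, gr
import osmosdr

tb = gr.top_block()
src = osmosdr.source(args="rtl=0")  # the device string selects the backend driver
src.set_sample_rate(2e6)
src.set_center_freq(868.1e6)
src.set_gain(30)
tb.connect(src, blocks.null_sink(gr.sizeof_gr_complex))
# tb.run() would start streaming; for other supported hardware only the
# device string changes (e.g. "hackrf=0"), the rest of the code stays the same.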
$ apt-cache search librtlsdr
librtlsdr-dev - Software defined radio receiver for Realtek RTL2832U (development)
librtlsdr2 - Software defined radio receiver for Realtek RTL2832U (library)
+
2025.09.24
Confirmed that gqrx works normally on Ubuntu 22.04 after installing librtlsdr0.
$ cmake ../
-- The CXX compiler identification is GNU 13.3.0
-- The C compiler identification is GNU 13.3.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Build type not specified: defaulting to release.
CMake Error at CMakeLists.txt:87 (find_package):
  By not providing "FindGnuradio.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "Gnuradio",
  but CMake did not find one.
Could not find a package configuration file provided by "Gnuradio" (requested version 3.9) with any of the following names:
  GnuradioConfig.cmake
  gnuradio-config.cmake
Add the installation prefix of "Gnuradio" to CMAKE_PREFIX_PATH or set "Gnuradio_DIR" to a directory containing one of the above files. If "Gnuradio" provides a separate development package or SDK, be sure it has been installed.
$ sudo apt-get install gnuradio-dev
[ 30%] Building CXX object lib/CMakeFiles/gnuradio-lora.dir/message_socket_sink_impl.cc.o
/home/minimonk/src/gr-lora/lib/decoder_impl.cc:28:10: fatal error: liquid/liquid.h: No such file or directory
   28 | #include <liquid/liquid.h>
      |          ^~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [lib/CMakeFiles/gnuradio-lora.dir/build.make:76: lib/CMakeFiles/gnuradio-lora.dir/decoder_impl.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:234: lib/CMakeFiles/gnuradio-lora.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
$ apt-cache search liquid
libliquid-dev - signal processing library for software defined radio (development files)
libliquid1 - signal processing library for software defined radio
liquidctl - CLI and Python drivers for AIO liquid coolers and other devices
liquidprompt - adaptative prompt for bash & zsh
liquidsoap - audio streaming language
liquidsoap-doc - Documentation for Liquidsoap
liquidsoap-mode - Emacs mode for editing Liquidsoap code
liquidwar - truly original multiplayer wargame
liquidwar-data - data files for Liquid War
liquidwar-server - Liquid War server
ruby-jekyll-gist - Liquid tag for displaying GitHub Gists in Jekyll sites
ruby-jekyll-include-cache - Jekyll plugin to cache the rendering of Liquid includes
ruby-liquid - Ruby library for rendering safe HTML and email templates
ruby-liquid-c - liquid performance extension in C
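The missing liquid/liquid.h header is shipped by the libliquid-dev package listed above, so installing it should clear the compile error:

$ sudo apt-get install libliquid-dev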
[ 45%] Linking CXX shared library libgnuradio-lora.so
/usr/bin/ld: cannot find -llog4cpp: No such file or directory
collect2: error: ld returned 1 exit status
make[2]: *** [lib/CMakeFiles/gnuradio-lora.dir/build.make:205: lib/libgnuradio-lora.so.1.0.0.0] Error 1
make[1]: *** [CMakeFiles/Makefile2:234: lib/CMakeFiles/gnuradio-lora.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
$ apt-cache search log4cpp
liblog4cpp-doc - C++ library for flexible logging (documentation)
liblog4cpp5-dev - C++ library for flexible logging (development)
liblog4cpp5v5 - C++ library for flexible logging (runtime)
$ sudo apt-get install liblog4cpp5-dev
+
2025.09.22
gr-lora
$ ./lora_receive_file_nogui.py
[?] Download test LoRa signal to decode? [y/N] y
[+] Downloading https://research.edm.uhasselt.be/probyns/lora/usrp-868.1-sf7-cr4-bw125-crc-0.sigmf-data -> ./example-trace.sigmf-data
[+] Downloading https://research.edm.uhasselt.be/probyns/lora/usrp-868.1-sf7-cr4-bw125-crc-0.sigmf-meta -> ./example-trace.sigmf-meta
[+] Configuration: 868.1 MHz, SF 7, CR 4/8, BW 125 kHz, prlen 8, crc on, implicit off
[+] Decoding. You should see a header, followed by 'deadbeef' and a CRC 5 times.
Bits (nominal) per symbol: 3.5
Bins per symbol: 128
Samples per symbol: 1024
Decimation: 8
vmcircbuf_prefs::get :info: /home/minimonk/.gnuradio/prefs/vmcircbuf_default_factory failed to open: bad true, fail true, eof true
 04 90 40 de ad be ef 70 0d (p)
 04 90 40 de ad be ef 70 0d (p)
 04 90 40 de ad be ef 70 0d (p)
 04 90 40 de ad be ef 70 0d (p)
 04 90 40 de ad be ef 70 0d (p)
[+] Done
Generating: "/home/minimonk/src/gr-lora/apps/lora_receive_realtime.py"
>>> Warning: This flow graph contains a throttle block and another rate limiting block, e.g. a hardware source or sink. This is usually undesired. Consider removing the throttle block.
>>> Warning: The block 'blocks_throttle_0' is deprecated.
qt.qpa.plugin: Could not find the Qt platform plugin "wayland" in ""
[INFO] [UHD] linux; GNU C++ version 13.2.0; Boost_108300; UHD_4.6.0.0+ds1-5.1ubuntu0.24.04.1
Traceback (most recent call last):
  File "/home/minimonk/src/gr-lora/apps/lora_receive_realtime.py", line 234, in <module>
    main()
  File "/home/minimonk/src/gr-lora/apps/lora_receive_realtime.py", line 212, in main
    tb = top_block_cls()
         ^^^^^^^^^^^^^^^
  File "/home/minimonk/src/gr-lora/apps/lora_receive_realtime.py", line 77, in __init__
    self.uhd_usrp_source_0 = uhd.usrp_source(
                             ^^^^^^^^^^^^^^^^
RuntimeError: LookupError: KeyError: No devices found for ----->
  Empty Device Address
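The generated flowgraph instantiates a uhd.usrp_source, so it aborts when UHD cannot find an attached USRP. Whether UHD actually sees a device can be checked beforehand with:

$ uhd_find_devices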
The Meson build system
Version: 1.9.0
Source dir: /home/odroid/GstPipelineStudio
Build dir: /home/odroid/GstPipelineStudio/builddir
Build type: native build
WARNING: Failed to load Cargo.lock: Could not find an implementation of tomllib, nor toml2json
Project name: gst_pipeline_studio
Project version: 0.2.3
Host machine cpu family: aarch64
Host machine cpu: aarch64
Program python3 found: YES (/usr/bin/python3)
WARNING: You should add the boolean check kwarg to the run_command call.
         It currently defaults to false, but it will default to true in meson 2.0.
         See also: https://github.com/mesonbuild/meson/issues/9300
Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1
Dependency glib-2.0 found: NO. Found 2.64.6 but need: '>= 2.66'
Did not find CMake 'cmake'
Found CMake: NO
Run-time dependency glib-2.0 found: NO
meson.build:13:0: ERROR: Dependency lookup for glib-2.0 with method 'pkgconfig' failed: Invalid version, need 'glib-2.0' ['>= 2.66'] found '2.64.6'.
A full log can be found at /home/odroid/GstPipelineStudio/builddir/meson-logs/meson-log.txt
WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
$ man iperf3
-A, --affinity n/n,m
    Set the CPU affinity, if possible (Linux, FreeBSD, and Windows only). On both the client and server you can set the local affinity by using the n form of this argument (where n is a CPU number). In addition, on the client side you can override the server's affinity for just that one test, using the n,m form of argument. Note that when using this feature, a process will only be bound to a single CPU (as opposed to a set containing potentially multiple CPUs).
Server or Client:
  -p, --port #              server port to listen on/connect to
  -f, --format [kmgtKMGT]   format to report: Kbits, Mbits, Gbits, Tbits
  -i, --interval #          seconds between periodic throughput reports
  -F, --file name           xmit/recv the specified file
  -A, --affinity n/n,m      set CPU affinity
  -B, --bind <host>         bind to the interface associated with the address <host>
  -V, --verbose             more detailed output
  -J, --json                output in JSON format
  --logfile f               send output to a log file
  --forceflush              force flushing output at every interval
  --timestamps <format>     emit a timestamp at the start of each output line
                            (using optional format string as per strftime(3))
  -d, --debug               emit debugging output
  -v, --version             show version information and quit
  -h, --help                show this message and quit
Server specific:
  -s, --server              run in server mode
  -D, --daemon              run the server as a daemon
  -I, --pidfile file        write PID file
  -1, --one-off             handle one client connection then exit
  --server-bitrate-limit #[KMG][/#]
                            server's total bit rate limit (default 0 = no limit)
                            (optional slash and number of secs interval for averaging total data rate. Default is 5 seconds)
  --rsa-private-key-path    path to the RSA private key used to decrypt authentication credentials
  --authorized-users-path   path to the configuration file containing user credentials
Client specific:
  -c, --client <host>       run in client mode, connecting to <host>
  --sctp                    use SCTP rather than TCP
  -X, --xbind <name>        bind SCTP association to links
  --nstreams #              number of SCTP streams
  -u, --udp                 use UDP rather than TCP
  --connect-timeout #       timeout for control connection setup (ms)
  -b, --bitrate #[KMG][/#]  target bitrate in bits/sec (0 for unlimited)
                            (default 1 Mbit/sec for UDP, unlimited for TCP)
                            (optional slash and packet count for burst mode)
  --pacing-timer #[KMG]     set the timing for pacing, in microseconds (default 1000)
  --fq-rate #[KMG]          enable fair-queuing based socket pacing in bits/sec (Linux only)
  -t, --time #              time in seconds to transmit for (default 10 secs)
  -n, --bytes #[KMG]        number of bytes to transmit (instead of -t)
  -k, --blockcount #[KMG]   number of blocks (packets) to transmit (instead of -t or -n)
  -l, --length #[KMG]       length of buffer to read or write
                            (default 128 KB for TCP, dynamic or 1460 for UDP)
  --cport <port>            bind to a specific client port (TCP and UDP, default: ephemeral port)
  -P, --parallel #          number of parallel client streams to run
  -R, --reverse             run in reverse mode (server sends, client receives)
  --bidir                   run in bidirectional mode. Client and server send and receive data.
  -w, --window #[KMG]       set window size / socket buffer size
  -C, --congestion <algo>   set TCP congestion control algorithm (Linux and FreeBSD only)
  -M, --set-mss #           set TCP/SCTP maximum segment size (MTU - 40 bytes)
  -N, --no-delay            set TCP/SCTP no delay, disabling Nagle's Algorithm
  -4, --version4            only use IPv4
  -6, --version6            only use IPv6
  -S, --tos N               set the IP type of service, 0-255.
                            The usual prefixes for octal and hex can be used,
                            i.e. 52, 064 and 0x34 all specify the same value.
  --dscp N or --dscp val    set the IP dscp value, either 0-63 or symbolic.
                            Numeric values can be specified in decimal, octal and hex (see --tos above).
  -L, --flowlabel N         set the IPv6 flow label (only supported on Linux)
  -Z, --zerocopy            use a 'zero copy' method of sending data
  -O, --omit N              omit the first n seconds
  -T, --title str           prefix every output line with this string
  --extra-data str          data string to include in client and server JSON
  --get-server-output       get results from server
  --udp-counters-64bit      use 64-bit counters in UDP test packets
  --repeating-payload       use repeating pattern in payload, instead of randomized payload (like in iperf2)
  --username                username for authentication
  --rsa-public-key-path     path to the RSA public key used to encrypt authentication credentials
[KMG] indicates options that support a K/M/G suffix for kilo-, mega-, or giga-
Set the CPU affinity for the sender (-A 2) or for both the sender and receiver (-A 2,3), where core numbering starts at 0. This has the same effect as running numactl -C 2 iperf3.
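As an example based on the man page excerpt above (192.168.0.10 is just a placeholder address), pinning the server to core 2, and the client to core 2 while overriding the server-side affinity to core 3 for one test, would look like:

$ iperf3 -s -A 2
$ iperf3 -c 192.168.0.10 -A 2,3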
If I pull just the weights out of a .tflite file and continue training from them, does that count as fine-tuning / transfer learning?
The conversion from a TensorFlow SavedModel or tf.keras H5 model to .tflite is an irreversible process. Specifically, the original model topology is optimized during the compilation by the TFLite converter, which leads to some loss of information. Also, the original tf.keras model's loss and optimizer configurations are discarded, because those aren't required for inference.
However, the .tflite file still contains some information that can help you restore the original trained model. Most importantly, the weight values are available, although they might be quantized, which could lead to some loss in precision.
The code example below shows you how to read weight values from a .tflite file after it's created from a simple trained tf.keras.Model.
import numpy as np
import tensorflow as tf

# First, create and train a dummy model for demonstration purposes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=[5], activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(loss="binary_crossentropy", optimizer="sgd")

# Convert it to a TFLite model file.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted.tflite", "wb").write(tflite_model)

# Use `tf.lite.Interpreter` to load the written .tflite back from the file system.
interpreter = tf.lite.Interpreter(model_path="converted.tflite")
all_tensor_details = interpreter.get_tensor_details()
interpreter.allocate_tensors()

for tensor_item in all_tensor_details:
    print("Weight %s:" % tensor_item["name"])
    print(interpreter.tensor(tensor_item["index"])())
So whether you read the weights back like this or build the model with tf.keras.applications.MobileNetV2, it seems to load the same way; the only difference is whether pre-trained weights are present.
# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
# or load your own
#base_model = tf.saved_model.load("./pretrained_models/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model")
Transfer learning seems to mean training only the newly added layers, without changing the existing weights.
The typical transfer-learning workflow
This leads us to how a typical transfer learning workflow can be implemented in Keras:
1. Instantiate a base model and load pre-trained weights into it.
2. Freeze all layers in the base model by setting trainable = False.
3. Create a new model on top of the output of one (or several) layers from the base model.
4. Train your new model on your new dataset.

Note that an alternative, more lightweight workflow could also be:
1. Instantiate a base model and load pre-trained weights into it.
2. Run your new dataset through it and record the output of one (or several) layers from the base model. This is called feature extraction.
3. Use that output as input data for a new, smaller model.
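A minimal sketch of that lighter feature-extraction workflow, assuming a frozen base_model like the ones above; new_images and new_labels are placeholder names for your own data:

# Run the frozen base model once over the data and keep its outputs as fixed features.
base_model.trainable = False
features = base_model.predict(new_images)   # new_images: placeholder input array

# Train a new, smaller model on the extracted features only.
small_model = tf.keras.Sequential([
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),
])
small_model.compile(optimizer="adam",
                    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
small_model.fit(features, new_labels, epochs=5)   # new_labels: placeholder labels

The steps below follow the first, standard workflow.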
First, instantiate a base model with pre-trained weights.
base_model = keras.applications.Xception(
    weights='imagenet',  # Load weights pre-trained on ImageNet.
    input_shape=(150, 150, 3),
    include_top=False)  # Do not include the ImageNet classifier at the top.

Then, freeze the base model.
base_model.trainable = False
Create a new model on top.
inputs = keras.Input(shape=(150, 150, 3))
# We make sure that the base_model is running in inference mode here,
# by passing `training=False`. This is important for fine-tuning, as you will
# learn in a few paragraphs.
x = base_model(inputs, training=False)
# Convert features of shape `base_model.output_shape[1:]` to vectors
x = keras.layers.GlobalAveragePooling2D()(x)
# A Dense classifier with a single unit (binary classification)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

Train the model on new data.
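Training on the new data then looks roughly like this; new_dataset and validation_dataset are placeholders for your own tf.data pipelines:

model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[keras.metrics.BinaryAccuracy()])
model.fit(new_dataset, epochs=20, validation_data=validation_dataset)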
Fine-tuning, on the other hand, seems to make the model's own weights trainable again and then train slowly, with a very low learning rate.
Fine-tuning
Once your model has converged on the new data, you can try to unfreeze all or part of the base model and retrain the whole model end-to-end with a very low learning rate.
This is an optional last step that can potentially give you incremental improvements. It could also potentially lead to quick overfitting – keep that in mind.
It is critical to only do this step after the model with frozen layers has been trained to convergence. If you mix randomly-initialized trainable layers with trainable layers that hold pre-trained features, the randomly-initialized layers will cause very large gradient updates during training, which will destroy your pre-trained features.
It's also critical to use a very low learning rate at this stage, because you are training a much larger model than in the first round of training, on a dataset that is typically very small. As a result, you are at risk of overfitting very quickly if you apply large weight updates. Here, you only want to readapt the pretrained weights in an incremental way.
This is how to implement fine-tuning of the whole base model:
# Unfreeze the base model
base_model.trainable = True
# It's important to recompile your model after you make any changes
# to the `trainable` attribute of any inner layer, so that your changes
# are taken into account
model.compile(optimizer=keras.optimizers.Adam(1e-5),  # Very low learning rate
              loss=keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=[keras.metrics.BinaryAccuracy()])
# Train end-to-end. Be careful to stop before you overfit!
model.fit(new_dataset, epochs=10, callbacks=..., validation_data=...)
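One way to "stop before you overfit", as a sketch: add an EarlyStopping callback that watches the validation loss (the dataset names are placeholders):

early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                           restore_best_weights=True)
model.fit(new_dataset, epochs=10, callbacks=[early_stop],
          validation_data=validation_dataset)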
External losses and metrics added via model.add_loss() and model.add_metric() are not saved (unlike with SavedModel). If your model has such losses and metrics and you want to resume training, you have to add them back yourself after loading the model. Note that this does not apply to losses/metrics created inside layers via self.add_loss() and self.add_metric(); as long as the layer is loaded, these are kept, because they are part of the layer's call method. Also, the computation graph of custom objects, such as custom layers, is not included in the saved file. At loading time, Keras needs access to the Python classes/functions of these objects in order to reconstruct the model. See Custom objects.
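A minimal sketch of that last point, assuming a hypothetical custom layer class MyCustomLayer that was used when the H5 file was saved (the file name is also a placeholder):

import tensorflow as tf
from tensorflow import keras

class MyCustomLayer(keras.layers.Layer):  # hypothetical custom layer used in the saved model
    def call(self, inputs):
        return tf.nn.relu(inputs)

# Keras needs the Python class at load time in order to rebuild the model from the H5 file.
model = keras.models.load_model("model_with_custom_layer.h5",
                                custom_objects={"MyCustomLayer": MyCustomLayer})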