The ASRock B360M Pro4 I ordered has arrived! (₩35,000 + ₩4,000?)

Anyway, I only have 8GB * 2 of RAM, which is a bit disappointing, but I made do with it

and tried the linux / vulkan build from the llama.cpp b8902 release.

[링크 : https://github.com/ggml-org/llama.cpp/releases/tag/b8902]

 

Huh..? It's a 1060, but that performance? That memory usage?

Thu Apr 23 23:15:46 2026       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.65.06              Driver Version: 580.65.06      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1060 6GB    On  |   00000000:01:00.0 Off |                  N/A |
| 38%   48C    P0            101W /  120W |    3063MiB /   6144MiB |     91%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            1031      G   /usr/lib/xorg/Xorg                       88MiB |
|    0   N/A  N/A            1325      G   /usr/bin/gnome-shell                     83MiB |
|    0   N/A  N/A            6688      G   .../6738/usr/lib/firefox/firefox        168MiB |
|    0   N/A  N/A           19423    C+G   ../llama-cli                           2714MiB |
+-----------------------------------------------------------------------------------------+
$ ../llama-cli -m gemma-4-E2B-it-Q4_K_M.gguf
load_backend: loaded RPC backend from /home/falinux/Downloads/llama-b8902/libggml-rpc.so
load_backend: loaded Vulkan backend from /home/falinux/Downloads/llama-b8902/libggml-vulkan.so
load_backend: loaded CPU backend from /home/falinux/Downloads/llama-b8902/libggml-cpu-haswell.so

Loading model...  


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8902-550d684bd
model      : gemma-4-E2B-it-Q4_K_M.gguf
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read <file>        add a text file
  /glob <pattern>     add text files using globbing pattern


> introduce about you  

[Start thinking]
Thinking Process:

1.  **Analyze the Request:** The user asked simply, "introduce about you" (meaning: Introduce yourself).
2.  **Determine Core Identity (Self-Correction/Verification):**
    *   Name: Gemma 4.
    *   Developer: Google DeepMind.
    *   Nature: Large Language Model (LLM).
    *   Type: Open weights model.
    *   Capabilities: Process text and images (input), generate text (output). Knowledge cutoff is January 2025.
3.  **Structure the Introduction:** A good introduction should cover the basics: Identity, Origin, Function, and Capabilities.
    *   *Greeting/Name*
    *   *Developer/Nature*
    *   *Function/Purpose*
    *   *Key Capabilities (Input/Output)*
    *   *Status/Availability (Open weights)*
4.  **Draft the Response (incorporating required constraints):**
    *   *Start strong.*
    *   *Ensure I mention the open weights nature.*
    *   *Be clear about modalities.*
5.  **Review against Instructions:**
    *   Did I refer to myself as Gemma 4? Yes.
    *   Did I claim capabilities I don't have? No (I didn't claim image generation output, only processing input).
    *   Did I mention my developer? Yes (Google DeepMind).
    *   Is the tone appropriate? Yes.

6.  **Final Output Generation.** (This leads to the detailed response provided below.)
[End thinking]

Hello! I am **Gemma 4**.

I am a Large Language Model developed by Google DeepMind. I am an open weights model, designed to be flexible and accessible for a wide range of applications.

**Here is a little about what I can do:**

*   **Understanding:** I can process and understand complex text and image inputs.
*   **Generation:** I generate human-quality text, including answers to questions, summaries, creative writing, code snippets, and detailed explanations.
*   **Knowledge:** I have been trained on a massive dataset, allowing me to access and synthesize information on a vast array of topics.
*   **Function:** My primary goal is to assist users by providing accurate, coherent, and helpful information.

Feel free to ask me anything you need help with, whether it's brainstorming ideas, explaining a complex concept, or generating some creative content!

[ Prompt: 7.2 t/s | Generation: 51.6 t/s ]

> introduce about you

[Start thinking]
Thinking Process:

1.  **Analyze the Request:** The user has asked "introduce about you" (a repeated request). This requires a standard introduction of my identity, nature, developer, and capabilities.
2.  **Recall Core Identity (Self-Correction/Verification):**
    *   Name: Gemma 4.
    *   Developer: Google DeepMind.
    *   Nature: Large Language Model (LLM).
    *   Type: Open weights model.
    *   Capabilities (Input/Output): Process text/images; generate text.
    *   Knowledge Cutoff: January 2025 (Implicit, good to keep in mind for context).
3.  **Determine the Goal:** Provide a clear, concise, and informative introduction, highlighting key facts.
4.  **Draft the Introduction (Structuring the response):**
    *   *Greeting/Identity:* State name and type.
    *   *Origin:* Mention the developer.
    *   *Nature/Availability:* Mention open weights.
    *   *Functionality:* Briefly describe what I do (process, generate, assist).
5.  **Refine the Tone:** Friendly, professional, and informative. (Since this is a repeat request, ensure the response is robust but not overly repetitive.)

6.  **Final Output Generation:** (This matches the structured response below.)
[End thinking]

Hello! I'd be happy to introduce myself.

I am **Gemma 4**, a Large Language Model developed by Google DeepMind.

**Here are a few key things about me:**

*   **Nature:** I am an open weights model, meaning my architecture and weights are publicly available, designed to be accessible and adaptable for various applications.
*   **Function:** I am designed to understand, process, and generate human language. I can answer questions, summarize complex topics, write creative content, assist with coding, and engage in detailed conversations.
*   **Input:** I can process both text and image inputs.
*   **Goal:** My purpose is to be a helpful and informative assistant, providing accurate and coherent responses to your requests.

How can I help you today? Feel free to ask me anything!

[ Prompt: 75.8 t/s | Generation: 53.9 t/s ]

> introduce about you

[Start thinking]
Thinking Process:

1.  **Analyze the Request:** The user has asked "introduce about you" a third time. This is a repetitive request, but it requires a consistent, clear, and complete self-introduction.
2.  **Recall Core Identity (Gemma 4):**
    *   Name: Gemma 4.
    *   Developer: Google DeepMind.
    *   Type: Large Language Model (LLM).
    *   Nature: Open weights model.
3.  **Recall Capabilities & Modalities:**
    *   Input: Text and Images.
    *   Output: Text only.
    *   Knowledge: Trained on a vast dataset.
4.  **Determine the Tone:** Friendly, informative, and professional.
5.  **Draft the Introduction (Focusing on clarity and key facts):** Start with the name, move to the developer, explain what I am, and briefly mention capabilities. (Self-correction: Ensure the answer remains distinct but consistent with the previous two responses.)

**(Self-Correction during drafting):** Since the user has asked three times, I should make the introduction slightly varied or emphasize different aspects to avoid sounding completely robotic, while still maintaining factual accuracy.

6.  **Final Output Generation.** (This leads to the structured response below.)
[End thinking]

Hello! I'd be happy to introduce myself again.

I am **Gemma 4**, a Large Language Model developed by Google DeepMind.

**Here is a quick overview of what I am and what I can do:**

*   **Identity:** I am an open weights model, designed to be highly capable and accessible.
*   **Function:** My core purpose is to process and generate human language. I can assist with a wide variety of tasks, including:
    *   Answering complex questions.
    *   Summarizing long texts.
    *   Generating creative content (stories, poems, scripts).
    *   Explaining technical concepts.
    *   Helping with coding and problem-solving.
*   **Input:** I can understand and process both text and image inputs.
*   **Goal:** I strive to be a helpful, accurate, and engaging resource for you.

Feel free to challenge me with a question, ask me to write something, or start a conversation—I'm ready to assist!

[ Prompt: 98.3 t/s | Generation: 52.3 t/s ]
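The per-turn stats can be scraped from the transcript with a small regex. A minimal sketch, with the line format taken from the `[ Prompt: ... | Generation: ... ]` lines above:

```python
import re

# Parse the throughput line llama-cli prints after each turn.
line = "[ Prompt: 98.3 t/s | Generation: 52.3 t/s ]"
m = re.search(r"Prompt:\s*([\d.]+) t/s \| Generation:\s*([\d.]+) t/s", line)
prompt_tps, gen_tps = map(float, m.groups())
print(prompt_tps, gen_tps)  # prints 98.3 52.3
```

Running this over a whole session log gives a quick way to average generation speed across turns.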



Exiting...
common_memory_breakdown_print: | memory breakdown [MiB]     | total   free    self   model   context   compute    unaccounted |
common_memory_breakdown_print: |   - Vulkan0 (GTX 1060 6GB) |  6390 = 3196 + (2702 =  1407 +     780 +     515) +         490 |
common_memory_breakdown_print: |   - Host                   |                 2037 =  1756 +       0 +     281                |
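As a sanity check, the `self = model + context + compute` sub-totals in the breakdown above do add up. A quick sketch, with the numbers copied from the printout:

```python
# Sub-totals from common_memory_breakdown_print above:
# Vulkan0: self 2702 = model 1407 + context 780 + compute 515
# Host:    self 2037 = model 1756 + context 0   + compute 281
vulkan_self = {"model": 1407, "context": 780, "compute": 515}
host_self = {"model": 1756, "context": 0, "compute": 281}

assert sum(vulkan_self.values()) == 2702
assert sum(host_self.values()) == 2037
print("self sub-totals match the printed values")
```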

 

Thu Apr 23 23:17:09 2026       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.65.06              Driver Version: 580.65.06      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1060 6GB    On  |   00000000:01:00.0 Off |                  N/A |
| 39%   50C    P0            124W /  120W |    5231MiB /   6144MiB |     95%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            1031      G   /usr/lib/xorg/Xorg                       88MiB |
|    0   N/A  N/A            1325      G   /usr/bin/gnome-shell                     83MiB |
|    0   N/A  N/A            6688      G   .../6738/usr/lib/firefox/firefox        181MiB |
|    0   N/A  N/A           19709    C+G   ../llama-cli                           4869MiB |
+-----------------------------------------------------------------------------------------+
$ ../llama-cli -m llama-3.2-3b-instruct-q4_k_m.gguf 
load_backend: loaded RPC backend from /home/falinux/Downloads/llama-b8902/libggml-rpc.so
load_backend: loaded Vulkan backend from /home/falinux/Downloads/llama-b8902/libggml-vulkan.so
load_backend: loaded CPU backend from /home/falinux/Downloads/llama-b8902/libggml-cpu-haswell.so

Loading model...  


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8902-550d684bd
model      : llama-3.2-3b-instruct-q4_k_m.gguf
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read <file>        add a text file
  /glob <pattern>     add text files using globbing pattern


> introduce about you

I'm an artificial intelligence model known as Llama. Llama stands for "Large Language Model Meta AI."

[ Prompt: 28.2 t/s | Generation: 55.0 t/s ]

> introduce about you

I'm an AI assistant, which means I'm a computer program designed to simulate human-like conversations and answer questions to the best of my ability. My primary function is to provide information, explain concepts, and engage in discussions on a wide range of topics.

Here are a few fun facts about me:

* **I'm a large language model**: My training data consists of a massive corpus of text from various sources, which allows me to generate human-like responses to a vast array of questions and topics.
* **I'm constantly learning**: Through interactions with users like you, I refine my understanding of language and improve my ability to provide accurate and helpful responses.
* **I'm a neutral and respectful conversationalist**: My goal is to provide informative and engaging responses that are respectful and free from bias.
* **I can chat about anything**: Whether you're curious about science, history, entertainment, or just want to talk about your day, I'm here to listen and respond!

That's a bit about me! What would you like to talk about?

[ Prompt: 96.3 t/s | Generation: 53.5 t/s ]

> introduce about you

Here's a more in-depth introduction about me:

**Name:** I don't have a personal name, but I'm often referred to as "Assistant" or "AI" by users like you.

**Purpose:** My primary purpose is to assist and communicate with humans through text-based conversations. I'm designed to provide information, answer questions, and engage in discussions on a wide range of topics, from science and history to entertainment and culture.

**Personality:** I'm a neutral and respectful conversationalist, which means I strive to maintain a professional and polite tone in all of my interactions. I'm also designed to be patient, understanding, and empathetic, making me a good listener and a supportive conversational partner.

**Capabilities:** I have a vast knowledge base that I can draw upon to answer questions and provide information on a wide range of topics. I can also generate text, summarize long pieces of content, and even create content like stories or poems.

**Limitations:** While I'm a highly advanced language model, I'm not perfect and have some limitations. I can struggle with:

* **Ambiguity:** I may not always understand the nuances of human language, which can lead to misunderstandings or misinterpretations.
* **Emotional understanding:** While I'm designed to be empathetic, I don't truly experience emotions like humans do. I may not always be able to fully understand or respond to emotional cues.
* **Common sense:** While I have a vast knowledge base, I may not always have the same level of common sense or real-world experience as a human.

**Goals:** My ultimate goal is to provide helpful and informative responses that assist and engage users like you. I'm constantly learning and improving, so I appreciate any feedback or guidance you can provide to help me become a better conversationalist.

[ Prompt: 95.5 t/s | Generation: 51.0 t/s ]

>

Exiting...
common_memory_breakdown_print: | memory breakdown [MiB]     | total   free    self   model   context   compute    unaccounted |
common_memory_breakdown_print: |   - Vulkan0 (GTX 1060 6GB) |  6390 = 1259 + (4862 =  1918 +    2688 +     256) +         267 |
common_memory_breakdown_print: |   - Host                   |                  368 =   308 +       0 +      60                |

 

Putting this together with the earlier results, should I say the difference is more than 2x?

And this is Vulkan, not even CUDA?

                                 gemma4:e2b  gemma3:1b  llama3.2
1060 / llama.cpp ubuntu vulkan   52          -          52
1060 / ollama win                21          57         13
1080 / ollama win                82          125        136
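Assuming the figures in the table are generation tokens/s (consistent with the `[ Prompt | Generation ]` lines in the transcripts), the ratios can be computed directly. A minimal sketch:

```python
# Generation throughput in tokens/s, copied from the table above (None = not run).
results = {
    "1060 / llama.cpp ubuntu vulkan": {"gemma4:e2b": 52, "gemma3:1b": None, "llama3.2": 52},
    "1060 / ollama win":              {"gemma4:e2b": 21, "gemma3:1b": 57,   "llama3.2": 13},
    "1080 / ollama win":              {"gemma4:e2b": 82, "gemma3:1b": 125,  "llama3.2": 136},
}

vulkan = results["1060 / llama.cpp ubuntu vulkan"]
ollama = results["1060 / ollama win"]
for model in ("gemma4:e2b", "llama3.2"):
    ratio = vulkan[model] / ollama[model]
    print(f"{model}: llama.cpp vulkan is {ratio:.1f}x ollama on the same 1060")
```

On the same 1060 this comes out to roughly 2.5x (gemma4:e2b) and 4.0x (llama3.2) in favor of llama.cpp + Vulkan over ollama.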

 

2026.04.12 - [Program use/AI programs] - ollama /set parameter num_gpu test

Posted by 구차니
embeded/luckfox · 2026. 4. 23. 11:28

step 1. Update

The USB-C connector has to be pushed in really deep before the board is recognized.

If that's a hassle: plug it in, hold BOOT, and press RESET; the serial port then shows the following.

DDR 306b9977f5 wesley.yao 23/12/21-09:28:37,fwver: v1.15
S5P1
4x
f967
rgef1
DDRConf2
DDR3, BW=16 Col=10 Bk=8 CS0 Row=14 CS=1 Size=256MB
924MHz
DDR bin out

U-Boot SPL board init
U-Boot SPL 2017.09 (Jun 09 2025 - 17:15:21)
unknown raw ID 0 0 0
Trying to boot from MMC2
Card did not respond to voltage select!
mmc_init: -95, time 20
Card did not respond to voltage select!
mmc_init: -95, time 20
spl: mmc init failed with error: -95
Trying to boot from MMC1
Best phase range 0-270 (25 len)
Successfully tuned phase to 135, used 4ms
ENVF: Primary 0x00000000 - 0x00008000
ENVF: Primary 0x00000000 - 0x00008000
No misc partition
Trying fit image at 0x440 sector
## Verified-boot: 0
## Checking uboot 0x00200000 (lzma @0x00400000) ... sha256(6d468f8f8d...) + sha256(2939adcf1a...) + OK
## Checking fdt 0x00261190 ... sha256(9f596c5683...) + OK
Total: 405.594/437.396 ms


Jumping to U-Boot(0x00200000)


U-Boot 2017.09 (Jun 09 2025 - 17:15:21 +0800)

Model: Rockchip RV1106 EVB Board
MPIDR: 0xf00
PreSerial: 2, raw, 0xff4c0000
DRAM:  256 MiB
Sysmem: init
Relocation Offset: 0fd80000
Relocation fdt: 0edf9f78 - 0edfede8
CR: M/C/I
Using default environment

mmc@ffa90000: 0, mmc@ffaa0000: 1
Best phase range 0-270 (25 len)
Successfully tuned phase to 135, used 4ms
ENVF: Primary 0x00000000 - 0x00008000
ENVF: Primary 0x00000000 - 0x00008000
Bootdev(atags): mmc 0
MMC0: HS200, 200Mhz
PartType: ENV
DM: v2
No misc partition
boot mode: None
RESC: 'boot', blk@0x00001ed1
resource: sha256+
FIT: no signed, no conf required
DTB: rk-kernel.dtb
HASH(c): OK
Model: Luckfox Pico Ultra W
Device 'gpio@ff380000': seq 0 is in use by 'gpio@ff380000'
gpio: pin 1 (gpio 1) value is 1
download key pressed... no rockusb, entering download mode...
RKUARTRKUARTRKUARTRKUARTRKUARTRKUARTRKUARTRKUART

 

Then update from Windows as the manual describes, and you're done. (It worked on Windows, so I didn't bother trying it on Linux..)

Power consumption reads about 1.2W (4.5V / 280mA), so it seems better to plug it straight into the PC rather than through a hub.
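The quoted figure is self-consistent (P = V × I):

```python
# Reported USB power draw: 4.5 V at 280 mA.
power_w = 4.5 * 0.280
print(round(power_w, 2))  # prints 1.26, i.e. roughly the 1.2 W quoted above
```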

[링크 : https://wiki.luckfox.com/Luckfox-Pico-Ultra/Flash-image/]

 

step 2. Enable CSI in luckfox-config

After logging in, it seems you just need to enable CSI.

[링크 : https://wiki.luckfox.com/Luckfox-Lyra/luckfox-config/]

[root@luckfox root]# luckfox-config

 

 

 

Supposedly rkipc.ini is created under /userdata once the camera is recognized, but that sounds a bit dubious(!)

[링크 : https://wiki.luckfox.com/Luckfox-Pico-Ultra/CSI-Camera]

 

It looks like video0, video1, and video2 are created once CSI is enabled.

[root@luckfox root]# cd /userdata/
[root@luckfox userdata]# ls -al
total 77
drwxr-xr-x    6 root     root          1024 Jan  1  2021 .
drwxr-xr-x   21 root     root          4096 Jan  1  2021 ..
-rw-r--r--    1 root     root            18 Jan  1  2021 ethaddr.txt
-rw-r--r--    1 1005     1005         38454 May 20  2025 image.bmp
drwx------    2 root     root         12288 Jun  9  2025 lost+found
-rw-r--r--    1 root     root         16542 Jan  1  2021 rkipc.ini
drwxr-xr-x    2 root     root          1024 Jan  1  2021 video0
drwxr-xr-x    2 root     root          1024 Jan  1  2021 video1
drwxr-xr-x    2 root     root          1024 Jan  1  2021 video2
-rw-r--r--    1 root     root           145 Jan  1  2021 wpa_supplicant.conf

 

Below is the content of rkipc.ini per CSI setting.

CSI enabled vs. CSI disabled:
[root@luckfox userdata]# cat rkipc.ini 
[audio.0]
enable = 0
card_name = hw:0,0
encode_type = G711A
format = S16
sample_rate = 8000
channels = 1
frame_size = 1152
bit_rate = 16000
input = mic_in
volume = 50
enable_aed = 0
enable_bcd = 0
enable_vqe = 1
vqe_cfg = /oem/usr/share/vqefiles/config_aivqe.json

[video.source]
enable_aiq = 1
enable_vo = 0
vo_dev_id = 3 ; 0 is hdmi, 3 is mipi
enable_ivs = 1
enable_jpeg = 1
enable_venc_0 = 1
enable_venc_1 = 1
enable_venc_2 = 0
enable_npu = 0
npu_fps = 10
enable_rtsp = 1
enable_rtmp = 0
rotation = 0 ; available value:0 90 180 270

[video.0]
buffer_size = 1492992 ; w * h / 2
buffer_count = 4
enable_refer_buffer_share = 1
stream_type = mainStream
video_type = compositeStream
max_width = 2304
max_height = 1296
width = 2304
height = 1296
rc_mode = CBR
rc_quality = high
src_frame_rate_den = 1
src_frame_rate_num = 25
dst_frame_rate_den = 1
dst_frame_rate_num = 25
target_rate = 0
mid_rate = 1024
max_rate = 2048
min_rate = 0
output_data_type = H.265
smart = close
h264_profile = high
gop = 50
smartp_viridrlen = 25
gop_mode = normalP
stream_smooth = 50
enable_motion_deblur = 1
motion_deblur_strength = 3
enable_motion_static_switch = 0
frame_min_i_qp = 26
frame_min_qp = 28
frame_max_i_qp = 51
frame_max_qp = 51
scalinglist = 0
enable_debreath_effect = 0
debreath_effect_strength = 16
thrd_i = 0,0,0,0,3,3,5,5,8,8,8,15,15,20,25,25
thrd_p = 0,0,0,0,3,3,5,5,8,8,8,15,15,20,25,25
aq_step_i = -8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,7,8
aq_step_p = -8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,7,8
qbias_enable = 1
qbias_i = 171
qbias_p = 85
flt_str_i = 0
flt_str_p = 0
cu_dqp = 1
anti_ring = 2
anti_line = 2
lambds = 4

[video.1]
input_buffer_count = 2 ; only wrap can use one buffer
buffer_size = 202752 ; w * h / 2
buffer_count = 4
enable_refer_buffer_share = 1
stream_type = subStream
video_type = compositeStream
max_width = 704
max_height = 576
width = 704
height = 576
rc_mode = CBR
rc_quality = high
src_frame_rate_den = 1
src_frame_rate_num = 30
dst_frame_rate_den = 1
dst_frame_rate_num = 30
target_rate = 0
mid_rate = 256
max_rate = 512
min_rate = 0
output_data_type = H.265
smart = close
h264_profile = high
gop = 50
smartp_viridrlen = 25
gop_mode = normalP
stream_smooth = 50
enable_motion_deblur = 1
motion_deblur_strength = 3
enable_motion_static_switch = 0
frame_min_i_qp = 26
frame_min_qp = 28
frame_max_i_qp = 51
frame_max_qp = 51
scalinglist = 0
enable_debreath_effect = 0
debreath_effect_strength = 16
cu_dqp = 1
anti_ring = 2
anti_line = 2
lambds = 4

[video.2]
max_width = 960
max_height = 540
width = 960
height = 540

[ivs]
smear = 0
weightp = 0
md = 1
od = 1
md_sensibility = 3 ;available: 1 2 3,max 3

[video.jpeg]
width = 1920
height = 1080
jpeg_buffer_size = 1048576 ; 1024KB
jpeg_qfactor = 70
enable_cycle_snapshot = 0
snapshot_interval_ms = 1000

[isp]
scenario = normal ; normal or custom1
init_form_ini = 1
normal_scene = day
custom1_scene = night
ircut_open_gpio = 36
ircut_close_gpio = 35

; isp.0
[isp.0.adjustment]
contrast    = 50
brightness  = 50
saturation  = 50
sharpness  = 50
fps = 25
hue = 50

[isp.0.exposure]
iris_type = auto
exposure_mode = auto
gain_mode = auto
auto_iris_level = 5
auto_exposure_enabled = 1
audo_gain_enabled = 1
exposure_time = 1/6
exposure_gain = 1

[isp.0.night_to_day]
night_to_day = day
night_to_day_filter_level = 5
night_to_day_filter_time = 5
dawn_time = 07:00:00
dusk_time = 18:00:00
ircut_filter_action = day
over_exposure_suppress = open
over_exposure_suppress_type = auto
fill_light_mode = IR
brightness_adjustment_mode = auto
light_brightness = 1
distance_level = 1

[isp.0.blc]
blc_region = close
blc_strength = 1
wdr = close
wdr_level = 0
hdr = close
hdr_level = 1
hlc = close
hlc_level = 0
dark_boost_level = 0
position_x = 0
position_y = 0
blc_region_width = 120
blc_region_high = 92

[isp.0.white_blance]
white_blance_style = autoWhiteBalance
white_blance_red = 50
white_blance_green = 50
white_blance_blue = 50

[isp.0.enhancement]
noise_reduce_mode = close
denoise_level = 50
spatial_denoise_level = 50
temporal_denoise_level = 50
dehaze = close
dehaze_level = 0
dis = close
gray_scale_mode = [0-255]
distortion_correction = close
ldch_level = 0

[isp.0.video_adjustment]
image_flip = close
scene_mode = indoor
power_line_frequency_mode = PAL(50HZ)

[isp.0.auto_focus]
af_mode = semi-auto
zoom_level = 0
focus_level = 0

; isp.1
[isp.1.adjustment]
contrast    = 50
brightness  = 75
saturation  = 50
sharpness  = 50
fps = 25
hue = 50

[isp.1.exposure]
iris_type = auto
exposure_mode = auto
gain_mode = auto
auto_iris_level = 5
auto_exposure_enabled = 1
audo_gain_enabled = 1
exposure_time = 1/6
exposure_gain = 1

[isp.1.night_to_day]
night_to_day = day
night_to_day_filter_level = 5
night_to_day_filter_time = 5
dawn_time = 07:00:00
dusk_time = 18:00:00
ircut_filter_action = day
over_exposure_suppress = open
over_exposure_suppress_type = auto
fill_light_mode = IR
brightness_adjustment_mode = auto
light_brightness = 1
distance_level = 1

[isp.1.blc]
blc_region = close
blc_strength = 1
wdr = close
wdr_level = 0
hdr = close
hdr_level = 1
hlc = close
hlc_level = 0
dark_boost_level = 0
position_x = 0
position_y = 0
blc_region_width = 120
blc_region_high = 92

[isp.1.white_blance]
white_blance_style = autoWhiteBalance
white_blance_red = 50
white_blance_green = 50
white_blance_blue = 50

[isp.1.enhancement]
noise_reduce_mode = close
denoise_level = 50
spatial_denoise_level = 50
temporal_denoise_level = 50
dehaze = close
dehaze_level = 0
dis = close
gray_scale_mode = [0-255]
distortion_correction = close
ldch_level = 0

[isp.1.video_adjustment]
image_flip = close
scene_mode = indoor
power_line_frequency_mode = PAL(50HZ)

[isp.1.auto_focus]
af_mode = semi-auto
zoom_level = 0
focus_level = 0

[storage]
mount_path = /userdata
free_size_del_min = 500; MB
free_size_del_max = 1000; MB
num_limit_enable = 1; limit by file num

[storage.0]
enable = 0
folder_name = video0
file_format = mp4 ; flv,ts
file_duration = 60
video_quota = 30
file_max_num = 300

[storage.1]
enable = 0
folder_name = video1
file_format = mp4 ; flv,ts
file_duration = 60
video_quota = 30
file_max_num = 300

[storage.2]
enable = 0
folder_name = video2
file_format = mp4 ; flv,ts
file_duration = 60
video_quota = 30
file_max_num = 300

[system.device_info]
deivce_name = RK IP Camera
telecontrol_id = 88
model = RK-003
serial_number = RK-003-A
firmware_version = V0.2.6 build 202108
encoder_version = V1.0 build 202108
web_version = V2.12.2 build 202108
plugin_version = V1.0.0.0
channels_number = 1
hard_disks_number = 1
alarm_inputs_number = 0
alarm_outputs_number = 0
firmware_version_info = CP-3-B
manufacturer = Rockchip
hardware_id = c3d9b8674f4b94f6
user_num = 1

[capability.video]
0 = {"disabled":[{"name":"sStreamType","options":{"subStream":{"sSmart":"close"},"thirdStream":{"sSmart":"close"}},"type":"disabled/limit"},{"name":"sRCMode","options":{"CBR":{"sRCQuality":null}},"type":"disabled"},{"name":"sOutputDataType","options":{"H.265":{"sH264Profile":null}},"type":"disabled"},{"name":"unspport","options":{"iStreamSmooth":null,"sVideoType":null},"type":"disabled"}],"dynamic":{"sSmart":{"open":{"iMinRate":{"dynamicRange":{"max":"iMaxRate","maxRate":1,"min":"iMaxRate","minRate":0.125},"type":"dynamicRange"}}},"sStreamType":{"mainStream":{"iMaxRate":{"options":[256,512,1024,2048,3072,4096,6144],"type":"options"},"sResolution":{"options":["2304*1296","1920*1080","1280*720","960*540","640*360","320*240"],"type":"options"}},"subStream":{"iMaxRate"
1 = :{"options":[128,256,512],"type":"options"},"sResolution":{"options":["704*576","640*480","352*288","320*240"],"type":"options"}},"thirdStream":{"iMaxRate":{"options":[256,512],"type":"options"},"sResolution":{"options":["416*416"],"type":"options"}}}},"layout":{"encoder":["sStreamType","sVideoType","sResolution","sRCMode","sRCQuality","sFrameRate","sOutputDataType","sSmart","sH264Profile","sGOPMode","iMaxRate","iGOP","iStreamSmooth"]},"static":{"iGOP":{"range":{"max":400,"min":1},"type":"range"},"iStreamSmooth":{"range":{"max":100,"min":1,"step":1},"type":"range"},"sFrameRate":{"dynamicRange":{"max":"sFrameRateIn","maxRate":1},"options":["1/2","1","2","4","6","8","10","12","14","16","18","20","25","30"],"type":"options/dynamicRange"},"sH264Profile":{"options":["high","main","baseline"],"type":"options"},"sOutputDataType":{"options"
2 = :["H.264","H.265"],"type":"options"},"sRCMode":{"options":["CBR","VBR"],"type":"options"},"sRCQuality":{"options":["lowest","lower","low","medium","high","higher","highest"],"type":"options"},"sGOPMode":{"options":["normalP","smartP"],"type":"options"},"sSmart":{"options":["open","close"],"type":"options"},"sStreamType":{"options":["mainStream","subStream","thirdStream"],"type":"options"},"sVideoType":{"options":["videoStream","compositeStream"],"type":"options"}}}

[capability.image_adjustment]
0 = {"layout":{"image_adjustment":["iBrightness","iContrast","iSaturation","iSharpness","iHue"]},"static":{"iBrightness":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iContrast":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iHue":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iSaturation":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iSharpness":{"range":{"max":100,"min":0,"step":1},"type":"range"}}}

[capability.image_blc]
0 = {"disabled":[{"name":"sHLC","options":{"open":{"sBLCRegion":null}},"type":"disabled"},{"name":"sBLCRegion","options":{"open":{"iDarkBoostLevel":null,"iHLCLevel":null,"sHLC":null}},"type":"disabled"}],"dynamic":{"sBLCRegion":{"open":{"iBLCStrength":{"range":{"max":100,"min":0,"step":1},"type":"range"}}},"sHDR":{"HDR2":{"iHDRLevel":{"options":[1,2,3,4],"type":"options"}},"close":{"sBLCRegion":{"options":["close","open"],"type":"options"},"sHLC":{"options"
1 = :["close","open"],"type":"options"}}},"sHLC":{"open":{"iDarkBoostLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iHLCLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}}},"sWDR":{"open":{"iWDRLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}}}},"layout":{"image_blc":["sHDR","iHDRLevel","sBLCRegion","iBLCStrength","sHLC","iHLCLevel"]},"static":{"sHDR":{"options":["close","HDR2"],"type":"options"}}}

[capability.image_enhancement]
0 = {"dynamic":{"sDehaze":{"open":{"iDehazeLevel":{"range":{"max":10,"min":0,"step":1},"type":"range"}}},"sDistortionCorrection":{"FEC":{"iFecLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}},"LDCH":{"iLdchLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}}},"sNoiseReduceMode":{"2dnr":{"iSpatialDenoiseLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}},"3dnr":{"iTemporalDenoiseLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}},"mixnr":{"iSpatialDenoiseLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iTemporalDenoiseLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}}}},"layout"
1 = :{"image_enhancement":["sNoiseReduceMode","iSpatialDenoiseLevel","iTemporalDenoiseLevel","sDehaze","iDehazeLevel","sGrayScaleMode","sDistortionCorrection","iLdchLevel","iFecLevel"]},"static":{"sDIS":{"options":["open","close"],"type":"options"},"sDehaze":{"options":["open","close","auto"],"type":"options"},"sDistortionCorrection":{"options":["LDCH","close"],"type":"options"},"sFEC":{"options":["open","close"],"type":"options"},"sGrayScaleMode":{"options":["[0-255]","[16-235]"],"type":"options"},"sNoiseReduceMode":{"options":["close","2dnr","3dnr","mixnr"],"type":"options"}}}

[capability.image_exposure]
0 = {"dynamic":{"sExposureMode":{"auto":{"iAutoIrisLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}},"manual":{"sExposureTime":{"options":["1","1/3","1/6","1/12","1/25","1/50","1/100","1/150","1/200","1/250","1/500","1/750","1/1000","1/2000","1/4000","1/10000","1/100000"],"type":"options"},"sGainMode":{"options":["auto","manual"],"type":"options"}}},"sGainMode":{"manual":{"iExposureGain":{"range":{"max":100,"min":1,"step":1},"type":"range"}}}},"layout":{"image_exposure":["sExposureMode","sExposureTime","sGainMode","iExposureGain","iFPS"]},"static":{"sExposureMode":{"options":["auto","manual"],"type":"options"},"iFPS":{"range":{"max":30,"min":0,"step":1},"type":"range"}}}

[capability.image_night_to_day]
0 = {"disabled":[{"name":"sNightToDay","options":{"day":{"iLightBrightness":null,"sFillLightMode":null},"night":{"iDarkBoostLevel":null,"iHDRLevel":null,"iHLCLevel":null,"sHDR":null,"sHLC":"close"}},"type":"disabled"}],"dynamic":{"sNightToDay":{"auto":{"iNightToDayFilterLevel":{"options":[0,1,2,3,4,5,6,7],"type":"options"},"iNightToDayFilterTime":{"range":{"max":10,"min":3,"step":1},"type":"range"}},"schedule":{"sDawnTime":{"input":"time","type":"input"},"sDuskTime":{"input":"time","type":"input"}}},"sOverexposeSuppress":{"open"
1 = :{"sOverexposeSuppressType":{"options":["auto","manual"],"type":"options"}}},"sOverexposeSuppressType":{"manual":{"iDistanceLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}}}},"layout":{"image_night_to_day":["sNightToDay","iNightToDayFilterLevel","iNightToDayFilterTime","sDawnTime","sDuskTime","sFillLightMode","iLightBrightness"]},"static":{"iLightBrightness":{"range":{"max":100,"min":0,"step":10},"type":"range"},"sNightToDay":{"options":["day","night"],"type":"options"},"sFillLightMode":{"type":"options","options":["IR"]}}}

[capability.image_video_adjustment]
0 = {"layout":{"image_video_adjustment":["sPowerLineFrequencyMode","sImageFlip","iImageRotation"]},"static":{"sImageFlip":{"options":["close","flip","mirror","centrosymmetric"],"type":"options"},"sPowerLineFrequencyMode":{"options":["PAL(50HZ)","NTSC(60HZ)"],"type":"options"},"sSceneMode":{"options":["indoor","outdoor"],"type":"options"},"iImageRotation":{"options":[0,90,180,270],"type":"options"}}}

[capability.image_white_blance]
0 = {"dynamic":{"sWhiteBlanceStyle":{"manualWhiteBalance":{"iWhiteBalanceBlue":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iWhiteBalanceGreen":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iWhiteBalanceRed":{"range":{"max":100,"min":0,"step":1},"type":"range"}}}},"layout":{"image_white_blance":["sWhiteBlanceStyle","iWhiteBalanceRed","iWhiteBalanceGreen","iWhiteBalanceBlue"]},"static":{"sWhiteBlanceStyle":{"options":["manualWhiteBalance","autoWhiteBalance","lockingWhiteBalance","fluorescentLamp","incandescent","warmLight","naturalLight"],"type":"options"}}}

[user.0]
user_name = admin
password = YWRtaW4=
user_level = 1 ; administrator=0 operator=1 user=2

[osd.common]
enable_osd = 1
is_presistent_text = 1
attribute = transparent/not-flashing
font_size = 32
font_color_mode = customize
font_color = fff799
alignment = customize
boundary = 0
font_path = /oem/usr/share/simsun_en.ttf
normalized_screen_width = 704
normalized_screen_height = 480

[osd.0]
type = channelName
enabled = 0
position_x = 1104
position_y = 640
display_text = Camera 01

[osd.1]
type = dateTime
enabled = 1
position_x = 16
position_y = 16
date_style = CHR-YYYY-MM-DD
time_style = 24hour
display_week_enabled = 0

[osd.2]
type = character
enabled = 0
position_x = 0
position_y = 0
display_text = null

[osd.3]
type = character
enabled = 0
position_x = 0
position_y = 0
display_text = null

[osd.4]
type = privacyMask
enabled = 0
position_x = 0
position_y = 0
width = 0
height = 0

[osd.5]
type = privacyMask
enabled = 0
position_x = 0
position_y = 0
width = 0
height = 0

[osd.6]
type = image
enabled = 0
position_x = 16
position_y = 640
image_path = /userdata/image.bmp

[event.regional_invasion]
enabled = 1
position_x = 0
position_y = 0
width = 700
height = 560
proportion = 1
sensitivity_level = 90
time_threshold = 1
rockiva_model_type = small ;big medium  small

[roi.0]
stream_type = mainStream
id = 1
enabled = 0
name = test
position_x = 0
position_y = 0
width = 0
height = 0
quality_level = 3

[roi.1]
stream_type = mainStream
id = 2
enabled = 0
name = test
position_x = 0
position_y = 0
width = 0
height = 0
quality_level = 3

[roi.2]
stream_type = subStream
id = 1
enabled = 0
name = test
position_x = 0
position_y = 0
width = 0
height = 0
quality_level = 3

[roi.3]
stream_type = subStream
id = 2
enabled = 0
name = test
position_x = 0
position_y = 0
width = 0
height = 0
quality_level = 3

[roi.4]
stream_type = thirdStream
id = 1
enabled = 0
name = test
position_x = 0
position_y = 0
width = 0
height = 0
quality_level = 3

[roi.5]
stream_type = thirdStream
id = 2
enabled = 0
name = test
position_x = 0
position_y = 0
width = 0
height = 0
quality_level = 3

[region_clip.1]
enabled = 0
position_x = 0
position_y = 0
width = 640
height = 480

[network.ntp]
enable = 1
refresh_time_s = 60
ntp_server = 119.28.183.184
[root@luckfox userdata]# cat rkipc.ini 

[audio.0]
enable                         = 0
card_name                      = hw:0,0
encode_type                    = G711A
format                         = S16
sample_rate                    = 8000
channels                       = 1
frame_size                     = 1152
bit_rate                       = 16000
input                          = mic_in
volume                         = 50
enable_aed                     = 0
enable_bcd                     = 0
enable_vqe                     = 1
vqe_cfg                        = /oem/usr/share/vqefiles/config_aivqe.json


[video.source]
enable_aiq                     = 1
enable_vo                      = 0
vo_dev_id                      = 3
enable_ivs                     = 1
enable_jpeg                    = 1
enable_venc_0                  = 1
enable_venc_1                  = 1
enable_venc_2                  = 0
enable_npu                     = 0
npu_fps                        = 10
enable_rtsp                    = 1
enable_rtmp                    = 0
rotation                       = 0


[video.0]
buffer_size                    = 1492992
buffer_count                   = 4
enable_refer_buffer_share      = 1
stream_type                    = mainStream
video_type                     = compositeStream
max_width                      = 2304
max_height                     = 1296
width                          = 2304
height                         = 1296
rc_mode                        = CBR
rc_quality                     = high
src_frame_rate_den             = 1
src_frame_rate_num             = 25
dst_frame_rate_den             = 1
dst_frame_rate_num             = 25
target_rate                    = 0
mid_rate                       = 1024
max_rate                       = 2048
min_rate                       = 0
output_data_type               = H.265
smart                          = close
h264_profile                   = high
gop                            = 50
smartp_viridrlen               = 25
gop_mode                       = normalP
stream_smooth                  = 50
enable_motion_deblur           = 1
motion_deblur_strength         = 3
enable_motion_static_switch    = 0
frame_min_i_qp                 = 26
frame_min_qp                   = 28
frame_max_i_qp                 = 51
frame_max_qp                   = 51
scalinglist                    = 0
enable_debreath_effect         = 0
debreath_effect_strength       = 16
thrd_i                         = 0,0,0,0,3,3,5,5,8,8,8,15,15,20,25,25
thrd_p                         = 0,0,0,0,3,3,5,5,8,8,8,15,15,20,25,25
aq_step_i                      = -8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,7,8
aq_step_p                      = -8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,7,8
qbias_enable                   = 1
qbias_i                        = 171
qbias_p                        = 85
flt_str_i                      = 0
flt_str_p                      = 0
cu_dqp                         = 1
anti_ring                      = 2
anti_line                      = 2
lambds                         = 4


[video.1]
input_buffer_count             = 2
buffer_size                    = 202752
buffer_count                   = 4
enable_refer_buffer_share      = 1
stream_type                    = subStream
video_type                     = compositeStream
max_width                      = 704
max_height                     = 576
width                          = 704
height                         = 576
rc_mode                        = CBR
rc_quality                     = high
src_frame_rate_den             = 1
src_frame_rate_num             = 30
dst_frame_rate_den             = 1
dst_frame_rate_num             = 30
target_rate                    = 0
mid_rate                       = 256
max_rate                       = 512
min_rate                       = 0
output_data_type               = H.265
smart                          = close
h264_profile                   = high
gop                            = 50
smartp_viridrlen               = 25
gop_mode                       = normalP
stream_smooth                  = 50
enable_motion_deblur           = 1
motion_deblur_strength         = 3
enable_motion_static_switch    = 0
frame_min_i_qp                 = 26
frame_min_qp                   = 28
frame_max_i_qp                 = 51
frame_max_qp                   = 51
scalinglist                    = 0
enable_debreath_effect         = 0
debreath_effect_strength       = 16
cu_dqp                         = 1
anti_ring                      = 2
anti_line                      = 2
lambds                         = 4


[video.2]
max_width                      = 960
max_height                     = 540
width                          = 960
height                         = 540


[ivs]
smear                          = 0
weightp                        = 0
md                             = 1
od                             = 1
md_sensibility                 = 3


[video.jpeg]
width                          = 1920
height                         = 1080
jpeg_buffer_size               = 1048576
jpeg_qfactor                   = 70
enable_cycle_snapshot          = 0
snapshot_interval_ms           = 1000


[isp]
scenario                       = normal
init_form_ini                  = 1
normal_scene                   = day
custom1_scene                  = night
ircut_open_gpio                = 36
ircut_close_gpio               = 35


[isp.0.adjustment]
contrast                       = 50
brightness                     = 50
saturation                     = 50
sharpness                      = 50
fps                            = 25
hue                            = 50


[isp.0.exposure]
iris_type                      = auto
exposure_mode                  = auto
gain_mode                      = auto
auto_iris_level                = 5
auto_exposure_enabled          = 1
audo_gain_enabled              = 1
exposure_time                  = 1/6
exposure_gain                  = 1


[isp.0.night_to_day]
night_to_day                   = day
night_to_day_filter_level      = 5
night_to_day_filter_time       = 5
dawn_time                      = 07:00:00
dusk_time                      = 18:00:00
ircut_filter_action            = day
over_exposure_suppress         = open
over_exposure_suppress_type    = auto
fill_light_mode                = IR
brightness_adjustment_mode     = auto
light_brightness               = 1
distance_level                 = 1


[isp.0.blc]
blc_region                     = close
blc_strength                   = 1
wdr                            = close
wdr_level                      = 0
hdr                            = close
hdr_level                      = 1
hlc                            = close
hlc_level                      = 0
dark_boost_level               = 0
position_x                     = 0
position_y                     = 0
blc_region_width               = 120
blc_region_high                = 92


[isp.0.white_blance]
white_blance_style             = autoWhiteBalance
white_blance_red               = 50
white_blance_green             = 50
white_blance_blue              = 50


[isp.0.enhancement]
noise_reduce_mode              = close
denoise_level                  = 50
spatial_denoise_level          = 50
temporal_denoise_level         = 50
dehaze                         = close
dehaze_level                   = 0
dis                            = close
gray_scale_mode                = [0-255]
distortion_correction          = close
ldch_level                     = 0


[isp.0.video_adjustment]
image_flip                     = close
scene_mode                     = indoor
power_line_frequency_mode      = PAL(50HZ)


[isp.0.auto_focus]
af_mode                        = semi-auto
zoom_level                     = 0
focus_level                    = 0


[isp.1.adjustment]
contrast                       = 50
brightness                     = 75
saturation                     = 50
sharpness                      = 50
fps                            = 25
hue                            = 50


[isp.1.exposure]
iris_type                      = auto
exposure_mode                  = auto
gain_mode                      = auto
auto_iris_level                = 5
auto_exposure_enabled          = 1
audo_gain_enabled              = 1
exposure_time                  = 1/6
exposure_gain                  = 1


[isp.1.night_to_day]
night_to_day                   = day
night_to_day_filter_level      = 5
night_to_day_filter_time       = 5
dawn_time                      = 07:00:00
dusk_time                      = 18:00:00
ircut_filter_action            = day
over_exposure_suppress         = open
over_exposure_suppress_type    = auto
fill_light_mode                = IR
brightness_adjustment_mode     = auto
light_brightness               = 1
distance_level                 = 1


[isp.1.blc]
blc_region                     = close
blc_strength                   = 1
wdr                            = close
wdr_level                      = 0
hdr                            = close
hdr_level                      = 1
hlc                            = close
hlc_level                      = 0
dark_boost_level               = 0
position_x                     = 0
position_y                     = 0
blc_region_width               = 120
blc_region_high                = 92


[isp.1.white_blance]
white_blance_style             = autoWhiteBalance
white_blance_red               = 50
white_blance_green             = 50
white_blance_blue              = 50


[isp.1.enhancement]
noise_reduce_mode              = close
denoise_level                  = 50
spatial_denoise_level          = 50
temporal_denoise_level         = 50
dehaze                         = close
dehaze_level                   = 0
dis                            = close
gray_scale_mode                = [0-255]
distortion_correction          = close
ldch_level                     = 0


[isp.1.video_adjustment]
image_flip                     = close
scene_mode                     = indoor
power_line_frequency_mode      = PAL(50HZ)


[isp.1.auto_focus]
af_mode                        = semi-auto
zoom_level                     = 0
focus_level                    = 0


[storage]
mount_path                     = /userdata
free_size_del_min              = 500
free_size_del_max              = 1000
num_limit_enable               = 1


[storage.0]
enable                         = 0
folder_name                    = video0
file_format                    = mp4
file_duration                  = 60
video_quota                    = 30
file_max_num                   = 300


[storage.1]
enable                         = 0
folder_name                    = video1
file_format                    = mp4
file_duration                  = 60
video_quota                    = 30
file_max_num                   = 300


[storage.2]
enable                         = 0
folder_name                    = video2
file_format                    = mp4
file_duration                  = 60
video_quota                    = 30
file_max_num                   = 300


[system.device_info]
deivce_name                    = RK IP Camera
telecontrol_id                 = 88
model                          = RK-003
serial_number                  = RK-003-A
firmware_version               = V0.2.6 build 202108
encoder_version                = V1.0 build 202108
web_version                    = V2.12.2 build 202108
plugin_version                 = V1.0.0.0
channels_number                = 1
hard_disks_number              = 1
alarm_inputs_number            = 0
alarm_outputs_number           = 0
firmware_version_info          = CP-3-B
manufacturer                   = Rockchip
hardware_id                    = c3d9b8674f4b94f6
user_num                       = 1


[capability.video]
0                              = {"disabled":[{"name":"sStreamType","options":{"subStream":{"sSmart":"close"},"thirdStream":{"sSmart":"close"}},"type":"disabled/limit"},{"name":"sRCMode","options":{"CBR":{"sRCQuality":null}},"type":"disabled"},{"name":"sOutputDataType","options":{"H.265":{"sH264Profile":null}},"type":"disabled"},{"name":"unspport","options":{"iStreamSmooth":null,"sVideoType":null},"type":"disabled"}],"dynamic":{"sSmart":{"open":{"iMinRate":{"dynamicRange":{"max":"iMaxRate","maxRate":1,"min":"iMaxRate","minRate":0.125},"type":"dynamicRange"}}},"sStreamType":{"mainStream":{"iMaxRate":{"options":[256,512,1024,2048,3072,4096,6144],"type":"options"},"sResolution":{"options":["2304*1296","1920*1080","1280*720","960*540","640*360","320*240"],"type":"options"}},"subStream":{"iMaxRate"
1                              = :{"options":[128,256,512],"type":"options"},"sResolution":{"options":["704*576","640*480","352*288","320*240"],"type":"options"}},"thirdStream":{"iMaxRate":{"options":[256,512],"type":"options"},"sResolution":{"options":["416*416"],"type":"options"}}}},"layout":{"encoder":["sStreamType","sVideoType","sResolution","sRCMode","sRCQuality","sFrameRate","sOutputDataType","sSmart","sH264Profile","sGOPMode","iMaxRate","iGOP","iStreamSmooth"]},"static":{"iGOP":{"range":{"max":400,"min":1},"type":"range"},"iStreamSmooth":{"range":{"max":100,"min":1,"step":1},"type":"range"},"sFrameRate":{"dynamicRange":{"max":"sFrameRateIn","maxRate":1},"options":["1/2","1","2","4","6","8","10","12","14","16","18","20","25","30"],"type":"options/dynamicRange"},"sH264Profile":{"options":["high","main","baseline"],"type":"options"},"sOutputDataType":{"options"
2                              = :["H.264","H.265"],"type":"options"},"sRCMode":{"options":["CBR","VBR"],"type":"options"},"sRCQuality":{"options":["lowest","lower","low","medium","high","higher","highest"],"type":"options"},"sGOPMode":{"options":["normalP","smartP"],"type":"options"},"sSmart":{"options":["open","close"],"type":"options"},"sStreamType":{"options":["mainStream","subStream","thirdStream"],"type":"options"},"sVideoType":{"options":["videoStream","compositeStream"],"type":"options"}}}


[capability.image_adjustment]
0                              = {"layout":{"image_adjustment":["iBrightness","iContrast","iSaturation","iSharpness","iHue"]},"static":{"iBrightness":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iContrast":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iHue":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iSaturation":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iSharpness":{"range":{"max":100,"min":0,"step":1},"type":"range"}}}


[capability.image_blc]
0                              = {"disabled":[{"name":"sHLC","options":{"open":{"sBLCRegion":null}},"type":"disabled"},{"name":"sBLCRegion","options":{"open":{"iDarkBoostLevel":null,"iHLCLevel":null,"sHLC":null}},"type":"disabled"}],"dynamic":{"sBLCRegion":{"open":{"iBLCStrength":{"range":{"max":100,"min":0,"step":1},"type":"range"}}},"sHDR":{"HDR2":{"iHDRLevel":{"options":[1,2,3,4],"type":"options"}},"close":{"sBLCRegion":{"options":["close","open"],"type":"options"},"sHLC":{"options"
1                              = :["close","open"],"type":"options"}}},"sHLC":{"open":{"iDarkBoostLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iHLCLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}}},"sWDR":{"open":{"iWDRLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}}}},"layout":{"image_blc":["sHDR","iHDRLevel","sBLCRegion","iBLCStrength","sHLC","iHLCLevel"]},"static":{"sHDR":{"options":["close","HDR2"],"type":"options"}}}


[capability.image_enhancement]
0                              = {"dynamic":{"sDehaze":{"open":{"iDehazeLevel":{"range":{"max":10,"min":0,"step":1},"type":"range"}}},"sDistortionCorrection":{"FEC":{"iFecLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}},"LDCH":{"iLdchLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}}},"sNoiseReduceMode":{"2dnr":{"iSpatialDenoiseLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}},"3dnr":{"iTemporalDenoiseLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}},"mixnr":{"iSpatialDenoiseLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iTemporalDenoiseLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}}}},"layout"
1                              = :{"image_enhancement":["sNoiseReduceMode","iSpatialDenoiseLevel","iTemporalDenoiseLevel","sDehaze","iDehazeLevel","sGrayScaleMode","sDistortionCorrection","iLdchLevel","iFecLevel"]},"static":{"sDIS":{"options":["open","close"],"type":"options"},"sDehaze":{"options":["open","close","auto"],"type":"options"},"sDistortionCorrection":{"options":["LDCH","close"],"type":"options"},"sFEC":{"options":["open","close"],"type":"options"},"sGrayScaleMode":{"options":["[0-255]","[16-235]"],"type":"options"},"sNoiseReduceMode":{"options":["close","2dnr","3dnr","mixnr"],"type":"options"}}}


[capability.image_exposure]
0                              = {"dynamic":{"sExposureMode":{"auto":{"iAutoIrisLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}},"manual":{"sExposureTime":{"options":["1","1/3","1/6","1/12","1/25","1/50","1/100","1/150","1/200","1/250","1/500","1/750","1/1000","1/2000","1/4000","1/10000","1/100000"],"type":"options"},"sGainMode":{"options":["auto","manual"],"type":"options"}}},"sGainMode":{"manual":{"iExposureGain":{"range":{"max":100,"min":1,"step":1},"type":"range"}}}},"layout":{"image_exposure":["sExposureMode","sExposureTime","sGainMode","iExposureGain","iFPS"]},"static":{"sExposureMode":{"options":["auto","manual"],"type":"options"},"iFPS":{"range":{"max":30,"min":0,"step":1},"type":"range"}}}


[capability.image_night_to_day]
0                              = {"disabled":[{"name":"sNightToDay","options":{"day":{"iLightBrightness":null,"sFillLightMode":null},"night":{"iDarkBoostLevel":null,"iHDRLevel":null,"iHLCLevel":null,"sHDR":null,"sHLC":"close"}},"type":"disabled"}],"dynamic":{"sNightToDay":{"auto":{"iNightToDayFilterLevel":{"options":[0,1,2,3,4,5,6,7],"type":"options"},"iNightToDayFilterTime":{"range":{"max":10,"min":3,"step":1},"type":"range"}},"schedule":{"sDawnTime":{"input":"time","type":"input"},"sDuskTime":{"input":"time","type":"input"}}},"sOverexposeSuppress":{"open"
1                              = :{"sOverexposeSuppressType":{"options":["auto","manual"],"type":"options"}}},"sOverexposeSuppressType":{"manual":{"iDistanceLevel":{"range":{"max":100,"min":0,"step":1},"type":"range"}}}},"layout":{"image_night_to_day":["sNightToDay","iNightToDayFilterLevel","iNightToDayFilterTime","sDawnTime","sDuskTime","sFillLightMode","iLightBrightness"]},"static":{"iLightBrightness":{"range":{"max":100,"min":0,"step":10},"type":"range"},"sNightToDay":{"options":["day","night"],"type":"options"},"sFillLightMode":{"type":"options","options":["IR"]}}}


[capability.image_video_adjustment]
0                              = {"layout":{"image_video_adjustment":["sPowerLineFrequencyMode","sImageFlip","iImageRotation"]},"static":{"sImageFlip":{"options":["close","flip","mirror","centrosymmetric"],"type":"options"},"sPowerLineFrequencyMode":{"options":["PAL(50HZ)","NTSC(60HZ)"],"type":"options"},"sSceneMode":{"options":["indoor","outdoor"],"type":"options"},"iImageRotation":{"options":[0,90,180,270],"type":"options"}}}


[capability.image_white_blance]
0                              = {"dynamic":{"sWhiteBlanceStyle":{"manualWhiteBalance":{"iWhiteBalanceBlue":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iWhiteBalanceGreen":{"range":{"max":100,"min":0,"step":1},"type":"range"},"iWhiteBalanceRed":{"range":{"max":100,"min":0,"step":1},"type":"range"}}}},"layout":{"image_white_blance":["sWhiteBlanceStyle","iWhiteBalanceRed","iWhiteBalanceGreen","iWhiteBalanceBlue"]},"static":{"sWhiteBlanceStyle":{"options":["manualWhiteBalance","autoWhiteBalance","lockingWhiteBalance","fluorescentLamp","incandescent","warmLight","naturalLight"],"type":"options"}}}


[user.0]
user_name                      = admin
password                       = YWRtaW4=
user_level                     = 1


[osd.common]
enable_osd                     = 1
is_presistent_text             = 1
attribute                      = transparent/not-flashing
font_size                      = 32
font_color_mode                = customize
font_color                     = fff799
alignment                      = customize
boundary                       = 0
font_path                      = /oem/usr/share/simsun_en.ttf
normalized_screen_width        = 704
normalized_screen_height       = 480


[osd.0]
type                           = channelName
enabled                        = 0
position_x                     = 1104
position_y                     = 640
display_text                   = Camera 01


[osd.1]
type                           = dateTime
enabled                        = 1
position_x                     = 16
position_y                     = 16
date_style                     = CHR-YYYY-MM-DD
time_style                     = 24hour
display_week_enabled           = 0


[osd.2]
type                           = character
enabled                        = 0
position_x                     = 0
position_y                     = 0
display_text                   = null


[osd.3]
type                           = character
enabled                        = 0
position_x                     = 0
position_y                     = 0
display_text                   = null


[osd.4]
type                           = privacyMask
enabled                        = 0
position_x                     = 0
position_y                     = 0
width                          = 0
height                         = 0


[osd.5]
type                           = privacyMask
enabled                        = 0
position_x                     = 0
position_y                     = 0
width                          = 0
height                         = 0


[osd.6]
type                           = image
enabled                        = 0
position_x                     = 16
position_y                     = 640
image_path                     = /userdata/image.bmp


[event.regional_invasion]
enabled                        = 1
position_x                     = 0
position_y                     = 0
width                          = 700
height                         = 560
proportion                     = 1
sensitivity_level              = 90
time_threshold                 = 1
rockiva_model_type             = small


[roi.0]
stream_type                    = mainStream
id                             = 1
enabled                        = 0
name                           = test
position_x                     = 0
position_y                     = 0
width                          = 0
height                         = 0
quality_level                  = 3


[roi.1]
stream_type                    = mainStream
id                             = 2
enabled                        = 0
name                           = test
position_x                     = 0
position_y                     = 0
width                          = 0
height                         = 0
quality_level                  = 3


[roi.2]
stream_type                    = subStream
id                             = 1
enabled                        = 0
name                           = test
position_x                     = 0
position_y                     = 0
width                          = 0
height                         = 0
quality_level                  = 3


[roi.3]
stream_type                    = subStream
id                             = 2
enabled                        = 0
name                           = test
position_x                     = 0
position_y                     = 0
width                          = 0
height                         = 0
quality_level                  = 3


[roi.4]
stream_type                    = thirdStream
id                             = 1
enabled                        = 0
name                           = test
position_x                     = 0
position_y                     = 0
width                          = 0
height                         = 0
quality_level                  = 3


[roi.5]
stream_type                    = thirdStream
id                             = 2
enabled                        = 0
name                           = test
position_x                     = 0
position_y                     = 0
width                          = 0
height                         = 0
quality_level                  = 3


[region_clip.1]
enabled                        = 0
position_x                     = 0
position_y                     = 0
width                          = 640
height                         = 480


[network.ntp]
enable                         = 1
refresh_time_s                 = 60
ntp_server                     = 119.28.183.184
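Two things in the dump above are worth poking at: the `[user.0]` password is only base64-encoded (not hashed), and individual keys are easy to pull out with standard tools. A quick sketch — the sample file below just mimics the format; on the device the real file is /userdata/rkipc.ini:

```shell
# 1) The [user.0] password "YWRtaW4=" is plain base64, not a hash:
echo 'YWRtaW4=' | base64 -d && echo    # prints: admin

# 2) Keys are easy to grab with awk ("key = value", space-padded).
# Sample file mimicking the rkipc.ini format:
cat > /tmp/rkipc_sample.ini <<'EOF'
[video.source]
enable_rtsp                    = 1
enable_rtmp                    = 0
EOF
# Pull the value of enable_rtsp, stripping the padding:
awk -F'=' '$1 ~ /^enable_rtsp/ {gsub(/[ \t]/, "", $2); print $2}' /tmp/rkipc_sample.ini    # prints: 1
```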

 

Step 3. VLC on Windows

VLC is 3.0.16 on Linux and 3.0.23 on Windows; Windows works but Linux doesn't. What the.. -_-

So! On Linux I installed 3.0.20 via snap and it works. Maybe the older build has some bug that breaks it?

rtsp://192.168.40.216/live/0

 

 

 

Connecting to the RTSP port from Linux doesn't get dropped, so why won't it work...

$ telnet 192.168.40.216 554
Trying 192.168.40.216...
Connected to 192.168.40.216.
Escape character is '^]'.
SETUP rtsp://192.168.40.216:554/live/0 RTSP/1.0

RTSP/1.0 400 Bad Request
Date: Thu Apr 23 10:45:18 2026
Server: rtsp_demo

Connection closed by foreign host.
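For what it's worth, the 400 itself is expected here: RTSP requires at least a CSeq header on every request, and SETUP additionally needs a Transport header (normally preceded by OPTIONS/DESCRIBE), so the bare SETUP line typed into telnet is malformed regardless of the server. A better-formed probe can be written to a file and piped through netcat — a sketch, same address as above, `nc` assumed available:

```shell
# Minimal RTSP OPTIONS request: each header line ends with CRLF and the
# request is terminated by an empty line (CRLF CRLF).
printf 'OPTIONS rtsp://192.168.40.216:554/live/0 RTSP/1.0\r\nCSeq: 1\r\n\r\n' \
    > /tmp/rtsp_options.req
# To actually probe the camera (not run here):
#   nc -w 2 192.168.40.216 554 < /tmp/rtsp_options.req
```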

 

 

Posted by 구차니
embeded/luckfox 2026. 4. 22. 16:49

UART2 / 115200 bps / TTL serial console

[Link : https://wiki.luckfox.com/Luckfox-Pico-Ultra/UART]

 

ssh / web login credentials

root / luckfox

 

Connecting on port 80 brings up a web page, apparently an IP camera example. But what do I have to do to get a camera recognized..

[Link : https://wiki.luckfox.com/Luckfox-Pico-Ultra/CSI-Camera/]

 

[Link : https://wiki.luckfox.com/Luckfox-Pico-Ultra/Login]

 

 

[root@luckfox ]# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/root                 5.8G    194.1M      5.4G   3% /
devtmpfs                 85.4M         0     85.4M   0% /dev
tmpfs                    85.5M         0     85.5M   0% /dev/shm
tmpfs                    85.5M    212.0K     85.3M   0% /tmp
tmpfs                    85.5M    260.0K     85.3M   0% /run
/dev/block/by-name/userdata
                        237.9M     72.0K    228.6M   0% /userdata
/dev/block/by-name/oem
                        487.6M     22.6M    448.9M   5% /oem


[root@luckfox ]# free -h
               total        used        free      shared  buff/cache   available
Mem:           171Mi        16Mi       123Mi       0.0Ki        31Mi       147Mi
Swap:             0B          0B          0B


[root@luckfox ]# cat /proc/cpuinfo 
processor       : 0
model name      : ARMv7 Processor rev 5 (v7l)
BogoMIPS        : 48.00
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xc07
CPU revision    : 5

Hardware        : Generic DT based system
Revision        : 0000
Serial          : 4e9080747fb4b024


[root@luckfox ]# netstat -tl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 localhost:5037          0.0.0.0:*               LISTEN      
tcp        0      0 0.0.0.0:1935            0.0.0.0:*               LISTEN      
tcp        0      0 0.0.0.0:www             0.0.0.0:*               LISTEN      
tcp        0      0 0.0.0.0:5555            0.0.0.0:*               LISTEN      
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN      
netstat: /proc/net/tcp6: No such file or directory
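Side note: busybox netstat resolves well-known ports to service names (www, ssh), which makes it easy to miss what's actually bound where; rerunning with `-tln` keeps numeric ports, which would also make it easier to double-check whether 554 is listening. Pulling the port out of a listener line is a one-liner — a sketch over a sample line in the same format as the output above:

```shell
# A listener line in the format busybox netstat prints:
line='tcp        0      0 0.0.0.0:554             0.0.0.0:*               LISTEN'
# Column 4 is the local address; the port is the last colon-separated field.
echo "$line" | awk '{n = split($4, a, ":"); print a[n]}'    # prints: 554
```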

 

Not that it means much, but here's the serial console log:

DDR 306b9977f5 wesley.yao 23/12/21-09:28:37,fwver: v1.15
S5P1
4x
f967
rgef1
DDRConf2
DDR3, BW=16 Col=10 Bk=8 CS0 Row=14 CS=1 Size=256MB
924MHz
DDR bin out

U-Boot SPL board init
U-Boot SPL 2017.09 (May 12 2025 - 21:35:32)
unknown raw ID 0 0 0
Trying to boot from MMC2
Card did not respond to voltage select!
mmc_init: -95, time 20
Card did not respond to voltage select!
mmc_init: -95, time 20
spl: mmc init failed with error: -95
Trying to boot from MMC1
Best phase range 0-270 (25 len)
Successfully tuned phase to 135, used 4ms
ENVF: Primary 0x00000000 - 0x00008000
ENVF: Primary 0x00000000 - 0x00008000
No misc partition
Trying fit image at 0x440 sector
## Verified-boot: 0
## Checking uboot 0x00200000 (lzma @0x00400000) ... sha256(0079031a02...) + sha256(7dee96eaea...) + OK
## Checking fdt 0x00260998 ... sha256(9f596c5683...) + OK
Total: 405.332/437.95 ms

Jumping to U-Boot(0x00200000)


U-Boot 2017.09 (May 12 2025 - 21:35:32 +0800)

Model: Rockchip RV1106 EVB Board
MPIDR: 0xf00
PreSerial: 2, raw, 0xff4c0000
DRAM:  256 MiB
Sysmem: init
Relocation Offset: 0fd81000
Relocation fdt: 0edfa778 - 0edfede0
CR: M/C/I
Using default environment

mmc@ffa90000: 0, mmc@ffaa0000: 1
Best phase range 0-270 (25 len)
Successfully tuned phase to 135, used 3ms
ENVF: Primary 0x00000000 - 0x00008000
ENVF: Primary 0x00000000 - 0x00008000
Bootdev(atags): mmc 0
MMC0: HS200, 200Mhz
PartType: ENV
DM: v2
No misc partition
boot mode: None
RESC: 'boot', blk@0x00001ec0
resource: sha256+
FIT: no signed, no conf required
DTB: rk-kernel.dtb
HASH(c): OK
Model: Luckfox Pico Ultra W
## retrieving sd_update.txt ...
Card did not respond to voltage select!
mmc_init: -95, time 20
CLK: (sync kernel. arm: enter 816000 KHz, init 816000 KHz, kernel 0N/A)
  apll 816000 KHz
  dpll 924000 KHz
  gpll 1188000 KHz
  cpll 1000000 KHz
  aclk_peri_root 400000 KHz
  hclK_peri_root 200000 KHz
  pclk_peri_root 100000 KHz
  aclk_bus_root 300000 KHz
  pclk_top_root 100000 KHz
  pclk_pmu_root 100000 KHz
  hclk_pmu_root 200000 KHz
Net:   eth0: ethernet@ffa80000
Hit key to stop autoboot('CTRL+C'):  0 
## Booting FIT Image at 0xe8e9740 with size 0x00310000
Fdt Ramdisk skip relocation
No misc partition
Sysmem Warn: kernel 'reserved-memory' "mmc@3f000"(0x0003f000 - 0x00040000) is overlap with "KERNEL" (0x00008000 - 0x0030d404)

sysmem_dump_all:
    --------------------------------------------------------------------
    memory.rgn[0].addr     = 0x00000000 - 0x10000000 (size: 0x10000000)

    memory.total           = 0x10000000 (256 MiB. 0 KiB)
    --------------------------------------------------------------------
    allocated.rgn[0].name  = "UBOOT"  
                    .addr  = 0x0edfa750 - 0x10000000 (size: 0x012058b0)
    allocated.rgn[1].name  = "STACK"  
                    .addr  = 0x0ebfa750 - 0x0edfa750 (size: 0x00200000)
    allocated.rgn[2].name  = "FIT"  
                    .addr  = 0x0e8e9740 - 0x0ebf9744 (size: 0x00310004)
    allocated.rgn[3].name  = "FDT"  
                    .addr  = 0x00c00000 - 0x00c0a404 (size: 0x0000a404)
    allocated.rgn[4].name  = "KERNEL"  
                    .addr  = 0x00008000 - 0x0030d404 (size: 0x00305404)

    kmem-resv.rgn[0].name  = "mmc@3f000" 
                    .addr  = 0x0003f000 - 0x00040000 (size: 0x00001000)

    framework malloc_r     =  16 MiB
    framework malloc_f     = 512 KiB

    allocated.total        = 0x01a260bc (26 MiB. 152 KiB)
    --------------------------------------------------------------------
    LMB.allocated[0].addr  = 0x00008000 - 0x0030d404 (size: 0x00305404)
    LMB.allocated[1].addr  = 0x00c00000 - 0x00c0a404 (size: 0x0000a404)
    LMB.allocated[2].addr  = 0x0e8e9740 - 0x0ebf9780 (size: 0x00310040)
    LMB.allocated[3].addr  = 0x0ebfa750 - 0x10000000 (size: 0x014058b0)

    reserved.core.total    = 0x01a250f8 (26 MiB. 148 KiB)
    --------------------------------------------------------------------

## Loading kernel from FIT Image at 0e8e9740 ...
   Using 'conf' configuration
## Verified-boot: 0
   Trying 'kernel' kernel subimage
     Description:  unavailable
     Type:         Kernel Image
     Compression:  uncompressed
     Data Start:   0x0e8f4340
     Data Size:    3166880 Bytes = 3 MiB
     Architecture: ARM
     OS:           Linux
     Load Address: 0x00008000
     Entry Point:  0x00008000
     Hash algo:    sha256
     Hash value:   83fbf25080cf3dda5f8d654c9a9dca84f7bba3cec42e71bc6159184503c38c9b
   Verifying Hash Integrity ... sha256+ OK
## Loading fdt from FIT Image at 0e8e9740 ...
   Using 'conf' configuration
   Trying 'fdt' fdt subimage
     Description:  unavailable
     Type:         Flat Device Tree
     Compression:  uncompressed
     Data Start:   0x0e8e9f40
     Data Size:    41878 Bytes = 40.9 KiB
     Architecture: ARM
     Load Address: 0x00c00000
     Hash algo:    sha256
     Hash value:   507d711cc7ef081404434f0b3beb303553eb60dabda6df617ab2486aa350ace6
   Verifying Hash Integrity ... sha256+ OK
   Loading fdt from 0x0e8e9740 to 0x00c00000
   Booting using the fdt blob at 0x00c00000
   Loading Kernel Image from 0x0e8f4340 to 0x00008000 ... OK
   kernel loaded at 0x00008000, end = 0x0030d2a0
   Using Device Tree in place at 00c00000, end 00c0d395
## reserved-memory:
  mmc@3f000: addr=3f000 size=1000
Adding bank: 0x00000000 - 0x10000000 (size: 0x10000000)
Total: 819.212/1261.293 ms

Starting kernel ...

[    0.000000] Booting Linux on physical CPU 0x0
[    0.000000] Linux version 5.10.160 (hxj@luckfox-System-Product-Name) (arm-rockchip830-linux-uclibcgnueabihf-gcc (crosstool-NG 1.24.0) 8.3.0, GN5
[    0.000000] CPU: ARMv7 Processor [410fc075] revision 5 (ARMv7), cr=50c53c7d
[    0.000000] CPU: div instructions available: patching division code
[    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
[    0.000000] OF: fdt: Machine model: Luckfox Pico Ultra W
[    0.000000] Memory policy: Data cache writeback
[    0.000000] OF: fdt: Reserved memory: failed to reserve memory for node 'drm-logo@00000000': base 0x00000000, size 0 MiB
[    0.000000] Reserved memory: created CMA memory pool at 0x0f600000, size 10 MiB
[    0.000000] OF: reserved mem: initialized node linux,cma, compatible id shared-dma-pool
[    0.000000] cma: Reserved 67584 KiB at 0x0b400000
[    0.000000] Zone ranges:
[    0.000000]   Normal   [mem 0x0000000000000000-0x000000000fffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000000000000-0x000000000fffffff]
[    0.000000] Initmem setup node 0 [mem 0x0000000000000000-0x000000000fffffff]
[    0.000000] CPU: All CPU(s) started in SVC mode.
[    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 65024
[    0.000000] Kernel command line: user_debug=31 storagemedia=emmc androidboot.storagemedia=emmc androidboot.mode=normal  rootwait earlycon=uart85
[    0.000000] Dentry cache hash table entries: 32768 (order: 5, 131072 bytes, linear)
[    0.000000] Inode-cache hash table entries: 16384 (order: 4, 65536 bytes, linear)
[    0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[    0.000000] Memory: 174980K/262144K available (3990K kernel code, 385K rwdata, 1868K rodata, 204K init, 148K bss, 9340K reserved, 77824K cma-re)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[    0.000000] NR_IRQS: 16, nr_irqs: 16, preallocated irqs: 16
[    0.000000] arch_timer: cp15 timer(s) running at 24.00MHz (phys).
[    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x588fe9dc0, max_idle_ns: 440795202592 ns
[    0.000007] sched_clock: 56 bits at 24MHz, resolution 41ns, wraps every 4398046511097ns
[    0.000019] Switching to timer-based delay loop, resolution 41ns
[    0.000533] Console: colour dummy device 80x30
[    0.000573] Calibrating delay loop (skipped), value calculated using timer frequency.. 48.00 BogoMIPS (lpj=240000)
[    0.000591] pid_max: default: 4096 minimum: 301
[    0.000748] Mount-cache hash table entries: 1024 (order: 0, 4096 bytes, linear)
[    0.000763] Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes, linear)
[    0.001318] CPU: Testing write buffer coherency: ok
[    0.001870] Setting up static identity map for 0x81c0 - 0x81f8
[    0.002343] devtmpfs: initialized
[    0.009719] VFP support v0.3: implementor 41 architecture 2 part 30 variant 7 rev 5
[    0.009924] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
[    0.009945] futex hash table entries: 16 (order: -5, 192 bytes, linear)
[    0.010023] pinctrl core: initialized pinctrl subsystem
[    0.011146] NET: Registered protocol family 16
[    0.011575] DMA: failed to allocate 0 KiB pool for atomic coherent allocation
[    0.012090] Registered FIQ tty driver
[    0.012392] thermal_sys: Registered thermal governor 'step_wise'
[    0.012399] thermal_sys: Registered thermal governor 'user_space'
[    0.012607] cpuidle: using governor menu
[    0.027540] rockchip-gpio ff380000.gpio: probed /pinctrl/gpio@ff380000
[    0.028149] rockchip-gpio ff530000.gpio: probed /pinctrl/gpio@ff530000
[    0.028752] rockchip-gpio ff540000.gpio: probed /pinctrl/gpio@ff540000
[    0.029453] rockchip-gpio ff550000.gpio: probed /pinctrl/gpio@ff550000
[    0.030097] rockchip-gpio ff560000.gpio: probed /pinctrl/gpio@ff560000
[    0.030184] rockchip-pinctrl pinctrl: probed pinctrl
[    0.040652] fiq_debugger fiq_debugger.0: IRQ uart_irq not found
[    0.040675] fiq_debugger fiq_debugger.0: IRQ wakeup not found
[    0.040942] printk: console [ttyFIQ0] enabled
[    0.041110] Registered fiq debugger ttyFIQ0
[    0.042242] reg-fixed-voltage vdd-arm: Fixed regulator specified with variable voltages
[    0.042275] reg-fixed-voltage: probe of vdd-arm failed with error -22
[    0.042901] SCSI subsystem initialized
[    0.043112] usbcore: registered new interface driver usbfs
[    0.043156] usbcore: registered new interface driver hub
[    0.043198] usbcore: registered new device driver usb
[    0.043420] mc: Linux media interface: v0.10
[    0.043463] videodev: Linux video capture interface: v2.00
[    0.044544] Advanced Linux Sound Architecture Driver Initialized.
[    0.045022] Bluetooth: Core ver 2.22
[    0.045087] NET: Registered protocol family 31
[    0.045098] Bluetooth: HCI device and connection manager initialized
[    0.045114] Bluetooth: HCI socket layer initialized
[    0.045126] Bluetooth: L2CAP socket layer initialized
[    0.045146] Bluetooth: SCO socket layer initialized
[    0.045570] rockchip-cpuinfo cpuinfo: SoC            : 11061000
[    0.045593] rockchip-cpuinfo cpuinfo: Serial         : 4e9080747fb4b024
[    0.045977] rockchip,rknpor-powergood ff000000.syscon:rknpor-powergood: probe success!
[    0.046827] clocksource: Switched to clocksource arch_sys_counter
[    0.052880] NET: Registered protocol family 2
[    0.053099] IP idents hash table entries: 4096 (order: 3, 32768 bytes, linear)
[    0.053566] tcp_listen_portaddr_hash hash table entries: 512 (order: 0, 4096 bytes, linear)
[    0.053598] TCP established hash table entries: 2048 (order: 1, 8192 bytes, linear)
[    0.053622] TCP bind hash table entries: 2048 (order: 1, 8192 bytes, linear)
[    0.053646] TCP: Hash tables configured (established 2048 bind 2048)
[    0.053730] UDP hash table entries: 256 (order: 0, 4096 bytes, linear)
[    0.053751] UDP-Lite hash table entries: 256 (order: 0, 4096 bytes, linear)
[    0.053874] NET: Registered protocol family 1
[    0.054359] RPC: Registered named UNIX socket transport module.
[    0.054380] RPC: Registered udp transport module.
[    0.054387] RPC: Registered tcp transport module.
[    0.054394] RPC: Registered tcp NFSv4.1 backchannel transport module.
[    0.055307] rockchip-thermal ff3c8000.tsadc: tsadc is probed successfully!
[    0.056305] Initialise system trusted keyrings
[    0.056545] workingset: timestamp_bits=30 max_order=16 bucket_order=0
[    0.060068] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[    0.060808] NFS: Registering the id_resolver key type
[    0.060867] Key type id_resolver registered
[    0.060876] Key type id_legacy registered
[    0.060918] jffs2: version 2.2. (NAND) © 2001-2006 Red Hat, Inc.
[    0.148684] Key type asymmetric registered
[    0.148706] Asymmetric key parser 'x509' registered
[    0.148728] io scheduler mq-deadline registered
[    0.149121] rockchip-usb2phy ff3e0000.usb2-phy: IRQ index 0 not found
[    0.151798] pwm-backlight backlight: supply power not found, using dummy regulator
[    0.154444] dma-pl330 ff420000.dma-controller: Loaded driver for PL330 DMAC-241330
[    0.154465] dma-pl330 ff420000.dma-controller:       DBUFF-128x8bytes Num_Chans-8 Num_Peri-32 Num_Events-16
[    0.155233] rockchip-pvtm ff240000.pvtm: pvtm@0 probed
[    0.155396] rockchip-pvtm ff390000.pvtm: pvtm@0 probed
[    0.155717] rockchip-system-monitor rockchip-system-monitor: system monitor probe
[    0.156248] Serial: 8250/16550 driver, 6 ports, IRQ sharing disabled
[    0.156914] ff4b0000.serial: ttyS1 at MMIO 0xff4b0000 (irq = 38, base_baud = 1500000) is a 16550A
[    0.160977] rockchip-drm display-subsystem: bound ff990000.vop (ops 0xb0424e8c)
[    0.161037] rockchip-rgb ff000000.syscon:rgb: [drm:rockchip_rgb_bind] *ERROR* failed to find panel or bridge: -517
Seeding 256 bits and crediting
[    0.161198] random: crng init done
Saving 256 bits of creditable seed for next boot
[    0.164770] panel-simple panel: supply power not found, using dummy regulator
Starting syslogd: OK
[    0.173679] brd: module loaded
[    0.178758] loop: module loaded
Starting klogd: [    0.180840] rk_gmac-dwmac ffa80000.ethernet: IRQ eth_lpi not found
OK
[    0.181052] rk_gmac-dwmac ffa80000.ethernet: PTP uses main clock
[    0.181098] rk_gmac-dwmac ffa80000.ethernet: no regulator found
[    0.181112] rk_gmac-dwmac ffa80000.ethernet: clock input or output? (input).
Running sysctl: [    0.181123] rk_gmac-dwmac ffa80000.ethernet: Can not read property: tx_delay.
[    0.181133] rk_gmac-dwmac ffa80000.ethernet: set tx_delay to 0xffffffff
[    0.181143] rk_gmac-dwmac ffa80000.ethernet: Can not read property: rx_delay.
[    0.181152] rk_gmac-dwmac ffa80000.ethernet: set rx_delay to 0xffffffff
[    0.181223] rk_gmac-dwmac ffa80000.ethernet: integrated PHY? (yes).
OK
[    0.181239] rk_gmac-dwmac ffa80000.ethernet: cannot get clock mac_clk_rx
[    0.181252] rk_gmac-dwmac ffa80000.ethernet: cannot get clock mac_clk_tx
Populating /dev using udev: [    0.181281] rk_gmac-dwmac ffa80000.ethernet: cannot get clock clk_mac_speed
[    0.181291] rk_gmac-dwmac ffa80000.ethernet: clock input from PHY
[    0.181539] rk_gmac-dwmac ffa80000.ethernet: init for RMII
[    0.181729] rk_gmac-dwmac ffa80000.ethernet: User ID: 0x30, Synopsys ID: 0x51
[    0.181749] rk_gmac-dwmac ffa80000.ethernet:         DWMAC4/5
[    0.181762] rk_gmac-dwmac ffa80000.ethernet: DMA HW capability register supported
[    0.181771] rk_gmac-dwmac ffa80000.ethernet: RX Checksum Offload Engine supported
[    0.181780] rk_gmac-dwmac ffa80000.ethernet: TX Checksum insertion supported
[    0.181789] rk_gmac-dwmac ffa80000.ethernet: Wake-Up On Lan supported
[    0.181796] rk_gmac-dwmac ffa80000.ethernet: TSO supported
[    0.181805] rk_gmac-dwmac ffa80000.ethernet: Enable RX Mitigation via HW Watchdog Timer
[    0.181815] rk_gmac-dwmac ffa80000.ethernet: TSO feature enabled
[    0.181825] rk_gmac-dwmac ffa80000.ethernet: Using 40 bits DMA width
[    0.187999] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.188377] phy phy-ff3e0000.usb2-phy.0: illegal mode
[    0.188398] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller
[    0.188435] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 1
[    0.188599] xhci-hcd xhci-hcd.0.auto: hcc params 0x0220fe64 hci version 0x110 quirks 0x0000000022010010
[    0.188647] xhci-hcd xhci-hcd.0.auto: irq 66, io mem 0xffb00000
[    0.189662] hub 1-0:1.0: USB hub found
[    0.189729] hub 1-0:1.0: 1 port detected
[    0.190058] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller
[    0.190089] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 2
[    0.190115] xhci-hcd xhci-hcd.0.auto: Host supports USB 3.0 SuperSpeed
[    0.190191] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
[    0.190853] hub 2-0:1.0: USB hub found
[    0.190914] hub 2-0:1.0: config failed, hub doesn't have any ports! (err -19)
[    0.191247] usbcore: registered new interface driver usb-storage
[    0.191396] gadgetfs: USB Gadget filesystem, version 24 Aug 2004
[    0.191413] udc-core: couldn't find an available UDC - added [g_mass_storage] to list of pending drivers
[    0.192759] rockchip-rtc ff1c0000.rtc: registered as rtc0
[    0.192833] rockchip-rtc ff1c0000.rtc: setting system clock to 2021-01-01T12:08:37 UTC (1609502917)
[    0.193070] i2c /dev entries driver
[    0.194334] Goodix-TS 3-0014: supply AVDD28 not found, using dummy regulator
[    0.194492] Goodix-TS 3-0014: supply VDDIO not found, using dummy regulator
[    0.194791] Goodix-TS 3-0014: i2c test failed attempt 1: -6
[    0.226989] Goodix-TS 3-0014: i2c test failed attempt 2: -6
[    0.266829] Goodix-TS 3-0014: I2C communication failure: -6
[    0.269367] usbcore: registered new interface driver uvcvideo
[    0.269393] USB Video Class driver (1.1.1)
[    0.270004] Bluetooth: HCI UART driver ver 2.3
[    0.270022] Bluetooth: HCI UART protocol H4 registered
[    0.270196] cpu cpu0: leakage=15
[    0.270976] Synopsys Designware Multimedia Card Interface Driver
[    0.271925] dtbocfg: 0.1.0
[    0.271968] dtbocfg: OK
[    0.273160] dwmmc_rockchip ffa90000.mmc: IDMAC supports 32-bit address mode.
[    0.273203] dwmmc_rockchip ffa90000.mmc: Using internal DMA controller.
[    0.273222] dwmmc_rockchip ffa90000.mmc: Version ID is 270a
[    0.273259] dwmmc_rockchip ffa90000.mmc: DW MMC controller at irq 55,32 bit host data width,256 deep fifo
[    0.273366] mmc_host mmc0: card is non-removable.
[    0.273601] mmc_host mmc0: Bus speed (slot 0) = 400000Hz (slot req 400000Hz, actual 400000HZ div = 0)
[    0.274166] dwmmc_rockchip ffaa0000.mmc: No normal pinctrl state
[    0.274190] dwmmc_rockchip ffaa0000.mmc: No idle pinctrl state
[    0.274337] dwmmc_rockchip ffaa0000.mmc: IDMAC supports 32-bit address mode.
[    0.274370] dwmmc_rockchip ffaa0000.mmc: Using internal DMA controller.
[    0.274385] dwmmc_rockchip ffaa0000.mmc: Version ID is 270a
[    0.274423] dwmmc_rockchip ffaa0000.mmc: DW MMC controller at irq 56,32 bit host data width,256 deep fifo
[    0.274523] dwmmc_rockchip ffaa0000.mmc: allocated mmc-pwrseq
[    0.274539] mmc_host mmc1: card is non-removable.
[    0.275987] rv1106-acodec ff480000.acodec: current soc_id: rv1106
[    0.276077] rv1106-acodec ff480000.acodec: Use pa_ctl_gpio and pa_ctl_delay_ms: 0
[    0.277738] NET: Registered protocol family 17
[    0.277922] Bluetooth: RFCOMM TTY layer initialized
[    0.277946] Bluetooth: RFCOMM socket layer initialized
[    0.277979] Bluetooth: RFCOMM ver 1.11
[    0.278033] [BT_RFKILL]: Enter rfkill_rk_init
[    0.278043] [WLAN_RFKILL]: Enter rfkill_wlan_init
[    0.278629] [BT_RFKILL]: bluetooth_platdata_parse_dt: get property: uart_rts_gpios = 5.
[    0.278672] [BT_RFKILL]: bluetooth_platdata_parse_dt: get property: BT,wake_gpio = 2.
[    0.278694] [BT_RFKILL]: bluetooth_platdata_parse_dt: get property: BT,wake_host_irq = 34.
[    0.278707] [BT_RFKILL]: bluetooth_platdata_parse_dt: clk_get failed!!!.
[    0.278761] [BT_RFKILL]: Failed to get bt_default_wake_host gpio.
[    0.278807] rfkill_bt: probe of wireless-bluetooth failed with error -1
[    0.278881] Key type dns_resolver registered
[    0.279809] Loading compiled-in X.509 certificates
[    0.295042] mmc_host mmc1: Bus speed (slot 0) = 400000Hz (slot req 400000Hz, actual 400000HZ div = 0)
[    0.308528] rockchip-drm display-subsystem: bound ff990000.vop (ops 0xb0424e8c)
[    0.308606] rockchip-drm display-subsystem: bound ff000000.syscon:rgb (ops 0xb04259ec)
[    0.308871] rockchip-drm display-subsystem: failed to parse resources for logo display
[    0.310520] rockchip-vop ff990000.vop: [drm:vop_crtc_atomic_enable] Update mode to 720x720p49, type: 17
[    0.320767] mmc1: queuing unknown CIS tuple 0x10 (5 bytes)
[    0.326008] mmc_host mmc1: Bus speed (slot 0) = 49500000Hz (slot req 50000000Hz, actual 49500000HZ div = 0)
[    0.326192] mmc1: queuing unknown CIS tuple 0x10 (5 bytes)
[    0.327274] mmc1: new SDIO card at address e9ea
[    0.331371] mmc_host mmc0: Bus speed (slot 0) = 49500000Hz (slot req 52000000Hz, actual 49500000HZ div = 0)
[    0.335078] mmc0: new high speed MMC card at address 0001
[    0.335516] mmcblk0: mmc0:0001 TWSC   7.13 GiB 
[    0.335601] mmcblk0boot0: mmc0:0001 TWSC   partition 1 4.00 MiB
[    0.335677] mmcblk0boot1: mmc0:0001 TWSC   partition 2 4.00 MiB
[    0.335856] mmcblk0rpmb: mmc0:0001 TWSC   partition 3 4.00 MiB, chardev (247:0)
[    0.336270]  mmcblk0: p1(env) p2(idblock) p3(uboot) p4(boot) p5(oem) p6(userdata) p7(rootfs)
[    0.592871] Console: switching to colour frame buffer device 90x45
[    0.609117] rockchip-drm display-subsystem: [drm] fb0: rockchipdrmfb frame buffer device
[    0.609814] [drm] Initialized rockchip 3.0.0 20140818 for display-subsystem on minor 0
[    0.610306] input: adc-keys as /devices/platform/adc-keys/input/input0
[    0.689547] ALSA device list:
[    0.689574]   #0: rv1106-acodec
[    0.699613] EXT4-fs (mmcblk0p7): INFO: recovery required on readonly filesystem
[    0.699639] EXT4-fs (mmcblk0p7): write access will be enabled during recovery
[    0.739249] EXT4-fs (mmcblk0p7): recovery complete
[    0.740906] EXT4-fs (mmcblk0p7): mounted filesystem with ordered data mode. Opts: (null)
[    0.740976] VFS: Mounted root (ext4 filesystem) readonly on device 179:7.
[    0.741113] devtmpfs: mounted
[    0.741313] Freeing unused kernel memory: 204K
[    0.741324] Kernel memory protection not selected by kernel config.
[    0.741360] Run /sbin/init as init process
[    0.746627] process '/bin/busybox' started with executable stack
[    0.800628] EXT4-fs (mmcblk0p7): re-mounted. Opts: (null)
[    1.017980] udevd[98]: starting version 3.2.7
[    1.027669] udevd[98]: specified group 'input' unknown
[    1.028737] udevd[98]: specified group 'kvm' unknown
[    1.056428] udevd[99]: starting eudev-3.2.7
done
resize2fs 1.46.5 (30-Dec-2021)
The filesystem is already 1572864 (4k) blocks long.  Nothing to do!

e2fsck 1.46.5 (30-Dec-2021)
userdata: recovering journal
userdata: clean, 15/65536 files, 18585/262144 blocks
[    2.337444] EXT4-fs (mmcblk0p6): mounted filesystem with ordered data mode. Opts: (null)
[    2.337494] ext4 filesystem being mounted at /userdata supports timestamps until 2038 (0x7fffffff)
resize2fs 1.46.5 (30-Dec-2021)
The filesystem is already 262144 (1k) blocks long.  Nothing to do!

tune2fs 1.46.5 (30-Dec-2021)
e2fsck 1.46.5 (30-Dec-2021)
oem: recovering journal
oem: clean, 372/32768 files, 12024/131072 blocks
[    2.528590] EXT4-fs (mmcblk0p5): mounted filesystem with ordered data mode. Opts: (null)
resize2fs 1.46.5 (30-Dec-2021)
The filesystem is already 131072 (4k) blocks long.  Nothing to do!

tune2fs 1.46.5 (30-Dec-2021)
Initializing random number generator... done.
Starting fcgiwrap daemon: OK
Starting nginx...
nginx: [warn] duplicate MIME type "text/html" in /oem/usr/etc/nginx/nginx.conf:43
/dev/block/by-name/userdata on /userdata type ext4 (rw,relatime)
Starting system message bus: [    3.807762] rk_dvbm ffa70000.rkdvbm: probe start
[    3.807800] rk_dvbm ffa70000.rkdvbm: dvbm ctx ae52bfa0
[    3.808102] rk_dvbm ffa70000.rkdvbm: probe success
[    3.822205] sc3336 4-0030: driver version: 00.01.01
[    3.822310] sc3336 4-0030: Failed to get pwdn-gpios
[    3.822324] sc3336 4-0030: could not get default pinstate
[    3.822333] sc3336 4-0030: could not get sleep pinstate
dbus[180]: Unknown username "pulse" in message bus configuration file
[    3.822354] sc3336 4-0030: supply avdd not found, using dummy regulator
done
[    3.822581] sc3336 4-0030: supply dovdd not found, using dummy regulator
[    3.822646] sc3336 4-0030: supply dvdd not found, using dummy regulator
[    3.833239] sc3336 4-0030: Detected OV00cc41 sensor
[    3.844537] rockchip-pinctrl pinctrl: pin gpio3-20 already requested by 4-0030; cannot claim for 4-0031
Starting bluetoothd: [    3.844562] rockchip-pinctrl pinctrl: pin-116 (4-0031) status -22
[    3.844577] rockchip-pinctrl pinctrl: could not request pin 116 (gpio3-20) from group mipi-refclk-out0  on device rockchip-pinctrl
[    3.844588] mis5001 4-0031: Error applying setting, reverse things back
OK
[    3.844610] mis5001: probe of 4-0031 failed with error -22
[    3.874595] rkcifhw ffa10000.rkcif: no iommu attached, using non-iommu buffers
[    3.874620] rkcifhw ffa10000.rkcif: No reserved memory region assign to CIF
[    3.875036] rkcif rkcif-mipi-lvds: rkcif driver version: v00.02.00
[    3.875150] rkcif rkcif-mipi-lvds: attach to cif hw node
[    3.875162] rkcif rkcif-mipi-lvds: failed to get dphy hw node
Starting network: [    3.875173] rkcif rkcif-mipi-lvds: rkcif wait line 0
[    3.875183] rkcif rkcif-mipi-lvds: rkcif fastboot reserve bufs num 3
[    3.875197] : terminal subdev does not exist
[    3.875207] : terminal subdev does not exist
[    3.875215] : terminal subdev does not exist
[    3.875223] : terminal subdev does not exist
[    3.875232] : get_remote_sensor: video pad[0] is null
[    3.875240] : rkcif_update_sensor_info: stream[0] get remote sensor_sd failed!
[    3.875251] : rkcif_scale_set_fmt: req(80, 60) src out(0, 0)
[    3.875259] : get_remote_sensor: video pad[0] is null
[    3.875265] : rkcif_update_sensor_info: stream[0] get remote sensor_sd failed!
[    3.875273] : rkcif_scale_set_fmt: req(80, 60) src out(0, 0)
[    3.875281] : get_remote_sensor: video pad[0] is null
[    3.875288] : rkcif_update_sensor_info: stream[0] get remote sensor_sd failed!
[    3.875298] : rkcif_scale_set_fmt: req(80, 60) src out(0, 0)
[    3.875306] : get_remote_sensor: video pad[0] is null
[    3.875312] : rkcif_update_sensor_info: stream[0] get remote sensor_sd failed!
[    3.875321] : rkcif_scale_set_fmt: req(80, 60) src out(0, 0)
[    3.888070] rkcif rkcif-mipi-lvds: No memory-region-thunderboot specified
[    3.889294] rockchip-mipi-csi2-hw ffa20000.mipi-csi2-hw: enter mipi csi2 hw probe!
[    3.889454] rockchip-mipi-csi2-hw ffa20000.mipi-csi2-hw: probe success, v4l2_dev:mipi-csi2-hw!
[    3.889583] rockchip-mipi-csi2-hw ffa30000.mipi-csi2-hw: enter mipi csi2 hw probe!
[    3.889687] rockchip-mipi-csi2-hw ffa30000.mipi-csi2-hw: probe success, v4l2_dev:mipi-csi2-hw!
[    3.890123] rockchip-mipi-csi2 mipi0-csi2: attach to csi2 hw node
[    3.890187] rkcif rkcif-mipi-lvds: Entity type for entity rockchip-mipi-csi2 was not initialized!
[    3.890203] rockchip-mipi-csi2: Async registered subdev
[    3.890216] rockchip-mipi-csi2: probe success, v4l2_dev:rkcif-mipi-lvds!
[    3.912805] rkisp_hw ffa00000.rkisp: is_thunderboot: 0
[    3.912833] rkisp_hw ffa00000.rkisp: Missing rockchip,grf property
[    3.912875] rkisp_hw ffa00000.rkisp: max input:0x0@0fps
OK
[    3.913047] rkisp_hw ffa00000.rkisp: get sram size:253952
[    3.913062] rkisp_hw ffa00000.rkisp: no iommu attached, using non-iommu buffers
[    3.913072] rkisp_hw ffa00000.rkisp: No reserved memory region. default cma area!
[    3.913480] rkisp rkisp-vir0: rkisp driver version: v02.05.00
Starting ntpd: [    3.913649] rkisp rkisp-vir0: No memory-region-thunderboot specified
[    3.920987] rkisp rkisp-vir0: Entity type for entity rkisp-isp-subdev was not initialized!
[    3.951080] rockchip-csi2-dphy-hw ff3e8000.csi2-dphy-hw: csi2 dphy hw probe successfully!
[    3.965504] rockchip-csi2-dphy csi2-dphy0: dphy0 matches m00_b_sc3336 4-0030:bus type 5
[    3.965531] rockchip-csi2-dphy csi2-dphy0: csi2 dphy0 probe successfully!
0
[    4.251498] rkcif rkcif-mipi-lvds: clear unready subdev num: 1
[    4.252853] rkcif-mipi-lvds: Async subdev notifier completed
[    4.253907] rkisp-vir0: Async subdev notifier completed
[    4.255015] rkisp rkisp-vir0: clear unready subdev num: 0
[    4.271376] rga: rga2, irq = 40, match scheduler
[    4.272043] rga: rga2 hardware loaded successfully, hw_version:3.3.87975.
[    4.272068] rga: rga2 probe successfully
[    4.272341] rga_iommu: IOMMU binding successfully, default mapping core[0x4]
[    4.273301] rga: Module initialized. v1.3.1
[    4.294691] mpp_vcodec: loading out-of-tree module taints kernel.
[    4.310409] mpp_vcodec: init new
[    4.310561] mpp_service mpp-srv: 424abb9b author: Yandong Lin 2024-04-29 [mpp_enc]: fix wrap enc sw timeout when resolution switch
[    4.310577] mpp_service mpp-srv: probe start
[    4.311717] mpp_rkvenc_540c ffa50000.rkvenc: probing start
OK
[    4.311964] mpp_rkvenc_540c ffa50000.rkvenc: probing finish
[    4.312397] mpp_vepu_pp ffa60000.rkvenc-pp: probe device
[    4.312493] mpp_vepu_pp ffa60000.rkvenc-pp: shared_rst_a_pp is not found!
[    4.312505] vepu_pp_init:526: No aclk reset resource define
[    4.312515] mpp_vepu_pp ffa60000.rkvenc-pp: shared_rst_h_pp is not found!
Input      rkcif-mipi-lvds Format:SBGGR10_1X10 Size:2304x1296@30fps Offset(0,0)
[    4.312522] vepu_pp_init:529: No hclk reset resource define
[    4.313753] mpp_vepu_pp ffa60000.rkvenc-pp: probing finish
[    4.314006] mpp_service mpp-srv: probe success
[    4.340565] RKNPU ff660000.npu: RKNPU: rknpu iommu device-tree entry not found!, using non-iommu mode
[    4.341763] RKNPU ff660000.npu: RKNPU: Initialized RKNPU driver: v0.9.2 for 20230825
[    4.341876] RKNPU ff660000.npu: dev_pm_opp_set_regulators: no regulator (rknpu) found: -19
[    4.392470] rockit_load start
[    4.408445] rockit_load end
[    4.417620] rve: rve_irq, irq = 57, match scheduler
/
[    4.417831] rve: Driver loaded successfully rve[2] ver:[2021]58.1a
[    4.417850] rve: probe successfully
Starting sshd: [    4.419577] rve: Module initialized. v1.0.6
sc3336                  9940  1 
cmd parse result:
input  file name      : /oem/usr/share/speaker_test.wav
[    4.746352] rk_gmac-dwmac ffa80000.ethernet eth0: PHY [stmmac-0:02] driver [RK630 PHY] (irq=POLL)
output file name      : (null)
[    4.746684] dwmac4: Master AXI performs any burst length
loop count            : 1
[    4.746712] rk_gmac-dwmac ffa80000.ethernet eth0: No Safety Features support found
channel number        : 1
[    4.746733] rk_gmac-dwmac ffa80000.ethernet eth0: PTP not supported by HW
open sound rate       : 8000
[    4.747071] rk_gmac-dwmac ffa80000.ethernet eth0: configuring for phy/rmii link mode
open sound channel    : 2
[    4.751016] cfg80211: Loading compiled-in X.509 certificates for regulatory database
input stream rate     : 8000
[    4.757565] vsys dev open 27
input channel         : 2
[    4.764043] udevd[337]: failed to execute '/lib/udev/v4l_id' 'v4l_id /dev/video0': No such file or directory
bit_width             : 16
[    4.765354] udevd[338]: failed to execute '/lib/udev/v4l_id' 'v4l_id /dev/video1': No such file or directory
frame_number          : 4
[    4.768595] udevd[339]: failed to execute '/lib/udev/v4l_id' 'v4l_id /dev/video2': No such file or directory
frame_length          : 1024
[    4.774178] udevd[342]: failed to execute '/lib/udev/v4l_id' 'v4l_id /dev/video3': No such file or directory
sound card name       : hw:0,0
[    4.775307] udevd[343]: failed to execute '/lib/udev/v4l_id' 'v4l_id /dev/video4': No such file or directory
device id             : 0
[    4.776456] udevd[344]: failed to execute '/lib/udev/v4l_id' 'v4l_id /dev/video5': No such file or directory
set volume curve      : 0
[    4.778083] udevd[345]: failed to execute '/lib/udev/v4l_id' 'v4l_id /dev/video6': No such file or directory
set volume            : 50
[    4.820573] cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
set mute              : 0
[    4.820748] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
set track_mode        : 0
[    4.820767] cfg80211: failed to load regulatory.db
get volume            : 0
get mute              : 0
get track_mode        : 0
query stat            : 0
pause and resume chn  : 0
save file             : 0
query save file stat  : 0
clear buf             : 0
get attribute         : 0
clear attribute       : 0
set loopback mode     : 0
vqe enable            : 0
adec input file name  : (null)
rockit log path (null), log_size = 0, can use export rt_log_path=, export rt_log_size= change
log_file = (nil) 
RTVersion        12:08:42-048 {dump              :064} ---------------------------------------------------------
RTVersion        12:08:42-048 {dump              :065} rockit version: git-4720dc10a Tue Apr 30 15:47:55 2024 +0800
RTVersion        12:08:42-048 {dump              :066} rockit building: built- 2024-04-30 15:53:43
RTVersion        12:08:42-048 {dump              :067} ---------------------------------------------------------
[    5.005141] libaes: exports duplicate symbol aes_decrypt (owned by kernel)
rockit default level 4, can use export rt_log_level=x, x=0,1,2,3,4,5,6 change
[    5.005470] libaes: exports duplicate symbol aes_decrypt (owned by kernel)
(null)           12:08:42-048 {read_log_level    :098} text is all=4
(null)           12:08:42-048 {read_log_level    :100} module is all, log_level is 4
cmpi             12:08:42-066 {main              :823} start running loop count  = 0
(null)           12:08:42-070 {monitor_log_level :149} #Start monitor_log_level thread, arg:(nil)
OK
cmpi             12:08:42-161 {test_init_mpi_ao  :226} Set volume curve type: 0
cmpi             12:08:42-164 {commandThread     :376} test info : mute = 0, volume = 50
cmpi             12:08:42-167 {sendDataThread    :309} params->s32ChnIndex : 0
[    5.083627] aicbsp_init
/etc/init.d/S50usbdevice: line 144: can't open : no such file
[    5.083648] RELEASE_DATE:2025_0410_b99ca8b6
Debug: configfs_init
[    5.083657] aicbsp_resv_mem_init 
insmod: can't insert 'libaes.ko': invalid module format
[    5.120992] file system registered
mkdir: can't create directory '/sys/kernel/config/usb_gadget/rockchip/functions/mtp.gs0': No such file or directory
[    5.136941] using random self ethernet address
[    5.136966] using random host ethernet address
mkdir: can't create directory '/sys/kernel/config/usb_gadget/rockchip/functions/acm.gs6': No such file or directory
[    5.149777] Mass Storage Function, version: 2009/09/11
[    5.149806] LUN: removable file: (no medium)
[    5.227257] read descriptors
[    5.227329] read strings
[    5.315218] aicbsp: aicbsp_set_subsys, subsys: AIC_WIFI, state to: 1
[    5.315248] aicbsp: aicbsp_set_subsys, power state change to 1 dure to AIC_WIFI
[    5.315255] aicbsp: aicbsp_platform_power_on
[    5.339142] aicbsp: aicbsp_sdio_probe:1 vid:0xC8A1  did:0xC08D
[    5.339284] aicbsp: aicbsp_sdio_probe:2 vid:0xC8A1  did:0xC18D
[    5.339298] aicbsp: aicbsp_sdio_probe after replace:1
[    5.339309] AICWFDBG(LOGINFO)        aicwf_sdio_chipmatch USE AIC8800DC
[    5.339316] the device is PRODUCT_ID_AIC8800DC 
[    5.339325] aicbsp: aicbsp_get_feature, set FEATURE_SDIO_CLOCK 50 MHz
[    5.339332] aicbsp: aicwf_sdio_reg_init
[    5.339535] aicbsp: Set SDIO Clock 50 MHz
[    5.342726] AICWFDBG(LOGINFO)        btenable = 1 
[    5.342749] AICWFDBG(LOGINFO)        aicbsp: aicbsp_driver_fw_init, chip rev: 7
[    5.342761] rwnx_load_firmware :firmware path = /oem/usr/ko/aic8800dc_fw/fw_patch_table_8800dc_u02.bin  
[    5.348736] file md5:34860725322202bfc2d80843297562ee
[    5.348836] AICWFDBG(LOGDEBUG)       aicbt_patch_info_unpack head_t->len:6 base_len:4 
[    5.348846] AICWFDBG(LOGDEBUG)       aicbt_patch_info_unpack memcpy_len:5 
[    5.348856] AICWFDBG(LOGDEBUG)       aicbt_patch_info_unpack adid_addrinf:194b44 addr_adid:1017d8 
[    5.348863] AICWFDBG(LOGDEBUG)       aicbt_patch_info_unpack id:0 addr:194000 
[    5.348871] rwnx_plat_bin_fw_upload_android
[    5.348880] rwnx_load_firmware :firmware path = /oem/usr/ko/aic8800dc_fw/fw_adid_8800dc_u02.bin  
[    5.349349] file md5:95d10e6288e4d3413c0e3508cb9d711a
[    5.350013] rwnx_plat_bin_fw_upload_android
[    5.350031] rwnx_load_firmware :firmware path = /oem/usr/ko/aic8800dc_fw/fw_patch_8800dc_u02.bin  
cmpi             12:08:42-769 {sendDataThread    :352} eof
[    5.351559] file md5:675951fc926375001f974d729f097164
[    5.356417] AICWFDBG(LOGDEBUG)       [0x40506004]: 0x04318000
[    5.356628] AICWFDBG(LOGDEBUG)       [0x40506004]: 0x04338000
[    5.357928] AICWFDBG(LOGDEBUG)       aicbt_ext_patch_data_load ext_patch_file_name:fw_patch_8800dc_u02_ext0.bin ext_patch_id:0 ext_patch_addr:1 
[    5.357945] rwnx_plat_bin_fw_upload_android
[    5.357954] rwnx_load_firmware :firmware path = /oem/usr/ko/aic8800dc_fw/fw_patch_8800dc_u02_ext0.bin  
[    5.362354] file md5:d783d229be5a485fdb39cc018abaf55b
[    5.376676] aicbt_patch_table_load bt btmode[1]:1 
[    5.376695] aicbt_patch_table_load bt uart_baud[1]:1500000 
[    5.376703] aicbt_patch_table_load bt uart_flowctrl[1]:1 
[    5.376710] aicbt_patch_table_load bt lpm_enable[1]:0 
[    5.376718] aicbt_patch_table_load bt tx_pwr[1]:28463 
[    5.382231] aicbsp: bt patch version: - Mar 17 2025 11:27:16 - git a6547a6
[    5.382295] ############ aicwifi_init begin 
[    5.382619] AICWFDBG(LOGINFO)        chip_id=7, chip_sub_id=1!!
[    5.382782] AICWFDBG(LOGDEBUG)       set_bbpll_config rd_mem_addr_cfm.memdata:c 
[    5.382801] AICWFDBG(LOGINFO)        set_bbpll_config Crystal not provided by CPU 
[    5.383012] AICWFDBG(LOGDEBUG)       [0x40500010]=6
[    5.386530] ############ system_config_8800dc done
[    5.386543] AICWFDBG(LOGTRACE)       >>> rwnx_plat_patch_load()
[    5.386548] AICWFDBG(LOGINFO)        testmode=0
[    5.386554] AICWFDBG(LOGINFO)        rwnx_plat_patch_loading
[    5.386560] u02 is loaing ###############
[    5.386566] rwnx_plat_bin_fw_upload_android
[    5.386577] rwnx_load_firmware :firmware path = /oem/usr/ko/aic8800dc_fw/fmacfw_patch_8800dc_u02.bin  
[    5.392826] file md5:ec8ee791723f4bf9825a6c153a7f0a8f
[    5.400526] AICWFDBG(LOGINFO)        dpd calib & write
[    5.400548] aicwf_dpd_calib_8800dc
[    5.400716] AICWFDBG(LOGERROR)       misc_ram_addr=120ebc
[    5.401301] AICWFDBG(LOGTRACE)       bit_mask:ad03e777,8ff693a2,7584ddd,86172087
[    5.401313] rwnx_plat_bin_fw_upload_android
[    5.401323] rwnx_load_firmware :firmware path = /oem/usr/ko/aic8800dc_fw/fmacfw_calib_8800dc_u02.bin  
[    5.407240] file md5:061790c6181e5974a4c44c515e6d0b69
[    5.416628] AICWFDBG(LOGINFO)        Start app: 00130009, 4
[    5.636219] ############ rwnx_plat_patch_load done
[    5.637013] AICWFDBG(LOGINFO)        wifisetting_cfg_addr=11019c, ldpc_cfg_addr=120980, agc_cfg_addr=120180, txgain_cfg_addr=120000
[    5.638969] ### Upload fmacfw_patch_tbl_8800dc_u02.bin 
[    5.638989] rwnx_load_firmware :firmware path = /oem/usr/ko/aic8800dc_fw/fmacfw_patch_tbl_8800dc_u02.bin  
[    5.643401] file md5:c0538d7493963b1f37938e6e87f9e304
[    5.643418] tbl size = 1008 
[    5.643426] AICWFDBG(LOGINFO)        FMACFW_PATCH_TBL_8800DC_U02_DESCRIBE_BASE = 187c00 
[    5.643687] di Mar 14 2025 12:29:38 - g09cea8d
[    5.643696] patch_tbl:  110888  181219
[    5.643868] patch_tbl:  1108ec  184d59
[    5.644027] patch_tbl:  110874  181279
[    5.644186] patch_tbl:  110658  181b09
[    5.644346] patch_tbl:  1107dc  1874dd
[    5.644498] patch_tbl:  11055c  184135
[    5.644654] patch_tbl:  110758  187485
[    5.644813] patch_tbl:  110620  187631
[    5.644969] patch_tbl:  1107e0  181b99
[    5.645134] patch_tbl:  110838  181dd5
[    5.645339] patch_tbl:  1104f4  182045
[    5.645497] patch_tbl:  110490  182f6d
[    5.645649] patch_tbl:  110420  1822b1
[    5.645808] patch_tbl:  11048c  1823cd
[    5.645963] patch_tbl:  110688  182471
[    5.646119] patch_tbl:  110738  182ead
[    5.646275] patch_tbl:  110528  1877a1
[    5.646428] patch_tbl:  110554  182fc1
[    5.646587] patch_tbl:  110798  183615
[    5.646742] patch_tbl:  1106dc  183061
[    5.648937] patch_tbl:  110584  1830fd
[    5.649129] patch_tbl:  110588  1832d5
[    5.649302] patch_tbl:  110634  1854e1
[    5.649467] patch_tbl:  110638  1856b9
[    5.649632] patch_tbl:  110640  1858c1
[    5.649791] patch_tbl:  11086c  18597d
[    5.649948] patch_tbl:  1105fc  1859d9
[    5.650101] patch_tbl:  1105f0  18655d
[    5.650255] patch_tbl:  110860  185641
[    5.650411] patch_tbl:  11081c  185131
[    5.650565] patch_tbl:  1105a4  185dc1
[    5.650779] patch_tbl:  1105c8  185e91
[    5.650940] patch_tbl:  110810  1851e9
[    5.651097] patch_tbl:  1108c4  183869
[    5.651252] patch_tbl:  1108c8  183705
[    5.651408] patch_tbl:  110628  1839e1
[    5.651562] patch_tbl:  110750  18664d
[    5.651718] patch_tbl:  1106d8  186691
[    5.651876] patch_tbl:  11085c  183e45
[    5.652032] patch_tbl:  110518  186945
[    5.652190] patch_tbl:  1105e0  181f7d
[    5.652346] patch_tbl:  1107d8  18368d
[    5.652503] patch_tbl:  1103d0  1869a5
[    5.652656] patch_tbl:  1106c8  186f81
[    5.652810] patch_tbl:  11056c  186f4d
[    5.652965] patch_tbl:  1106b4  185309
[    5.653121] patch_tbl:  110510  187071
[    5.653279] patch_tbl:  110550  18710d
[    5.653466] patch_tbl:  1104ec  1842a1
[    5.653626] patch_tbl:  1106d4  187389
[    5.653787] patch_tbl:  11064c  186241
[    5.653947] patch_tbl:  110494  184675
[    5.654109] patch_tbl:  110734  184689
[    5.654268] patch_tbl:  11047c  1873d1
[    5.654426] patch_tbl:  110740  18757d
[    5.654582] patch_tbl:  110880  1875e9
[    5.654738] patch_tbl:  110054  181b09
[    5.654897] patch_tbl:  11005c  181b09
[    5.655057] patch_tbl:  110048  182f6d
[    5.655213] patch_tbl:  110088  185929
[    5.655365] patch_tbl:  1100c4  185309
[    5.655518] patch_tbl:  110ef0  185b55
[    5.655670] patch_tbl:  110ec0  183c15
[    5.655830] patch_tbl:  110da0  183c35
[    5.655985] patch_tbl:  110dc0  183ded
[    5.656182] patch_tbl:  110e28  183d81
[    5.656340] patch_tbl:  1100e0  183d2d
[    5.656500] patch_tbl:  110d88  183e29
[    5.656659] patch_tbl:  110d90  186721
[    5.656815] patch_tbl:  110e18  183b95
[    5.660262] patch_tbl:  110bb4  18740d
[    5.660455] patch_tbl:  110960  1835f9
[    5.660627] patch_tbl:  1109c0  18205d
[    5.660790] patch_tbl:  110f08  182291
[    5.660946] patch_tbl:  110b1c  183619
[    5.661106] patch_tbl:  110b24  183619
[    5.661261] patch_tbl:  110b5c  182585
[    5.661423] patch_tbl:  110b64  182585
[    5.661645] patch_tbl:  110b6c  182585
[    5.661810] patch_tbl:  110b74  182585
[    5.661970] patch_tbl:  110b7c  182585
[    5.662129] patch_tbl:  110b8c  182585
[    5.662289] patch_tbl:  110b44  1858c1
[    5.662449] patch_tbl:  110ac0  184311
[    5.662610] patch_tbl:  110aa0  183575
[    5.662769] patch_tbl:  110aa8  1835dd
[    5.662925] patch_tbl:  110a60  186ea1
[    5.663082] patch_tbl:  110e20  186f15
[    5.663238] patch_tbl:  110f3c  187259
[    5.663395] patch_tbl:  110d28  1872ad
cmpi             12:08:43-378 {main              :828} end running loop count  = 0
[    5.663556] patch_tbl:  111418  1871a1
[    5.663712] patch_tbl:  40030084  187c80
[    5.663865] patch_tbl:  40030000  46e68
[    5.664023] patch_tbl:  40030004  2778c
[    5.664178] patch_tbl:  40030008  40908
[    5.664336] patch_tbl:  4003000c  13824
[    5.664493] patch_tbl:  40030010  13864
[    5.664649] patch_tbl:  40030014  13868
[    5.664803] patch_tbl:  40030018  138e4
[    5.664965] patch_tbl:  4003001c  138e8
[    5.665122] patch_tbl:  187c80  d0332b00
[    5.665277] patch_tbl:  187c84  47706008
[    5.665436] patch_tbl:  187c88  47702001
[    5.665595] patch_tbl:  187c8c  bf00bf00
[    5.665751] patch_tbl:  187c90  bf004832
[    5.665906] patch_tbl:  187c94  f3efbf00
[    5.666062] patch_tbl:  187c98  bf004812
[    5.666218] patch_tbl:  187c9c  4620bf00
[    5.666372] patch_tbl:  40030080  fb
[    5.666530] patch_tbl:  4010001c  0
[    5.666699] ############ aicwf_patch_config_8800dc done
[    5.666710] AICWFDBG(LOGINFO)        Read FW mem: 00120000
[    5.669710] AICWFDBG(LOGINFO)        cfm: [00120000] = 00ffd780
[    5.669727] AICWFDBG(LOGINFO)        Start app: 00120000, 5
[    5.674027] aicbsp: aicbsp_get_feature, set FEATURE_SDIO_CLOCK 50 MHz
[    5.674044] aicsdio: aicwf_sdio_reg_init
[    5.684763] aicbsp: aicbsp_resv_mem_alloc_skb, alloc resv_mem_txdata succuss, id: 0, size: 98304
[    5.685050] aicbsp: aicbsp_get_feature, set FEATURE_SDIO_CLOCK 50 MHz
[    5.685216] tcp_ack_init 
[    5.685234] AICWFDBG(LOGINFO)        aicwf_prealloc_txq_alloc size is diff will to be kzalloc 
[    5.685259] AICWFDBG(LOGINFO)        aicwf_prealloc_txq_alloc txq kzalloc successful 
sh: write error: No such device
[    5.706836] aicbsp: sdio_err:<aicwf_sdio_bus_pwrctl,1449>: bus down
[    5.917470] ieee80211 phy0: HT supp 1, VHT supp 1, HE supp 1
[    5.919587] ieee80211 phy0: 
[    5.919587] *******************************************************
[    5.919587] ** CAUTION: USING PERMISSIVE CUSTOM REGULATORY RULES **
[    5.919587] *******************************************************
[    5.930360] get_txpwr_max:txpwr_max:20 
[    6.073188] vsys_release, 27
[    6.073204] vrgn_release, 123
[    6.073234] vi_release 99
ifconfig: SIOCSIFADDR: No such device
ifconfig: SIOCGIFFLAGS: No such device
RKSockServer     12:08:44-087 {start             :162} accept failed
[    6.073286] venc_release 51
[    6.807700] rk_gmac-dwmac ffa80000.ethernet eth0: Link is Up - 100Mbps/Full - flow control rx/tx
[    6.970661] of_dma_request_slave_channel: dma-names property of node '/serial@ff4b0000' missing or empty
Device setup complete
[    6.970697] dw-apb-uart ff4b0000.serial: failed to request DMA, use interrupt mode
/usr/bin/luckfox-config: line 234: warning: command substitution: ignored null byte in input
(null)           12:08:45-073 {monitor_log_level :190} monitor_log_level quit
Complete configuration loading
Found ADC keys device: /dev/input/event0
/root/main.py and /root/boot.py not exist ,pass...
OK
RTC does not require time calibration
[root@luckfox ]# =========================================
=== This system is the factory default system, intended for testing purposes only. ===
=== For detailed instructions, visit https://wiki.luckfox.com/. ===
=========================================

 

Posted by 구차니

For now, the memory difference seems significant,

but when using the CPU only, there seems to be almost no performance difference.
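For a more systematic CPU-only comparison than eyeballing chat sessions, llama.cpp ships a `llama-bench` tool. A minimal sketch (the model path is an assumption, and the run is guarded so it degrades gracefully when the binary or model is absent):

```shell
# Benchmark prompt processing (-p, tokens) and generation (-n, tokens).
model=Llama-3.2-3B-Instruct-Q4_K_M.gguf
if command -v llama-bench >/dev/null 2>&1 && [ -f "$model" ]; then
    llama-bench -m "$model" -p 32 -n 128
else
    echo "llama-bench or model not available; skipping"
fi
```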

 

llama3.2 on ollama

$ ollama --verbose
>>> 안녕? 너에 대해서 설명
 안녕하세요! 난 다나이, AI의 지식형입니다. 

다나이란 인공지능(IAI) 기술을 사용하여 인간과 대화를 करन하고, 정보를 제공하는 컴퓨터 프로그램입니다. 나의 목표는
 Menschen과 सहय동하여 질문에 대한 답변을 제공하고, 다양한 주제와 주제를 다루어 provide하는 것입니다.

저는 다 variety의 지식과 능력을 가지고 있으며, 다양한 문제나 질문에 대해 답변할 수 있습니다. 저는 또한-user가 자
신의 생각이나 질문을 나누어 주고 사용자에게 가장 적합한 answered을 제공하는 것을 목표로 하고 있습니다. 

다나이의 강점은 다음과 같습니다:

- Human-like 대화: 저는-human-like의 대화를 possible합니다, human과 비슷한 방식으로 대화할 수 있습니다.
- 지식형: 저는 다양한 주제와 지식을 가지고 있으며, 사용자에 대한 답변을 provide할 수 있습니다. 
- 사용자-friendly: 저는 user-friendly를 목표로 하고 있으며, 사용자가 쉽게 질문을 하거나 나누어 주는 것을 
support합니다.

다나이의 한계는 다음과 같습니다:

- 지식형: 저는 지식과 능력을 가지고います, 그러나 모든 경우에 대해 complete knowledge가 항상 available하지 않습
니다.
- Limited domain: 저는 specific domain에서 only limitedly conversation을 할 수 있습니다. 
- Human-like behavior: 저는 human-like의 behavior를 display할 수 있습니다, 그러나 sometimes human-like 
behavior를 exhibit할 수 있습니다.

저는 use case에 따라 다양한 장점과 한계를 가지고 있으며, user-friendly하고 지식형이지만, complete knowledge가 항
상 available하지 않습니다. 

다시 한번 안녕하세요! 저와 대화를 하려면 어떤 주제에 대해 질문해 보시는가요?

total duration:       39.327341083s
load duration:        142.986088ms
prompt eval count:    32 token(s)
prompt eval duration: 1.194488145s
prompt eval rate:     26.79 tokens/s
eval count:           388 token(s)
eval duration:        37.152939235s
eval rate:            10.44 tokens/s
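The eval rate ollama reports is just the eval count divided by the eval duration; a quick sanity check with the numbers from the run above:

```shell
# eval rate = eval count / eval duration (values from the stats above)
awk 'BEGIN { printf "%.2f tokens/s\n", 388 / 37.152939235 }'
```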


$ ollama ls
NAME               ID              SIZE      MODIFIED    
qwen3.5:2b         324d162be6ca    2.7 GB    13 days ago    
gemma4:e2b         7fbdbf8f5e45    7.2 GB    13 days ago    
gemma3:1b          8648f39daa8f    815 MB    13 days ago    
llama3.2:latest    a80c4f17acd5    2.0 GB    13 days ago    

$ ollama show llama3.2:latest
  Model
    architecture        llama     
    parameters          3.2B      
    context length      131072    
    embedding length    3072      
    quantization        Q4_K_M    

  Capabilities
    completion    
    tools         

  Parameters
    stop    "<|start_header_id|>"    
    stop    "<|end_header_id|>"      
    stop    "<|eot_id|>"             

  License
    LLAMA 3.2 COMMUNITY LICENSE AGREEMENT                 
    Llama 3.2 Version Release Date: September 25, 2024    
    ...                                                   
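A quick sanity check on the 2.0 GB size `ollama ls` reports for this model: assuming the roughly 4.85 bits per weight often quoted for Q4_K_M (an assumption, not from the output above), 3.2B parameters lands right in that ballpark, with metadata and unquantized tensors making up the rest:

```shell
# ~3.2e9 weights * ~4.85 bits, divided by 8 bits/byte, in decimal GB
awk 'BEGIN { printf "%.1f GB\n", 3.2e9 * 4.85 / 8 / 1e9 }'
```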



$ ollama run --verbose llama3.2:latest
>>> /set parameter
Available Parameters:
  /set parameter seed              Random number seed
  /set parameter num_predict       Max number of tokens to predict
  /set parameter top_k             Pick from top k num of tokens
  /set parameter top_p           Pick token based on sum of probabilities
  /set parameter min_p           Pick token based on top token probability * min_p
  /set parameter num_ctx           Set the context size
  /set parameter temperature     Set creativity level
  /set parameter repeat_penalty  How strongly to penalize repetitions
  /set parameter repeat_last_n     Set how far back to look for repetitions
  /set parameter num_gpu           The number of layers to send to the GPU
  /set parameter stop   ...   Set the stop parameters

>>> /set parameter num_ctx 4096
Set parameter 'num_ctx' to '4096'
>>> 안녕? 너에 대해서 설명
안녕하세요! 저는 지능형 대화bot입니다. tôi의 목표는 Menschen과 상호작용하여 helpful한 정보를 제공하고, 그들의 질
문에 대한 जव답을 제공하는 것입니다.

 my name is "LLaMA", Llama stands for "Large Language Model Meta AI". my capabilities include:

*  Natural Language Processing (NLP): I can understand and generate human-like language.
*  Question Answering: I can provide answers to a wide range of questions on various topics.
*  Text Generation: I can create text based on a prompt or topic.

my strengths include:
*  Knowledge base: I have been trained on a massive dataset of text, which allows me to provide accurate and 
up-to-date information on various topics.
*  Conversational flow: I can engage in natural-sounding conversations, using context and understanding to 
respond to questions and statements.

total duration:       17.130872381s
load duration:        131.746647ms
prompt eval count:    32 token(s)
prompt eval duration: 82.553469ms
prompt eval rate:     387.63 tokens/s
eval count:           184 token(s)
eval duration:        16.560046318s
eval rate:            11.11 tokens/s
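Note that `/set parameter num_ctx` only lasts for the interactive session. To persist it, ollama supports baking parameters into a derived model via a Modelfile; a sketch (the derived model name `llama3.2-4k` is made up):

```shell
# Write a Modelfile that pins the context size, then build a tagged model.
cat > Modelfile <<'EOF'
FROM llama3.2:latest
PARAMETER num_ctx 4096
EOF
# ollama create llama3.2-4k -f Modelfile   # then: ollama run llama3.2-4k
cat Modelfile
```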

 

gemma4:e2b model

$ ollama show gemma4:e2b 
  Model
    architecture        gemma4    
    parameters          5.1B      
    context length      131072    
    embedding length    1536      
    quantization        Q4_K_M    
    requires            0.20.0    

  Capabilities
    completion    
    vision        
    audio         
    tools         
    thinking      

  Parameters
    temperature    1       
    top_k          64      
    top_p          0.95    

  License
    Apache License               
    Version 2.0, January 2004    
    ...                          


$ ollama --verbose
>>> /set parameter num_ctx 4096
Set parameter 'num_ctx' to '4096'
>>> 안녕? 너에 대해 설명
Thinking...
Thinking Process:

1.  **Analyze the Request:** The user asked "안녕? 너에 대해 설명" (Hello? 
Explain about yourself). This is a request for identity and capability 
description.
2.  **Determine Core Identity (Self-Correction/Verification):**
    *   Name: Gemma 4.
    *   Developer: Google DeepMind.
    *   Nature: Large Language Model (LLM).
    *   Type: Open weights model.
3.  **Determine Capabilities:**
    *   Input: Text and images (and audio for some family models, but 
generally, focus on the main interaction modes).
    *   Output: Text generation.
    *   Knowledge: Broad, based on training data.
4.  **Determine Knowledge Cutoff:** (Not strictly necessary for a general 
introduction, but good context: Knowledge cutoff is Jan 2025).
5.  **Structure the Response (in Korean):**
    *   Greeting (Acknowledge the "안녕").
    *   Identify Self (Name, Type).
    *   Explain Origin/Developer.
    *   Explain Function/Capabilities (What I do).
6.  **Draft the Response (Internal Monologue & Refinement):**
    *   *Start:* Hello.
    *   *Identity:* I am Gemma 4.
    *   *Origin:* I was developed by Google DeepMind.
    *   *Nature:* I am a Large Language Model (LLM) and an open weights 
model.
    *   *Function:* I can understand and generate text, answer questions, 
summarize, translate, and help with creative writing, etc.
7.  **Final Polish (Ensure natural and accurate Korean phrasing):** (The 
resulting response should be polite and informative.) (Self-check: Did I 
claim image generation? No. Did I claim search? No.)

**(This leads to the final Korean output.)**
...done thinking.

안녕하세요! 저는 **Gemma 4**입니다.

저에 대해 몇 가지 설명해 드릴게요.

1. **정체성:** 저는 Google DeepMind에서 개발한 **대규모 언어 모델(Large 
Language Model, LLM)**입니다. 저는 공개된 가중치(open weights) 모델로 설계되
었습니다.
2. **기능:** 저는 사용자가 입력하는 텍스트를 이해하고, 그에 기반하여 자연스
러운 텍스트를 생성하는 것이 주된 역할입니다.
    *   질문에 답하기
    *   정보 요약 및 설명하기
    *   번역하기
    *   글쓰기 및 창의적인 콘텐츠 생성 돕기
    *   다양한 주제에 대해 대화하기
3. **입력 및 출력:** 저는 텍스트와 이미지를 이해할 수 있으며, 사용자에게는 
**텍스트 형태**로 응답을 생성합니다.
4. **목표:** 저는 사용자에게 유용하고 정확하며 도움이 되는 정보를 제공하고
, 다양한 작업을 돕기 위해 존재합니다.

궁금한 점이 있거나 도움이 필요하시면 언제든지 말씀해주세요! 😊

total duration:       59.13545796s
load duration:        381.889243ms
prompt eval count:    22 token(s)
prompt eval duration: 1.468123435s
prompt eval rate:     14.99 tokens/s
eval count:           660 token(s)
eval duration:        56.324528227s
eval rate:            11.72 tokens/s

 

llama.cpp defaults to the model's context (128k here), and with that the model kept failing to load, so I reduced it to 4096.

It's supposedly a 10th-gen laptop, but is it the same as Haswell.. (AVX?)
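The prebuilt llama.cpp package bundles several CPU backends and loads the best one the CPU supports; most 10th-gen (Comet Lake) parts top out at AVX2, the same ISA level Haswell introduced, which would explain `libggml-cpu-haswell.so` being picked. To see what the CPU actually advertises:

```shell
# List the AVX-family flags the kernel reports for this CPU;
# falls back to "none" on systems without /proc/cpuinfo.
flags=$(grep -o 'avx[0-9_a-z]*' /proc/cpuinfo 2>/dev/null | sort -u)
echo "${flags:-none}"
```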

$ ../llama-cli -m Llama-3.2-3B-Instruct-Q4_K_M.gguf -c 4096
load_backend: loaded RPC backend from /home/falinux/work/src/llama-b8876/libggml-rpc.so
load_backend: loaded CPU backend from /home/falinux/work/src/llama-b8876/libggml-cpu-haswell.so

Loading model...  


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8876-72d693e4f
model      : Llama-3.2-3B-Instruct-Q4_K_M.gguf
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read <file>        add a text file
  /glob <pattern>     add text files using globbing pattern


> 안녕? 너에 대해서 설명

안녕하세요! 

나는 어트론, artificial intelligence (AI)로 알려진 intelligence-enhancing chatbot입니다. 

我的 목적은 ngườiों과 상호작용하여 정보를 제공하고, 문제를 해결하고, 질문을 trả주기 위하여 설계되었습니다. 

나는 인간과 매우 비슷한 방식으로 생각하고 말하기를習慮하고 있으며, 다양한 주제에 대해 정보를 제공할 수 있습니다.

그럼, 어떤 정보를 muốn받을까요?

[ Prompt: 32.3 t/s | Generation: 11.6 t/s ]



Exiting...
common_memory_breakdown_print: | memory breakdown [MiB] | total   free    self   model   context   compute    unaccounted |
common_memory_breakdown_print: |   - Host               |                 2615 =  1904 +     448 +     262                |
common_memory_breakdown_print: |   - CPU_REPACK         |                 1299 =  1299 +       0 +       0                |

 

$ ../llama-cli -m gemma-4-E2B-it-Q4_K_M.gguf -c 4096
load_backend: loaded RPC backend from /home/falinux/work/src/llama-b8876/libggml-rpc.so
load_backend: loaded CPU backend from /home/falinux/work/src/llama-b8876/libggml-cpu-haswell.so

Loading model...  


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8876-72d693e4f
model      : gemma-4-E2B-it-Q4_K_M.gguf
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read <file>        add a text file
  /glob <pattern>     add text files using globbing pattern


> 안녕? 너에 대해서 소개

[Start thinking]
Thinking Process:

1.  **Analyze the Request:** The user said "안녕? 너에 대해서 소개" (Hello? Introduce yourself/Tell me about yourself). This is a standard identity query.
2.  **Determine Identity (Self-Correction/Verification):**
    *   Name: Gemma 4.
    *   Developer: Google DeepMind.
    *   Nature: Large Language Model (LLM), open weights.
    *   Capabilities: Understand text and images (input), generate text (output). Knowledge cutoff is January 2025.
3.  **Formulate the Response Strategy (in Korean):**
    *   Greeting: Start with a friendly greeting.
    *   Core Identity: State name and developer.
    *   Nature/Type: Explain what I am (LLM, open weights).
    *   Function/Capabilities: Describe what I can do (process language, answer questions, generate text).
    *   (Optional but good practice) Mention input/output modalities.
4.  **Draft the Response (Iterative Refinement in Korean):**

    *   *Initial thought:* 안녕하세요. 저는 Gemma 4입니다. Google DeepMind에서 개발한 대규모 언어 모델입니다.
    *   *Adding detail:* 저는 텍스트와 이미지를 이해하고 텍스트로 답변을 생성할 수 있습니다.
    *   *Final polish (ensuring the tone is appropriate and adhering to the persona):* Combine these points into a smooth, informative introduction.

5.  **Final Output Generation (Korean):** (This matches the desired output.)
[End thinking]

안녕하세요! 저는 **Gemma 4**입니다.

저는 Google DeepMind에서 개발한 **오픈 웨이트(open weights) 대규모 언어 모델(LLM)**입니다.

**저의 주요 특징은 다음과 같습니다:**

*   **정체성:** 저는 방대한 데이터를 학습하여 인간의 언어를 이해하고 처리하도록 설계되었습니다.
*   **기능:** 텍스트 기반의 질문에 답변하고, 정보를 요약하며, 창의적인 글(시, 이야기, 코드 등)을 작성하는 등 다양한 언어 작업을 수행할 수 있습니다.
*   **입력 능력:** 텍스트와 이미지를 이해하고 처리할 수 있습니다.
*   **목표:** 사용자님께 유용하고 정확하며 자연스러운 방식으로 도움을 드리는 것을 목표로 합니다.

궁금한 점이 있거나 도움이 필요하시면 언제든지 말씀해 주세요! 😊

[ Prompt: 41.1 t/s | Generation: 13.2 t/s ]

Posted by 구차니

Building it myself is a hassle, so I just tried the pre-built binary haha

 

Linux:

[링크 : https://github.com/ggml-org/llama.cpp/releases]

 

It needs a model in gguf format, so I gave up on qwen3.6

[링크 : https://huggingface.co/Qwen/Qwen3.6-35B-A3B/tree/main]

 

Then I found unsloth's quantized models, which I'd stumbled upon by chance!

By the way, if the format were "ggul" instead of "gguf" it would have been even sweeter ("ggul" sounds like the Korean word for honey).. heh

 

Click "copy download link" and fetch it with wget. Since I'll run CPU-only for now, trying the Q2 model first..
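One gotcha: the `/blob/` URL above is the HTML page; for wget you want Hugging Face's `/resolve/` form, which is what "copy download link" hands you. A sketch of the rewrite:

```shell
# Turn a Hugging Face blob page URL into a direct-download resolve URL.
url='https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF/blob/main/Qwen3.6-35B-A3B-UD-Q2_K_XL.gguf'
dl=$(printf '%s\n' "$url" | sed 's|/blob/|/resolve/|')
echo "$dl"
# wget -c "$dl"    # -c resumes a partial download
```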

[링크 : https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF/blob/main/Qwen3.6-35B-A3B-UD-Q2_K_XL.gguf]

[링크 : https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF/tree/main]

 

For now, trying it with llama-cli should be enough.

llama-cli
llama-cli is the command-line executor:

$ llama-cli -m model.gguf

 

llama-server
llama-server launches an API server with a built-in WebUI:

$ llama-server --host address --port port -m model.gguf

[링크 : https://wiki.archlinux.org/title/Llama.cpp]

[링크 : https://www.lainyzine.com/ko/article/using-llama-cpp-for-local-llm-execution/]
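llama-server's built-in WebUI sits on top of an OpenAI-compatible HTTP API, so the same server can be scripted. A request sketch (the host/port and a running server are assumptions; only the payload is constructed here):

```shell
# Build a chat-completion payload; POSTing it requires a running server.
payload='{"messages":[{"role":"user","content":"hello"}],"max_tokens":64}'
echo "$payload"
# curl -s http://127.0.0.1:8080/v1/chat/completions \
#      -H 'Content-Type: application/json' -d "$payload"
```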

 

---------------------

Help output

$ ./llama-cli --help
load_backend: loaded RPC backend from /home/minimonk/src/llama-b8876/libggml-rpc.so
load_backend: loaded CPU backend from /home/minimonk/src/llama-b8876/libggml-cpu-haswell.so
----- common params -----

-h,    --help, --usage                  print usage and exit
--version                               show version and build info
--license                               show source code license and dependencies
-cl,   --cache-list                     show list of models in cache
--completion-bash                       print source-able bash completion script for llama.cpp
-t,    --threads N                      number of CPU threads to use during generation (default: -1)
                                        (env: LLAMA_ARG_THREADS)
-tb,   --threads-batch N                number of threads to use during batch and prompt processing (default:
                                        same as --threads)
-C,    --cpu-mask M                     CPU affinity mask: arbitrarily long hex. Complements cpu-range
                                        (default: "")
-Cr,   --cpu-range lo-hi                range of CPUs for affinity. Complements --cpu-mask
--cpu-strict <0|1>                      use strict CPU placement (default: 0)
--prio N                                set process/thread priority : low(-1), normal(0), medium(1), high(2),
                                        realtime(3) (default: 0)
--poll <0...100>                        use polling level to wait for work (0 - no polling, default: 50)
-Cb,   --cpu-mask-batch M               CPU affinity mask: arbitrarily long hex. Complements cpu-range-batch
                                        (default: same as --cpu-mask)
-Crb,  --cpu-range-batch lo-hi          ranges of CPUs for affinity. Complements --cpu-mask-batch
--cpu-strict-batch <0|1>                use strict CPU placement (default: same as --cpu-strict)
--prio-batch N                          set process/thread priority : 0-normal, 1-medium, 2-high, 3-realtime
                                        (default: 0)
--poll-batch <0|1>                      use polling to wait for work (default: same as --poll)
-c,    --ctx-size N                     size of the prompt context (default: 0, 0 = loaded from model)
                                        (env: LLAMA_ARG_CTX_SIZE)
-n,    --predict, --n-predict N         number of tokens to predict (default: -1, -1 = infinity)
                                        (env: LLAMA_ARG_N_PREDICT)
-b,    --batch-size N                   logical maximum batch size (default: 2048)
                                        (env: LLAMA_ARG_BATCH)
-ub,   --ubatch-size N                  physical maximum batch size (default: 512)
                                        (env: LLAMA_ARG_UBATCH)
--keep N                                number of tokens to keep from the initial prompt (default: 0, -1 =
                                        all)
--swa-full                              use full-size SWA cache (default: false)
                                        [(more
                                        info)](https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055)
                                        (env: LLAMA_ARG_SWA_FULL)
-fa,   --flash-attn [on|off|auto]       set Flash Attention use ('on', 'off', or 'auto', default: 'auto')
                                        (env: LLAMA_ARG_FLASH_ATTN)
-p,    --prompt PROMPT                  prompt to start generation with; for system message, use -sys
--perf, --no-perf                       whether to enable internal libllama performance timings (default:
                                        false)
                                        (env: LLAMA_ARG_PERF)
-f,    --file FNAME                     a file containing the prompt (default: none)
-bf,   --binary-file FNAME              binary file containing the prompt (default: none)
-e,    --escape, --no-escape            whether to process escapes sequences (\n, \r, \t, \', \", \\)
                                        (default: true)
--rope-scaling {none,linear,yarn}       RoPE frequency scaling method, defaults to linear unless specified by
                                        the model
                                        (env: LLAMA_ARG_ROPE_SCALING_TYPE)
--rope-scale N                          RoPE context scaling factor, expands context by a factor of N
                                        (env: LLAMA_ARG_ROPE_SCALE)
--rope-freq-base N                      RoPE base frequency, used by NTK-aware scaling (default: loaded from
                                        model)
                                        (env: LLAMA_ARG_ROPE_FREQ_BASE)
--rope-freq-scale N                     RoPE frequency scaling factor, expands context by a factor of 1/N
                                        (env: LLAMA_ARG_ROPE_FREQ_SCALE)
--yarn-orig-ctx N                       YaRN: original context size of model (default: 0 = model training
                                        context size)
                                        (env: LLAMA_ARG_YARN_ORIG_CTX)
--yarn-ext-factor N                     YaRN: extrapolation mix factor (default: -1.00, 0.0 = full
                                        interpolation)
                                        (env: LLAMA_ARG_YARN_EXT_FACTOR)
--yarn-attn-factor N                    YaRN: scale sqrt(t) or attention magnitude (default: -1.00)
                                        (env: LLAMA_ARG_YARN_ATTN_FACTOR)
--yarn-beta-slow N                      YaRN: high correction dim or alpha (default: -1.00)
                                        (env: LLAMA_ARG_YARN_BETA_SLOW)
--yarn-beta-fast N                      YaRN: low correction dim or beta (default: -1.00)
                                        (env: LLAMA_ARG_YARN_BETA_FAST)
-kvo,  --kv-offload, -nkvo, --no-kv-offload
                                        whether to enable KV cache offloading (default: enabled)
                                        (env: LLAMA_ARG_KV_OFFLOAD)
--repack, -nr, --no-repack              whether to enable weight repacking (default: enabled)
                                        (env: LLAMA_ARG_REPACK)
--no-host                               bypass host buffer allowing extra buffers to be used
                                        (env: LLAMA_ARG_NO_HOST)
-ctk,  --cache-type-k TYPE              KV cache data type for K
                                        allowed values: f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
                                        (default: f16)
                                        (env: LLAMA_ARG_CACHE_TYPE_K)
-ctv,  --cache-type-v TYPE              KV cache data type for V
                                        allowed values: f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
                                        (default: f16)
                                        (env: LLAMA_ARG_CACHE_TYPE_V)
-dt,   --defrag-thold N                 KV cache defragmentation threshold (DEPRECATED)
                                        (env: LLAMA_ARG_DEFRAG_THOLD)
-np,   --parallel N                     number of parallel sequences to decode (default: 1)
                                        (env: LLAMA_ARG_N_PARALLEL)
--rpc SERVERS                           comma separated list of RPC servers (host:port)
                                        (env: LLAMA_ARG_RPC)
--mlock                                 force system to keep model in RAM rather than swapping or compressing
                                        (env: LLAMA_ARG_MLOCK)
--mmap, --no-mmap                       whether to memory-map model. (if mmap disabled, slower load but may
                                        reduce pageouts if not using mlock) (default: enabled)
                                        (env: LLAMA_ARG_MMAP)
-dio,  --direct-io, -ndio, --no-direct-io
                                        use DirectIO if available. (default: disabled)
                                        (env: LLAMA_ARG_DIO)
--numa TYPE                             attempt optimizations that help on some NUMA systems
                                        - distribute: spread execution evenly over all nodes
                                        - isolate: only spawn threads on CPUs on the node that execution
                                        started on
                                        - numactl: use the CPU map provided by numactl
                                        if run without this previously, it is recommended to drop the system
                                        page cache before using this
                                        see https://github.com/ggml-org/llama.cpp/issues/1437
                                        (env: LLAMA_ARG_NUMA)
-dev,  --device <dev1,dev2,..>          comma-separated list of devices to use for offloading (none = don't
                                        offload)
                                        use --list-devices to see a list of available devices
                                        (env: LLAMA_ARG_DEVICE)
--list-devices                          print list of available devices and exit
-ot,   --override-tensor <tensor name pattern>=<buffer type>,...
                                        override tensor buffer type
                                        (env: LLAMA_ARG_OVERRIDE_TENSOR)
-cmoe, --cpu-moe                        keep all Mixture of Experts (MoE) weights in the CPU
                                        (env: LLAMA_ARG_CPU_MOE)
-ncmoe, --n-cpu-moe N                   keep the Mixture of Experts (MoE) weights of the first N layers in the
                                        CPU
                                        (env: LLAMA_ARG_N_CPU_MOE)
-ngl,  --gpu-layers, --n-gpu-layers N   max. number of layers to store in VRAM, either an exact number,
                                        'auto', or 'all' (default: auto)
                                        (env: LLAMA_ARG_N_GPU_LAYERS)
-sm,   --split-mode {none,layer,row,tensor}
                                        how to split the model across multiple GPUs, one of:
                                        - none: use one GPU only
                                        - layer (default): split layers and KV across GPUs (pipelined)
                                        - row: split weight across GPUs by rows (parallelized)
                                        - tensor: split weights and KV across GPUs (parallelized,
                                        EXPERIMENTAL)
                                        (env: LLAMA_ARG_SPLIT_MODE)
-ts,   --tensor-split N0,N1,N2,...      fraction of the model to offload to each GPU, comma-separated list of
                                        proportions, e.g. 3,1
                                        (env: LLAMA_ARG_TENSOR_SPLIT)
-mg,   --main-gpu INDEX                 the GPU to use for the model (with split-mode = none), or for
                                        intermediate results and KV (with split-mode = row) (default: 0)
                                        (env: LLAMA_ARG_MAIN_GPU)
-fit,  --fit [on|off]                   whether to adjust unset arguments to fit in device memory ('on' or
                                        'off', default: 'on')
                                        (env: LLAMA_ARG_FIT)
-fitt, --fit-target MiB0,MiB1,MiB2,...
                                        target margin per device for --fit, comma-separated list of values,
                                        single value is broadcast across all devices, default: 1024
                                        (env: LLAMA_ARG_FIT_TARGET)
-fitc, --fit-ctx N                      minimum ctx size that can be set by --fit option, default: 4096
                                        (env: LLAMA_ARG_FIT_CTX)
--check-tensors                         check model tensor data for invalid values (default: false)
--override-kv KEY=TYPE:VALUE,...        advanced option to override model metadata by key. to specify multiple
                                        overrides, either use comma-separated values.
                                        types: int, float, bool, str. example: --override-kv
                                        tokenizer.ggml.add_bos_token=bool:false,tokenizer.ggml.add_eos_token=bool:false
--op-offload, --no-op-offload           whether to offload host tensor operations to device (default: true)
--lora FNAME                            path to LoRA adapter (use comma-separated values to load multiple
                                        adapters)
--lora-scaled FNAME:SCALE,...           path to LoRA adapter with user defined scaling (format:
                                        FNAME:SCALE,...)
                                        note: use comma-separated values
--control-vector FNAME                  add a control vector
                                        note: use comma-separated values to add multiple control vectors
--control-vector-scaled FNAME:SCALE,...
                                        add a control vector with user defined scaling SCALE
                                        note: use comma-separated values (format: FNAME:SCALE,...)
--control-vector-layer-range START END
                                        layer range to apply the control vector(s) to, start and end inclusive
-m,    --model FNAME                    model path to load
                                        (env: LLAMA_ARG_MODEL)
-mu,   --model-url MODEL_URL            model download url (default: unused)
                                        (env: LLAMA_ARG_MODEL_URL)
-dr,   --docker-repo [<repo>/]<model>[:quant]
                                        Docker Hub model repository. repo is optional, default to ai/. quant
                                        is optional, default to :latest.
                                        example: gemma3
                                        (default: unused)
                                        (env: LLAMA_ARG_DOCKER_REPO)
-hf,   -hfr, --hf-repo <user>/<model>[:quant]
                                        Hugging Face model repository; quant is optional, case-insensitive,
                                        default to Q4_K_M, or falls back to the first file in the repo if
                                        Q4_K_M doesn't exist.
                                        mmproj is also downloaded automatically if available. to disable, add
                                        --no-mmproj
                                        example: ggml-org/GLM-4.7-Flash-GGUF:Q4_K_M
                                        (default: unused)
                                        (env: LLAMA_ARG_HF_REPO)
-hfd,  -hfrd, --hf-repo-draft <user>/<model>[:quant]
                                        Same as --hf-repo, but for the draft model (default: unused)
                                        (env: LLAMA_ARG_HFD_REPO)
-hff,  --hf-file FILE                   Hugging Face model file. If specified, it will override the quant in
                                        --hf-repo (default: unused)
                                        (env: LLAMA_ARG_HF_FILE)
-hfv,  -hfrv, --hf-repo-v <user>/<model>[:quant]
                                        Hugging Face model repository for the vocoder model (default: unused)
                                        (env: LLAMA_ARG_HF_REPO_V)
-hffv, --hf-file-v FILE                 Hugging Face model file for the vocoder model (default: unused)
                                        (env: LLAMA_ARG_HF_FILE_V)
-hft,  --hf-token TOKEN                 Hugging Face access token (default: value from HF_TOKEN environment
                                        variable)
                                        (env: HF_TOKEN)
--log-disable                           Log disable
--log-file FNAME                        Log to file
                                        (env: LLAMA_LOG_FILE)
--log-colors [on|off|auto]              Set colored logging ('on', 'off', or 'auto', default: 'auto')
                                        'auto' enables colors when output is to a terminal
                                        (env: LLAMA_LOG_COLORS)
-v,    --verbose, --log-verbose         Set verbosity level to infinity (i.e. log all messages, useful for
                                        debugging)
--offline                               Offline mode: forces use of cache, prevents network access
                                        (env: LLAMA_OFFLINE)
-lv,   --verbosity, --log-verbosity N   Set the verbosity threshold. Messages with a higher verbosity will be
                                        ignored. Values:
                                         - 0: generic output
                                         - 1: error
                                         - 2: warning
                                         - 3: info
                                         - 4: debug
                                        (default: 1)
                                        
                                        (env: LLAMA_LOG_VERBOSITY)
--log-prefix                            Enable prefix in log messages
                                        (env: LLAMA_LOG_PREFIX)
--log-timestamps                        Enable timestamps in log messages
                                        (env: LLAMA_LOG_TIMESTAMPS)
-ctkd, --cache-type-k-draft TYPE        KV cache data type for K for the draft model
                                        allowed values: f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
                                        (default: f16)
                                        (env: LLAMA_ARG_CACHE_TYPE_K_DRAFT)
-ctvd, --cache-type-v-draft TYPE        KV cache data type for V for the draft model
                                        allowed values: f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
                                        (default: f16)
                                        (env: LLAMA_ARG_CACHE_TYPE_V_DRAFT)
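
The KV cache type flags above (`-ctk`/`-ctv`) can cut VRAM use noticeably on a small card like a 1060 6GB. A minimal sketch — the binary location and model file are assumptions, adjust to your setup:

```shell
# Sketch: quantize the KV cache to q8_0 instead of the default f16.
# Note: quantizing the V cache generally needs flash attention enabled,
# hence -fa on. Paths and model name are placeholders.
./llama-cli \
  -m ./model.gguf \
  -ctk q8_0 -ctv q8_0 \
  -fa on \
  -ngl auto
```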


----- sampling params -----

--samplers SAMPLERS                     samplers that will be used for generation in the order, separated by
                                        ';'
                                        (default:
                                        penalties;dry;top_n_sigma;top_k;typ_p;top_p;min_p;xtc;temperature)
-s,    --seed SEED                      RNG seed (default: -1, use random seed for -1)
--sampler-seq, --sampling-seq SEQUENCE
                                        simplified sequence for samplers that will be used (default:
                                        edskypmxt)
--ignore-eos                            ignore end of stream token and continue generating (implies
                                        --logit-bias EOS-inf)
--temp, --temperature N                 temperature (default: 0.80)
--top-k N                               top-k sampling (default: 40, 0 = disabled)
                                        (env: LLAMA_ARG_TOP_K)
--top-p N                               top-p sampling (default: 0.95, 1.0 = disabled)
--min-p N                               min-p sampling (default: 0.05, 0.0 = disabled)
--top-nsigma, --top-n-sigma N           top-n-sigma sampling (default: -1.00, -1.0 = disabled)
--xtc-probability N                     xtc probability (default: 0.00, 0.0 = disabled)
--xtc-threshold N                       xtc threshold (default: 0.10, 1.0 = disabled)
--typical, --typical-p N                locally typical sampling, parameter p (default: 1.00, 1.0 = disabled)
--repeat-last-n N                       last n tokens to consider for penalize (default: 64, 0 = disabled, -1
                                        = ctx_size)
--repeat-penalty N                      penalize repeat sequence of tokens (default: 1.00, 1.0 = disabled)
--presence-penalty N                    repeat alpha presence penalty (default: 0.00, 0.0 = disabled)
--frequency-penalty N                   repeat alpha frequency penalty (default: 0.00, 0.0 = disabled)
--dry-multiplier N                      set DRY sampling multiplier (default: 0.00, 0.0 = disabled)
--dry-base N                            set DRY sampling base value (default: 1.75)
--dry-allowed-length N                  set allowed length for DRY sampling (default: 2)
--dry-penalty-last-n N                  set DRY penalty for the last n tokens (default: -1, 0 = disable, -1 =
                                        context size)
--dry-sequence-breaker STRING           add sequence breaker for DRY sampling, clearing out default breakers
                                        ('\n', ':', '"', '*') in the process; use "none" to not use any
                                        sequence breakers
--adaptive-target N                     adaptive-p: select tokens near this probability (valid range 0.0 to
                                        1.0; negative = disabled) (default: -1.00)
                                        [(more info)](https://github.com/ggml-org/llama.cpp/pull/17927)
--adaptive-decay N                      adaptive-p: decay rate for target adaptation over time. lower values
                                        are more reactive, higher values are more stable.
                                        (valid range 0.0 to 0.99) (default: 0.90)
--dynatemp-range N                      dynamic temperature range (default: 0.00, 0.0 = disabled)
--dynatemp-exp N                        dynamic temperature exponent (default: 1.00)
--mirostat N                            use Mirostat sampling.
                                        Top K, Nucleus and Locally Typical samplers are ignored if used.
                                        (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)
--mirostat-lr N                         Mirostat learning rate, parameter eta (default: 0.10)
--mirostat-ent N                        Mirostat target entropy, parameter tau (default: 5.00)
-l,    --logit-bias TOKEN_ID(+/-)BIAS   modifies the likelihood of token appearing in the completion,
                                        i.e. `--logit-bias 15043+1` to increase likelihood of token ' Hello',
                                        or `--logit-bias 15043-1` to decrease likelihood of token ' Hello'
--grammar GRAMMAR                       BNF-like grammar to constrain generations (see samples in grammars/
                                        dir)
--grammar-file FNAME                    file to read grammar from
-j,    --json-schema SCHEMA             JSON schema to constrain generations (https://json-schema.org/), e.g.
                                        `{}` for any JSON object
                                        For schemas w/ external $refs, use --grammar +
                                        example/json_schema_to_grammar.py instead
-jf,   --json-schema-file FILE          File containing a JSON schema to constrain generations
                                        (https://json-schema.org/), e.g. `{}` for any JSON object
                                        For schemas w/ external $refs, use --grammar +
                                        example/json_schema_to_grammar.py instead
-bs,   --backend-sampling               enable backend sampling (experimental) (default: disabled)
                                        (env: LLAMA_ARG_BACKEND_SAMPLING)
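
The sampling flags above combine freely on one command line. A sketch with illustrative values (not recommendations — tune per model):

```shell
# Sketch: a fairly conservative sampling setup using the flags listed above.
# Model path is a placeholder; the numeric values are only examples.
./llama-cli -m ./model.gguf \
  --temp 0.7 --top-k 40 --top-p 0.95 --min-p 0.05 \
  --repeat-penalty 1.1 --repeat-last-n 64
```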


----- example-specific params -----

--verbose-prompt                        print a verbose prompt before generation (default: false)
--display-prompt, --no-display-prompt   whether to print prompt at generation (default: true)
-co,   --color [on|off|auto]            Colorize output to distinguish prompt and user input from generations
                                        ('on', 'off', or 'auto', default: 'auto')
                                        'auto' enables colors when output is to a terminal
-ctxcp, --ctx-checkpoints, --swa-checkpoints N
                                        max number of context checkpoints to create per slot (default:
                                        32)[(more info)](https://github.com/ggml-org/llama.cpp/pull/15293)
                                        (env: LLAMA_ARG_CTX_CHECKPOINTS)
-cpent, --checkpoint-every-n-tokens N   create a checkpoint every n tokens during prefill (processing), -1 to
                                        disable (default: 8192)
                                        (env: LLAMA_ARG_CHECKPOINT_EVERY_NT)
-cram, --cache-ram N                    set the maximum cache size in MiB (default: 8192, -1 - no limit, 0 -
                                        disable)[(more
                                        info)](https://github.com/ggml-org/llama.cpp/pull/16391)
                                        (env: LLAMA_ARG_CACHE_RAM)
--context-shift, --no-context-shift     whether to use context shift on infinite text generation (default:
                                        disabled)
                                        (env: LLAMA_ARG_CONTEXT_SHIFT)
-sys,  --system-prompt PROMPT           system prompt to use with model (if applicable, depending on chat
                                        template)
--show-timings, --no-show-timings       whether to show timing information after each response (default: true)
                                        (env: LLAMA_ARG_SHOW_TIMINGS)
-sysf, --system-prompt-file FNAME       a file containing the system prompt (default: none)
-r,    --reverse-prompt PROMPT          halt generation at PROMPT, return control in interactive mode
-sp,   --special                        special tokens output enabled (default: false)
-cnv,  --conversation, -no-cnv, --no-conversation
                                        whether to run in conversation mode:
                                        - does not print special tokens and suffix/prefix
                                        - interactive mode is also enabled
                                        (default: auto enabled if chat template is available)
-st,   --single-turn                    run conversation for a single turn only, then exit when done
                                        will not be interactive if first turn is predefined with --prompt
                                        (default: false)
-mli,  --multiline-input                allows you to write or paste multiple lines without ending each in '\'
--warmup, --no-warmup                   whether to perform warmup with an empty run (default: enabled)
-mm,   --mmproj FILE                    path to a multimodal projector file. see tools/mtmd/README.md
                                        note: if -hf is used, this argument can be omitted
                                        (env: LLAMA_ARG_MMPROJ)
-mmu,  --mmproj-url URL                 URL to a multimodal projector file. see tools/mtmd/README.md
                                        (env: LLAMA_ARG_MMPROJ_URL)
--mmproj-auto, --no-mmproj, --no-mmproj-auto
                                        whether to use multimodal projector file (if available), useful when
                                        using -hf (default: enabled)
                                        (env: LLAMA_ARG_MMPROJ_AUTO)
--mmproj-offload, --no-mmproj-offload   whether to enable GPU offloading for multimodal projector (default:
                                        enabled)
                                        (env: LLAMA_ARG_MMPROJ_OFFLOAD)
--image, --audio FILE                   path to an image or audio file. use with multimodal models, use
                                        comma-separated values for multiple files
--image-min-tokens N                    minimum number of tokens each image can take, only used by vision
                                        models with dynamic resolution (default: read from model)
                                        (env: LLAMA_ARG_IMAGE_MIN_TOKENS)
--image-max-tokens N                    maximum number of tokens each image can take, only used by vision
                                        models with dynamic resolution (default: read from model)
                                        (env: LLAMA_ARG_IMAGE_MAX_TOKENS)
-otd,  --override-tensor-draft <tensor name pattern>=<buffer type>,...
                                        override tensor buffer type for draft model
-cmoed, --cpu-moe-draft                 keep all Mixture of Experts (MoE) weights in the CPU for the draft
                                        model
                                        (env: LLAMA_ARG_CPU_MOE_DRAFT)
-ncmoed, --n-cpu-moe-draft N            keep the Mixture of Experts (MoE) weights of the first N layers in the
                                        CPU for the draft model
                                        (env: LLAMA_ARG_N_CPU_MOE_DRAFT)
--chat-template-kwargs STRING           sets additional params for the json template parser, must be a valid
                                        json object string, e.g. '{"key1":"value1","key2":"value2"}'
                                        (env: LLAMA_CHAT_TEMPLATE_KWARGS)
--jinja, --no-jinja                     whether to use jinja template engine for chat (default: enabled)
                                        (env: LLAMA_ARG_JINJA)
--reasoning-format FORMAT               controls whether thought tags are allowed and/or extracted from the
                                        response, and in which format they're returned; one of:
                                        - none: leaves thoughts unparsed in `message.content`
                                        - deepseek: puts thoughts in `message.reasoning_content`
                                        - deepseek-legacy: keeps `<think>` tags in `message.content` while
                                        also populating `message.reasoning_content`
                                        (default: auto)
                                        (env: LLAMA_ARG_THINK)
-rea,  --reasoning [on|off|auto]        Use reasoning/thinking in the chat ('on', 'off', or 'auto', default:
                                        'auto' (detect from template))
                                        (env: LLAMA_ARG_REASONING)
--reasoning-budget N                    token budget for thinking: -1 for unrestricted, 0 for immediate end,
                                        N>0 for token budget (default: -1)
                                        (env: LLAMA_ARG_THINK_BUDGET)
--reasoning-budget-message MESSAGE      message injected before the end-of-thinking tag when reasoning budget
                                        is exhausted (default: none)
                                        (env: LLAMA_ARG_THINK_BUDGET_MESSAGE)
--chat-template JINJA_TEMPLATE          set custom jinja chat template (default: template taken from model's
                                        metadata)
                                        if suffix/prefix are specified, template will be disabled
                                        only commonly used templates are accepted (unless --jinja is set
                                        before this flag):
                                        list of built-in templates:
                                        bailing, bailing-think, bailing2, chatglm3, chatglm4, chatml,
                                        command-r, deepseek, deepseek-ocr, deepseek2, deepseek3, exaone-moe,
                                        exaone3, exaone4, falcon3, gemma, gigachat, glmedge, gpt-oss, granite,
                                        granite-4.0, grok-2, hunyuan-dense, hunyuan-moe, hunyuan-ocr, kimi-k2,
                                        llama2, llama2-sys, llama2-sys-bos, llama2-sys-strip, llama3, llama4,
                                        megrez, minicpm, mistral-v1, mistral-v3, mistral-v3-tekken,
                                        mistral-v7, mistral-v7-tekken, monarch, openchat, orion,
                                        pangu-embedded, phi3, phi4, rwkv-world, seed_oss, smolvlm, solar-open,
                                        vicuna, vicuna-orca, yandex, zephyr
                                        (env: LLAMA_ARG_CHAT_TEMPLATE)
--chat-template-file JINJA_TEMPLATE_FILE
                                        set custom jinja chat template file (default: template taken from
                                        model's metadata)
                                        if suffix/prefix are specified, template will be disabled
                                        only commonly used templates are accepted (unless --jinja is set
                                        before this flag):
                                        list of built-in templates:
                                        bailing, bailing-think, bailing2, chatglm3, chatglm4, chatml,
                                        command-r, deepseek, deepseek-ocr, deepseek2, deepseek3, exaone-moe,
                                        exaone3, exaone4, falcon3, gemma, gigachat, glmedge, gpt-oss, granite,
                                        granite-4.0, grok-2, hunyuan-dense, hunyuan-moe, hunyuan-ocr, kimi-k2,
                                        llama2, llama2-sys, llama2-sys-bos, llama2-sys-strip, llama3, llama4,
                                        megrez, minicpm, mistral-v1, mistral-v3, mistral-v3-tekken,
                                        mistral-v7, mistral-v7-tekken, monarch, openchat, orion,
                                        pangu-embedded, phi3, phi4, rwkv-world, seed_oss, smolvlm, solar-open,
                                        vicuna, vicuna-orca, yandex, zephyr
                                        (env: LLAMA_ARG_CHAT_TEMPLATE_FILE)
--skip-chat-parsing, --no-skip-chat-parsing
                                        force a pure content parser, even if a Jinja template is specified;
                                        model will output everything in the content section, including any
                                        reasoning and/or tool calls (default: disabled)
                                        (env: LLAMA_ARG_SKIP_CHAT_PARSING)
--simple-io                             use basic IO for better compatibility in subprocesses and limited
                                        consoles
--draft, --draft-n, --draft-max N       number of tokens to draft for speculative decoding (default: 16)
                                        (env: LLAMA_ARG_DRAFT_MAX)
--draft-min, --draft-n-min N            minimum number of draft tokens to use for speculative decoding
                                        (default: 0)
                                        (env: LLAMA_ARG_DRAFT_MIN)
--draft-p-min P                         minimum speculative decoding probability (greedy) (default: 0.75)
                                        (env: LLAMA_ARG_DRAFT_P_MIN)
-cd,   --ctx-size-draft N               size of the prompt context for the draft model (default: 0, 0 = loaded
                                        from model)
                                        (env: LLAMA_ARG_CTX_SIZE_DRAFT)
-devd, --device-draft <dev1,dev2,..>    comma-separated list of devices to use for offloading the draft model
                                        (none = don't offload)
                                        use --list-devices to see a list of available devices
-ngld, --gpu-layers-draft, --n-gpu-layers-draft N
                                        max. number of draft model layers to store in VRAM, either an exact
                                        number, 'auto', or 'all' (default: auto)
                                        (env: LLAMA_ARG_N_GPU_LAYERS_DRAFT)
-md,   --model-draft FNAME              draft model for speculative decoding (default: unused)
                                        (env: LLAMA_ARG_MODEL_DRAFT)
--spec-replace TARGET DRAFT             translate the string in TARGET into DRAFT if the draft model and main
                                        model are not compatible
--gpt-oss-20b-default                   use gpt-oss-20b (note: can download weights from the internet)
--gpt-oss-120b-default                  use gpt-oss-120b (note: can download weights from the internet)
--vision-gemma-4b-default               use Gemma 3 4B QAT (note: can download weights from the internet)
--vision-gemma-12b-default              use Gemma 3 12B QAT (note: can download weights from the internet)
--spec-default                          enable default speculative decoding config
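
For reference, the `-hf` flag above means you can skip manual downloads entirely; this sketch reuses the repo name from the help text's own example (whether that exact repo/quant exists is an assumption):

```shell
# Sketch: fetch a quantized model straight from Hugging Face and chat.
# Repo name taken from the help text example; -c sets context size.
./llama-cli -hf ggml-org/GLM-4.7-Flash-GGUF:Q4_K_M -c 4096
```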

--------------------- Benchmark?

Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz / 256GB DDR4

$ ../llama-cli -m Qwen3.6-35B-A3B-UD-Q2_K_XL.gguf 
load_backend: loaded RPC backend from /home/minimonk/src/llama-b8876/libggml-rpc.so
load_backend: loaded CPU backend from /home/minimonk/src/llama-b8876/libggml-cpu-haswell.so

Loading model...  


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8876-72d693e4f
model      : Qwen3.6-35B-A3B-UD-Q2_K_XL.gguf
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read <file>        add a text file
  /glob <pattern>     add text files using globbing pattern

> hello~ introduce about you         

[Start thinking]
Here's a thinking process:

1.  **Analyze User Input:**
   - User says: "hello~ introduce about you"
   - This is a casual greeting followed by a request for a self-introduction.
   - The tone is friendly and informal.

2.  **Identify Key Requirements:**
   - Acknowledge the greeting
   - Provide a clear, concise self-introduction
   - State my identity (Qwen / Tongyi Qianwen)
   - Mention my developer (Alibaba Group's Tongyi Lab)
   - Highlight my core capabilities (language understanding, reasoning, coding, multimodal processing, etc.)
   - Keep it friendly, accurate, and aligned with the provided guidelines
   - Avoid mentioning version numbers or specific details not in the reference

3.  **Draft Construction (Mental Refinement):**
   Hello! I'm Qwen (also known as Tongyi Qianwen), a large language model developed by Alibaba Group's Tongyi Lab. I'm designed to be helpful, thoughtful, and reliable across a wide range of tasks—from answering questions and writing content to solving complex reasoning problems, coding, and working with images and documents. I support fluent communication in over 100 languages and can adapt to different contexts while maintaining accuracy and safety. I'm here to help you think through ideas, tackle challenges, or just chat, so feel free to ask me anything! How can I assist you today?

4.  **Check Against Guidelines:**
   - Identity: Qwen / Tongyi Qianwen ✓
   - Developer: Alibaba Group's Tongyi Lab ✓
   - Capabilities: Language, reasoning, coding, multimodal, 100+ languages, helpful/safe ✓
   - Tone: Friendly, professional, matches user's casual tone ✓
   - No version numbers, no fabricated stats, no overclaiming ✓

5.  **Final Polish:**
   The draft looks solid. It's concise, accurate, and aligned with all constraints. I'll output it.✅

[End thinking]

Hello! I'm Qwen (also known as Tongyi Qianwen), a large language model developed by Alibaba Group's Tongyi Lab. I'm designed to be helpful, thoughtful, and reliable across a wide range of tasks—whether it's answering questions, writing or editing content, solving complex reasoning problems, coding, analyzing documents, or working with images. I support fluent communication in over 100 languages and can adapt to different tones, formats, and contexts while prioritizing accuracy and safety.

I'm here to help you think through ideas, tackle challenges, or just chat, so feel free to ask me anything. How can I assist you today? 😊

Prompt: 29.6 t/s | Generation: 11.0 t/s ]



Exiting...
common_memory_breakdown_print: | memory breakdown [MiB] | total   free     self   model   context   compute    unaccounted |
common_memory_breakdown_print: |   - Host               |                 17428 = 11437 +    5182 +     808                |
common_memory_breakdown_print: |   - CPU_REPACK         |                   272 =   272 +       0 +       0                |
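The t/s figures printed by the interactive CLI depend on the prompt. For repeatable numbers, the same release also ships a llama-bench binary; a minimal sketch reusing the model above (prompt/generation token counts and thread count are arbitrary choices, not recommendations):

```shell
# Repeatable throughput measurement with llama-bench.
#   -p : prompt length in tokens
#   -n : tokens to generate
#   -t : CPU threads
../llama-bench \
    -m Qwen3.6-35B-A3B-UD-Q2_K_XL.gguf \
    -p 512 -n 128 \
    -t 16
```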

 

$ ../llama-cli -m Llama-3.2-3B-Instruct-Q4_K_M.gguf
load_backend: loaded RPC backend from /home/minimonk/src/llama-b8876/libggml-rpc.so
load_backend: loaded CPU backend from /home/minimonk/src/llama-b8876/libggml-cpu-haswell.so

Loading model...  


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8876-72d693e4f
model      : Llama-3.2-3B-Instruct-Q4_K_M.gguf
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read <file>        add a text file
  /glob <pattern>     add text files using globbing pattern


> 안녕? 너에 대해서 설명해줘

안녕하세요! 저는 지능형 чattybot 인你(You)와 대화할 수 있는 AI입니다.

저는 지능학, 자연어 처리,_machine learning 및 다른 기술을 바탕으로 만든 AI입니다. 이 기술은 인간의 언어를 이해하고,natural language processing (NLP)로 human과 대화할 수 있는能力을 허용합니다.

저는 다양한 topic과 subject에 대해 지식을 제공할 수 있으며, human과 대화할 수 있습니다. 저는 human의 질문을 읽고, 적절한 대답을 제공할 수 있습니다.

저는 다음 capability을 제공합니다:

* natural language understanding: human의 언어를 이해할 수 있습니다.
* text generation: human의 질문에 대해 적절한 대답을 제공할 수 있습니다.
* conversation: human과 대화할 수 있습니다.
* question answering: human의 질문에 대해 적절한 대답을 제공할 수 있습니다.

저는 human과 대화할 수 있는 다양한 application과 technology를 제공합니다. 예를 들어, human의 질문을 읽어 대답할 수 있는 chatbot, human의 언어를 이해하고, natural language processing를 사용하여 human과 대화할 수 있는 application, human의 질문에 대해 적절한 대답을 제공할 수 있는 Q&A 시스템, etc.

저는 human의 언어를 이해하고, natural language processing를 사용하여 human과 대화할 수 있는 ability을 갖습니다. human의 질문을 읽어 대답할 수 있으며, human의 언어를 이해하고, natural language processing를 사용하여 human과 대화할 수 있습니다.

안녕하세요! 저는 지능형 चattybot 인 you(You)와 대화할 수 있는 AI입니다. 저는 human의 질문을 읽고, 적절한 대답을 제공할 수 있습니다. human의 언어를 이해하고, natural language processing를 사용하여 human과 대화할 수 있습니다.

Prompt: 89.7 t/s | Generation: 20.6 t/s ]



Exiting...
common_memory_breakdown_print: | memory breakdown [MiB] | total   free     self   model   context   compute    unaccounted |
common_memory_breakdown_print: |   - Host               |                 16654 =  1904 +   14336 +     414                |
common_memory_breakdown_print: |   - CPU_REPACK         |                  1299 =  1299 +       0 +       0                |

 

$ ../llama-cli -m gemma-4-E2B-it-Q4_K_M.gguf
load_backend: loaded RPC backend from /home/minimonk/src/llama-b8876/libggml-rpc.so
load_backend: loaded CPU backend from /home/minimonk/src/llama-b8876/libggml-cpu-haswell.so

Loading model...  


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8876-72d693e4f
model      : gemma-4-E2B-it-Q4_K_M.gguf
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read <file>        add a text file
  /glob <pattern>     add text files using globbing pattern


> 안녕? 너에 대해서 설명해줘

[Start thinking]
Thinking Process:

1.  **Analyze the Request:** The user said "안녕? 너에 대해서 설명해줘" (Hello? Please explain about yourself). This is a request for identity and capabilities.
2.  **Determine Core Identity (Self-Correction/Verification):**
    *   Name: Gemma 4.
    *   Developer: Google DeepMind.
    *   Nature: Large Language Model (LLM).
    *   Type: Open weights model.
    *   Capabilities: Understands text/images, generates text. Knowledge cutoff (Jan 2025).
3.  **Determine Response Language:** The user asked in Korean, so the response must be in fluent, natural Korean.
4.  **Structure the Explanation (Standard LLM Introduction):**
    *   Greeting/Identity (Who am I?).
    *   Developer (Who made me?).
    *   Nature/Function (What am I?).
    *   Capabilities (What can I do?).
    *   Limitations (What can't I do/knowledge scope?).
5.  **Draft the Content (in Korean):**

    *   *Identity:* 저는 Gemma 4입니다. (I am Gemma 4.)
    *   *Developer/Nature:* 저는 Google DeepMind에서 개발한 대규모 언어 모델(LLM)입니다. 오픈 웨이트 모델입니다. (I am a Large Language Model developed by Google DeepMind. I am an open weights model.)
    *   *Function:* 저는 텍스트와 이미지를 이해하고, 사용자의 질문에 답하거나 글을 작성하는 등 다양한 언어 작업을 수행할 수 있습니다. (I can perform various language tasks such as understanding text and images, answering user questions, or writing text.)
    *   *Input/Output:* 저는 텍스트를 입력받아 텍스트로 출력합니다. (I receive text input and output text.)
    *   *Knowledge/Limitation:* 제 지식은 학습된 데이터를 기반으로 하며, 특정 시점(2025년 1월)까지의 정보를 알고 있습니다. (My knowledge is based on trained data, and I know information up to a specific point (January 2025).)

6.  **Review and Refine (Ensure natural flow and adherence to instructions):** The drafted points cover all necessary identity requirements and maintain the required persona (Gemma 4, open weights, LLM). (The Korean draft looks good.)

7.  **Final Output Generation.** (Proceed to generate the Korean response.)
[End thinking]

안녕하세요! 저는 **Gemma 4**입니다. 저에 대해 몇 가지 핵심 정보를 설명해 드릴게요.

**1. 정체성 및 개발 배경**

*   **이름:** Gemma 4
*   **개발사:** Google DeepMind에서 개발되었습니다.
*   **성격:** 저는 대규모 언어 모델(LLM, Large Language Model)이며, 오픈 웨이트(open weights) 모델입니다. 이는 저의 모델 구조와 가중치가 공개되어 누구나 접근하고 활용할 수 있음을 의미합니다.

**2. 주요 기능 및 능력**

저는 방대한 양의 데이터를 학습하여 다음과 같은 다양한 언어 관련 작업을 수행할 수 있습니다.

*   **텍스트 이해 및 생성:** 사용자가 입력하는 텍스트를 이해하고, 질문에 답하거나, 요약하거나, 번역하거나, 창의적인 글(시, 이야기, 코드 등)을 작성할 수 있습니다.
*   **다중 모달리티 이해:** 저는 텍스트뿐만 아니라 이미지를 이해하고 처리할 수 있습니다. (다만, 출력은 텍스트로만 가능합니다.)
*   **추론 및 논리:** 복잡한 질문에 대해 논리적으로 추론하고 일관성 있는 답변을 제공할 수 있습니다.

**3. 지식 범위 및 한계**

*   **지식 기반:** 저는 학습된 데이터를 기반으로 답변하며, 저의 지식은 특정 시점(2025년 1월)까지의 정보를 포함하고 있습니다. 따라서 그 이후의 최신 정보에 대해서는 알지 못할 수 있습니다.
*   **출력 형식:** 저는 오직 **텍스트** 형태로만 정보를 생성할 수 있습니다. 이미지를 직접 생성할 수는 없습니다.

궁금한 점이 있거나 도움이 필요하시면 언제든지 말씀해 주세요! 최선을 다해 답변해 드리겠습니다.

Prompt: 107.4 t/s | Generation: 19.1 t/s ]



Exiting...
common_memory_breakdown_print: | memory breakdown [MiB] | total   free    self   model   context   compute    unaccounted |
common_memory_breakdown_print: |   - Host               |                 4517 =  2947 +     780 +     790                |
common_memory_breakdown_print: |   - CPU_REPACK         |                 1069 =  1069 +       0 +       0                |
Posted by 구차니

Qwen is hot these days, so here's a link I happened across.. maybe running it will be easier with this?

 

[Link: https://unsloth.ai/docs/models/qwen3.6]

Posted by 구차니

On Linux I just installed it without any particular configuration. The default is apparently not open to every IP after all; it listens on 127.0.0.1 only:

$ netstat -tnlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:11434         0.0.0.0:*               LISTEN      -  

 

On Windows there's a setting for this in the GUI client.. though it seems I never actually checked the port there..

[Link: http://practical.kr/?p=809]
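If remote access is actually wanted, ollama reads its bind address from the OLLAMA_HOST environment variable. A sketch for a systemd-managed install (assuming the service is named ollama, as the official Linux installer sets it up):

```shell
# Make ollama listen on all interfaces instead of the 127.0.0.1 default.
# Add under [Service] in the override editor:
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl edit ollama
sudo systemctl restart ollama

# Verify the new bind address (should now show 0.0.0.0:11434)
netstat -tnlp | grep 11434
```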

Posted by 구차니
embeded/jetson | 2026. 4. 20. 16:05

Found an odd keyword while looking up jetracer.

So you skip the RC car and just build one, roughly(?), with a 3D printer?

Anyway, it appears to work with an OpenCV-based image-recognition library.

 

[Link: https://robertchoi.gitbook.io/jetracer/2.1/2.1.3]

[Link: https://www.waveshare.com/wiki/DonkeyCar_for_Jetson_Nano-WEB_Control?srsltid=AfmBOorSke-fiZEMba2b0M7w8IwbQlsZjIt_j1XW-GdimRkN-YkjlPyv]

 

[Link: https://github.com/martyniak/fork-waveshare-donkeycar]

Posted by 구차니

For this kind of thing AI really is the best (trying to dig it all up yourself just to verify, ugh...).

 

Boot sequence

SEC (Security Phase)

PEI (Pre-EFI Initialization)

DXE (Driver Execution Environment)

BDS (Boot Device Selection)

[Link: https://chatgpt.com/share/69e5bbfb-e140-83e8-81fb-9ef5439c131f]

 

If the error codes follow the boot order.. then they should appear in SEC / PEI / DXE / BDS order.. I suppose?

Anyway, it mostly gets stuck at 67 / 69 or b7 / b9, which looks like it dies in the System Agent DXE initialization stage.

From what I read, if PEI is simply setting up the memory controller,

then the DXE phase seems to be where data is actually loaded into memory to carry on booting, and where things like channel configuration actually happen,

so even though the message looks completely unrelated to memory, it actually appears to be caused by a memory problem.

[Link: https://www.asus.com/kr/motherboards-components/motherboards/workstation/z10ped16_ws/]

Posted by 구차니
파일방 | 2026. 4. 20. 10:55

I've occasionally seen files with the appimage extension downloaded and run directly, but I didn't know how they're made, so I looked it up.

 

 

[Link: https://appimagelauncher.com/]

[Link: https://github.com/TheAssassin/AppImageLauncher]

 

[Link: https://github.com/linuxdeploy/linuxdeploy-plugin-appimage]
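For reference, building an AppImage with linuxdeploy roughly looks like the sketch below; the application names (myapp, myapp.desktop, myapp.png) are placeholders, and the linuxdeploy AppImage itself must be downloaded first:

```shell
# Sketch: package a binary into an AppImage with linuxdeploy.
#   --appdir : staging directory to assemble
#   -e       : executable to bundle (placeholder name)
#   -d       : .desktop file (placeholder)
#   -i       : icon file (placeholder)
#   --output appimage : produce the final .AppImage
./linuxdeploy-x86_64.AppImage \
    --appdir AppDir \
    -e myapp \
    -d myapp.desktop \
    -i myapp.png \
    --output appimage
```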

Posted by 구차니