Tried building for an older CUDA architecture on Linux, then just gave up.

$ nvidia-smi --query-gpu=name,compute_cap --format=csv
name, compute_cap
NVIDIA GeForce GTX 1080 Ti, 6.1
NVIDIA GeForce GTX 1080 Ti, 6.1

 

Might also be worth trying to build with an older architecture, e.g. -DCMAKE_CUDA_ARCHITECTURES="75" (which will be run via PTX JIT compilation), to check whether the issue is related to building with 80.

[Link : https://github.com/ggml-org/llama.cpp/issues/9727]

 

Mine is compute capability 6.1, so the architecture flag should be 61, but when I actually set it, it says that's unsupported.

So I checked CUDA: it's 13.x.

Is that why the 12.x build worked on Windows?

$ cd /usr/local/cuda
cuda/      cuda-13/   cuda-13.0/
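For reference, a Pascal card (compute capability 6.1) is below what the CUDA 13 toolkit can target; as far as I can tell from NVIDIA's release notes, 13.0 dropped Maxwell/Pascal/Volta support, leaving 7.5 (Turing) as the minimum. A tiny sanity-check sketch (the version-to-minimum-CC table is my assumption, not queried from the toolkit):

```python
# Minimum compute capability each toolkit generation can still target.
# Assumption from release notes: CUDA 13.0 removed Maxwell/Pascal/Volta,
# so 7.5 (Turing) is the floor from 13.x onward; 12.x still goes down to 5.0.
MIN_CC = {"12.x": 5.0, "13.x": 7.5}

def can_build_for(cc, toolkit):
    """True if the toolkit can still emit code for this compute capability."""
    return cc >= MIN_CC[toolkit]

print(can_build_for(6.1, "13.x"))  # False: GTX 1080 Ti is out
print(can_build_for(6.1, "12.x"))  # True
```

Which would explain why `-DCMAKE_CUDA_ARCHITECTURES="61"` is rejected under a CUDA 13 install.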

 

cuda - driver version

[Link : https://2dudwns.tistory.com/21]

[Link : https://www.reddit.com/r/comfyui/comments/1qjp7vs/nvidia_geforce_gtx_1070_with_cuda_capability_sm/]

[Link : https://arca.live/b/alpaca/112167556]

Posted by 구차니
Uncategorized, 2026. 4. 25. 14:30

Vulkan runs fine with a single graphics card, but with two it just doesn't seem to work at all.

 

It dies rather spectacularly,

and asking GPT only gets vague hand-waving.

Supposedly the missing wait4.c isn't the important part, but even after installing debug symbols the same thing shows up.

~/Downloads/llama-b8902/model$ ../llama-cli -m ./gemma-4-E2B-it-Q4_K_M.gguf 
load_backend: loaded RPC backend from /home/minimonk/Downloads/llama-b8902/libggml-rpc.so
load_backend: loaded Vulkan backend from /home/minimonk/Downloads/llama-b8902/libggml-vulkan.so
load_backend: loaded CPU backend from /home/minimonk/Downloads/llama-b8902/libggml-cpu-haswell.so

Loading model... //home/runner/work/llama.cpp/llama.cpp/ggml/src/ggml-backend.cpp:1367: GGML_ASSERT(n_inputs < GGML_SCHED_MAX_SPLIT_INPUTS) failed
[New LWP 3559]
[New LWP 3560]
[New LWP 3561]
[New LWP 3563]
[New LWP 3564]
[New LWP 3565]
[New LWP 3568]
[New LWP 3570]
[New LWP 3571]
[New LWP 3574]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007ed9e18ea42f in __GI___wait4 (pid=3575, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
#0  0x00007ed9e18ea42f in __GI___wait4 (pid=3575, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30 in ../sysdeps/unix/sysv/linux/wait4.c
#1  0x00007ed9e1f5b71b in ggml_print_backtrace () from /home/minimonk/Downloads/llama-b8902/libggml-base.so.0
#2  0x00007ed9e1f5b8b2 in ggml_abort () from /home/minimonk/Downloads/llama-b8902/libggml-base.so.0
#3  0x00007ed9e1f77f2d in ggml_backend_sched_split_graph () from /home/minimonk/Downloads/llama-b8902/libggml-base.so.0
#4  0x00007ed9e1f780b5 in ggml_backend_sched_reserve_size () from /home/minimonk/Downloads/llama-b8902/libggml-base.so.0
#5  0x00007ed9e20b0fa4 in llama_context::graph_reserve(unsigned int, unsigned int, unsigned int, llama_memory_context_i const*, bool, unsigned long*) () from /home/minimonk/Downloads/llama-b8902/libllama.so.0
#6  0x00007ed9e20b1adf in llama_context::sched_reserve() () from /home/minimonk/Downloads/llama-b8902/libllama.so.0
#7  0x00007ed9e20b5869 in llama_context::llama_context(llama_model const&, llama_context_params) () from /home/minimonk/Downloads/llama-b8902/libllama.so.0
#8  0x00007ed9e20b649b in llama_init_from_model () from /home/minimonk/Downloads/llama-b8902/libllama.so.0
#9  0x00007ed9e26282a2 in common_get_device_memory_data(char const*, llama_model_params const*, llama_context_params const*, std::vector<ggml_backend_device*, std::allocator<ggml_backend_device*> >&, unsigned int&, unsigned int&, unsigned int&, ggml_log_level) () from /home/minimonk/Downloads/llama-b8902/libllama-common.so.0
#10 0x00007ed9e2628f52 in common_params_fit_impl(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long*, unsigned int, ggml_log_level) () from /home/minimonk/Downloads/llama-b8902/libllama-common.so.0
#11 0x00007ed9e262c9e2 in common_fit_params(char const*, llama_model_params*, llama_context_params*, float*, llama_model_tensor_buft_override*, unsigned long*, unsigned int, ggml_log_level) () from /home/minimonk/Downloads/llama-b8902/libllama-common.so.0
#12 0x00007ed9e25fe2ff in common_init_result::common_init_result(common_params&) () from /home/minimonk/Downloads/llama-b8902/libllama-common.so.0
#13 0x00007ed9e25ff9a8 in common_init_from_params(common_params&) () from /home/minimonk/Downloads/llama-b8902/libllama-common.so.0
#14 0x00006547aab18d7e in server_context_impl::load_model(common_params&) ()
#15 0x00006547aaa549b5 in main ()
[Inferior 1 (process 3556) detached]
Aborted (core dumped)

 

 

-sm none

With this option it runs on GPU 0, but adding -v and looking at the output,

the memory allocation is 0 MB? Huh?

load_tensors: layer   0 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   1 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   2 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   3 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   4 assigned to device Vulkan0, is_swa = 0
load_tensors: layer   5 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   6 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   7 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   8 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   9 assigned to device Vulkan0, is_swa = 0
load_tensors: layer  10 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  11 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  12 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  13 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  14 assigned to device Vulkan0, is_swa = 0
load_tensors: layer  15 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  16 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  17 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  18 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  19 assigned to device Vulkan1, is_swa = 0
load_tensors: layer  20 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  21 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  22 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  23 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  24 assigned to device Vulkan1, is_swa = 0
load_tensors: layer  25 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  26 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  27 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  28 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  29 assigned to device Vulkan1, is_swa = 0
load_tensors: layer  30 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  31 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  32 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  33 assigned to device Vulkan1, is_swa = 1
load_tensors: layer  34 assigned to device Vulkan1, is_swa = 0
load_tensors: layer  35 assigned to device Vulkan1, is_swa = 0


load_tensors: offloading output layer to GPU
load_tensors: offloading 34 repeating layers to GPU
load_tensors: offloaded 36/36 layers to GPU
load_tensors:      Vulkan0 model buffer size =     0.00 MiB
load_tensors:      Vulkan1 model buffer size =     0.00 MiB
load_tensors:  Vulkan_Host model buffer size =     0.00 MiB
llama_context: constructing llama_context
load_tensors: layer   0 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   1 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   2 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   3 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   4 assigned to device Vulkan0, is_swa = 0
load_tensors: layer   5 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   6 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   7 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   8 assigned to device Vulkan0, is_swa = 1
load_tensors: layer   9 assigned to device Vulkan0, is_swa = 0
load_tensors: layer  10 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  11 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  12 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  13 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  14 assigned to device Vulkan0, is_swa = 0
load_tensors: layer  15 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  16 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  17 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  18 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  19 assigned to device Vulkan0, is_swa = 0
load_tensors: layer  20 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  21 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  22 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  23 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  24 assigned to device Vulkan0, is_swa = 0
load_tensors: layer  25 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  26 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  27 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  28 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  29 assigned to device Vulkan0, is_swa = 0
load_tensors: layer  30 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  31 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  32 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  33 assigned to device Vulkan0, is_swa = 1
load_tensors: layer  34 assigned to device Vulkan0, is_swa = 0
load_tensors: layer  35 assigned to device Vulkan0, is_swa = 0

load_tensors: offloading output layer to GPU
load_tensors: offloading 34 repeating layers to GPU
load_tensors: offloaded 36/36 layers to GPU
load_tensors:   CPU_Mapped model buffer size =  1756.00 MiB
load_tensors:      Vulkan0 model buffer size =  1407.73 MiB
.|.../.........-..........\..........|...
common_init_result: added <eos> logit bias = -inf
common_init_result: added <|tool_response> logit bias = -inf
common_init_result: added <turn|> logit bias = -inf
llama_context: constructing llama_context

 

 

 

I tried setting the Vulkan-visible graphics cards to one (device 0, then device 1) and to two,

and it feels like Vulkan fails to allocate memory and dies.

$ GGML_VK_VISIBLE_DEVICES=0 ./llama-cli -m ../model/gemma-4-E2B-it-Q4_K_M.gguf -v
llama_model_load_from_file_impl: using device Vulkan0 (NVIDIA GeForce GTX 1080 Ti) (0000:01:00.0) - 10985 MiB free

load_tensors: offloading output layer to GPU
load_tensors: offloading 34 repeating layers to GPU
load_tensors: offloaded 36/36 layers to GPU
load_tensors:      Vulkan0 model buffer size =     0.00 MiB
load_tensors:  Vulkan_Host model buffer size =     0.00 MiB

llama_context: enumerating backends
llama_context: backend_ptrs.size() = 2
sched_reserve: reserving ...
sched_reserve: max_nodes = 4816
sched_reserve: reserving full memory module
sched_reserve: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 1
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
sched_reserve: Flash Attention was auto, set to enabled
sched_reserve: resolving fused Gated Delta Net support:
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
sched_reserve: fused Gated Delta Net (autoregressive) enabled
graph_reserve: reserving a graph for ubatch with n_tokens =   16, n_seqs =  1, n_outputs =   16
sched_reserve: fused Gated Delta Net (chunked) enabled
graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
sched_reserve:    Vulkan0 compute buffer size =   515.00 MiB
sched_reserve: Vulkan_Host compute buffer size =   281.52 MiB
sched_reserve: graph nodes  = 1500
sched_reserve: graph splits = 2
sched_reserve: reserve took 13.68 ms, sched copies = 1
common_memory_breakdown_print: | memory breakdown [MiB]    | total    free    self   model   context   compute       unaccounted |
common_memory_breakdown_print: |   - Vulkan0 (GTX 1080 Ti) | 11510 = 10975 + (2702 =  1407 +     780 +     515) + 17592186042248 |
common_memory_breakdown_print: |   - Host                  |                  2037 =  1756 +       0 +     281                   |
$ GGML_VK_VISIBLE_DEVICES=1 ./llama-cli -m ../model/gemma-4-E2B-it-Q4_K_M.gguf -v
llama_model_load_from_file_impl: using device Vulkan0 (NVIDIA GeForce GTX 1080 Ti) (0000:02:00.0) - 11393 MiB free

load_tensors: offloading output layer to GPU
load_tensors: offloading 34 repeating layers to GPU
load_tensors: offloaded 36/36 layers to GPU
load_tensors:      Vulkan0 model buffer size =     0.00 MiB
load_tensors:  Vulkan_Host model buffer size =     0.00 MiB

llama_context: enumerating backends
llama_context: backend_ptrs.size() = 2
sched_reserve: reserving ...
sched_reserve: max_nodes = 4816
sched_reserve: reserving full memory module
sched_reserve: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 1
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
sched_reserve: Flash Attention was auto, set to enabled
sched_reserve: resolving fused Gated Delta Net support:
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
sched_reserve: fused Gated Delta Net (autoregressive) enabled
graph_reserve: reserving a graph for ubatch with n_tokens =   16, n_seqs =  1, n_outputs =   16
sched_reserve: fused Gated Delta Net (chunked) enabled
graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
sched_reserve:    Vulkan0 compute buffer size =   515.00 MiB
sched_reserve: Vulkan_Host compute buffer size =   281.52 MiB
sched_reserve: graph nodes  = 1500
sched_reserve: graph splits = 2
sched_reserve: reserve took 12.95 ms, sched copies = 1
common_memory_breakdown_print: | memory breakdown [MiB]    | total    free    self   model   context   compute       unaccounted |
common_memory_breakdown_print: |   - Vulkan0 (GTX 1080 Ti) | 11510 = 11382 + (2702 =  1407 +     780 +     515) + 17592186041840 |
common_memory_breakdown_print: |   - Host                  |                  2037 =  1756 +       0 +     281                   |
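The gigantic "unaccounted" number in the breakdown looks like unsigned underflow rather than real memory: total - free - self comes out slightly negative here, and if that subtraction happens in unsigned 64-bit bytes and is then divided by 2^20 for MiB, the result lands right around 2^44. This is my interpretation only, not checked against the llama.cpp source:

```python
MiB = 1 << 20

def unaccounted_mib(total, free, self_mib):
    """Reproduce the breakdown column assuming unsigned 64-bit byte math."""
    diff = (total * MiB - free * MiB - self_mib * MiB) % (1 << 64)
    return diff // MiB

# 11510 - 10975 - 2702 = -2167 MiB, which wraps to roughly 2^44 MiB --
# within rounding of the logged 17592186042248 (the logged MiB are rounded).
print(unaccounted_mib(11510, 10975, 2702))
```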

$ GGML_VK_VISIBLE_DEVICES=0,1 ./llama-cli -m ../model/gemma-4-E2B-it-Q4_K_M.gguf -v
llama_model_load_from_file_impl: using device Vulkan0 (NVIDIA GeForce GTX 1080 Ti) (0000:01:00.0) - 11009 MiB free
llama_model_load_from_file_impl: using device Vulkan1 (NVIDIA GeForce GTX 1080 Ti) (0000:02:00.0) - 11392 MiB free

load_tensors: offloading output layer to GPU
load_tensors: offloading 34 repeating layers to GPU
load_tensors: offloaded 36/36 layers to GPU
load_tensors:      Vulkan0 model buffer size =     0.00 MiB
load_tensors:      Vulkan1 model buffer size =     0.00 MiB
load_tensors:  Vulkan_Host model buffer size =     0.00 MiB

llama_context: enumerating backends
llama_context: backend_ptrs.size() = 3
llama_context: pipeline parallelism enabled
sched_reserve: reserving ...
sched_reserve: max_nodes = 4824
sched_reserve: reserving full memory module
sched_reserve: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 1
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
sched_reserve: Flash Attention was auto, set to enabled
sched_reserve: resolving fused Gated Delta Net support:
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
sched_reserve: fused Gated Delta Net (autoregressive) enabled
graph_reserve: reserving a graph for ubatch with n_tokens =   16, n_seqs =  1, n_outputs =   16
sched_reserve: fused Gated Delta Net (chunked) enabled
graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
/home/runner/work/llama.cpp/llama.cpp/ggml/src/ggml-backend.cpp:1367: GGML_ASSERT(n_inputs < GGML_SCHED_MAX_SPLIT_INPUTS) failed
[New LWP 12120]
[New LWP 12121]
[New LWP 12122]
[New LWP 12124]
[New LWP 12125]
[New LWP 12126]
[New LWP 12129]
[New LWP 12131]
[New LWP 12132]
[New LWP 12135]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x0000743ccfeea42f in __GI___wait4 (pid=12136, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.

Posted by 구차니

A write-up that goes into more detail.

I wondered how the sentence actually gets fed in:

the embedded vectors go in one position at a time,

and each output gets concatenated to the input and fed back in again,

stopping either at the maximum token count or

when EOS (End of Sequence) comes out.

Since each output vector is appended and fed back through the output end,

it seems the whole thing isn't recomputed; only the output end is computed quickly, so the results stream out.
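The loop described above can be sketched like this (a toy next-token function stands in for the transformer; with a KV cache the real forward pass only computes the newest position, which is why generation streams out quickly):

```python
EOS = 0  # made-up end-of-sequence token id for this toy example

def toy_next_token(tokens):
    # Stand-in for the model's forward pass: just counts down to EOS.
    return tokens[-1] - 1 if tokens[-1] > 1 else EOS

def generate(prompt, max_tokens=16):
    """Autoregressive decoding: append each output token to the input and
    feed it back in, until EOS or the token limit."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = toy_next_token(tokens)
        tokens.append(nxt)
        if nxt == EOS:
            break
    return tokens

print(generate([5, 4]))  # counts down to EOS: [5, 4, 3, 2, 1, 0]
```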

 

Really... it's like magic.

 

[링크 : https://tech.kakaopay.com/post/how-llm-works/]

Posted by 구차니

Running the card at its stock 250 W:

 

Limiting it to 125 W:

 

If you count 30 tokens/s and 35 tokens/s as about the same, they're at a similar level,

so eating half the electricity for only a slight slowdown seems like a decent setting.
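In tokens per joule the limited setting actually comes out ahead (plugging in the rough numbers above, 35 t/s at 250 W vs 30 t/s at 125 W; these are eyeballed figures from my runs, not a careful benchmark):

```python
def tokens_per_joule(tokens_per_s, watts):
    # 1 W = 1 J/s, so (tokens/s) / (J/s) = tokens per joule.
    return tokens_per_s / watts

stock   = tokens_per_joule(35, 250)  # 0.14 tokens/J at 250 W
limited = tokens_per_joule(30, 125)  # 0.24 tokens/J at 125 W
print(f"{limited / stock:.2f}x the energy efficiency at half the power")
```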

Posted by 구차니

Running it with llama.cpp's llama-server, you can't switch models, but

you get a web UI similar to ChatGPT or Claude.

 

It also briefly shows the token generation speed.

 

The settings are as follows.

 

 

Posted by 구차니

I expected a performance difference between the two (figured CUDA would come out ahead...),

but they came out essentially identical, within the margin of error.

So does that mean it's usable on Linux even without a CUDA build?

 

Though, maybe because it's a 1080 Ti, the CUDA 13 build malfunctions and the 12 build is what worked.

D:\study\llm\llama-b8916-bin-win-cuda-13.1-x64>llama-cli.exe -m ..\gemma-4-E2B-it-Q4_K_M.gguf
ggml_cuda_init: found 1 CUDA devices (Total VRAM: 11263 MiB):
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
load_backend: loaded CUDA backend from D:\study\llm\llama-b8916-bin-win-cuda-13.1-x64\ggml-cuda.dll
load_backend: loaded RPC backend from D:\study\llm\llama-b8916-bin-win-cuda-13.1-x64\ggml-rpc.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8916-bin-win-cuda-13.1-x64\ggml-cpu-haswell.dll

Loading model... /ggml_cuda_compute_forward: SCALE failed
CUDA error: no kernel image is available for execution on the device
  current device: 0, in function ggml_cuda_compute_forward at D:\a\llama.cpp\llama.cpp\ggml\src\ggml-cuda\ggml-cuda.cu:2962
  err
D:\a\llama.cpp\llama.cpp\ggml\src\ggml-cuda\ggml-cuda.cu:97: CUDA error

 

It's the gemma4 e2b model, a 2.89 GB file, and it seems to consume memory pretty honestly.

Fri Apr 24 22:57:19 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 582.28                 Driver Version: 582.28         CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti   WDDM  |   00000000:01:00.0 Off |                  N/A |
| 27%   39C    P8             11W /  250W |    2936MiB /  11264MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            1400    C+G   ...-win-vulkan-x64\llama-cli.exe      N/A      |
|    0   N/A  N/A            1452      C   ...al\Programs\Ollama\ollama.exe      N/A      |
|    0   N/A  N/A            9728    C+G   ...em32\Kinect\KinectService.exe      N/A      |
|    0   N/A  N/A           13036    C+G   ...s\Win64\EpicGamesLauncher.exe      N/A      |
+-----------------------------------------------------------------------------------------+

 

D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli.exe -m ..\gemma-4-E2B-it-Q4_K_M.gguf
ggml_cuda_init: found 1 CUDA devices (Total VRAM: 11263 MiB):
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
load_backend: loaded CUDA backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cuda.dll
load_backend: loaded RPC backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-rpc.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cpu-haswell.dll

Loading model...


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8918-e583f3b4f
model      : gemma-4-E2B-it-Q4_K_M.gguf
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read <file>        add a text file
  /glob <pattern>     add text files using globbing pattern


> 안녕?

안녕하세요! 저는 사용자님을 돕기 위해 여기 있습니다. 😊

궁금한 점이 있으시거나 도움이 필요하시면 언제든지 말씀해 주세요!

[ Prompt: 182.3 t/s | Generation: 84.7 t/s ]

> 헬로?

[Start thinking]
Thinking Process:

1.  **Analyze the input:** The user input is "헬로?" (Hello?). It's a very informal greeting, essentially a variation of "Hello?".
2.  **Determine the context:** This is a casual interaction following a previous greeting ("안녕?" -> "안녕하세요!"). The user is acknowledging or testing the connection.
3.  **Formulate the response strategy:**
    *   Acknowledge the greeting warmly.
    *   Match the casual tone (but remain polite and helpful).
    *   Reiterate availability.
    *   Invite further interaction.
4.  **Draft potential responses (in Korean):**
    *   (Option 1 - Simple echo): 네, 안녕하세요. (Yes, hello.)
    *   (Option 2 - Enthusiastic): 네, 안녕하세요! 또 무엇을 도와드릴까요? (Yes, hello! What else can I help you with?)
    *   (Option 3 - Casual/Friendly): 네! 저는 준비되어 있습니다. 😊 (Yes! I am ready.)
5.  **Select the best response:** Option 2 is the most effective as it confirms the interaction and immediately prompts the user to state their need, moving the conversation forward.

6.  **Final Polish:** (Translate the chosen response into natural Korean.)
[End thinking]

네, 안녕하세요! 😊

저는 언제든지 사용자님의 질문에 답하거나 필요한 정보를 제공해 드릴 준비가 되어 있습니다.

궁금한 것이 있으신가요?

[ Prompt: 321.5 t/s | Generation: 84.8 t/s ]

> 으아앙

[Start thinking]
Thinking Process:

1.  **Analyze the input:** The user input is "으아앙" (Eu-aang).
2.  **Determine the nature of the input:** "으아앙" is an interjection or sound, often used to express various emotions (pain, excitement, frustration, distress, playful sounds, etc.). In this context, it's highly informal and emotional.
3.  **Assess the context:** The previous interactions were simple greetings ("안녕?", "헬로?"). The current input is a sudden, emotional expression.
4.  **Determine the appropriate response strategy:**
    *   Since the input is purely emotional and doesn't pose a question or request a task, the response should be empathetic, neutral, and open-ended.
    *   The tone should be gentle and non-judgmental.
    *   Acknowledge the emotion without demanding further explanation, allowing the user to guide the conversation.
5.  **Draft potential responses (in Korean):**
    *   *Option A (Direct inquiry):* 무슨 일 있으세요? (What's going on?) - A bit too direct.
    *   *Option B (Empathy + Open invitation):* 무슨 일이 있으신가요? 괜찮으세요? (Is something happening? Are you okay?) - Good balance.
    *   *Option C (Simple acknowledgment):* 네? (Yes?) / 어떠세요? (How are you?) - Too brief.
6.  **Select and refine the best response (B is the most helpful):** Acknowledge the sound/feeling and gently ask if the user needs assistance or wants to talk about it.

7.  **Final Check:** The response is safe, empathetic, and keeps the door open for the user to continue the conversation on their terms. (Self-correction: Avoid sounding too robotic. Keep the tone friendly.)
[End thinking]

무슨 일이 있으신가요? 😊

혹시 힘든 일이 있으시거나, 이야기하고 싶으신 것이 있다면 편하게 말씀해 주세요. 제가 들어드릴게요.

[ Prompt: 1014.9 t/s | Generation: 84.9 t/s ]

>

Exiting...
common_memory_breakdown_print: | memory breakdown [MiB]  | total   free    self   model   context   compute    unaccounted |
common_memory_breakdown_print: |   - CUDA0 (GTX 1080 Ti) | 11263 = 7192 + (2981 =  1407 +     780 +     793) +        1090 |
common_memory_breakdown_print: |   - Host                |                 2037 =  1756 +       0 +     281                |

 

D:\study\llm\llama-b8916-bin-win-vulkan-x64>llama-cli.exe -m ..\gemma-4-E2B-it-Q4_K_M.gguf
load_backend: loaded RPC backend from D:\study\llm\llama-b8916-bin-win-vulkan-x64\ggml-rpc.dll
load_backend: loaded Vulkan backend from D:\study\llm\llama-b8916-bin-win-vulkan-x64\ggml-vulkan.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8916-bin-win-vulkan-x64\ggml-cpu-haswell.dll

Loading model...


▄▄ ▄▄
██ ██
██ ██  ▀▀█▄ ███▄███▄  ▀▀█▄    ▄████ ████▄ ████▄
██ ██ ▄█▀██ ██ ██ ██ ▄█▀██    ██    ██ ██ ██ ██
██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀
                                    ██    ██
                                    ▀▀    ▀▀

build      : b8916-ffdd983fb
model      : gemma-4-E2B-it-Q4_K_M.gguf
modalities : text

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read <file>        add a text file
  /glob <pattern>     add text files using globbing pattern


> 안녕?

[Start thinking]
Thinking Process:

1.  **Analyze the input:** The input is "안녕?" (Annyeong?), which means "Hello?" or "Hi?". It is a casual greeting in Korean.
2.  **Determine the user's intent:** The user is initiating a conversation or simply saying hello.
3.  **Formulate an appropriate response (in Korean):**
    *   Acknowledge the greeting.
    *   Return the greeting.
    *   (Optional but good) Ask how the user is doing or what they need.
4.  **Draft potential responses:**
    *   *Simple:* 안녕하세요. (Annyeonghaseyo.) - Slightly formal.
    *   *Casual (matching the user's tone):* 안녕! / 안녕. (Annyeong!)
    *   *More engaging:* 안녕하세요! 저는 잘 지내요. (Hello! I am doing well.)
    *   *Standard AI response:* 안녕하세요! 무엇을 도와드릴까요? (Hello! What can I help you with?)
5.  **Select the best response:** Since the user used a casual greeting ("안녕?"), a friendly and equally approachable response is best.

6.  **Final Korean output generation:** 안녕하세요! 저는 준비되어 있습니다. (Hello! I am ready.) or simply 안녕하세요! (Hello!) combined with a question.

*(Self-Correction: Keep it simple and friendly.)*

*Selected response:* 안녕하세요! 😊 (Hello! 😊) or 안녕하세요! 반갑습니다. (Hello! Nice to meet you.)

*(Final output generation based on standard conversational flow.)*
[End thinking]

안녕하세요! 😊

[ Prompt: 132.7 t/s | Generation: 85.2 t/s ]

> 헬로?

[Start thinking]
Thinking Process:

1.  **Analyze the input:** The user input is "헬로?" (Hello?). This is a casual greeting, often used in a slightly informal or friendly manner.
2.  **Determine the user's intent:** The user is initiating a conversation or simply saying hello.
3.  **Formulate an appropriate response:**
    *   Acknowledge the greeting.
    *   Return a friendly greeting.
    *   (Optional but good practice) Ask how the user is doing or invite them to state their purpose.
4.  **Draft the response (in Korean):**
    *   A simple greeting is best. (e.g., "안녕하세요.")
    *   A slightly warmer response is good. (e.g., "네, 안녕하세요!")
    *   Ask what they need. (e.g., "무엇을 도와드릴까요?")
5.  **Final selection:** Combine these elements into a natural and polite conversational flow. (The response should match the casual tone set by the user.)
[End thinking]

네, 안녕하세요! 😊

저는 사용자님을 도와드릴 준비가 되어 있어요. 무엇을 도와드릴까요?

[ Prompt: 285.9 t/s | Generation: 84.6 t/s ]

> 으아앙

[Start thinking]
Thinking Process:

1.  **Analyze the input:** The user input is "으아앙" (Eu-aang).
2.  **Determine the meaning/context:** "으아앙" is an interjection or sound often used to express various emotions, such as:
    *   Crying/Whining (often cute or dramatic).
    *   Excitement/Joy.
    *   Pain/Distress (less common in casual chat unless context is provided).
    *   A general sound of emotion.
3.  **Identify the required response strategy:** Since the input is purely emotional/vocal, the response should be empathetic, gentle, and open-ended, inviting the user to share what they are feeling or what they need.
4.  **Draft potential responses (Korean):**
    *   *Option 1 (Direct empathy):* 무슨 일 있으세요? (Is something wrong?)
    *   *Option 2 (Gentle inquiry):* 무슨 기분이신가요? (How are you feeling?)
    *   *Option 3 (Simple acknowledgment):* 네, 무슨 일이에요? (Yes, what is it?)
    *   *Option 4 (Playful/Calm):* 괜찮으세요? (Are you okay?)
5.  **Select the best response:** Option 2 and 4 are the most supportive. Combining them makes the response warm and responsive.

6.  **Final Polish:** (Choose a friendly, open response.)
[End thinking]

무슨 일 있으신가요? 😊 혹시 힘든 일이 있으신가요?

편하게 이야기해 주세요. 제가 들어드릴게요.

[ Prompt: 18.7 t/s | Generation: 85.4 t/s ]

>
Posted by 구차니

A bit of a pain once the settings stray from the defaults.

 

Click "Open Settings" at the top of continue.dev,

 

Models

 

press Configure on Chat or whichever section,

 

and config.yaml opens; fill it in roughly as follows.

provider is openai, model is local-model,

apiBase is the address llama-server runs at with /v1 appended,

and apiKey can just be set to dummy.
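Roughly what that config.yaml entry ends up looking like (field values as described above; the host/port is whatever llama-server is listening on, here assumed to be localhost:8080, and the entry name is arbitrary):

```yaml
models:
  - name: llama-server              # display name, anything works
    provider: openai                # llama-server speaks the OpenAI API
    model: local-model
    apiBase: http://localhost:8080/v1   # llama-server address + /v1
    apiKey: dummy                   # required by the client, ignored by the server
```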

 

Then it looks like it's really grinding the GPU:

Fri Apr 24 16:26:03 2026       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.211.01             Driver Version: 570.211.01     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3070        Off |   00000000:01:00.0 Off |                  N/A |
|  0%   50C    P0            104W /  270W |    6788MiB /   8192MiB |     45%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            2697    C+G   ../llama-server                        6771MiB |
+-----------------------------------------------------------------------------------------+

 

Ugh, I'll need to increase the context size some more -_-

 

[Link : https://logmario.tistory.com/51]

Posted by 구차니
embeded/luckfox, 2026. 4. 24. 14:38

The Rockchip RV1106 has a RISC-V MCU,

but the Luckfox wiki doesn't really seem to show how to use it (for the Luckfox Pico Ultra W, that is...).

Digging around here and there...

 

[Link : https://www.reddit.com/r/RISCV/comments/181ldns/anyone_have_any_idea_on_how_the_riscv/?tl=ko]

[Link : https://github.com/LuckfoxTECH/luckfox-pico/issues/99]

[Link : https://github.com/luyi1888/rv1106-mcu] << mcutool source

[Link : https://github.com/deerpi/arm-rockchip830-linux-uclibcgnueabihf] << toolchain, though not sure it's for riscv

 

[Link : https://github.com/LuckfoxTECH/luckfox-pico/issues/112]

Posted by 구차니
embeded/luckfox, 2026. 4. 24. 14:30

Posted by 구차니

RAG came up somewhere, so I looked it up; apparently it's the following. (insert the "I understood it perfectly" meme)

Abstract

Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, and another which can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state of the art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.

[Link : https://arxiv.org/html/2005.11401v4]

 

[Link : https://medium.com/rate-labs/rag의-짧은-역사-훑어보기-첫-논문부터-최근-동향까지-53c07b9b3bee]

[Link : https://everyday-log.tistory.com/entry/논문-리뷰-Retrieval-Augmented-Generation-for-Knowledge-Intensive-NLP-Tasks]

[Link : https://brunch.co.kr/@acc9b16b9f0f430/73]

 

bm25

[Link : https://wikidocs.net/289869]
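For reference, BM25 is the classic sparse retrieval scorer often used for the retrieval half of RAG. A self-contained sketch of Okapi BM25 (k1 and b are the usual default values; the toy corpus is made up):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                      # document frequency per term
    for d in docs:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in docs:
        tf = Counter(d)                 # term frequency in this doc
        s = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    "retrieval augmented generation combines retrieval and generation".split(),
    "dense vector index of wikipedia".split(),
    "pretrained seq2seq language model".split(),
]
print(bm25_scores("retrieval generation".split(), docs))
```

The first document shares terms with the query, so it scores highest; the others score zero.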

Posted by 구차니