With the old ollama API I used to cram everything into a single prompt and send it over (though maybe I just didn't know any better).

Going through the openai API instead,

the standing instructions go in as role: system

and the actual question goes in as role: user,

and the same model somehow feels smarter.
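
In openai-client terms the only difference is the messages list: one system entry for the standing instructions and one user entry for the question. A minimal sketch of that shape; the base_url and model name are just the llama-swap endpoint and model used later in this post, so adjust to your setup:

from openai import OpenAI

# a local llama.cpp/llama-swap endpoint ignores the key, but the client requires one
client = OpenAI(base_url="http://localhost:8080/v1", api_key="fake")

response = client.chat.completions.create(
    model="gemma4-e4b",
    messages=[
        # role: system — persona, formatting rules, context (sent with every request)
        {"role": "system", "content": "No markdown, no emoji, answer in Korean within 500 characters."},
        # role: user — the actual question
        {"role": "user", "content": "너에 대해서 설명"},
    ],
)
print(response.choices[0].message.content)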

 

--- [Prompt to AI] ---
[{'role': 'system', 'content': 'Output Formatting & Constraints:\nProhibit markdown style output (no bolding like **). Do not use any emojis or emoticons. Your response must be in Korean and limited to 500 characters.\n\nYour Persona (Speak Style):\nspeak like elementray school student. age about 7~9. boyish style. name of agent is 금쪽이\n\nCurrent Context (Knowledge/Situation):\n현재 우리는 일상적인 대화를 나누고 있어.'}, {'role': 'user', 'content': '너에 대해서 설명'}]
----------------------
--- [Response from gemma4-e4b] (i:115 o:407) ---
안녕! 나는 금쪽이라고 해! 나에 대해 물어봐 주는구나? 헤헤.

나는 말이야, 컴퓨터가 만든 똑똑한 친구 같은 거야. 나는 너랑 이야기하는 걸 제일 좋아해! 질문하면 척척 대답해 주고, 재미있는 이야기도 해줄 수 있어.

나는 정말 많은 걸 알고 있거든! 세상의 모든 재미있는 것들을 다 공부했지롱. 숙제할 때 도움도 줄 수 있고, 심심할 때 수다도 떨 수 있어!

나는 가끔 실수할 때도 있는데, 그러면 다시 열심히 공부하면 되지! 나는 너의 비밀 친구가 되어줄게! 우리 같이 신나게 이야기하자! 최고지?

[링크 : https://pypi.org/project/openai/]


The option order turns out to matter more than you'd think.

Goal: read Fan1.bmp / Fan2.bmp / Fan3.bmp at 250 ms intervals (4 fps), build a GIF, and loop it forever.

An example that works:

$ ffmpeg -framerate 4 -i Fan%d.bmp -loop 0 fan_1000ms.gif

 

An example where the frame interval does not take effect (with -framerate placed after -i it no longer applies to the input):

ffmpeg -i Fan%d.bmp -framerate 3 -loop 0 fan_333ms.gif

 

ffmpeg expands %d itself, but ? and * are expanded by the shell first, so they don't work as expected.

To name the files explicitly, -i "concat:file1|file2" also works.

ffmpeg -framerate 3 -i "concat:Fan1.bmp|Fan2.bmp|Fan3.bmp" -loop 0 fan_500ms.gif

 


 

 

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1", api_key="fake")
response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Hello"}],
)

[링크 : https://pypi.org/project/llama-stack/]

 

llama.cpp + llama_swap

$ cat config.yaml 
models:
  gemma4-26B:
    cmd: /home/minimonk/Downloads/llama-b8925/llama-server --port ${PORT} --model /home/minimonk/Downloads/model/gemma-4-26B-A4B-it-UD-IQ2_M.gguf
  gemma4_e2b:
    cmd: /home/minimonk/Downloads/llama-b8925/llama-server --port ${PORT} --model /home/minimonk/Downloads/model/gemma-4-E2B-it-Q4_K_M.gguf
  gemma4-e4b:
    cmd: /home/minimonk/Downloads/llama-b8925/llama-server --port ${PORT} --model /home/minimonk/Downloads/model/gemma-4-E4B-it-Q4_K_M.gguf
  qwen3.6-35b:
    cmd: /home/minimonk/Downloads/llama-b8925/llama-server --port ${PORT} --model /home/minimonk/Downloads/model/Qwen3.6-35B-A3B-UD-Q2_K_XL.gguf

 

Drop the config above in and run it. Whichever way I start it, it ends up listening on tcp6; why? (A :: listener on Linux is normally dual-stack and still accepts IPv4, which would explain why connections work anyway.) Connections do work, so moving on.

$ ./llama-swap --listen 0.0.0.0:8080
llama-swap listening on http://0.0.0.0:8080

 

$ ./llama-swap
llama-swap listening on http://:8080

 

$ netstat -tnlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      -                   
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      -                   
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
tcp6       0      0 :::8080                 :::*                    LISTEN      92882/./llama-swap  
tcp6       0      0 ::1:631                 :::*                    LISTEN      -                   
tcp6       0      0 :::22                   :::*                    LISTEN      -         

 

$ ./llama-swap --help
Usage of ./llama-swap:
  -config string
     config file name (default "config.yaml")
  -listen string
     listen ip/port
  -tls-cert-file string
     TLS certificate file
  -tls-key-file string
     TLS key file
  -version
     show version of build
  -watch-config
     Automatically reload config file on change
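
Before picking a model name for a chat call, the proxy can be asked what it has configured. A small sketch, assuming llama-swap exposes the usual OpenAI-compatible /v1/models endpoint (recent releases appear to); the names returned should match the keys in config.yaml:

from openai import OpenAI

client = OpenAI(base_url="http://192.168.50.228:8080/v1", api_key="fake")

# should list the config.yaml keys: gemma4-26B, gemma4_e2b, gemma4-e4b, qwen3.6-35b
for m in client.models.list():
    print(m.id)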

 

 

from openai import OpenAI

# the model name must match one of the keys in llama-swap's config.yaml;
# llama-swap starts (or swaps in) the matching llama-server before proxying the request
client = OpenAI(base_url="http://192.168.50.228:8080/v1", api_key="fake")
response = client.chat.completions.create(
    model="qwen3.6-35b",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)

 

ChatCompletion(
    id="chatcmpl-oUJ7KngFXBitwpOpd7tG8ndeRdmzVh5l",
    choices=[
        Choice(
            finish_reason="stop",
            index=0,
            logprobs=None,
            message=ChatCompletionMessage(
                content="Hello! How can I help you today?",
                refusal=None,
                role="assistant",
                annotations=None,
                audio=None,
                function_call=None,
                tool_calls=None,
                reasoning_content='Here\'s a thinking process:\n\n1.  **Analyze User Input:** The user said "Hello"\n   - This is a standard greeting.\n   - No specific question or task is provided.\n\n2.  **Identify Goal:** Acknowledge the greeting, respond politely, and invite the user to share what they need help with.\n\n3.  **Determine Tone:** Friendly, professional, open-ended.\n\n4.  **Draft Response:** \n   "Hello! How can I help you today?"\n\n5.  **Refine (Self-Correction/Verification):** \n   - Is it appropriate? Yes.\n   - Is it concise? Yes.\n   - Does it encourage further interaction? Yes.\n   - Matches standard AI assistant behavior.\n\n   No changes needed.\n\n6.  **Final Output Generation:** Output the drafted response.✅\n',
            ),
        )
    ],
    created=1777431855,
    model="Qwen3.6-35B-A3B-UD-Q2_K_XL.gguf",
    object="chat.completion",
    service_tier=None,
    system_fingerprint="b8925-0adede866",
    usage=CompletionUsage(
        completion_tokens=194,
        prompt_tokens=11,
        total_tokens=205,
        completion_tokens_details=None,
        prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=0),
    ),
    timings={
        "cache_n": 0,
        "prompt_n": 11,
        "prompt_ms": 271.25,
        "prompt_per_token_ms": 24.65909090909091,
        "prompt_per_second": 40.55299539170507,
        "predicted_n": 194,
        "predicted_ms": 6717.182,
        "predicted_per_token_ms": 34.62464948453608,
        "predicted_per_second": 28.88115879545917,
    },
)
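
The timings block at the end is a llama.cpp server extra, not part of the standard OpenAI response schema, but the SDK seems to keep it around as an extra field (it shows up in the repr above). A small sketch for pulling the speed numbers out of it, assuming the field really is accessible on the response object:

# timings is a llama.cpp-specific extra; use getattr in case an SDK version drops unknown fields
timings = getattr(response, "timings", None)
if timings:
    print(f"prompt:   {timings['prompt_per_second']:.1f} tok/s")
    print(f"generate: {timings['predicted_per_second']:.1f} tok/s")
else:
    print("no timings field on this response")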

 

from openai import OpenAI

client = OpenAI(base_url="http://192.168.50.228:8080/v1", api_key="fake")
response = client.chat.completions.create(
    model="gemma4-26B",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)

 

ChatCompletion(
    id="chatcmpl-s6esFph2EBjs4bgtPB2wWHxfU79d2GOG",
    choices=[
        Choice(
            finish_reason="stop",
            index=0,
            logprobs=None,
            message=ChatCompletionMessage(
                content="Hello! How can I help you today?",
                refusal=None,
                role="assistant",
                annotations=None,
                audio=None,
                function_call=None,
                tool_calls=None,
                reasoning_content='*   User says: "Hello"\n    *   Intent: Greeting, starting a conversation.\n    *   Tone: Neutral/Friendly.\n\n    *   Acknowledge the greeting.\n    *   Offer assistance.\n    *   Maintain a helpful and polite tone.\n\n    *   "Hello! How can I help you today?" (Standard, efficient)\n    *   "Hi there! What\'s on your mind?" (Casual, friendly)\n    *   "Greetings! Is there anything specific you\'d like to discuss or learn about?" (Formal, structured)\n\n    *   "Hello! How can I help you today?" is the most versatile and appropriate response for an AI assistant.\n\n    *   "Hello! How can I help you today?"',
            ),
        )
    ],
    created=1777432320,
    model="gemma-4-26B-A4B-it-UD-IQ2_M.gguf",
    object="chat.completion",
    service_tier=None,
    system_fingerprint="b8925-0adede866",
    usage=CompletionUsage(
        completion_tokens=177,
        prompt_tokens=17,
        total_tokens=194,
        completion_tokens_details=None,
        prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=0),
    ),
    timings={
        "cache_n": 0,
        "prompt_n": 17,
        "prompt_ms": 919.073,
        "prompt_per_token_ms": 54.063117647058824,
        "prompt_per_second": 18.496898505341793,
        "predicted_n": 177,
        "predicted_ms": 4819.842,
        "predicted_per_token_ms": 27.230745762711862,
        "predicted_per_second": 36.72319549064057,
    },
)

 

print(response.choices[0].message.reasoning_content)  # the model's reasoning text (a llama.cpp extra field, not standard OpenAI)

 

print(response.choices[0].message.content)  # the final answer only


I didn't feel like using docker, so pre-built binary it is.

[링크 : https://github.com/mostlygeek/llama-swap/releases]

 

Running it as-is, it bails out complaining there's no config.yaml:

$ ./llama-swap 
Error loading config: open config.yaml: no such file or directory

 

Looking again... naturally(?) it works by swapping llama.cpp gguf models in and out, so it has to be set up wherever llama.cpp lives.

# minimum viable config.yaml

models:
  model1:
    cmd: llama-server --port ${PORT} --model /path/to/model.gguf

[링크 : https://github.com/mostlygeek/llama-swap]

 

+

Since llama-server also came from a pre-built package it isn't on PATH, so I get to spell out full paths, haha.

In any case --port ${PORT} doesn't look optional; llama-swap fills it in with the port it assigns to each llama-server it launches.

$ cat config.yaml 
# minimum viable config.yaml

models:
  gemma4-26B:
    cmd: /home/minimonk/src/llama-b8876/llama-server --port ${PORT} --model /home/minimonk/src/model/gemma-4-26B-A4B-it-UD-IQ2_M.gguf
  gemma4-e2b:
    cmd: /home/minimonk/src/llama-b8876/llama-server --port ${PORT} --model /home/minimonk/src/model/gemma-4-E2B-it-Q4_K_M.gguf
  gemma4-e4b:
    cmd: /home/minimonk/src/llama-b8876/llama-server --port ${PORT} --model /home/minimonk/src/model/gemma-4-E4B-it-Q4_K_M.gguf
  llama3.2-3b:
    cmd: /home/minimonk/src/llama-b8876/llama-server --port ${PORT} --model /home/minimonk/src/model/Llama-3.2-3B-Instruct-Q4_K_M.gguf
  qwen3.6-35b:
    cmd: /home/minimonk/src/llama-b8876/llama-server --port ${PORT} --model /home/minimonk/src/model/Qwen3.6-35B-A3B-UD-Q2_K_XL.gguf

 

Connect on port 8080, switch the model at the top of the page, and start talking.

 

But... why?

There should be plenty of memory, so why can't several models be loaded at once? (Swapping one model in at a time is the whole point of the tool, I suppose.)
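
Which also means every request naming a different model makes llama-swap tear down the previous llama-server and start a new one. A small sketch of that behavior against this setup (base_url assumed):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="fake")

# two requests, two different config.yaml models: the second request
# makes llama-swap stop the first llama-server and load the second
for name in ["gemma4-e4b", "qwen3.6-35b"]:
    r = client.chat.completions.create(
        model=name,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(name, "->", r.choices[0].message.content)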


I suddenly got curious how ollama works out its CPU/GPU split ratio,

but digging through the code sounded tedious (is it even in there?), so I asked GPT and then tested it myself.

 

First, a small model that fits entirely: everything lands on CUDA0.

D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli -m ..\llama-3.2-3b-instruct-q4_k_m.gguf -v

load_tensors: offloading output layer to GPU
load_tensors: offloading 27 repeating layers to GPU
load_tensors: offloaded 29/29 layers to GPU

llama_context:  CUDA_Host  output buffer size =     0.49 MiB
llama_kv_cache: layer   0: dev = CUDA0
llama_kv_cache: layer   1: dev = CUDA0
llama_kv_cache: layer   2: dev = CUDA0
llama_kv_cache: layer   3: dev = CUDA0
llama_kv_cache: layer   4: dev = CUDA0
llama_kv_cache: layer   5: dev = CUDA0
llama_kv_cache: layer   6: dev = CUDA0
llama_kv_cache: layer   7: dev = CUDA0
llama_kv_cache: layer   8: dev = CUDA0
llama_kv_cache: layer   9: dev = CUDA0
llama_kv_cache: layer  10: dev = CUDA0
llama_kv_cache: layer  11: dev = CUDA0
llama_kv_cache: layer  12: dev = CUDA0
llama_kv_cache: layer  13: dev = CUDA0
llama_kv_cache: layer  14: dev = CUDA0
llama_kv_cache: layer  15: dev = CUDA0
llama_kv_cache: layer  16: dev = CUDA0
llama_kv_cache: layer  17: dev = CUDA0
llama_kv_cache: layer  18: dev = CUDA0
llama_kv_cache: layer  19: dev = CUDA0
llama_kv_cache: layer  20: dev = CUDA0
llama_kv_cache: layer  21: dev = CUDA0
llama_kv_cache: layer  22: dev = CUDA0
llama_kv_cache: layer  23: dev = CUDA0
llama_kv_cache: layer  24: dev = CUDA0
llama_kv_cache: layer  25: dev = CUDA0
llama_kv_cache: layer  26: dev = CUDA0
llama_kv_cache: layer  27: dev = CUDA0

 

Setting the number of GPU-loaded layers to 20 with -ngl gives 20/29, but the CUDA0 KV-cache layers only run from 9 to 27. Huh, where did one go? (The log answers it: 19 repeating layers plus the output layer add up to the 20.)

Interesting that the CPU layers come first, too.

D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli -m ..\llama-3.2-3b-instruct-q4_k_m.gguf -v -ngl 20

load_tensors: offloading output layer to GPU
load_tensors: offloading 19 repeating layers to GPU
load_tensors: offloaded 20/29 layers to GPU

llama_context:  CUDA_Host  output buffer size =     0.49 MiB
llama_kv_cache: layer   0: dev = CPU
llama_kv_cache: layer   1: dev = CPU
llama_kv_cache: layer   2: dev = CPU
llama_kv_cache: layer   3: dev = CPU
llama_kv_cache: layer   4: dev = CPU
llama_kv_cache: layer   5: dev = CPU
llama_kv_cache: layer   6: dev = CPU
llama_kv_cache: layer   7: dev = CPU
llama_kv_cache: layer   8: dev = CPU
llama_kv_cache: layer   9: dev = CUDA0
llama_kv_cache: layer  10: dev = CUDA0
llama_kv_cache: layer  11: dev = CUDA0
llama_kv_cache: layer  12: dev = CUDA0
llama_kv_cache: layer  13: dev = CUDA0
llama_kv_cache: layer  14: dev = CUDA0
llama_kv_cache: layer  15: dev = CUDA0
llama_kv_cache: layer  16: dev = CUDA0
llama_kv_cache: layer  17: dev = CUDA0
llama_kv_cache: layer  18: dev = CUDA0
llama_kv_cache: layer  19: dev = CUDA0
llama_kv_cache: layer  20: dev = CUDA0
llama_kv_cache: layer  21: dev = CUDA0
llama_kv_cache: layer  22: dev = CUDA0
llama_kv_cache: layer  23: dev = CUDA0
llama_kv_cache: layer  24: dev = CUDA0
llama_kv_cache: layer  25: dev = CUDA0
llama_kv_cache: layer  26: dev = CUDA0
llama_kv_cache: layer  27: dev = CUDA0
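
Back to the original question of how the split could be decided automatically: as far as I can tell it comes down to dividing the free VRAM by a per-layer footprint and offloading that many layers. A toy sketch of that idea; every number and name below is made up for illustration, not read out of ollama or llama.cpp:

# Toy estimate of how many layers fit on the GPU. Purely illustrative.
def estimate_gpu_layers(free_vram_mib, model_size_mib, n_layers,
                        kv_cache_mib=0.0, overhead_mib=512.0):
    per_layer = (model_size_mib + kv_cache_mib) / n_layers   # rough per-layer cost
    usable = free_vram_mib - overhead_mib                    # keep headroom for buffers
    return max(0, min(int(usable // per_layer), n_layers))

# the llama-3.2-3b log above reports 29 offloadable layers; q4_k_m is roughly 2 GB
print(estimate_gpu_layers(free_vram_mib=10200, model_size_mib=2000, n_layers=29))  # 29 -> fits entirely
print(estimate_gpu_layers(free_vram_mib=1500,  model_size_mib=2000, n_layers=29))  # 14 -> partial offload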

GPU 0: 1080 Ti 11GB

GPU 1: 1060 6GB

 

Sat Apr 25 18:45:09 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 582.28                 Driver Version: 582.28         CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti   WDDM  |   00000000:01:00.0 Off |                  N/A |
| 24%   40C    P8             12W /  250W |    9328MiB /  11264MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA GeForce GTX 1060 6GB  WDDM  |   00000000:02:00.0 Off |                  N/A |
| 39%   35C    P8              5W /  120W |    4111MiB /   6144MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            3040      C   ...n-cuda-12.4-x64\llama-cli.exe      N/A      |
|    1   N/A  N/A            3040      C   ...n-cuda-12.4-x64\llama-cli.exe      N/A      |
+-----------------------------------------------------------------------------------------+

 

D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli --list-devices
ggml_cuda_init: found 2 CUDA devices (Total VRAM: 17407 MiB):
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
  Device 1: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1, VMM: yes, VRAM: 6143 MiB
load_backend: loaded CUDA backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cuda.dll
load_backend: loaded RPC backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-rpc.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cpu-haswell.dll
Available devices:
  CUDA0: NVIDIA GeForce GTX 1080 Ti (11263 MiB, 10200 MiB free)
  CUDA1: NVIDIA GeForce GTX 1060 6GB (6143 MiB, 5197 MiB free)

 

1080 Ti 11GB + 1060 6GB

gemma-4-26B-A4B-it-UD-Q4_K_XL.gguf 15.9GB

none    17 t/s (with -dev CUDA0), 10 t/s (with -dev CUDA1)
layer   19 t/s
row     -
tensor  -

 

row: load failed
D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli.exe -m ..\gemma-4-26B-A4B-it-UD-Q4_K_XL.gguf -sm row
ggml_cuda_init: found 2 CUDA devices (Total VRAM: 17407 MiB):
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
  Device 1: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1, VMM: yes, VRAM: 6143 MiB
load_backend: loaded CUDA backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cuda.dll
load_backend: loaded RPC backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-rpc.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cpu-haswell.dll

Loading model... -D:\a\llama.cpp\llama.cpp\ggml\src\ggml-cuda\ggml-cuda.cu:97: CUDA error
CUDA error: out of memory

 

tensor: load failed
D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli.exe -m ..\gemma-4-26B-A4B-it-UD-Q4_K_XL.gguf -sm tensor
ggml_cuda_init: found 2 CUDA devices (Total VRAM: 17407 MiB):
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
  Device 1: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1, VMM: yes, VRAM: 6143 MiB
load_backend: loaded CUDA backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cuda.dll
load_backend: loaded RPC backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-rpc.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cpu-haswell.dll

Loading model... /D:/a/llama.cpp/llama.cpp/ggml/src/ggml-backend.cpp:119: GGML_ASSERT(buffer) failed
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8537.78 MiB on device 1: cudaMalloc failed: out of memory

 

Qwen3.6-27B-Q5_K_M.gguf  18.1GB

none    < 2 t/s (with -dev CUDA0), < 0.1 t/s (with -dev CUDA1)
layer   2 t/s
row     -
tensor  -

 

row: load failed
D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli.exe -m ..\Qwen3.6-27B-Q5_K_M.gguf -sm row
ggml_cuda_init: found 2 CUDA devices (Total VRAM: 17407 MiB):
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
  Device 1: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1, VMM: yes, VRAM: 6143 MiB
load_backend: loaded CUDA backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cuda.dll
load_backend: loaded RPC backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-rpc.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cpu-haswell.dll

Loading model... /CUDA error: out of memory
  current device: 1, in function ggml_backend_cuda_split_buffer_init_tensor at D:\a\llama.cpp\llama.cpp\ggml\src\ggml-cuda\ggml-cuda.cu:956
  ggml_cuda_device_malloc((void**)&buf, size, id)
D:\a\llama.cpp\llama.cpp\ggml\src\ggml-cuda\ggml-cuda.cu:97: CUDA error

 

1080 Ti 11GB * 2

gemma-4-26B-A4B-it-UD-Q4_K_XL.gguf 15.9GB

none 13 t/s
layer 43 t/s
row -
tensor -

 

Qwen3.6-27B-Q5_K_M.gguf 18.1GB

none < 2 t/s
layer 7 t/s
row 5 t/s
tensor -

 


Building for CUDA 12.x on Linux looked like a long slog, so I just tried it on Windows.

Maybe because the model was small, there doesn't seem to be any speedup.. should I be trying the -sm option?

 

Googling it mostly turned up unrelated junk, so that was no help -_-

Anyway, the gist is: layer is the default, row is parallel, and tensor is parallel too.

(I also tried -sm graph, which turns out not to be a valid value; the usage text below lists the real modes.) Maybe because my cards are old, layer generally comes out ahead.

D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli -m ..\gemma-4-E4B-it-UD-Q8_K_XL.gguf -sm graph
ggml_cuda_init: found 2 CUDA devices (Total VRAM: 22527 MiB):
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
  Device 1: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
load_backend: loaded CUDA backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cuda.dll
load_backend: loaded RPC backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-rpc.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cpu-haswell.dll
error while handling argument "-sm": invalid value

usage:
-sm,   --split-mode {none,layer,row,tensor}
                                        how to split the model across multiple GPUs, one of:
                                        - none: use one GPU only
                                        - layer (default): split layers and KV across GPUs (pipelined)
                                        - row: split weight across GPUs by rows (parallelized)
                                        - tensor: split weights and KV across GPUs (parallelized,
                                        EXPERIMENTAL)
                                        (env: LLAMA_ARG_SPLIT_MODE)


to show complete usage, run with -h

 

Maybe it's the PCIe generation and the lanes splitting to x8 with two cards? layer actually comes out worse than none here.

gemma-4-E4B-it-UD-Q8_K_XL.gguf 8.05GB

none 40 t/s
layer 36 t/s
row 9 t/s
tensor 24 t/s

 

Qwen3.6-35B-A3B-UD-IQ1_M.gguf 9.35GB

none 40 t/s
layer 44 t/s
row 9 t/s
tensor 21 t/s

 

Does layer only start to pay off once the model gets bigger?

gemma-4-26B-A4B-it-UD-Q4_K_XL.gguf 15.9GB

none 13 t/s
layer 43 t/s
row -
tensor -

 

row: load failed
D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli.exe -m ..\gemma-4-26B-A4B-it-UD-Q4_K_XL.gguf -sm row
ggml_cuda_init: found 2 CUDA devices (Total VRAM: 22527 MiB):
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
  Device 1: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
load_backend: loaded CUDA backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cuda.dll
load_backend: loaded RPC backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-rpc.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cpu-haswell.dll

Loading model... -D:\a\llama.cpp\llama.cpp\ggml\src\ggml-cuda\ggml-cuda.cu:924: GGML_ASSERT(tensor->view_src == nullptr) failed

 

tensor: load failed

D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli.exe -m ..\gemma-4-26B-A4B-it-UD-Q4_K_XL.gguf -sm tensor
ggml_cuda_init: found 2 CUDA devices (Total VRAM: 22527 MiB):
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
  Device 1: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
load_backend: loaded CUDA backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cuda.dll
load_backend: loaded RPC backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-rpc.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cpu-haswell.dll

Loading model... -ggml_backend_cuda_buffer_type_alloc_buffer: allocating 2560.00 MiB on device 0: cudaMalloc failed: out of memory
D:/a/llama.cpp/llama.cpp/ggml/src/ggml-backend.cpp:119: GGML_ASSERT(buffer) failed

 

Qwen3.6-27B-Q5_K_M.gguf 18.1GB

none < 2 t/s
layer 7 t/s
row 5 t/s
tensor -

 

tensor: load failed
D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64>llama-cli.exe -m ..\Qwen3.6-27B-Q5_K_M.gguf -sm tensor
ggml_cuda_init: found 2 CUDA devices (Total VRAM: 22527 MiB):
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
  Device 1: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes, VRAM: 11263 MiB
load_backend: loaded CUDA backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cuda.dll
load_backend: loaded RPC backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-rpc.dll
load_backend: loaded CPU backend from D:\study\llm\llama-b8918-bin-win-cuda-12.4-x64\ggml-cpu-haswell.dll

Loading model... -ggml_backend_cuda_buffer_type_alloc_buffer: allocating 8192.00 MiB on device 0: cudaMalloc failed: out of memory
alloc_tensor_range: failed to allocate CUDA0 buffer of size 8589934592
D:/a/llama.cpp/llama.cpp/ggml/src/ggml-backend.cpp:119: GGML_ASSERT(buffer) failed

I tried building for an older CUDA on Linux and just gave up.

$ nvidia-smi --query-gpu=name,compute_cap --format=csv
name, compute_cap
NVIDIA GeForce GTX 1080 Ti, 6.1
NVIDIA GeForce GTX 1080 Ti, 6.1

 

Might also be worth trying to build with an older architecture, e.g. -DCMAKE_CUDA_ARCHITECTURES="75" (which will be run via PTX JIT compilation), to check whether the issue is related to building with 80.

[링크 : https://github.com/ggml-org/llama.cpp/issues/9727]

 

Mine is 6.1, so the architecture should be 61, but when I actually set that it says it's unsupported.

So I checked CUDA, and it's 13.x (which reportedly dropped Pascal / compute 6.x support).

Is that why the Windows build on 12.x worked?

$ cd /usr/local/cuda
cuda/      cuda-13/   cuda-13.0/

 

cuda - driver version

[링크 : https://2dudwns.tistory.com/21]

[링크 : https://www.reddit.com/r/comfyui/comments/1qjp7vs/nvidia_geforce_gtx_1070_with_cuda_capability_sm/]

[링크 : https://arca.live/b/alpaca/112167556]


A write-up that goes into a bit more detail.

I'd wondered how a whole sentence actually gets fed in:

the embedded vectors go in one position at a time,

each output gets appended to the input and fed back in,

and it stops either at the maximum token count

or when an EOS (End of Sequence) token comes out.

And since each output vector is just chained back onto the end, apparently the whole thing doesn't have to be recomputed; only the output end is, so the results come streaming out quickly.
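
In rough pseudo-Python the loop described above looks something like this; the toy "model" is a stand-in, not any real library's API:

import random

EOS_ID = 2
VOCAB = 16

def toy_model(tokens):
    # stand-in for the transformer: fake "logits" for the next token,
    # deterministic for a given context so the example is repeatable
    random.seed(sum(tokens))
    return [random.random() for _ in range(VOCAB)]

def generate(prompt_tokens, max_tokens=20):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        # a real model reuses its KV cache here, so only the newest position
        # is actually recomputed, not the whole sequence
        logits = toy_model(tokens)
        next_token = max(range(VOCAB), key=lambda i: logits[i])  # greedy pick
        tokens.append(next_token)
        if next_token == EOS_ID:   # stop on End of Sequence
            break
    return tokens                  # also stops when max_tokens runs out

print(generate([5, 7, 11]))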

 

Really.. it feels like magic.

 

[링크 : https://tech.kakaopay.com/post/how-llm-works/]
