
stable diffusion notes

구차니 2026. 5. 2. 23:28

Aside from the fact that it's an issue posted on someone else's repository, this feels like something that will actually be useful to me.

I have recently run into some issues. After several hours, I got the process down to a reproducible science/recipe.

If you are running stablediffusion with an Nvidia GTX 1080Ti and are having issues, read on...

🧩 Final Working Fix Summary

Hardware: NVIDIA GTX 1080 Ti (SM 6.1 Pascal)
OS: Windows 10 x64
Environment: Automatic1111 v1.10.1
Python: 3.10.6
Torch Stack:

torch==2.1.2+cu118
torchvision==0.16.2+cu118
torchaudio==2.1.2+cu118
xformers==0.0.23.post1  (optional)
Installed from: https://download.pytorch.org/whl/cu118

Other Key Dependencies:

transformers==4.36.2
tokenizers==0.15.2
safetensors==0.4.2
onnxruntime==1.17.1
accelerate==0.21.0

[Link : https://github.com/easydiffusion/easydiffusion/issues/1980]
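For reference, the pinned stack above maps to pip commands like the following. This is a sketch: the version pins come from the issue quoted above (one reporter's working combination for a Pascal card), not from official Automatic1111 requirements.

```shell
# Versions are the user-reported working set quoted above, not official
# Automatic1111 requirements. Run inside the webui's Python 3.10 venv.
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 torchaudio==2.1.2+cu118 \
    --index-url https://download.pytorch.org/whl/cu118
# optional; cu118 builds of xformers are also hosted on the PyTorch index
pip install xformers==0.0.23.post1 --index-url https://download.pytorch.org/whl/cu118
pip install transformers==4.36.2 tokenizers==0.15.2 safetensors==0.4.2 \
    onnxruntime==1.17.1 accelerate==0.21.0
```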

 

It does look like upgrading to the PyTorch 2.x line works fine, though..

[Link : https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8709]

 

 

[Link : https://www.reddit.com/r/StableDiffusion/comments/1qg5gha/i_made_a_new_ui_integrating_stablediffusioncpp/]

[Link : https://github.com/Danmoreng/diffusion-desk]

 

There seem to be quite a few(?) different models supported.

🔥Important News

  • 2026/04/11 🚀 stable-diffusion.cpp now uses a brand-new embedded web UI.
    👉 Details: PR #1408
  • 2026/01/18 🚀 stable-diffusion.cpp now supports FLUX.2-klein
    👉 Details: PR #1193
  • 2025/12/01 🚀 stable-diffusion.cpp now supports Z-Image
    👉 Details: PR #1020
  • 2025/11/30 🚀 stable-diffusion.cpp now supports FLUX.2-dev
    👉 Details: PR #1016
  • 2025/10/13 🚀 stable-diffusion.cpp now supports Qwen-Image-Edit / Qwen-Image-Edit 2509
    👉 Details: PR #877
  • 2025/10/12 🚀 stable-diffusion.cpp now supports Qwen-Image
    👉 Details: PR #851
  • 2025/09/14 🚀 stable-diffusion.cpp now supports Wan2.1 Vace
    👉 Details: PR #819
  • 2025/09/06 🚀 stable-diffusion.cpp now supports Wan2.1 / Wan2.2
    👉 Details: PR #778

 

So 'SD' is short for stable diffusion, I guess?

Supported models

[Link : https://github.com/leejet/stable-diffusion.cpp]
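If you want to try it, the repository builds with CMake. Below is a rough sketch following the repo's README; the model path is a placeholder, and backend options (e.g. `-DSD_CUDA=ON` for CUDA) depend on your setup.

```shell
# Rough build-and-run sketch for stable-diffusion.cpp (per its README).
# The model path below is a placeholder; pass e.g. -DSD_CUDA=ON to cmake
# for a CUDA build instead of the default CPU backend.
git clone --recursive https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp
cmake -B build
cmake --build build --config Release
# generate an image from a local model file with the sd CLI
./build/bin/sd -m ../models/sd-v1-4.ckpt -p "a lovely cat"
```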

 

Hmm.. so are Image and Image Edit meant for different uses?

[Link : https://huggingface.co/unsloth/Qwen-Image-Edit-2511-GGUF]

[Link : https://huggingface.co/unsloth/Qwen-Image-GGUF]

[Link : https://huggingface.co/unsloth/Qwen-Image-2512-GGUF]

 

It doesn't seem to be a released pip package; apparently you have to install it directly from the repository:

pip install git+https://github.com/huggingface/diffusers

 

The code itself is unremarkable, but at a glance this doesn't look like something that would run under llama.cpp..

from diffusers import DiffusionPipeline
import torch

model_name = "Qwen/Qwen-Image"

# Load the pipeline
if torch.cuda.is_available():
    torch_dtype = torch.bfloat16
    device = "cuda"
else:
    torch_dtype = torch.float32
    device = "cpu"

pipe = DiffusionPipeline.from_pretrained(model_name, torch_dtype=torch_dtype)
pipe = pipe.to(device)

positive_magic = {
    "en": ", Ultra HD, 4K, cinematic composition.", # for english prompt
    "zh": ", 超清,4K,电影级构图." # for chinese prompt
}

# Generate image
prompt = '''A coffee shop entrance features a chalkboard sign reading "Qwen Coffee 😊 $2 per cup," with a neon light beside it displaying "通义千问". Next to it hangs a poster showing a beautiful Chinese woman, and beneath the poster is written "π≈3.1415926-53589793-23846264-33832795-02384197". Ultra HD, 4K, cinematic composition'''

negative_prompt = " " # use an empty string if you do not have a specific concept to remove


# Generate with different aspect ratios
aspect_ratios = {
    "1:1": (1328, 1328),
    "16:9": (1664, 928),
    "9:16": (928, 1664),
    "4:3": (1472, 1140),
    "3:4": (1140, 1472),
    "3:2": (1584, 1056),
    "2:3": (1056, 1584),
}

width, height = aspect_ratios["16:9"]

image = pipe(
    prompt=prompt + positive_magic["en"],
    negative_prompt=negative_prompt,
    width=width,
    height=height,
    num_inference_steps=50,
    true_cfg_scale=4.0,
    generator=torch.Generator(device=device).manual_seed(42)
).images[0]

image.save("example.png")

[Link : https://huggingface.co/unsloth/Qwen-Image-GGUF]
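One thing worth noticing in the snippet above: the `aspect_ratios` presets are not arbitrary. My own sanity check below (not something stated in the model card) shows every preset stays within roughly 13% of the 1:1 pixel budget, so VRAM use and generation time should be comparable across ratios.

```python
# Sanity check (mine, not from the model card): every aspect_ratios preset
# keeps roughly the same total pixel count as the square 1:1 base.
aspect_ratios = {
    "1:1": (1328, 1328),
    "16:9": (1664, 928),
    "9:16": (928, 1664),
    "4:3": (1472, 1140),
    "3:4": (1140, 1472),
    "3:2": (1584, 1056),
    "2:3": (1056, 1584),
}

base = 1328 * 1328  # pixel count of the square preset

for name, (w, h) in aspect_ratios.items():
    print(f"{name:>5}: {w}x{h} = {w * h:>9} px ({(w * h) / base:.2f}x of 1:1)")
```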