from diffusers import StableDiffusionPipeline

[링크 : https://www.reddit.com/r/Python/comments/10g5nay/use_python_to_build_a_free_stable_diffusion_app/?tl=ko]

[링크 : https://www.assemblyai.com/blog/build-a-free-stable-diffusion-app-with-a-gpu-backend]

 

Since memory will obviously blow up, run e4b on GPU 0

and Stable Diffusion on GPU 1, then wire the two together.

Playing around with img2img, or routing requests that include "draw a picture" to txt2img, should also work.
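One way to sketch the two-GPU split above is to pin each process to its own device with `CUDA_VISIBLE_DEVICES` (the script names here are placeholders; only the environment variable is the point):

```shell
# Each process sees only the GPU it is given, so they cannot
# fight over the same card's memory.
CUDA_VISIBLE_DEVICES=0 python run_e4b.py &     # e4b on GPU 0 (placeholder script)
CUDA_VISIBLE_DEVICES=1 ./webui.sh --api        # Stable Diffusion WebUI on GPU 1
```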

---

by gpt

Enable API mode:

./webui.sh --api

 

txt2img

import requests
import base64

url = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "masterpiece, ultra detailed, cyberpunk girl, neon city, rain",
    "negative_prompt": "low quality, blurry",
    "steps": 30,
    "width": 768,
    "height": 768,
    "cfg_scale": 7,
    "sampler_name": "DPM++ 2M Karras"
}

response = requests.post(url, json=payload)

result = response.json()

# decode and save the base64-encoded image
image_data = base64.b64decode(result["images"][0])

with open("generated.png", "wb") as f:
    f.write(image_data)

print("generated.png saved")

 

Oh.. it works.
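The `images` field in the response is a list, so if the payload also sets a `batch_size` several images come back from one call. A minimal sketch of saving them all, using a dummy dict in place of `response.json()` so it runs standalone:

```python
import base64

# Dummy stand-in for response.json(); a real response would hold
# base64-encoded PNG data in the "images" list.
result = {"images": [base64.b64encode(b"fake-png-%d" % i).decode()
                     for i in range(2)]}

# Save each returned image with an indexed filename.
for i, img_b64 in enumerate(result["images"]):
    with open("generated_%d.png" % i, "wb") as f:
        f.write(base64.b64decode(img_b64))

print("saved %d images" % len(result["images"]))
```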

 

img2img

import requests
import base64

# read the input image
with open("input.png", "rb") as f:
    image_base64 = base64.b64encode(f.read()).decode()

url = "http://127.0.0.1:7860/sdapi/v1/img2img"

payload = {
    "init_images": [image_base64],

    "prompt": "cyberpunk style, neon lights, futuristic",
    "negative_prompt": "low quality, blurry",

    "denoising_strength": 0.55,

    "steps": 30,
    "cfg_scale": 7,
    "width": 768,
    "height": 768,
    "sampler_name": "DPM++ 2M Karras"
}

response = requests.post(url, json=payload)

result = response.json()

image_data = base64.b64decode(result["images"][0])

with open("modified.png", "wb") as f:
    f.write(image_data)

print("modified.png saved")
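Tying back to the idea at the top of switching between the two modes: a small routing sketch that picks the endpoint depending on whether an input image is supplied. `choose_endpoint` is a made-up helper; the two URLs are the same WebUI API routes used above.

```python
BASE = "http://127.0.0.1:7860/sdapi/v1"

def choose_endpoint(prompt, init_image_b64=None):
    """Route to img2img when an input image exists, else txt2img."""
    if init_image_b64 is not None:
        # img2img needs the base64 input image in init_images
        return BASE + "/img2img", {"init_images": [init_image_b64],
                                   "prompt": prompt}
    return BASE + "/txt2img", {"prompt": prompt}

url, payload = choose_endpoint("neon city at night")
print(url)  # → http://127.0.0.1:7860/sdapi/v1/txt2img
```

The returned `(url, payload)` pair can then be passed straight to `requests.post(url, json=payload)` as in the snippets above.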

 

[링크 : https://chatgpt.com/share/69fd90a2-2bbc-83e9-8f04-6cecdddc1b41]

Posted by 구차니