I am in the early stages of learning Stable Diffusion.
Motivation:
I would like to generate a realistic picture of a real object from line art like this. I found that I need to use ControlNet. However, when I downloaded majicMIX realistic, it does not support ControlNet, so I cannot feed an input image to it.
Here is my attempt.
from diffusers import StableDiffusionPipeline
import torch

torch.manual_seed(111)
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

pipe = StableDiffusionPipeline.from_ckpt("majicmixRealistic_v5.safetensors", load_safety_checker=False).to(device)

prompt = "A photo of rough collie, best quality"
negative_prompt = "low quality"
guidance_scale = 1
eta = 0.0

result = pipe(
    prompt, num_inference_steps=30, num_images_per_prompt=8,
    guidance_scale=guidance_scale, negative_prompt=negative_prompt)

for idx, image in enumerate(result.images):
    image.save(f"character_{guidance_scale}_{eta}_{idx}.png")
But I can't use this model with ControlNet, because it is a single checkpoint file.
Next, I tried StableDiffusionImg2ImgPipeline. With this pipeline I can combine a text prompt and an image, but I still can't use ControlNet.
https://huggingface.co/docs/diffusers/using-diffusers/img2img
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

device = "mps" if torch.backends.mps.is_available() else "cpu"

pipe = StableDiffusionImg2ImgPipeline.from_ckpt("majicmixRealistic_v5.safetensors").to(device)

url = "../try_image_to_image/c.jpeg"
init_image = load_image(url)

prompt = "A woman, realistic color photo, high quality"
generator = torch.Generator(device=device).manual_seed(1024)

strengths = [0.3, 0.35, 0.4, 0.45, 0.5]
guidance_scales = [1, 2, 3, 4, 5, 6, 7, 8]
num_inference_steps = 100
print(f"Total run: {len(strengths) * len(guidance_scales)}")

for strength in strengths:
    for guidance_scale in guidance_scales:
        image = pipe(
            prompt=prompt, image=init_image, strength=strength, guidance_scale=guidance_scale,
            generator=generator, num_inference_steps=num_inference_steps).images[0]
        image.save(f"images/3rd_{strength}_{guidance_scale}.png")
Question:
How do I use ControlNet with majicmix-realistic?
Use StableDiffusionControlNetPipeline instead of StableDiffusionPipeline.
The ControlNet pipeline requires you to specify a base model, e.g. SD 1.5, and a ControlNet model, e.g. sd-controlnet-canny.
You can replace the base model with majicMIX and replace sd-controlnet-canny with whatever ControlNet model you want.
Example code:
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)

# canny_image must be a Canny edge map of the conditioning image (see the sketch below)
out_image = pipe(
    "disco dancer with colorful lights", num_inference_steps=20, image=canny_image
).images[0]
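The snippet above assumes canny_image already exists. Here is a minimal sketch of preparing it, assuming OpenCV (cv2) is installed and "lineart.png" is a hypothetical local control image:

import cv2
import numpy as np
from PIL import Image
from diffusers.utils import load_image

# Load the control image and detect edges; 100/200 are commonly used Canny thresholds.
control = np.array(load_image("lineart.png"))
edges = cv2.Canny(control, 100, 200)
# Replicate the single-channel edge map to 3 channels, since the pipeline expects an RGB image.
edges = np.stack([edges, edges, edges], axis=2)
canny_image = Image.fromarray(edges)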
Read more here: https://huggingface.co/docs/diffusers/v0.16.0/en/api/pipelines/stable_diffusion/controlnet
Download the conversion script convert_original_stable_diffusion_to_diffusers.py from the diffusers repository and convert the checkpoint into a diffusers model directory:
mkdir converted
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path majicmixRealistic_v5.safetensors --from_safetensors --dump_path converted
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny").to(device)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "converted",  # the directory produced by the conversion script above
    safety_checker=None,
    controlnet=controlnet,
).to(device)
Then I can use the majicmix-realistic model.
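For completeness, a minimal usage sketch of running the converted pipeline, assuming canny_image was prepared as in the earlier sketch; the prompt, seed, and file name here are only illustrative:

# Run the converted majicMIX pipeline conditioned on the Canny edge map.
prompt = "A photo of a rough collie, best quality"
negative_prompt = "low quality"
generator = torch.Generator(device="cpu").manual_seed(111)  # CPU generator for reproducibility across devices

result = pipe(
    prompt,
    image=canny_image,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    generator=generator,
)
result.images[0].save("controlnet_majicmix.png")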