from PIL import Image
import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Load the input image and make sure it is in RGB mode
image_path = "/mnt/data/1000150948.jpg"
input_image = Image.open(image_path).convert("RGB")

# Load the Stable Diffusion image-to-image pipeline for anime-style conversion
model_id = "hakurei/waifu-diffusion"
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id).to(device)

# Define the prompt that guides the anime-style conversion
prompt = "anime style, highly detailed, vibrant colors, soft shading, beautiful lighting"

# Generate the anime-style image; strength controls how far the output departs from the input photo
output_image = pipe(prompt=prompt, image=input_image, strength=0.75).images[0]

# Save the output image
output_path = "/mnt/data/anime_style_output.jpg"
output_image.save(output_path)
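If GPU memory is tight, the pipeline can optionally be loaded in half precision. This is a minimal sketch assuming a CUDA-capable GPU; it is not required for the script above to work:

# Optional: load the pipeline in half precision to reduce GPU memory use (assumes a CUDA GPU)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")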
Key Points:
Loading the Image: The input image is opened with PIL.Image.open and converted to RGB (a variant that loads it from a URL is sketched after this list).
Model Initialization: The model is loaded with StableDiffusionImg2ImgPipeline from the diffusers library, since the task starts from an existing photo rather than from text alone.
Device Selection: The script checks whether CUDA is available and uses the GPU if possible; otherwise it falls back to the CPU.
Prompt Definition: A prompt is defined to steer the conversion toward an anime look.
Image Generation: The anime-style image is generated by the pipeline; strength=0.75 controls how strongly the output departs from the input photo.
Saving the Output: The generated image is saved to the specified path.
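The earlier draft imported requests and BytesIO without using them. If you would rather load the input image from a URL than from a local path, a minimal sketch looks like this (the URL below is a placeholder):

import requests
from io import BytesIO
from PIL import Image

# Download the image and open it directly from the in-memory bytes (placeholder URL)
url = "https://example.com/photo.jpg"
response = requests.get(url, timeout=30)
response.raise_for_status()
input_image = Image.open(BytesIO(response.content)).convert("RGB")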
Feel free to run this script in your Python environment. If you encounter any issues or need further customization, let me know!