Recently I needed to tamper with images using a large generative model in order to test the generalization of a tamper-localization watermark, but network connectivity issues prevented me from using 🤗's Diffusers library directly. I found the following local-deployment approach online.
Contents
Download the model and deploy it to the server
1) Download from the huggingface website
2) git-lfs clone
3) Download with hf_hub_download
Deploy and run
Download the model and deploy it to the server
There are three ways to download the model:
1) Download from the huggingface website
(Very slow...)
2) git-lfs clone
This requires configuring an SSH public key on huggingface, typically git lfs install followed by git clone git@hf.co:stabilityai/stable-diffusion-2-inpainting once the key is registered. Thanks to the following tutorial:
Tutorial: downloading model files from huggingface on a cloud server with git-lfs (CSDN blog): https://blog.csdn.net/weixin_47748259/article/details/135621579?spm=1001.2014.3001.5501
(Cloning the large files in the repository failed.)
3) Download with hf_hub_download
When git-lfs clone reaches the large files in a repository it may skip them or fail outright, so the hf_hub_download function provided by huggingface can be used to fetch the remaining large files.
This method requires a huggingface access token for authentication:
Tutorial: downloading model files from huggingface on a cloud server with hf_hub_download (CSDN blog): https://blog.csdn.net/weixin_47748259/article/details/135714102
The script below points huggingface_hub at the hf-mirror endpoint and calls hf_hub_download in a retry loop until the download completes:
import os
# Note: the os.environ assignment must run before any huggingface_hub-related imports.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
from huggingface_hub import hf_hub_download


def download_model(local_dir, repo_id, filename, subfolder, token):
    print(f'Starting download\nRepository: {repo_id}\nFile: {filename}\n'
          f'Timeouts can be ignored: the script retries automatically until the download finishes, '
          f'and if it is interrupted, rerunning it will resume where it stopped.')
    while True:
        try:
            hf_hub_download(local_dir=local_dir,
                            repo_id=repo_id,
                            token=token,
                            filename=filename,
                            subfolder=subfolder,
                            local_dir_use_symlinks=False,
                            resume_download=True,
                            etag_timeout=100
                            )
        except Exception as e:
            print(e)
        else:
            print(f'Download finished, file saved to: {os.path.join(local_dir, filename)}')
            break


if __name__ == '__main__':
    repo_id = 'stabilityai/stable-diffusion-2-inpainting'  # repository that contains the file
    filename = '512-inpainting-ema.ckpt'  # name of the file to download
    subfolder = ''  # subfolder inside the repository, if any
    token = 'hf_xxxxxxxxxxxxxxxxxxxx'  # your huggingface access token
    local_dir = r'./'  # local directory in which to save the file
    download_model(local_dir, repo_id, filename, subfolder, token)
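hf_hub_download fetches a single file, while the StableDiffusionInpaintPipeline used in the next section loads a whole model directory in diffusers format. If you would rather pull the entire repository in one call, huggingface_hub also provides snapshot_download; below is a minimal sketch under the same mirror and token setup (the target directory is only an example, chosen to match the path loaded later):

import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # must be set before importing huggingface_hub
from huggingface_hub import snapshot_download

# Download every file in the repository (configs, tokenizer, weights, ...)
# into a folder that from_pretrained can load directly.
snapshot_download(repo_id='stabilityai/stable-diffusion-2-inpainting',
                  local_dir='/root/autodl-tmp/stable-diffusion-2-inpainting',  # example target path
                  local_dir_use_symlinks=False,
                  resume_download=True,
                  token='hf_xxxxxxxxxxxxxxxxxxxx',  # your huggingface access token
                  etag_timeout=100)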
Deploy and run
import PIL
import requests
import torch
from io import BytesIO
from diffusers import StableDiffusionInpaintPipeline

'''
# Original example: fetch the demo image and mask over the network.
def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
init_image.save("./example.png")
mask_image.save("./example_mask.png")
'''


# Open the file at the given path, read its bytes, wrap them in BytesIO,
# then open and convert the image with PIL.
def download_image(path):
    with open(path, 'rb') as file:
        image_data = file.read()
    return PIL.Image.open(BytesIO(image_data)).convert("RGB")


img_path = "/root/autodl-tmp/0110_original.png"
mask_path = "/root/autodl-tmp/0110_mask.png"

init_image = download_image(img_path).resize((512, 512))
mask_image = download_image(mask_path).resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "/root/autodl-tmp/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

prompt = "a capybara, high resolution, sitting on a park bench"
# prompt = "a blue plane in the sky, high resolution"

# image and mask_image should be PIL images.
# The mask structure is white for inpainting and black for keeping as is.
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
image.save("./capybara_sit_on_a_bench.png")
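Since the mask is white where the model should repaint and black where the original pixels are kept, masks for tampering experiments can also be generated programmatically instead of being drawn by hand. The sketch below reuses pipe, prompt and init_image from the script above; the rectangle coordinates, output file names and the fixed seed are illustrative assumptions, and generator is the standard diffusers argument for reproducible sampling:

from PIL import Image, ImageDraw
import torch

# Build a 512x512 mask: black background (keep), white rectangle (region to inpaint).
mask = Image.new("RGB", (512, 512), "black")
draw = ImageDraw.Draw(mask)
draw.rectangle([128, 128, 384, 384], fill="white")  # example region to tamper with
mask.save("./rect_mask.png")

# Fixing the generator seed makes the inpainting result reproducible across runs.
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt=prompt, image=init_image, mask_image=mask,
             generator=generator).images[0]
image.save("./capybara_rect_mask_seed0.png")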
Result: the inpainted image, saved as ./capybara_sit_on_a_bench.png.
Reference: sample code for stable diffusion inpainting on a cloud server with the huggingface diffusers library (CSDN blog): https://blog.csdn.net/weixin_47748259/article/details/135613019