---
base_model: MusePublic/FLUX.1-Kontext-Dev@v1
cover_images:
- _cover_images_/image4_1.png
- _cover_images_/image5_1.png
- _cover_images_/image6_1.png
- _cover_images_/image2_1.png
- _cover_images_/image3_1.png
frameworks:
- Pytorch
license: Apache License 2.0
tags:
- LoRA
- text-to-image
tasks:
- text-to-image-synthesis
vision_foundation: FLUX_1
---

# Super Outpainting - Kontext Image Editing LoRA

## Model Introduction

This LoRA was trained on top of the [Kontext](https://www.modelscope.cn/models/black-forest-labs/FLUX.1-Kontext-dev) model using [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio). With this LoRA loaded, you can use the instruction `Outpaint the image.` to extend an image beyond its original borders.

## Results

||Sample 1|Sample 2|Sample 3|
|-|-|-|-|
|Prompt|Outpaint the image.|Outpaint the image.|Outpaint the image.|
|Input|![](./assets/image1_0.png)|![](./assets/image2_0.png)|![](./assets/image3_0.png)|
|Outpainted|![](./assets/image1_1.png)|![](./assets/image2_1.png)|![](./assets/image3_1.png)|

||Sample 4|Sample 5|Sample 6|
|-|-|-|-|
|Prompt|Outpaint the image. A chicken-headed man in suspenders is playing the basketball.|Outpaint the image. A man in suspenders is playing the basketball.|Outpaint the image. A chicken-headed man in suspenders is playing the basketball with a white background.|
|Input|![](./assets/image4_0.jpg)|![](./assets/image5_0.png)|![](./assets/image6_0.png)|
|Outpainted|![](./assets/image4_1.png)|![](./assets/image5_1.png)|![](./assets/image6_1.png)|

## Usage

This model was trained with the [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/flux) framework. Install it first:

```
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```

Then run inference:

```python
import torch
from diffsynth.pipelines.flux_image_new import FluxImagePipeline, ModelConfig
from PIL import Image
from modelscope import snapshot_download

# Download the LoRA weights from ModelScope into ./models
snapshot_download("DiffSynth-Studio/FLUX.1-Kontext-dev-lora-SuperOutpainting", cache_dir="./models")

# Load the FLUX.1-Kontext DiT together with the FLUX.1-dev text encoders and VAE
pipe = FluxImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="black-forest-labs/FLUX.1-Kontext-dev", origin_file_pattern="flux1-kontext-dev.safetensors"),
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="text_encoder/model.safetensors"),
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="text_encoder_2/"),
        ModelConfig(model_id="black-forest-labs/FLUX.1-dev", origin_file_pattern="ae.safetensors"),
    ],
)

# Attach the outpainting LoRA to the DiT backbone
pipe.load_lora(pipe.dit, "models/DiffSynth-Studio/FLUX.1-Kontext-dev-lora-SuperOutpainting/model.safetensors", alpha=1)

image = Image.open("your_image.jpg")
image = pipe(
    prompt="Outpaint the image.",
    kontext_images=image,
    embedded_guidance=2.5,
    seed=0,
)
image.save("output.jpg")
```
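As samples 4-6 above show, the trigger phrase can be combined with a scene description to control what appears in the extended regions. The sketch below is a minimal variation of the inference snippet, reusing the `pipe` object and `Image` import defined there; the prompt text is taken from sample 5, and `your_image.jpg` / `output_prompted.jpg` are placeholder paths.

```python
# Sketch: outpaint while describing the new content (prompt taken from sample 5).
# Assumes `pipe` and `Image` from the snippet above; file paths are placeholders.
image = Image.open("your_image.jpg")
image = pipe(
    prompt="Outpaint the image. A man in suspenders is playing the basketball.",
    kontext_images=image,
    embedded_guidance=2.5,
    seed=0,
)
image.save("output_prompted.jpg")
```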