
LibreFLUX-IP-Adapter

Example: Control image vs result

This model/pipeline is the product of my LibreFlux IP-Adapter training repo, which uses LibreFLUX as the underlying transformer model. The IP-Adapter and attention-wrapper design is roughly based on the InstantX IP-Adapter.

I used transfer learning to finetune the InstantX weights until they worked with LibreFLUX and attention masking. For the dataset, I trained on laion2b-squareish-1024px for roughly 20,000 iterations.
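The transfer-learning step amounts to initializing the new adapter from the InstantX checkpoint wherever layer names and shapes line up, and training the rest from scratch. A minimal sketch of that idea, with plain dicts standing in for state dicts (all layer names here are illustrative, not the actual checkpoint keys):

```python
def transfer_weights(target, pretrained):
    """Copy pretrained tensors into the target state dict where names
    and shapes match, leaving the rest at their initial values
    (mimics torch's load_state_dict(strict=False))."""
    transferred = []
    for name, weight in pretrained.items():
        if name in target and len(target[name]) == len(weight):
            target[name] = list(weight)
            transferred.append(name)
    return transferred

# InstantX-style adapter weights (illustrative names and shapes)
pretrained = {"ip_proj.weight": [0.1, 0.2], "ip_proj.bias": [0.0]}

# Freshly initialised adapter with one extra layer that has no
# pretrained counterpart and so trains from scratch
target = {"ip_proj.weight": [0.0, 0.0], "ip_proj.bias": [0.0], "mask_gate.weight": [0.0]}

copied = transfer_weights(target, pretrained)
# Only the matching layers are initialised from the pretrained checkpoint
```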

How does this relate to LibreFLUX?

  • Base model is LibreFLUX
  • Trained in the same non-distilled fashion
  • Uses Attention Masking
  • Uses CFG during Inference
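Because the model is non-distilled, it relies on standard classifier-free guidance at each denoising step: the noise prediction from the negative (or empty) prompt is pushed toward the conditional prediction by the guidance scale. A minimal numeric sketch of that combination (pure Python, illustrative values):

```python
def cfg_combine(uncond, cond, guidance_scale):
    """Classifier-free guidance: move the unconditional noise prediction
    toward the conditional one, scaled by guidance_scale."""
    return [u + guidance_scale * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.0, 1.0, -0.5]   # noise prediction with the negative/empty prompt
cond   = [0.2, 0.8, -0.1]   # noise prediction with the text prompt
guided = cfg_combine(uncond, cond, 3.5)
```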


Compatibility

pip install -U diffusers==0.35.2
pip install -U transformers==4.57.1
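If you want to confirm the pinned versions are what's actually installed before loading the pipeline, a quick stdlib check works (package names assumed to match the pins above):

```python
from importlib.metadata import version, PackageNotFoundError

def check_pins(pins):
    """Return {package: (matches_pin, installed_version)} for each pin."""
    results = {}
    for package, pinned in pins.items():
        try:
            installed = version(package)
            results[package] = (installed == pinned, installed)
        except PackageNotFoundError:
            results[package] = (False, None)
    return results

print(check_pins({"diffusers": "0.35.2", "transformers": "4.57.1"}))
```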

Low VRAM:

pip install -U optimum-quanto

Load Pipeline

import torch
from diffusers import DiffusionPipeline
from huggingface_hub import hf_hub_download

model_id = "neuralvfx/LibreFlux-IP-Adapter" 
device = "cuda" if torch.cuda.is_available() else "cpu"

dtype  = torch.bfloat16 if device == "cuda" else torch.float32

pipe = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline=model_id,
    trust_remote_code=True,   
    torch_dtype=dtype,
    safety_checker=None        
)

# Optional way to download the weights
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter",
    filename="ip_adapter.pt",
    local_dir=".",
    local_dir_use_symlinks=False,
)

pipe.load_ip_adapter('ip_adapter.pt')

pipe.to(device)

Inference

from PIL import Image
from torchvision.transforms import ToTensor

# Optional way to download test IP Adapter Image
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter",
    filename="examples/david.jpg",
    local_dir=".",
    local_dir_use_symlinks=False,
)

# Load IP Adapter Image
ip_image = Image.open("examples/david.jpg").convert("RGB")
ip_image = ip_image.resize((512, 512))

prompt = "george washington"
negative_prompt = "blurry, low quality"

generator = torch.Generator(device=device).manual_seed(1995)

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    return_dict=False,
    ip_adapter_image=ip_image,
    ip_adapter_scale=1.0,
    height=512,
    width=512,
    num_inference_steps=75,
    generator=generator,
)[0][0]

image.save("result.png")

Load Pipeline (Low VRAM)

import torch
from huggingface_hub import hf_hub_download
from diffusers import DiffusionPipeline
from optimum.quanto import freeze, quantize, qint8

model_id = "neuralvfx/LibreFlux-IP-Adapter" 

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype  = torch.bfloat16 if device == "cuda" else torch.float32

pipe = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline=model_id,
    trust_remote_code=True,      
    torch_dtype=dtype,
    safety_checker=None         
)

# Optional way to download the weights
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter",
    filename="ip_adapter.pt",
    local_dir=".",
    local_dir_use_symlinks=False,
)

# Load the IP Adapter First
pipe.load_ip_adapter('ip_adapter.pt')

# Quantize and Freeze
quantize(
    pipe.transformer,
    weights=qint8,
    exclude=[
        "*.norm", "*.norm1", "*.norm2", "*.norm2_context",
        "proj_out", "x_embedder", "norm_out", "context_embedder",
    ],
)

quantize(
    pipe.ip_adapter,
    weights=qint8,
    exclude=[
        "*.norm", "*.norm1", "*.norm2", "*.norm2_context",
        "proj_out", "x_embedder", "norm_out", "context_embedder",
    ],
)
freeze(pipe.transformer)
freeze(pipe.ip_adapter)

# Enable Model Offloading
pipe.enable_model_cpu_offload()
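For intuition on why this saves VRAM: qint8 weight quantization stores each tensor as 8-bit integers plus a scale, roughly quartering the memory of 32-bit weights. A minimal sketch of symmetric int8 quantization (the idea only, not the optimum.quanto internals):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to [-127, 127]
    using a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

w = [0.5, -1.0, 0.25]
q, scale = quantize_int8(w)
restored = dequantize(q, scale)
# restored approximates w; each value is off by at most one scale step
```

The `exclude` lists in the calls above keep norm layers and embedders in full precision, which is a common choice since those layers are small but sensitive to quantization error.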
