Description
Describe the bug
I use the `fuse_lora` function, but the weights of the `FluxTransformer2DModel` are identical before and after the call.
Code from the TensorRT demo:
```python
def merge_loras(model, lora_loader):
    # Added for debugging: snapshot the weights before LoRA loading/fusion.
    import copy
    model_transformer_blocks_bak = copy.deepcopy(model.transformer_blocks)

    paths, weights, scale = lora_loader.paths, lora_loader.weights, lora_loader.scale
    for i, path in enumerate(paths):
        print(f"[I] Loading LoRA: {path}, weight {weights[i]}")
        if isinstance(lora_loader, SDLoraLoader):
            state_dict, network_alphas = lora_loader.lora_state_dict(path, unet_config=model.config)
            lora_loader.load_lora_into_unet(state_dict, network_alphas=network_alphas, unet=model, adapter_name=path)
        elif isinstance(lora_loader, FLUXLoraLoader):
            state_dict, network_alphas = lora_loader.lora_state_dict(path, return_alphas=True)
            # lora_loader.load_lora_into_transformer(state_dict, network_alphas=network_alphas, transformer=model, adapter_name=path)
            lora_loader.load_lora_into_transformer(state_dict, network_alphas=network_alphas, transformer=model, adapter_name=None)
        else:
            raise ValueError(f"Unsupported LoRA loader: {lora_loader}")
    model.set_adapters(paths, weights=weights)
    # NOTE: fuse_lora is an experimental API in Diffusers
    model.fuse_lora(adapter_names=paths, lora_scale=scale)
    model.unload_lora()
    return model
```
The `model` above is a `FluxTransformer2DModel` object. I added

```python
import copy
model_transformer_blocks_bak = copy.deepcopy(model.transformer_blocks)
```

at the top of the function. Just before `return model`, I compare the weights of `model.transformer_blocks` with `model_transformer_blocks_bak`: they are identical.
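For reference, here is a minimal sketch of the same before/after comparison using only the public `FluxPipeline` API, outside the TensorRT demo. The checkpoint name and LoRA path are placeholders, and `transformer_blocks[0].attn.to_q` is just one example of a weight that a typical Flux LoRA targets:

```python
import torch
from diffusers import FluxPipeline

# Placeholder checkpoint and LoRA path: substitute your own.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/lora.safetensors", adapter_name="my_lora")

# Snapshot one weight that a typical Flux LoRA targets, before fusing.
ref = pipe.transformer.transformer_blocks[0].attn.to_q.weight.detach().clone()

pipe.fuse_lora(lora_scale=1.0)
pipe.unload_lora_weights()

# If fuse_lora worked, the fused weight should differ from the snapshot.
fused = pipe.transformer.transformer_blocks[0].attn.to_q.weight.detach()
print("weights changed after fuse_lora:", not torch.equal(ref, fused))
```

If this prints `False` as well, the problem reproduces without the demo's custom loader code.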
Logs
System Info
diffusers==0.34.0
torch==2.7.0
Who can help?
No response