
_flash_attention_3 in dispatch_attention_fn is not compatible with the latest flash-attention interface. #12022

@hmzjwhmzjw

Description


Describe the bug

As of Dao-AILab/flash-attention@ed20940 ("[FA3] Don't return lse"), flash-attention 3's flash_attn_func no longer returns the LSE tensor, but the FA3 wrapper in the current diffusers version has not been updated to match:
https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_dispatch.py#L608

When the `_flash_3` attention backend is used, diffusers raises an error.
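
A minimal compatibility shim in `_flash_attention_3` could cover both return conventions. This is only a sketch, not the actual diffusers code; `flash_attn_3_func`, `scale`, `is_causal`, and `return_lse` stand in for whatever names the wrapper actually uses:

# Hedged sketch: accept both the old (out, lse) tuple and the new
# out-only return of flash-attention 3's flash_attn_func.
result = flash_attn_3_func(q, k, v, softmax_scale=scale, causal=is_causal)
if isinstance(result, tuple):
    # flash-attention 3 before Dao-AILab/flash-attention@ed20940
    out, lse = result[0], result[1]
else:
    # flash-attention 3 at/after ed20940: only the output tensor is returned
    out, lse = result, None
if return_lse and lse is None:
    raise RuntimeError("The installed flash-attention 3 build no longer returns the LSE.")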

Reproduction

import torch
from diffusers import WanPipeline, AutoencoderKLWan
from diffusers.models.attention_dispatch import attention_backend
from diffusers.utils import export_to_video

dtype = torch.bfloat16
device = "cuda"

model_id = "Wan-AI/Wan2.2-TI2V-5B-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=dtype)
pipe.to(device)

height = 704
width = 1280
num_frames = 121
num_inference_steps = 50
guidance_scale = 5.0

prompt = "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
# Standard Wan Chinese negative prompt (roughly: oversaturated, overexposed,
# static, blurry details, subtitles, worst/low quality, JPEG artifacts,
# deformed limbs, fused fingers, extra fingers, cluttered background, etc.)
negative_prompt = "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"

with attention_backend("_flash_3"):
    output = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        height=height,
        width=width,
        num_frames=num_frames,
        guidance_scale=guidance_scale,
        num_inference_steps=num_inference_steps,
    ).frames[0]
export_to_video(output, "5bit2v_output.mp4", fps=24)
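
For anyone triaging this, a quick way to check which return convention the installed flash-attention 3 build uses (a hedged sketch; it assumes the FA3 "hopper" build, which installs as the `flash_attn_interface` module, and a GPU that FA3 supports):

import torch
import flash_attn_interface

# Tiny dummy call; layout is (batch, seqlen, nheads, headdim).
q = torch.randn(1, 8, 1, 64, dtype=torch.bfloat16, device="cuda")
result = flash_attn_interface.flash_attn_func(q, q, q)
# tuple -> old interface (out, lse); Tensor -> post-ed20940 interface (out only)
print(type(result))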

Logs

System Info

  • 🤗 Diffusers version: 0.35.0.dev0
  • Platform: Linux-5.4.119-19.0009.28-x86_64-with-glibc2.35
  • Running on Google Colab?: No
  • Python version: 3.11.13
  • PyTorch version (GPU?): 2.7.1+cu128 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.34.3
  • Transformers version: 4.52.4
  • Accelerate version: 1.8.1
  • PEFT version: 0.15.2
  • Bitsandbytes version: 0.46.0
  • Safetensors version: 0.5.3
  • xFormers version: not installed

Who can help?

No response

Labels

bug (Something isn't working)
