Add FluxPAGPipeline with support for PAG #11510
Conversation
@tongyu0924 Is this ready for review? If so, could you move the pipeline under `src/diffusers/pipelines/pag`?

Done! The pipeline is now under `src/diffusers/pipelines/pag`. It's ready for review.
Thank you @tongyu0924 👍🏽! Could we add the pipeline to the necessary init files?
- PAG / Pipelines module: `src/diffusers/pipelines/__init__.py` (Line 175 in 049082e)
- Diffusers main init: `src/diffusers/__init__.py` (Line 341 in 049082e)
And then could you please add a fast test for the pipeline, similar to how it has been done here.

I've added the pipeline to the necessary init files.
Sorry for the delay here @tongyu0924. I think we just need to define the correct PAG attention processors for this pipeline and we should be good to go 👍🏽
Thanks for your patience.
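As background for why both a PAG and a PAG+CFG processor are needed: with true CFG enabled, the final prediction combines both guidance terms. A minimal sketch of that combination (the helper name `apply_pag_cfg` is an assumption for illustration, not diffusers API; the formula follows the PAG paper):

```python
def apply_pag_cfg(noise_uncond, noise_text, noise_perturb, guidance_scale, pag_scale):
    """Hypothetical helper: combine classifier-free guidance (CFG) with
    perturbed-attention guidance (PAG). `noise_perturb` is the prediction
    from the pass whose self-attention map was replaced with the identity."""
    return (
        noise_uncond
        + guidance_scale * (noise_text - noise_uncond)  # CFG term
        + pag_scale * (noise_text - noise_perturb)      # PAG term
    )
```

With `pag_scale=0` this reduces to plain CFG, which is why the pipeline needs separate processors for the PAG-only and PAG+CFG batch layouts.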
# do_true_pag = true_pag > 0
# (
#     prompt_embeds,
#     pooled_prompt_embeds,
#     text_ids,
# ) = self.encode_prompt(
#     prompt=prompt,
#     prompt_2=prompt_2,
#     prompt_embeds=prompt_embeds,
#     pooled_prompt_embeds=pooled_prompt_embeds,
#     device=device,
#     num_images_per_prompt=num_images_per_prompt,
#     max_sequence_length=max_sequence_length,
#     lora_scale=lora_scale,
# )
Could we remove this commented-out section, please?
timesteps = scheduler.timesteps
return timesteps, num_inference_steps

class PAGIdentitySelfAttnProcessor:
Here we would need to define a `PAGFluxAttnProcessor_2_0`, similar to how it is done for SD3's joint attention processor:

class PAGJointAttnProcessor2_0:

And since the pipeline supports true CFG, we would also need to add a `PAGCFGFluxAttnProcessor_2_0`:

class PAGCFGJointAttnProcessor2_0:
… support true PAG/CFG in Flux pipeline
Please update the following if possible: `src/diffusers/pipelines/pag/pipeline_pag_flux.py`

EXAMPLE_DOC_STRING = """
"""
What does this PR do?
This PR adds support for Perturbed Attention Guidance (PAG) to the FluxPipeline.
Fixes #11488
Before submitting
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.