Insights: huggingface/diffusers
Overview
11 Pull requests merged by 8 people
- Update pipeline_flux_inpaint.py to fix padding_mask_crop returning only the inpainted area (#11658, merged Jun 10, 2025)
- Add community class StableDiffusionXL_T5Pipeline (#11626, merged Jun 9, 2025)
- Introduce DeprecatedPipelineMixin to simplify pipeline deprecation process (#11596, merged Jun 9, 2025)
- [tests] Fix how compiler mixin classes are used (#11680, merged Jun 9, 2025)
- fixed axes_dims_rope init (huggingface#11641) (#11678, merged Jun 8, 2025)
- Wan VACE (#11582, merged Jun 6, 2025)
- [tests] add test for torch.compile + group offloading (#11670, merged Jun 6, 2025)
- use deterministic to get stable result (#11663, merged Jun 6, 2025)
- [examples] flux-control: use num_training_steps_for_scheduler (#11662, merged Jun 5, 2025)
- [chore] bring PipelineQuantizationConfig at the top of the import chain (#11656, merged Jun 5, 2025; see the sketch after this list)
- [CI] Some improvements to Nightly reports summaries (#11166, merged Jun 5, 2025)
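The #11656 item above exposes PipelineQuantizationConfig at the top-level diffusers namespace. Below is a minimal sketch of pipeline-level quantization using that import, assuming a diffusers build that includes the change; the model id, backend name, and kwargs are illustrative choices, not something prescribed by the PR.

```python
import torch
from diffusers import DiffusionPipeline, PipelineQuantizationConfig

# Quantize only the transformer with a bitsandbytes 4-bit backend
# (backend name and kwargs are assumptions for illustration).
quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer"],
)

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # example checkpoint; any supported pipeline works
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```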
20 Pull requests opened by 10 people
- enable torchao test cases on XPU and switch to device agnostic APIs for test cases (#11654, opened Jun 4, 2025)
- [LoRA] support Flux Control LoRA with bnb 8bit (#11655, opened Jun 4, 2025)
- [WIP] [LoRA] support omi hidream lora (#11660, opened Jun 5, 2025)
- Bump torch from 2.2.0 to 2.7.1 in /examples/research_projects/realfill (#11664, opened Jun 5, 2025)
- ⚡️ Speed up method `AutoencoderKLWan.clear_cache` by 886% (#11665, opened Jun 5, 2025)
- ⚡️ Speed up method `BlipImageProcessor.postprocess` by 51% (#11666, opened Jun 5, 2025)
- ⚡️ Speed up method `Kandinsky3ConditionalGroupNorm.forward` by 7% (#11667, opened Jun 5, 2025)
- Fix wrong param types, docs, and handles noise=None in scale_noise of FlowMatching schedulers (#11669, opened Jun 6, 2025)
- enable cpu offloading of new pipelines on XPU & use device agnostic empty to make pipelines work on XPU (#11671, opened Jun 6, 2025)
- [tests] tests for compilation + quantization (bnb) (#11672, opened Jun 6, 2025)
- Support Expert loss for HiDream (#11673, opened Jun 6, 2025)
- Fix EDM DPM Solver Test and Enhance Test Coverage (#11679, opened Jun 8, 2025)
- [tests] model-level `device_map` clarifications (#11681, opened Jun 9, 2025)
- [wip][poc] make group offloading work with disk/nvme transfers (#11682, opened Jun 9, 2025)
- [WIP] device_map rework and direct weights loading (#11683, opened Jun 10, 2025)
- [GGUF] feat: support loading diffusers format gguf checkpoints (#11684, opened Jun 10, 2025; see the sketch after this list)
- [WIP] Refactor Attention Modules (#11685, opened Jun 10, 2025)
- Bump requests from 2.32.3 to 2.32.4 in /examples/server (#11686, opened Jun 10, 2025)
- Add Pruna optimization framework documentation (#11688, opened Jun 10, 2025)
- Improve Wan docstrings (#11689, opened Jun 10, 2025)
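For context on the GGUF item (#11684): diffusers can already load single-file GGUF checkpoints via `from_single_file` with `GGUFQuantizationConfig`, and the PR proposes extending this to GGUF files saved in the diffusers layout. A minimal sketch of the existing path, assuming the `gguf` dependency is installed; the checkpoint URL is illustrative only.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Load a GGUF-quantized transformer from a single file (URL is a placeholder example),
# keeping weights quantized and computing in bfloat16.
ckpt_url = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_url,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# Plug the quantized transformer into a regular pipeline.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
```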
9 Issues closed by 4 people
- Add support for ConsisID (#10100, closed Jun 10, 2025)
- HunyuanVideo with IP2V (#10485, closed Jun 10, 2025)
- Docs for HunyuanVideo LoRA? (#10796, closed Jun 10, 2025)
- Need to handle v0.33.0 deprecations (#10895, closed Jun 10, 2025)
- [BUG] [CleanCode] Tuple[int] = (16, 56, 56) in FluxTransformer2DModel (#11641, closed Jun 8, 2025)
- Error in loading the pretrained lora weights (#11675, closed Jun 7, 2025)
- [BUG]: Using args.max_train_steps even if it is None in diffusers/examples/flux-control (#11661, closed Jun 5, 2025)
4 Issues opened by 4 people
- [DOCS] Add `pruna` as optimization framework (#11687, opened Jun 10, 2025)
- HunyuanVideoImageToVideoPipeline memory leak (#11676, opened Jun 7, 2025)
- [FR] Please support ref image and multiple control videos in Wan VACE (#11674, opened Jun 6, 2025)
- LoRA load issue (#11659, opened Jun 4, 2025)
26 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- [LoRA] parse metadata from LoRA and save metadata (#11324, commented on Jun 10, 2025 • 12 new comments)
- [benchmarks] overhaul benchmarks (#11565, commented on Jun 10, 2025 • 10 new comments)
- Attention Dispatcher (#11368, commented on Jun 8, 2025 • 8 new comments)
- Allow remote code repo names to contain "." (#11652, commented on Jun 10, 2025 • 2 new comments)
- Add FluxPAGPipeline with support for PAG (#11510, commented on Jun 9, 2025 • 2 new comments)
- Add Finegrained FP8 (#11647, commented on Jun 4, 2025 • 0 new comments)
- Added PhotoDoodle Pipeline (#11621, commented on Jun 5, 2025 • 0 new comments)
- Chroma as a FLUX.1 variant (#11566, commented on Jun 10, 2025 • 0 new comments)
- Add SkyReels V2: Infinite-Length Film Generative Model (#11518, commented on Jun 10, 2025 • 0 new comments)
- [torch.compile] Make HiDream torch.compile ready (#11477, commented on Jun 10, 2025 • 0 new comments)
- [quant] add __repr__ for better printing of configs (#11452, commented on Jun 10, 2025 • 0 new comments)
- OMI Format Compatibility (#11631, commented on Jun 10, 2025 • 0 new comments)
- Error in init from pretrained for LTXConditionPipeline (#11644, commented on Jun 10, 2025 • 0 new comments)
- [performance] investigating FluxPipeline for recompilations on resolution changes (#11360, commented on Jun 10, 2025 • 0 new comments)
- [Performance] Issue on *SanaLinearAttnProcessor2_0 family. 1.06X speedup can be reached with a simple change. (#11499, commented on Jun 10, 2025 • 0 new comments)
- max_shard_size (#11650, commented on Jun 10, 2025 • 0 new comments)
- The density_for_timestep_sampling and loss_weighting for SD3 Training!!! (#9056, commented on Jun 10, 2025 • 0 new comments)
- Add SUPIR Upscaler (#7219, commented on Jun 10, 2025 • 0 new comments)
- Sage Attention for diffuser library (#11168, commented on Jun 9, 2025 • 0 new comments)
- how to load lora weight with fp8 transfomer model? (#11648, commented on Jun 9, 2025 • 0 new comments; see the sketch after this list)
- torch.compile can't be used with groupoffloading on hunyuanvideo_frampack (#11584, commented on Jun 9, 2025 • 0 new comments)
- Request support for MAGI-1 (#11519, commented on Jun 8, 2025 • 0 new comments)
- Can't load flux-fill-lora with FluxControl (#11651, commented on Jun 6, 2025 • 0 new comments)
- SD3 ControlNet Script (and others?): dataset preprocessing cache depends on unrelated arguments (#11497, commented on Jun 5, 2025 • 0 new comments)
- support .alpha keys in HiDream loras trained using OneTrainer (#11653, commented on Jun 4, 2025 • 0 new comments)
- InstructPix2Pix training script for SD3 (#9101, commented on Jun 4, 2025 • 0 new comments)
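Several of the threads above concern loading LoRAs into quantized or offloaded pipelines (e.g. #11648, #11651, #11653). As a point of reference, here is a minimal sketch of the standard LoRA loading path in diffusers; the LoRA repository, weight file, and adapter names are placeholders, and it does not reflect any resolution reached in those threads.

```python
import torch
from diffusers import FluxPipeline

# Example base pipeline; quantization/offloading from the threads above is omitted here.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load a LoRA adapter; repo and file names are hypothetical placeholders.
pipe.load_lora_weights(
    "some-user/some-flux-lora",
    weight_name="lora.safetensors",
    adapter_name="style",
)
pipe.set_adapters(["style"], adapter_weights=[0.8])

image = pipe("a photo of a corgi in a field", num_inference_steps=28).images[0]
```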