Insights: huggingface/diffusers
Overview
5 Pull requests merged by 5 people
- Update pipeline_flux_inpaint.py to fix padding_mask_crop returning only the inpainted area (#11658, merged Jun 10, 2025)
- Add community class StableDiffusionXL_T5Pipeline (#11626, merged Jun 9, 2025)
- Introduce DeprecatedPipelineMixin to simplify pipeline deprecation process (#11596, merged Jun 9, 2025)
- [tests] Fix how compiler mixin classes are used (#11680, merged Jun 9, 2025)
- fixed axes_dims_rope init (huggingface#11641) (#11678, merged Jun 8, 2025)
8 Pull requests opened by 6 people
- Fix EDM DPM Solver Test and Enhance Test Coverage (#11679, opened Jun 8, 2025)
- [tests] model-level `device_map` clarifications (#11681, opened Jun 9, 2025)
- [wip][poc] make group offloading work with disk/nvme transfers (#11682, opened Jun 9, 2025)
- [WIP] device_map rework and direct weights loading (#11683, opened Jun 10, 2025)
- [GGUF] feat: support loading diffusers format gguf checkpoints. (#11684, opened Jun 10, 2025)
- [WIP] Refactor Attention Modules (#11685, opened Jun 10, 2025)
- Bump requests from 2.32.3 to 2.32.4 in /examples/server (#11686, opened Jun 10, 2025)
- Add Pruna optimization framework documentation (#11688, opened Jun 10, 2025)
2 Issues closed by 1 person
- [BUG] [CleanCode] Tuple[int] = (16, 56, 56) in FluxTransformer2DModel (#11641, closed Jun 8, 2025)
1 Issue opened by 1 person
- [DOCS] Add `pruna` as optimization framework (#11687, opened Jun 10, 2025)
27 Unresolved conversations
Conversations sometimes continue on older items that are not yet closed. Below is a list of all Issues and Pull Requests with unresolved conversations.
- Attention Dispatcher (#11368, commented on Jun 8, 2025 • 8 new comments)
- [benchmarks] overhaul benchmarks (#11565, commented on Jun 10, 2025 • 5 new comments)
- Add FluxPAGPipeline with support for PAG (#11510, commented on Jun 9, 2025 • 2 new comments)
- [tests] tests for compilation + quantization (bnb) (#11672, commented on Jun 10, 2025)
- enable cpu offloading of new pipelines on XPU & use device agnostic empty to make pipelines work on XPU (#11671, commented on Jun 8, 2025)
- ⚡️ Speed up method `AutoencoderKLWan.clear_cache` by 886% (#11665, commented on Jun 9, 2025)
- [WIP] [LoRA] support omi hidream lora. (#11660, commented on Jun 10, 2025)
- [LoRA] support Flux Control LoRA with bnb 8bit. (#11655, commented on Jun 10, 2025)
- enable torchao test cases on XPU and switch to device agnostic APIs for test cases (#11654, commented on Jun 10, 2025)
- Allow remote code repo names to contain "." (#11652, commented on Jun 10, 2025)
- Chroma as a FLUX.1 variant (#11566, commented on Jun 10, 2025)
- Add SkyReels V2: Infinite-Length Film Generative Model (#11518, commented on Jun 10, 2025)
- [torch.compile] Make HiDream torch.compile ready (#11477, commented on Jun 10, 2025)
- [quant] add __repr__ for better printing of configs. (#11452, commented on Jun 10, 2025)
- [LoRA] parse metadata from LoRA and save metadata (#11324, commented on Jun 10, 2025)
- Error in init from pretrained for LTXConditionPipeline (#11644, commented on Jun 10, 2025)
- HunyuanVideoImageToVideoPipeline memory leak (#11676, commented on Jun 10, 2025)
- [performance] investigating FluxPipeline for recompilations on resolution changes (#11360, commented on Jun 10, 2025)
- [Performance] Issue on *SanaLinearAttnProcessor2_0 family. 1.06X speedup can be reached with a simple change. (#11499, commented on Jun 10, 2025)
- OMI Format Compatibility (#11631, commented on Jun 10, 2025)
- max_shard_size (#11650, commented on Jun 10, 2025)
- The density_for_timestep_sampling and loss_weighting for SD3 Training!!! (#9056, commented on Jun 10, 2025)
- Add SUPIR Upscaler (#7219, commented on Jun 10, 2025)
- Sage Attention for diffuser library (#11168, commented on Jun 9, 2025)
- how to load lora weight with fp8 transfomer model? (#11648, commented on Jun 9, 2025)
- torch.compile can't be used with groupoffloading on hunyuanvideo_frampack (#11584, commented on Jun 9, 2025)
- Request support for MAGI-1 (#11519, commented on Jun 8, 2025)