Insights: bitsandbytes-foundation/bitsandbytes
Overview
- 5 Merged pull requests
- 3 Open pull requests
- 39 Closed issues
- 0 New issues
5 Pull requests merged by 3 people
- Improvement for torch.compile support on Params4bit (#1673, merged Jun 8, 2025)
- Add support for Intel Gaudi/HPU backend (#1662, merged Jun 5, 2025)
- CI workflow: bump torch 2.7.0 to 2.7.1 (#1670, merged Jun 5, 2025)
- Deprecation cleanup (#1669, merged Jun 4, 2025)
- Fix params4bit passing bnb quantized (#1665, merged Jun 3, 2025)
3 Pull requests opened by 2 people
- Add dockerfile for Intel XPU (#1668, opened Jun 4, 2025)
- Intel GPU: Enable SYCL building system (#1671, opened Jun 5, 2025)
- Enable fp16/bf16 absmax (#1672, opened Jun 6, 2025)
39 Issues closed by 1 person
- CUDA SETUP : detection failed (#897, closed Jun 9, 2025)
- CUDA SETUP: CUDA detection failed! (#927, closed Jun 9, 2025)
- CUDA Setup failed despite GPU being available. (#936, closed Jun 9, 2025)
- bitsandbytes can't handle multiple path locations in LD_LIBRARY_PATH (#1112, closed Jun 9, 2025)
- bitsandbytes interprets URLs from environment variables as paths (#1191, closed Jun 9, 2025)
- NameError: name 'str2optimizer32bit' is not defined (#1180, closed Jun 9, 2025)
- error on VectorstoreIndexCreator (#1194, closed Jun 9, 2025)
- CUDA setup failed (#1068, closed Jun 9, 2025)
- Does it work for gemma3 4b (#1572, closed Jun 9, 2025)
- asking for CUDA 124 when only CUDA 126 exists (#1470, closed Jun 9, 2025)
- Path is broken in Stable Diffusion after installing extension. (#1168, closed Jun 9, 2025)
- [RFC] Cross-Platform Refactor: CPU-only implementation (#1021, closed Jun 9, 2025)
- [RFC] Cross-Platform Refactor: Testing and CI/CD Strategy (#1031, closed Jun 9, 2025)
- Model not able to quantize (#1354, closed Jun 9, 2025)
- "You have a version of `bitsandbytes` that is not compatible with 4bit inference and training" (#1246, closed Jun 9, 2025)
- bitsandbytes error (#911, closed Jun 9, 2025)
- AttributeError: 'NoneType' object has no attribute 'device' (#244, closed Jun 9, 2025)
- CUDA Setup failed despite CUDA being available. (#1404, closed Jun 9, 2025)
- CUDA setup failed on linux CUDA12.3 with version 0.45.0 (#1440, closed Jun 9, 2025)
- CUDA Setup failed despite GPU being available. (#1449, closed Jun 9, 2025)
- Could not run Kohya (#1200, closed Jun 9, 2025)
- errorr en google colab. aparelpg@gmail.com (#1027, closed Jun 9, 2025)
- Dreambooth sd-webui: CUDA Setup failed despite GPU being available. (#979, closed Jun 9, 2025)
- Error when I try to train a model with Kohya (#871, closed Jun 9, 2025)
- Lora training fails despite python -m bitsandbytes all positive (#978, closed Jun 9, 2025)
- This is not working on Google Colab (#1002, closed Jun 9, 2025)
- Installation succeed with CUDA 12.3, but libcudart.so is not found (#956, closed Jun 9, 2025)
- A possible solution for the CUDA setup problem (#915, closed Jun 9, 2025)
- Link to code for reproducing table found in Multi-backend support (non-CUDA backends) documentation? (#1456, closed Jun 4, 2025)
- [chore] fix continuous release not working correctly (#1558, closed Jun 4, 2025)
- CPU/XPU tests align (#1637, closed Jun 4, 2025)
- Clarification on g++ / libcxx requirements: `GLIBCXX_3.4.32' not found (#1663, closed Jun 4, 2025)
- Params4bit.to does not keep bnb_quantized status (#1664, closed Jun 3, 2025)
19 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- [RFC] Cross-Platform Refactor: Mac M1 support (#1020, commented on Jun 3, 2025 • 0 new comments)
- Optimizer custom ops (#1595, commented on Jun 4, 2025 • 0 new comments)
- Custom Ops: Unit Tests (#1553, commented on Jun 5, 2025 • 0 new comments)
- Linear8bitLt can not be moved back to cpu (#1332, commented on Jun 5, 2025 • 0 new comments)
- Triton now replaces `triton.language.libdevice` with `triton.language.mathlib` (#909, commented on Jun 9, 2025 • 0 new comments)
- Post-training LoRA model cannot be loaded from state dict (#960, commented on Jun 9, 2025 • 0 new comments)
- OSError: libcusparse.so.11: cannot open shared object file: No such file or directory CUDA Setup failed despite CUDA being available. (#1234, commented on Jun 9, 2025 • 0 new comments)
- [QUESTION] Quantizing in a different way... (#1256, commented on Jun 9, 2025 • 0 new comments)
- 8 bit quantization and finetuning with lora is not working - receiving runtime error (#1097, commented on Jun 9, 2025 • 0 new comments)
- Multiple issues installing for AMD GPU (Radeon RX7600XT) (#1519, commented on Jun 9, 2025 • 0 new comments)
- Can't get llm_int8_skip_modules to work: 'Parameter' object has no attribute 'SCB' (#1634, commented on Jun 9, 2025 • 0 new comments)
- Phi 4 Multimodal not working with BnB/4bit quantization (#1600, commented on Jun 9, 2025 • 0 new comments)
- The output logistics of Qwen-7B under 8-bit quantization contain NaN (#1504, commented on Jun 9, 2025 • 0 new comments)
- Model support for OmniGen-v1 (#1407, commented on Jun 9, 2025 • 0 new comments)
- Cannot load pre-quantized Janus Pro 7B (#1498, commented on Jun 9, 2025 • 0 new comments)
- keep get Error while "python -m bitsandbytes" (#1641, commented on Jun 9, 2025 • 0 new comments)
- Initial kernel changes to support GaLore (#1137, commented on Jun 9, 2025 • 0 new comments)
- [Triton/XPU] Support 4bit dequantization logic on Triton (#1629, commented on Jun 10, 2025 • 0 new comments)
- Fix for matmul_4bit out Parameter Issue (#1659, commented on Jun 3, 2025 • 0 new comments)