Insights: abetlen/llama-cpp-python
Overview
- 0 Merged pull requests
- 1 Open pull request
- 0 Closed issues
- 1 New issue
There hasn’t been any commit activity on abetlen/llama-cpp-python in the last week.
1 Pull request opened by 1 person
- fix: rename op_offloat to op_offload in llama.py (#2046, opened Aug 2, 2025)
1 Issue opened by 1 person
- Build fails on Windows with non-CUDA backends (CLBlast, Vulkan) for versions >= 0.2.78 (#2047, opened Aug 3, 2025)
7 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- arm64 builds for CUDA (#1446, commented on Jul 29, 2025 • 0 new comments)
- LLama cpp problem ( gpu support) (#509, commented on Jul 31, 2025 • 0 new comments)
- Windows11:ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python) (#2035, commented on Aug 1, 2025 • 0 new comments)
- Building and installing llama_cpp from source for RTX 50 Blackwell GPU (#2028, commented on Aug 2, 2025 • 0 new comments)
- Can't install with GPU support with Cuda toolkit 12.9 and Cuda 12.9 (#2013, commented on Aug 2, 2025 • 0 new comments)
- Cannot run T5-based models (#1587, commented on Aug 3, 2025 • 0 new comments)
- feat: Add Gemma3 chat handler (#1976) (#1989, commented on Aug 2, 2025 • 0 new comments)