Lightweight CUDA library for 8-bit and 4-bit quantization enabling LLM fine-tuning on consumer GPUs.
bitsandbytes enables accessible large language models via k-bit quantization for PyTorch. We provide three main features for dramatically reducing memory consumption during inference and training:

* 8-bit quantization (LLM.int8()) through `bitsandbytes.nn.Linear8bitLt`
* 4-bit quantization (QLoRA) through `bitsandbytes.nn.Linear4bit`
* 8-bit optimizers through the `bitsandbytes.optim` module
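A minimal sketch of how these pieces are used is shown below. It assumes a CUDA-capable GPU; the layer sizes, dtypes, and learning rate are illustrative values, not recommendations.

```python
import torch
import bitsandbytes as bnb

# LLM.int8() layer: weights are quantized to int8 when the module
# is moved to the accelerator device.
linear_8bit = bnb.nn.Linear8bitLt(1024, 1024, has_fp16_weights=False).to("cuda")
y8 = linear_8bit(torch.randn(4, 1024, dtype=torch.float16, device="cuda"))

# QLoRA-style 4-bit layer: NF4 storage with a higher-precision compute
# dtype (use torch.float16 on GPUs without bfloat16 support).
linear_4bit = bnb.nn.Linear4bit(
    1024, 1024, compute_dtype=torch.bfloat16, quant_type="nf4"
).to("cuda")
y4 = linear_4bit(torch.randn(4, 1024, dtype=torch.bfloat16, device="cuda"))

# 8-bit optimizer: a drop-in replacement for torch.optim.Adam that keeps
# optimizer state in 8 bits via block-wise quantization.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)
```

For both quantized layers, quantization happens when the module's weights are moved to the accelerator, so constructing on CPU and then calling `.to("cuda")` is the usual pattern.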
bitsandbytes has the following platform and hardware requirements:
<small>Note: this table reflects the status of the current development branch. For the latest stable release, see the documentation for the 0.49.2 tag.</small>
🚧 = In Development, 〰️ = Partially Supported, ✅ = Supported, 🐢 = Supported (slow implementation), ❌ = Not Supported
<table>
<thead>
<tr><th>Platform</th><th>Accelerator</th><th>Hardware Requirements</th><th>LLM.int8()</th><th>QLoRA 4-bit</th><th>8-bit Optimizers</th></tr>
</thead>
<tbody>
<tr><td colspan="6">🐧 <strong>Linux, glibc >= 2.24</strong></td></tr>
<tr><td align="right">x86-64</td><td>◻️ CPU</td><td>Minimum: AVX2<br>Optimized: AVX512F, AVX512BF16</td><td>✅</td><td>✅</td><td>❌</td></tr>
<tr><td></td><td>🟩 NVIDIA GPU <br><code>cuda</code></td><td>SM60+ minimum<br>SM75+ recommended</td><td>✅</td><td>✅</td><td>✅</td></tr>
<tr><td></td><td>🟥 AMD GPU <br><code>cuda</code></td><td>CDNA: gfx90a, gfx942, gfx950<br>RDNA: gfx1100, gfx1101, gfx1150, gfx1151, gfx1200, gfx1201</td><td>✅</td><td>✅</td><td>✅</td></tr>
<tr><td></td><td>🟦 Intel GPU <br><code>xpu</code></td><td>Data Center GPU Max Series<br>Arc A-Series (Alchemist)<br>Arc B-Series (Battlemage)</td><td>✅</td><td>✅</td><td>〰️</td></tr>
<tr><td></td><td>🟪 Intel Gaudi <br><code>hpu</code></td><td>Gaudi2, Gaudi3</td><td>✅</td><td>〰️</td><td>❌</td></tr>
<tr><td align="right">aarch64</td><td>◻️ CPU</td><td></td><td>✅</td><td>✅</td><td>❌</td></tr>
<tr><td></td><td>🟩 NVIDIA GPU <br><code>cuda</code></td><td>SM75+</td><td>✅</td><td>✅</td><td>✅</td></tr>
<tr><td colspan="6">🪟 <strong>Windows 11 / Windows Server 2022+</strong></td></tr>
<tr><td align="right">x86-64</td><td>◻️ CPU</td><td>AVX2</td><td>✅</td><td>✅</td><td>❌</td></tr>
<tr><td></td><td>🟩 NVIDIA GPU <br><code>cuda</code></td><td>SM60+ minimum<br>SM75+ recommended</td><td>✅</td><td>✅</td><td>✅</td></tr>
<tr><td></td><td>🟦 Intel GPU <br><code>xpu</code></td><td>Arc A-Series (Alchemist)<br>Arc B-Series (Battlemage)</td><td>✅</td><td>✅</td><td>〰️</td></tr>
<tr><td colspan="6">🍎 <strong>macOS 14+</strong></td></tr>
<tr><td align="right">arm64</td><td>◻️ CPU</td><td>Apple M1+</td><td>✅</td><td>✅</td><td>❌</td></tr>
<tr><td></td><td>⬜ Metal <br><code>mps</code></td><td>Apple M1+</td><td>🐢</td><td>🐢</td><td>❌</td></tr>
</tbody>
</table>

The continued maintenance and development of bitsandbytes is made possible thanks to the generous support of our sponsors. Their contributions help ensure that we can keep improving the project and delivering valuable updates to the community.
<kbd><a href="https://hf.co" target="_blank"><img width="100" src="https://huggingface.co/datasets/huggingface/brand-assets/resolve/main/hf-logo.svg" alt="Hugging Face"></a></kbd> <kbd><a href="https://intel.com" target="_blank"><img width="100" src="https://avatars.githubusercontent.com/u/17888862?s=100&v=4" alt="Intel"></a></kbd>
bitsandbytes is MIT licensed.
If you found this library useful, please consider citing our work:
```bibtex
@article{dettmers2023qlora,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}

@article{dettmers2022llmint8,
  title={LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale},
  author={Dettmers, Tim and Lewis, Mike and Belkada, Younes and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2208.07339},
  year={2022}
}

@article{dettmers2022optimizers,
  title={8-bit Optimizers via Block-wise Quantization},
  author={Dettmers, Tim and Lewis, Mike and Shleifer, Sam and Zettlemoyer, Luke},
  journal={9th International Conference on Learning Representations, ICLR},
  year={2022}
}
```