This is a LoRA adapter for richardr1126/guanaco-13b-merged (or any other merged Guanaco-13B model), which is itself fine-tuned from LLaMA.
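A minimal sketch of attaching this adapter to the merged base model with the `peft` library. The adapter repository id below is a placeholder assumption (substitute this repository's actual id); the heavy downloads happen only when the function is called.

```python
def load_adapter(base_id="richardr1126/guanaco-13b-merged",
                 adapter_id="your-username/your-lora-adapter"):
    """Load the merged Guanaco base model, then attach the LoRA adapter.

    adapter_id is a placeholder, not a real repository; replace it with
    this adapter's id. Imports are kept local because loading a 13B model
    is expensive and only needed at call time.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    # Wrap the base model with the LoRA weights from the adapter repo.
    model = PeftModel.from_pretrained(model, adapter_id)
    return tokenizer, model
```

Calling `load_adapter()` returns a `(tokenizer, model)` pair ready for `model.generate`.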
This LoRA was fine-tuned using QLoRA techniques on the richardr1126/sql-create-context_guanaco_style dataset.
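Since the dataset follows the Guanaco chat style, prompts for this adapter would plausibly look like the sketch below. The exact template and field layout are assumptions based on the standard `### Human / ### Assistant` Guanaco format; the example question mirrors the text-to-SQL-with-context pattern of sql-create-context.

```python
def build_prompt(question: str, context: str) -> str:
    """Combine a natural-language question with a CREATE TABLE context
    using Guanaco-style turn markers (assumed template)."""
    return (
        f"### Human: {question}\n"
        f"Context: {context}\n"
        "### Assistant:"
    )

prompt = build_prompt(
    "How many heads of departments are older than 56?",
    "CREATE TABLE head (age INTEGER)",
)
```

The model's completion after `### Assistant:` would then be the generated SQL query.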
The following hyperparameters were used during training:
@article{dettmers2023qlora,
  title={QLoRA: Efficient Finetuning of Quantized LLMs},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}