Update README.md
LoRA: [mpasila/Viking-Magnum-v0.1-LoRA-7B](https://huggingface.co/mpasila/Viking
Another thing to note: this was trained with regular LoRA (not quantized/QLoRA), which should improve quality a bit. The model's context length is only 4096 and it was trained at that length, though I think you can use RoPE scaling to extend it.
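As a rough illustration of the RoPE idea mentioned above (this is a hedged sketch, not code from this repo; the function name and the scale factor of 2 are assumptions), linear RoPE scaling ("position interpolation") divides position indices by a factor so that a longer sequence still maps into the 0..4096 range the model was trained on:

```python
# Sketch of linear RoPE position interpolation. Illustrative only:
# the names and the factor below are assumptions, not from the model card.
TRAINED_CTX = 4096   # context length this model was trained with
scale = 2.0          # hypothetical factor to reach ~8192 tokens

def scaled_position(pos: int, factor: float = scale) -> float:
    # Positions are compressed back into the trained range:
    # e.g. position 8191 maps to ~4095.5, which is below 4096.
    return pos / factor

assert scaled_position(8191) < TRAINED_CTX
```

Inference frameworks expose this as a configuration option rather than something you compute by hand; the snippet only shows why scaling keeps extended positions inside the trained window.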
LoRA rank was 128, with alpha set to the same value. The model was trained for 1 epoch.
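One consequence of setting alpha equal to the rank is that the LoRA scaling factor alpha/r comes out to exactly 1, so the low-rank update is applied at full strength. A minimal sketch (scalar stand-ins for the actual weight matrices; not the training code used here):

```python
# LoRA applies W' = W + (alpha / r) * (B @ A).
# With alpha == r == 128 (as in this model), the scaling factor is 1.0.
r, alpha = 128, 128
scaling = alpha / r

def apply_lora(w: float, ba: float, s: float = scaling) -> float:
    # Scalar stand-in for the matrix update W + s * (B @ A).
    return w + s * ba

assert scaling == 1.0
```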
# Uploaded model