Agente-Director-Qwen2-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit

  • Developed by: Agnuxo
  • License: apache-2.0
  • Finetuned from model: Qwen/Qwen2-7B-Instruct

This model was fine-tuned using Unsloth and Hugging Face's TRL library.
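
A rough sketch of what an Unsloth + TRL supervised fine-tuning run looks like is shown below. The dataset file (`train.jsonl`), LoRA settings, and training arguments are illustrative assumptions rather than the exact recipe used for this model, and some argument names differ between TRL versions.

```python
# Illustrative Unsloth + TRL SFT sketch; hyperparameters and dataset are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load the base model with Unsloth's optimized loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2-7B-Instruct",
    max_seq_length=max_seq_length,
    load_in_4bit=True,  # QLoRA-style training; the published GGUF is a separate 8-bit export
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical instruction dataset with a pre-formatted "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```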

Model Details

  • Model Parameters: 7,070.63 million (~7.07 billion)
  • Model Size: 13.61 GB
  • Quantization: 8-bit quantized
  • Estimated GPU Memory Required: ~13 GB

Note: The actual memory usage may vary depending on the specific hardware and runtime environment.
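
One common way to run the 8-bit GGUF file locally is llama-cpp-python; a minimal sketch follows. The `.gguf` file name below is an assumption (use the actual file from this repository), and `n_gpu_layers=-1` assumes the full ~13 GB fits in GPU memory.

```python
# Minimal llama-cpp-python loading sketch; the model_path file name is an assumption.
from llama_cpp import Llama

llm = Llama(
    model_path="Agente-Director-Qwen2-7B-Instruct_CODE_Python.Q8_0.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU (~13 GB VRAM); use 0 for CPU-only
)

messages = [
    {"role": "system", "content": "You are a helpful Python coding assistant."},
    {"role": "user", "content": "Write a function that reverses a string."},
]

# llama-cpp-python applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.2)
print(out["choices"][0]["message"]["content"])
```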

Benchmark Results

This model has been fine-tuned and evaluated on the GLUE MRPC task:

  • Accuracy: 0.6078
  • F1 Score: 0.6981

GLUE MRPC Metrics
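
For reference, the sketch below shows how accuracy and F1 on GLUE MRPC can be computed with the `datasets` and `evaluate` libraries. The `predict_paraphrase` function is a hypothetical placeholder; this is not the exact harness behind the numbers above.

```python
# Sketch of scoring MRPC predictions; predict_paraphrase is a hypothetical placeholder.
from datasets import load_dataset
import evaluate

mrpc = load_dataset("glue", "mrpc", split="validation")
metric = evaluate.load("glue", "mrpc")  # reports both accuracy and F1

def predict_paraphrase(sentence1: str, sentence2: str) -> int:
    """Query the model and map its answer to 0 (not paraphrase) or 1 (paraphrase)."""
    raise NotImplementedError  # replace with an actual call to the model

predictions = [predict_paraphrase(ex["sentence1"], ex["sentence2"]) for ex in mrpc]
results = metric.compute(predictions=predictions, references=mrpc["label"])
print(results)  # e.g. {"accuracy": ..., "f1": ...}
```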

For more details, visit my GitHub.

Thanks for your interest in this model!
