Llama 3.1 8B Function Calling

Fine-tuned Llama 3.1 8B Instruct for function/tool calling.

Evaluation (100 held-out samples)

  • Exact match: 62%
  • Function name accuracy: above 90%

Usage

from vllm import LLM

llm = LLM(model="alfazick/llama-3.1-8b-function-calling")
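
A minimal generation call might look like the sketch below; greedy decoding and the token budget are assumptions chosen to keep the JSON output deterministic, not values from the card.

from vllm import LLM, SamplingParams

llm = LLM(model="alfazick/llama-3.1-8b-function-calling")
prompt = "..."  # assemble per the "Prompt Format" section below
# temperature=0.0 (greedy) keeps the tool-call JSON reproducible; max_tokens is an assumption.
params = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)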

Or with transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("alfazick/llama-3.1-8b-function-calling")
tokenizer = AutoTokenizer.from_pretrained("alfazick/llama-3.1-8b-function-calling")
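
A sketch of a full generation round-trip with transformers; the dtype, device placement, and decoding settings are assumptions rather than values documented in the card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alfazick/llama-3.1-8b-function-calling"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the checkpoint's storage dtype
    device_map="auto",           # assumption, for convenience
)

prompt = "..."  # assemble per the "Prompt Format" section below
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=False keeps the tool-call JSON deterministic.
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens (the tool-call JSON).
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))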

Prompt Format

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant with access to the following tools or function calls. Your task is to produce a sequence of tools or function calls necessary to generate response to the user utterance. Use the following tools or function calls as required:
[{"name": "func_name", "description": "...", "parameters": {...}}]<|eot_id|><|start_header_id|>user<|end_header_id|>

{query}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
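
For concreteness, a small helper that assembles this template; the get_weather tool schema and the query are illustrative placeholders, not artifacts of the training data.

import json

SYSTEM_TEXT = (
    "You are a helpful assistant with access to the following tools or function calls. "
    "Your task is to produce a sequence of tools or function calls necessary to generate "
    "response to the user utterance. Use the following tools or function calls as required:\n"
)

def build_prompt(tools, query):
    # Mirrors the template above: system turn carrying the tool list, user turn,
    # then an open assistant turn for the model to complete.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        + SYSTEM_TEXT
        + json.dumps(tools)
        + "<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        + query
        + "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Illustrative tool schema (hypothetical function, not from the training set).
tools = [{"name": "get_weather",
          "description": "Get the current weather for a city",
          "parameters": {"type": "object",
                         "properties": {"city": {"type": "string"}},
                         "required": ["city"]}}]
prompt = build_prompt(tools, "What's the weather in Paris?")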

Output Format

[{"name": "function_name", "arguments": {"arg": "value"}}]

Limitations

  • Trained on 900 examples (proof of concept)
  • Generated argument values may differ from the ground truth
  • Works best for single, simple tool calls

Training Details

  • Framework: Unsloth 2025.11.2 + TRL
  • Hardware: RTX 5090 (32GB)
  • Method: LoRA (r=16, alpha=16)
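
For reference, a LoRA setup with these hyperparameters might look like the sketch below. Only r=16 and alpha=16 come from the card; the base model id, sequence length, 4-bit loading, and target modules are assumptions.

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B-Instruct",  # assumed base checkpoint
    max_seq_length=2048,   # assumption; not stated in the card
    load_in_4bit=True,     # assumption; keeps training within the RTX 5090's 32 GB
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,              # from the card
    lora_alpha=16,     # from the card
    lora_dropout=0.0,
    bias="none",
    # Common choice of attention + MLP projections; assumed, not documented here.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)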