---
license: apache-2.0
language:
  - ar
tags:
  - islamic-finance
  - fatwa
  - question-answering
  - evaluation
  - benchmark
  - arabic
size_categories:
  - 1K<n<10K
task_categories:
  - question-answering
  - text-generation
pretty_name: Fatwa QA Evaluation Dataset
---

# Fatwa QA Evaluation Dataset

## Dataset Description

This dataset contains fatwa question-answer pairs on Islamic finance and jurisprudence for evaluating Arabic language models. It is an open-ended QA benchmark in which models generate free-form answers.

## Dataset Statistics

- Total Samples: 2,000
- Average Question Length: 243.9 characters
- Average Answer Length: 492.3 characters

## Dataset Structure

### Data Fields

- `id`: Unique identifier (format: `fatwa_eval_XXXXX`)
- `prompt`: Full evaluation prompt (instruction + question + "الإجابة:", i.e. "Answer:")
- `question`: Original question text
- `answer`: Ground truth answer
- `category`: Islamic finance category
- `question_length`: Character count of the question
- `answer_length`: Character count of the answer
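
For illustration, a single record might look like the following sketch; the values here are invented placeholders, not actual dataset content:

```python
# Hypothetical record illustrating the schema (values are placeholders, not real data).
example = {
    "id": "fatwa_eval_00001",   # format: fatwa_eval_XXXXX
    "prompt": "بناءً على أحكام الشريعة الإسلامية والفقه الإسلامي، ... السؤال: ... الإجابة:",
    "question": "...",          # original question text
    "answer": "...",            # ground truth answer
    "category": "zakat",        # one of the categories listed below
    "question_length": 120,     # len(question)
    "answer_length": 350,       # len(answer)
}
```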

### Categories

- zakat: 792 samples
- riba: 407 samples
- murabaha: 234 samples
- gharar: 149 samples
- waqf: 124 samples
- ijara: 102 samples
- maysir: 64 samples
- musharaka: 44 samples
- mudharaba: 40 samples
- takaful: 38 samples
- sukuk: 6 samples
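
These counts can be reproduced from the loaded split with a quick sketch like the following (assuming the `test` split shown in the Usage section below):

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("SahmBenchmark/fatwa-qa-evaluation")
# Count samples per category; expected output along the lines of
# Counter({'zakat': 792, 'riba': 407, ...})
print(Counter(dataset["test"]["category"]))
```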

## Prompt Format

```
بناءً على أحكام الشريعة الإسلامية والفقه الإسلامي، أجب على السؤال التالي بطريقة مفصلة ومدعمة بالأدلة عند الإمكان.

السؤال: [QUESTION]

الإجابة:
```

In English: "Based on the rulings of Islamic Sharia and Islamic jurisprudence, answer the following question in a detailed manner, supported by evidence where possible. Question: [QUESTION] Answer:"
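
The `prompt` field in each record already contains this template with the question filled in. If you need to rebuild a prompt from a raw question, a minimal sketch could look like this (the exact whitespace and line breaks inside the template are an assumption):

```python
# Minimal sketch for rebuilding the evaluation prompt from a raw question.
# The dataset's `prompt` field already contains the filled-in template; the exact
# whitespace used here is an assumption.
TEMPLATE = (
    "بناءً على أحكام الشريعة الإسلامية والفقه الإسلامي، "
    "أجب على السؤال التالي بطريقة مفصلة ومدعمة بالأدلة عند الإمكان.\n\n"
    "السؤال: {question}\n\nالإجابة:"
)

def build_prompt(question: str) -> str:
    return TEMPLATE.format(question=question)
```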

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("SahmBenchmark/fatwa-qa-evaluation")

# Access evaluation data
for example in dataset['test']:
    print(f"ID: {example['id']}")
    print(f"Prompt: {example['prompt']}")
    print(f"Question: {example['question']}")
    print(f"Answer: {example['answer']}")
    print(f"Category: {example['category']}")
```

## Evaluation Example

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load dataset and model
dataset = load_dataset("SahmBenchmark/fatwa-qa-evaluation")
model_name = "your-model-name"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate predictions
def generate_answer(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the echoed prompt
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True)

# Evaluate
for example in dataset['test']:
    prediction = generate_answer(example['prompt'])
    ground_truth = example['answer']
    # Compare prediction with ground_truth using your metrics
```
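
The card leaves the choice of comparison metric open. As one simple illustration (not an official metric of this benchmark), a whitespace-token F1 between prediction and ground truth could be computed like this:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Whitespace-token overlap F1; illustrative only, not the benchmark's official metric."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# e.g., inside the evaluation loop above: score = token_f1(prediction, ground_truth)
```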

## Category Descriptions

- zakat: Islamic almsgiving
- riba: Interest/usury-related rulings
- murabaha: Cost-plus financing
- gharar: Uncertainty in contracts
- waqf: Islamic endowment
- ijara: Islamic leasing
- maysir: Gambling-related rulings
- musharaka: Partnership financing
- mudharaba: Profit-sharing partnership
- takaful: Islamic insurance
- sukuk: Islamic bonds

## Citation

```bibtex
@dataset{fatwa_qa_evaluation,
  title={Fatwa QA Evaluation Dataset},
  author={SahmBenchmark},
  year={2025},
  url={https://huggingface.co/datasets/SahmBenchmark/fatwa-qa-evaluation}
}
```

## License

This dataset is released under the Apache License 2.0.