---
license: apache-2.0
language:
- ar
tags:
- islamic-finance
- fatwa
- question-answering
- evaluation
- benchmark
- arabic
size_categories:
- 1K<n<10K
task_categories:
- question-answering
- text-generation
pretty_name: "Fatwa QA Evaluation Dataset"
---

# Fatwa QA Evaluation Dataset

## Dataset Description

This dataset contains fatwa question-answer pairs on Islamic finance and jurisprudence for **evaluating** Arabic language models. It is an open-ended QA benchmark: models generate free-form answers, which are then compared against the ground-truth rulings.

## Dataset Statistics

- **Total Samples**: 2,000
- **Average Question Length**: 243.9 characters
- **Average Answer Length**: 492.3 characters
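
The averages above can be recomputed directly from the `question_length` and `answer_length` fields. A minimal sketch, assuming the data ships in a single `test` split (as the usage examples below do):

```python
from datasets import load_dataset

# Load the evaluation split and recompute the length statistics.
ds = load_dataset("SahmBenchmark/fatwa-qa-evaluation", split="test")

print(f"Samples: {len(ds)}")  # expected: 2000
print(f"Avg question length: {sum(ds['question_length']) / len(ds):.1f}")  # ~243.9
print(f"Avg answer length: {sum(ds['answer_length']) / len(ds):.1f}")      # ~492.3
```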

## Dataset Structure

### Data Fields

- `id`: Unique identifier (format: `fatwa_eval_XXXXX`)
- `prompt`: Full evaluation prompt (instruction + question + the answer cue `الإجابة:`, "Answer:")
- `question`: Original question text
- `answer`: Ground truth answer
- `category`: Islamic finance category
- `question_length`: Character count of the question
- `answer_length`: Character count of the answer

### Categories

- **zakat** (Islamic almsgiving): 792 samples
- **riba** (interest/usury-related rulings): 407 samples
- **murabaha** (cost-plus financing): 234 samples
- **gharar** (uncertainty in contracts): 149 samples
- **waqf** (Islamic endowment): 124 samples
- **ijara** (Islamic leasing): 102 samples
- **maysir** (gambling-related rulings): 64 samples
- **musharaka** (partnership financing): 44 samples
- **mudharaba** (profit-sharing partnership): 40 samples
- **takaful** (Islamic insurance): 38 samples
- **sukuk** (Islamic bonds): 6 samples
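
The per-category counts above can be verified with a quick tally (same `test`-split assumption as elsewhere in this card):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("SahmBenchmark/fatwa-qa-evaluation", split="test")

# Tally samples per Islamic finance category, largest first.
for category, n in Counter(ds["category"]).most_common():
    print(f"{category}: {n}")
```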

### Prompt Format
```
بناءً على أحكام الشريعة الإسلامية والفقه الإسلامي، أجب على السؤال التالي بطريقة مفصلة ومدعمة بالأدلة عند الإمكان.  السؤال: [QUESTION]  الإجابة:
```

In English: "Based on the rulings of Islamic Sharia and Islamic jurisprudence, answer the following question in a detailed manner, supported by evidence where possible. Question: [QUESTION] Answer:"
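
If you need to rebuild this prompt from a raw `question` (for example, to swap in a different instruction), it is a plain string template. Note that `build_prompt` and the exact inter-segment whitespace below are illustrative assumptions; the dataset's own `prompt` field is authoritative:

```python
INSTRUCTION = (
    "بناءً على أحكام الشريعة الإسلامية والفقه الإسلامي، "
    "أجب على السؤال التالي بطريقة مفصلة ومدعمة بالأدلة عند الإمكان."
)

def build_prompt(question: str) -> str:
    # Whitespace between segments is assumed from the rendering above;
    # prefer the shipped `prompt` field when exact reproduction matters.
    return f"{INSTRUCTION}  السؤال: {question}  الإجابة:"
```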

## Usage
```python
from datasets import load_dataset

dataset = load_dataset("SahmBenchmark/fatwa-qa-evaluation")

# Access evaluation data
for example in dataset['test']:
    print(f"ID: {example['id']}")
    print(f"Prompt: {example['prompt']}")
    print(f"Question: {example['question']}")
    print(f"Answer: {example['answer']}")
    print(f"Category: {example['category']}")
```

### Evaluation Example
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load dataset and model
dataset = load_dataset("SahmBenchmark/fatwa-qa-evaluation")
model_name = "your-model-name"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a free-form answer, decoding only the newly generated tokens
# (decoding the full sequence would prepend the prompt to every prediction).
def generate_answer(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=512)
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Evaluate
for example in dataset['test']:
    prediction = generate_answer(example['prompt'])
    ground_truth = example['answer']
    # Compare prediction with ground_truth using your metrics
```
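
As one concrete, dependency-free choice of metric, a whitespace-token overlap F1 can stand in for the comparison step. It is a deliberately crude placeholder; for Arabic free-form answers you will likely want ROUGE-L, BERTScore, or an LLM-as-judge setup instead:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Whitespace-token overlap F1 -- a crude baseline, not an endorsed metric."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# e.g. score = token_f1(prediction, ground_truth) inside the loop above
```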

## Related Datasets

- [Fatwa Training Dataset](https://huggingface.co/datasets/SahmBenchmark/fatwa-training_standardized_new): Training data for this evaluation benchmark
- [Fatwa MCQ Evaluation](https://huggingface.co/datasets/SahmBenchmark/fatwa-mcq-evaluation_standardized): Multiple-choice version of this benchmark

## Citation
```bibtex
@dataset{fatwa_qa_evaluation,
  title={Fatwa QA Evaluation Dataset},
  author={SahmBenchmark},
  year={2025},
  url={https://huggingface.co/datasets/SahmBenchmark/fatwa-qa-evaluation}
}
```

## License

This dataset is released under the Apache License 2.0.