---
license: apache-2.0
task_categories:
- token-classification
language:
- en
- zh
tags:
- agent
size_categories:
- 100K<n<1M
---

<p align="center">
  <img src="./docs/assets/logo.svg" alt="Logo" width="120" />
  <p align="center">
    <a href="https://github.com/PKU-DAIR">
      <img alt="Static Badge" src="https://img.shields.io/badge/%C2%A9-PKU--DAIR-%230e529d?labelColor=%23003985">
    </a>
  </p>
</p>

## **WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning**

[Paper](https://arxiv.org/pdf/2510.04097) | [中文](./docs/Chinese.md)

## **🔍 Overview**

**WebRenderBench** is a large-scale benchmark designed to advance **WebUI-to-Code** research for multimodal large language models (MLLMs) through evaluation on real-world webpages. It provides:

* **45,100** real webpages collected from public portal websites
* **High diversity and complexity**, covering a wide range of industries and design styles
* **Novel evaluation metrics** that quantify **layout and style consistency** based on rendered pages
* The **ALISA reinforcement learning framework**, which uses the new metrics as reward signals to optimize generation quality

---

## **🚀 Key Features**

### **Beyond the Limitations of Traditional Benchmarks**

WebRenderBench addresses the core issues of existing WebUI-to-Code benchmarks in data quality and evaluation methodology:

| Aspect | Traditional Benchmarks | Advantages of WebRenderBench |
| :--- | :--- | :--- |
| **Data Quality** | Small-scale, simple-structured, or LLM-synthesized data with limited diversity | Large-scale, real-world, structurally complex webpages that pose a greater challenge |
| **Evaluation Reliability** | Relies on costly visual APIs, or on code-structure comparison that fails under code asymmetry | Objectively and efficiently evaluates layout and style consistency from rendered results |
| **Training Effectiveness** | Hard to optimize on crawled data with asymmetric code structures | The proposed metrics double as RL reward signals, enabling direct model optimization |

---

### **Dataset Characteristics**

<p align="center">
  <img src="./docs/assets/framework.svg" alt="WebRenderBench and ALISA Framework" width="80%" />
</p>
<p align="center"><i>Figure 1: Dataset construction pipeline and the ALISA framework</i></p>

Our dataset is constructed through a systematic process to ensure both **high quality** and **diversity**:

1. **Data Collection**: URLs are obtained from open enterprise-portal datasets. A high-concurrency crawler captures 210K webpages along with their static resources.
2. **Data Processing**: MHTML pages are converted into HTML files, and cross-domain resources are localized so that every page renders offline and yields a full-page screenshot.
3. **Data Cleaning**: Pages with abnormal sizes, rendering errors, or missing styles are filtered out. Multimodal QA models then remove low-quality samples with large blank areas or overlapping elements, yielding 110K valid pages.
4. **Data Categorization**: Pages are categorized by industry and by complexity (measured via *Group Count*) to ensure a balanced distribution across difficulty levels and domains.

Finally, we construct a dataset of **45.1K** samples, evenly split into training and test sets.

---

## **🌟 Evaluation Framework**

We propose a novel evaluation protocol based on **rendered webpages**, quantifying model performance along two key dimensions: **layout** and **style** consistency.

---

### **RDA (Relative Layout Difference of Associated Elements)**

**Purpose:** Measures relative layout differences between matched elements.

* **Element Association:** Matches corresponding elements between the generated and target pages using text similarity (longest common subsequence, LCS) and geometric distance.
* **Positional Deviation:** The page is divided into a 3×3 grid and associated elements are compared cell by cell: if they fall in different cells the score is 0; otherwise a deviation-based score is computed.
* **Uniqueness Weighting:** Each element is weighted by its uniqueness (inverse group size), giving higher importance to distinctive components.
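
As a rough illustration of the RDA computation (a simplified sketch, not the benchmark's actual implementation: the element records, the linear decay formula, and all helper names are assumptions):

```python
from difflib import SequenceMatcher

def text_similarity(a: str, b: str) -> float:
    """Illustrative text-similarity proxy used during element association."""
    return SequenceMatcher(None, a, b).ratio()

def grid_cell(x, y, page_w, page_h, n=3):
    """Map a point to its cell in an n x n grid over the page."""
    col = min(int(x / page_w * n), n - 1)
    row = min(int(y / page_h * n), n - 1)
    return row, col

def rda_score(pairs, page_w, page_h):
    """
    pairs: list of (gen_elem, tgt_elem) dicts with keys
    'x', 'y' (element center) and 'group_size'.
    Pairs landing in different grid cells score 0; otherwise the score
    decays with normalized positional deviation. Each pair is weighted
    by uniqueness = 1 / group_size.
    """
    num = den = 0.0
    for gen, tgt in pairs:
        w = 1.0 / tgt["group_size"]            # uniqueness weight
        if grid_cell(gen["x"], gen["y"], page_w, page_h) != \
           grid_cell(tgt["x"], tgt["y"], page_w, page_h):
            s = 0.0                            # different cells -> zero
        else:
            dx = abs(gen["x"] - tgt["x"]) / page_w
            dy = abs(gen["y"] - tgt["y"]) / page_h
            s = 1.0 - (dx + dy) / 2            # deviation-based score
        num += w * s
        den += w
    return num / den if den else 0.0
```

A perfectly aligned pair scores 1.0; a pair rendered on the opposite side of the page scores 0.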

---

### **GDA (Group-wise Difference in Element Counts)**

**Purpose:** Measures group-level alignment of axis-aligned elements.

* **Grouping:** Elements aligned on the same horizontal or vertical axis are treated as one group.
* **Count Comparison:** Checks whether corresponding groups in the generated and target pages contain the same number of elements.
* **Uniqueness Weighting:** Groups are weighted by element uniqueness to emphasize key structural alignment.
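
A minimal sketch of the GDA idea under the same simplified element records (the axis-bucketing tolerance and the exact weighting scheme are assumptions):

```python
from collections import Counter

def axis_groups(elements, tol=2):
    """
    Group elements sharing (approximately) the same horizontal or
    vertical axis. Each element is a dict with 'x', 'y' coordinates.
    Returns a Counter mapping axis keys to element counts.
    """
    groups = Counter()
    for e in elements:
        groups[("row", round(e["y"] / tol))] += 1   # same horizontal axis
        groups[("col", round(e["x"] / tol))] += 1   # same vertical axis
    return groups

def gda_score(gen_elements, tgt_elements, tol=2):
    """
    Fraction of target axis groups whose element count is matched in the
    generated page, weighted by uniqueness (1 / group size).
    """
    gen = axis_groups(gen_elements, tol)
    tgt = axis_groups(tgt_elements, tol)
    num = den = 0.0
    for axis, count in tgt.items():
        w = 1.0 / count                      # uniqueness weight
        num += w * (1.0 if gen.get(axis) == count else 0.0)
        den += w
    return num / den if den else 0.0
```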

---

### **SDA (Style Difference of Associated Elements)**

**Purpose:** Evaluates fine-grained style differences between associated elements.

* **Multi-Dimensional Style Extraction:** Measures differences in foreground color, background color, font size, and border radius.
* **Weighted Averaging:** Computes a weighted mean of style-similarity scores across all associated elements to obtain an overall style score.
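
The SDA computation can be sketched as follows (the per-dimension distance formulas are illustrative assumptions; the paper defines the exact measures):

```python
def color_diff(c1, c2):
    """Normalized distance between two RGB colors (tuples of 0-255 ints)."""
    return sum(abs(a - b) for a, b in zip(c1, c2)) / (3 * 255)

def style_similarity(gen, tgt):
    """
    Per-element style similarity over the four dimensions the benchmark
    names: foreground color, background color, font size, border radius.
    Each element dict holds 'fg', 'bg' (RGB tuples), 'font_size', 'radius'.
    """
    fg = 1 - color_diff(gen["fg"], tgt["fg"])
    bg = 1 - color_diff(gen["bg"], tgt["bg"])
    size = 1 - abs(gen["font_size"] - tgt["font_size"]) / max(gen["font_size"], tgt["font_size"], 1)
    radius = 1 - abs(gen["radius"] - tgt["radius"]) / max(gen["radius"], tgt["radius"], 1)
    return (fg + bg + size + radius) / 4

def sda_score(pairs, weights=None):
    """Weighted mean of style similarities across associated element pairs."""
    weights = weights or [1.0] * len(pairs)
    total = sum(weights)
    return sum(w * style_similarity(g, t) for w, (g, t) in zip(weights, pairs)) / total
```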

---

## **⚙️ Installation Guide**

### **Core Dependencies**

<!--
# Recommended: Use vLLM for faster inference
pip install vllm "transformers>=4.40.0" "torch>=2.0"

# Other dependencies
pip install selenium pandas scikit-learn pillow

Alternatively:
pip install -r requirements.txt
-->

Coming Soon

---

## **📊 Benchmark Workflow**

### **Directory Structure**

```
|- docs/                   # Documentation
|- scripts/                # Evaluation scripts
|- web_render_test.jsonl   # Test set metadata
|- web_render_train.jsonl  # Training set metadata
|- test_webpages.zip       # Test set webpages
|- train_webpages.zip      # Training set webpages
|- test_screenshots.zip    # Test set screenshots
|- train_screenshots.zip   # Training set screenshots
```
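
The two `.jsonl` metadata files are standard JSON Lines, one record per line; a minimal loader sketch (the field names hinted at in the comment are illustrative, so inspect the actual schema):

```python
import json

def load_jsonl(path):
    """Read a JSON-Lines file into a list of dicts (one record per line)."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# samples = load_jsonl("web_render_test.jsonl")
# print(len(samples), samples[0].keys())  # inspect the actual schema
```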

---

### **Implementation Steps**

1. **Data Preparation**

   * Download the WebRenderBench dataset and extract the webpage and screenshot archives.
   * Each pair consists of a real webpage (HTML + resources) and its rendered screenshot.

2. **Model Inference**

   * Run inference using an engine such as **vLLM** or **LMDeploy**, and save the results to the designated directory.

3. **Evaluation**

   * Run `scripts/1_get_evaluation.py`.
   * The script launches a web server to render both the generated and the target HTML.
   * WebDriver extracts DOM information and computes the **RDA**, **GDA**, and **SDA** scores.
   * Results are saved under `save_results/`.
   * Final scores are aggregated via `scripts/2_compute_alisa_scores.py`.

4. **ALISA Training (Optional)**

   * Use `models/train_rl.py` for reinforcement-learning fine-tuning. *(Coming Soon)*
   * The computed evaluation scores serve as reward signals to optimize policy models via methods such as **GRPO**.
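
The reward in step 4 can be sketched as a combination of the three consistency scores (the equal weighting shown is an assumption for illustration, not ALISA's exact formulation):

```python
def alisa_reward(rda, gda, sda, weights=(1/3, 1/3, 1/3)):
    """
    Combine the three consistency scores (each in [0, 1]) into a single
    scalar reward for RL fine-tuning. Equal weighting is an assumption;
    the actual ALISA weighting may differ.
    """
    w_rda, w_gda, w_sda = weights
    return w_rda * rda + w_gda * gda + w_sda * sda
```

During GRPO-style training, each rollout's generated page would be rendered, scored, and rewarded with this scalar.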

---

## **📈 Model Performance Insights**

We evaluate **17 multimodal large language models** of varying scales and architectures (both open- and closed-source).

* **Combined Scores of RDA, GDA, and SDA (%)**

  

**Key Findings:**

* Overall, larger models achieve higher consistency. **GPT-4.1-mini** and **Qwen-VL-Plus** perform best among closed-source models.
* While most models perform reasonably on simple pages (*Group Count* < 50), **RDA scores drop sharply** as page complexity increases: precise layout alignment remains a major challenge.
* After reinforcement learning with the **ALISA framework**, **Qwen2.5-VL-7B** improves substantially across all complexity levels, even surpassing **GPT-4.1-mini** on simpler cases.

---

## **📅 Future Work**

* [ ] Release pretrained models fine-tuned with the ALISA framework
* [ ] Expand dataset coverage to more industries and dynamic interaction patterns
* [ ] Open-source the complete toolchain for data collection, cleaning, and evaluation

---

## **📜 License**

The **WebRenderBench dataset** is released for **research purposes only**.
All accompanying code will be published under the **Apache License 2.0**.

All webpages in the dataset were collected from publicly accessible enterprise portals.
To protect privacy, all personal and sensitive information has been removed or modified.

---

## **📚 Citation**

If you use our dataset or framework in your research, please cite the following paper:

```bibtex
@article{webrenderbench2025,
  title={WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning},
  author={Anonymous Author(s)},
  year={2025},
  journal={arXiv preprint arXiv:2510.04097},
}
```