aleversn committed (verified) · Commit 484de04 · Parent: 8ed9c71

Update README.md

Files changed (1): README.md (+212 -199)
---
license: apache-2.0
task_categories:
- token-classification
language:
- en
- zh
tags:
- agent
size_categories:
- 100K<n<1M
---

<p align="center">
<img src="./docs/assets/logo.svg" alt="Logo" width="120" />
<p align="center">
<a href="https://github.com/PKU-DAIR">
<img alt="Static Badge" src="https://img.shields.io/badge/%C2%A9-PKU--DAIR-%230e529d?labelColor=%23003985">
</a>
</p>
</p>

## **WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning**

[Paper](https://arxiv.org/pdf/2510.04097) | [中文](./docs/Chinese.md)

## **🔍 Overview**

**WebRenderBench** is a large-scale benchmark designed to advance **WebUI-to-Code** research for multimodal large language models (MLLMs) through evaluation on real-world webpages. It provides:

* **45,100** real webpages collected from public portal websites
* **High diversity and complexity**, covering a wide range of industries and design styles
* **Novel evaluation metrics** that quantify **layout and style consistency** based on rendered pages
* The **ALISA reinforcement learning framework**, which uses the new metrics as reward signals to optimize generation quality

---

## **🚀 Key Features**

### **Beyond the Limitations of Traditional Benchmarks**

WebRenderBench addresses the core issues of existing WebUI-to-Code benchmarks in data quality and evaluation methodology:

| Aspect | Traditional Benchmarks | Advantages of WebRenderBench |
| :--- | :--- | :--- |
| **Data Quality** | Small-scale, simple-structured, or LLM-synthesized data with limited diversity | Large-scale, real-world, and structurally complex webpages that present higher challenges |
| **Evaluation Reliability** | Relies on visual APIs (high cost) or code-structure comparison (fails to handle code asymmetry) | Objectively and efficiently evaluates layout and style consistency based on rendered results |
| **Training Effectiveness** | Difficult to optimize on crawled data with asymmetric code structures | Proposed metrics can be directly used as RL reward signals to enhance model optimization |

---

### **Dataset Characteristics**

<p align="center">
<img src="./docs/assets/framework.svg" alt="WebRenderBench and ALISA Framework" width="80%" />
</p>
<p align="center"><i>Figure 1: Dataset construction pipeline and the ALISA framework</i></p>

Our dataset is constructed through a systematic process to ensure both **high quality** and **diversity**:

1. **Data Collection**: URLs are obtained from open enterprise portal datasets. A high-concurrency crawler captures 210K webpages along with static resources.
2. **Data Processing**: MHTML pages are converted into HTML files, and cross-domain resources are processed to ensure local renderability and full-page screenshots.
3. **Data Cleaning**: Pages with abnormal sizes, rendering errors, or missing styles are filtered out. Multimodal QA models further remove low-quality samples with large blank areas or overlapping elements, yielding 110K valid pages.
4. **Data Categorization**: Pages are categorized by industry and complexity (measured via *Group Count*) to ensure balanced distribution across difficulty levels and domains.

Finally, we construct a dataset of **45.1K** samples, evenly split into training and test sets.

---

## **🌟 Evaluation Framework**

We propose a novel evaluation protocol based on **rendered webpages**, quantifying model performance along two key dimensions: **layout** and **style consistency**.

---

### **RDA (Relative Layout Difference of Associated Elements)**

**Purpose:** Measures relative layout differences between matched elements.

* **Element Association:** Matches corresponding elements between generated and target pages using text similarity (LCS) and geometric distance.
* **Positional Deviation:** The page is divided into a 3×3 grid. Associated elements are compared cell by cell: if a matched pair falls in different grid cells, the score is 0; otherwise, a deviation-based score is computed.
* **Uniqueness Weighting:** Each element is weighted by its uniqueness (inverse group size), giving higher importance to distinctive components.
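
To make the positional check above concrete, here is a minimal, illustrative sketch. It assumes element centers normalized to [0, 1] and precomputed uniqueness weights for each associated pair; the function names are hypothetical and do not reflect the API of `scripts/1_get_evaluation.py`.

```python
def grid_cell(cx: float, cy: float, n: int = 3) -> tuple[int, int]:
    """Map a normalized (cx, cy) center point to its cell in an n x n grid."""
    return min(int(cx * n), n - 1), min(int(cy * n), n - 1)


def rda_pair_score(gen_center: tuple[float, float], ref_center: tuple[float, float]) -> float:
    """Score one associated pair: 0 if the two elements land in different grid
    cells, otherwise 1 minus the normalized center deviation."""
    if grid_cell(*gen_center) != grid_cell(*ref_center):
        return 0.0
    dx = abs(gen_center[0] - ref_center[0])
    dy = abs(gen_center[1] - ref_center[1])
    return 1.0 - min(1.0, (dx + dy) / 2)


def rda(pairs, weights) -> float:
    """Uniqueness-weighted mean of pair scores over all associated elements."""
    return sum(w * rda_pair_score(g, r) for (g, r), w in zip(pairs, weights)) / max(sum(weights), 1e-9)


# Two associated pairs: the first is well aligned, the second drifts into a different cell.
print(rda([((0.10, 0.10), (0.12, 0.11)), ((0.55, 0.40), (0.70, 0.40))], weights=[0.2, 1.0]))
```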

---

### **GDA (Group-wise Difference in Element Counts)**

**Purpose:** Measures group-level alignment of axis-aligned elements.

* **Grouping:** Elements aligned on the same horizontal or vertical axis are treated as one group.
* **Count Comparison:** Compares whether corresponding groups in the generated and target pages contain the same number of elements.
* **Uniqueness Weighting:** Weighted by element uniqueness to emphasize key structural alignment.
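
To make the grouping rule above concrete, here is a rough, unweighted sketch that buckets elements sharing roughly the same x or y coordinate and compares group sizes between the two pages. The field names, tolerance, and scoring rule are illustrative assumptions, not the released implementation.

```python
from collections import Counter


def axis_groups(elements, tol: int = 5) -> Counter:
    """Bucket elements that share (approximately) the same horizontal or vertical axis.
    Each element is a dict with pixel coordinates, e.g. {"x": 120, "y": 64}."""
    rows = Counter(("row", round(e["y"] / tol)) for e in elements)
    cols = Counter(("col", round(e["x"] / tol)) for e in elements)
    return rows + cols


def gda(gen_elements, ref_elements) -> float:
    """Agreement between group element counts on the generated and target pages."""
    gen, ref = axis_groups(gen_elements), axis_groups(ref_elements)
    keys = set(gen) | set(ref)
    if not keys:
        return 1.0
    # Per-group agreement: ratio of the smaller element count to the larger one.
    return sum(min(gen[k], ref[k]) / max(gen[k], ref[k]) for k in keys) / len(keys)


gen = [{"x": 10, "y": 20}, {"x": 200, "y": 20}]                       # a row of two items
ref = [{"x": 10, "y": 20}, {"x": 200, "y": 20}, {"x": 400, "y": 20}]  # the target row has three
print(round(gda(gen, ref), 3))  # 0.667
```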

---

### **SDA (Style Difference of Associated Elements)**

**Purpose:** Evaluates fine-grained style differences between associated elements.

* **Multi-Dimensional Style Extraction:** Measures differences in foreground color, background color, font size, and border radius.
* **Weighted Averaging:** Computes a weighted mean of style similarity scores across all associated elements to obtain an overall style score.
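
The sketch below shows one way such per-property similarities could be combined for a single associated pair. The property set follows the bullets above; the distance functions and equal per-property weights are simplifying assumptions rather than the official scoring code.

```python
def color_sim(c1, c2) -> float:
    """RGB colors as (r, g, b) tuples in 0-255; 1.0 means identical."""
    return 1.0 - sum(abs(a - b) for a, b in zip(c1, c2)) / (3 * 255)


def scalar_sim(v1, v2) -> float:
    """Scalar properties such as font size or border radius (in px)."""
    return 1.0 - abs(v1 - v2) / max(v1, v2, 1)


def sda_pair(gen: dict, ref: dict) -> float:
    """Mean per-property similarity for one associated element pair."""
    sims = [
        color_sim(gen["color"], ref["color"]),                        # foreground color
        color_sim(gen["background-color"], ref["background-color"]),  # background color
        scalar_sim(gen["font-size"], ref["font-size"]),
        scalar_sim(gen["border-radius"], ref["border-radius"]),
    ]
    return sum(sims) / len(sims)


# The page-level SDA score would then be a uniqueness-weighted mean of sda_pair over all pairs.
```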

---

## **⚙️ Installation Guide**

### **Core Dependencies**

<!--
# Recommended: Use vLLM for faster inference
pip install vllm "transformers>=4.40.0" "torch>=2.0"

# Other dependencies
pip install selenium pandas scikit-learn pillow

Alternatively:
pip install -r requirements.txt
-->

Coming Soon

---

## **📊 Benchmark Workflow**

### **Directory Structure**

```
|- docs/                     # Documentation
|- scripts/                  # Evaluation scripts
|- web_render_test.jsonl     # Test set metadata
|- web_render_train.jsonl    # Training set metadata
|- test_webpages.zip         # Test set webpages
|- train_webpages.zip        # Training set webpages
|- test_screenshots.zip      # Test set screenshots
|- train_screenshots.zip     # Training set screenshots
```
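
A small, schema-agnostic sketch for inspecting the metadata and unpacking the archives after download (file names as listed above; everything else is illustrative):

```python
import json
import zipfile

# Load the test-set metadata: one JSON object per line.
with open("web_render_test.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(len(records), "test samples")
print("available fields:", sorted(records[0].keys()))

# Unpack the corresponding webpages and screenshots next to the metadata.
for archive in ("test_webpages.zip", "test_screenshots.zip"):
    with zipfile.ZipFile(archive) as z:
        z.extractall(archive.removesuffix(".zip"))
```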

---

### **Implementation Steps**

1. **Data Preparation**

   * Download the WebRenderBench dataset and extract the webpage and screenshot archives.
   * Each pair consists of a real webpage (HTML + resources) and its rendered screenshot.

2. **Model Inference**

   * Run inference using engines such as **vLLM** or **LMDeploy**, and save the results to the designated directory.

3. **Evaluation**

   * Run `scripts/1_get_evaluation.py`.
   * The script launches a web server to render both the generated and the target HTML.
   * WebDriver extracts DOM information and computes **RDA**, **GDA**, and **SDA** scores.
   * Results are saved under `save_results/`.
   * Final scores are aggregated via `scripts/2_compute_alisa_scores.py`.

4. **ALISA Training (Optional)**

   * Use `models/train_rl.py` for reinforcement learning fine-tuning. *(Coming Soon)*
   * The computed evaluation scores serve as reward signals to optimize policy models via methods such as **GRPO**.
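
For step 2, one straightforward setup is to serve the model through vLLM's OpenAI-compatible endpoint (e.g. `vllm serve Qwen/Qwen2.5-VL-7B-Instruct`) and send each screenshot as a base64-encoded image. The prompt wording, file paths, and output layout below are illustrative assumptions, not a prescribed interface:

```python
import base64
from pathlib import Path
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")


def generate_html(screenshot_path: str) -> str:
    """Ask the served MLLM to reconstruct the page shown in the screenshot."""
    image_b64 = base64.b64encode(Path(screenshot_path).read_bytes()).decode()
    response = client.chat.completions.create(
        model="Qwen/Qwen2.5-VL-7B-Instruct",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Reproduce this webpage as a single self-contained HTML file."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
        max_tokens=4096,
    )
    return response.choices[0].message.content


Path("predictions").mkdir(exist_ok=True)
Path("predictions/0001.html").write_text(generate_html("test_screenshots/0001.png"), encoding="utf-8")
```

For step 4, the per-sample consistency scores can be collapsed into a scalar reward for a GRPO-style trainer. A minimal wrapper might look as follows; the equal weights are an assumption, and the actual ALISA reward is defined by the (not yet released) training code:

```python
def alisa_reward(scores: dict, weights: dict | None = None) -> float:
    """Combine per-sample RDA/GDA/SDA scores (each in [0, 1]) into one reward."""
    weights = weights or {"RDA": 1 / 3, "GDA": 1 / 3, "SDA": 1 / 3}
    return sum(weights[k] * scores[k] for k in weights)


# Example: reward for a completion whose rendered page matches the target reasonably well.
print(round(alisa_reward({"RDA": 0.72, "GDA": 0.81, "SDA": 0.66}), 3))  # 0.73
```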

---

## **📈 Model Performance Insights**

We evaluate **17 multimodal large language models** of varying scales and architectures (both open- and closed-source).

* **Combined Scores of RDA, GDA, and SDA (%)**

![Inference Results](./docs/assets/inference_results.png)

**Key Findings:**

* Overall, larger models achieve higher consistency. **GPT-4.1-mini** and **Qwen-VL-Plus** perform best among closed-source models.
* While most models perform reasonably on simple pages (*Group Count* < 50), **RDA scores drop sharply** as page complexity increases—precise layout alignment remains a major challenge.
* After reinforcement learning via the **ALISA framework**, **Qwen2.5-VL-7B** shows substantial improvements across all complexity levels, even surpassing **GPT-4.1-mini** on simpler cases.

---

## **📅 Future Work**

* [ ] Release pretrained models fine-tuned with the ALISA framework
* [ ] Expand dataset coverage to more industries and dynamic interaction patterns
* [ ] Open-source the complete toolchain for data collection, cleaning, and evaluation

---

## **📜 License**

The **WebRenderBench dataset** is released for **research purposes only**.
All accompanying code will be published under the **Apache License 2.0**.

All webpages in the dataset are collected from publicly accessible enterprise portals.
To protect privacy, all personal and sensitive information has been removed or modified.

---

## **📚 Citation**

If you use our dataset or framework in your research, please cite the following paper:

```bibtex
@article{webrenderbench2025,
  title={WebRenderBench: Enhancing Web Interface Generation through Layout-Style Consistency and Reinforcement Learning},
  author={Anonymous Author(s)},
  year={2025},
  journal={arXiv preprint arXiv:2510.04097},
}
```