jzhang533 committed
Commit 3a5fb31 · 1 Parent(s): beca320

upload weights and update README.md


Signed-off-by: Zhang Jun <[email protected]>

Files changed (43)
  1. .gitattributes +1 -0
  2. README.md +91 -67
  3. added_tokens.json +1 -0
  4. benchmark.jpg +3 -0
  5. config.json +75 -0
  6. configuration_ernie4_5_vl.py +658 -0
  7. generation_config.json +10 -0
  8. model-00001-of-00029.safetensors +3 -0
  9. model-00002-of-00029.safetensors +3 -0
  10. model-00003-of-00029.safetensors +3 -0
  11. model-00004-of-00029.safetensors +3 -0
  12. model-00005-of-00029.safetensors +3 -0
  13. model-00006-of-00029.safetensors +3 -0
  14. model-00007-of-00029.safetensors +3 -0
  15. model-00008-of-00029.safetensors +3 -0
  16. model-00009-of-00029.safetensors +3 -0
  17. model-00010-of-00029.safetensors +3 -0
  18. model-00011-of-00029.safetensors +3 -0
  19. model-00012-of-00029.safetensors +3 -0
  20. model-00013-of-00029.safetensors +3 -0
  21. model-00014-of-00029.safetensors +3 -0
  22. model-00015-of-00029.safetensors +3 -0
  23. model-00016-of-00029.safetensors +3 -0
  24. model-00017-of-00029.safetensors +3 -0
  25. model-00018-of-00029.safetensors +3 -0
  26. model-00019-of-00029.safetensors +3 -0
  27. model-00020-of-00029.safetensors +3 -0
  28. model-00021-of-00029.safetensors +3 -0
  29. model-00022-of-00029.safetensors +3 -0
  30. model-00023-of-00029.safetensors +3 -0
  31. model-00024-of-00029.safetensors +3 -0
  32. model-00025-of-00029.safetensors +3 -0
  33. model-00026-of-00029.safetensors +3 -0
  34. model-00027-of-00029.safetensors +3 -0
  35. model-00028-of-00029.safetensors +3 -0
  36. model-00029-of-00029.safetensors +3 -0
  37. model.safetensors.index.json +0 -0
  38. modeling_ernie4_5_vl.py +0 -0
  39. preprocessor_config.json +29 -0
  40. processing_ernie4_5_vl.py +1821 -0
  41. special_tokens_map.json +1 -0
  42. tokenizer.model +3 -0
  43. tokenizer_config.json +22 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ benchmark.jpg filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -17,7 +17,7 @@ library_name: transformers
  <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Baidu-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/PaddlePaddle/ERNIE" target="_blank" style="margin: 2px;">
- <img alt="Github" src="https://img.shields.io/badge/GitHub-ERNIE-000?logo=github&color=0000FF" style="display: inline-block; vertical-align: middle;"/>
+ <img alt="GitHub" src="https://img.shields.io/badge/GitHub-ERNIE-000?logo=github&color=0000FF" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://ernie.baidu.com/blog/ernie4.5" target="_blank" style="margin: 2px;">
  <img alt="Blog" src="https://img.shields.io/badge/🖖_Blog-ERNIE4.5-A020A0" style="display: inline-block; vertical-align: middle;"/>
@@ -28,6 +28,9 @@ library_name: transformers
  <a href="https://x.com/PaddlePaddle" target="_blank" style="margin: 2px;">
  <img alt="X" src="https://img.shields.io/badge/X-PaddlePaddle-6080F0"?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
+ <a href="https://x.com/ErnieforDevs" target="_blank" style="margin: 2px;">
+ <img alt="X" src="https://img.shields.io/badge/X-ErnieforDevs-A080F0"?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
  </div>

  <div align="center" style="line-height: 1;">
@@ -36,75 +39,34 @@ library_name: transformers
  </a>
  </div>

- # ERNIE-4.5-VL-28B-A3B-Thinking
+ # 🚀 **Introducing ERNIE-4.5-VL-28B-A3B-Thinking: A Breakthrough in Multimodal AI**

  ## Model Highlights

- Building upon the solid architecture of ERNIE-4.5-VL-28B-A3B, the enhanced **ERNIE-4.5-VL-28B-A3B-Thinking** represents a substantive advancement in large-scale multimodal learning. An additional 1T-token mid-training phase on large-scale, high-quality visual-linguistic reasoning data further enhanced the model’s representational depth and cross-modal semantic alignment, resulting in stronger visual-textual reasoning capability. ERNIE-4.5-VL-28B-A3B-Thinking also adopts large-scale multimodal reinforcement learning on verifiable tasks. It leverages the GSPO [1] and Icepop [2] strategies to stabilize MoE-based RL training, and incorporates a dynamic difficulty sampling mechanism that prioritizes samples conducive to improving training efficiency.
-
- In **ERNIE-4.5-VL-28B-A3B-Thinking**, we adopt a more explicit expression mechanism to enhance the user experience in triggering grounding and pointing capabilities. Additionally, we introduced think-with-images features such as image zooming and image search, which significantly improve the model’s ability to interact with its environment. These enhancements form a key foundation for building a multimodal agent.
-
- # Key Capabilities
- - **Visual Reasoning**: Strengthened by large-scale midtrain and RL optimization, the model demonstrates superior multi-step reasoning, diagram interpretation, and causal inference across complex visual tasks.
- - **STEM**: Excels in visual–scientific problem solving, integrating symbolic math, geometry, and experimental visualization for deeper analytical comprehension.
- - **Visual Grounding**: Enhanced grounding mechanisms, refined with community feedback, enable precise object–language alignment and contextual awareness in real-world visual scenes.
- - **Thinking with Images**: Extends beyond image comprehension to use image-related tools for reasoning, annotation, and manipulation—enabling the model to think through visual interfaces and act as an intelligent visual agent.
-
- ## Model Overview
-
- In a comprehensive evaluation across multiple vision-language benchmarks, ERNIE-4.5-VL-28B-A3B-Thinking demonstrates outstanding overall capability. In more than 78% of the tasks, its performance reaches or exceeds 90% of Gemini-2.5-Pro’s level, with several key tasks even surpassing Gemini-2.5-Pro. This highlights the model’s remarkable progress in multimodal understanding and reasoning.
+ Built upon the powerful ERNIE-4.5-VL-28B-A3B architecture, the newly upgraded **ERNIE-4.5-VL-28B-A3B-Thinking** achieves a remarkable leap forward in multimodal reasoning capabilities. 🧠✨ Through an extensive mid-training phase, the model absorbed a vast and highly diverse corpus of premium visual-language reasoning data. This massive-scale training process dramatically boosted the model's representation power while deepening the semantic alignment between visual and language modalities—unlocking unprecedented capabilities in nuanced visual-textual reasoning. 📊
+
+ The model leverages cutting-edge multimodal reinforcement learning techniques on verifiable tasks, integrating GSPO and IcePop strategies to stabilize MoE training combined with dynamic difficulty sampling for exceptional learning efficiency. ⚡ Responding to strong community demand, we've significantly strengthened the model's grounding performance with improved instruction-following capabilities, making visual grounding functions more accessible than ever. 🎯 Additionally, our innovative "Thinking with Images" feature, when paired with tools like image zooming and image search, dramatically elevates the model's ability to process fine-grained details and handle long-tail visual knowledge. 🔍🖼️
+
+ Together, these enhancements form a critical foundation for developing sophisticated multimodal agents, empowering developers and researchers to create next-generation AI applications that push the boundaries of what's possible in visual-language understanding. 🤖🌟
+
+ ![benchmark](./benchmark.jpg)
+
+ ## Key Capabilities
+
+ As a lightweight model that activates only **3B parameters** ⚡, **ERNIE-4.5-VL-28B-A3B-Thinking** closely matches the performance of the industry's top flagship models across various benchmarks. 🚀
+
+ - **Visual Reasoning** 🧠👁️: Bolstered by large-scale reinforcement learning, the model demonstrates exceptional multi-step reasoning, chart analysis, and causal reasoning capabilities in complex visual tasks! 📊✨
+ - **STEM Reasoning** 🔬📐: Leveraging its powerful visual abilities, the model achieves a leap in performance on STEM tasks like solving problems from photos, easily handling even complex questions! 🎯💡
+ - **Visual Grounding** 📍🎨: Features more precise grounding and flexible instruction execution, easily triggering grounding functions in complex industrial scenarios for a significant efficiency boost! ⚙️💪
+ - **Thinking with Images** 🤔🔍: The model thinks like a human, capable of freely zooming in and out of images to grasp every detail and uncover all information. 🖼️✨
+ - **Tool Utilization** 🛠️⚡: Empowered by robust tool-calling capabilities, the model can instantly use functions like image search to easily identify long-tail knowledge and achieve comprehensive information retrieval! 🔎📚
+ - **Video Understanding** 🎬🎥: The model possesses outstanding temporal awareness and event localization abilities, accurately identifying content changes across different time segments in a video, making video analysis smarter and more efficient! ⏱️🌟

  ## Quickstart

- ### Model Finetuning with ERNIEKit
-
- [ERNIEKit](https://github.com/PaddlePaddle/ERNIE) is a training toolkit based on PaddlePaddle, specifically designed for the ERNIE series of open-source large models. It provides comprehensive support for scenarios such as instruction fine-tuning (SFT, LoRA) and alignment training (DPO), ensuring optimal performance.
-
- Usage Examples:
-
- ```bash
- # Download model
- huggingface-cli download baidu/ERNIE-4.5-VL-28B-A3B-Thinking --local-dir baidu/ERNIE-4.5-VL-28B-A3B-Thinking
- # SFT
- erniekit train examples/configs/ERNIE-4.5-VL-28B-A3B-Thinking/sft/run_sft_lora_8k.yaml
- # SFT (Function Call)
- erniekit train examples/configs/ERNIE-4.5-VL-28B-A3B-Thinking/sft_function_call/run_sft_8k.yaml
- ```
-
- For more detailed examples, including SFT with LoRA, multi-GPU configurations, and advanced scripts, please refer to the examples folder within the [ERNIEKit](https://github.com/PaddlePaddle/ERNIE) repository.
-
- ### FastDeploy Inference
-
- ### vLLM inference
-
- Install vllm main branch
- ```bash
- pip install uv
- uv pip install -U vllm --pre --extra-index-url https://wheels.vllm.ai/nightly --extra-index-url https://download.pytorch.org/whl/cu129 --index-strategy unsafe-best-match
- ```
- Run vllm
- ```bash
- # 80G*1 GPU,If an error occurs, add the --gpu-memory-utilization 0.95 and try again
- vllm serve baidu/ERNIE-4.5-VL-28B-A3B-Thinking --trust-remote-code
- ```
- Run vllm using `reasoning-parser` and `tool-call-parser`
- ```bash
- # 80G*1 GPU,If an error occurs, add the --gpu-memory-utilization 0.95 and try again
- vllm serve baidu/ERNIE-4.5-VL-28B-A3B-Thinking --trust-remote-code \
-     --reasoning-parser ernie45 \
-     --tool-call-parser ernie45 \
-     --enable-auto-tool-choice
- ```
-
- ### Using `transformers` library
-
- Here is an example of how to use the transformers library for inference:
+ ### Using `transformers` Library
+
+ Here is an example of how to use the `transformers` library for inference:

  ```python
  import torch
@@ -119,24 +81,30 @@ model = AutoModelForCausalLM.from_pretrained(
  )

  processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
- processor.eval()
  model.add_image_preprocess(processor)

  messages = [
      {
          "role": "user",
          "content": [
-             {"type": "text", "text": "Describe the image."},
-             {"type": "image_url", "image_url": {"url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example1.jpg"}},
+             {
+                 "type": "text",
+                 "text": "What color clothes is the girl in the picture wearing?"
+             },
+             {
+                 "type": "image_url",
+                 "image_url": {
+                     "url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example1.jpg"
+                 }
+             },
          ]
      },
  ]

- text = processor.apply_chat_template(
+ text = processor.tokenizer.apply_chat_template(
      messages,
      tokenize=False,
      add_generation_prompt=True,
-     chat_template_kwargs={"options": {"thinking_mode": "close"}},
  )
  image_inputs, video_inputs = processor.process_vision_info(messages)
  inputs = processor(
@@ -153,19 +121,75 @@ inputs = inputs.to(device)
  generated_ids = model.generate(
      inputs=inputs['input_ids'].to(device),
      **inputs,
-     max_new_tokens=128,
+     max_new_tokens=1024,
      use_cache=False
  )
  output_text = processor.decode(generated_ids[0][len(inputs['input_ids'][0]):])
  print(output_text)
  ```

- ## Reference
-
- [1] Zheng C, Liu S, Li M, et al. Group sequence policy optimization[J]. arXiv preprint arXiv:2507.18071, 2025.
-
- [2] Team L, Shen A, Li B, et al. Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model[J]. arXiv preprint arXiv:2510.18855, 2025.
-
+ ### vLLM Inference
+
+ Install the vLLM main branch:
+
+ ```bash
+ pip install uv
+ uv pip install -U vllm --pre \
+     --extra-index-url https://wheels.vllm.ai/nightly \
+     --extra-index-url https://download.pytorch.org/whl/cu129 \
+     --index-strategy unsafe-best-match
+ ```
+
+ Run vLLM:
+
+ ```bash
+ # 1x 80G GPU; if an error occurs, add --gpu-memory-utilization 0.95 and try again
+ vllm serve baidu/ERNIE-4.5-VL-28B-A3B-Thinking --trust-remote-code
+ ```
+
+ Run vLLM using `reasoning-parser` and `tool-call-parser`:
+
+ ```bash
+ # 1x 80G GPU; if an error occurs, add --gpu-memory-utilization 0.95 and try again
+ vllm serve baidu/ERNIE-4.5-VL-28B-A3B-Thinking --trust-remote-code \
+     --reasoning-parser ernie45 \
+     --tool-call-parser ernie45 \
+     --enable-auto-tool-choice
+ ```
+
+ ### FastDeploy Inference
+
+ Quickly deploy services using FastDeploy as shown below. For more detailed usage, refer to the [FastDeploy GitHub Repository](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/get_started/ernie-4.5-vl-thinking.md).
+
+ **Note:** For single-card deployment, at least 80GB of GPU memory is required.
+
+ ```bash
+ fastdeploy serve --model baidu/ERNIE-4.5-VL-28B-A3B-Thinking \
+     --max-model-len 131072 \
+     --max-num-seqs 32 \
+     --port 8180 \
+     --quantization wint8 \
+     --reasoning-parser ernie-45-vl-thinking \
+     --tool-call-parser ernie-45-vl-thinking \
+     --mm-processor-kwargs '{"image_max_pixels": 12845056}'
+ ```
+
+ ### Finetuning with ERNIEKit
+
+ [ERNIEKit](https://github.com/PaddlePaddle/ERNIE) is a training toolkit based on PaddlePaddle, specifically designed for the ERNIE series of open-source large models. It provides comprehensive support for scenarios such as instruction fine-tuning (SFT, LoRA) and alignment training (DPO), ensuring optimal performance.
+
+ Usage Examples:
+
+ ```bash
+ # Download model
+ huggingface-cli download baidu/ERNIE-4.5-VL-28B-A3B-Thinking --local-dir baidu/ERNIE-4.5-VL-28B-A3B-Thinking
+ # SFT
+ erniekit train examples/configs/ERNIE-4.5-VL-28B-A3B-Thinking/sft/run_sft_lora_8k.yaml
+ # SFT (Function Call)
+ erniekit train examples/configs/ERNIE-4.5-VL-28B-A3B-Thinking/sft_function_call/run_sft_8k.yaml
+ ```
+
+ For more detailed examples, including SFT with LoRA, multi-GPU configurations, and advanced scripts, please refer to the examples folder within the [ERNIEKit](https://github.com/PaddlePaddle/ERNIE) repository.

  ## License

@@ -183,4 +207,4 @@ If you find ERNIE 4.5 useful or wish to use it in your projects, please kindly c
      primaryClass={cs.CL},
      howpublished={\url{https://ernie.baidu.com/blog/publication/ERNIE_Technical_Report.pdf}}
  }
- ```
+ ```
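The `vllm serve` and `fastdeploy serve` commands added in this README expose an OpenAI-compatible chat endpoint. A minimal client sketch, assuming the vLLM server above is running locally on its default port 8000 (point `base_url` at port 8180 instead if you used the FastDeploy command) and that the `openai` Python package is installed:

```python
# Minimal client sketch for the OpenAI-compatible server started with
# `vllm serve baidu/ERNIE-4.5-VL-28B-A3B-Thinking ...` (assumed host/port:
# localhost:8000; adjust base_url for other deployments).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="baidu/ERNIE-4.5-VL-28B-A3B-Thinking",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the image."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example1.jpg"
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

The `api_key` value is a placeholder; vLLM only enforces a key when it is started with `--api-key`.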
added_tokens.json ADDED
@@ -0,0 +1 @@
+ {"<|IMAGE_PLACEHOLDER|>": 100295, "<|AUDIO_PLACEHOLDER|>": 100296, "<|LOC_0|>": 100297, "<|LOC_1|>": 100298, "<|LOC_2|>": 100299, "<|LOC_3|>": 100300, "<|LOC_4|>": 100301, "<|LOC_5|>": 100302, "<|LOC_6|>": 100303, "<|LOC_7|>": 100304, "<|LOC_8|>": 100305, "<|LOC_9|>": 100306, "<|LOC_10|>": 100307, "<|LOC_11|>": 100308, "<|LOC_12|>": 100309, "<|LOC_13|>": 100310, "<|LOC_14|>": 100311, "<|LOC_15|>": 100312, "<|LOC_16|>": 100313, "<|LOC_17|>": 100314, "<|LOC_18|>": 100315, "<|LOC_19|>": 100316, "<|LOC_20|>": 100317, "<|LOC_21|>": 100318, "<|LOC_22|>": 100319, "<|LOC_23|>": 100320, "<|LOC_24|>": 100321, "<|LOC_25|>": 100322, "<|LOC_26|>": 100323, "<|LOC_27|>": 100324, "<|LOC_28|>": 100325, "<|LOC_29|>": 100326, "<|LOC_30|>": 100327, "<|LOC_31|>": 100328, "<|LOC_32|>": 100329, "<|LOC_33|>": 100330, "<|LOC_34|>": 100331, "<|LOC_35|>": 100332, "<|LOC_36|>": 100333, "<|LOC_37|>": 100334, "<|LOC_38|>": 100335, "<|LOC_39|>": 100336, "<|LOC_40|>": 100337, "<|LOC_41|>": 100338, "<|LOC_42|>": 100339, "<|LOC_43|>": 100340, "<|LOC_44|>": 100341, "<|LOC_45|>": 100342, "<|LOC_46|>": 100343, "<|LOC_47|>": 100344, "<|LOC_48|>": 100345, "<|LOC_49|>": 100346, "<|LOC_50|>": 100347, "<|LOC_51|>": 100348, "<|LOC_52|>": 100349, "<|LOC_53|>": 100350, "<|LOC_54|>": 100351, "<|LOC_55|>": 100352, "<|LOC_56|>": 100353, "<|LOC_57|>": 100354, "<|LOC_58|>": 100355, "<|LOC_59|>": 100356, "<|LOC_60|>": 100357, "<|LOC_61|>": 100358, "<|LOC_62|>": 100359, "<|LOC_63|>": 100360, "<|LOC_64|>": 100361, "<|LOC_65|>": 100362, "<|LOC_66|>": 100363, "<|LOC_67|>": 100364, "<|LOC_68|>": 100365, "<|LOC_69|>": 100366, "<|LOC_70|>": 100367, "<|LOC_71|>": 100368, "<|LOC_72|>": 100369, "<|LOC_73|>": 100370, "<|LOC_74|>": 100371, "<|LOC_75|>": 100372, "<|LOC_76|>": 100373, "<|LOC_77|>": 100374, "<|LOC_78|>": 100375, "<|LOC_79|>": 100376, "<|LOC_80|>": 100377, "<|LOC_81|>": 100378, "<|LOC_82|>": 100379, "<|LOC_83|>": 100380, "<|LOC_84|>": 100381, "<|LOC_85|>": 100382, "<|LOC_86|>": 100383, "<|LOC_87|>": 100384, "<|LOC_88|>": 100385, "<|LOC_89|>": 100386, "<|LOC_90|>": 100387, "<|LOC_91|>": 100388, "<|LOC_92|>": 100389, "<|LOC_93|>": 100390, "<|LOC_94|>": 100391, "<|LOC_95|>": 100392, "<|LOC_96|>": 100393, "<|LOC_97|>": 100394, "<|LOC_98|>": 100395, "<|LOC_99|>": 100396, "<|LOC_100|>": 100397, "<|LOC_101|>": 100398, "<|LOC_102|>": 100399, "<|LOC_103|>": 100400, "<|LOC_104|>": 100401, "<|LOC_105|>": 100402, "<|LOC_106|>": 100403, "<|LOC_107|>": 100404, "<|LOC_108|>": 100405, "<|LOC_109|>": 100406, "<|LOC_110|>": 100407, "<|LOC_111|>": 100408, "<|LOC_112|>": 100409, "<|LOC_113|>": 100410, "<|LOC_114|>": 100411, "<|LOC_115|>": 100412, "<|LOC_116|>": 100413, "<|LOC_117|>": 100414, "<|LOC_118|>": 100415, "<|LOC_119|>": 100416, "<|LOC_120|>": 100417, "<|LOC_121|>": 100418, "<|LOC_122|>": 100419, "<|LOC_123|>": 100420, "<|LOC_124|>": 100421, "<|LOC_125|>": 100422, "<|LOC_126|>": 100423, "<|LOC_127|>": 100424, "<|LOC_128|>": 100425, "<|LOC_129|>": 100426, "<|LOC_130|>": 100427, "<|LOC_131|>": 100428, "<|LOC_132|>": 100429, "<|LOC_133|>": 100430, "<|LOC_134|>": 100431, "<|LOC_135|>": 100432, "<|LOC_136|>": 100433, "<|LOC_137|>": 100434, "<|LOC_138|>": 100435, "<|LOC_139|>": 100436, "<|LOC_140|>": 100437, "<|LOC_141|>": 100438, "<|LOC_142|>": 100439, "<|LOC_143|>": 100440, "<|LOC_144|>": 100441, "<|LOC_145|>": 100442, "<|LOC_146|>": 100443, "<|LOC_147|>": 100444, "<|LOC_148|>": 100445, "<|LOC_149|>": 100446, "<|LOC_150|>": 100447, "<|LOC_151|>": 100448, "<|LOC_152|>": 100449, "<|LOC_153|>": 100450, "<|LOC_154|>": 100451, "<|LOC_155|>": 100452, 
"<|LOC_156|>": 100453, "<|LOC_157|>": 100454, "<|LOC_158|>": 100455, "<|LOC_159|>": 100456, "<|LOC_160|>": 100457, "<|LOC_161|>": 100458, "<|LOC_162|>": 100459, "<|LOC_163|>": 100460, "<|LOC_164|>": 100461, "<|LOC_165|>": 100462, "<|LOC_166|>": 100463, "<|LOC_167|>": 100464, "<|LOC_168|>": 100465, "<|LOC_169|>": 100466, "<|LOC_170|>": 100467, "<|LOC_171|>": 100468, "<|LOC_172|>": 100469, "<|LOC_173|>": 100470, "<|LOC_174|>": 100471, "<|LOC_175|>": 100472, "<|LOC_176|>": 100473, "<|LOC_177|>": 100474, "<|LOC_178|>": 100475, "<|LOC_179|>": 100476, "<|LOC_180|>": 100477, "<|LOC_181|>": 100478, "<|LOC_182|>": 100479, "<|LOC_183|>": 100480, "<|LOC_184|>": 100481, "<|LOC_185|>": 100482, "<|LOC_186|>": 100483, "<|LOC_187|>": 100484, "<|LOC_188|>": 100485, "<|LOC_189|>": 100486, "<|LOC_190|>": 100487, "<|LOC_191|>": 100488, "<|LOC_192|>": 100489, "<|LOC_193|>": 100490, "<|LOC_194|>": 100491, "<|LOC_195|>": 100492, "<|LOC_196|>": 100493, "<|LOC_197|>": 100494, "<|LOC_198|>": 100495, "<|LOC_199|>": 100496, "<|LOC_200|>": 100497, "<|LOC_201|>": 100498, "<|LOC_202|>": 100499, "<|LOC_203|>": 100500, "<|LOC_204|>": 100501, "<|LOC_205|>": 100502, "<|LOC_206|>": 100503, "<|LOC_207|>": 100504, "<|LOC_208|>": 100505, "<|LOC_209|>": 100506, "<|LOC_210|>": 100507, "<|LOC_211|>": 100508, "<|LOC_212|>": 100509, "<|LOC_213|>": 100510, "<|LOC_214|>": 100511, "<|LOC_215|>": 100512, "<|LOC_216|>": 100513, "<|LOC_217|>": 100514, "<|LOC_218|>": 100515, "<|LOC_219|>": 100516, "<|LOC_220|>": 100517, "<|LOC_221|>": 100518, "<|LOC_222|>": 100519, "<|LOC_223|>": 100520, "<|LOC_224|>": 100521, "<|LOC_225|>": 100522, "<|LOC_226|>": 100523, "<|LOC_227|>": 100524, "<|LOC_228|>": 100525, "<|LOC_229|>": 100526, "<|LOC_230|>": 100527, "<|LOC_231|>": 100528, "<|LOC_232|>": 100529, "<|LOC_233|>": 100530, "<|LOC_234|>": 100531, "<|LOC_235|>": 100532, "<|LOC_236|>": 100533, "<|LOC_237|>": 100534, "<|LOC_238|>": 100535, "<|LOC_239|>": 100536, "<|LOC_240|>": 100537, "<|LOC_241|>": 100538, "<|LOC_242|>": 100539, "<|LOC_243|>": 100540, "<|LOC_244|>": 100541, "<|LOC_245|>": 100542, "<|LOC_246|>": 100543, "<|LOC_247|>": 100544, "<|LOC_248|>": 100545, "<|LOC_249|>": 100546, "<|LOC_250|>": 100547, "<|LOC_251|>": 100548, "<|LOC_252|>": 100549, "<|LOC_253|>": 100550, "<|LOC_254|>": 100551, "<|LOC_255|>": 100552, "<|LOC_256|>": 100553, "<|LOC_257|>": 100554, "<|LOC_258|>": 100555, "<|LOC_259|>": 100556, "<|LOC_260|>": 100557, "<|LOC_261|>": 100558, "<|LOC_262|>": 100559, "<|LOC_263|>": 100560, "<|LOC_264|>": 100561, "<|LOC_265|>": 100562, "<|LOC_266|>": 100563, "<|LOC_267|>": 100564, "<|LOC_268|>": 100565, "<|LOC_269|>": 100566, "<|LOC_270|>": 100567, "<|LOC_271|>": 100568, "<|LOC_272|>": 100569, "<|LOC_273|>": 100570, "<|LOC_274|>": 100571, "<|LOC_275|>": 100572, "<|LOC_276|>": 100573, "<|LOC_277|>": 100574, "<|LOC_278|>": 100575, "<|LOC_279|>": 100576, "<|LOC_280|>": 100577, "<|LOC_281|>": 100578, "<|LOC_282|>": 100579, "<|LOC_283|>": 100580, "<|LOC_284|>": 100581, "<|LOC_285|>": 100582, "<|LOC_286|>": 100583, "<|LOC_287|>": 100584, "<|LOC_288|>": 100585, "<|LOC_289|>": 100586, "<|LOC_290|>": 100587, "<|LOC_291|>": 100588, "<|LOC_292|>": 100589, "<|LOC_293|>": 100590, "<|LOC_294|>": 100591, "<|LOC_295|>": 100592, "<|LOC_296|>": 100593, "<|LOC_297|>": 100594, "<|LOC_298|>": 100595, "<|LOC_299|>": 100596, "<|LOC_300|>": 100597, "<|LOC_301|>": 100598, "<|LOC_302|>": 100599, "<|LOC_303|>": 100600, "<|LOC_304|>": 100601, "<|LOC_305|>": 100602, "<|LOC_306|>": 100603, "<|LOC_307|>": 100604, "<|LOC_308|>": 100605, "<|LOC_309|>": 100606, 
"<|LOC_310|>": 100607, "<|LOC_311|>": 100608, "<|LOC_312|>": 100609, "<|LOC_313|>": 100610, "<|LOC_314|>": 100611, "<|LOC_315|>": 100612, "<|LOC_316|>": 100613, "<|LOC_317|>": 100614, "<|LOC_318|>": 100615, "<|LOC_319|>": 100616, "<|LOC_320|>": 100617, "<|LOC_321|>": 100618, "<|LOC_322|>": 100619, "<|LOC_323|>": 100620, "<|LOC_324|>": 100621, "<|LOC_325|>": 100622, "<|LOC_326|>": 100623, "<|LOC_327|>": 100624, "<|LOC_328|>": 100625, "<|LOC_329|>": 100626, "<|LOC_330|>": 100627, "<|LOC_331|>": 100628, "<|LOC_332|>": 100629, "<|LOC_333|>": 100630, "<|LOC_334|>": 100631, "<|LOC_335|>": 100632, "<|LOC_336|>": 100633, "<|LOC_337|>": 100634, "<|LOC_338|>": 100635, "<|LOC_339|>": 100636, "<|LOC_340|>": 100637, "<|LOC_341|>": 100638, "<|LOC_342|>": 100639, "<|LOC_343|>": 100640, "<|LOC_344|>": 100641, "<|LOC_345|>": 100642, "<|LOC_346|>": 100643, "<|LOC_347|>": 100644, "<|LOC_348|>": 100645, "<|LOC_349|>": 100646, "<|LOC_350|>": 100647, "<|LOC_351|>": 100648, "<|LOC_352|>": 100649, "<|LOC_353|>": 100650, "<|LOC_354|>": 100651, "<|LOC_355|>": 100652, "<|LOC_356|>": 100653, "<|LOC_357|>": 100654, "<|LOC_358|>": 100655, "<|LOC_359|>": 100656, "<|LOC_360|>": 100657, "<|LOC_361|>": 100658, "<|LOC_362|>": 100659, "<|LOC_363|>": 100660, "<|LOC_364|>": 100661, "<|LOC_365|>": 100662, "<|LOC_366|>": 100663, "<|LOC_367|>": 100664, "<|LOC_368|>": 100665, "<|LOC_369|>": 100666, "<|LOC_370|>": 100667, "<|LOC_371|>": 100668, "<|LOC_372|>": 100669, "<|LOC_373|>": 100670, "<|LOC_374|>": 100671, "<|LOC_375|>": 100672, "<|LOC_376|>": 100673, "<|LOC_377|>": 100674, "<|LOC_378|>": 100675, "<|LOC_379|>": 100676, "<|LOC_380|>": 100677, "<|LOC_381|>": 100678, "<|LOC_382|>": 100679, "<|LOC_383|>": 100680, "<|LOC_384|>": 100681, "<|LOC_385|>": 100682, "<|LOC_386|>": 100683, "<|LOC_387|>": 100684, "<|LOC_388|>": 100685, "<|LOC_389|>": 100686, "<|LOC_390|>": 100687, "<|LOC_391|>": 100688, "<|LOC_392|>": 100689, "<|LOC_393|>": 100690, "<|LOC_394|>": 100691, "<|LOC_395|>": 100692, "<|LOC_396|>": 100693, "<|LOC_397|>": 100694, "<|LOC_398|>": 100695, "<|LOC_399|>": 100696, "<|LOC_400|>": 100697, "<|LOC_401|>": 100698, "<|LOC_402|>": 100699, "<|LOC_403|>": 100700, "<|LOC_404|>": 100701, "<|LOC_405|>": 100702, "<|LOC_406|>": 100703, "<|LOC_407|>": 100704, "<|LOC_408|>": 100705, "<|LOC_409|>": 100706, "<|LOC_410|>": 100707, "<|LOC_411|>": 100708, "<|LOC_412|>": 100709, "<|LOC_413|>": 100710, "<|LOC_414|>": 100711, "<|LOC_415|>": 100712, "<|LOC_416|>": 100713, "<|LOC_417|>": 100714, "<|LOC_418|>": 100715, "<|LOC_419|>": 100716, "<|LOC_420|>": 100717, "<|LOC_421|>": 100718, "<|LOC_422|>": 100719, "<|LOC_423|>": 100720, "<|LOC_424|>": 100721, "<|LOC_425|>": 100722, "<|LOC_426|>": 100723, "<|LOC_427|>": 100724, "<|LOC_428|>": 100725, "<|LOC_429|>": 100726, "<|LOC_430|>": 100727, "<|LOC_431|>": 100728, "<|LOC_432|>": 100729, "<|LOC_433|>": 100730, "<|LOC_434|>": 100731, "<|LOC_435|>": 100732, "<|LOC_436|>": 100733, "<|LOC_437|>": 100734, "<|LOC_438|>": 100735, "<|LOC_439|>": 100736, "<|LOC_440|>": 100737, "<|LOC_441|>": 100738, "<|LOC_442|>": 100739, "<|LOC_443|>": 100740, "<|LOC_444|>": 100741, "<|LOC_445|>": 100742, "<|LOC_446|>": 100743, "<|LOC_447|>": 100744, "<|LOC_448|>": 100745, "<|LOC_449|>": 100746, "<|LOC_450|>": 100747, "<|LOC_451|>": 100748, "<|LOC_452|>": 100749, "<|LOC_453|>": 100750, "<|LOC_454|>": 100751, "<|LOC_455|>": 100752, "<|LOC_456|>": 100753, "<|LOC_457|>": 100754, "<|LOC_458|>": 100755, "<|LOC_459|>": 100756, "<|LOC_460|>": 100757, "<|LOC_461|>": 100758, "<|LOC_462|>": 100759, "<|LOC_463|>": 100760, 
"<|LOC_464|>": 100761, "<|LOC_465|>": 100762, "<|LOC_466|>": 100763, "<|LOC_467|>": 100764, "<|LOC_468|>": 100765, "<|LOC_469|>": 100766, "<|LOC_470|>": 100767, "<|LOC_471|>": 100768, "<|LOC_472|>": 100769, "<|LOC_473|>": 100770, "<|LOC_474|>": 100771, "<|LOC_475|>": 100772, "<|LOC_476|>": 100773, "<|LOC_477|>": 100774, "<|LOC_478|>": 100775, "<|LOC_479|>": 100776, "<|LOC_480|>": 100777, "<|LOC_481|>": 100778, "<|LOC_482|>": 100779, "<|LOC_483|>": 100780, "<|LOC_484|>": 100781, "<|LOC_485|>": 100782, "<|LOC_486|>": 100783, "<|LOC_487|>": 100784, "<|LOC_488|>": 100785, "<|LOC_489|>": 100786, "<|LOC_490|>": 100787, "<|LOC_491|>": 100788, "<|LOC_492|>": 100789, "<|LOC_493|>": 100790, "<|LOC_494|>": 100791, "<|LOC_495|>": 100792, "<|LOC_496|>": 100793, "<|LOC_497|>": 100794, "<|LOC_498|>": 100795, "<|LOC_499|>": 100796, "<|LOC_500|>": 100797, "<|LOC_501|>": 100798, "<|LOC_502|>": 100799, "<|LOC_503|>": 100800, "<|LOC_504|>": 100801, "<|LOC_505|>": 100802, "<|LOC_506|>": 100803, "<|LOC_507|>": 100804, "<|LOC_508|>": 100805, "<|LOC_509|>": 100806, "<|LOC_510|>": 100807, "<|LOC_511|>": 100808, "<|LOC_512|>": 100809, "<|LOC_513|>": 100810, "<|LOC_514|>": 100811, "<|LOC_515|>": 100812, "<|LOC_516|>": 100813, "<|LOC_517|>": 100814, "<|LOC_518|>": 100815, "<|LOC_519|>": 100816, "<|LOC_520|>": 100817, "<|LOC_521|>": 100818, "<|LOC_522|>": 100819, "<|LOC_523|>": 100820, "<|LOC_524|>": 100821, "<|LOC_525|>": 100822, "<|LOC_526|>": 100823, "<|LOC_527|>": 100824, "<|LOC_528|>": 100825, "<|LOC_529|>": 100826, "<|LOC_530|>": 100827, "<|LOC_531|>": 100828, "<|LOC_532|>": 100829, "<|LOC_533|>": 100830, "<|LOC_534|>": 100831, "<|LOC_535|>": 100832, "<|LOC_536|>": 100833, "<|LOC_537|>": 100834, "<|LOC_538|>": 100835, "<|LOC_539|>": 100836, "<|LOC_540|>": 100837, "<|LOC_541|>": 100838, "<|LOC_542|>": 100839, "<|LOC_543|>": 100840, "<|LOC_544|>": 100841, "<|LOC_545|>": 100842, "<|LOC_546|>": 100843, "<|LOC_547|>": 100844, "<|LOC_548|>": 100845, "<|LOC_549|>": 100846, "<|LOC_550|>": 100847, "<|LOC_551|>": 100848, "<|LOC_552|>": 100849, "<|LOC_553|>": 100850, "<|LOC_554|>": 100851, "<|LOC_555|>": 100852, "<|LOC_556|>": 100853, "<|LOC_557|>": 100854, "<|LOC_558|>": 100855, "<|LOC_559|>": 100856, "<|LOC_560|>": 100857, "<|LOC_561|>": 100858, "<|LOC_562|>": 100859, "<|LOC_563|>": 100860, "<|LOC_564|>": 100861, "<|LOC_565|>": 100862, "<|LOC_566|>": 100863, "<|LOC_567|>": 100864, "<|LOC_568|>": 100865, "<|LOC_569|>": 100866, "<|LOC_570|>": 100867, "<|LOC_571|>": 100868, "<|LOC_572|>": 100869, "<|LOC_573|>": 100870, "<|LOC_574|>": 100871, "<|LOC_575|>": 100872, "<|LOC_576|>": 100873, "<|LOC_577|>": 100874, "<|LOC_578|>": 100875, "<|LOC_579|>": 100876, "<|LOC_580|>": 100877, "<|LOC_581|>": 100878, "<|LOC_582|>": 100879, "<|LOC_583|>": 100880, "<|LOC_584|>": 100881, "<|LOC_585|>": 100882, "<|LOC_586|>": 100883, "<|LOC_587|>": 100884, "<|LOC_588|>": 100885, "<|LOC_589|>": 100886, "<|LOC_590|>": 100887, "<|LOC_591|>": 100888, "<|LOC_592|>": 100889, "<|LOC_593|>": 100890, "<|LOC_594|>": 100891, "<|LOC_595|>": 100892, "<|LOC_596|>": 100893, "<|LOC_597|>": 100894, "<|LOC_598|>": 100895, "<|LOC_599|>": 100896, "<|LOC_600|>": 100897, "<|LOC_601|>": 100898, "<|LOC_602|>": 100899, "<|LOC_603|>": 100900, "<|LOC_604|>": 100901, "<|LOC_605|>": 100902, "<|LOC_606|>": 100903, "<|LOC_607|>": 100904, "<|LOC_608|>": 100905, "<|LOC_609|>": 100906, "<|LOC_610|>": 100907, "<|LOC_611|>": 100908, "<|LOC_612|>": 100909, "<|LOC_613|>": 100910, "<|LOC_614|>": 100911, "<|LOC_615|>": 100912, "<|LOC_616|>": 100913, "<|LOC_617|>": 100914, 
"<|LOC_618|>": 100915, "<|LOC_619|>": 100916, "<|LOC_620|>": 100917, "<|LOC_621|>": 100918, "<|LOC_622|>": 100919, "<|LOC_623|>": 100920, "<|LOC_624|>": 100921, "<|LOC_625|>": 100922, "<|LOC_626|>": 100923, "<|LOC_627|>": 100924, "<|LOC_628|>": 100925, "<|LOC_629|>": 100926, "<|LOC_630|>": 100927, "<|LOC_631|>": 100928, "<|LOC_632|>": 100929, "<|LOC_633|>": 100930, "<|LOC_634|>": 100931, "<|LOC_635|>": 100932, "<|LOC_636|>": 100933, "<|LOC_637|>": 100934, "<|LOC_638|>": 100935, "<|LOC_639|>": 100936, "<|LOC_640|>": 100937, "<|LOC_641|>": 100938, "<|LOC_642|>": 100939, "<|LOC_643|>": 100940, "<|LOC_644|>": 100941, "<|LOC_645|>": 100942, "<|LOC_646|>": 100943, "<|LOC_647|>": 100944, "<|LOC_648|>": 100945, "<|LOC_649|>": 100946, "<|LOC_650|>": 100947, "<|LOC_651|>": 100948, "<|LOC_652|>": 100949, "<|LOC_653|>": 100950, "<|LOC_654|>": 100951, "<|LOC_655|>": 100952, "<|LOC_656|>": 100953, "<|LOC_657|>": 100954, "<|LOC_658|>": 100955, "<|LOC_659|>": 100956, "<|LOC_660|>": 100957, "<|LOC_661|>": 100958, "<|LOC_662|>": 100959, "<|LOC_663|>": 100960, "<|LOC_664|>": 100961, "<|LOC_665|>": 100962, "<|LOC_666|>": 100963, "<|LOC_667|>": 100964, "<|LOC_668|>": 100965, "<|LOC_669|>": 100966, "<|LOC_670|>": 100967, "<|LOC_671|>": 100968, "<|LOC_672|>": 100969, "<|LOC_673|>": 100970, "<|LOC_674|>": 100971, "<|LOC_675|>": 100972, "<|LOC_676|>": 100973, "<|LOC_677|>": 100974, "<|LOC_678|>": 100975, "<|LOC_679|>": 100976, "<|LOC_680|>": 100977, "<|LOC_681|>": 100978, "<|LOC_682|>": 100979, "<|LOC_683|>": 100980, "<|LOC_684|>": 100981, "<|LOC_685|>": 100982, "<|LOC_686|>": 100983, "<|LOC_687|>": 100984, "<|LOC_688|>": 100985, "<|LOC_689|>": 100986, "<|LOC_690|>": 100987, "<|LOC_691|>": 100988, "<|LOC_692|>": 100989, "<|LOC_693|>": 100990, "<|LOC_694|>": 100991, "<|LOC_695|>": 100992, "<|LOC_696|>": 100993, "<|LOC_697|>": 100994, "<|LOC_698|>": 100995, "<|LOC_699|>": 100996, "<|LOC_700|>": 100997, "<|LOC_701|>": 100998, "<|LOC_702|>": 100999, "<|LOC_703|>": 101000, "<|LOC_704|>": 101001, "<|LOC_705|>": 101002, "<|LOC_706|>": 101003, "<|LOC_707|>": 101004, "<|LOC_708|>": 101005, "<|LOC_709|>": 101006, "<|LOC_710|>": 101007, "<|LOC_711|>": 101008, "<|LOC_712|>": 101009, "<|LOC_713|>": 101010, "<|LOC_714|>": 101011, "<|LOC_715|>": 101012, "<|LOC_716|>": 101013, "<|LOC_717|>": 101014, "<|LOC_718|>": 101015, "<|LOC_719|>": 101016, "<|LOC_720|>": 101017, "<|LOC_721|>": 101018, "<|LOC_722|>": 101019, "<|LOC_723|>": 101020, "<|LOC_724|>": 101021, "<|LOC_725|>": 101022, "<|LOC_726|>": 101023, "<|LOC_727|>": 101024, "<|LOC_728|>": 101025, "<|LOC_729|>": 101026, "<|LOC_730|>": 101027, "<|LOC_731|>": 101028, "<|LOC_732|>": 101029, "<|LOC_733|>": 101030, "<|LOC_734|>": 101031, "<|LOC_735|>": 101032, "<|LOC_736|>": 101033, "<|LOC_737|>": 101034, "<|LOC_738|>": 101035, "<|LOC_739|>": 101036, "<|LOC_740|>": 101037, "<|LOC_741|>": 101038, "<|LOC_742|>": 101039, "<|LOC_743|>": 101040, "<|LOC_744|>": 101041, "<|LOC_745|>": 101042, "<|LOC_746|>": 101043, "<|LOC_747|>": 101044, "<|LOC_748|>": 101045, "<|LOC_749|>": 101046, "<|LOC_750|>": 101047, "<|LOC_751|>": 101048, "<|LOC_752|>": 101049, "<|LOC_753|>": 101050, "<|LOC_754|>": 101051, "<|LOC_755|>": 101052, "<|LOC_756|>": 101053, "<|LOC_757|>": 101054, "<|LOC_758|>": 101055, "<|LOC_759|>": 101056, "<|LOC_760|>": 101057, "<|LOC_761|>": 101058, "<|LOC_762|>": 101059, "<|LOC_763|>": 101060, "<|LOC_764|>": 101061, "<|LOC_765|>": 101062, "<|LOC_766|>": 101063, "<|LOC_767|>": 101064, "<|LOC_768|>": 101065, "<|LOC_769|>": 101066, "<|LOC_770|>": 101067, "<|LOC_771|>": 101068, 
"<|LOC_772|>": 101069, "<|LOC_773|>": 101070, "<|LOC_774|>": 101071, "<|LOC_775|>": 101072, "<|LOC_776|>": 101073, "<|LOC_777|>": 101074, "<|LOC_778|>": 101075, "<|LOC_779|>": 101076, "<|LOC_780|>": 101077, "<|LOC_781|>": 101078, "<|LOC_782|>": 101079, "<|LOC_783|>": 101080, "<|LOC_784|>": 101081, "<|LOC_785|>": 101082, "<|LOC_786|>": 101083, "<|LOC_787|>": 101084, "<|LOC_788|>": 101085, "<|LOC_789|>": 101086, "<|LOC_790|>": 101087, "<|LOC_791|>": 101088, "<|LOC_792|>": 101089, "<|LOC_793|>": 101090, "<|LOC_794|>": 101091, "<|LOC_795|>": 101092, "<|LOC_796|>": 101093, "<|LOC_797|>": 101094, "<|LOC_798|>": 101095, "<|LOC_799|>": 101096, "<|LOC_800|>": 101097, "<|LOC_801|>": 101098, "<|LOC_802|>": 101099, "<|LOC_803|>": 101100, "<|LOC_804|>": 101101, "<|LOC_805|>": 101102, "<|LOC_806|>": 101103, "<|LOC_807|>": 101104, "<|LOC_808|>": 101105, "<|LOC_809|>": 101106, "<|LOC_810|>": 101107, "<|LOC_811|>": 101108, "<|LOC_812|>": 101109, "<|LOC_813|>": 101110, "<|LOC_814|>": 101111, "<|LOC_815|>": 101112, "<|LOC_816|>": 101113, "<|LOC_817|>": 101114, "<|LOC_818|>": 101115, "<|LOC_819|>": 101116, "<|LOC_820|>": 101117, "<|LOC_821|>": 101118, "<|LOC_822|>": 101119, "<|LOC_823|>": 101120, "<|LOC_824|>": 101121, "<|LOC_825|>": 101122, "<|LOC_826|>": 101123, "<|LOC_827|>": 101124, "<|LOC_828|>": 101125, "<|LOC_829|>": 101126, "<|LOC_830|>": 101127, "<|LOC_831|>": 101128, "<|LOC_832|>": 101129, "<|LOC_833|>": 101130, "<|LOC_834|>": 101131, "<|LOC_835|>": 101132, "<|LOC_836|>": 101133, "<|LOC_837|>": 101134, "<|LOC_838|>": 101135, "<|LOC_839|>": 101136, "<|LOC_840|>": 101137, "<|LOC_841|>": 101138, "<|LOC_842|>": 101139, "<|LOC_843|>": 101140, "<|LOC_844|>": 101141, "<|LOC_845|>": 101142, "<|LOC_846|>": 101143, "<|LOC_847|>": 101144, "<|LOC_848|>": 101145, "<|LOC_849|>": 101146, "<|LOC_850|>": 101147, "<|LOC_851|>": 101148, "<|LOC_852|>": 101149, "<|LOC_853|>": 101150, "<|LOC_854|>": 101151, "<|LOC_855|>": 101152, "<|LOC_856|>": 101153, "<|LOC_857|>": 101154, "<|LOC_858|>": 101155, "<|LOC_859|>": 101156, "<|LOC_860|>": 101157, "<|LOC_861|>": 101158, "<|LOC_862|>": 101159, "<|LOC_863|>": 101160, "<|LOC_864|>": 101161, "<|LOC_865|>": 101162, "<|LOC_866|>": 101163, "<|LOC_867|>": 101164, "<|LOC_868|>": 101165, "<|LOC_869|>": 101166, "<|LOC_870|>": 101167, "<|LOC_871|>": 101168, "<|LOC_872|>": 101169, "<|LOC_873|>": 101170, "<|LOC_874|>": 101171, "<|LOC_875|>": 101172, "<|LOC_876|>": 101173, "<|LOC_877|>": 101174, "<|LOC_878|>": 101175, "<|LOC_879|>": 101176, "<|LOC_880|>": 101177, "<|LOC_881|>": 101178, "<|LOC_882|>": 101179, "<|LOC_883|>": 101180, "<|LOC_884|>": 101181, "<|LOC_885|>": 101182, "<|LOC_886|>": 101183, "<|LOC_887|>": 101184, "<|LOC_888|>": 101185, "<|LOC_889|>": 101186, "<|LOC_890|>": 101187, "<|LOC_891|>": 101188, "<|LOC_892|>": 101189, "<|LOC_893|>": 101190, "<|LOC_894|>": 101191, "<|LOC_895|>": 101192, "<|LOC_896|>": 101193, "<|LOC_897|>": 101194, "<|LOC_898|>": 101195, "<|LOC_899|>": 101196, "<|LOC_900|>": 101197, "<|LOC_901|>": 101198, "<|LOC_902|>": 101199, "<|LOC_903|>": 101200, "<|LOC_904|>": 101201, "<|LOC_905|>": 101202, "<|LOC_906|>": 101203, "<|LOC_907|>": 101204, "<|LOC_908|>": 101205, "<|LOC_909|>": 101206, "<|LOC_910|>": 101207, "<|LOC_911|>": 101208, "<|LOC_912|>": 101209, "<|LOC_913|>": 101210, "<|LOC_914|>": 101211, "<|LOC_915|>": 101212, "<|LOC_916|>": 101213, "<|LOC_917|>": 101214, "<|LOC_918|>": 101215, "<|LOC_919|>": 101216, "<|LOC_920|>": 101217, "<|LOC_921|>": 101218, "<|LOC_922|>": 101219, "<|LOC_923|>": 101220, "<|LOC_924|>": 101221, "<|LOC_925|>": 101222, 
"<|LOC_926|>": 101223, "<|LOC_927|>": 101224, "<|LOC_928|>": 101225, "<|LOC_929|>": 101226, "<|LOC_930|>": 101227, "<|LOC_931|>": 101228, "<|LOC_932|>": 101229, "<|LOC_933|>": 101230, "<|LOC_934|>": 101231, "<|LOC_935|>": 101232, "<|LOC_936|>": 101233, "<|LOC_937|>": 101234, "<|LOC_938|>": 101235, "<|LOC_939|>": 101236, "<|LOC_940|>": 101237, "<|LOC_941|>": 101238, "<|LOC_942|>": 101239, "<|LOC_943|>": 101240, "<|LOC_944|>": 101241, "<|LOC_945|>": 101242, "<|LOC_946|>": 101243, "<|LOC_947|>": 101244, "<|LOC_948|>": 101245, "<|LOC_949|>": 101246, "<|LOC_950|>": 101247, "<|LOC_951|>": 101248, "<|LOC_952|>": 101249, "<|LOC_953|>": 101250, "<|LOC_954|>": 101251, "<|LOC_955|>": 101252, "<|LOC_956|>": 101253, "<|LOC_957|>": 101254, "<|LOC_958|>": 101255, "<|LOC_959|>": 101256, "<|LOC_960|>": 101257, "<|LOC_961|>": 101258, "<|LOC_962|>": 101259, "<|LOC_963|>": 101260, "<|LOC_964|>": 101261, "<|LOC_965|>": 101262, "<|LOC_966|>": 101263, "<|LOC_967|>": 101264, "<|LOC_968|>": 101265, "<|LOC_969|>": 101266, "<|LOC_970|>": 101267, "<|LOC_971|>": 101268, "<|LOC_972|>": 101269, "<|LOC_973|>": 101270, "<|LOC_974|>": 101271, "<|LOC_975|>": 101272, "<|LOC_976|>": 101273, "<|LOC_977|>": 101274, "<|LOC_978|>": 101275, "<|LOC_979|>": 101276, "<|LOC_980|>": 101277, "<|LOC_981|>": 101278, "<|LOC_982|>": 101279, "<|LOC_983|>": 101280, "<|LOC_984|>": 101281, "<|LOC_985|>": 101282, "<|LOC_986|>": 101283, "<|LOC_987|>": 101284, "<|LOC_988|>": 101285, "<|LOC_989|>": 101286, "<|LOC_990|>": 101287, "<|LOC_991|>": 101288, "<|LOC_992|>": 101289, "<|LOC_993|>": 101290, "<|LOC_994|>": 101291, "<|LOC_995|>": 101292, "<|LOC_996|>": 101293, "<|LOC_997|>": 101294, "<|LOC_998|>": 101295, "<|LOC_999|>": 101296, "<|LOC_1000|>": 101297, "<|LOC_BEGIN|>": 101298, "<|LOC_END|>": 101299, "<|LOC_SEP|>": 101300, "<|CROP_COL_SEP|>": 101301, "<|CROP_ROW_SEP|>": 101302, "<|IMAGE_SEP|>": 101303, "<|IMAGE_START|>": 101304, "<|IMAGE_END|>": 101305, "<|VIDEO_START|>": 101306, "<|VIDEO_END|>": 101307, "<|ASR_START|>": 101308, "<|ASR_END|>": 101309}
benchmark.jpg ADDED

Git LFS Details

  • SHA256: 74f70f90a1c843e73bf1e3eb13a3a4723da191b7bd485d00e588153c3c10ba83
  • Pointer size: 131 Bytes
  • Size of remote file: 660 kB
config.json ADDED
@@ -0,0 +1,75 @@
+ {
+     "architectures": [
+         "Ernie4_5_VLMoeForConditionalGeneration"
+     ],
+     "auto_map": {
+         "AutoConfig": "configuration_ernie4_5_vl.Ernie4_5_VLMoEConfig",
+         "AutoModel": "modeling_ernie4_5_vl.Ernie4_5_VLMoeForConditionalGeneration",
+         "AutoModelForCausalLM": "modeling_ernie4_5_vl.Ernie4_5_VLMoeForConditionalGeneration",
+         "AutoProcessor": "processing_ernie4_5_vl.Ernie4_5_VLProcessor",
+         "AutoImageProcessor": "processing_ernie4_5_vl.Ernie4_5_VLImageProcessor"
+     },
+     "pad_token_id": 0,
+     "bos_token_id": 1,
+     "eos_token_id": 2,
+     "torch_dtype": "bfloat16",
+     "hidden_act": "silu",
+     "hidden_size": 2560,
+     "intermediate_size": 12288,
+     "im_patch_id": 100295,
+     "image_start_token_id": 101304,
+     "image_end_token_id": 101305,
+     "video_start_token_id": 101306,
+     "video_end_token_id": 101307,
+     "max_position_embeddings": 131072,
+     "model_type": "ernie4_5_moe_vl",
+     "moe_capacity": [128, 128, 128],
+     "moe_gate": "topk",
+     "moe_intermediate_size": [1536, 512],
+     "moe_k": 6,
+     "moe_layer_end_index": [29, 28],
+     "moe_layer_interval": 1,
+     "moe_layer_start_index": [1, 1],
+     "moe_multimodal_dispatch_use_allgather": "v2-alltoall-unpad-text",
+     "moe_num_experts": [64, 64],
+     "moe_num_shared_experts": 2,
+     "moe_use_aux_free": true,
+     "num_attention_heads": 20,
+     "num_hidden_layers": 28,
+     "num_key_value_heads": 4,
+     "pixel_hidden_size": 1280,
+     "rms_norm_eps": 1e-05,
+     "rope_3d": true,
+     "rope_theta": 500000,
+     "spatial_conv_size": 2,
+     "temporal_conv_size": 2,
+     "vocab_size": 103424,
+     "tie_word_embeddings": true,
+     "use_cache": true,
+     "use_rmsnorm": true,
+     "use_bias": false,
+     "rope_scaling": {
+         "type": "default",
+         "mrope_section": [
+             22,
+             22,
+             20
+         ]
+     },
+     "vision_config": {
+         "attn_implementation": "eager",
+         "depth": 32,
+         "embed_dim": 1280,
+         "hidden_act": "quick_gelu",
+         "hidden_size": 1280,
+         "in_channels": 3,
+         "in_chans": 3,
+         "mlp_ratio": 4,
+         "num_heads": 16,
+         "patch_size": 14,
+         "spatial_merge_size": 2,
+         "spatial_patch_size": 14,
+         "vit_first_fwd_bsz": 128,
+         "attn_sep": true
+     }
+ }
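Since `auto_map` points `AutoConfig` at `configuration_ernie4_5_vl.Ernie4_5_VLMoEConfig`, the uploaded configuration can be inspected with the standard `transformers` auto classes once remote code is trusted. A minimal sketch; the printed values simply restate fields from the JSON above, and the inline comments are interpretation rather than documentation from the repository:

```python
from transformers import AutoConfig

# Resolves to configuration_ernie4_5_vl.Ernie4_5_VLMoEConfig via the auto_map above.
config = AutoConfig.from_pretrained(
    "baidu/ERNIE-4.5-VL-28B-A3B-Thinking", trust_remote_code=True
)

# A few of the fields defined in config.json:
print(config.model_type)                     # "ernie4_5_moe_vl"
print(config.moe_num_experts)                # [64, 64] expert pools
print(config.moe_k)                          # 6 experts routed per token
print(config.max_position_embeddings)        # 131072
print(config.rope_scaling["mrope_section"])  # [22, 22, 20], the 3D-RoPE section split
```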
configuration_ernie4_5_vl.py ADDED
@@ -0,0 +1,658 @@
1
+ # Copyright (c) 2025 Baidu, Inc. All Rights Reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ """Ernie model configuration"""
16
+ import copy
17
+
18
+ from typing import List, Optional, Tuple, Union
19
+
20
+ from transformers import PretrainedConfig
21
+
22
+
23
+ __all__ = [
24
+ "ERNIE_PRETRAINED_INIT_CONFIGURATION",
25
+ "Ernie4_5_Config",
26
+ "Ernie4_5_MoEConfig",
27
+ "Ernie4_5_VLMoEConfig",
28
+ ]
29
+
30
+
31
+ class DFNRopeVisionTransformerConfig(PretrainedConfig):
32
+ """
33
+ Configuration class for DFNRopeVisionTransformer model.
34
+ This class inherits from [`PretrainedConfig`] and can be used to control the model outputs. Read the
35
+ documentation from [`PretrainedConfig`] for more information.
36
+ """
37
+
38
+ model_type = "DFNRope_vision_transformer"
39
+ base_model_tp_plan = {}
40
+
41
+ def __init__(
42
+ self,
43
+ depth=32,
44
+ embed_dim=1280,
45
+ hidden_size=3584,
46
+ hidden_act="quick_gelu",
47
+ mlp_ratio=4,
48
+ num_heads=16,
49
+ in_channels=3,
50
+ patch_size=14,
51
+ spatial_merge_size=2,
52
+ attn_implementation="eager", # new added
53
+ pp_data_balance=False,
54
+ recompute=False,
55
+ attn_sep=False,
56
+ vit_first_fwd_bsz=128,
57
+ vit_num_recompute_layers=10000,
58
+ **kwargs,
59
+ ):
60
+ """
61
+ Initialize DFNRopeVisionTransformer model configuration with default or specified parameters.
62
+
63
+ Args:
64
+ depth (int): Number of transformer layers in the model.
65
+ embed_dim (int): Dimensionality of the embedding layer.
66
+ hidden_size (int): Dimensionality of the feedforward network.
67
+ hidden_act (str): Activation function for the feedforward network.
68
+ mlp_ratio (float): Ratio between the number of input features and
69
+ the number of output features in the feedforward network.
70
+ num_heads (int): Number of attention heads in each attention layer.
71
+ in_channels (int): Number of channels in the input image.
72
+ patch_size (int):
73
+ Size of patches in the input image. Defaults to 14.
74
+ spatial_merge_size (int):
75
+ Spatial merge size for the spatial transformer module. Defaults to 2.
76
+ attn_implementation (str): Attention implementation type. Defaults to "eager".
77
+ pp_data_balance (bool): Whether to balance data during preprocessing. Defaults to False.
78
+ recompute (bool): Whether to use recompute. Defaults to False.
79
+ attn_sep (bool): Whether to separate attention computation into two stages. Defaults to False.
80
+ vit_first_fwd_bsz (int): First forward batch size for ViT. Defaults to 128.
81
+ vit_num_recompute_layers (int): Number of recomputed layers for ViT. Defaults to
82
+ """
83
+ super().__init__(**kwargs)
84
+
85
+ self.depth = depth
86
+ self.embed_dim = embed_dim
87
+ self.hidden_size = hidden_size
88
+ self.hidden_act = hidden_act
89
+ self.mlp_ratio = mlp_ratio
90
+ self.num_heads = num_heads
91
+ self.in_channels = in_channels
92
+ self.patch_size = patch_size
93
+ self.spatial_merge_size = spatial_merge_size
94
+ self.attn_implementation = attn_implementation
95
+ self.pp_data_balance = pp_data_balance
96
+ self.recompute = recompute
97
+ self.attn_sep = attn_sep
98
+ self.vit_first_fwd_bsz = vit_first_fwd_bsz
99
+ self.vit_num_recompute_layers = vit_num_recompute_layers
100
+
101
+ def get(self, key, default=None):
102
+ """get config value by key"""
103
+ if hasattr(self, key):
104
+ return getattr(self, key)
105
+ else:
106
+ return default
107
+
108
+
109
+ ERNIE_PRETRAINED_INIT_CONFIGURATION = {
110
+ "ernie/tiny-random-ernie": {
111
+ "hidden_size": 768,
112
+ "initializer_range": 0.02,
113
+ "intermediate_size": 11008,
114
+ "max_position_embeddings": 2048,
115
+ "model_type": "ernie",
116
+ "num_attention_heads": 2,
117
+ "num_hidden_layers": 2,
118
+ "rms_norm_eps": 1e-06,
119
+ "vocab_size": 32000,
120
+ "bos_token_id": 1,
121
+ "eos_token_id": 2,
122
+ "pad_token_id": 0,
123
+ "use_cache": False,
124
+ "recompute": False,
125
+ "use_flash_attn": True,
126
+ "use_pure_fp16": False,
127
+ },
128
+ }
129
+
130
+
131
+ class Ernie4_5_Config(PretrainedConfig):
132
+ """
133
+ Configuration class for ERNIE model.
134
+
135
+ This class stores the configuration of an ERNIE model, defining the model architecture.
136
+ It inherits from PretrainedConfig and can be used to control model outputs.
137
+ """
138
+
139
+ model_type = "ernie"
140
+ pretrained_init_configuration = ERNIE_PRETRAINED_INIT_CONFIGURATION
141
+ base_model_tp_plan = {}
142
+
143
+ def __init__(
144
+ self,
145
+ vocab_size=32000,
146
+ hidden_size=768,
147
+ intermediate_size=11008,
148
+ max_position_embeddings=32768,
149
+ num_hidden_layers=2,
150
+ num_attention_heads=2,
151
+ initializer_range=0.02, # no use
152
+ rms_norm_eps=1e-6,
153
+ use_cache=False,
154
+ use_flash_attention=True,
155
+ use_sparse_flash_attn=True,
156
+ use_var_len_flash_attn=False,
157
+ recompute=False,
158
+ recompute_granularity="core_attn",
159
+ recompute_use_reentrant=False,
160
+ use_rmsnorm=True,
161
+ fuse_rms_norm=False,
162
+ fuse_ln=False,
163
+ pad_token_id=0,
164
+ bos_token_id=1,
165
+ eos_token_id=2,
166
+ fuse_swiglu=False,
167
+ use_bias=False,
168
+ rope_theta=10000,
169
+ fuse_rope=False,
170
+ fuse_softmax_mask=False,
171
+ use_fast_ln=False,
172
+ weight_share_add_bias=True,
173
+ fuse_linear=False,
174
+ max_sequence_length=None,
175
+ ignored_index=-100,
176
+ add_tail_layers=False,
177
+ use_recompute_lm_head=False,
178
+ use_recompute_loss_fn=False,
179
+ refined_recompute=dict(),
180
+ attention_probs_dropout_prob=0.0,
181
+ hidden_dropout_prob=0.0,
182
+ compression_ratio: float = 1.0,
183
+ num_key_value_heads=None,
184
+ use_sparse_head_and_loss_fn=False,
185
+ micro_batch_size=-1,
186
+ use_ep_comm_overlap=False,
187
+ use_fused_head_and_loss_fn=False,
188
+ token_balance_loss=False,
189
+ token_balance_seqlen=False, # calculated based on batchsize and seqlen
190
+ cachekv_quant: bool = False,
191
+ pp_seg_method="layer:ErnieDecoderLayer|EmptyLayer",
192
+ **kwargs,
193
+ ):
194
+ """
195
+ Initialize ERNIE model configuration with default or specified parameters.
196
+
197
+ Args:
198
+ vocab_size (int): Size of the vocabulary (number of unique tokens)
199
+ hidden_size (int): Dimensionality of the encoder layers and the pooler layer
200
+ intermediate_size (int): Dimensionality of the "intermediate" (feed-forward) layer
201
+ max_position_embeddings (int): Maximum sequence length the model can handle
202
+ num_hidden_layers (int): Number of hidden layers in the Transformer encoder
203
+ num_attention_heads (int): Number of attention heads for each attention layer
204
+ rms_norm_eps (float): The epsilon used by the RMS normalization layers
205
+ use_cache (bool): Whether to use caching for faster generation (decoding)
206
+ use_flash_attention (bool): Whether to use FlashAttention for optimized attention computation
207
+ use_sparse_flash_attn (bool): Whether to use sparse FlashAttention
208
+ use_var_len_flash_attn (bool): Whether to use variable-length FlashAttention
209
+ recompute (bool): Whether to use gradient checkpointing to save memory
210
+ recompute_granularity (str): Granularity of recomputation ("core_attn", "full", etc.)
211
+ recompute_use_reentrant (bool): Whether to use reentrant checkpointing
212
+ use_rmsnorm (bool): Whether to use RMSNorm instead of LayerNorm
213
+ fuse_rms_norm (bool): Whether to fuse RMSNorm operations for optimization
214
+ fuse_ln (bool): Whether to fuse LayerNorm operations
215
+ pad_token_id (int): Token ID used for padding sequences
216
+ bos_token_id (int): Token ID used for beginning-of-sequence
217
+ eos_token_id (int): Token ID used for end-of-sequence
218
+ fuse_swiglu (bool): Whether to fuse SwiGLU operations
219
+ use_bias (bool): Whether to use bias terms in linear layers
220
+ rope_theta (float): The base period of the RoPE embeddings
221
+ fuse_rope (bool): Whether to fuse RoPE operations
222
+ use_fast_ln (bool): Whether to use optimized LayerNorm implementation
223
+ weight_share_add_bias (bool): Whether to share bias weights in certain layers
224
+ fuse_linear (bool): Whether to fuse linear operations
225
+ max_sequence_length (int): Maximum sequence length for positional embeddings
226
+ ignored_index (int): Target value that is ignored during loss computation
227
+ add_tail_layers (bool): Whether to add additional layers at the end
228
+ use_recompute_lm_head (bool): Whether to recompute gradients for language model head
229
+ use_recompute_loss_fn (bool): Whether to recompute gradients for loss function
230
+ refined_recompute (dict): Dictionary specifying refined recomputation settings
231
+ attention_probs_dropout_prob (float): Dropout probability for attention weights
232
+ hidden_dropout_prob (float): Dropout probability for hidden layers
233
+ compression_ratio (float): Ratio for KV cache compression (1.0 = no compression)
234
+ num_key_value_heads (int): Number of key/value heads (for Grouped Query Attention)
235
+ use_sparse_head_and_loss_fn (bool): Whether to use sparse attention head and loss function
236
+ micro_batch_size (int): Size of micro batches (-1 for automatic)
237
+ use_ep_comm_overlap (bool): Whether to overlap communication with computation
238
+ use_fused_head_loss_fn (bool): Whether to use fused head and loss function
239
+ token_balance_loss (bool): Whether to balance loss by token count
240
+ token_balance_seqlen (bool): Whether to balance sequence lengths
241
+ cachekv_quant (bool): Whether to quantize key-value cache
242
+ pp_seg_method (str): Method for pipeline parallel segmentation
243
+ **kwargs: Additional keyword arguments passed to parent class
244
+ """
245
+
246
+ # Set default for tied embeddings if not specified.
247
+ if "tie_word_embeddings" not in kwargs:
248
+ kwargs["tie_word_embeddings"] = False
249
+ super().__init__(
250
+ pad_token_id=pad_token_id,
251
+ bos_token_id=bos_token_id,
252
+ eos_token_id=eos_token_id,
253
+ **kwargs,
254
+ )
255
+ self.vocab_size = vocab_size
256
+ self.hidden_size = hidden_size
257
+ self.intermediate_size = intermediate_size
258
+ self.max_position_embeddings = max_position_embeddings
259
+ self.num_hidden_layers = num_hidden_layers
260
+ self.num_attention_heads = num_attention_heads
261
+ self.initializer_range = initializer_range
262
+ self.rms_norm_eps = rms_norm_eps
263
+ self.use_cache = use_cache
264
+ self.recompute = recompute
265
+ self.recompute_granularity = recompute_granularity
266
+ self.use_flash_attention = use_flash_attention
267
+ self.use_sparse_flash_attn = use_sparse_flash_attn
268
+ self.recompute_use_reentrant = recompute_use_reentrant
269
+ self.use_var_len_flash_attn = use_var_len_flash_attn
270
+ self.pad_token_id = pad_token_id
271
+ self.bos_token_id = bos_token_id
272
+ self.eos_token_id = eos_token_id
273
+ self.fuse_swiglu = fuse_swiglu
274
+ self.fuse_rms_norm = fuse_rms_norm
275
+ self.fuse_ln = fuse_ln
276
+ self.use_rmsnorm = use_rmsnorm
277
+ self.micro_batch_size = micro_batch_size
278
+
279
+ self.max_sequence_length = max_sequence_length
280
+ self.use_bias = use_bias
281
+ self.weight_share_add_bias = weight_share_add_bias
282
+ self.rope_theta = rope_theta
283
+ self.fuse_rope = fuse_rope
284
+ self.fuse_softmax_mask = fuse_softmax_mask
285
+ self.use_fast_ln = use_fast_ln
286
+
287
+ self.fuse_linear = fuse_linear
288
+ self.ignored_index = ignored_index
289
+ self.add_tail_layers = add_tail_layers
290
+ self.use_recompute_lm_head = use_recompute_lm_head
291
+ self.use_recompute_loss_fn = use_recompute_loss_fn
292
+
293
+ self.refined_recompute = refined_recompute
294
+ self.skip_recompute_ops = dict()
295
+ """
296
+ `refined_recompute` is a dictionary that specifies fine-grained gradient recomputation settings,
297
+ which currently only takes effect in Pipeline Parallel (PP) mode.
298
+
299
+ In PP mode, this dictionary populates `self.skip_recompute_ops` with the following structure:
300
+ - Key (`op_name`): The operation name to configure, with possible values:
301
+ * "mlp_row_ln" - MLP row-wise layer normalization
302
+ * "flash_attn" - Flash attention operation
303
+ * "attention_row_ln" - Attention row-wise layer normalization
304
+ * "attention_column_ln" - Attention column-wise layer normalization
305
+ * "mlp_column_ln" - MLP column-wise layer normalization
306
+
307
+ - Value (`skip_num`): Controls how many times to skip recomputation:
308
+ * 0: Never skip recomputation (minimum memory usage)
309
+ * -1: Always skip recomputation (maximum memory usage)
310
+ * [0,1,...,12]: Skip recomputation the specified number of times
311
+ * ≥12: Equivalent to -1 (always skip recomputation)
312
+
313
+ This allows precise control over memory/computation tradeoffs for different operations.
314
+ """
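+ # Illustrative sketch (not from the original source): a hypothetical setting such as
+ #     refined_recompute={"flash_attn": -1, "mlp_row_ln": 2}
+ # would, per the rules above, always skip recomputation for the flash-attention op
+ # and skip it only twice for the MLP row-wise layer norm.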
315
+ self.attention_probs_dropout_prob = attention_probs_dropout_prob
316
+ self.hidden_dropout_prob = hidden_dropout_prob
317
+ self.compression_ratio = compression_ratio
318
+ self.num_key_value_heads = num_key_value_heads
319
+ self.use_sparse_head_and_loss_fn = use_sparse_head_and_loss_fn
320
+ self.use_ep_comm_overlap = use_ep_comm_overlap
321
+ self.use_fused_head_and_loss_fn = use_fused_head_and_loss_fn
322
+ self.token_balance_loss = token_balance_loss
323
+ self.token_balance_seqlen = token_balance_seqlen
324
+ self.cachekv_quant = cachekv_quant
325
+ self.pp_seg_method = pp_seg_method
326
+
327
+ def get(self, key, default=None):
328
+ """get config value by key"""
329
+ if hasattr(self, key):
330
+ return getattr(self, key)
331
+ else:
332
+ return default
333
+
334
+
335
+ class Ernie4_5_MoEConfig(Ernie4_5_Config):
336
+ r"""
337
+ Configuration class for ErnieMoE model architecture.
338
+
339
+ This class stores the configuration for a [`~ErnieModel`] and is used to instantiate
340
+ an ErnieMoE model according to the specified arguments. Inherits from [`PretrainedConfig`]
341
+ and can control model outputs.
342
+
343
+ Attributes:
344
+ Inherits all attributes from Ernie4_5_Config and adds MoE-specific configurations.
345
+ """
346
+
347
+ model_type = "ernie"
348
+ attribute_map = {
349
+ "n_positions": "max_position_embeddings",
350
+ "n_embd": "hidden_size",
351
+ "n_layer": "num_hidden_layers",
352
+ "n_head": "num_attention_heads",
353
+ "n_inner": "intermediate_size",
354
+ "activation_function": "hidden_act",
355
+ }
356
+ pretrained_init_configuration = ERNIE_PRETRAINED_INIT_CONFIGURATION
357
+ base_model_tp_plan = {}
358
+
359
+ def __init__(
360
+ self,
361
+ moe_num_experts: Union[int, list] = 0,
362
+ use_recompute_moe=False,
363
+ moe_capacity=(),
364
+ moe_layer_interval=2,
365
+ moe_layer_start_index=0,
366
+ moe_layer_end_index=-1,
367
+ moe_aux_loss_lambda=1e-2,
368
+ moe_z_loss_lambda=1e-4,
369
+ moe_orthogonal_loss_lambda=1e-2,
370
+ sinkhorn_2gate=True,
371
+ sinkhorn_temp=3e-2,
372
+ global_aux_loss=False,
373
+ moe_dropout_prob=0.0,
374
+ moe_group="world",
375
+ moe_gate="top2",
376
+ moe_intermediate_size: Union[int, list] = 0,
377
+ moe_num_shared_experts: int = 0,
378
+ moe_reverse_token_drop: bool = False,
379
+ moe_gate_act: str = "softmax",
380
+ moe_norm_gate_logits=True,
381
+ moe_all_to_all_dropout: float = 0.0,
382
+ moe_k=2,
383
+ moe_use_aux_free: bool = False,
384
+ # `moe_group_experts` must be used with `moe_use_hard_gate=True`
385
+ moe_group_experts: bool = False,
386
+ moe_group_orthogonal_loss: bool = True,
387
+ enable_delay_scale_loss: bool = True,
388
+ num_acc_steps: int = 1,
389
+ fuse_gate_detach_matmul: bool = False,
390
+ dpo_config=None,
391
+ moe_multimodal_dispatch_use_allgather: str = "",
392
+ moe_use_hard_gate=False,
393
+ moe_dense_experts_token_type_id=3,
394
+ **kwargs,
395
+ ):
396
+ """
397
+ Initialize ErnieMoE configuration with MoE-specific parameters.
398
+
399
+ Args:
400
+ moe_num_experts: Number of experts in MoE layers
401
+ use_recompute_moe: Whether to use recomputation for MoE layers
402
+ moe_capacity: Capacity configuration for MoE layers
403
+ moe_layer_interval: Interval between MoE layers
404
+ moe_layer_start_index: Starting layer index for MoE
405
+ moe_layer_end_index: Ending layer index for MoE (-1 means last layer)
406
+ moe_aux_loss_lambda: Weight for auxiliary loss
407
+ moe_z_loss_lambda: Weight for z-loss
408
+ moe_orthogonal_loss_lambda: Weight for orthogonal loss
409
+ sinkhorn_2gate: Whether to use sinkhorn 2-gate routing
410
+ sinkhorn_temp: Temperature for sinkhorn routing
411
+ global_aux_loss: Whether to use global auxiliary loss
412
+ moe_dropout_prob: Dropout probability for MoE layers
413
+ moe_group: Group configuration for MoE experts
414
+ moe_gate: Type of gating mechanism ('top2', etc.)
415
+ moe_intermediate_size: Intermediate size for MoE layers
416
+ moe_num_shared_experts: Number of shared experts
417
+ moe_reverse_token_drop: Whether to use reverse token dropping
418
+ moe_gate_act: Activation function for gating
419
+ moe_norm_gate_logits: Whether to normalize gate logits
420
+ moe_all_to_all_dropout: Dropout for all-to-all communication
421
+ moe_k: Number of experts to route to
422
+ moe_use_aux_free: Whether to use auxiliary-free routing
423
+ moe_group_experts: Whether to group experts (requires hard gating)
424
+ moe_group_orthogonal_loss: Whether to use group orthogonal loss
425
+ enable_delay_scale_loss: Whether to enable delayed loss scaling
426
+ num_acc_steps: Number of accumulation steps
427
+ fuse_gate_detach_matmul: Whether to fuse gate detach matmul
428
+ **kwargs: Additional base model configuration parameters
429
+
430
+ Note:
431
+ When use_recompute_moe is True and recompute_granularity is "full", recompute_granularity is changed to "full_attn".
432
+ """
433
+
434
+ if use_recompute_moe:
435
+ logger.warning(
436
+ "`use_recompute_moe=True` is set; if `recompute_granularity` is 'full', it will be changed to 'full_attn'."
437
+ )
438
+ if kwargs["recompute"] and kwargs["recompute_granularity"] == "full":
439
+ kwargs["recompute_granularity"] = "full_attn"
440
+ super().__init__(**kwargs)
441
+
442
+ self.moe_num_experts = moe_num_experts
443
+ self.use_recompute_moe = use_recompute_moe
444
+ self.moe_capacity = moe_capacity
445
+ self.moe_aux_loss_lambda = moe_aux_loss_lambda
446
+ self.moe_z_loss_lambda = moe_z_loss_lambda
447
+ self.moe_orthogonal_loss_lambda = moe_orthogonal_loss_lambda
448
+ self.global_aux_loss = global_aux_loss
449
+ self.sinkhorn_2gate = sinkhorn_2gate
450
+ self.sinkhorn_temp = sinkhorn_temp
451
+ self.moe_layer_interval = moe_layer_interval
452
+ self.moe_dropout_prob = moe_dropout_prob
453
+ self.moe_group = moe_group
454
+ self.moe_gate = moe_gate
455
+ self.moe_intermediate_size = moe_intermediate_size
456
+ self.moe_num_shared_experts = moe_num_shared_experts
457
+ self.moe_reverse_token_drop = moe_reverse_token_drop
458
+ self.moe_k = moe_k
459
+ self.moe_all_to_all_dropout = moe_all_to_all_dropout
460
+ self.moe_group_experts = moe_group_experts
461
+ self.moe_group_orthogonal_loss = moe_group_orthogonal_loss
462
+ self.enable_delay_scale_loss = enable_delay_scale_loss
463
+ self.num_acc_steps = num_acc_steps
464
+ self.moe_layer_start_index = moe_layer_start_index
465
+ self.moe_layer_end_index = (
466
+ self.num_hidden_layers - 1
467
+ if moe_layer_end_index == -1
468
+ else moe_layer_end_index
469
+ )
470
+ self.moe_gate_act = moe_gate_act
471
+ self.moe_norm_gate_logits = moe_norm_gate_logits
472
+ self.moe_use_aux_free = moe_use_aux_free
473
+ self.fuse_gate_detach_matmul = fuse_gate_detach_matmul
474
+ self.dpo_config = dpo_config
475
+ self.moe_multimodal_dispatch_use_allgather = (
476
+ moe_multimodal_dispatch_use_allgather
477
+ )
478
+ self.moe_use_hard_gate = moe_use_hard_gate
479
+ self.moe_dense_experts_token_type_id = moe_dense_experts_token_type_id
480
+
481
+ @property
482
+ def multimodel_experts(self) -> bool:
483
+ """Whether the model uses more than one (multimodal) expert group in moe_num_experts."""
484
+ return (
485
+ isinstance(self.moe_num_experts, (tuple, list))
486
+ and len(self.moe_num_experts) > 1
487
+ )
488
+
489
+ @property
490
+ def use_moe(self) -> bool:
491
+ """
492
+ Check if model is using MoE architecture.
493
+
494
+ Returns:
495
+ bool: True if moe_num_experts > 0, False otherwise
496
+ """
497
+ return self.moe_num_experts > 0
498
+
499
+
500
+ class Ernie4_5_VLMoEConfig(Ernie4_5_MoEConfig):
501
+ """
502
+ This is the configuration class to store the configuration of a [`~ErnieModel`]. It is used to instantiate an Ernie
503
+ model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
504
+ defaults will yield a similar configuration to that of the Ernie-7B.
505
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
506
+ documentation from [`PretrainedConfig`] for more information.
507
+ Args:
508
+ vocab_size (`int`, *optional*, defaults to 32000):
509
+ Vocabulary size of the Ernie model. Defines the number of different tokens that can be represented by the
510
+ `inputs_ids` passed when calling [`~ErnieModel`] or [`~TFErnieModel`].
511
+ hidden_size (`int`, *optional*, defaults to 4096):
512
+ Dimension of the hidden representations.
513
+ intermediate_size (`int`, *optional*, defaults to 11008):
514
+ Dimension of the MLP representations.
515
+ num_hidden_layers (`int`, *optional*, defaults to 32):
516
+ Number of hidden layers in the Transformer encoder.
517
+ num_attention_heads (`int`, *optional*, defaults to 32):
518
+ Number of attention heads for each attention layer in the Transformer encoder.
519
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
520
+ The non-linear activation function (function or string) in the decoder.
521
+ initializer_range (`float`, *optional*, defaults to 0.02):
522
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
523
+ rms_norm_eps (`float`, *optional*, defaults to 1e-12):
524
+ The epsilon used by the rms normalization layers.
525
+ use_cache (`bool`, *optional*, defaults to `True`):
526
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
527
+ relevant if `config.is_decoder=True`.
528
+ tie_word_embeddings(`bool`, *optional*, defaults to `False`):
529
+ Whether to tie weight embeddings
530
+ """
531
+
532
+ model_type = "ernie4_5_moe_vl"
533
+ attribute_map = {
534
+ "n_positions": "max_position_embeddings",
535
+ "n_embd": "hidden_size",
536
+ "n_layer": "num_hidden_layers",
537
+ "n_head": "num_attention_heads",
538
+ "n_inner": "intermediate_size",
539
+ "activation_function": "hidden_act",
540
+ }
541
+ base_model_tp_plan = {
542
+ "model.layers.*.self_attn.q_proj": "colwise_rep",
543
+ "model.layers.*.self_attn.k_proj": "colwise_rep",
544
+ "model.layers.*.self_attn.v_proj": "colwise_rep",
545
+ "model.layers.*.self_attn.o_proj": "rowwise_rep",
546
+ "model.layers.*.mlp.experts.*.gate_proj": "colwise",
547
+ "model.layers.*.mlp.experts.*.up_proj": "colwise",
548
+ "model.layers.*.mlp.experts.*.down_proj": "rowwise",
549
+ "model.layers.*.mlp_text.experts.*.gate_proj": "colwise",
550
+ "model.layers.*.mlp_text.experts.*.up_proj": "colwise",
551
+ "model.layers.*.mlp_text.experts.*.down_proj": "rowwise",
552
+ "model.layers.*.mlp.gate_proj": "colwise",
553
+ "model.layers.*.mlp.up_proj": "colwise",
554
+ "model.layers.*.mlp.down_proj": "rowwise"
555
+ }
556
+
557
+ def __init__(
558
+ self,
559
+ vision_config=None,
560
+ im_patch_id=None,
561
+ image_start_token_id=None,
562
+ image_end_token_id=None,
563
+ video_start_token_id=None,
564
+ video_end_token_id=None,
565
+ pixel_hidden_size=None,
566
+ modality_detach=False,
567
+ temporal_conv_size=2,
568
+ spatial_conv_size=2,
569
+ mm_vocab_size=0, # vocab size reserved for multimodal special tokens
570
+ max_text_id=None,
571
+ use_temporal_conv=True,
572
+ moe_use_size_all2all=False,
573
+ moe_num_attn_experts=False,
574
+ moe_dense_experts_token_type_id: int = 3,
575
+ moe_use_hard_gate: bool = True,
576
+ moe_fuse_experts: bool = False,
577
+ moe_use_token_type_bias: bool = False,
578
+ disable_ffn_model_parallel=False,
579
+ fuse_attn_ffn=True,
580
+ rope_3d=True,
581
+ freq_allocation=20,
582
+ using_precision_check=False,
583
+ use_recompute_resampler=False,
584
+ resampler_fuse_rms_norm=False,
585
+ moe_layer_feed_fake_token=False,
586
+ tensor_parallel_degree=1,
587
+ **kwargs,
588
+ ):
589
+ super().__init__(**kwargs)
590
+ if isinstance(vision_config, dict):
591
+ self.vision_config = DFNRopeVisionTransformerConfig(**vision_config)
592
+ else:
593
+ self.vision_config = DFNRopeVisionTransformerConfig()
594
+ self.im_patch_id = im_patch_id
595
+ self.image_start_token_id = image_start_token_id
596
+ self.image_end_token_id = image_end_token_id
597
+ self.video_start_token_id = video_start_token_id
598
+ self.video_end_token_id = video_end_token_id
599
+ self.pixel_hidden_size = pixel_hidden_size
600
+ self.modality_detach = modality_detach
601
+ self.temporal_conv_size = temporal_conv_size
602
+ self.spatial_conv_size = spatial_conv_size
603
+ self.mm_vocab_size = mm_vocab_size
604
+ self.max_text_id = max_text_id
605
+ self.use_temporal_conv = use_temporal_conv
606
+
607
+ self.moe_use_size_all2all = moe_use_size_all2all
608
+ self.moe_num_attn_experts = moe_num_attn_experts
609
+ self.moe_dense_experts_token_type_id = moe_dense_experts_token_type_id
610
+ self.moe_use_hard_gate = moe_use_hard_gate
611
+ self.moe_fuse_experts = moe_fuse_experts
612
+ self.moe_use_token_type_bias = moe_use_token_type_bias
613
+ self.disable_ffn_model_parallel = disable_ffn_model_parallel
614
+
615
+ self.fuse_attn_ffn = fuse_attn_ffn
616
+ self.rope_3d = rope_3d
617
+ self.freq_allocation = freq_allocation
618
+ self.using_precision_check = using_precision_check
619
+ self.use_recompute_resampler = use_recompute_resampler
620
+ self.resampler_fuse_rms_norm = resampler_fuse_rms_norm
621
+ self.moe_layer_feed_fake_token = moe_layer_feed_fake_token
622
+
623
+ self.tensor_parallel_degree = tensor_parallel_degree
624
+
625
+ @property
626
+ def multimodel_experts(self) -> bool:
627
+ """Check whether the model uses more than one (multimodal) expert group."""
628
+ return (
629
+ isinstance(self.moe_num_experts, (tuple, list))
630
+ and len(self.moe_num_experts) > 1
631
+ )
632
+
633
+ @property
634
+ def use_moe(self) -> bool:
635
+ """
636
+ Check if model is using MoE architecture.
637
+
638
+ Returns:
639
+ bool: True if moe_num_experts > 0, False otherwise
640
+ """
641
+ return (
642
+ sum(self.moe_num_experts) > 0
643
+ if self.multimodel_experts
644
+ else self.moe_num_experts > 0
645
+ )
646
+
647
+ def to_dict(self, saving_file=False):
648
+ """Serialize the configuration to a dictionary, including the nested vision_config."""
649
+ output = copy.deepcopy(self.__dict__)
650
+ if self.vision_config:
651
+ output["vision_config"] = (
652
+ self.vision_config.to_dict()
653
+ if isinstance(self.vision_config, (DFNRopeVisionTransformerConfig))
654
+ else self.vision_config
655
+ )
656
+
657
+ output["model_type"] = self.__class__.model_type
658
+ return output
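A minimal usage sketch for the configuration classes above (hypothetical values; it assumes `configuration_ernie4_5_vl.py` is importable locally and that the defaults of `Ernie4_5_Config` and `DFNRopeVisionTransformerConfig` construct cleanly — the shipped `config.json` remains the authoritative configuration):

```python
from configuration_ernie4_5_vl import Ernie4_5_VLMoEConfig

cfg = Ernie4_5_VLMoEConfig(
    moe_num_experts=[64, 64],  # hypothetical: one expert group per modality
    moe_k=2,                   # route each token to 2 experts
    moe_layer_interval=2,      # an MoE layer every 2 decoder layers
)
print(cfg.multimodel_experts)  # True  -> more than one expert group
print(cfg.use_moe)             # True  -> sum(moe_num_experts) > 0
```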
generation_config.json ADDED
@@ -0,0 +1,10 @@
1
+ {
2
+ "top_p": 0.95,
3
+ "temperature": 0.6,
4
+ "repetition_penalty": 1.0,
5
+ "frequency_penalty": 0.0,
6
+ "presence_penalty": 0.0,
7
+ "pad_token_id": 0,
8
+ "bos_token_id": 1,
9
+ "eos_token_id": 2
10
+ }
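These defaults map onto a `transformers.GenerationConfig` roughly as below (a sketch; `do_sample=True` is an assumption here, and `frequency_penalty`/`presence_penalty` are not standard `GenerationConfig` fields — they are typically consumed by serving stacks instead):

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig(
    do_sample=True,          # assumption: enable sampling so top_p/temperature take effect
    top_p=0.95,
    temperature=0.6,
    repetition_penalty=1.0,
    pad_token_id=0,
    bos_token_id=1,
    eos_token_id=2,
)
```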
model-00001-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:316f0b2880ffe623cdf44e8bf43bce413be22a03a45fba55b244bc49e2a27f52
3
+ size 2499162944
model-00002-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:32d436640f2cec018117b56adb9a091be5ea7ddaecf29120cd13232269207d0d
3
+ size 329478088
model-00003-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d37f32a9ad65ef2b65047a1719005800d915693d922f4d33ab03d07411d87a59
3
+ size 2093279192
model-00004-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0c4587c88f9355a6978e7fc851542ac9cec5d60f6706baa14576e0165f5a754a
3
+ size 2093279192
model-00005-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e3fc5934d2d7269bc5ea8c8b220cf733137bb4d94b5e0ada96e73800ab561563
3
+ size 2093279192
model-00006-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ae1750a29fb2e600de00fc9e0bfd1183646f05f6ba8bf0195bde2a163540f298
3
+ size 2093279192
model-00007-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f358e455e5acc240fd67b36efda92fad1bc9c621984dacfb83c4e4c1257e6692
3
+ size 2093279192
model-00008-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e3b2e9e9a471a3aabc579e062f29d0098bfd6c50d9715c90146126c4d695b9de
3
+ size 2093279192
model-00009-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:08edfc746286cdfd9ee72cb98befabca0133b9bd79589be992f5cf5b395f188f
3
+ size 2093279192
model-00010-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:38fb50d70257dce6d3c12ac4891503b93b296a02ca6e9192ce4ff18ea79e4988
3
+ size 2093279192
model-00011-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:17d2c7f537587210b45d2ea50b837905c684f826fc174ba6273ed11cf9a2ec35
3
+ size 2093279192
model-00012-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:15b8002d4747b545c625afb416e3b92dc0c05daea2b4a8ac51ba0c8956ac52a6
3
+ size 2093279592
model-00013-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ffd7d02c5d3c1cc752a1a9da9d608da9a621bf2a448ec8b81b7ad20d95eaec72
3
+ size 2093279592
model-00014-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b8b7f8cc43cb6b1d494abe33248ab53d3afe998a7e137bb6c25e87fe9974e026
3
+ size 2093279592
model-00015-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:13c11d6a49354d3905539aed6eef20caa574271888d810efb6479f11339d54bb
3
+ size 2093279592
model-00016-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fc87eec9268c8e5c64027943858a8dfa91066b561dbbb9bf338fb928dce601d3
3
+ size 2093279592
model-00017-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:63c747d1354fc0edad49e3a91d29499d1c0419805879db18393489bccdda0667
3
+ size 2093279592
model-00018-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e51248c89f68460b146da8c19c95e0561fb6743d77662ea3724b953543016ba8
3
+ size 2093279592
model-00019-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:46a6708194e6fed64a8ce4440fa714e7ef7dfc690ef56f7af6dd3804935dd2a9
3
+ size 2093279592
model-00020-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:58d17fd0fd65fe844c6cd4d0b2184831dc532087b6f3696133415356fd7e1e3c
3
+ size 2093279592
model-00021-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ad368d5f6a6b39679bb9961717336fa6591678d3cd8227c03c6e02c008bcd9d4
3
+ size 2093279592
model-00022-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca23d84bda0e885d7ee694c57e310dfeec7545e746ea4747ee7f59e3ccb93dc8
3
+ size 2093279592
model-00023-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f67cc38459941db3c23cf503c5a2970669a7752097ac7b527652d9aa412344be
3
+ size 2093279592
model-00024-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:339043048bb39c2817803e329f84a53f63f3feba287c67949d6d12fa8ea79d77
3
+ size 2093279592
model-00025-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cfc32f1945d424c0ae7a5b10dc1c69394cf142b864ed3a7c98fc9a18ab3d0794
3
+ size 2093279592
model-00026-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c168877f3c174052b359376c12c899913d8b52c11b07482eacd7bdc4a73d9099
3
+ size 2093279592
model-00027-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9feec6d2b7c327e751542eb73b8f3b07febc9d20785911c61e7b83fb5dfefd8c
3
+ size 2093279592
model-00028-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:98a94b94e5e3c00d5b3371390997a5a1363becec23fa049ecd08bec51cd1e06c
3
+ size 2093279592
model-00029-of-00029.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:435d544797ccbe15396824bd1d4cbea8621a1115e5850eb06dc25b597be71342
3
+ size 2093279592
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
modeling_ernie4_5_vl.py ADDED
The diff for this file is too large to render. See raw diff
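The 29 `model-000XX-of-00029.safetensors` shards above are tied together by `model.safetensors.index.json` and loaded through the custom code in `modeling_ernie4_5_vl.py`. A loading sketch (the repo id, dtype, and device settings are assumptions, not instructions from this repo):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "baidu/ERNIE-4.5-VL-28B-A3B-PT",  # hypothetical repo id, or a local path to these files
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,           # required to pick up the custom modeling/configuration files
)
```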
 
preprocessor_config.json ADDED
@@ -0,0 +1,29 @@
1
+ {
2
+ "crop_size": {
3
+ "height": 224,
4
+ "width": 224
5
+ },
6
+ "do_center_crop": false,
7
+ "do_convert_rgb": true,
8
+ "do_normalize": true,
9
+ "do_rescale": true,
10
+ "do_resize": true,
11
+ "image_mean": [
12
+ 0.48145466,
13
+ 0.4578275,
14
+ 0.40821073
15
+ ],
16
+ "image_std": [
17
+ 0.26862954,
18
+ 0.26130258,
19
+ 0.27577711
20
+ ],
21
+ "resample": 3,
22
+ "rescale_factor": 0.00392156862745098,
23
+ "size": {
24
+ "height": 224,
25
+ "width": 224
26
+ },
27
+ "min_pixels": 3136,
28
+ "max_pixels": 4816896
29
+ }
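The numbers above are the standard CLIP normalization statistics plus a pixel budget: `rescale_factor` is 1/255, `min_pixels` is 56×56 = 3136, and `max_pixels` is 28×28×6144 = 4816896. A small sketch of the rescale-then-normalize arithmetic the image processor applies per channel:

```python
import numpy as np

image_mean = np.array([0.48145466, 0.4578275, 0.40821073])
image_std = np.array([0.26862954, 0.26130258, 0.27577711])
rescale_factor = 1 / 255  # 0.00392156862745098

pixel = np.array([128, 64, 200], dtype=np.uint8)   # a hypothetical RGB pixel
normalized = (pixel * rescale_factor - image_mean) / image_std
print(normalized)

assert 56 * 56 == 3136 and 28 * 28 * 6144 == 4816896  # the configured pixel budget
```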
processing_ernie4_5_vl.py ADDED
@@ -0,0 +1,1821 @@
1
+ # Copyright (c) 2025 Baidu, Inc. All Rights Reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ """Tokenization, image processing, and processor classes for ERNIE-4.5-VL."""
16
+
17
+ import copy
18
+ import io
19
+ import os
20
+ import math
21
+ import random
22
+ import requests
23
+ import base64
24
+ import datetime
25
+ import hashlib
26
+ import threading
27
+ import uuid
28
+ import decord
29
+ from shutil import copyfile
30
+ from typing import Any, Dict, List, Optional, Tuple, Union
31
+
32
+ import numpy as np
33
+ import torch
34
+ from PIL import Image, ImageDraw, ImageFont
35
+ from PIL.ExifTags import TAGS
36
+ from collections import defaultdict
37
+ from pathlib import Path
38
+ from tempfile import NamedTemporaryFile as ntf
39
+
40
+ import sentencepiece as spm
41
+ from transformers.tokenization_utils import PreTrainedTokenizer
42
+ from transformers.tokenization_utils_base import (
43
+ PaddingStrategy,
44
+ TextInput,
45
+ )
46
+ from transformers.utils import TensorType, logging
47
+ from transformers.video_utils import VideoInput
48
+ from transformers.processing_utils import ProcessorMixin
49
+ from transformers.feature_extraction_utils import BatchFeature
50
+ from transformers.image_processing_utils import BaseImageProcessor, BatchFeature
51
+ from transformers.image_transforms import (
52
+ convert_to_rgb,
53
+ normalize,
54
+ rescale,
55
+ resize,
56
+ to_channel_dimension_format,
57
+ )
58
+ from transformers.image_utils import (
59
+ OPENAI_CLIP_MEAN,
60
+ OPENAI_CLIP_STD,
61
+ ChannelDimension,
62
+ ImageInput,
63
+ PILImageResampling,
64
+ get_image_size,
65
+ infer_channel_dimension_format,
66
+ is_valid_image,
67
+ make_list_of_images,
68
+ to_numpy_array,
69
+ valid_images,
70
+ )
71
+
72
+ logger = logging.get_logger(__name__)
73
+
74
+
75
+ class Ernie4_5_VLTokenizer(PreTrainedTokenizer):
76
+ """
77
+ Ernie4_5_VLTokenizer
78
+ """
79
+
80
+ vocab_files_names = {
81
+ "vocab_file": "tokenizer.model",
82
+ }
83
+ # Model input names expected by the tokenizer
84
+ model_input_names = ["input_ids", "position_ids", "attention_mask", "labels"]
85
+ # Padding side (where to add padding tokens)
86
+ padding_side = "right"
87
+
88
+ def __init__(
89
+ self,
90
+ vocab_file,
91
+ bos_token="<s>",
92
+ cls_token="<cls>",
93
+ eos_token="</s>",
94
+ mask_token="<mask:0>",
95
+ pad_token="<pad>",
96
+ sep_token="<sep>",
97
+ unk_token="<unk>",
98
+ additional_special_tokens=None,
99
+ **kwargs,
100
+ ):
101
+ """
102
+ Initialize the Ernie4_5_VLTokenizer
103
+
104
+ Args:
105
+ vocab_file (str): Path to the tokenizer vocabulary model.
106
+ bos_token (str, optional): The beginning of sequence token. Defaults to `"<s>"`.
107
+ cls_token (str, optional): The classifier token. Defaults to `"<cls>"`.
108
+ eos_token (str, optional): The end of sequence token. Defaults to `"</s>"`.
109
+ mask_token (str, optional): The masking token. Defaults to `"<mask:0>"`.
110
+ pad_token (str, optional): The padding token. Defaults to `"<pad>"`.
111
+ sep_token (str, optional): The separation token. Defaults to `"<sep>"`.
112
+ unk_token (str, optional): The unknown tokens symbol. Defaults to `"<unk>"`.
113
+ additional_special_tokens (List[str], optional): Additional special tokens to use.
114
+ Defaults to `["<mask:1>", "<mask:7>"]`.
115
+ **kwargs (dict): Additional keyword arguments passed along to the superclass.
116
+ """
117
+
118
+ # Store vocabulary file path
119
+ self.vocab_file = vocab_file
120
+ # Initialize SentencePiece processor
121
+ self.sp_model = spm.SentencePieceProcessor()
122
+ # Load the vocabulary model
123
+ self.sp_model.Load(vocab_file)
124
+
125
+ # Set default additional special tokens if none provided
126
+ if additional_special_tokens is None:
127
+ additional_special_tokens = ["<mask:1>", "<mask:7>"]
128
+ super().__init__(
129
+ bos_token=bos_token,
130
+ cls_token=cls_token,
131
+ eos_token=eos_token,
132
+ mask_token=mask_token,
133
+ pad_token=pad_token,
134
+ sep_token=sep_token,
135
+ unk_token=unk_token,
136
+ additional_special_tokens=additional_special_tokens,
137
+ **kwargs,
138
+ )
139
+
140
+ @property
141
+ def space_token(self):
142
+ """Return the space token"""
143
+ return "<mask:1>"
144
+
145
+ @property
146
+ def space_token_id(self):
147
+ """Return the ID of the space token"""
148
+ return self.sp_model.piece_to_id("<mask:1>")
149
+
150
+ @property
151
+ def gend_token(self):
152
+ """Return the gender token"""
153
+ return "<mask:7>"
154
+
155
+ @property
156
+ def gend_token_id(self):
157
+ """Return the ID of the gender token"""
158
+ return self.sp_model.piece_to_id("<mask:7>")
159
+
160
+ @property
161
+ def im_start_id(self):
162
+ """Return the ID of the image start token"""
163
+ return self.sp_model.piece_to_id("<|im_start|>")
164
+
165
+ @property
166
+ def im_end_id(self):
167
+ """Return the ID of the image end token"""
168
+ return self.sp_model.piece_to_id("<|im_end|>")
169
+
170
+ @property
171
+ def vocab_size(self):
172
+ """Return the size of the vocabulary"""
173
+ return self.sp_model.vocab_size()
174
+
175
+ def get_vocab(self):
176
+ """Return the vocabulary as a dictionary mapping tokens to IDs"""
177
+ vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
178
+ vocab.update(self.added_tokens_encoder)
179
+ return vocab
180
+
181
+ def _tokenize(self, text):
182
+ """Tokenize the input text into pieces"""
183
+ return self.sp_model.encode_as_pieces(text)
184
+
185
+ def _convert_token_to_id(self, token):
186
+ """Convert a token to its corresponding ID"""
187
+ return self.sp_model.piece_to_id(token)
188
+
189
+ def _convert_id_to_token(self, id):
190
+ """Convert an ID to its corresponding token"""
191
+ return self.sp_model.id_to_piece(id)
192
+
193
+ def convert_tokens_to_string(self, tokens):
194
+ """Convert a sequence of tokens back to a string"""
195
+ current_sub_tokens = []
196
+ out_string = ""
197
+
198
+ for token in tokens:
199
+ # Handle special tokens differently
200
+ if token in self.all_special_tokens:
201
+ out_string += self.sp_model.decode(current_sub_tokens) + token
202
+ current_sub_tokens = []
203
+ else:
204
+ current_sub_tokens.append(token)
205
+
206
+ # Add any remaining sub-tokens
207
+ out_string += self.sp_model.decode(current_sub_tokens)
208
+ return out_string
209
+
210
+ def prepare_for_model(self, *args, **kwargs):
211
+ """Prepare the tokenized inputs for the model"""
212
+ # Remove add_special_tokens if present (not supported)
213
+ if "add_special_tokens" in kwargs:
214
+ kwargs.pop("add_special_tokens")
215
+ return super().prepare_for_model(*args, **kwargs)
216
+
217
+ def save_vocabulary(
218
+ self, save_directory, filename_prefix: Optional[str] = None
219
+ ) -> Tuple[str]:
220
+ """
221
+ Save the vocabulary and special tokens file to a directory.
222
+
223
+ Args:
224
+ save_directory (`str`): The directory to save the vocabulary to
225
+ filename_prefix (`str`, optional): Prefix to add to the filename
226
+
227
+ Returns:
228
+ `Tuple(str)`: Paths to the saved files
229
+ """
230
+ if not os.path.isdir(save_directory):
231
+ logger.error(f"Vocabulary path ({save_directory}) should be a directory")
232
+ return
233
+
234
+ # Construct output vocabulary file path
235
+ out_vocab_file = os.path.join(
236
+ save_directory,
237
+ (filename_prefix + "-" if filename_prefix else "")
238
+ + self.vocab_files_names["vocab_file"],
239
+ )
240
+
241
+ # Copy or create vocabulary file
242
+ if os.path.abspath(self.vocab_file) != os.path.abspath(
243
+ out_vocab_file
244
+ ) and os.path.isfile(self.vocab_file):
245
+ copyfile(self.vocab_file, out_vocab_file)
246
+ elif not os.path.isfile(self.vocab_file):
247
+ with open(out_vocab_file, "wb") as fi:
248
+ content_spiece_model = self.sp_model.serialized_model_proto()
249
+ fi.write(content_spiece_model)
250
+
251
+ return (out_vocab_file,)
252
+
253
+ def _decode(self, *args, **kwargs):
254
+ """Decode token_id back to text"""
255
+ # Remove some parameters that aren't used
256
+ kwargs.pop("clean_up_tokenization_spaces", None)
257
+ kwargs.pop("spaces_between_special_tokens", None)
258
+
259
+ # Call parent decode method with specific parameters
260
+ return super()._decode(
261
+ *args,
262
+ **kwargs,
263
+ clean_up_tokenization_spaces=False,
264
+ spaces_between_special_tokens=False,
265
+ )
266
+
267
+ def _pad(
268
+ self,
269
+ encoded_inputs: Dict,
270
+ max_length: Optional[int] = None,
271
+ padding_strategy=PaddingStrategy.DO_NOT_PAD,
272
+ pad_to_multiple_of: Optional[int] = None,
273
+ return_attention_mask: Optional[bool] = None,
274
+ **kwargs
275
+ ) -> dict:
276
+ """Pad the encoded inputs to the specified length"""
277
+ if return_attention_mask is None:
278
+ return_attention_mask = "attention_mask" in self.model_input_names
279
+ if return_attention_mask:
280
+ required_input = encoded_inputs[self.model_input_names[0]]
281
+ if padding_strategy == PaddingStrategy.LONGEST:
282
+ max_length = len(required_input)
283
+
284
+ # Adjust max_length if needed for multiple of padding
285
+ if (
286
+ max_length is not None
287
+ and pad_to_multiple_of is not None
288
+ and (max_length % pad_to_multiple_of != 0)
289
+ ):
290
+ max_length = (
291
+ (max_length // pad_to_multiple_of) + 1
292
+ ) * pad_to_multiple_of
293
+
294
+ # Check if padding is needed
295
+ needs_to_be_padded = (
296
+ padding_strategy != PaddingStrategy.DO_NOT_PAD
297
+ and len(required_input) != max_length
298
+ )
299
+
300
+ # Handle attention mask if present
301
+ if (
302
+ "attention_mask" in encoded_inputs
303
+ and encoded_inputs["attention_mask"] is not None
304
+ ):
305
+ attention_mask = encoded_inputs.pop("attention_mask")
306
+ if isinstance(attention_mask, torch.Tensor):
307
+ attention_mask = attention_mask.numpy()
308
+ elif isinstance(attention_mask, list):
309
+ attention_mask = np.array(attention_mask)
310
+ elif not isinstance(attention_mask, np.ndarray):
311
+ raise ValueError(
312
+ f"Unexpected type {type(attention_mask)} of attention_mask"
313
+ )
314
+ else:
315
+ # Create default attention mask if none provided
316
+ attention_mask = np.tril(
317
+ np.ones((len(required_input), len(required_input)), dtype=np.int64)
318
+ )
319
+ attention_mask = np.expand_dims(attention_mask, axis=0)
320
+
321
+ # Perform padding if needed
322
+ if needs_to_be_padded:
323
+ difference = max_length - len(required_input)
324
+ if self.padding_side == "right":
325
+ if attention_mask.ndim == 1:
326
+ pad_width = [(0, difference)]
327
+ else:
328
+ pad_width = [(0, 0), (0, difference), (0, difference)]
329
+ elif self.padding_side == "left":
330
+ if attention_mask.ndim == 1:
331
+ pad_width = [(difference, 0)]
332
+ else:
333
+ pad_width = [(0, 0), (difference, 0), (difference, 0)]
334
+ else:
335
+ raise ValueError(
336
+ "Invalid padding side: " + str(self.padding_side)
337
+ )
338
+
339
+ attention_mask = np.pad(
340
+ attention_mask,
341
+ pad_width=pad_width,
342
+ mode="constant",
343
+ constant_values=0,
344
+ )
345
+
346
+ # Call parent padding method
347
+ encoded_inputs = super()._pad(
348
+ encoded_inputs,
349
+ max_length,
350
+ padding_strategy=padding_strategy,
351
+ pad_to_multiple_of=pad_to_multiple_of,
352
+ return_attention_mask=False,
353
+ )
354
+
355
+ # Add attention mask back if needed
356
+ if return_attention_mask:
357
+ encoded_inputs["attention_mask"] = attention_mask.tolist()
358
+
359
+ return encoded_inputs
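+ # Usage sketch (illustrative, not part of the original file): the tokenizer can be
+ # built directly from the SentencePiece model shipped in this repo, e.g.
+ #     tok = Ernie4_5_VLTokenizer(vocab_file="tokenizer.model")
+ #     ids = tok("Hello, ERNIE!")["input_ids"]
+ #     text = tok.decode(ids)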
360
+
361
+
362
+ def round_by_factor(number: int, factor: int) -> int:
363
+ """Returns the closest integer to 'number' that is divisible by 'factor'."""
364
+ return round(number / factor) * factor
365
+
366
+
367
+ def ceil_by_factor(number: int, factor: int) -> int:
368
+ """Returns the smallest integer greater than or equal to 'number' that is divisible by 'factor'."""
369
+ return math.ceil(number / factor) * factor
370
+
371
+
372
+ def floor_by_factor(number: int, factor: int) -> int:
373
+ """Returns the largest integer less than or equal to 'number' that is divisible by 'factor'."""
374
+ return math.floor(number / factor) * factor
375
+
376
+
377
+ def smart_resize(
378
+ height: int,
379
+ width: int,
380
+ factor: int = 28,
381
+ min_pixels: int = 4 * 28 * 28,
382
+ max_pixels: int = 16384 * 28 * 28,
383
+ ):
384
+ """
385
+ Rescales the image so that the following conditions are met:
386
+
387
+ 1. Both dimensions (height and width) are divisible by 'factor'.
388
+
389
+ 2. The total number of pixels is within the range ['min_pixels', 'max_pixels'].
390
+
391
+ 3. The aspect ratio of the image is maintained as closely as possible.
392
+ """
393
+ MAX_RATIO = 200
394
+ if max(height, width) / min(height, width) > MAX_RATIO:
395
+ if height > width:
396
+ new_width = max(factor, round_by_factor(width, factor))
397
+ new_height = floor_by_factor(new_width * MAX_RATIO, factor)
398
+ else:
399
+ new_height = max(factor, round_by_factor(height, factor))
400
+ new_width = floor_by_factor(new_height * MAX_RATIO, factor)
401
+
402
+ logger.info(
403
+ f"absolute aspect ratio must be smaller than {MAX_RATIO}, got {max(height, width) / min(height, width)},\
404
+ resize to {max(new_height, new_width) / min(new_height, new_width)}"
405
+ )
406
+
407
+ height = new_height
408
+ width = new_width
409
+
410
+ h_bar = max(factor, round_by_factor(height, factor))
411
+ w_bar = max(factor, round_by_factor(width, factor))
412
+ if h_bar * w_bar > max_pixels:
413
+ beta = math.sqrt((height * width) / max_pixels)
414
+ h_bar = floor_by_factor(height / beta, factor)
415
+ w_bar = floor_by_factor(width / beta, factor)
416
+ elif h_bar * w_bar < min_pixels:
417
+ beta = math.sqrt(min_pixels / (height * width))
418
+ h_bar = ceil_by_factor(height * beta, factor)
419
+ w_bar = ceil_by_factor(width * beta, factor)
420
+
421
+ if min_pixels > h_bar * w_bar or h_bar * w_bar > max_pixels:
422
+ raise ValueError(f"encounter invalid h_bar: {h_bar}, w_bar: {w_bar}")
423
+
424
+ return h_bar, w_bar
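+ # Illustrative example (not part of the original file): with the default factor=28
+ # and pixel budget, smart_resize(1080, 1920) snaps both sides to multiples of 28
+ # while keeping the aspect ratio, returning (1092, 1932), since
+ # round(1080 / 28) * 28 = 1092 and round(1920 / 28) * 28 = 1932.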
425
+
426
+
427
+ def is_scaled_image(image: np.ndarray) -> bool:
428
+ """
429
+ Checks to see whether the pixel values have already been rescaled to [0, 1].
430
+ """
431
+ if image.dtype == np.uint8:
432
+ return False
433
+
434
+ # It's possible the image has pixel values in [0, 255] but is of floating type
435
+ return np.min(image) >= 0 and np.max(image) <= 1
436
+
437
+
438
+ def make_batched_images(images) -> List[List[ImageInput]]:
439
+ """
440
+ Accepts images in list or nested list format, and makes a list of images for preprocessing.
441
+
442
+ Args:
443
+ images (`Union[List[List[ImageInput]], List[ImageInput], ImageInput]`):
444
+ The input image.
445
+
446
+ Returns:
447
+ list: A list of images.
448
+ """
449
+ if (
450
+ isinstance(images, (list, tuple))
451
+ and isinstance(images[0], (list, tuple))
452
+ and is_valid_image(images[0][0])
453
+ ):
454
+ return [img for img_list in images for img in img_list]
455
+
456
+ elif isinstance(images, (list, tuple)) and is_valid_image(images[0]):
457
+ return images
458
+
459
+ elif is_valid_image(images):
460
+ return [images]
461
+
462
+ raise ValueError(f"Could not make batched images from {images}")
463
+
464
+
465
+ # Copied from transformers.models.llava_next_video.image_processing_llava_next_video.make_batched_videos
466
+ def make_batched_videos(videos) -> List[VideoInput]:
467
+ """Accepts videos in list or nested list format and returns a list of videos, each a list of frames."""
468
+ if (
469
+ isinstance(videos, (list, tuple))
470
+ and isinstance(videos[0], (list, tuple))
471
+ and is_valid_image(videos[0][0])
472
+ ):
473
+ return videos
474
+
475
+ elif isinstance(videos, (list, tuple)) and is_valid_image(videos[0]):
476
+ if isinstance(videos[0], Image.Image):
477
+ return [videos]
478
+ elif len(videos[0].shape) == 4:
479
+ return [list(video) for video in videos]
480
+
481
+ elif is_valid_image(videos) and len(videos.shape) == 4:
482
+ return [list(videos)]
483
+
484
+ raise ValueError(f"Could not make batched video from {videos}")
485
+
486
+
487
+ class Ernie4_5_VLImageProcessor(BaseImageProcessor):
488
+ r"""
489
+ Constructs an adaptive image processor that dynamically resizes images based on their original dimensions.
490
+
491
+ Args:
492
+ do_resize (`bool`, *optional*, defaults to `True`):
493
+ Whether to resize the image's (height, width) dimensions.
494
+ resample (`PILImageResampling`, *optional*, defaults to `Resampling.BICUBIC`):
495
+ Resampling filter to use when resizing the image.
496
+ do_rescale (`bool`, *optional*, defaults to `True`):
497
+ Whether to rescale the image by the specified scale `rescale_factor`.
498
+ rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
499
+ Scale factor to use if rescaling the image.
500
+ do_normalize (`bool`, *optional*, defaults to `True`):
501
+ Whether to normalize the image.
502
+ image_mean (`float` or `List[float]`, *optional*, defaults to `[0.48145466, 0.4578275, 0.40821073]`):
503
+ Mean to use if normalizing the image. This is a float or list of floats for each channel in the image.
504
+ image_std (`float` or `List[float]`, *optional*, defaults to `[0.26862954, 0.26130258, 0.27577711]`):
505
+ Standard deviation to use if normalizing the image. This is a float or list of floats for each channel
506
+ in the image.
507
+ do_convert_rgb (`bool`, *optional*, defaults to `True`):
508
+ Whether to convert the image to RGB.
509
+ min_pixels (`int`, *optional*, defaults to `56 * 56`):
510
+ The min pixels of the image to resize the image.
511
+ max_pixels (`int`, *optional*, defaults to `28 * 28 * 1280`):
512
+ The max pixels of the image to resize the image.
513
+ patch_size (`int`, *optional*, defaults to 14):
514
+ The spatial patch size of the vision encoder.
515
+ temporal_conv_size (`int`, *optional*, defaults to 2):
516
+ The temporal conv size in resampler.
517
+ merge_size (`int`, *optional*, defaults to 2):
518
+ The merge size of the vision encoder to llm encoder.
519
+ """
520
+
521
+ model_input_names = [
522
+ "pixel_values",
523
+ "image_grid_thw",
524
+ "pixel_values_videos",
525
+ "video_grid_thw",
526
+ ]
527
+
528
+ def __init__(
529
+ self,
530
+ do_resize: bool = True,
531
+ resample: PILImageResampling = PILImageResampling.BICUBIC,
532
+ do_rescale: bool = True,
533
+ rescale_factor: Union[float, List[float]] = 1 / 255,
534
+ do_normalize: bool = True,
535
+ image_mean: Optional[Union[float, List[float]]] = None,
536
+ image_std: Optional[Union[float, List[float]]] = None,
537
+ do_convert_rgb: bool = True,
538
+ min_pixels: int = 56 * 56,
539
+ max_pixels: int = 28 * 28 * 1280,
540
+ patch_size: int = 14,
541
+ temporal_conv_size: int = 2,
542
+ merge_size: int = 2,
543
+ **kwargs,
544
+ ) -> None:
545
+ """init"""
546
+ super().__init__(**kwargs)
547
+ self.do_resize = do_resize
548
+ self.resample = resample
549
+ self.do_rescale = do_rescale
550
+ self.rescale_factor = rescale_factor
551
+ self.do_normalize = do_normalize
552
+ self.image_mean = image_mean if image_mean is not None else OPENAI_CLIP_MEAN
553
+ self.image_std = image_std if image_std is not None else OPENAI_CLIP_STD
554
+ self.min_pixels = min_pixels
555
+ self.max_pixels = max_pixels
556
+ self.patch_size = patch_size
557
+ self.temporal_conv_size = temporal_conv_size
558
+ self.merge_size = merge_size
559
+ self.size = {"min_pixels": min_pixels, "max_pixels": max_pixels}
560
+ self.do_convert_rgb = do_convert_rgb
561
+
562
+ def set_pixels(self, min_pixels=None, max_pixels=None, msg=""):
563
+ """set_pixels"""
564
+ if min_pixels is not None:
565
+ assert (
566
+ isinstance(min_pixels, int) and min_pixels >= 0
567
+ ), "min_pixels must be a non-negative int"
568
+ logger.info(
569
+ f"{msg} Ernie4_5_VLImageProcessor set min_pixels = {min_pixels}"
570
+ )
571
+ self.min_pixels = min_pixels
572
+ self.size["min_pixels"] = int(min_pixels)
573
+ if max_pixels is not None:
574
+ assert (
575
+ isinstance(max_pixels, int) and max_pixels > 0
576
+ ), "max_pixels must be a positive int"
577
+ logger.info(
578
+ f"{msg} Ernie4_5_VLImageProcessor set max_pixels = {max_pixels}"
579
+ )
580
+ self.max_pixels = max_pixels
581
+ self.size["max_pixels"] = int(max_pixels)
582
+
583
+ def get_smarted_resize(self, height, width, min_pixels=None, max_pixels=None):
584
+ """Return the smart-resized (height, width) and the corresponding patch grid (grid_h, grid_w)."""
585
+ actual_min_pixels = min_pixels if min_pixels is not None else self.min_pixels
586
+ actual_max_pixels = max_pixels if max_pixels is not None else self.max_pixels
587
+ resized_height, resized_width = smart_resize(
588
+ height,
589
+ width,
590
+ factor=self.patch_size * self.merge_size,
591
+ min_pixels=actual_min_pixels,
592
+ max_pixels=actual_max_pixels,
593
+ )
594
+ return (resized_height, resized_width), (
595
+ resized_height // self.patch_size,
596
+ resized_width // self.patch_size,
597
+ )
598
+
599
+ def _preprocess(
600
+ self,
601
+ images: Union[ImageInput, VideoInput],
602
+ do_resize: bool = True,
603
+ resample: PILImageResampling = None,
604
+ do_rescale: bool = True,
605
+ rescale_factor: float = 1 / 255,
606
+ do_normalize: bool = True,
607
+ image_mean: Optional[Union[float, List[float]]] = None,
608
+ image_std: Optional[Union[float, List[float]]] = None,
609
+ do_convert_rgb: bool = False,
610
+ data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
611
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
612
+ predetermined_grid_thw=None,
613
+ ):
614
+ """
615
+ Preprocess an image or batch of images. Copy of the `preprocess` method from `CLIPImageProcessor`.
616
+
617
+ Args:
618
+ images (`ImageInput` or `VideoInput`):
619
+ Image or batch of images to preprocess. Expects pixel values ranging from 0 to 255.
620
+ If pixel values range from 0 to 1, set `do_rescale=False`.
621
+ do_resize (`bool`, *optional*, defaults to `self.do_resize`):
622
+ Whether to resize the image.
623
+ resample (`PILImageResampling`, *optional*, defaults to `self.resample`):
624
+ Resampling filter to use if resizing the image. This can be one of the `PILImageResampling` enums.
625
+ do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
626
+ Whether to rescale the image.
627
+ rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
628
+ Scale factor to use if rescaling the image.
629
+ do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
630
+ Whether to normalize the image.
631
+ image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
632
+ Mean to use if normalizing the image.
633
+ Can be a float or a list of floats corresponding to the number of channels in the image.
634
+ image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
635
+ Standard deviation to use if normalizing the image.
636
+ Can be a float or a list of floats corresponding to the number of channels in the image.
637
+ do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
638
+ Whether to convert the image to RGB.
639
+ data_format (`ChannelDimension`, *optional*, defaults to `ChannelDimension.FIRST`):
640
+ The channel dimension format for the output image. Can be one of:
641
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
642
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
643
+ - Unset: Use the channel dimension format of the input image.
644
+ input_data_format (`ChannelDimension` or `str`, *optional*):
645
+ The channel dimension format for the input image. Can be one of:
646
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
647
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
648
+ - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
650
+ """
651
+ images = make_list_of_images(images)
652
+
653
+ if do_convert_rgb:
654
+ images = [convert_to_rgb(image) for image in images]
655
+
656
+ # All transformations expect numpy arrays.
657
+ images = [to_numpy_array(image) for image in images]
658
+
659
+ if is_scaled_image(images[0]) and do_rescale:
660
+ logger.warning_once(
661
+ "It looks like you are trying to rescale already rescaled images. If the input"
662
+ " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again."
663
+ )
664
+ if input_data_format is None:
665
+ # We assume that all images have the same channel dimension format.
666
+ input_data_format = infer_channel_dimension_format(images[0])
667
+
668
+ height, width = get_image_size(images[0], channel_dim=input_data_format)
669
+ resized_height, resized_width = height, width
670
+ processed_images = []
671
+
672
+ if predetermined_grid_thw is not None:
673
+ assert len(predetermined_grid_thw) == len(
674
+ images
675
+ ), f"len(predetermined_grid_thw)={len(predetermined_grid_thw)} must equal len(images)={len(images)}"
676
+
677
+ for img_idx, image in enumerate(images):
678
+ if do_resize:
679
+ if predetermined_grid_thw is not None:
680
+ (resized_height, resized_width) = predetermined_grid_thw[img_idx]
681
+ resized_height *= self.patch_size
682
+ resized_width *= self.patch_size
683
+ else:
684
+ resized_height, resized_width = smart_resize(
685
+ height,
686
+ width,
687
+ factor=self.patch_size * self.merge_size,
688
+ min_pixels=self.min_pixels,
689
+ max_pixels=self.max_pixels,
690
+ )
691
+
692
+ image = resize(
693
+ image,
694
+ size=(resized_height, resized_width),
695
+ resample=resample,
696
+ data_format=input_data_format,
697
+ )
698
+ if do_rescale:
699
+ image = rescale(
700
+ image, scale=rescale_factor, data_format=input_data_format
701
+ )
702
+
703
+ if do_normalize:
704
+ image = normalize(
705
+ image=image,
706
+ mean=image_mean,
707
+ std=image_std,
708
+ data_format=input_data_format,
709
+ )
710
+
711
+ image = to_channel_dimension_format(
712
+ image, data_format, input_channel_dim=input_data_format
713
+ ) # [C, H, W]
714
+
715
+ processed_images.append(image)
716
+ patches = np.array(processed_images)
717
+ if data_format == ChannelDimension.LAST:
718
+ patches = patches.transpose([0, 3, 1, 2])
719
+
720
+ channel = patches.shape[1] # [time, C, H, W]
721
+ grid_t = patches.shape[0]
722
+ grid_h, grid_w = (
723
+ resized_height // self.patch_size,
724
+ resized_width // self.patch_size,
725
+ )
726
+ patches = patches.reshape(
727
+ [
728
+ grid_t,
729
+ channel,
730
+ grid_h // self.merge_size,
731
+ self.merge_size,
732
+ self.patch_size,
733
+ grid_w // self.merge_size,
734
+ self.merge_size,
735
+ self.patch_size,
736
+ ]
737
+ )
738
+ # [grid_t, grid_h/merge_size, grid_w/merge_size, merge_size, merge_size, C, psz, psz]
739
+ patches = patches.transpose([0, 2, 5, 3, 6, 1, 4, 7])
740
+
741
+ flatten_patches = patches.reshape(
742
+ [grid_t * grid_h * grid_w, channel * self.patch_size * self.patch_size]
743
+ ) # [grid_t * grid_h * grid_w, C * psz * psz]
744
+
745
+ return flatten_patches, (grid_t, grid_h, grid_w)
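+ # Shape sketch (illustrative): if the resized image ends up 1092x1932 (RGB) with
+ # patch_size=14 and merge_size=2, then grid_t=1, grid_h=78, grid_w=138 and
+ # flatten_patches has shape [1 * 78 * 138, 3 * 14 * 14] = [10764, 588].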
746
+
747
+ def preprocess(
748
+ self,
749
+ images: ImageInput,
750
+ videos: VideoInput = None,
751
+ do_resize: bool = True,
752
+ size: Optional[Union[int, List[int]]] = None,
753
+ resample: PILImageResampling = None,
754
+ do_rescale: bool = True,
755
+ rescale_factor: float = 1 / 255,
756
+ do_normalize: bool = True,
757
+ image_mean: Optional[Union[float, List[float]]] = None,
758
+ image_std: Optional[Union[float, List[float]]] = None,
759
+ do_convert_rgb: bool = False,
760
+ return_tensors: Optional[Union[str, TensorType]] = None,
761
+ data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
762
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
763
+ predetermined_grid_thw=None,
764
+ ):
765
+ """
766
+ Args:
767
+ images (`ImageInput`):
768
+ Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If
769
+ passing in images with pixel values between 0 and 1, set `do_rescale=False`.
770
+ videos (`VideoInput`):
771
+ Video to preprocess. Expects a single or batch of videos with pixel values ranging from 0 to 255. If
772
+ passing in videos with pixel values between 0 and 1, set `do_rescale=False`.
773
+ do_resize (`bool`, *optional*, defaults to `self.do_resize`):
774
+ Whether to resize the image.
775
+ size (`Dict[str, int]`, *optional*, defaults to `self.size`):
776
+ Size of the image after resizing. Shortest edge of the image is resized to size["shortest_edge"], with
777
+ the longest edge resized to keep the input aspect ratio.
778
+ resample (`int`, *optional*, defaults to `self.resample`):
779
+ Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only
780
+ has an effect if `do_resize` is set to `True`.
781
+ do_rescale (`bool`, *optional*, defaults to `self.do_rescale`):
782
+ Whether to rescale the image.
783
+ rescale_factor (`float`, *optional*, defaults to `self.rescale_factor`):
784
+ Rescale factor to rescale the image by if `do_rescale` is set to `True`.
785
+ do_normalize (`bool`, *optional*, defaults to `self.do_normalize`):
786
+ Whether to normalize the image.
787
+ image_mean (`float` or `List[float]`, *optional*, defaults to `self.image_mean`):
788
+ Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
789
+ image_std (`float` or `List[float]`, *optional*, defaults to `self.image_std`):
790
+ Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to
791
+ `True`.
792
+ do_convert_rgb (`bool`, *optional*, defaults to `self.do_convert_rgb`):
793
+ Whether to convert the image to RGB.
794
+ return_tensors (`str` or `TensorType`, *optional*):
795
+ The type of tensors to return. Can be one of:
796
+ - Unset: Return a list of `np.ndarray`.
797
+ - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
798
+ - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
799
+ data_format (`ChannelDimension` or `str`, *optional*, defaults to `ChannelDimension.FIRST`):
800
+ The channel dimension format for the output image. Can be one of:
801
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
802
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
803
+ - Unset: Use the channel dimension format of the input image.
804
+ input_data_format (`ChannelDimension` or `str`, *optional*):
805
+ The channel dimension format for the input image. If unset, the channel dimension format is inferred
806
+ from the input image. Can be one of:
807
+ - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
808
+ - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
809
+ - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
810
+
811
+ """
812
+ do_resize = do_resize if do_resize is not None else self.do_resize
813
+ size = size if size is not None else self.size
814
+ resample = resample if resample is not None else self.resample
815
+ do_rescale = do_rescale if do_rescale is not None else self.do_rescale
816
+ rescale_factor = (
817
+ rescale_factor if rescale_factor is not None else self.rescale_factor
818
+ )
819
+ do_normalize = do_normalize if do_normalize is not None else self.do_normalize
820
+ image_mean = image_mean if image_mean is not None else self.image_mean
821
+ image_std = image_std if image_std is not None else self.image_std
822
+ do_convert_rgb = (
823
+ do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb
824
+ )
825
+
826
+ if images is not None:
827
+ images = make_batched_images(images)
828
+
829
+ if images is not None and not valid_images(images):
830
+ raise ValueError(
831
+ "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, "
832
+ "torch.Tensor."
833
+ )
834
+
835
+ data = {}
836
+ if images is not None:
837
+ pixel_values, vision_grid_thws = [], []
838
+ for img_idx, image in enumerate(images):
839
+ if predetermined_grid_thw is not None:
840
+ predetermined_grid_thw_one = [predetermined_grid_thw[img_idx]]
841
+ else:
842
+ predetermined_grid_thw_one = None
843
+ patches, image_grid_thw = self._preprocess(
844
+ image,
845
+ do_resize=do_resize,
846
+ resample=resample,
847
+ do_rescale=do_rescale,
848
+ rescale_factor=rescale_factor,
849
+ do_normalize=do_normalize,
850
+ image_mean=image_mean,
851
+ image_std=image_std,
852
+ data_format=data_format,
853
+ do_convert_rgb=do_convert_rgb,
854
+ input_data_format=input_data_format,
855
+ predetermined_grid_thw=predetermined_grid_thw_one,
856
+ )
857
+ pixel_values.extend(patches)
858
+ vision_grid_thws.append(image_grid_thw)
859
+ pixel_values = np.array(pixel_values)
860
+ vision_grid_thws = np.array(vision_grid_thws)
861
+ data.update(
862
+ {"pixel_values": pixel_values, "image_grid_thw": vision_grid_thws}
863
+ )
864
+
865
+ if videos is not None:
866
+ videos = make_batched_videos(videos)
867
+ pixel_values, vision_grid_thws = [], []
868
+ for images in videos:
869
+ patches, video_grid_thw = self._preprocess(
870
+ images,
871
+ do_resize=do_resize,
872
+ resample=resample,
873
+ do_rescale=do_rescale,
874
+ rescale_factor=rescale_factor,
875
+ do_normalize=do_normalize,
876
+ image_mean=image_mean,
877
+ image_std=image_std,
878
+ data_format=data_format,
879
+ do_convert_rgb=do_convert_rgb,
880
+ input_data_format=input_data_format,
881
+ predetermined_grid_thw=predetermined_grid_thw,
882
+ )
883
+ pixel_values.extend(patches)
884
+ vision_grid_thws.append(video_grid_thw)
885
+ pixel_values = np.array(pixel_values)
886
+ vision_grid_thws = np.array(vision_grid_thws)
887
+
888
+ data.update(
889
+ {
890
+ "pixel_values_videos": pixel_values,
891
+ "video_grid_thw": vision_grid_thws,
892
+ }
893
+ )
894
+
895
+ return BatchFeature(data=data, tensor_type=return_tensors)
896
+
897
+
898
+ RAW_VIDEO_DIR = "./download_tmp/raw_video/"
899
+ RAW_IMAGE_DIR = "./download_tmp/raw_images/"
900
+ EXTRACTED_FRAME_DIR = "./download_tmp/extracted_frames/"
901
+ TMP_DIR = "./download_tmp/upload_tmp/"
902
+
903
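+ # Roboto-Regular.ttf is used to stamp timestamps onto video frames; if it is not
+ # found next to this file, it is downloaded once at import time.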
+ FONT_PATH = os.path.join(Path(__file__).parent.absolute(), "Roboto-Regular.ttf")
904
+ if not os.path.exists(FONT_PATH):
905
+ ttf = requests.get("https://paddlenlp.bj.bcebos.com/vision-language-models/materials/Roboto-Regular.ttf")
906
+ open(FONT_PATH, "wb").write(ttf.content)
907
+
908
+
909
+ def is_gif(data: bytes) -> bool:
910
+ """
911
+ Check whether a byte string is a GIF based on its magic header.
912
+ """
913
+ return data[:6] in (b"GIF87a", b"GIF89a")
914
+
915
+
916
+ class VideoReaderWrapper(decord.VideoReader):
917
+ """
918
+ decord.VideoReader wrapper that works around a memory leak
919
+
920
+ https://github.com/dmlc/decord/issues/208
921
+ """
922
+
923
+ def __init__(self, video_path, *args, **kwargs):
924
+ with ntf(delete=True, suffix=".gif") as gif_file:
925
+ gif_input = None
926
+ self.original_file = None
927
+ if isinstance(video_path, str):
928
+ self.original_file = video_path
929
+ if video_path.lower().endswith(".gif"):
930
+ gif_input = video_path
931
+ elif isinstance(video_path, bytes):
932
+ if is_gif(video_path):
933
+ gif_file.write(video_path)
934
+ gif_input = gif_file.name
935
+ elif isinstance(video_path, io.BytesIO):
936
+ video_path.seek(0)
937
+ tmp_bytes = video_path.read()
938
+ video_path.seek(0)
939
+ if is_gif(tmp_bytes):
940
+ gif_file.write(tmp_bytes)
941
+ gif_input = gif_file.name
942
+
943
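+ # GIF inputs are first transcoded to a temporary MP4 with moviepy before being
+ # handed to decord.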
+ if gif_input is not None:
944
+ try:
945
+ # moviepy 1.0
946
+ import moviepy.editor as mp
947
+ except ImportError:
948
+ # moviepy 2.0
949
+ import moviepy as mp
950
+ clip = mp.VideoFileClip(gif_input)
951
+ mp4_file = ntf(delete=False, suffix=".mp4")
952
+ clip.write_videofile(mp4_file.name, logger=None)
953
+ clip.close()
954
+ video_path = mp4_file.name
955
+ self.original_file = video_path
956
+
957
+ super().__init__(video_path, *args, **kwargs)
958
+ self.seek(0)
959
+
960
+ def __getitem__(self, key):
961
+ frames = super().__getitem__(key)
962
+ self.seek(0)
963
+ return frames
964
+
965
+ def __del__(self):
966
+ if self.original_file and os.path.exists(self.original_file):
967
+ os.remove(self.original_file)
968
+
969
+
970
+ def get_filename(url=None):
971
+ """
972
+ Get Filename
973
+ """
974
+ if url is None:
975
+ return str(uuid.uuid4()).replace("-", "")
976
+ t = datetime.datetime.now()
977
+ if not isinstance(url, bytes):
978
+ url = url.encode("utf-8")
979
+
980
+ md5_hash = hashlib.md5(url).hexdigest()
981
+ pid = os.getpid()
982
+ tid = threading.get_ident()
983
+
984
+ # Omit the file suffix to avoid errors when the file is later saved as a JPG
985
+ image_filename = f"{t.year}-{t.month:02d}-{t.day:02d}-{pid}-{tid}-{md5_hash}"
986
+ return image_filename
987
+
988
+
989
+ def file_download(url, download_dir, save_to_disk=False, retry=0, retry_interval=3):
990
+ """
991
+ Description: Download a resource; if `url` is already a PIL image or video reader, return it directly.
992
+ Args:
993
+ url (str, PIL.Image, or VideoReaderWrapper): an http(s) URL, a local file path, or a base64-encoded payload
994
+ download_dir: directory the file is written to when save_to_disk=True; the saved path is returned
995
+ save_to_disk: whether to persist the downloaded bytes to a local file
996
+ """
997
+
998
+ if isinstance(url, Image.Image):
999
+ return url
1000
+ elif isinstance(url, VideoReaderWrapper):
1001
+ return url
1002
+ elif url.startswith("http"):
1003
+ response = requests.get(url)
1004
+ bytes_data = response.content
1005
+ elif os.path.isfile(url):
1006
+ if save_to_disk:
1007
+ return url
1008
+ bytes_data = open(url, "rb").read()
1009
+ else:
1010
+ bytes_data = base64.b64decode(url)
1011
+ if not save_to_disk:
1012
+ return bytes_data
1013
+
1014
+ download_path = os.path.join(download_dir, get_filename(url))
1015
+ Path(download_path).parent.mkdir(parents=True, exist_ok=True)
1016
+ with open(download_path, "wb") as f:
1017
+ f.write(bytes_data)
1018
+ return download_path
1019
+
1020
+
1021
+ def get_downloadable(
1022
+ url, download_dir=RAW_VIDEO_DIR, save_to_disk=False, retry=0, retry_interval=3
1023
+ ):
1024
+ """download video and store it in the disk
1025
+
1026
+ return downloaded **path** if save_to_disk is set to true
1027
+ return downloaded **bytes** if save_to_disk is set to false
1028
+ """
1029
+
1030
+ if not os.path.exists(download_dir):
1031
+ os.makedirs(download_dir)
1032
+ downloaded_path = file_download(
1033
+ url,
1034
+ download_dir,
1035
+ save_to_disk=save_to_disk,
1036
+ retry=retry,
1037
+ retry_interval=retry_interval,
1038
+ )
1039
+ return downloaded_path
1040
+
1041
+
1042
+ def get_downloadable_image(
1043
+ download_path, need_exif_info, retry_max_time=0, retry_interval=3
1044
+ ):
1045
+ """
1046
+ Download an image and return it (with EXIF info if requested) after basic processing
1047
+ """
1048
+
1049
+ def get_image_exif(image):
1050
+ exif_data = image._getexif()
1051
+ exif_info = {}
1052
+ if exif_data is not None:
1053
+ for tag, value in exif_data.items():
1054
+ tag_name = TAGS.get(tag, tag)
1055
+ exif_info[tag_name] = value.strip()
1056
+ return exif_info
1057
+
1058
+ def has_transparent_background(img):
1059
+ """has_transparent_background"""
1060
+ if img.mode in ("RGBA", "LA") or (
1061
+ img.mode == "P" and "transparency" in img.info
1062
+ ):
1063
+ # Check for any pixel with alpha channel less than 255 (fully opaque)
1064
+ alpha = img.convert("RGBA").split()[-1]
1065
+ if alpha.getextrema()[0] < 255:
1066
+ return True
1067
+ return False
1068
+
1069
+ def add_white_background(img):
1070
+ """
1071
+ Add a white background to a transparent background image
1072
+ """
1073
+ if img.mode != "RGBA":
1074
+ img = img.convert("RGBA")
1075
+ # Create an image with a white background and the same size as the original image
1076
+ img_white_background = Image.new("RGBA", img.size, (255, 255, 255))
1077
+
1078
+ # Paste the original image onto a white background
1079
+ img_white_background.paste(img, (0, 0), img)
1080
+
1081
+ return img_white_background
1082
+
1083
+ def change_I16_to_L(img):
1084
+ """
1085
+ Convert image from I;16 mode to L mode
1086
+ """
1087
+ # Since the point function in I mode only supports addition, subtraction, and multiplication,
1088
+ # the following * (1 / 256) cannot be changed to division.
1089
+ return img.point(lambda i: i * (1 / 256)).convert("L")
1090
+
1091
+ image = get_downloadable(
1092
+ download_path,
1093
+ save_to_disk=False,
1094
+ retry=retry_max_time,
1095
+ retry_interval=retry_interval,
1096
+ )
1097
+ if isinstance(image, Image.Image):
1098
+ pil_image = image
1099
+ else:
1100
+ pil_image = Image.open(io.BytesIO(image))
1101
+ if need_exif_info:
1102
+ try:
1103
+ exif_info = get_image_exif(pil_image)
1104
+ except Exception as why:
1105
+ exif_info = {}
1106
+ else:
1107
+ exif_info = {}
1108
+
1109
+ try:
1110
+ if pil_image.mode == "I;16":
1111
+ pil_image = change_I16_to_L(pil_image)
1112
+ if has_transparent_background(pil_image):
1113
+ pil_image = add_white_background(pil_image)
1114
+ except Exception as e:
1115
+ pass
1116
+
1117
+ return pil_image.convert("RGB"), exif_info
1118
+
1119
+
1120
+ def read_video_decord(video_path, save_to_disk):
1121
+ """get reader and meta by decord"""
1122
+ video_path = get_downloadable(video_path, save_to_disk=save_to_disk)
1123
+ if isinstance(video_path, VideoReaderWrapper):
1124
+ video_reader = video_path
1125
+ else:
1126
+ if isinstance(video_path, bytes):
1127
+ video_path = io.BytesIO(video_path)
1128
+ video_reader = VideoReaderWrapper(video_path, num_threads=1)
1129
+ vlen = len(video_reader)
1130
+ fps = video_reader.get_avg_fps()
1131
+ duration = vlen / float(fps)
1132
+
1133
+ video_meta = {"fps": fps, "duration": duration, "num_of_frame": vlen}
1134
+
1135
+ return video_reader, video_meta, video_path
1136
+
1137
+
1138
+ def get_frame_indices(
1139
+ vlen,
1140
+ target_frames=-1,
1141
+ target_fps=-1,
1142
+ frames_sample="middle",
1143
+ fix_start=None,
1144
+ input_fps=-1,
1145
+ ):
1146
+ """get_frame_indices"""
1147
+ assert frames_sample in ["rand", "middle", "leading"]
1148
+ if target_frames > 0:
1149
+ assert target_fps <= 0, "target_fps must be negative if target_frames is given."
1150
+ if target_frames > vlen:
1151
+ acc_samples = vlen
1152
+ logger.info(
1153
+ f"target_frames={target_frames} is larger than video length {vlen}, "
1154
+ f"will sample {acc_samples} frames."
1155
+ )
1156
+ else:
1157
+ acc_samples = target_frames
1158
+ logger.debug(
1159
+ f"sampling at target_frames={target_frames}, frames_sample={frames_sample}"
1160
+ )
1161
+
1162
+ # split the video into `acc_samples` intervals, and sample from each interval.
1163
+ intervals = np.linspace(start=0, stop=vlen, num=acc_samples + 1).astype(int)
1164
+ ranges = []
1165
+ for idx, interv in enumerate(intervals[:-1]):
1166
+ ranges.append((interv, intervals[idx + 1] - 1))
1167
+ if frames_sample == "rand":
1168
+ try:
1169
+ frame_indices = [random.choice(range(x[0], x[1])) for x in ranges]
1170
+ except Exception as e:
1171
+ frame_indices = np.random.permutation(vlen)[:acc_samples]
1172
+ frame_indices.sort()
1173
+ frame_indices = list(frame_indices)
1174
+ elif fix_start is not None:
1175
+ frame_indices = [x[0] + fix_start for x in ranges]
1176
+ elif frames_sample == "leading":
1177
+ frame_indices = [x[0] for x in ranges]
1178
+ elif frames_sample == "middle":
1179
+ frame_indices = [(x[0] + x[1]) // 2 for x in ranges]
1180
+ else:
1181
+ raise NotImplementedError
1182
+
1183
+ elif target_fps > 0:
1184
+ assert (
1185
+ target_frames <= 0
1186
+ ), "target_frames must be negative if target_fps is given."
1187
+ assert input_fps > 0, "input_fps must be provided if target_fps is given."
1188
+ logger.info(f"sampling at fps={target_fps}, frames_sample={frames_sample}")
1189
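+ # e.g. vlen=300 at input_fps=30 (a 10 s clip) with target_fps=2 and "middle" sampling:
+ # delta=0.5 s, frame_seconds=[0.25, 0.75, ..., 9.75] -> 20 sampled indices.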
+ duration = float(vlen) / input_fps
1190
+ delta = (
1191
+ 1 / target_fps
1192
+ ) # gap between frames, this is also the clip length each frame represents
1193
+ if frames_sample == "middle":
1194
+ frame_seconds = np.arange(0 + delta / 2, duration + delta / 2, delta)
1195
+ elif frames_sample == "leading":
1196
+ frame_seconds = np.arange(0, duration, delta)
1197
+ if frames_sample == "rand":
1198
+ frame_seconds = np.arange(0 + delta / 2, duration + delta / 2, delta)
1199
+ rand_offset = np.random.rand(*(frame_seconds.shape)) - 0.5
1200
+ frame_seconds += rand_offset * delta
1201
+ frame_indices = np.around(frame_seconds * input_fps).astype(int)
1202
+ frame_indices = [e for e in frame_indices if e < vlen]
1203
+
1204
+ else:
1205
+ raise ValueError(
1206
+ "Must provide either positive target_fps or positive target_frames."
1207
+ )
1208
+
1209
+ return frame_indices
1210
+
1211
+
1212
+ def read_frames_decord(
1213
+ video_path,
1214
+ video_reader,
1215
+ video_meta,
1216
+ target_frames=-1,
1217
+ target_fps=-1,
1218
+ frames_sample="middle",
1219
+ fix_start=None,
1220
+ save_to_disk=False,
1221
+ cache_dir=EXTRACTED_FRAME_DIR,
1222
+ frame_indices=None,
1223
+ tol=10,
1224
+ ):
1225
+ """get frames by decord"""
1226
+
1227
+ if frame_indices is None:
1228
+ frame_indices = get_frame_indices(
1229
+ video_meta["num_of_frame"],
1230
+ target_frames=target_frames,
1231
+ target_fps=target_fps,
1232
+ frames_sample=frames_sample,
1233
+ fix_start=fix_start,
1234
+ input_fps=video_meta["fps"],
1235
+ )
1236
+
1237
+ frames = []
1238
+ for frame_indice_index in range(0, len(frame_indices)):
1239
+ frame_indice = frame_indices[frame_indice_index]
1240
+ try:
1241
+ frames.append(video_reader[frame_indice].asnumpy())  # (H, W, C); stacked to (T, H, W, C) below
1242
+ except Exception as e:
1243
+ logger.debug(f"encounter error when get frame: {frame_indice}, error: {e}")
1244
+ previous_counter = 1
1245
+ later_counter = 1
1246
+ previous_after_flag = True
1247
+ if frame_indice == 0 or frame_indice == len(video_reader) - 1:
1248
+ cur_tol = tol * 2
1249
+ else:
1250
+ cur_tol = tol
1251
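+ # Alternately probe earlier and later neighbours (up to cur_tol steps each way)
+ # until a decodable replacement frame is found.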
+ while previous_counter < cur_tol or later_counter < cur_tol:
1252
+ if previous_after_flag:
1253
+ if frame_indice - previous_counter < 0:
1254
+ previous_counter += 1
1255
+ previous_after_flag = not previous_after_flag
1256
+ continue
1257
+ try:
1258
+ frames.append(
1259
+ video_reader[frame_indice - previous_counter].asnumpy()
1260
+ )
1261
+ logger.info(
1262
+ f"replace {frame_indice}-th frame with {frame_indice-previous_counter}-th frame"
1263
+ )
1264
+ frame_indices[frame_indice_index] = (
1265
+ frame_indice - previous_counter
1266
+ )
1267
+ break
1268
+ except Exception as e:
1269
+ previous_counter += 1
1270
+ else:
1271
+ if frame_indice + later_counter >= len(video_reader):
1272
+ later_counter += 1
1273
+ previous_after_flag = not previous_after_flag
1274
+ continue
1275
+ try:
1276
+ frames.append(
1277
+ video_reader[frame_indice + later_counter].asnumpy()
1278
+ )
1279
+ logger.info(
1280
+ f"replace {frame_indice}-th frame with {frame_indice+later_counter}-th frame"
1281
+ )
1282
+ frame_indices[frame_indice_index] = frame_indice + later_counter
1283
+ break
1284
+ except Exception as e:
1285
+ later_counter += 1
1286
+ previous_after_flag = not previous_after_flag
1287
+
1288
+ frames = np.stack(frames, axis=0)
1289
+ assert len(frames) == len(
1290
+ frame_indices
1291
+ ), f"len(frames): {len(frames)} != len(frame_indices): {len(frame_indices)}"
1292
+
1293
+ ret = []
1294
+
1295
+ url_sha1 = get_filename()
1296
+ for idx, frame in enumerate(frames):
1297
+ tmp = Image.fromarray(frame, "RGB")
1298
+ if save_to_disk:
1299
+ save_path = os.path.join(cache_dir, f"{url_sha1}", f"{idx}.png")
1300
+ if not os.path.exists(os.path.dirname(save_path)):
1301
+ os.makedirs(os.path.dirname(save_path))
1302
+ tmp.save(save_path)
1303
+ tmp = save_path
1304
+ ret.append(tmp)
1305
+
1306
+ time_stamps = [
1307
+ frame_idx * video_meta["duration"] / video_meta["num_of_frame"]
1308
+ for frame_idx in frame_indices
1309
+ ]
1310
+
1311
+ return ret, frame_indices, time_stamps
1312
+
1313
+
1314
+ def render_single_image_with_timestamp(
1315
+ image: Image.Image, number: str, rate: float, font_path: str = FONT_PATH
1316
+ ):
1317
+ """
1318
+ Render a timestamp string onto a PIL image.
1319
+ The font size is `rate` * min(width, height).
1320
+ The text is drawn in black with a white outline whose width is 10% of the font size.
1321
+ Returns the modified Image object.
1322
+ """
1323
+ draw = ImageDraw.Draw(image)
1324
+ width, height = image.size
1325
+ font_size = int(min(width, height) * rate)
1326
+ outline_size = int(font_size * 0.1)
1327
+ font = ImageFont.truetype(font_path, font_size)
1328
+ x = 0
1329
+ y = 0
1330
+
1331
+ # Draw a black timestamp with a white border
1332
+ draw.text(
1333
+ (x, y),
1334
+ number,
1335
+ font=font,
1336
+ fill=(0, 0, 0),
1337
+ stroke_width=outline_size,
1338
+ stroke_fill=(255, 255, 255),
1339
+ )
1340
+
1341
+ return image
1342
+
1343
+
1344
+ def timestamp_converting(time_stamp_in_seconds):
1345
+ """
1346
+ convert timestamp format from seconds to hr:min:sec
1347
+ """
1348
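+ # e.g. 3725.5 seconds -> "01:02:05.50"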
+ # get hours
1349
+ hours = 0
1350
+ while time_stamp_in_seconds >= 3600:
1351
+ hours += 1
1352
+ time_stamp_in_seconds -= 3600
1353
+ # get minutes
1354
+ mins = 0
1355
+ while time_stamp_in_seconds >= 60:
1356
+ mins += 1
1357
+ time_stamp_in_seconds -= 60
1358
+ time_hours = f"{int(hours):02d}"
1359
+ time_mins = f"{int(mins):02d}"
1360
+ time_secs = f"{time_stamp_in_seconds:05.02f}"
1361
+ fi_time_stamp = time_hours + ":" + time_mins + ":" + time_secs
1362
+
1363
+ return fi_time_stamp
1364
+
1365
+
1366
+ def render_frame_timestamp(frame, timestamp, font_rate=0.1):
1367
+ """
1368
+ Render a timestamp onto a single video frame.
1369
+ The timestamp is drawn in the upper-left corner of the image.
1370
+ frame: the frame, a PIL.Image object
1371
+ timestamp: the timestamp, in seconds
1372
+ font_rate: ratio of the font size to min(width, height)
1373
+ """
1374
+ time_stamp = "time: " + timestamp_converting(timestamp)
1375
+ new_frame = render_single_image_with_timestamp(frame, time_stamp, font_rate)
1376
+
1377
+ return new_frame
1378
+
1379
+
1380
+ IDS_TYPE_FLAG = {"text": 0, "image": 1, "video": 2, "audio": 3}
1381
+
1382
+
1383
+ class Ernie4_5_VLProcessor(ProcessorMixin):
1384
+ """
1385
+ Processes multimodal chat messages into model-ready inputs,
1386
+ handling text, images, and videos with 3D positional embeddings.
1387
+ """
1388
+
1389
+ attributes = ["image_processor", "tokenizer"]
1390
+ valid_kwargs = [
1391
+ "chat_template",
1392
+ "spatial_conv_size",
1393
+ "temporal_conv_size",
1394
+ "image_min_pixels",
1395
+ "image_max_pixels",
1396
+ "video_min_pixels",
1397
+ "video_max_pixels",
1398
+ "video_target_frames",
1399
+ "video_frames_sample",
1400
+ "video_max_frames",
1401
+ "video_min_frames",
1402
+ "video_fps",
1403
+ ]
1404
+ image_processor_class = "AutoImageProcessor"
1405
+ tokenizer_class = "AutoTokenizer"
1406
+
1407
+ CLS_TOKEN = "<|begin_of_sentence|>"
1408
+ SEP_TOKEN = "<|end_of_sentence|>"
1409
+ IMG_START = "<|IMAGE_START|>"
1410
+ IMG_END = "<|IMAGE_END|>"
1411
+ VID_START = "<|VIDEO_START|>"
1412
+ VID_END = "<|VIDEO_END|>"
1413
+
1414
+ def __init__(
1415
+ self,
1416
+ image_processor=None,
1417
+ tokenizer=None,
1418
+ chat_template=None,
1419
+ spatial_conv_size: int = 2,
1420
+ temporal_conv_size: int = 2,
1421
+ image_min_pixels: int = 4 * 28 * 28,
1422
+ image_max_pixels: int = 6177 * 28 * 28,
1423
+ video_min_pixels: int = 299 * 28 * 28,
1424
+ video_max_pixels: int = 1196 * 28 * 28,
1425
+ video_target_frames: int = -1,
1426
+ video_frames_sample: str = "leading",
1427
+ video_max_frames: int = 180,
1428
+ video_min_frames: int = 16,
1429
+ video_fps: int = 2,
1430
+ **kwargs,
1431
+ ):
1432
+ super().__init__(image_processor, tokenizer, chat_template=chat_template)
1433
+ self.tokenizer.ignored_index = -100
1434
+
1435
+ # Convolution sizes for patch aggregation
1436
+ self.spatial_conv_size = spatial_conv_size
1437
+ self.temporal_conv_size = temporal_conv_size
1438
+
1439
+ # Pixel constraints
1440
+ self.image_min_pixels = image_min_pixels
1441
+ self.image_max_pixels = image_max_pixels
1442
+ self.video_min_pixels = video_min_pixels
1443
+ self.video_max_pixels = video_max_pixels
1444
+
1445
+ # Video sampling parameters
1446
+ self.target_frames = video_target_frames
1447
+ self.frames_sample = video_frames_sample
1448
+ self.max_frames = video_max_frames
1449
+ self.min_frames = video_min_frames
1450
+ self.fps = video_fps
1451
+
1452
+ # Special tokens and IDs
1453
+ self.cls_token = self.CLS_TOKEN
1454
+ self.sep_token = self.SEP_TOKEN
1455
+ self.image_start = self.IMG_START
1456
+ self.image_end = self.IMG_END
1457
+ self.video_start = self.VID_START
1458
+ self.video_end = self.VID_END
1459
+ self.image_patch_id = self.tokenizer.convert_tokens_to_ids(
1460
+ "<|IMAGE_PLACEHOLDER|>"
1461
+ )
1462
+
1463
+ self.token_type_mapping = self._build_token_type_mapping()
1464
+ self.is_training = True
1465
+ self.role_prefixes = {"system": "", "user": "User: ", "bot": "Assistant: "}
1466
+
1467
+ def _build_token_type_mapping(self) -> Dict[Any, int]:
1468
+ mapping = defaultdict(lambda: IDS_TYPE_FLAG["text"])
1469
+ for token in (self.IMG_START, self.IMG_END, self.VID_START, self.VID_END):
1470
+ mapping[token] = IDS_TYPE_FLAG["image"]
1471
+ mapping[self.image_patch_id] = IDS_TYPE_FLAG["image"]
1472
+ return mapping
1473
+
1474
+ def train(self) -> None:
1475
+ """Enable training mode (produces labels)."""
1476
+ self.is_training = True
1477
+
1478
+ def eval(self) -> None:
1479
+ """Enable evaluation mode (doesn't produce labels)."""
1480
+ self.is_training = False
1481
+
1482
+ def _download_image(
1483
+ self,
1484
+ item: Dict,
1485
+ ):
1486
+ """Download image from url and resize it to the specified size."""
1487
+ url_info = item.get("image_url", {})
1488
+ url = url_info.get("url")
1489
+ w = url_info.get("image_width", None)
1490
+ h = url_info.get("image_height", None)
1491
+ data = get_downloadable(url, download_dir=RAW_IMAGE_DIR, save_to_disk=False)
1492
+
1493
+ img = Image.open(io.BytesIO(data) if isinstance(data, bytes) else data)
1494
+ if w and h:
1495
+ img = img.resize((w, h))
1496
+ return img
1497
+
1498
+ def _download_video(self, item: Dict):
1499
+ """Download video from url and resize it to the specified size."""
1500
+ url_info = item.get("video_url", {})
1501
+ url = url_info.get("url")
1502
+
1503
+ frames = self._load_and_process_video(url, item)
1504
+
1505
+ pixel_stack = np.stack([np.array(f.convert("RGB")) for f in frames], axis=0)
1506
+ return pixel_stack
1507
+
1508
+ def process_vision_info(self, messages: List[Dict[str, Any]]):
1509
+ """Preprocess messages into lists of text, images, and videos."""
1510
+ images = []
1511
+ videos = []
1512
+
1513
+ for msg in messages:
1514
+ content_items = msg.get("content")
1515
+ if not isinstance(content_items, list):
1516
+ content_items = [content_items]
1517
+
1518
+ for item in content_items:
1519
+ if item.get("type") == "image_url":
1520
+ img = self._download_image(item)
1521
+ images.append(img)
1522
+ elif item.get("type") == "video_url":
1523
+ pixel_stack = self._download_video(item)
1524
+ videos.append(pixel_stack)
1525
+
1526
+ return images, videos
1527
+
1528
+ def __call__(
1529
+ self,
1530
+ text: Union[str, List[str]],
1531
+ images: List[Image.Image] = None,
1532
+ videos: List[List[Image.Image]] = None,
1533
+ **kwargs,
1534
+ ) -> BatchFeature:
1535
+ """
1536
+ Convert chat messages into model inputs.
1537
+ Returns a BatchFeature with input_ids, token_type_ids, position_ids, images, grid_thw, and image_type_ids.
1538
+ """
1539
+ outputs = {
1540
+ "input_ids": [],
1541
+ "token_type_ids": [],
1542
+ "position_ids": [],
1543
+ "images": [],
1544
+ "grid_thw": [],
1545
+ "image_type_ids": [],
1546
+ "cur_position": 0,
1547
+ "pic_cnt": 0,
1548
+ "video_cnt": 0,
1549
+ }
1550
+ if images is None:
1551
+ images = []
1552
+ if videos is None:
1553
+ videos = []
1554
+ if not isinstance(text, list):
1555
+ text = [text]
1556
+
1557
+ texts = text[0]
1558
+
1559
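+ # Walk the rendered prompt: split on the video placeholder span first, then on the
+ # image placeholder span, appending the matching preprocessed video/image tokens at
+ # each boundary in order of appearance.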
+ new_video_seg = True
1560
+ for text_with_image in texts.split(self.VID_START + "<|video@placeholder|>" + self.VID_END):
1561
+ new_text_seg = True
1562
+ if not new_video_seg:
1563
+ self._add_video(videos[outputs["video_cnt"]], outputs)
1564
+ for text in text_with_image.split(self.IMG_START + "<|image@placeholder|>" + self.IMG_END):
1565
+ if not new_text_seg:
1566
+ self._add_image(images[outputs["pic_cnt"]], outputs)
1567
+ self._add_text(text, outputs)
1568
+ new_text_seg = False
1569
+ new_video_seg = False
1570
+
1571
+ for key in ["cur_position", "pic_cnt", "video_cnt"]:
1572
+ outputs.pop(key, None)
1573
+
1574
+ outputs = self._pack_outputs(outputs)
1575
+ for key in outputs.keys():
1576
+ if isinstance(outputs[key], np.ndarray):
1577
+ if key in ["images", "grid_thw"]:
1578
+ outputs[key] = torch.tensor(np.array(outputs[key]))
1579
+ else:
1580
+ outputs[key] = torch.tensor(np.array([outputs[key]]))
1581
+
1582
+ return BatchFeature(data=outputs)
1583
+
1584
+ def _add_special_token(self, token: Union[str, int], outputs: Dict) -> None:
1585
+ """add special token to outputs"""
1586
+ token_id = (
1587
+ token
1588
+ if isinstance(token, int)
1589
+ else self.tokenizer.convert_tokens_to_ids(token)
1590
+ )
1591
+ outputs["input_ids"].append(token_id)
1592
+ outputs["token_type_ids"].append(self.token_type_mapping[token])
1593
+ pos = outputs["cur_position"]
1594
+ outputs["position_ids"].append([pos] * 3)
1595
+ outputs["cur_position"] += 1
1596
+
1597
+ def _add_text(self, text: str, outputs: Dict) -> None:
1598
+ """add text to outputs"""
1599
+ tokens = self.tokenizer.convert_tokens_to_ids(self.tokenizer.tokenize(text))
1600
+ outputs["input_ids"].extend(tokens)
1601
+ outputs["token_type_ids"].extend([IDS_TYPE_FLAG["text"]] * len(tokens))
1602
+
1603
+ start = outputs["cur_position"]
1604
+ for i in range(len(tokens)):
1605
+ outputs["position_ids"].append([start + i] * 3)
1606
+ outputs["cur_position"] += len(tokens)
1607
+
1608
+ def _add_image(self, img: Image.Image, outputs: Dict) -> None:
1609
+ """add image to outputs"""
1610
+ outputs["pic_cnt"] += 1
1611
+ self._add_special_token(self.IMG_START, outputs)
1612
+
1613
+ patches_h, patches_w = self.image_processor.get_smarted_resize(
1614
+ img.height,
1615
+ img.width,
1616
+ min_pixels=self.image_min_pixels,
1617
+ max_pixels=self.image_max_pixels,
1618
+ )[1]
1619
+ num_tokens = (patches_h * patches_w) // (self.spatial_conv_size**2)
1620
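+ # e.g. with a 14-pixel patch (patch_size comes from the image processor config; 14 is
+ # assumed here), a 448x448 image gives 32x32 = 1024 patches, merged 2x2 into 256 placeholder tokens.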
+
1621
+ outputs["input_ids"].extend([self.image_patch_id] * num_tokens)
1622
+ outputs["token_type_ids"].extend([IDS_TYPE_FLAG["image"]] * num_tokens)
1623
+
1624
+ pos_ids = self._compute_3d_positions(
1625
+ 1, patches_h, patches_w, outputs["cur_position"]
1626
+ )
1627
+ outputs["position_ids"].extend(pos_ids)
1628
+ outputs["cur_position"] = np.max(pos_ids) + 1
1629
+
1630
+ # Preprocess pixels
1631
+ ret = self.image_processor.preprocess(
1632
+ images=[img.convert("RGB")],
1633
+ do_normalize=False,
1634
+ do_rescale=False,
1635
+ predetermined_grid_thw=np.array([[patches_h, patches_w]]),
1636
+ do_convert_rgb=True,
1637
+ input_data_format=ChannelDimension.LAST,
1638
+ )
1639
+ outputs["images"].append(ret["pixel_values"])
1640
+ outputs["grid_thw"].append(ret["image_grid_thw"])
1641
+ outputs["image_type_ids"].append(0)
1642
+
1643
+ self._add_special_token(self.IMG_END, outputs)
1644
+
1645
+ def _add_video(
1646
+ self, pixel_stack: np.ndarray, outputs: Dict
1647
+ ) -> None:
1648
+ outputs["video_cnt"] += 1
1649
+ self._add_special_token(self.VID_START, outputs)
1650
+
1651
+ patches_h, patches_w = self.image_processor.get_smarted_resize(
1652
+ pixel_stack.shape[1],
1653
+ pixel_stack.shape[2],
1654
+ min_pixels=self.video_min_pixels,
1655
+ max_pixels=self.video_max_pixels,
1656
+ )[1]
1657
+ num_frames = pixel_stack.shape[0]
1658
+ num_tokens = (num_frames * patches_h * patches_w) // (
1659
+ self.spatial_conv_size**2 * self.temporal_conv_size
1660
+ )
1661
+
1662
+ ret = self.image_processor.preprocess(
1663
+ images=None,
1664
+ videos=pixel_stack,
1665
+ do_normalize=False,
1666
+ do_rescale=False,
1667
+ predetermined_grid_thw=np.array([[patches_h, patches_w]] * num_frames),
1668
+ do_convert_rgb=True,
1669
+ input_data_format=ChannelDimension.LAST,
1670
+ )
1671
+ outputs["images"].append(ret["pixel_values_videos"])
1672
+ outputs["grid_thw"].append(ret["video_grid_thw"])
1673
+ outputs["image_type_ids"].extend([1] * num_frames)
1674
+
1675
+ outputs["input_ids"].extend([self.image_patch_id] * num_tokens)
1676
+ outputs["token_type_ids"].extend([IDS_TYPE_FLAG["video"]] * num_tokens)
1677
+
1678
+ pos_ids = self._compute_3d_positions(
1679
+ num_frames, patches_h, patches_w, outputs["cur_position"]
1680
+ )
1681
+ outputs["position_ids"].extend(pos_ids)
1682
+ outputs["cur_position"] = np.max(pos_ids) + 1
1683
+
1684
+ self._add_special_token(self.VID_END, outputs)
1685
+
1686
+ def _load_and_process_video(self, url: str, item: Dict) -> List[Image.Image]:
1687
+ reader, meta, path = read_video_decord(url, save_to_disk=False)
1688
+
1689
+ video_frame_args = dict()
1690
+ video_frame_args["fps"] = item.get("fps", self.fps)
1691
+ video_frame_args["min_frames"] = item.get("min_frames", self.min_frames)
1692
+ video_frame_args["max_frames"] = item.get("max_frames", self.max_frames)
1693
+ video_frame_args["target_frames"] = item.get(
1694
+ "target_frames", self.target_frames
1695
+ )
1696
+ video_frame_args["frames_sample"] = item.get(
1697
+ "frames_sample", self.frames_sample
1698
+ )
1699
+
1700
+ video_frame_args = self._set_video_frame_args(video_frame_args, meta)
1701
+
1702
+ frames_data, _, timestamps = read_frames_decord(
1703
+ path,
1704
+ reader,
1705
+ meta,
1706
+ target_frames=video_frame_args["target_frames"],
1707
+ target_fps=video_frame_args["fps"],
1708
+ frames_sample=video_frame_args["frames_sample"],
1709
+ save_to_disk=False,
1710
+ )
1711
+
1712
+ frames: List[Image.Image] = []
1713
+ for img_array, ts in zip(frames_data, timestamps):
1714
+ frames.append(render_frame_timestamp(img_array, ts))
1715
+ # Ensure even number of frames for temporal conv
1716
+ if len(frames) % 2 != 0:
1717
+ frames.append(copy.deepcopy(frames[-1]))
1718
+ return frames
1719
+
1720
+ def _set_video_frame_args(self, video_frame_args, video_meta):
1721
+ """
1722
+ Set the final frame extraction parameters based on known parameters and priorities
1723
+ """
1724
+ # Priority: video_target_frames > (video_min_frames, video_max_frames) > video_fps
1725
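+ # e.g. a 5 s clip with fps=2 would yield 10 frames, which is below the default
+ # min_frames=16, so target_frames is bumped to 16 and fps sampling is disabled.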
+ if video_frame_args["target_frames"] > 0:
1726
+ if video_frame_args["fps"] >= 0:
1727
+ raise ValueError("fps must be negative if target_frames is given")
1728
+ if (
1729
+ video_frame_args["min_frames"] > 0
1730
+ and video_frame_args["target_frames"] < video_frame_args["min_frames"]
1731
+ ):
1732
+ raise ValueError("target_frames must be larger than min_frames")
1733
+ if (
1734
+ video_frame_args["max_frames"] > 0
1735
+ and video_frame_args["target_frames"] > video_frame_args["max_frames"]
1736
+ ):
1737
+ raise ValueError("target_frames must be smaller than max_frames")
1738
+ else:
1739
+ if video_frame_args["fps"] < 0:
1740
+ raise ValueError(
1741
+ "Must provide either positive target_fps or positive target_frames."
1742
+ )
1743
+ # First calculate the number of frames extracted under video_fps
1744
+ frames_to_extract = int(video_meta["duration"] * video_frame_args["fps"])
1745
+ # Determine whether it is within the target range. If not, take target_frames as the upper or lower bound
1746
+ if (
1747
+ video_frame_args["min_frames"] > 0
1748
+ and video_frame_args["max_frames"] > 0
1749
+ and video_frame_args["min_frames"] > video_frame_args["max_frames"]
1750
+ ):
1751
+ raise ValueError("min_frames must be smaller than max_frames")
1752
+ if (
1753
+ video_frame_args["min_frames"] > 0
1754
+ and frames_to_extract < video_frame_args["min_frames"]
1755
+ ):
1756
+ video_frame_args["target_frames"] = video_frame_args["min_frames"]
1757
+ video_frame_args["fps"] = -1
1758
+ if (
1759
+ video_frame_args["max_frames"] > 0
1760
+ and frames_to_extract > video_frame_args["max_frames"]
1761
+ ):
1762
+ video_frame_args["target_frames"] = video_frame_args["max_frames"]
1763
+ video_frame_args["fps"] = -1
1764
+
1765
+ return video_frame_args
1766
+
1767
+ def _compute_3d_positions(
1768
+ self, t: int, h: int, w: int, start_idx: int
1769
+ ) -> List[List[int]]:
1770
+ # Downsample time if needed
1771
+ t_eff = t // self.temporal_conv_size if t != 1 else 1
1772
+ gh, gw = h // self.spatial_conv_size, w // self.spatial_conv_size
1773
+ time_idx = np.repeat(np.arange(t_eff), gh * gw)
1774
+ h_idx = np.tile(np.repeat(np.arange(gh), gw), t_eff)
1775
+ w_idx = np.tile(np.arange(gw), t_eff * gh)
1776
+
1777
+ coords = list(zip(time_idx, h_idx, w_idx))
1778
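+ # e.g. a single image (t=1) with h=w=4 patches and spatial_conv_size=2 collapses to a
+ # 2x2 grid, producing offsets (0,0,0), (0,0,1), (0,1,0), (0,1,1) added to start_idx.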
+ return [
1779
+ [start_idx + ti, start_idx + hi, start_idx + wi] for ti, hi, wi in coords
1780
+ ]
1781
+
1782
+ def _pack_outputs(self, outs: Dict) -> Dict[str, Any]:
1783
+ # Stack or nullify image-related fields
1784
+ if not outs["images"]:
1785
+ outs["images"] = None
1786
+ outs["grid_thw"] = None
1787
+ outs["image_type_ids"] = None
1788
+ else:
1789
+ outs["images"] = np.vstack(outs["images"])
1790
+ outs["grid_thw"] = np.vstack(outs["grid_thw"])
1791
+ outs["image_type_ids"] = np.array(outs["image_type_ids"])
1792
+
1793
+ # Convert lists to arrays
1794
+ outs["input_ids"] = np.array(outs["input_ids"], dtype=np.int64)
1795
+ outs["token_type_ids"] = np.array(outs["token_type_ids"], dtype=np.int64)
1796
+ outs["position_ids"] = np.array(outs["position_ids"], dtype=np.int64)
1797
+ return outs
1798
+
1799
+ def batch_decode(self, *args, **kwargs):
1800
+ """
1801
+ This method forwards all its arguments to Ernie4_5_VLTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please
1802
+ refer to the docstring of this method for more information.
1803
+ """
1804
+ return self.tokenizer.batch_decode(*args, **kwargs)
1805
+
1806
+ def decode(self, *args, **kwargs):
1807
+ """
1808
+ This method forwards all its arguments to Ernie4_5_VLTokenizer's [`~PreTrainedTokenizer.decode`].
1809
+ Please refer to the docstring of this method for more information.
1810
+ """
1811
+ return self.tokenizer.decode(*args, **kwargs)
1812
+
1813
+ @property
1814
+ def model_input_names(self):
1815
+ """get model input names"""
1816
+ tokenizer_input_names = self.tokenizer.model_input_names
1817
+ image_processor_input_names = self.image_processor.model_input_names
1818
+ return list(tokenizer_input_names) + list(image_processor_input_names)
1819
+
1820
+
1821
+ __all__ = ["Ernie4_5_VLTokenizer", "Ernie4_5_VLImageProcessor", "Ernie4_5_VLProcessor"]
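
For orientation, the following is a minimal, hypothetical sketch of how the processor defined in this file might be used for a single-image prompt. It assumes the repository's config files wire `AutoProcessor` up to these classes via `trust_remote_code=True` (otherwise `Ernie4_5_VLProcessor` can be constructed directly from the tokenizer and image processor); the repo id and image path are placeholders.

```python
from transformers import AutoProcessor

# Placeholder repo id / local checkout; adjust to the actual repository path.
processor = AutoProcessor.from_pretrained("baidu/ERNIE-4.5-VL", trust_remote_code=True)
processor.eval()  # inference mode: the processor does not emit labels

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "example.jpg"}},  # path, URL, or base64
            {"type": "text", "text": "Describe this picture."},
        ],
    }
]

# Download/decode the image and video inputs referenced by the messages.
images, videos = processor.process_vision_info(messages)

# Render the chat template into the placeholder-bearing prompt string.
prompt = processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Returns a BatchFeature with input_ids, token_type_ids, 3D position_ids,
# images, grid_thw, and image_type_ids as torch tensors.
inputs = processor(text=[prompt], images=images, videos=videos)
print({k: tuple(v.shape) for k, v in inputs.items() if hasattr(v, "shape")})
```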
special_tokens_map.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "<|end_of_sentence|>", "pad_token": "<unk>", "cls_token": "<|begin_of_sentence|>", "mask_token": "<mask:1>", "sys_start_token": "<mask:4>", "sys_end_token": "<mask:5>", "header_start_token": "<mask:6>", "header_end_token": "<mask:7>", "additional_special_tokens": ["<|IMAGE_PLACEHOLDER|>", "<|AUDIO_PLACEHOLDER|>", "<|LOC_0|>", "<|LOC_1|>", "<|LOC_2|>", "<|LOC_3|>", "<|LOC_4|>", "<|LOC_5|>", "<|LOC_6|>", "<|LOC_7|>", "<|LOC_8|>", "<|LOC_9|>", "<|LOC_10|>", "<|LOC_11|>", "<|LOC_12|>", "<|LOC_13|>", "<|LOC_14|>", "<|LOC_15|>", "<|LOC_16|>", "<|LOC_17|>", "<|LOC_18|>", "<|LOC_19|>", "<|LOC_20|>", "<|LOC_21|>", "<|LOC_22|>", "<|LOC_23|>", "<|LOC_24|>", "<|LOC_25|>", "<|LOC_26|>", "<|LOC_27|>", "<|LOC_28|>", "<|LOC_29|>", "<|LOC_30|>", "<|LOC_31|>", "<|LOC_32|>", "<|LOC_33|>", "<|LOC_34|>", "<|LOC_35|>", "<|LOC_36|>", "<|LOC_37|>", "<|LOC_38|>", "<|LOC_39|>", "<|LOC_40|>", "<|LOC_41|>", "<|LOC_42|>", "<|LOC_43|>", "<|LOC_44|>", "<|LOC_45|>", "<|LOC_46|>", "<|LOC_47|>", "<|LOC_48|>", "<|LOC_49|>", "<|LOC_50|>", "<|LOC_51|>", "<|LOC_52|>", "<|LOC_53|>", "<|LOC_54|>", "<|LOC_55|>", "<|LOC_56|>", "<|LOC_57|>", "<|LOC_58|>", "<|LOC_59|>", "<|LOC_60|>", "<|LOC_61|>", "<|LOC_62|>", "<|LOC_63|>", "<|LOC_64|>", "<|LOC_65|>", "<|LOC_66|>", "<|LOC_67|>", "<|LOC_68|>", "<|LOC_69|>", "<|LOC_70|>", "<|LOC_71|>", "<|LOC_72|>", "<|LOC_73|>", "<|LOC_74|>", "<|LOC_75|>", "<|LOC_76|>", "<|LOC_77|>", "<|LOC_78|>", "<|LOC_79|>", "<|LOC_80|>", "<|LOC_81|>", "<|LOC_82|>", "<|LOC_83|>", "<|LOC_84|>", "<|LOC_85|>", "<|LOC_86|>", "<|LOC_87|>", "<|LOC_88|>", "<|LOC_89|>", "<|LOC_90|>", "<|LOC_91|>", "<|LOC_92|>", "<|LOC_93|>", "<|LOC_94|>", "<|LOC_95|>", "<|LOC_96|>", "<|LOC_97|>", "<|LOC_98|>", "<|LOC_99|>", "<|LOC_100|>", "<|LOC_101|>", "<|LOC_102|>", "<|LOC_103|>", "<|LOC_104|>", "<|LOC_105|>", "<|LOC_106|>", "<|LOC_107|>", "<|LOC_108|>", "<|LOC_109|>", "<|LOC_110|>", "<|LOC_111|>", "<|LOC_112|>", "<|LOC_113|>", "<|LOC_114|>", "<|LOC_115|>", "<|LOC_116|>", "<|LOC_117|>", "<|LOC_118|>", "<|LOC_119|>", "<|LOC_120|>", "<|LOC_121|>", "<|LOC_122|>", "<|LOC_123|>", "<|LOC_124|>", "<|LOC_125|>", "<|LOC_126|>", "<|LOC_127|>", "<|LOC_128|>", "<|LOC_129|>", "<|LOC_130|>", "<|LOC_131|>", "<|LOC_132|>", "<|LOC_133|>", "<|LOC_134|>", "<|LOC_135|>", "<|LOC_136|>", "<|LOC_137|>", "<|LOC_138|>", "<|LOC_139|>", "<|LOC_140|>", "<|LOC_141|>", "<|LOC_142|>", "<|LOC_143|>", "<|LOC_144|>", "<|LOC_145|>", "<|LOC_146|>", "<|LOC_147|>", "<|LOC_148|>", "<|LOC_149|>", "<|LOC_150|>", "<|LOC_151|>", "<|LOC_152|>", "<|LOC_153|>", "<|LOC_154|>", "<|LOC_155|>", "<|LOC_156|>", "<|LOC_157|>", "<|LOC_158|>", "<|LOC_159|>", "<|LOC_160|>", "<|LOC_161|>", "<|LOC_162|>", "<|LOC_163|>", "<|LOC_164|>", "<|LOC_165|>", "<|LOC_166|>", "<|LOC_167|>", "<|LOC_168|>", "<|LOC_169|>", "<|LOC_170|>", "<|LOC_171|>", "<|LOC_172|>", "<|LOC_173|>", "<|LOC_174|>", "<|LOC_175|>", "<|LOC_176|>", "<|LOC_177|>", "<|LOC_178|>", "<|LOC_179|>", "<|LOC_180|>", "<|LOC_181|>", "<|LOC_182|>", "<|LOC_183|>", "<|LOC_184|>", "<|LOC_185|>", "<|LOC_186|>", "<|LOC_187|>", "<|LOC_188|>", "<|LOC_189|>", "<|LOC_190|>", "<|LOC_191|>", "<|LOC_192|>", "<|LOC_193|>", "<|LOC_194|>", "<|LOC_195|>", "<|LOC_196|>", "<|LOC_197|>", "<|LOC_198|>", "<|LOC_199|>", "<|LOC_200|>", "<|LOC_201|>", "<|LOC_202|>", "<|LOC_203|>", "<|LOC_204|>", "<|LOC_205|>", "<|LOC_206|>", "<|LOC_207|>", "<|LOC_208|>", "<|LOC_209|>", "<|LOC_210|>", "<|LOC_211|>", "<|LOC_212|>", "<|LOC_213|>", "<|LOC_214|>", "<|LOC_215|>", "<|LOC_216|>", "<|LOC_217|>", 
"<|LOC_218|>", "<|LOC_219|>", "<|LOC_220|>", "<|LOC_221|>", "<|LOC_222|>", "<|LOC_223|>", "<|LOC_224|>", "<|LOC_225|>", "<|LOC_226|>", "<|LOC_227|>", "<|LOC_228|>", "<|LOC_229|>", "<|LOC_230|>", "<|LOC_231|>", "<|LOC_232|>", "<|LOC_233|>", "<|LOC_234|>", "<|LOC_235|>", "<|LOC_236|>", "<|LOC_237|>", "<|LOC_238|>", "<|LOC_239|>", "<|LOC_240|>", "<|LOC_241|>", "<|LOC_242|>", "<|LOC_243|>", "<|LOC_244|>", "<|LOC_245|>", "<|LOC_246|>", "<|LOC_247|>", "<|LOC_248|>", "<|LOC_249|>", "<|LOC_250|>", "<|LOC_251|>", "<|LOC_252|>", "<|LOC_253|>", "<|LOC_254|>", "<|LOC_255|>", "<|LOC_256|>", "<|LOC_257|>", "<|LOC_258|>", "<|LOC_259|>", "<|LOC_260|>", "<|LOC_261|>", "<|LOC_262|>", "<|LOC_263|>", "<|LOC_264|>", "<|LOC_265|>", "<|LOC_266|>", "<|LOC_267|>", "<|LOC_268|>", "<|LOC_269|>", "<|LOC_270|>", "<|LOC_271|>", "<|LOC_272|>", "<|LOC_273|>", "<|LOC_274|>", "<|LOC_275|>", "<|LOC_276|>", "<|LOC_277|>", "<|LOC_278|>", "<|LOC_279|>", "<|LOC_280|>", "<|LOC_281|>", "<|LOC_282|>", "<|LOC_283|>", "<|LOC_284|>", "<|LOC_285|>", "<|LOC_286|>", "<|LOC_287|>", "<|LOC_288|>", "<|LOC_289|>", "<|LOC_290|>", "<|LOC_291|>", "<|LOC_292|>", "<|LOC_293|>", "<|LOC_294|>", "<|LOC_295|>", "<|LOC_296|>", "<|LOC_297|>", "<|LOC_298|>", "<|LOC_299|>", "<|LOC_300|>", "<|LOC_301|>", "<|LOC_302|>", "<|LOC_303|>", "<|LOC_304|>", "<|LOC_305|>", "<|LOC_306|>", "<|LOC_307|>", "<|LOC_308|>", "<|LOC_309|>", "<|LOC_310|>", "<|LOC_311|>", "<|LOC_312|>", "<|LOC_313|>", "<|LOC_314|>", "<|LOC_315|>", "<|LOC_316|>", "<|LOC_317|>", "<|LOC_318|>", "<|LOC_319|>", "<|LOC_320|>", "<|LOC_321|>", "<|LOC_322|>", "<|LOC_323|>", "<|LOC_324|>", "<|LOC_325|>", "<|LOC_326|>", "<|LOC_327|>", "<|LOC_328|>", "<|LOC_329|>", "<|LOC_330|>", "<|LOC_331|>", "<|LOC_332|>", "<|LOC_333|>", "<|LOC_334|>", "<|LOC_335|>", "<|LOC_336|>", "<|LOC_337|>", "<|LOC_338|>", "<|LOC_339|>", "<|LOC_340|>", "<|LOC_341|>", "<|LOC_342|>", "<|LOC_343|>", "<|LOC_344|>", "<|LOC_345|>", "<|LOC_346|>", "<|LOC_347|>", "<|LOC_348|>", "<|LOC_349|>", "<|LOC_350|>", "<|LOC_351|>", "<|LOC_352|>", "<|LOC_353|>", "<|LOC_354|>", "<|LOC_355|>", "<|LOC_356|>", "<|LOC_357|>", "<|LOC_358|>", "<|LOC_359|>", "<|LOC_360|>", "<|LOC_361|>", "<|LOC_362|>", "<|LOC_363|>", "<|LOC_364|>", "<|LOC_365|>", "<|LOC_366|>", "<|LOC_367|>", "<|LOC_368|>", "<|LOC_369|>", "<|LOC_370|>", "<|LOC_371|>", "<|LOC_372|>", "<|LOC_373|>", "<|LOC_374|>", "<|LOC_375|>", "<|LOC_376|>", "<|LOC_377|>", "<|LOC_378|>", "<|LOC_379|>", "<|LOC_380|>", "<|LOC_381|>", "<|LOC_382|>", "<|LOC_383|>", "<|LOC_384|>", "<|LOC_385|>", "<|LOC_386|>", "<|LOC_387|>", "<|LOC_388|>", "<|LOC_389|>", "<|LOC_390|>", "<|LOC_391|>", "<|LOC_392|>", "<|LOC_393|>", "<|LOC_394|>", "<|LOC_395|>", "<|LOC_396|>", "<|LOC_397|>", "<|LOC_398|>", "<|LOC_399|>", "<|LOC_400|>", "<|LOC_401|>", "<|LOC_402|>", "<|LOC_403|>", "<|LOC_404|>", "<|LOC_405|>", "<|LOC_406|>", "<|LOC_407|>", "<|LOC_408|>", "<|LOC_409|>", "<|LOC_410|>", "<|LOC_411|>", "<|LOC_412|>", "<|LOC_413|>", "<|LOC_414|>", "<|LOC_415|>", "<|LOC_416|>", "<|LOC_417|>", "<|LOC_418|>", "<|LOC_419|>", "<|LOC_420|>", "<|LOC_421|>", "<|LOC_422|>", "<|LOC_423|>", "<|LOC_424|>", "<|LOC_425|>", "<|LOC_426|>", "<|LOC_427|>", "<|LOC_428|>", "<|LOC_429|>", "<|LOC_430|>", "<|LOC_431|>", "<|LOC_432|>", "<|LOC_433|>", "<|LOC_434|>", "<|LOC_435|>", "<|LOC_436|>", "<|LOC_437|>", "<|LOC_438|>", "<|LOC_439|>", "<|LOC_440|>", "<|LOC_441|>", "<|LOC_442|>", "<|LOC_443|>", "<|LOC_444|>", "<|LOC_445|>", "<|LOC_446|>", "<|LOC_447|>", "<|LOC_448|>", "<|LOC_449|>", "<|LOC_450|>", "<|LOC_451|>", "<|LOC_452|>", "<|LOC_453|>", "<|LOC_454|>", 
"<|LOC_455|>", "<|LOC_456|>", "<|LOC_457|>", "<|LOC_458|>", "<|LOC_459|>", "<|LOC_460|>", "<|LOC_461|>", "<|LOC_462|>", "<|LOC_463|>", "<|LOC_464|>", "<|LOC_465|>", "<|LOC_466|>", "<|LOC_467|>", "<|LOC_468|>", "<|LOC_469|>", "<|LOC_470|>", "<|LOC_471|>", "<|LOC_472|>", "<|LOC_473|>", "<|LOC_474|>", "<|LOC_475|>", "<|LOC_476|>", "<|LOC_477|>", "<|LOC_478|>", "<|LOC_479|>", "<|LOC_480|>", "<|LOC_481|>", "<|LOC_482|>", "<|LOC_483|>", "<|LOC_484|>", "<|LOC_485|>", "<|LOC_486|>", "<|LOC_487|>", "<|LOC_488|>", "<|LOC_489|>", "<|LOC_490|>", "<|LOC_491|>", "<|LOC_492|>", "<|LOC_493|>", "<|LOC_494|>", "<|LOC_495|>", "<|LOC_496|>", "<|LOC_497|>", "<|LOC_498|>", "<|LOC_499|>", "<|LOC_500|>", "<|LOC_501|>", "<|LOC_502|>", "<|LOC_503|>", "<|LOC_504|>", "<|LOC_505|>", "<|LOC_506|>", "<|LOC_507|>", "<|LOC_508|>", "<|LOC_509|>", "<|LOC_510|>", "<|LOC_511|>", "<|LOC_512|>", "<|LOC_513|>", "<|LOC_514|>", "<|LOC_515|>", "<|LOC_516|>", "<|LOC_517|>", "<|LOC_518|>", "<|LOC_519|>", "<|LOC_520|>", "<|LOC_521|>", "<|LOC_522|>", "<|LOC_523|>", "<|LOC_524|>", "<|LOC_525|>", "<|LOC_526|>", "<|LOC_527|>", "<|LOC_528|>", "<|LOC_529|>", "<|LOC_530|>", "<|LOC_531|>", "<|LOC_532|>", "<|LOC_533|>", "<|LOC_534|>", "<|LOC_535|>", "<|LOC_536|>", "<|LOC_537|>", "<|LOC_538|>", "<|LOC_539|>", "<|LOC_540|>", "<|LOC_541|>", "<|LOC_542|>", "<|LOC_543|>", "<|LOC_544|>", "<|LOC_545|>", "<|LOC_546|>", "<|LOC_547|>", "<|LOC_548|>", "<|LOC_549|>", "<|LOC_550|>", "<|LOC_551|>", "<|LOC_552|>", "<|LOC_553|>", "<|LOC_554|>", "<|LOC_555|>", "<|LOC_556|>", "<|LOC_557|>", "<|LOC_558|>", "<|LOC_559|>", "<|LOC_560|>", "<|LOC_561|>", "<|LOC_562|>", "<|LOC_563|>", "<|LOC_564|>", "<|LOC_565|>", "<|LOC_566|>", "<|LOC_567|>", "<|LOC_568|>", "<|LOC_569|>", "<|LOC_570|>", "<|LOC_571|>", "<|LOC_572|>", "<|LOC_573|>", "<|LOC_574|>", "<|LOC_575|>", "<|LOC_576|>", "<|LOC_577|>", "<|LOC_578|>", "<|LOC_579|>", "<|LOC_580|>", "<|LOC_581|>", "<|LOC_582|>", "<|LOC_583|>", "<|LOC_584|>", "<|LOC_585|>", "<|LOC_586|>", "<|LOC_587|>", "<|LOC_588|>", "<|LOC_589|>", "<|LOC_590|>", "<|LOC_591|>", "<|LOC_592|>", "<|LOC_593|>", "<|LOC_594|>", "<|LOC_595|>", "<|LOC_596|>", "<|LOC_597|>", "<|LOC_598|>", "<|LOC_599|>", "<|LOC_600|>", "<|LOC_601|>", "<|LOC_602|>", "<|LOC_603|>", "<|LOC_604|>", "<|LOC_605|>", "<|LOC_606|>", "<|LOC_607|>", "<|LOC_608|>", "<|LOC_609|>", "<|LOC_610|>", "<|LOC_611|>", "<|LOC_612|>", "<|LOC_613|>", "<|LOC_614|>", "<|LOC_615|>", "<|LOC_616|>", "<|LOC_617|>", "<|LOC_618|>", "<|LOC_619|>", "<|LOC_620|>", "<|LOC_621|>", "<|LOC_622|>", "<|LOC_623|>", "<|LOC_624|>", "<|LOC_625|>", "<|LOC_626|>", "<|LOC_627|>", "<|LOC_628|>", "<|LOC_629|>", "<|LOC_630|>", "<|LOC_631|>", "<|LOC_632|>", "<|LOC_633|>", "<|LOC_634|>", "<|LOC_635|>", "<|LOC_636|>", "<|LOC_637|>", "<|LOC_638|>", "<|LOC_639|>", "<|LOC_640|>", "<|LOC_641|>", "<|LOC_642|>", "<|LOC_643|>", "<|LOC_644|>", "<|LOC_645|>", "<|LOC_646|>", "<|LOC_647|>", "<|LOC_648|>", "<|LOC_649|>", "<|LOC_650|>", "<|LOC_651|>", "<|LOC_652|>", "<|LOC_653|>", "<|LOC_654|>", "<|LOC_655|>", "<|LOC_656|>", "<|LOC_657|>", "<|LOC_658|>", "<|LOC_659|>", "<|LOC_660|>", "<|LOC_661|>", "<|LOC_662|>", "<|LOC_663|>", "<|LOC_664|>", "<|LOC_665|>", "<|LOC_666|>", "<|LOC_667|>", "<|LOC_668|>", "<|LOC_669|>", "<|LOC_670|>", "<|LOC_671|>", "<|LOC_672|>", "<|LOC_673|>", "<|LOC_674|>", "<|LOC_675|>", "<|LOC_676|>", "<|LOC_677|>", "<|LOC_678|>", "<|LOC_679|>", "<|LOC_680|>", "<|LOC_681|>", "<|LOC_682|>", "<|LOC_683|>", "<|LOC_684|>", "<|LOC_685|>", "<|LOC_686|>", "<|LOC_687|>", "<|LOC_688|>", "<|LOC_689|>", "<|LOC_690|>", "<|LOC_691|>", 
"<|LOC_692|>", "<|LOC_693|>", "<|LOC_694|>", "<|LOC_695|>", "<|LOC_696|>", "<|LOC_697|>", "<|LOC_698|>", "<|LOC_699|>", "<|LOC_700|>", "<|LOC_701|>", "<|LOC_702|>", "<|LOC_703|>", "<|LOC_704|>", "<|LOC_705|>", "<|LOC_706|>", "<|LOC_707|>", "<|LOC_708|>", "<|LOC_709|>", "<|LOC_710|>", "<|LOC_711|>", "<|LOC_712|>", "<|LOC_713|>", "<|LOC_714|>", "<|LOC_715|>", "<|LOC_716|>", "<|LOC_717|>", "<|LOC_718|>", "<|LOC_719|>", "<|LOC_720|>", "<|LOC_721|>", "<|LOC_722|>", "<|LOC_723|>", "<|LOC_724|>", "<|LOC_725|>", "<|LOC_726|>", "<|LOC_727|>", "<|LOC_728|>", "<|LOC_729|>", "<|LOC_730|>", "<|LOC_731|>", "<|LOC_732|>", "<|LOC_733|>", "<|LOC_734|>", "<|LOC_735|>", "<|LOC_736|>", "<|LOC_737|>", "<|LOC_738|>", "<|LOC_739|>", "<|LOC_740|>", "<|LOC_741|>", "<|LOC_742|>", "<|LOC_743|>", "<|LOC_744|>", "<|LOC_745|>", "<|LOC_746|>", "<|LOC_747|>", "<|LOC_748|>", "<|LOC_749|>", "<|LOC_750|>", "<|LOC_751|>", "<|LOC_752|>", "<|LOC_753|>", "<|LOC_754|>", "<|LOC_755|>", "<|LOC_756|>", "<|LOC_757|>", "<|LOC_758|>", "<|LOC_759|>", "<|LOC_760|>", "<|LOC_761|>", "<|LOC_762|>", "<|LOC_763|>", "<|LOC_764|>", "<|LOC_765|>", "<|LOC_766|>", "<|LOC_767|>", "<|LOC_768|>", "<|LOC_769|>", "<|LOC_770|>", "<|LOC_771|>", "<|LOC_772|>", "<|LOC_773|>", "<|LOC_774|>", "<|LOC_775|>", "<|LOC_776|>", "<|LOC_777|>", "<|LOC_778|>", "<|LOC_779|>", "<|LOC_780|>", "<|LOC_781|>", "<|LOC_782|>", "<|LOC_783|>", "<|LOC_784|>", "<|LOC_785|>", "<|LOC_786|>", "<|LOC_787|>", "<|LOC_788|>", "<|LOC_789|>", "<|LOC_790|>", "<|LOC_791|>", "<|LOC_792|>", "<|LOC_793|>", "<|LOC_794|>", "<|LOC_795|>", "<|LOC_796|>", "<|LOC_797|>", "<|LOC_798|>", "<|LOC_799|>", "<|LOC_800|>", "<|LOC_801|>", "<|LOC_802|>", "<|LOC_803|>", "<|LOC_804|>", "<|LOC_805|>", "<|LOC_806|>", "<|LOC_807|>", "<|LOC_808|>", "<|LOC_809|>", "<|LOC_810|>", "<|LOC_811|>", "<|LOC_812|>", "<|LOC_813|>", "<|LOC_814|>", "<|LOC_815|>", "<|LOC_816|>", "<|LOC_817|>", "<|LOC_818|>", "<|LOC_819|>", "<|LOC_820|>", "<|LOC_821|>", "<|LOC_822|>", "<|LOC_823|>", "<|LOC_824|>", "<|LOC_825|>", "<|LOC_826|>", "<|LOC_827|>", "<|LOC_828|>", "<|LOC_829|>", "<|LOC_830|>", "<|LOC_831|>", "<|LOC_832|>", "<|LOC_833|>", "<|LOC_834|>", "<|LOC_835|>", "<|LOC_836|>", "<|LOC_837|>", "<|LOC_838|>", "<|LOC_839|>", "<|LOC_840|>", "<|LOC_841|>", "<|LOC_842|>", "<|LOC_843|>", "<|LOC_844|>", "<|LOC_845|>", "<|LOC_846|>", "<|LOC_847|>", "<|LOC_848|>", "<|LOC_849|>", "<|LOC_850|>", "<|LOC_851|>", "<|LOC_852|>", "<|LOC_853|>", "<|LOC_854|>", "<|LOC_855|>", "<|LOC_856|>", "<|LOC_857|>", "<|LOC_858|>", "<|LOC_859|>", "<|LOC_860|>", "<|LOC_861|>", "<|LOC_862|>", "<|LOC_863|>", "<|LOC_864|>", "<|LOC_865|>", "<|LOC_866|>", "<|LOC_867|>", "<|LOC_868|>", "<|LOC_869|>", "<|LOC_870|>", "<|LOC_871|>", "<|LOC_872|>", "<|LOC_873|>", "<|LOC_874|>", "<|LOC_875|>", "<|LOC_876|>", "<|LOC_877|>", "<|LOC_878|>", "<|LOC_879|>", "<|LOC_880|>", "<|LOC_881|>", "<|LOC_882|>", "<|LOC_883|>", "<|LOC_884|>", "<|LOC_885|>", "<|LOC_886|>", "<|LOC_887|>", "<|LOC_888|>", "<|LOC_889|>", "<|LOC_890|>", "<|LOC_891|>", "<|LOC_892|>", "<|LOC_893|>", "<|LOC_894|>", "<|LOC_895|>", "<|LOC_896|>", "<|LOC_897|>", "<|LOC_898|>", "<|LOC_899|>", "<|LOC_900|>", "<|LOC_901|>", "<|LOC_902|>", "<|LOC_903|>", "<|LOC_904|>", "<|LOC_905|>", "<|LOC_906|>", "<|LOC_907|>", "<|LOC_908|>", "<|LOC_909|>", "<|LOC_910|>", "<|LOC_911|>", "<|LOC_912|>", "<|LOC_913|>", "<|LOC_914|>", "<|LOC_915|>", "<|LOC_916|>", "<|LOC_917|>", "<|LOC_918|>", "<|LOC_919|>", "<|LOC_920|>", "<|LOC_921|>", "<|LOC_922|>", "<|LOC_923|>", "<|LOC_924|>", "<|LOC_925|>", "<|LOC_926|>", "<|LOC_927|>", "<|LOC_928|>", 
"<|LOC_929|>", "<|LOC_930|>", "<|LOC_931|>", "<|LOC_932|>", "<|LOC_933|>", "<|LOC_934|>", "<|LOC_935|>", "<|LOC_936|>", "<|LOC_937|>", "<|LOC_938|>", "<|LOC_939|>", "<|LOC_940|>", "<|LOC_941|>", "<|LOC_942|>", "<|LOC_943|>", "<|LOC_944|>", "<|LOC_945|>", "<|LOC_946|>", "<|LOC_947|>", "<|LOC_948|>", "<|LOC_949|>", "<|LOC_950|>", "<|LOC_951|>", "<|LOC_952|>", "<|LOC_953|>", "<|LOC_954|>", "<|LOC_955|>", "<|LOC_956|>", "<|LOC_957|>", "<|LOC_958|>", "<|LOC_959|>", "<|LOC_960|>", "<|LOC_961|>", "<|LOC_962|>", "<|LOC_963|>", "<|LOC_964|>", "<|LOC_965|>", "<|LOC_966|>", "<|LOC_967|>", "<|LOC_968|>", "<|LOC_969|>", "<|LOC_970|>", "<|LOC_971|>", "<|LOC_972|>", "<|LOC_973|>", "<|LOC_974|>", "<|LOC_975|>", "<|LOC_976|>", "<|LOC_977|>", "<|LOC_978|>", "<|LOC_979|>", "<|LOC_980|>", "<|LOC_981|>", "<|LOC_982|>", "<|LOC_983|>", "<|LOC_984|>", "<|LOC_985|>", "<|LOC_986|>", "<|LOC_987|>", "<|LOC_988|>", "<|LOC_989|>", "<|LOC_990|>", "<|LOC_991|>", "<|LOC_992|>", "<|LOC_993|>", "<|LOC_994|>", "<|LOC_995|>", "<|LOC_996|>", "<|LOC_997|>", "<|LOC_998|>", "<|LOC_999|>", "<|LOC_1000|>", "<|LOC_BEGIN|>", "<|LOC_END|>", "<|LOC_SEP|>", "<|CROP_COL_SEP|>", "<|CROP_ROW_SEP|>", "<|IMAGE_SEP|>", "<|IMAGE_START|>", "<|IMAGE_END|>", "<|VIDEO_START|>", "<|VIDEO_END|>", "<|ASR_START|>", "<|ASR_END|>"]}
tokenizer.model ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:964f353bd1d5dba660a795b8fda8b4fe3ce847be2fa9ec5c1f670fbee4da3faf
3
+ size 1614375
tokenizer_config.json ADDED
@@ -0,0 +1,22 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "bos_token": "<s>",
3
+ "eos_token": "</s>",
4
+ "pad_token": "<unk>",
5
+ "unk_token": "<unk>",
6
+ "cls_token": "<|begin_of_sentence|>",
7
+ "sep_token": "<|end_of_sentence|>",
8
+ "mask_token": "<mask:1>",
9
+ "sys_start_token": "<mask:4>",
10
+ "sys_end_token": "<mask:5>",
11
+ "header_start_token": "<mask:6>",
12
+ "header_end_token": "<mask:7>",
13
+ "additional_special_tokens": null,
14
+ "tokenizer_class": "Ernie4_5_VLTokenizer",
15
+ "auto_map": {
16
+ "AutoTokenizer": [
17
+ "processing_ernie4_5_vl.Ernie4_5_VLTokenizer",
18
+ null
19
+ ]
20
+ },
21
+ "chat_template": "{%- if chat_template_kwargs is defined and chat_template_kwargs.options is defined -%}\n {%- set options = chat_template_kwargs.options -%}\n{%- endif -%}\n{#- 定义 options.thinking_mode 的默认值 -#}\n{%- if options is not defined -%}\n {%- set options = {'thinking_mode': true} -%}\n{%- endif -%}\n{%- set thinking_enabled = options.get('thinking_mode', true) in ['open', 'true', true] -%}\n{%- set image_count = namespace(value=0) -%}\n{%- set video_count = namespace(value=0) -%}\n{% macro render_content(content_list, accumulate=True, role=\"user\") %}\n {%- for content_item in content_list -%}\n {%- if content_item.type == 'text' -%}\n {{- content_item.text }}\n {%- elif content_item.type == 'image_url' -%}\n {%- if accumulate -%}\n {%- set image_count.value = image_count.value + 1 -%}\n {%- endif -%}\n {{ ' ' }}Picture{{ ' ' ~ image_count.value if accumulate else '' }}:<|IMAGE_START|><|image@placeholder|><|IMAGE_END|>\n {%- elif content_item.type == 'video_url' -%}\n {%- if accumulate -%}\n {%- set video_count.value = video_count.value + 1 -%}\n {%- endif -%}\n {{ ' ' }}Video{{ ' ' ~ video_count.value if accumulate else '' }}:<|VIDEO_START|><|video@placeholder|><|VIDEO_END|>\n {%- if content_item.video_url.subtitles is defined and content_item.video_url.subtitles -%}\n {{ ' ' }}<Start of Video ASR>: {%- for subtitle in content_item.video_url.subtitles -%}\n [{{ \"%.1f\"|format(subtitle[1]) }},{{ \"%.1f\"|format(subtitle[2]) }}]{{ subtitle[0] }}\n {%- endfor -%} <End of Video ASR>{{ ' ' }}\n {%- endif -%}\n {%- endif -%}\n {%- endfor -%}\n{% endmacro %}\n{#- ---- 定义 message 渲染 ---- -#}\n{%- macro build_messages(messages) -%}\n {#- ---- 初始化 多模态计数器 ---- -#}\n {%- for message in messages -%}\n {%- if message.content is string -%}\n {%- set content = message.content -%}\n {%- elif message.content is iterable -%}\n {%- set content = render_content(message.content, True, message.role) -%}\n {%- else -%}\n {%- set content = '' -%}\n {%- endif -%}\n {%- if (message.role == \"user\") -%}\n {{- 'User: ' + content-}}\n {%- elif (message.role == \"system\" and not loop.first) -%}\n {{- content + '\n' -}}\n {%- elif message.role == \"assistant\" -%}\n {{- '\nAssistant: ' -}}\n {%- set reasoning_content = '' -%}\n {%- if message.reasoning_content is defined and message.reasoning_content is string -%}\n {%- set reasoning_content = message.reasoning_content -%}\n {%- else -%}\n {%- if '</think>' in content -%}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') -%}\n {%- set content = content.split('</think>')[-1].lstrip('\n') -%}\n {%- endif -%}\n {%- endif -%}\n {%- if loop.last or (not loop.last and reasoning_content) -%}\n {{- '\n' + '<think>\n' -}}\n {{- reasoning_content.strip('\n') if options is defined and thinking_enabled else '' -}}\n {{- '\n</think>\n\n' -}}\n {%- endif -%}\n {%- if content|length > 0 -%}\n {{- content -}}\n {%- endif -%}\n {%- if message.tool_calls -%}\n {%- for tool_call in message.tool_calls -%}\n {%- if (not loop.first) -%}\n {{- '\n' -}}\n {%- endif -%}\n {%- if tool_call.function -%}\n {%- set tool_call = tool_call.function -%}\n {%- endif -%}\n {{- '<tool_call>\n{\"name\": \"' -}}\n {{- tool_call.name -}}\n {{- '\", \"arguments\": ' -}}\n {%- if tool_call.arguments is string -%}\n {{- tool_call.arguments -}}\n {%- else -%}\n {{- tool_call.arguments | tojson -}}\n {%- endif -%}\n {{- '}\n</tool_call>\n' -}}\n {%- endfor -%}\n {%- endif -%}\n {{- '<|end_of_sentence|>' }}\n {%- elif message.role == \"tool\" 
-%}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") -%}\n {{- 'User: ' -}}\n {%- endif -%}\n {{- '\n<tool_output>\n' -}}\n {{- content -}}\n {{- '\n</tool_output>\n' -}}\n {%- endif -%}\n {%- endfor -%}\n{%- endmacro -%}\n{%- if not add_generation_prompt is defined -%}\n {%- set add_generation_prompt = true -%}\n{%- endif -%}\n{{- '<|begin_of_sentence|>' -}}\n{%- if messages[0].role == 'system' -%}\n {%- if messages and messages[0].role == 'system' -%}\n {%- if messages[0].content is string -%}\n {{- messages[0].content -}}\n {%- elif messages[0].content is iterable -%}\n {%- set sys_content = render_content(messages[0].content) %}\n {{- sys_content -}}\n {%- endif -%}\n {%- endif -%}\n {{- '\n' -}}\n{%- else -%}\n {{- 'You are a multimodal AI assistant called ERNIE developed by Baidu based on the PaddlePaddle framework.\n' -}}\n{%- endif -%}\n{%- if options is defined and options.parallel_tool_calls is defined and (options.parallel_tool_calls == \"true\" or options.parallel_tool_calls == True) -%}\n {{- '\nparallel_tool_calls=True\n' -}}\n{%- endif -%}\n{%- if tools -%}\n {{- \"\n<tool_list>\" -}}\n {{- '\n' -}}\n {{- '[' -}}\n {%- for tool in tools -%}\n {{- '{\"type\": \"function\", \"function\": ' -}}\n {{- (tool.function | tojson) -}}\n {{- '}' -}}\n {%- if not loop.last -%}\n {{- ', ' -}}\n {%- endif -%}\n {%- endfor -%}\n {{- ']' -}}\n {{- \"\n</tool_list>\" -}}\n {{- '\n' -}}\n{%- endif -%}\n{{- build_messages(messages) -}}\n{%- if add_generation_prompt -%}\n {%- set append_think_label=False -%}\n {%- if not thinking_enabled -%}\n {%- set append_think_label=True -%}\n {%- endif -%}\n {{- \"\nAssistant: \n<think>\n\" -}}\n {%- if options is defined and options.tool_choice is defined -%}\n {%- if options.tool_choice.mode == \"required\" -%}\n {{- '系统要求我必须使用一个或多个工具,注意要认真填写参数,若必填参数存在信息缺失,需做出合理的假设,不可询问用户。' -}}\n {%- if not thinking_enabled -%}\n {{- '\n</think>\n\n<tool_call>\n' -}}\n {%- set append_think_label=False -%}\n {%- else -%}\n {{- '现在开始分析用户需求,' -}}\n {%- endif -%}\n {%- endif -%}\n {%- if options.tool_choice.mode == \"force\" -%}\n {{- \"系统指定必须使用\" -}}\n {{- options.tool_choice.name -}}\n {{- '工具,因此我尝试填写合适的参数满足用户需求。\n</think>\n\n<tool_call>\n{\"name\": \"' -}}\n {{- options.tool_choice.name -}}\n {{- '\", \"arguments\":' -}}\n {%- set append_think_label=False -%}\n {%- endif -%}\n {%- endif -%}\n {{- \"\n</think>\n\n\" if append_think_label else '' -}}\n{%- endif -%}"
22
+ }
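
The chat template stored in `tokenizer_config.json` above reads an optional `options.thinking_mode` flag from a `chat_template_kwargs` variable. A hedged sketch of toggling it off when rendering the prompt, reusing `messages` and `processor` from the sketch above, and assuming extra keyword arguments to `apply_chat_template` are forwarded into the Jinja render context:

```python
# Sketch only: whether thinking output is disabled this way depends on the template
# above receiving `chat_template_kwargs` in its render context.
prompt_no_think = processor.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    chat_template_kwargs={"options": {"thinking_mode": "false"}},
)
```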